\section{\label{sec:intro}Introduction} Recently, the 4d/5d honeycomb layered materials have been vigorously studied due to their potential in realizing a quantum spin-liquid (QSL) ground state~\cite{PRL_Jackeli_Mott, PRL_Singh_A2IrO3, Nature_takagi_concept_2019, PRB_Plumb_RCL, PRB_Williams_Incommensurate, nano_wang_range_2020, Todorova_LRO_2011, PRB_Choi_Spin}. First introduced by Alexei Kitaev in 2006, the Kitaev model is an exactly solvable theoretical model with bond-dependent Ising interactions among spin-$1/2$ degrees of freedom on a two-dimensional (2D) honeycomb lattice, which is described by the Kitaev Hamiltonian: $\mathcal{H}=-\sum K_{\gamma}S_i^{\gamma}S_j^{\gamma}$~\cite{Kitaev_anyons_2006}. The ground state of this system is magnetically frustrated and is predicted to be a QSL~\cite{Kitaev_anyons_2006}. The applications of a Kitaev QSL in quantum information and the possibility of realizing Majorana fermions have inspired numerous investigations on quasi-2D honeycomb materials~\cite{Nature_takagi_concept_2019,knolle_field_2019,chaloupka_kitaev-heisenberg_2010,PRL_Jackeli_Mott,kim_novel_2008}. The first generation of these compounds, namely Na$_2$IrO$_3$, $\alpha$-Li$_2$IrO$_3$, Li$_2$RhO$_3$, and $\alpha$-RuCl$_{3}$, were synthesized using conventional solid state methods at high temperatures ($T>700$~$^{\circ}$C). In these materials, heavy transition metal ions (Ru$^{3+}$, Rh$^{4+}$, and Ir$^{4+}$) are octahedrally coordinated with oxygen or chlorine atoms (Fig.~\ref{fig:T2G}a), and the edge-sharing octahedra create honeycomb layers (Fig.~\ref{fig:T2G}b). The combination of octahedral crystal electric field (CEF) and strong spin-orbit coupling (SOC) splits the five-fold degenerate $d$-levels and leaves one electron in the isospin-$1/2$ ($J_{\text{eff}}$=$1/2$) state necessary for the Kitaev model (Fig.~\ref{fig:T2G}c)~\cite{PRL_Jackeli_Mott,PRB_Plumb_RCL, Todorova_LRO_2011, PRB_mehlawat_heat_2017,Kitaev_anyons_2006}. \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{T2G} \caption{\label{fig:T2G} (a) The bond angle ($\phi$) between edge-shared octahedral units plays a significant role in tuning the magnetic interactions. (b) Edge-sharing octahedral units create a honeycomb structure in Kitaev magnets such as $\alpha$-Li$_2$IrO$_3$\ and Na$_2$IrO$_3$. (c) Interplay between CEF and SOC creates the isospin-1/2 state in the Kitaev magnets. } \end{figure} Finding new Kitaev magnets, beyond the first-generation compounds, has become a frontier challenge in solid state chemistry. Prior attempts to replace Na with K in Na$_2$IrO$_3$\ or to replace Cl with Br in $\alpha$-RuCl$_{3}$\ have led to other stable phases with different structures instead of the honeycomb lattice~\cite{weber_trivalent_2017,merlino_orderdisorder_2004}. The amount of physical pressure required to substantially tune the interactions is too high~\cite{bhattacharjee_spinorbital_2012}, and chemical doping leads to a change of spin state~\cite{cao_challenge_2018}. Therefore, recent success in synthesizing a second generation of Kitaev magnets, in which the magnetic interactions can be tuned by topochemical methods, has revitalized the field. In this review, we will first explain the different types of exchange reactions (partial and complete), then discuss the interplay between topochemical reactions and magnetism, and finally present heat capacity and magnetization data to compare the properties of the first and second-generation Kitaev magnets.
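To make the bond-dependent nature of the Kitaev interaction concrete, the following minimal Python sketch constructs and diagonalizes the two-site bond Hamiltonian $-K S_i^{\gamma}S_j^{\gamma}$ for each bond type; the coupling $K=1$ is an arbitrary illustrative value, not a fitted parameter of any of the compounds discussed here.
\begin{verbatim}
import numpy as np

# Spin-1/2 operators S = sigma/2 (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
S = {"x": sx, "y": sy, "z": sz}

def kitaev_bond(K, gamma):
    """Two-site Kitaev bond Hamiltonian -K S_i^gamma S_j^gamma (4x4 matrix)."""
    return -K * np.kron(S[gamma], S[gamma])

# Illustrative coupling K = 1 (arbitrary units)
for gamma in ("x", "y", "z"):
    evals = np.linalg.eigvalsh(kitaev_bond(1.0, gamma))
    print(gamma, np.round(evals, 3))  # two levels at -K/4 and two at +K/4
\end{verbatim}
Each bond thus constrains only one spin component, and the three inequivalent bonds of the honeycomb lattice cannot be minimized simultaneously, which is the origin of the magnetic frustration mentioned above.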
\section{\label{sec:topotactic}Topotactic Exchange Reactions} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{Synthesis} \caption{\label{fig:Synthesis} Synthesis of the second-generation Kitaev magnets from the first-generation materials through (a) partial and (b) complete exchange reactions. Both generations have honeycomb layers. The topochemical change of inter-layer coordination from octahedral to linear modifies the intra-layer Ir-O-Ir bond angles due to the change of oxygen positions. } \end{figure} The second-generation Kitaev magnets are metastable compounds, i.e. they have a higher enthalpy of formation and a lower decomposition threshold compared to stable counterparts~\cite{aykol_thermodynamic_2018}. Thus, it is impossible to synthesize them with conventional solid state methods at high temperatures. Instead, they are stabilized through topochemical reactions from the first-generation compounds under mild conditions. As shown schematically in Fig.~\ref{fig:Synthesis}, the global symmetries of the unit cell (space group and honeycomb structure) do not change during a topochemical reaction. However, the local parameters such as bond lengths and bond angles are modified efficiently. Topotactic exchange reactions can be either partial (Fig.~\ref{fig:Synthesis}a) or complete (Fig.~\ref{fig:Synthesis}b). The most general formulation of a partial exchange reaction is \begin{equation} \label{eq:topo1} \mathrm {2A_2MO_3+3BX}~\rightarrow~\mathrm{B_3AM_2O_6+3AX} \end{equation} where the inter-layer A-atoms (typically Li or Na) in a stable honeycomb structure A$_2$MO$_3$ are exchanged with the B-atoms (typically Cu, Ag, or H) from a halide, nitrate, or sulfate compound BX. For example, Fig.~\ref{fig:Synthesis}a corresponds to A=Li, B=Ag, M=Ir, and X=NO$_3$ for the synthesis of Ag$_{3}$LiIr$_2$O$_6$\ from $\alpha$-Li$_2$IrO$_3$. Replacing the inter-layer Li atoms by H, Cu, or Ag in $\alpha$-Li$_2$IrO$_3$\ has recently produced H$_{3}$LiIr$_2$O$_6$, Cu$_{3}$LiIr$_2$O$_6$, and Ag$_{3}$LiIr$_2$O$_6$, respectively~\cite{PRL_Bahrami_Thermodynamic, PRB_Geirhos_Quantum, dalton_roudebush_structure}. In a complete topotactic exchange reaction, all A-atoms within and between the layers are replaced by the B-atoms, \begin{equation} \label{eq:topo2} \mathrm {A_2MO_3+2BX}~\rightarrow~\mathrm{B_2MO_3+2AX}. \end{equation} For example, Fig.~\ref{fig:Synthesis}b corresponds to A=Na, B=Cu, M=Ir, and X=Cl for the synthesis of Cu$_2$IrO$_3$\ from Na$_2$IrO$_3$. A complete exchange reaction is much less likely to happen and so far Cu$_2$IrO$_3$\ is the only known system in this category~\cite{JACS_abramchuk_cu2iro3}. It is noteworthy that the copper atoms in Cu$_2$IrO$_3$\ are not entirely in a Cu$^+$ state. Both x-ray absorption and electron energy loss spectroscopy (XAS and EELS) confirmed a mixed valence of Cu$^{+}$/Cu$^{2+}$ with a 1/1 ratio within the honeycomb layers~\cite{kenney_coexistence_2019}. A mixed valence of copper induces a mixed valence of iridium (Ir$^{3+}$/Ir$^{4+}$) and leads to magnetic disorder and spin-glass behavior~\cite{kenney_coexistence_2019,choi_exotic_2019}. \section{\label{sec:synthesis}Synthesis Details} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{XRD} \caption{\label{fig:XRD} (a) After each heat cycle, the powder x-ray pattern of $\alpha$-Li$_2$IrO$_3$\ shows more pronounced peaks, especially between 20 and 30 degrees where the honeycomb Bragg peaks appear.
The number of times each sample has been reheated is shown on the right above its respective pattern. (b) The x-ray patterns of two second-generation Kitaev systems, H$_{3}$LiIr$_2$O$_6$\ (green) and Ag$_{3}$LiIr$_2$O$_6$\ (gray data, reproduced from~\cite{PRB_Bahrami_Effect}). The inset shows the asymmetric broadening of the honeycomb Bragg peaks in Ag$_{3}$LiIr$_2$O$_6$\ due to stacking faults. In H$_{3}$LiIr$_2$O$_6$, the honeycomb peaks are hardly discernible due to high structural disorder. } \end{figure} The first-generation Kitaev magnets are prepared via conventional solid state reaction at high temperatures ($T\ge 700$~$^{\circ}$C) either in air or under the flow of oxygen/argon gas~\cite{PRB_mehlawat_heat_2017, Todorova_LRO_2011, omalley_structure_2008}. To improve the sample quality and remove stacking faults, it is necessary to perform successive stages of grinding and heating. For example, the x-ray patterns in Fig.~\ref{fig:XRD}a show that the quality of $\alpha$-Li$_2$IrO$_3$\ samples improves by repeating the heat cycles. Specifically, the superstructure peaks between 20 and 30 degrees (inset of Fig.~\ref{fig:XRD}a) that represent the honeycomb ordering become more pronounced with each iteration. Typically, improving the quality of the first-generation compound will improve the quality of the second-generation material after the exchange reaction~\cite{PRB_Bahrami_Effect}. The topotactic cation exchange reaction must be conducted at low temperatures ($T\le 400$~K)~\cite{kitagawa_spinorbital-entangled_2018, JACS_abramchuk_cu2iro3, PRB_Bahrami_Effect}, since higher temperatures will decompose the metastable product. The second-generation Kitaev magnets are prepared by modifying the inter-layer atoms and the associated chemical bonds, and therefore they have more stacking faults than their parent compounds~\cite{PRB_Bahrami_Effect, Tsirlin_review_Kitaev_2021}. This can be seen in the inset of Fig.~\ref{fig:XRD}b, which shows an asymmetric broadening of the honeycomb Bragg peaks in Ag$_{3}$LiIr$_2$O$_6$. Unlike the solid state reactions, topotactic exchange cannot be repeated to improve the sample quality. Thus, getting rid of the stacking faults in these materials remains an open challenge. Details of the synthesis procedures for Cu$_2$IrO$_3$\ and Ag$_{3}$LiIr$_2$O$_6$\ have previously been published by Abramchuk and Bahrami \emph{et al.}~\cite{JACS_abramchuk_cu2iro3,PRB_Bahrami_Effect,PRL_Bahrami_Thermodynamic}. Here, we present more details about the synthesis of H$_{3}$LiIr$_2$O$_6$\ based on the earlier work of Bette \emph{et al.}~\cite{bette_solution_2017}. Polycrystalline samples of H$_{3}$LiIr$_2$O$_6$\ are synthesized using a modified version of Eq.~\ref{eq:topo1}, \begin{equation} \label{eq:topo3} \mathrm {4Li_2IrO_3+3H_2SO_4}~\rightarrow~\mathrm{2H_3LiIr_2O_6+3Li_2SO_4}. \end{equation} After synthesizing high-quality $\alpha$-Li$_2$IrO$_3$\ (Fig.~\ref{fig:XRD}a), approximately 300~mg of the material was added to a 10~ml Teflon-lined steel autoclave filled with H$_{2}$SO$_{4}$ acid (1 M solution) and heated to 120 $^{\circ}$C for several days. After completing the reaction, the product was washed with water and the quality was verified using x-ray diffraction (Fig.~\ref{fig:XRD}b). \section{\label{sec:FAULT}Stacking Faults} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{TEM} \caption{\label{fig:TEM} HAADF-TEM images from (a) $\alpha$-Li$_2$IrO$_3$\ and (b) Ag$_{3}$LiIr$_2$O$_6$.
The images show an abundance of stacking faults in Ag$_{3}$LiIr$_2$O$_6$, unlike $\alpha$-Li$_2$IrO$_3$, due to the weaker inter-layer bonding in the former. The electron diffraction patterns are presented as insets and reveal less streaking in $\alpha$-Li$_2$IrO$_3$\ due to fewer stacking faults compared to Ag$_{3}$LiIr$_2$O$_6$. } \end{figure} A comparison between the insets of Figs.~\ref{fig:XRD}a and \ref{fig:XRD}b suggests fewer stacking faults in $\alpha$-Li$_2$IrO$_3$\ (sharp and well-separated Bragg peaks from the honeycomb layers) and considerable stacking faults in Ag$_{3}$LiIr$_2$O$_6$\ (broadened peaks). The asymmetric broadening of honeycomb peaks is known as the Warren line shape, which is a signature of stacking disorder~\cite{balzar_x-ray_1993}. The higher amount of stacking faults in the second-generation Kitaev magnets is due to the inter-layer chemistry. As seen in Fig.~\ref{fig:Synthesis}, each inter-layer Li atom in $\alpha$-Li$_2$IrO$_3$\ is octahedrally coordinated with three oxygen atoms from the top and three from the bottom honeycomb layers. In contrast, each Ag atom in Ag$_{3}$LiIr$_2$O$_6$\ is connected to only one O from the top and one from the bottom layer in a dumbbell (linear) coordination. The weak dumbbell bonds are responsible for the larger inter-layer separation in Ag$_{3}$LiIr$_2$O$_6$\ and more stacking faults compared to $\alpha$-Li$_2$IrO$_3$~\cite{abramchuk_crystal_2018}. Direct lattice imaging with a transmission electron microscope (TEM) is a powerful tool to study the stacking faults. Figures~\ref{fig:TEM}a and \ref{fig:TEM}b (reproduced from Ref.~\cite{PRB_Bahrami_Effect}) are high angle annular dark field TEM (HAADF-TEM) images of $\alpha$-Li$_2$IrO$_3$\ and Ag$_{3}$LiIr$_2$O$_6$\ samples, respectively. Whereas the stacking sequence in $\alpha$-Li$_2$IrO$_3$\ can be flawless for up to 50 unit cells, Ag$_{3}$LiIr$_2$O$_6$\ shows a maximum of 5 unit cells stacked without faults (where the faults appear as twisting between the layers). In H$_{3}$LiIr$_2$O$_6$, the small size of H atoms and their high mobility make the chemical bonds even weaker than in Ag$_{3}$LiIr$_2$O$_6$. As such, H$_{3}$LiIr$_2$O$_6$\ has the highest degree of stacking faults among the second-generation Kitaev magnets~\cite{bette_solution_2017,kitagawa_spinorbital-entangled_2018,Tsirlin_review_Kitaev_2021}. This is why the honeycomb peaks of H$_{3}$LiIr$_2$O$_6$\ are not resolved by x-rays (Fig.~\ref{fig:XRD}b). \section{\label{sec:TUNING}Tuning magnetic interactions with topochemical methods} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{ORBS} \caption{\label{fig:ORBS} Exchange paths for (a) $K$, (b) $J$, and (c) $\Gamma$ terms in Eq.~\ref{eq:Kit}. The $d$ and $p$ orbitals are painted in blue and red, respectively. The numbers show the hopping sequence in the perturbation. } \end{figure} As seen in Fig.~\ref{fig:Synthesis}, the monoclinic unit cell and the honeycomb ordering in the 2D layers remain unchanged before and after exchange reactions. However, the change of inter-layer coordination from octahedral to dumbbell modifies the M-O-M bond angles within the honeycomb layers (Fig.~\ref{fig:T2G}a and \ref{fig:Synthesis}). Superexchange magnetic interactions are sensitive to a change of bond angles, and thus topochemical reactions can be used to tune the magnetic interactions. There are at least three terms in the magnetic Hamiltonian of the Kitaev materials:
\begin{equation} \label{eq:Kit} \mathcal{H}=\sum\limits_{\langle i,j \rangle \in {\alpha\beta(\gamma)}} \left[ -K_{\gamma}S_i^{\gamma}S_j^{\gamma} + J \textbf{S}_i \cdot \textbf{S}_j+ \Gamma \left( S_i^{\alpha}S_j^{\beta} + S_i^{\beta}S_j^{\alpha}\right) \right] \end{equation} The Kitaev term ($K$) favors a QSL, the Heisenberg term ($J$) favors antiferromagnetic (AFM) ordering, and the off-diagonal exchange term ($\Gamma$) controls details of the ordered structure. All three terms can be modified via topochemical reactions as follows. Figure~\ref{fig:ORBS} shows the individual exchange paths for each term in Eq.~\ref{eq:Kit}. The Kitaev term is an indirect exchange interaction with hopping matrix elements $t_{dpd}$ between the $d_{xz}$, $p_z$, and $d_{yz}$ orbitals (Fig.~\ref{fig:ORBS}a)~\cite{rau_spin-orbit_2016,kim_crystal_2016}. In addition to the indirect exchange ($K$), Fig.~\ref{fig:ORBS}b shows a direct exchange path for the Heisenberg interaction ($J$) with hopping matrix element $t_{dd}$ between $d_{xy}$ orbitals, leading to $J\sim t_{dd}^2/U$ in Eq.~\ref{eq:Kit}~\cite{winter_challenges_2016}. Finally, a combination of direct and indirect paths in Fig.~\ref{fig:ORBS}c leads to the symmetric off-diagonal exchange, $\Gamma\sim t_{dpd}t_{dd}J_H/U^2$, where $J_H$ is the Hund's coupling between the $e_{\mathrm {g}}$\ and $t_{\mathrm {2g}}$\ orbitals~\cite{PRL_Rau_Generic,rusnacko_kitaev-like_2019}. The hopping matrix elements ($t_{dd}$ and $t_{dpd}$) are set by the M-O-M bond angle and the M-M distance, both of which can be modified by the exchange reactions. For example, (i) the change of oxygen positions within the honeycomb layers due to the change of inter-layer coordinations in Fig.~\ref{fig:Synthesis} modifies the M-O-M bond angle ($\phi$ in Fig.~\ref{fig:T2G}a and \ref{fig:ORBS}a) and thereby tunes $t_{dpd}$; (ii) according to theoretical calculations~\cite{PRL_Jackeli_Mott}, the Heisenberg interaction is canceled between the opposite paths if the bond angle $\phi$ is close to $90^\circ$ (Fig.~\ref{fig:T2G}a and \ref{fig:ORBS}a); (iii) the hybridization between the inter-layer Ag $d$-orbitals and the O $p$-orbitals within the layers tunes the ratio $t_{dpd}/t_{dd}$. \section{\label{sec:MAGNETISM}Magnetic characterization of metastable Kitaev materials} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{CP} \caption{\label{fig:CP} (a) Heat capacity ($C/T$) plotted as a function of temperature below 30~K for the first-generation Kitaev magnet $\alpha$-Li$_2$IrO$_3$\ and its second-generation derivatives Ag$_{3}$LiIr$_2$O$_6$\ and H$_{3}$LiIr$_2$O$_6$. The data for $\alpha$-Li$_2$IrO$_3$\ and Ag$_{3}$LiIr$_2$O$_6$\ are reproduced from Refs.~\cite{PRL_Singh_A2IrO3,PRB_Bahrami_Effect}. (b) A similar comparison is made between Na$_2$IrO$_3$\ (first-generation) and Cu$_2$IrO$_3$\ (second-generation). The data are reproduced from Ref.~\cite{JACS_abramchuk_cu2iro3}. } \end{figure} To demonstrate the effect of topochemical modifications on the magnetic interactions (Eq.~\ref{eq:Kit} and Fig.~\ref{fig:ORBS}), we compare the heat capacity and magnetic susceptibility of the first and second-generation Kitaev magnets. The peak in the heat capacity of $\alpha$-Li$_2$IrO$_3$\ in Fig.~\ref{fig:CP}a confirms long-range magnetic ordering at $T_N=15$~K. The order has been characterized as an incommensurate spiral by recent neutron scattering and muon spin relaxation ($\mu$SR) experiments~\cite{PRB_Williams_Incommensurate,PRB_Choi_Spin}.
As seen in Fig.~\ref{fig:CP}a, this peak is shifted to lower temperatures in Ag$_{3}$LiIr$_2$O$_6$\ and has seemingly disappeared in H$_{3}$LiIr$_2$O$_6$. The suppression of $T_N$ in the second-generation compounds Ag$_{3}$LiIr$_2$O$_6$\ and H$_{3}$LiIr$_2$O$_6$\ is a positive sign of approaching the QSL phase, where long-range order is replaced by long-range quantum entanglement~\cite{Nature_takagi_concept_2019,knolle_field_2019}. A recent $\mu$SR experiment~\cite{PRB_Bahrami_Effect} has shown a similar incommensurate spiral order in Ag$_{3}$LiIr$_2$O$_6$; however, the long-range order develops at 8~K in Ag$_{3}$LiIr$_2$O$_6$, well below $T_N=15$~K in $\alpha$-Li$_2$IrO$_3$. Thus, the topochemical modification of bond angles seems to strengthen $K$ and weaken $J$ in Eq.~\ref{eq:Kit}. A recent nuclear magnetic resonance (NMR) experiment has shown the absence of long-range order in H$_{3}$LiIr$_2$O$_6$, which is another promising result toward the discovery of a QSL phase~\cite{kitagawa_spinorbital-entangled_2018}. A similar trend is observed in Fig.~\ref{fig:CP}b for the first-generation material Na$_2$IrO$_3$, which shows a peak at $T_N=15$~K, and its second-generation counterpart Cu$_2$IrO$_3$, which does not show a peak but seems to have a broad anomaly below 5~K. Neutron scattering experiments have confirmed a zigzag AFM order in Na$_2$IrO$_3$~\cite{choi_spin_2012}. Recent $\mu$SR and NMR experiments have revealed a coexistence of static and dynamic magnetism below 5~K in Cu$_2$IrO$_3$\ but without a long-range order, suggesting proximity to the QSL phase~\cite{kenney_coexistence_2019,takahashi_spin_2019}. \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{XT} \caption{\label{fig:XT} (a) Magnetic susceptibility ($\chi$) plotted as a function of temperature below 30~K for the first-generation Kitaev magnet $\alpha$-Li$_2$IrO$_3$\ and its second-generation derivatives Ag$_{3}$LiIr$_2$O$_6$\ and H$_{3}$LiIr$_2$O$_6$. The data for $\alpha$-Li$_2$IrO$_3$\ and Ag$_{3}$LiIr$_2$O$_6$\ are reproduced from Refs.~\cite{PRL_Bahrami_Thermodynamic,PRB_Bahrami_Effect}. (b) A similar comparison is made between Na$_2$IrO$_3$\ (first-generation) and Cu$_2$IrO$_3$\ (second-generation). The data for Na$_2$IrO$_3$\ and Cu$_2$IrO$_3$\ are reproduced from Refs.~\cite{PRL_Singh_A2IrO3,JACS_abramchuk_cu2iro3}. } \end{figure} The suppression of magnetic ordering due to topochemical changes in metastable Kitaev magnets is also observed in the magnetic susceptibility data. Figure~\ref{fig:XT}a shows the magnetic susceptibility of $\alpha$-Li$_2$IrO$_3$\ (black curve) with a clear anomaly at $T_{N} = 15$~K indicating the incommensurate spiral AFM order. The green curve representing Ag$_{3}$LiIr$_2$O$_6$\ shows two downturns at $T_{F}=14$~K and $T_{N}=8$~K, corresponding to the onsets of short-range and long-range magnetic orders, respectively~\cite{PRB_Bahrami_Effect}. The orange curve representing H$_{3}$LiIr$_2$O$_6$\ does not show any evidence of magnetic ordering. Figure~\ref{fig:XT}b shows a similar trend, where the first-generation material Na$_2$IrO$_3$\ orders at $T_N=15$~K and the second-generation material Cu$_2$IrO$_3$\ shows a small peak at 2~K, evidence of short-range spin freezing instead of long-range order. \section{\label{sec:Disorder}Challenges and Opportunities} The above results are exciting; however, they need to be interpreted with caution. Topotactic exchange reactions increase disorder, which has adverse effects on magnetism.
A recent TEM study has shown that the silver atoms in Ag$_{3}$LiIr$_2$O$_6$\ can enter the honeycomb layers and form small inclusions (up to 50 atoms) that disrupt the magnetic ordering~\cite{PRB_Bahrami_Effect}. Such structural disorder can spuriously hide the long-range order and be misinterpreted as evidence of a QSL phase. As noted earlier, H$_{3}$LiIr$_2$O$_6$\ is even more disordered than Ag$_{3}$LiIr$_2$O$_6$\ due to the high mobility of the H atoms, which causes bond randomness and site vacancies within the honeycomb layers~\cite{bette_solution_2017}. Recent theoretical works show that the absence of magnetic ordering in H$_{3}$LiIr$_2$O$_6$\ may be due to bond randomness and a large amount of vacancies~\cite{knolle_bond-disordered_2019,kao_vacancy-induced_2021}. Thus, the most important challenge in this field is to optimize the synthesis conditions for a minimum amount of disorder and to find methods of annealing away the stacking faults and vacancies. Metastable Kitaev magnets have opened a new window of opportunity for realizing the quantum spin-liquid ground state. The Majorana excitations of such materials will form the building blocks of a solid state quantum computer~\cite{frolov_quantum_2021}. Braiding algorithms and logical gates have been theoretically developed for such computers~\cite{field_introduction_2018}. It remains an open challenge for the solid state chemistry community to synthesize the appropriate materials for such models. Another intriguing opportunity is to find unconventional superconductivity in the Kitaev magnets~\cite{you_doping_2012}, an exciting theoretical prediction that awaits experimental discovery. \section{\label{sec:ackn}ACKNOWLEDGMENTS} This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-21-1-4059.
\section{Introduction} \label{intro:sec} Solar prominences \citep[also called filaments; e.g.,][]{Tan95} are highly complex magnetic structures that extend from the photosphere up into the corona, where they support material that is much denser ($n\sim 10^{10-12}$ cm$^{-3}$) and cooler ($T\sim$ 1$\times$10$^4$ K) than the surrounding plasma ($n\sim 10^{8-9}$ cm$^{-3}$, $T\sim$ 1-2$\times$10$^6$ K). High-resolution observations of prominences have been available in H I Balmer (H$\alpha$) and Ca II emission for decades from ground-based telescopes and, more recently, in various ion emission bands from satellite-borne instruments. These observations show that the prominence material is highly dynamic, exhibiting persistent flows, waves, and other oscillations, as well as MHD instabilities that can lead to its violent eruption \citep[for reviews, see][]{Lab11,Par14,Arr18}. Idealized models of quiescent prominences often assume an equilibrium magnetic structure that supports the cool material statically within the hot corona. The observations indicate that both the magnetic and thermal structures of prominences are often out of equilibrium, highly dynamic at small scales and gradually evolving at large scales. Small-scale propagating and oscillating features in cool prominence threads and low-lying coronal loops have been studied from space for many years using high-resolution and high-cadence spectral observations. Hinode's Solar Optical Telescope \citep[Hinode/SOT;][]{Kos07} has observed such phenomena in H$\alpha$ and Ca~II emission \citep{Oka07,OW08,Sch13,Ofm15a,KOT18}, as has the Interface Region Imaging Spectrograph \citep[IRIS;][]{Dep14} in chromospheric Mg II emission in spectral lines and slit-jaw images \citep{KOT18}. High-resolution prominence observations by Hinode/SOT show that the prominence material exhibits constant down-flows, lateral flows, upflows, and dynamic evolution, with observed velocities in the range $1-100$ km s$^{-1}$, consistent with the effects of magneto-fluid instabilities \citep[e.g.,][]{Ber17}. Recent ground-based high-resolution observations using the New Vacuum Solar Telescope \citep[NVST;][]{Liu14} report evidence of small-scale oscillations and waves detected in H$\alpha$ in quiescent prominences \citep[e.g.,][]{Li18,Li22}. Advanced high-resolution resistive 3D MHD modeling of prominence structure evolution shows that the nonlinear development of the magnetic Rayleigh–Taylor instability produces small scale structures in the prominence material \citep{JK22}, and may provide an alternative interpretation (to waves) of some of the observed small-scale oscillating structures. The (quasi-) periodic, small-scale, oscillating features, with typical time scales of minutes in prominence threads and pillars, have been identified and modeled previously as linear fast magnetosonic waves \citep[e.g.,][]{Sch13}. Because the nonlinearity of these waves is evident in the observations in the form of steepening and asymmetric density compressions, the models were later extended to nonlinear fast magnetosonic (NFM) waves using an MHD model with two-dimensional spatial variations and three-dimensional vector fields \citep[2.5D MHD; see][]{Ofm15a,OK20}. The observed waves can be used to deduce the magnetic structure of prominences by applying techniques of coronal seismology \citep[e.g.,][]{NV05,Anf22}.
These indirect methods are invaluable, as the coronal magnetic field is very difficult to measure directly using spectroscopic or other methods, while force-free extrapolation methods have limited applicability in realistic coronal structures. Coronal seismology remains based primarily on linear MHD wave theory. However, nonlinearity may significantly affect the wave structure, phase speed, wave dissipation, and couplings. Thus, interpreting observations of nonlinear waves requires the use of nonlinear wave theory or nonlinear MHD modeling for improved accuracy of the analysis. Plasma flows, in addition to waves, are often observed in cool prominence threads in emission lines such as H$\alpha$ and Ca~II \citep{Oka07,Ale13,Kuc14,Par14,Die18}. These flows may affect the oscillations through, for example, changes in the density that affect the phase speed of the waves and Doppler shifts of the oscillation frequencies. Recently, \citet{KOT18} and \citet{OK20} used Hinode/SOT Ca~II spectral lines to study small-scale motions in prominences. The observed propagating fluctuations were identified as NFM waves using a combination of data analysis and modeling. The observed NFM waves had typical periods $\sim$ 5-11 minutes and wavelengths $\sim$ 2000 km, while the flows had typical speeds $\sim$ 15-50 km s$^{-1}$. The main properties of the observed NFM waves, combined with the effects of mass flows in prominence threads, were replicated by the model \citep{OK20}. The magnetic field strengths in the prominence were estimated to lie in the range 5-17 G. In the present study, we extend the previous studies of propagating waves in prominences with data analysis and modeling of a prominence pillar observed on 2012 February 14 from Hinode/SOT. We employ a new, fully three dimensional (3D) MHD model of small-scale NFM waves in an idealized prominence pillar with more realistic structure than in the previous studies. The new model allows us to investigate more complex fast magnetosonic wave generation, propagation, and interaction than in the previous 2.5D configurations, for example, by including the effects of magnetic shear, and to study for the first time the effects of a non-force-free sheared magnetic field. The results are useful for interpreting high-resolution Hinode/SOT observations of prominence small-scale oscillations and for making further advancements in the coronal seismology of solar prominences using MHD waves. Our paper is organized as follows. In \S\ref{obs:sec} we present new observations of propagating features in a prominence pillar. In \S\ref{model:sec} we describe the new 3D MHD model, along with the initial and boundary conditions used in the calculations. In \S\ref{num:sec}, we present the numerical results and compare them with observations. Finally, the discussion and our conclusions are given in \S\ref{dc:sec}. \section{Observations and Data Analysis} \label{obs:sec} The studied prominence was observed by Hinode/SOT on 2012 February 14 from 10:48 - 13:15 UT as part of Hinode Operation Plan (HOP) 114. The data consisted of measurements from both the Broadband Filter Instrument (BFI) and the Narrowband Filter Instrument (NFI) \citep{Kos07,Tsu08}. The BFI was used to observe the \ion{Ca}{2} H line at 3969~\AA, and the NFI was used to observe the H\ensuremath{\alpha}\ line at 6563.2~\AA, both with cadences of 22.4~s.
The H\ensuremath{\alpha}\ line positions for this data set were not well calibrated, but appear to be from near line center and in the blue wing of the line (about 416 m\AA\ from line center), making them not useful for Doppler measurements. The image field of view is about {112\arcsec} square, and the spatial resolution is 0.2-0.3\arcsec. Maps were processed with the {\tt fg\_prep.pro} routine provided to the SolarSoft library (\url{https://www.lmsal.com/solarsoft/}; \citet{FH98}) by the Hinode team, including dark-current subtraction and flat-field removal. Drift and jitter were corrected using an image cross-correlation ({\tt fg\_rigidalign.pro}) routine. For context, we inspected observations from the Global Oscillation Network Group (GONG) to image the on-disk structure of the prominence in the days preceding its appearance at the limb. GONG H\ensuremath{\alpha}\ images are provided by a network of six stations around the globe \citep{Har96} with a pixel size of about 1$\arcsec$. The features observed on the limb were part of a long prominence that extended more or less East-West above the northern active-region belt and curved equator-ward on the western end. A portion of that prominence seen against the solar disk three days before the observations we analyzed is shown in Figure~\ref{f:ondisk_context}a. The most evident prominence features seen in H\ensuremath{\alpha}\ are a series of barbs connected by fainter spine flows. These barbs evolve over time, and it is difficult to identify individual barbs near the limb. However, the appearance of the region at the limb from Hinode/SOT, Figures~\ref{f:ondisk_context}b (Ca II) and \ref{f:ondisk_context}c (H$\alpha$), is consistent with associating the pillars with barbs that are oriented mostly along the line of sight. \begin{figure} \centerline{ \includegraphics[width=6cm, trim=20 50 0 0]{GONG_closeup_ondisk_context-eps-converted-to.pdf} \includegraphics[width=6cm, trim=60 50 40 0]{CaII_SOT_LargeFOV_1246-eps-converted-to.pdf} \includegraphics[width=6cm, trim=60 50 40 0]{HaR_SOT_LargeFOV_1246-eps-converted-to.pdf}} \caption{(a) GONG H\ensuremath{\alpha}\ image showing the prominence on 11 Feb 2012, three days before the prominence was observed on the limb. The box shows the approximate field of view of the images in (b) and (c). (b) Hinode Ca~II image showing the prominence on the limb; the box is the field of view shown in Figure \ref{f:2012Feb14CaIIimg}. An animation that corresponds to this panel is available online. The video shows the Hinode Ca~II emission observed on 14-Feb-2012 in the time interval 10:51:06-13:13:45 UT in an accelerated time of 16 s. (c) H$\alpha$ far blue-wing image; the box is the field of view shown in Figure \ref{f:2012Feb14Haimg}.} \label{f:ondisk_context} \end{figure} Figure \ref{f:2012Feb14CaIIimg} shows time-distance diagrams for the small-scale propagating features (i.e., `pulses') observed in the Ca~II images in two locations. The pulses were measured along a 5-pixel-wide area centered on the solid red and green lines shown in panels (a) and (d), respectively. Panel (b) shows a series of pulses with plane-of-sky velocities 12-16 km s$^{-1}$ determined from the slopes of the dashed red lines, which were visually fit to the intensity peaks. The peaks are about 1 min apart, and the distances between pulses are in the range 1330-2030 km. Panel (e) shows another set of pulses corresponding to the location shown in panel (d).
These pulses have peaks 1.5-2.3 min apart, velocities 8-11 km s$^{-1}$ obtained from the slopes of the dashed green lines, and distances between pulses in the range 800-1600 km. Panels (c) and (f) show the intensities along the horizontal lines shown in panels (b) and (e), respectively. The variations between the maximum and minimum intensities of the individual features are about 10\% of the total intensity. \begin{figure} \includegraphics[width=6cm, trim=0 20 40 0]{CaII_SOT_Osc_context_1230-eps-converted-to.pdf} \includegraphics[width=6cm, trim=0 40 0 0 ]{tvsd_2012Feb14_1229-eps-converted-to.pdf} \includegraphics[width=6cm, trim=0 0 0 0]{CaII_cut2_d5-eps-converted-to.pdf}\\ \includegraphics[width=6cm, trim=0 20 50 0]{CaII_SOT_Osc_context_1246-eps-converted-to.pdf} \includegraphics[width=6cm, trim=0 40 0 0]{tvsd_2012Feb14_1247-eps-converted-to.pdf} \includegraphics[width=6cm, trim=0 0 0 0]{CaII_cut4_d4-eps-converted-to.pdf} \vspace{0.5cm}\caption{(a) Image of the Ca~II emission obtained with Hinode/SOT on 14-Feb-2012 12:28:56UT. The solid red line indicates the data location for the time-distance diagram. (b) Time-distance diagram showing the propagation of the features along the solid red line in (a). (c) Plots of intensity as a function of time at the locations shown with the blue horizontal line in (b). (d)-(f) The same for a different set of pulses obtained along the solid green line in (d). The slopes of the dashed red (b) and green (e) lines indicate the propagation speed of the pulses in the plane of the sky. A video showing the field of view in panels (a) and (d) is included online. The video shows the Hinode SOT Ca II intensity observed on 14-Feb-2012 in the time interval 12:18:06 UT to 15:56:59 UT in an accelerated time of 4 s.} \label{f:2012Feb14CaIIimg} \end{figure} Figure \ref{f:2012Feb14Haimg} shows time-distance diagrams for moving features seen in the H\ensuremath{\alpha}\ blue wing. Shown are a series of pulses with plane-of-sky velocities 12-16 km s$^{-1}$, peaks 1-5 min apart with sharp non-sinusoidal peaks indicative of nonlinear steepening, and distances between pulses of 1000-3000 km. The plane-of-the-sky propagation speed is likely reduced compared to the `true' phase speed due to projection effects, and the value is in qualitative agreement with possible fast magnetosonic speeds in cool prominence material of the order $\sim$ 20 km s$^{-1}$ \citep[see, e.g.,][]{Sch13}. The variations between the maximum and minimum intensities of the individual features are about 30-60\% of the total intensity. \begin{figure} \hspace{-0.6cm}\includegraphics[width=6.5cm, trim=40 20 40 0]{Ha_SOT_Osc_context_1133-eps-converted-to.pdf} \hspace{-0.4cm}\includegraphics[width=6cm, trim=50 50 10 0]{tvsd_2012Feb14_Ha1133-1-eps-converted-to.pdf} \hspace{0.3cm}\includegraphics[width=5.75cm, trim=0 0 0 0]{Ha_cut3-eps-converted-to.pdf} \caption{(a) Image in the far blue wing of H\ensuremath{\alpha}\ obtained with Hinode/SOT on 14-Feb-2012 at 11:33:52 UT. The solid purple line shows the location of the data for the time-distance diagram. (b) Time-distance diagram showing the propagation of the pulses along the solid purple line indicated in (a). (c)-(d) Plots of intensity as a function of time at the locations shown with the blue horizontal lines in (b). The slopes of the dashed purple lines in (b) indicate the propagation speed of the pulses in the plane of the sky. A video that corresponds to the field of view in panel (a) is included online.
The observed Hinode SOT H\ensuremath{\alpha}\ far blue wing field of view on 14-Feb-2012 in the time interval 11:25:17-11:42:26 UT is shown in an accelerated time of 2 s. } \label{f:2012Feb14Haimg} \end{figure} We have performed a wavelet analysis \citep{TC98} of the oscillations in the Ca~II and far blue wing H\ensuremath{\alpha}\ cuts shown in Figures~\ref{f:2012Feb14CaIIimg} and \ref{f:2012Feb14Haimg} using the Morlet wavelet. In Figure~\ref{wavelet:fig} we show the results of the analysis, with the $85\%$ confidence level indicated on the wavelet power. The cones of influence indicate the regions that may be affected by the boundaries. The results show the global wavelet power integrated inside the cone of influence, indicating significant power in $\sim 1-3$ min period oscillations, consistent with the temporal evolution at the indicated temporal cuts and with the animations of the observed oscillations included online. The wavelet analysis and the global wavelets provide unbiased quantification of the observed oscillations and their statistical significance. \begin{figure}[ht] \centerline{\includegraphics[height=9in]{wavelet_4-eps-converted-to.pdf} } \caption{The results of the wavelet analysis of the oscillations shown in Figures (a) \ref{f:2012Feb14CaIIimg}c; (b) \ref{f:2012Feb14CaIIimg}f (c) \ref{f:2012Feb14Haimg}d; (d) \ref{f:2012Feb14Haimg}e; respectively. The Morlet wavelet was used, and the 85\% confidence level contour is indicated on the wavelet power. The cones of influence where boundary effects may affect the results are indicated with the red curve on each wavelet panel. The global wavelets in the cone of influence for each case are shown in the corresponding right panels.} \label{wavelet:fig} \end{figure} Thus, we observe multiple cases of a short series of oscillatory features propagating in a direction roughly away from the limb in the plane of the sky, separated by $\sim$ 1 min. Each individual feature is slightly elongated perpendicular to the direction of motion, hence is similar to features described previously by others \citep{Sch13,Ofm15a,KOT18}. Because the pillars are likely to be elongated structures along the line of sight, these moving features may be related to motions observed in a different (perpendicular) line of sight in extended prominence structures as transverse oscillations combined with flows of cool material \citep{OW08,Oka16,OK20}. \section{Numerical 3D MHD Model, Boundary Conditions, and Parameters} \label{model:sec} In order to model the NFM waves in a prominence pillar we solve the resistive 3D MHD equations using our code NLRAT, described in detail in previous papers \citep{OT02,POW18,OL18,OW22}.
The normalized resistive MHD equations with gravity, using standard notation for the variables, are \begin{linenomath*} \begin{eqnarray} &&\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mbox{\bf V})=0,\label{cont:eq}\\ &&\frac{\partial(\rho\mbox{\bf V})}{\partial t}+\nabla\cdot\left[\rho\mbox{\bf V}\mbox{\bf V}+\left(E_up+\frac{\mbox{\bf B}\cdot \mbox{\bf B}}{2}\right)\mbox{\bf I}-\mbox{\bf BB}\right]=-\frac{1}{F_r}\rho\mbox{\bf F}_g,\label{mom:eq}\\ &&\frac{\partial\mbox{\bf B}}{\partial t}-\nabla\times(\mbox{\bf V}\times\mbox{\bf B})=\frac{1}{S}\nabla^2\mbox{\bf B},\label{ind:eq}\\ &&\frac{\partial(\rho E)}{\partial t}+\nabla\cdot\left[\mbox{\bf V}\left(\rho E+E_up+\frac{\mbox{\bf B}\cdot \mbox{\bf B}}{2}\right)-\mbox{\bf B}(\mbox{\bf B}\cdot\mbox{\bf V})+\frac{1}{S}\left(\nabla\times\mbox{\bf B}\right)\times\mbox{\bf B}\right] =-\frac{1}{F_r}\rho\mbox{\bf F}_g\cdot\mbox{\bf V}.\label{ener:eq} \end{eqnarray} \end{linenomath*} With our normalization $E_u = \beta/2$ is the magnetic Euler number (ratio of thermal pressure to Alfv\'en-wave pressure), $F_r=V_A^2R_s/(GM_s)$ is the magnetic Froude number (ratio of magnetic force to gravitational force), where $G$ is the gravitational constant, $M_s$ is the solar mass, $R_s$ is the solar radius, and $S$ is the Lundquist number (ratio of resistive diffusion time to Alfv\'en time). The details of the normalization of the variables can be found in \citet{OL18}. The gravitational force, \begin{eqnarray} \mbox{\bf F}_g=\frac{a_0^2}{(R_s+z-z_{min})^2}\mbox{\bf\^z},\label{grav:eq} \end{eqnarray} is modeled under the assumption that the height of the prominence is small compared to the solar radius $R_s$, where $a_0=0.1R_s\approx 70$ Mm is the normalization length scale of the coordinates, and $z_{min}$ is the height of the lower boundary in the model. We note that in the present model we have excluded radiative losses and thermal conduction, and the prominence pillar structure is provided as an initial state, rather than produced self-consistently by the model. The total energy density is given by \begin{eqnarray} \rho E=\frac{E_up}{(\gamma-1)}+\frac{\rho V^2}{2}+\frac{B^2}{2}. \end{eqnarray} In the present model, we neglect radiative cooling and thermal conduction because these losses are small on the typical time scales of the NFM waves. For coronal temperature $T=1\times10^6$ K, density $n=10^9$ cm$^{-3}$, and magnetic field magnitude $B=10$ G, we obtain the Alfv\'{e}n speed $V_A=690$ km s$^{-1}$, the Alfv\'{e}n time $\tau_A=101$ s, the plasma $\beta \approx 0.07$, the Froude number $F_r=0.25$, and the Euler number $E_u=3.47\times10^{-2}$ (for the case with $B_0=20$ G, $E_u$ is reduced by a factor of four, to $E_u=8.67\times10^{-3}$). Note that, for a uniform magnetic field, the value of $\beta$ is identical to the coronal value all across the prominence pillar, due to the uniform thermal pressure along the magnetic field lines that cross the pillar. For computational stability purposes, the effect of gravity in the model is reduced by a factor of 10 by correspondingly increasing $F_r$, in order to slow the gravitational settling of the cool material in the prominence pillar. The reduced gravity does not affect the results significantly, since the dominant restoring force of the oscillations is the Lorentz force (i.e., magnetic field-line `tension'). In the above equations we have also neglected viscosity.
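For reference, the normalized parameters quoted above follow directly from the chosen coronal values. The short Python sketch below reproduces $V_A$, $\tau_A$, $\beta$, and $E_u$; it assumes a hydrogen plasma with equal electron and proton number densities (so that the thermal pressure is $p=2nk_BT$) and uses the normalization length $a_0=0.1R_s$ for the Alfv\'en time.
\begin{verbatim}
import numpy as np

# CGS constants
kB  = 1.380649e-16      # Boltzmann constant, erg/K
m_p = 1.67262e-24       # proton mass, g
R_s = 6.957e10          # solar radius, cm

# Coronal reference values from the text
T = 1.0e6               # K
n = 1.0e9               # cm^-3
B = 10.0                # G

rho  = n * m_p                          # mass density (hydrogen plasma assumed)
V_A  = B / np.sqrt(4.0 * np.pi * rho)   # Alfven speed, cm/s
a_0  = 0.1 * R_s                        # normalization length scale
tauA = a_0 / V_A                        # Alfven time, s
p    = 2.0 * n * kB * T                 # thermal pressure (electrons + protons)
beta = p / (B**2 / (8.0 * np.pi))       # plasma beta
E_u  = beta / 2.0                       # magnetic Euler number

print(V_A / 1e5, tauA, beta, E_u)       # ~690 km/s, ~101 s, ~0.07, ~0.035
\end{verbatim}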
The resistive terms are used with the Lundquist number set to $S=10^5$, which does not affect the results significantly on the NFM time scales. A nearly isothermal polytropic index, $\gamma=1.05$, is used, which accounts empirically for coronal heating. These modeling parameters improve the stability of the background prominence pillar structure on the time scale of MHD wave propagation, without affecting significantly the NFM wave dynamics. \begin{figure}[ht] \centerline{ \includegraphics[width=0.5\linewidth]{n0_T0_prom_color-eps-converted-to.pdf} } \caption{The $x$ dependence (across the prominence pillar) of normalized initial temperature, $T_0$ (red), density, $n_0$ (blue), and thermal pressure $p_0$ (green) in the model prominence and surrounding corona. The magnitudes of the variables are shown on a logarithmic scale.} \label{n0_T0:fig} \end{figure} The initial $x$-dependent temperature $T_0$ and density $n_0$ structures are given by \begin{eqnarray} &&T_0(x)=T_{max}-(T_{max}-T_{min})e^{-[(x-x_0)/w]^{2q}},\label{T0:eq}\\ &&n_0(x)=p_0/T_0(x),\label{n0:eq} \end{eqnarray} where the coronal temperature is $T_{max}$, the prominence temperature is $T_{min}$, the exponent $q=2$ defines the sharpness of the temperature transition between the corona and the prominence pillar, $w=0.05$ is the half-width of the prominence pillar, and $x_0=0$ is the center position of the pillar. The initial temperature and density dependencies on the $x$ coordinate across the model prominence pillar are shown in Figure~\ref{n0_T0:fig}. The normalized thermal pressure, $p_0=n_0T_0=1$, is uniform. In the present model we use $T_{min}/T_{max}=0.01$, consistent with the typical ratios of the prominence to corona temperatures. It is evident in Figure~\ref{n0_T0:fig} that $T_0$ decreases from its coronal value by two orders of magnitude, while $n_0$ increases correspondingly by two orders of magnitude in the prominence pillar. Note that the fine-scale structuring of the background density in the direction of wave propagation, i.e., with height, may introduce dispersion, enhanced damping, and small deceleration of the fast magnetosonic waves \citep[e.g.,][]{Mur01}. In the present model the fast magnetosonic speed is lower by a factor of 10 in the pillar compared to the surrounding corona, and the speed could be even lower in higher density cool prominence structures. The prominence-corona transition region (PCTR) \citep[see the review by][]{Par14} along the magnetic field is evident in the model, with its length, computed as the difference between the half width at half maximum (HWHM) of the prominence pillar and the half width at 10\% of the peak density, being $\sim 1$ Mm in physical units. This value of PCTR thickness is consistent with previous studies \citep[e.g.,][]{Chi92,Gun11}. In normalized units the mass density is equal to the number density $\rho_0=n_0$. The initial state is in equilibrium when the magnetic field is uniform in the $x$ direction without gravity, which was first used to study prominence oscillations by \citet{JR92} and later in 2.5D MHD models of NFM waves in prominences \citep{Ofm15a,OK20}. Here we adopt this initial state in the 3D MHD model, as well as study additional magnetic configurations that depart from equilibrium. Since we consider the effects of reduced solar gravity, the initial state is not strictly in equilibrium. The initial nonequilibrium leads to formation of gradients in the initially uniform magnetic field that produce a Lorentz force balancing gravity.
However, the departure from equilibrium is small in the low-$\beta$ prominence model, as shown below (see Case~0 in Section~\ref{num:sec}). While the initial state of the density is uniform in the $y$ and $z$ directions, transverse variability is introduced by the effects of the source of the waves (i.e., boundary conditions), in addition to the effects of gravity. Since the source of the waves at the lower boundary of the prominence pillar depends on $x$ and $y$ and on time (see Equation~\ref{Vzt0:eq} below), the compressional fast magnetosonic wave pressure introduces structure primarily in the density and magnetic field in the $x$, $y$, and $z$ directions inside the pillar. Realistic three-dimensional force-free extrapolations show that the magnetic field of dips in quiescent prominences is mostly horizontal \citep[e.g.,][]{Dud12}. Observed prominence structure shows evidence of magnetic shear and flows (see, e.g., \citet{Ant94} and the recent review by \citet{Gib18}). Our aim is to investigate the effects of uniform as well as sheared magnetic fields on the propagation of nonlinear fast magnetosonic waves in the prominence pillar. There are many past observations of flows in prominence pillars \citep[e.g.,][and references within]{OK20}. While there could be several possible sources for the observed jet-like or large-scale flows in prominences, here we model the effects of an unbalanced Lorentz force (i.e., a non-force-free magnetic configuration) with small shear as the driver of the large-scale flows in the prominence foot, Cases~4-8. While in some observations of polarity inversion lines (PILs) in prominences the magnetic shear could be large and the magnetic field possibly force-free, modeled with a linear force-free magnetic field \citep[e.g.,][]{AD98}, or a nonlinear force-free field \citep[e.g.,][]{Jia14}, our model investigates for the first time the effect of a non-force-free field on the formation of large-scale flows and on the propagation of fast magnetosonic waves in the prominence pillar self-consistently. Our model reproduces the main properties of such sheared magnetic configurations by introducing the $x$-dependent $B_y$ component that changes sign in the center of the prominence pillar at $x=0$, as modeled by Equation~\ref{B0:eq}, \begin{eqnarray} &&\mbox{\bf B}_0=B_{x0}\mbox{\bf\^x}+B_{y0}{\rm tanh}(x/w)\mbox{\bf\^y}, \label{B0:eq} \end{eqnarray} where $B_{x0}$ and $B_{y0}$ are given in Table~\ref{param:tab} for the cases studied, and $w=0.05$ is the fixed half-width of the prominence pillar. When $B_{y0}=0$, the magnetic field is potential, and the initial state given by Equations~\ref{T0:eq} and \ref{n0:eq} is in equilibrium. In order to study initial states that depart from equilibrium and contain currents (non-force-free), we use small values of $B_{y0} \ll B_{x0}$ in the initial state. The corresponding current density $\mbox{\bf j}_0$ and Lorentz force $\mbox{\bf L}_0$ in the $x$-$y$ plane are given by \begin{eqnarray} &&\mbox{\bf j}_0=\nabla\times\mbox{\bf B}_0=\frac{B_{y0}} {w}\mbox{\rm sech}^2(x/w)\mbox{\bf\^z},\\ &&\mbox{\bf L}_0=\mbox{\bf j}_0\times\mbox{\bf B}_0=j_{z0}(-B_{y0}\mbox{ \rm tanh}(x/w)\mbox{\bf\^x}+B_{x0}\mbox{\bf\^y}). \label{j0L0:eq} \end{eqnarray} The $x$ dependence of $\mbox{\bf j}_0$ and $\mbox{\bf L}_0$ along with the corresponding magnetic field in the $x$-$y$ plane are shown in Figure~\ref{B0_xy:fig}. The present model produces the desired Lorentz force, which points toward the center of the pillar.
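As a simple numerical cross-check of Equations~\ref{B0:eq} and \ref{j0L0:eq}, the sketch below evaluates the sheared field, the current density, and the Lorentz force on a one-dimensional grid, using the illustrative normalized values $B_{x0}=1$, $B_{y0}=0.2$, and $w=0.05$ (Case~7-like values), and verifies that the $x$ component of the force indeed points toward the pillar center.
\begin{verbatim}
import numpy as np

# Normalized parameters (Case 7-like values, for illustration only)
Bx0, By0, w = 1.0, 0.2, 0.05
x = np.linspace(-0.2, 0.2, 401)

By = By0 * np.tanh(x / w)              # sheared field component (Eq. B0)
jz = (By0 / w) / np.cosh(x / w)**2     # j0 = curl(B0) = (By0/w) sech^2(x/w) z-hat
Lx = -jz * By                          # x component of j0 x B0
Ly = jz * Bx0                          # y component of j0 x B0

# Lx < 0 for x > 0 and Lx > 0 for x < 0: the force points toward x = 0
print(np.all(Lx[x > 0] < 0), np.all(Lx[x < 0] > 0))   # True True
\end{verbatim}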
Note that we have also experimented with other forms of $B_y$, such as centrally peaked profiles, and found similar results for the fast magnetosonic waves, but with different forms of the Lorentz force and directions of the large-scale flows. The location of the prominence pillar is depicted by the yellow-shaded area in Figure~\ref{B0_xy:fig}. \begin{figure}[ht] \includegraphics[width=0.5\linewidth]{B0_x_By0.2-eps-converted-to.pdf} \includegraphics[width=0.5\linewidth,trim={0 2cm 0 0},clip]{B0_By0.2_v3-eps-converted-to.pdf} \caption{The initial magnetic field with $B_{y0}=0.2$ (see Case~7, below). (a) The $x$ dependencies of the $y$ component of the magnetic field $\mbox{\bf B}_{0}$ (black), the corresponding $z$ component of the current density $\mbox{\bf j}_{0}$ (blue), and the resultant $x$ component of the Lorentz force $\mbox{\bf L}_{0}$ (red). (b) The initial magnetic field vectors in the $x$-$y$ plane. The shaded area indicates schematically the location of the prominence pillar.} \label{B0_xy:fig} \end{figure} The adopted form of $B_y$ is justified by the dynamics of the observed flows and allows us to explore the effects of a non-potential and non-force-free magnetic field on fast magnetosonic wave propagation in a prominence. Moreover, this magnetic configuration may correspond qualitatively to a section of a sigmoidal filament structure that is often unstable, leading to eruption \citep[e.g.,][]{Yan12,Dau21}. The vertical extent of the prominence pillar evident in the observations in Figure~\ref{f:ondisk_context} is $\sim40$\arcsec\ in the plane of the sky (a lower limit). In the model we used $\Delta z=0.4$ for the height of the pillar; with the coordinate normalization of $0.1R_s$, the vertical extent of the model prominence pillar is $0.04R_s\approx28$ Mm, in agreement with observations. Clearly, the vertical extent of the observed low-temperature prominence pillar material, $\sim28$ Mm, is much larger than the $\sim 0.6$ Mm scale height of the $10^4$ K prominence material. Thus, one does not expect to see the cool material in gravitational equilibrium at these heights in a field-free region or in a purely vertically directed magnetic field, and the prominence material must be supported by a horizontal magnetic field component. The boundary conditions at $x=x_{min}$ and $x=x_{max}$ are line-tied, and the other boundary conditions are open except for the lower boundary of the prominence pillar ($z=z_{min}=1$). In order to launch the NFM waves at the lower pillar boundary, the following time-dependent boundary conditions are applied on the $V_z$ velocity component: \begin{eqnarray} &&V_z(t,x,y,z=z_{min})=\frac{V_d}{2}\left[1+{\rm cos}(\omega t)\right]e^{-[(x-x_0)/s_x]^4-[(y-y_0)/s_y]^2}. \label{Vzt0:eq} \end{eqnarray} Motivated by the observed wave propagation primarily inside the pillar as evident in Figure~\ref{f:2012Feb14CaIIimg} and the related animations, the source of the wave flux is set to be maximal in the center of the prominence pillar by using the parameters $x_0=y_0=0.0$, $s_x=0.10$, and $s_y=0.15$; the amplitude $V_d$ controls the nonlinearity. The density and magnetic field perturbations are computed by zero-order interpolation from the interior of the computational domain, whereas the transverse velocity components are set to zero at the boundary. This results in periodic perturbations of the magnetic field, density, and velocity $V_z$ that inject NFM waves into the prominence pillar structure.
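For reference, the driver in Equation~\ref{Vzt0:eq} can be written as the short function below (a sketch in normalized units, with the source parameters quoted above; the amplitude $V_d$ and frequency $\omega$ are the values listed in Table~\ref{param:tab} below).
\begin{verbatim}
import numpy as np

# Driver parameters from the text (normalized units)
x0, y0 = 0.0, 0.0
s_x, s_y = 0.10, 0.15

def vz_driver(t, x, y, V_d, omega):
    """Time-dependent V_z imposed at the lower boundary z = z_min (Eq. Vzt0)."""
    envelope = np.exp(-((x - x0) / s_x)**4 - ((y - y0) / s_y)**2)
    return 0.5 * V_d * (1.0 + np.cos(omega * t)) * envelope

# Example: Case 3 values, V_d = 0.02 V_A and omega = 12.56 / tau_A
print(vz_driver(t=0.0, x=0.0, y=0.0, V_d=0.02, omega=12.56))   # peak value 0.02
\end{verbatim}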
In Table~\ref{param:tab}, we provide the values of $V_d$, $\omega$, $B_{x,0}$, and $B_{y,0}$ for the nine modeled cases in the present study. The results of Case~0 without waves are provided for reference. \begin{table} \caption{Parameters of the numerical 3D MHD models of prominence pillars with waves and flows. The velocity amplitude is given in units of $V_A$, the frequencies in $\tau_A^{-1}$, the magnetic field strength $B_{x,0}$ in Gauss, and $B_{y,0}$ as a fraction of $B_0$.} \centering \hspace{1cm}\begin{tabular}{cllcc} \hline Case \# & $V_d\ [V_A]$ & $\omega$ $[ \tau_A^{-1}]$ & $B_{x,0}$ [G] & $B_{y,0}/B_0$ \\ \hline 0 & 0 & - & 10 & 0 \\ 1 & 0.01 & 5.28 & 10 & 0 \\ 2 & 0.01 & 12.56 & 10 & 0 \\ 3 & 0.02 & 12.56 & 10 & 0 \\ 4 & 0.01 & 5.28 & 10 & 0.1 \\ 5 & 0.01 & 12.56 & 10 & 0.1 \\ 6 & 0.02 & 12.56 & 10 & 0.1 \\ 7 & 0.02 & 12.56 & 10 & 0.2\\ 8 & 0.01 & 6.28 & 20 & 0.1 \\ \hline \hline \end{tabular} \\ \label{param:tab} \end{table} \section{Numerical Results} \label{num:sec} Here we present the results of the 3D MHD modeling of the NFM waves in the prominence pillar for the parameters given in Table~\ref{param:tab}. In order to explore the details of the waves, we first show in Figure~\ref{nTVfbeta_jB_xz:fig} the results in the $x$-$z$ plane cut at $y=0$ for Case~3 at $t=3.14\tau_A$. This prominence pillar is embedded in a uniform horizontal (potential) magnetic field modeled as described in Equation~\ref{B0:eq} with $B_{y0}=0$. The NFM waves are launched by the time-dependent velocity source ($V_z$, Eq.~\ref{Vzt0:eq}) at the lower boundary with amplitude $V_d=0.02V_A$ and frequency $\omega=12.56$, localized at the center of the prominence pillar. The waves propagate inside the pillar with nonlinear effects evident in the non-sinusoidal structure of the oscillations, i.e., asymmetric and sharp peaks in the variables. The nonlinear wave pressure displaces the magnetic field lines upwards, as is evident in this figure, affecting the temperature, magnetosonic speed, and plasma $\beta$ structure. The details of the perturbations due to the waves are particularly clear in the density contrast, $\Delta\rho/\rho_0$. The prominence pillar acts as a leaky waveguide \citep{Cal86} for the NFM waves, as the magnetosonic speed $V_f$ is smaller by an order of magnitude inside the prominence pillar compared to the outside (coronal) region. The small leakage of the wave is most apparent in Figure~\ref{nTVfbeta_jB_xz:fig}e as the periodic density perturbations outside of the prominence pillar. The squared magnitude of the current density, $j^2$, shows the regions of enhanced currents that lead to Ohmic dissipation ($j^2/S$ in normalized units) associated with the NFM waves. The velocity components in the $x$-$z$ plane are shown in Figure~\ref{nTVfbeta_jB_xz:fig}f, where the arrows indicate the local direction (not magnitude) of the velocity vectors and the magnitude $V$ is color-shaded as indicated by the color bar. The corresponding magnetic field in the $x$-$z$ plane is shown in Figure~\ref{nTVfbeta_jB_xz:fig}h. The dominant $B_x$ component is evident, along with the perturbations in the magnetic field magnitude $B$ due to the fast magnetosonic waves and the nonlinear wave pressure effects within the base of the prominence pillar. \begin{figure}[ht] \centerline{ \includegraphics[width=\linewidth]{nTVfbeta_BV_vd0.02fd12.56B10_2r-eps-converted-to.pdf} } \caption{The variables in the $x$-$z$ plane at $y=0$, $t=3.14$ with $V_d=0.02$, $\omega=12.56$ for Case 3. (a) $\rho$ with overlaid magnetic field lines.
(b) The normalized fast magnetosonic speed $V_f$ magnitude with several isocontours, (c) $T$, (d) $\beta$, (e) $\Delta \rho/\rho_0$, (f) $V$ magnitude with arrows indicating the local direction of the velocity vectors, (g) $j^2$, (h) $B$ magnitude with arrows indicating the local magnetic field direction. An animation of panels (a) and (f) is available online. The video shows the density structure in the $x$-$z$ plane (left) with over-plotted field lines, and the corresponding velocity maps. The animation spans 3.21 normalized time units and is shown in a 5 s video.} \label{nTVfbeta_jB_xz:fig} \end{figure} In Figure~\ref{nTVfbeta_jB_yz:fig} we show the variables in a cut along the prominence pillar axis, i.e., in the $y$-$z$ plane at $x=0$ in the high-density, low-temperature (relative to coronal values) region at time $t=3.14\tau_A$. The effects of the NFM waves generated by the time-dependent boundary conditions in $V_z$ are evident. In particular, the density perturbations are in phase with the magnetic field perturbations, as seen by comparing the panels in Figure~\ref{nTVfbeta_jB_yz:fig}e and h, as expected for the fast magnetosonic waves. The magnitude of the waves is largest in the center of the pillar due to the form of the wave source in Equation~\ref{Vzt0:eq}, as well as to the waveguide trapping of the wave flux. The squared current magnitude $j^2$ is shown in Figure~\ref{nTVfbeta_jB_yz:fig}g, where the larger currents are associated with the wave fronts and are regions of higher Ohmic dissipation (therefore also affecting the temperature). The directions of the perturbations in $V$ and $B$ in the $y$-$z$ plane are shown in Figure~\ref{nTVfbeta_jB_yz:fig}f and h. The waves propagate in the $z$ direction since $V_f$ is nearly uniform in the $y$-$z$ plane, with very small perturbations due to the waves (note the intensity range in the color bar of Figure~\ref{nTVfbeta_jB_yz:fig}b). \begin{figure}[ht] \centerline{ \includegraphics[width=1.1\linewidth]{nTVfbeta_jB_yz_t=3.14-eps-converted-to.pdf} } \caption{The variables in the $y$-$z$ plane at $x=0$, $t=3.14$ for Case 3. (a) $\rho$, (b) $V_f$, (c) $T$, (d) $\beta$, (e) $\Delta \rho/\rho_0$, (f) $V$ with arrows showing the direction of the velocity, (g) $j^2$, (h) $B$ with arrows showing the direction of the magnetic field.} \label{nTVfbeta_jB_yz:fig} \end{figure} The temporal evolution of the variables at a point in the center of the prominence pillar ($x=0$, $y=0$, $z=1.2$) for Cases~0-3 is shown in Figure~\ref{dbvnt_prom_vd0.01-0.02fd5.28-12.56B10_by00.0:fig}. The three components of the velocity and magnetic field perturbations (with respect to $\mbox{\bf B}_0$) and the change in density and temperature are displayed. The difference between Case~1 and Case~2 is the increase of wave frequency by a factor of 2.4, while in Case~3 the amplitude of the velocity at the boundary is increased by a factor of 2 with respect to Cases~1 and 2. Evaluating the propagation speed of the NFM waves from the animations for Cases 2 and 3, we find that they travel close to, and slightly above (by 5\%-15\%), the theoretical linear fast magnetosonic speed. The speedup is larger for the higher-amplitude waves, suggesting a nonlinear effect \citep[see][and references therein]{OD97}. The nonlinearity of the fast magnetosonic waves is evident primarily in their non-sinusoidal temporal structure, which shows evidence of steepening.
This nonlinear effect is more evident in the low-frequency waves and in the large-amplitude waves, where the wave peaks are sharper than the troughs due to steepening. The magnetic field perturbations show secular growth of the amplitude, an indication of nonlinear wave pressure effects on the background magnetic structure. The density perturbations show an oscillatory increasing trend, whereas the temperature perturbations are small. The case without waves (Case~0) shows the evolution of the background state in the center of the pillar due to the gravitational settling of the initial state. One can estimate the magnetic field change expected for the given density increase of $\sim$5\% due to the gravitational settling. This corresponds to a magnetic pressure change that is 5\% of $\beta$, or about 0.4\%, which equals the value of $\Delta B/(2B_0)$. Therefore, the estimated $\Delta B/B_0=0.8\%=0.008$ is consistent with the magnitude of the changes shown in Figure~\ref{dbvnt_prom_vd0.01-0.02fd5.28-12.56B10_by00.0:fig} in the field component plotted there for Case~0 (green curve). It is evident that the small velocity $V_z$ readjustment exhibits an initial oscillatory evolution due to the effect of gravity, followed by a nearly constant downward velocity $V_z\approx-0.003V_A$, corresponding in physical units to about $-2$ km s$^{-1}$. We investigated the effects of diffusion by repeating Case~0 with higher ($S=10^4$) and lower ($S=10^6$) resistivity. In the latter case the spatial resolution was doubled in each direction ($514^3$) compared to other runs. We find that in the case with $S=10^4$ the small down flow velocity increases by $30\%$. However, in the reduced resistivity, high resolution run with $S=10^6$, the asymptotic down flow speed remains nearly the same as in the case with $S=10^5$, where the density structure shows slight compression and broadening of the lower part of the pillar. The diffusion of cold prominence material through the supporting magnetic field is expected in real prominences, in qualitative agreement with the present model for the higher resistivity case, since the material is partially ionized \citep[for example, see][]{Gil02,Kho14} and the frozen-in condition breaks down due to finite resistivity \citep{Low12,LE14,JK21}; see the review by \citet{Gib18}. While in the MHD model the down flow velocity at lower resistivity is due to compressive effects, we find that this velocity is small compared to the phase speed of the fast magnetosonic waves and therefore has no significant effect on the wave propagation. \begin{figure}[ht] \centerline{ \includegraphics[width=\linewidth]{Fig9_rev-eps-converted-to.pdf} } \caption{Temporal evolution of the variables in the center of the prominence pillar for Case 1 (blue) with $V_d=0.01$, $\omega=5.28$, Case 2 (red) with $V_d=0.01$, $\omega=12.56$, and Case 3 (black) with $V_d=0.02$, $\omega=12.56$. (a) Velocity components $V_z$ (solid), $V_y$ (short dashes), $V_x$ (long dashes). (b) Magnetic field component perturbations $\Delta B_x$ (solid), $\Delta B_y$ (short dashes), $\Delta B_z$ (long dashes). (c) Changes in density $\Delta \rho/\rho_0$ (solid) and temperature $\Delta T/T_0$ (long dashes) normalized by the respective initial values $\rho_0$ and $T_0$. The case without waves (Case 0) is shown (green) for reference.
Times are in units of $\tau_A$.} \label{dbvnt_prom_vd0.01-0.02fd5.28-12.56B10_by00.0:fig} \end{figure} The structure of the magnetic field and density perturbations due to the NFM waves for Case~2 is shown in Figure~\ref{B3dn3d_by0:fig}. In the present model the initial state was the result of Case~0 without waves, where slight dips form in the magnetic configuration of the pillar due to the effects of reduced gravity. The figure and the animation show the magnetic field lines and the density isocontours in the domain at $t=5.8\tau_A$. A small lifting of the magnetic field lines by the wave pressure is evident mostly in the lower region of the pillar, while the small gravitational dipping of the field lines is most evident in the upper part of the domain in the initial state, and is reduced at later times due to the effects of wave pressure. Isocontours of density indicate the locations of the propagating compressions due to the guided NFM waves, while further details of the wave propagation are exhibited in the animations provided. \begin{figure}[ht] \includegraphics[width=0.43\linewidth]{B3d_t=5.8.png} \includegraphics[width=0.5\linewidth]{n3d_t=5.8.png} \caption{The results of the 3D MHD model in Case~2. (a) Magnetic field lines and (b) density isocontours due to the propagating NFM waves in the domain at $t=5.8\tau_A$. An animation of this figure is available online. The duration of the animation is 3.6 normalized time units, shown in a 2 s video.} \label{B3dn3d_by0:fig} \end{figure} The effects of non-potential, non-force-free magnetic fields on the propagation of the fast magnetosonic waves in the prominence pillar are explored in Cases~4-8. The form of the background magnetic field is given by Equation~\ref{B0:eq}. The parameters of Cases~4-6 are the same as in Cases~1-3 except that $B_{y,0}=0.1$. In Case~7 we consider $B_{y,0}=0.2$, with the other parameters as in Case~3, and in Case~8 we consider $B_0=20$ G, with the remaining parameters as listed in Table~\ref{param:tab}. These results are discussed below. In Figure~\ref{nTVfbeta_jB_by0.1_xz:fig} we show the variables in the $x$-$z$ plane at $y=0$ for Case~6 with $B_{y0}=0.1$ at $t=3.03\tau_A$. The NFM wave structure is most evident inside the prominence pillar in the relative density compressions, $\Delta\rho/\rho_0$, but is also seen in the variability in $\rho$, $\beta$, $j^2$, and the velocity and magnetic field magnitudes. Comparing $\Delta\rho/\rho_0$ to Case~3 (Fig.~\ref{nTVfbeta_jB_xz:fig}e), we find that the relative magnitude of the leakage in the $x$ direction is reduced. The effects of the $x$ component of the Lorentz force in compressing the prominence pillar density are seen by comparing the structure of $\rho$ to the initial state in Figure~\ref{B0_xy:fig} and to $\rho$ in Case~3 shown in Figure~\ref{nTVfbeta_jB_xz:fig}a. The apparent half-width is reduced by about $30\%$ in the present case. \begin{figure}[ht] \centerline{ \includegraphics[width=\linewidth]{nTVfbeta_jB_by0.1_xz_t=3.03-eps-converted-to.pdf} } \caption{The variables in the $x$-$z$ plane cut at $y=0$, $t=3.03\tau_A$ for $V_d=0.02$, $\omega=12.56$ (Case 6). (a) $\rho$ with field lines indicated with white lines. (b) $V_f$ magnitude with several isocontours. (c) $T$. (d) $\beta$. (e) $\Delta \rho/\rho_0$. (f) $V$ magnitude with arrows showing the local direction of the velocity vectors. (g) $j^2$.
(h) $B$ magnitude with arrows indicating the local direction of the magnetic field vectors (dominated by $B_x$).} \label{nTVfbeta_jB_by0.1_xz:fig} \end{figure} Figure~\ref{nTVfbeta_jB_by0.1_yz:fig} shows the variables in the $y$-$z$ plane at $x=0$ for the case with $B_{y0}=0.1$ (Case~6) at $t=3.03\tau_A$. The refraction of the wave fronts of the NFM waves due to the effect of the $B_{y0}$ magnetic field component is apparent by comparison with Figure~\ref{nTVfbeta_jB_yz:fig}. The wave structure is evident in the density and magnetic field perturbations, as well as in the corresponding current perturbations. In this magnetic configuration, the waves leak significantly out of the prominence pillar through the side boundary at $y=y_{max}$, decreasing the wave energy flux in the center of the pillar, whereas in the uniform magnetic field case, the main leakage takes place through the top of the prominence pillar ($z=z_{max}$) with an open boundary condition. \begin{figure}[ht] \centerline{ \includegraphics[width=\linewidth]{nTVfbeta_jB_by0.1_yz_t=3.03-eps-converted-to.pdf} } \caption{Variables in the $y$-$z$ plane at $x=0$, $t=3.03$ for Case 6. (a) $\rho$. (b) $V_f$. (c) $T$. (d) $\beta$. (e) $\Delta \rho/\rho_0$. (f) $V$. (g) $j^2$. (h) $B$.} \label{nTVfbeta_jB_by0.1_yz:fig} \end{figure} The cut in the $x$-$y$ plane (i.e., the solar `disk' view) of the model (Figure~\ref{vfbeta_xy_prom_vd0.02fd12.56B10_by00.1:fig}) shows the structure of the prominence pillar density, temperature, fast magnetosonic speed, $\beta$, $j^2$, {\bf B}, and {\bf V} at height $z=1.2$ at $t=3.03\tau_A$ for Case~6. The effects of the upward propagating NFM waves are evident. In the $x$-$y$ plane, the waves are most clearly seen in $\Delta\rho/\rho_0$, $j^2$, $\beta$, and the magnitudes $B$ and $V$. The $y$ dependence of the wave structure is affected by both the driving source and the wave refraction due to the $B_{y0}$ component of the background magnetic field. It is evident from $\Delta\rho/\rho_0$ that the leakage of the wave is significant in the density compressions outside the prominence pillar region (i.e., $|x|>0.05$). The Lorentz force generates a compression of the density primarily in the $x$ direction, with small magnetic field and density compression in the $y$ direction, as can be seen from the density and velocity structures in the $x$-$y$ plane. \begin{figure}[ht] \centerline{ \includegraphics[width=\linewidth]{vfbeta_xy_prom_vd0.02fd12.56B10_by00.1-eps-converted-to.pdf} } \caption{Variables in the $x$-$y$ plane at $z=1.2$, $t=3.03$ for Case 6. (a) $\rho$. (b) $V_f$ with several isocontours. (c) $T$. (d) $\beta$. (e) $\Delta \rho/\rho_0$. (f) $V$ magnitude with direction arrows. (g) $j^2$. (h) $B$ magnitude with direction arrows.} \label{vfbeta_xy_prom_vd0.02fd12.56B10_by00.1:fig} \end{figure} The temporal evolution of the variables for Cases~4-6 in the center of the prominence pillar at $x=0$, $y=0$, $z=1.2$ is shown in Figure~\ref{dbvnt_prom_vd0.01fd5.28-12.56B10_by00.1:fig}. Evidently, the non-force-free magnetic field introduces flows due to the Lorentz force that lead to compression in the prominence pillar, and corresponding increases of magnetic field strength and density that disrupt the initial gravitational equilibrium. In particular, it is evident that the $V_y$ component has a similar accelerated evolution in Cases~4-6, with weak dependence on the properties of the fast magnetosonic waves. Thus, the effects of the Lorentz force in the non-force-free field in self-consistently introducing mass flows become evident.
The flow accelerates during the simulated time, exceeding 10\% of the Alfv\'{e}n speed (about 70 km s$^{-1}$ with the present normalization) by the final modeled time. \begin{figure}[ht] \centerline{ \includegraphics[width=\linewidth]{Fig14_rev-eps-converted-to.pdf} } \caption{The temporal evolution of the variables in the center of the prominence pillar for Case 4 (red) with $V_d=0.01$, $\omega=5.28$, Case 5 (blue) with $V_d=0.01$, $\omega=12.56$, and Case~6 (black) with $V_d=0.02$, $\omega=12.56$. (a) Velocity components $V_z$ (solid), $V_y$ (short dashes), $V_x$ (long dashes). (b) Magnetic field component perturbations $\Delta B_x$ (solid), $\Delta B_y$ (short dashes), $\Delta B_z$ (long dashes). (c) Changes in density $\Delta \rho/\rho_0$ (solid) and temperature $\Delta T/T_0$ (long dashes) normalized by the respective initial values $\rho_0$ and $T_0$. Times are in units of $\tau_A$.} \label{dbvnt_prom_vd0.01fd5.28-12.56B10_by00.1:fig} \end{figure} The effect of an increased Lorentz force on the structure of the prominence pillar and on the NFM waves is demonstrated in Case~7 with $B_{y0}=0.2$. As expected, the increased Lorentz force leads to a more rapid and powerful compression of the prominence pillar than in Cases 4-6. This affects the properties of the background density structure of the pillar and also the NFM waves. In particular, the wave frequency has decreased due to the increased compression, mainly due to the increase in density and corresponding decrease in $V_f$ inside the prominence pillar. This also leads to the decrease of the velocity amplitude associated with the NFM waves inside the pillar for the fixed wave source at the boundary. In Case~8 we investigate the effects of increased magnetic field strength on the NFM waves by doubling the assumed magnitude of the magnetic field, $B_0=20$ G. This change with respect to previous cases results in a doubling of $V_A$ and a decrease of plasma $\beta$ by a factor of four. Since the velocity amplitude $V_d$ is in units of $V_A$, the magnitude of the nonlinearity of the fast magnetosonic wave in Case~8 is essentially the same as in Case~4. Also, we note that $\tau_A$ in Case~8 is half the value in the other cases, and that the value of $\omega$ in Case~8 is equal to the value in Case~7 when converted to rad s$^{-1}$. The main difference with respect to previous cases is the effect of the wave pressure, which is now four times larger in Case~8 and leads to correspondingly stronger density compressions. \section{Discussion and Conclusions} \label{dc:sec} Recent high spatial and temporal resolution observations of a prominence pillar from Hinode/SOT in H$\alpha$ and Ca~II and IRIS in Mg~II show evidence of small-scale oscillations and propagating features associated with flows \citep{KOT18}. Analysis of Doppler shifts from Hinode/SOT H\ensuremath{\alpha}\ and IRIS shows red-wing/blue-wing contrasts that are consistent with propagating waves and flows on extended magnetic field lines \citep[e.g.,][]{OK20}. Here, we analyze additional observations of propagating small-scale oscillations in the Hinode/SOT Ca~II line and the blue wing of H\ensuremath{\alpha}\ in a prominence on 2012 February 14. Using space-time plots and wavelet analysis, we find oscillations with typical periods of order minutes and wavelengths of order 1000-2000 km with sharp peaks indicative of nonlinear steepening, and we identify the propagating features as signatures of nonlinear fast magnetosonic (NFM) waves.
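As a simple illustration of how these measured periods and wavelengths feed into the seismological analysis discussed below, the apparent (plane-of-the-sky) phase speed follows directly from their ratio; the numbers in the following sketch merely span the quoted ranges and are not fitted values.
\begin{verbatim}
# Apparent phase speed from the observed wavelengths and periods
# (illustrative grid spanning the ranges quoted in the text).
wavelengths_km = [1000.0, 1500.0, 2000.0]
periods_s = [120.0, 180.0, 300.0]      # "periods of order minutes"

for lam in wavelengths_km:
    for P in periods_s:
        v_ph = lam / P                 # km/s, uncorrected for projection
        print(f"lambda = {lam:6.0f} km, P = {P:5.0f} s -> "
              f"v_ph ~ {v_ph:4.1f} km/s")
\end{verbatim}
Comparing such phase speeds, corrected for projection, with the modeled fast magnetosonic speed and with density and temperature estimates is the basis of the seismological application outlined at the end of this section.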
Motivated by past and recent observations of the small-scale oscillations, we developed an idealized 3D MHD model of a prominence pillar that focuses on the generation and propagation of NFM waves guided by observed properties. The advantage of the simplified 3D model is the tractability of the wave features once the line-of-sight projection effects inherent in single-point plane-of-the-sky observations are removed. The 3D model reproduces the main physical properties of the prominence cool material embedded in the background magnetic field and of the observed propagating small-scale features. The present model extends previous 2.5D MHD studies of the propagating NFM waves into more complex and realistic prominence structures by allowing 3D wave propagation and couplings. There is evidence of nonlinear coupling of the NFM waves to other wave modes in the animation of the density structure, which shows secondary density compressions due to the slow magnetosonic mode that appear to follow the compressions associated with NFM waves. There is also evidence of small-amplitude Alfv\'{e}nic oscillations in the temporal signatures of the variables. However, we find that the main effects of the nonlinearity of the waves are the steepening and the coupling between the NFM waves and the background pillar structure in the low-$\beta$ plasma. We modeled eight cases with waves (in addition to the reference case without waves) and varied the main parameters of the waves in two types of magnetic field configurations (uniform potential and, for the first time, non-force-free) to provide insight into the effects of the various parameters on the generation and propagation of NFM waves. Evidence of velocity and magnetic shear is often observed in pre-eruptive prominence configurations \citep{Gib18}. Therefore, we modeled a non-force-free field with magnetic shear, which introduces large-scale flows, corresponding (aperiodic) compressions of the prominence pillar, and dynamic changes in wave propagation properties, all self-consistently. We find that the effects of the non-force-free sheared magnetic field on the pillar structure and on the wave propagation are significant, even for a relatively small magnitude of the shear-produced Lorentz force, due to the low-$\beta$ state of the prominence material. Thus, the effects of magnetic field shear on the NFM waves may affect the application of coronal seismology in prominence pillars. The modeling results show qualitative agreement with the observed propagating oscillations with nonlinear steepening in the prominence pillar, as demonstrated in previous studies \citep{Ofm15a,OK20} and the present observational analysis. The 3D MHD results further confirm the interpretation of the observed propagating small-scale features in terms of NFM waves that are wave-guided in the cool material (low fast magnetosonic speed) of the prominence pillar region. From the model we find that the wave nonlinearity leads to secular changes in the background magnetic field structure, density, and temperature due to the wave pressure, in addition to the wave steepening effects that affect the small-scale compressive structures. The low-frequency wave source leads to higher-amplitude guided NFM waves than the high-frequency source, due to lower leakage and dissipation. Our study demonstrates the potential of combining the observed small-scale waves with modeling for magnetic seismology of the prominence structure.
One can apply coronal seismology by using the properties of the observed waves, such as wavelengths and periods, to determine the phase speed of the waves. The relation between the phase speed and the magnetic field can be obtained from linear theory for waves in simplified geometry \citep[e.g.,][]{NV05}. For nonlinear waves in more complex geometry the phase speed can be obtained from a 3D MHD model. Finally, by comparing the theoretical/modeled phase speed with the observed phase speed and with the density and temperature information, one can determine the magnetic field in the pillar (taking into account possible plane-of-the-sky (POS) projection effects). The details of the magnetic geometry could be deduced from the observed direction of wave propagation, where a 3D MHD model helps alleviate the POS observational ambiguity. The present model considers the nonlinearity in various idealized magnetic field geometry scenarios, and in the future more realistic 3D MHD wave models will include more detailed magnetic and density structure based on specific observations, thus improving the accuracy of the coronal seismology method. LO acknowledges support by NASA Cooperative Agreement 80NSSC21M0180 to The Catholic University of America. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. TAK and CRD were supported by NASA's H-ISFM program at Goddard Space Flight Center. Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway). We are grateful to the late Ted Tarbell for helpful discussions concerning the Hinode/SOT data during our past collaborations. The Global Oscillation Network Group (GONG) Program, managed by the NSO, is operated by AURA Inc.\ under a cooperative agreement with the NSF. \vspace{5mm} \facilities{Hinode/SOT, GONG} \software{SolarSoft}
\section{Introduction} \begin{table*} \caption{Observation log of XMM{\it-Newton}\xspace and {\it Chandra}\xspace data used for our spectral modelling of 4U~1820-30\xspace.} \label{tab:observations} \centering \begin{tabular}{ c c c c c c c } \hline\hline \noalign{\vskip 0.75mm} Dataset & Obs. ID & Date & Exposure (ks) & Count Rate ($\rm cts\ s^{-1}$) & Instrument & Mode \\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{7}{c}{{\it Chandra}\xspace} \\ \hline \noalign{\vskip 0.75mm} 1 & \textbf{98} & $10/03/2000$ & 15.0 & 37.7 & LETG/HRC-S & - \\ \arrayrulecolor{gray}\cline{2-7} \noalign{\vskip 0.75mm} \multirow{2}{*}{2} & \textbf{1021} & $21/07/2001$ & 9.6 & 84.3 & HETG/ACIS & TE \\ & \textbf{1022} & $12/09/2001$ & 10.7 & 107.1 & HETG/ACIS & TE \\ \cline{2-7} \noalign{\vskip 0.75mm} \multirow{3}{*}{3} & \textbf{6633} & $12/08/2006$ & 25.1 & 126.0 & HETG/ACIS & CC \\ & \textbf{6634} & $20/10/2006$ & 25.0 & 174.0 & HETG/ACIS & CC \\ & \textbf{7032} & $05/11/2006$ & 46.2 & 148.1 & HETG/ACIS & CC \\ \cline{2-7} \noalign{\vskip 0.75mm} 4 & \textbf{12444} & $08/03/2011$ & 89.7 & 84.4 & LETG/ACIS & CC \\ \arrayrulecolor{black} \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{7}{c}{XMM{\it-Newton}\xspace} \\ \hline \noalign{\vskip 0.75mm} 5 & \textbf{008411}0201 & $09/10/2001$ & 39.6 & 74.7 & RGS & Spec. HER+SES \\ 6 & \textbf{055134}0201 & $02/04/2009$ & 41.8 & 51.3 & RGS & Spec. HER \\ \noalign{\vskip 0.75mm} \hline \end{tabular} \end{table*} In the space among the stars resides interstellar matter, a vast and heterogeneous mixture of atoms, molecules and solid dust grains. As a function of the temperature and density of the diffuse gas, the interstellar medium can be classified in three main phases \citep[][and reference therein]{Ferriere01}: a \emph{cold} phase (with temperature $T_{\rm ISM} < 100$ K and density $n_{\rm H}$ = $10-10^{6}\ \rm cm^{-3}$), a \emph{warm} phase ($T_{\rm ISM} \sim 8000$ K and $n_{\rm H} = 0.2-0.5\ \rm cm^{-3}$, including both warm neutral and warm photoionised gases), and a \emph{hot} phase ($T_{\rm ISM} \sim 10^6$ K and $n_{\rm H} \sim 6.5 \times 10^{-3}\ \rm cm^{-3}$).\\ The existence of the hot interstellar medium (also known as hot coronal gas) in the Milky Way has been discovered through the detection of the soft X-ray background in the 0.1-1 keV energy range \citep{Tanaka77} and through the UV \ion{O}{vi}\xspace absorption lines in the spectra of OB stars \citep{Jenkins78}. The hot phase has been proposed to account for much of the missing baryons within the Galactic halo, and as a tracer of energetic feedback from stars and supernovae, which plays an important role in shaping the ecosystem of the Galaxy \citep{Mckee77,Miller13}. \\ In the past several years it has been demonstrated that the hot phase of the ISM can be successfully characterised through X-ray absorption line spectroscopy \citep[e.g.][]{Futamoto04,Yao05,Wang10,Liao13,Gatuzz18}. Through the measurements of absorption lines produced by various ions in the spectra of bright X-ray sources, primarily active galactic nuclei and X-ray binaries, it is possible to set physical constraints on ionisation process, temperature, kinematics, and chemical abundances of the hot gas. It is also possible to study its distribution in both the disc and the halo of the Galaxy. \\ However, the interpretation of these high-ionisation absorption lines can be uncertain. 
Indeed, these features are not always clearly associated with the interstellar hot gas in a collisional ionisation equilibrium state. In some systems, they can be produced by an absorber intrinsic to the background X-ray source and photoionised by its strong radiation. The presence of multiple blueshifted and/or variable high-ionisation lines indicates a wind outflowing along our line of sight, in particular from the accretion disc of stellar mass black holes \citep[e.g.,][]{Lee02,Parmar02,Ueda04,DiazTrigo06}. On the other hand, in dipping X-ray binaries, strong absorption lines that are local to the source are found not to be blueshifted and these are thought to be associated with the corona of the accretion disc \citep{Sidoli01,DiazTrigo06}. Therefore, for some sources, it might be difficult to discern if the presence of high-ionisation lines in the spectrum is associated with a collisionally ionised gas in interstellar space rather than with a photoionised absorber associated with the X-ray source. One example is the source that we analyse in the present work.\\ \object{4U 1820-30} is a well-known bright X-ray source in the Sagittarius constellation, first observed by the {\it Uhuru}\xspace satellite \citep{Giacconi72}. This source is an accreting neutron star low-mass X-ray binary, residing in the globular cluster NGC 6624 at $(l,b) = (2{\ensuremath{^{\circ}}}\xspace.8,-7{\ensuremath{^{\circ}}}\xspace.9)$. The binary consists of an ultracompact system with an orbital period of $\sim 11\ \rm min$ and size $r=1.3\times10^{10}\ \rm cm$ \citep{Stella87}. The companion has been identified as a He white dwarf. The distance of 4U~1820-30\xspace has been determined to be $7.6\pm0.4\ \rm kpc$ \citep{Kuulkers03}, thus it is very close to the Galactic centre. Consequently, our line of sight samples the entire inner Galactic disk radially to a height of $\sim 1\ \rm{kpc}$ off the Galactic plane. \\ The soft X-ray band of 4U~1820-30\xspace has been demonstrated to be a useful tool to study simultaneously the different phases of the interstellar medium. In previous works, the cold phase has been studied through the oxygen, neon, and iron photoelectric edges detected in the high-resolution X-ray spectra \citep{Juett04,Juett06,Miller09}. An accurate analysis of the oxygen K-edge and iron L-edges has been performed by \cite[][hereafter \citetalias{Costantini12}]{Costantini12}, where they modelled the absorption by both cold gas and interstellar dust. The detection of low-ionisation lines\footnote{For the wavelength of the lines we refer to \cite{Juett06} and the {\textsc{Spex}}\xspace atomic database \cite{Kaastra18}.}, such as \ion{Ne}{ii}\xspace ($14.608$ {\AA}), \ion{Ne}{iii}\xspace ($14.508$ {\AA}), \ion{O}{ii}\xspace ($23.351$ {\AA}), \ion{O}{iii}\xspace ($23.028$ {\AA}), indicates the presence of a low-ionisation gas towards 4U~1820-30\xspace with a temperature of $\sim 5 \times 10^{4}\ \rm{K}$ \citep[][\citetalias{Costantini12}]{Cackett08}. \\ The presence of hot gas towards the source has been previously studied. \cite{Futamoto04} clearly detected the \ion{O}{vii}\xspace ($21.602$ {\AA}), \ion{O}{viii}\xspace ($18.967$ {\AA}) and \ion{Ne}{ix}\xspace ($13.447$ {\AA}) absorption lines in the spectrum of 4U~1820-30\xspace. A Gaussian fit to those lines provided estimates for the column densities through the curve of growth analysis.
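In the optically thin limit this conversion is a one-line calculation, as in the sketch below, which applies the standard linear curve-of-growth relation with an approximate oscillator strength for the \ion{O}{vii}\xspace He$\alpha$ line and a purely illustrative equivalent width (not the values measured by \cite{Futamoto04}).
\begin{verbatim}
def column_density_thin(ew_mA, wavelength_A, f_osc):
    """Ionic column density [cm^-2] on the linear (unsaturated) part of
    the curve of growth: N = 1.13e20 * W_lambda[A] / (f * lambda[A]^2)."""
    return 1.13e20 * (ew_mA / 1000.0) / (f_osc * wavelength_A**2)

# Illustrative example: O VII He-alpha at 21.602 A, oscillator strength
# of roughly 0.7, and an assumed equivalent width of 20 mA.
print(f"N(O VII) ~ {column_density_thin(20.0, 21.602, 0.7):.1e} cm^-2")
\end{verbatim}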
An important shortcoming of such a method is that, in the case of saturated or unresolved lines, various possible line broadening effects influence the equivalent width of the line and hence the derived column density. To tackle this degeneracy, \cite{Yao05} constructed an absorption line model which includes ionisation equilibrium conditions. This model allows one to study directly the physical conditions of the absorbing gas, such as temperature and ionic abundances.\\ Both \cite{Futamoto04} and \cite{Yao05} associated these high-ionisation lines with interstellar hot gas, under the assumption of a unity filling factor and that all the absorption arises from the same gas. Moreover, the absolute velocity has been found to be consistent with zero, ruling out the possibility of an outflowing gas. \\ Subsequently, \cite{Cackett08} modelled the high-ionisation lines in the spectra of 4U~1820-30\xspace using the photoionisation code XSTAR \citep{Bautista01} and found that photoionisation can reproduce the observed lines well, suggesting a photoionised absorber, possibly an atmosphere intrinsic to the accretion disc, as a plausible explanation. \\ Finally, \cite{Luo14} focused on the \ion{Ne}{ix}\xspace and \ion{Fe}{xvii}\xspace absorption lines using a narrow band ($10$ to $15.5$ \AA) of {\it Chandra}\xspace observations. They found a weak correlation between the equivalent width of the lines and the luminosity of the source. This would suggest that a significant fraction of these X-ray absorbers may originate in the photoionised gas intrinsic to the X-ray binaries. \\ \indent In the present work, we aim to investigate the origin of these high-ionisation absorption lines. In order to advance previous studies, we apply state-of-the-art plasma models of the {\textsc{Spex}}\xspace code, with the latest accurate atomic databases, to all the XMM{\it-Newton}\xspace and {\it Chandra}\xspace spectra of 4U~1820-30\xspace. We test a photoionisation model, a collisionally ionised model, and combinations of them. We then analyse the X-ray spectra and compare the models through two different statistical approaches: the standard $C$-statistic \citep{Cash79} and the Bayesian analysis \citep{vanDyk01,Gelman13}. The latter method carefully explores the whole parameter space, inferring the most probable estimates of the model parameters.\\ The organisation of the paper is as follows: in Section \ref{sec:obs_datareduction}, we describe the XMM{\it-Newton}\xspace and {\it Chandra}\xspace observations and our data reduction procedure. The broadband model of 4U~1820-30\xspace is presented in Section \ref{sec:continuum}. The ionised gas features are analysed and described in Section \ref{sec:hot_gas}, where we also compare the two different models. In Section \ref{sec:discussion} we discuss the nature of the high-ionisation absorption lines seen in the spectra of the accreting source. The summary of the main results of this work is given in Section \ref{sec:summary}. All errors are measured at the 68\% ($1\sigma$) confidence level unless marked otherwise. \section{Observations and data reduction} \label{sec:obs_datareduction} 4U 1820-30 has been observed by the grating spectrometers aboard both {\it Chandra}\xspace and XMM{\it-Newton}\xspace. {\it Chandra}\xspace carries two high spectral resolution instruments, namely the High Energy Transmission Grating \citep[HETG,][]{Canizares05} and the Low Energy Transmission Grating \citep[LETG,][]{Brinkman00}.
The LETG can operate with either the Advanced CCD Imaging Spectrometer (ACIS) or the High-Resolution Camera (HRC), whereas the HETG can be coupled only with the ACIS. The HETG consists of two grating assemblies, the high-energy grating (HEG) and the medium-energy grating (MEG). The energy resolutions of the HEG, MEG, and LETG are 0.012, 0.023, and 0.05 {\AA} (FWHM), respectively. On the other hand, XMM{\it-Newton}\xspace carries, behind the multi-mirror assemblies, two Reflection Grating Spectrometers \citep[RGS,][]{denHerder01}. The energy resolutions of the two instruments, RGS1 and RGS2, are about $0.07$ \AA.\\ In the band of interest for the high-ionisation lines, the RGS, LETG, and HETG have comparable resolving powers but different instrumental characteristics. The RGS and LETG offer higher effective areas and a broader coverage in the softer bands ($>10$ \AA), which is an optimal combination for detecting, in a single observation, the absorption lines of highly ionised atoms of multiple elements, down to carbon and nitrogen. The HETG best observes higher-energy lines such as \ion{Ne}{ix}\xspace and \ion{Ne}{x}\xspace. We model both HEG and MEG data. Our joint modelling of XMM{\it-Newton}\xspace and {\it Chandra}\xspace data enables us to better constrain all the spectral absorption lines, and therefore to better understand the nature and origin of hot gas towards 4U~1820-30\xspace. The synergy of these different grating instruments therefore improves, both quantitatively and qualitatively, the analysis of the X-ray spectrum of 4U~1820-30\xspace.\\ The characteristics of the observations used in this work are displayed in Table~\ref{tab:observations}. The XMM{\it-Newton}\xspace data reduction is performed using the Science Analysis System\footnote{\url{https://www.cosmos.esa.int/web/xmm-newton/sas}} (SAS, version 18.0.0). The earlier observation, obsID 008411, shows a flaring background, which is filtered out. This results in a cut of $\sim3$ ks on the total exposure time. The two RGS observations are taken in two different \textit{spectroscopy mode} flavours: high-event-rate with single-event-reconstruction (HER+SES, for obsID 008411) and high-event-rate (HER, for obsID 055134). \\ We obtain the {\it Chandra}\xspace observations from the Transmission Grating Catalogue\footnote{\url{http://tgcat.mit.edu/}} \citep[TGCat,][]{Huenemoerder11}. We combine the positive and negative first-order dispersion of each observation using the {\it Chandra}\xspace data analysis system \citep[CIAO version 4.11,][]{Fruscione06}. The two observations taken in 2001 in timed exposure (TE) mode display similar fluxes and continuum parameters; therefore, we stack the data for the HEG and MEG arms using the CIAO tool \texttt{combine\_grating\_spectra}. Similarly, we combine the three observations taken in 2006 in continuous clocking (CC) mode. We fit the HEG and MEG spectra of the same stacked data with the same continuum model, correcting them for the slightly different ($\sim$ 5\%) instrumental normalisation. Therefore, after the combination of the grating spectra, we obtain six different datasets as listed in the first column of Table \ref{tab:observations}.\\ During all the observations, the source is observed in a high flux state, on average $ F_{\rm 2-10\ keV} \sim 10^{-8}\ {\ensuremath{\rm{erg\ cm}^{-2}\ \rm{s}^{-1}}}\xspace$. Because of this high flux, the observations are affected by pileup. This effect is particularly present in the {\it Chandra}\xspace observations taken in TE mode.
We overcome this issue ignoring the region of the spectra ($\lambda$ between 4 and 19 \AA) most affected by pileup. At longer wavelengths ($\lambda > 20 $ \AA), the combined effect of high absorption and low effective area reduces significantly the observed flux. Thus, this region is unaffected by pileup and can be safely used. However, the narrow absorption features are in general minimally influenced by pileup. \begin{figure} \centering \includegraphics[width=\columnwidth]{plot/continuum4u1820-30.pdf} \caption{Broadband best-fit to the X-ray spectrum of 4U 1820-30 with warm and cold absorption described in Section \ref{sec:continuum}. We display for clarity only RGS1 and RGS2 (in gray and black, respectively) of obsID 008411. We overlap the best model of the broadband spectrum (in red), which is made up of a Comptonisation model (in blue) and a blackbody (in orange). We also show the gaussian (in brown) added to the model in order to fit the possible instrumental bump in excess centred at $24.7$ \AA. We divide the data for the instrumental effective area in order to clean the spectrum from the numerous instrumental features. In the bottom panel we show the residuals defined as $\rm (observation-model)/error$.} \label{fig:continuum} \end{figure} \section{Broadband spectrum} \label{sec:continuum} \begin{table*} \caption{Best-fit parameters of our model for the broadband spectrum of 4U 1820-30} \label{tab:broadband} \small \centering \begin{threeparttable} \begin{tabular}{ c c c c c c c c c } \hline\hline \noalign{\vskip 0.75mm} \multirow{3}{*}{data} & {\tt bb} & \multicolumn{2}{c}{\tt comt\tnotex{tnote:tau} } & {\tt hot} (cold) \tnotex{tnote:cold} & \multicolumn{2}{c}{{\tt hot} (warm)} & \multirow{1}{*}{$F_{0.5-2\ \rm keV}$} & \multirow{3}{*}{$C$stat/dof\tnotex{tnote:tot_c}} \\ \cline{3-4}\cline{6-7} \noalign{\vskip 0.75mm} & $k_{\rm B}T$ & $k_{\rm B}T_{0}$ & $k_{\rm B}T_{1}$ & \ensuremath{N_{\mathrm{H}}}\xspace & \ensuremath{N_{\mathrm{H}}}\xspace & $k_{\rm B}T$ & {\scriptsize$[10^{-9}]$} \\ \noalign{\vskip 0.75mm} & \tiny keV & \tiny keV & \tiny keV & \tiny $\rm 10^{21}\ cm^{-2}$ & \tiny $\rm 10^{20}\ cm^{-2}$ & \tiny $\rm 10^{-3}\ keV$ & \tiny {\ensuremath{\rm{erg\ cm}^{-2}\ \rm{s}^{-1}}}\xspace & \\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{9}{c}{RGS/XMM{\it-Newton}\xspace} \\ \hline \noalign{\vskip 0.75mm} 008411 & $ 0.164\pm0.003 $ & $ 0.65\pm0.08 $ & $24\pm11 $ & $ 1.51\pm0.03$ & $ 0.9\pm0.2$ & $ 2.6\pm0.3$ & $ 2.2\pm0.2$ & $\bf5642/3700$ \\ 055134 & $ 0.139\pm0.004 $ & $ 0.32\pm0.02 $ & $29\pm9 $ & $ 1.56\pm0.03$ & $ 1.2\pm0.3$ & $ 2.8\pm0.2$ & $ 1.5\pm0.3$ & $\bf5541/3921$ \\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{9}{c}{LETG/{\it Chandra}\xspace} \\ \hline \noalign{\vskip 0.75mm} 98 & $ 0.16\pm0.01 $ & $ 0.40\pm0.08 $ & $7\pm5 $ & $ 0.08\pm0.05$ & $ <0.45 $ & $ 2\pm1 $ & $ 1.8\pm0.8$ & $\bf2070/1842$ \\ 12444 & $ 0.22\pm0.01 $ & $ 1.9\pm0.7 $ & $20\pm9 $ & $ 1.4\pm0.1$ & $ 2.0\pm0.9$ & $ 4.6\pm0.4$ & $ 1.5\pm0.7$ & $\bf1141/870$ \\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{9}{c}{HETG/{\it Chandra}\xspace} \\ \hline \noalign{\vskip 0.75mm} 2001 \tnotex{tnote:2001} & $ 0.14\pm0.04 $ & $ 0.20\pm0.05 $ & $1.5\pm0.2 $ & $ 1.1\pm0.2$ & $ 1.5\pm0.9$ & $ 4\pm1 $ & $ 1.7\pm0.2$ & $\bf7423/6628$ \\ 2006 \tnotex{tnote:2006} & $ 0.07\pm0.01 $ & $ 0.16\pm0.02 $ & $9\pm5 $ & $ 2.1\pm0.1$ & $ 1.2\pm0.9$ & $ 3\pm1 $ & $ 2.3\pm0.3$ & $\bf6304/4988$ \\ \noalign{\vskip 0.75mm} \hline \noalign{\vskip 0.75mm} Mean & $0.15\pm0.01$ & $0.6\pm 0.2 $ 
& $15\pm7$ & $1.53\pm0.08$ & $1.4\pm0.6$ & $ 3.4\pm0.6 $ & $10\pm2$ & \\ \noalign{\vskip 0.75mm} \hline \end{tabular} \begin{tablenotes} \item\label{tnote:tau}{\tiny We fix the optical depth $\tau$ to its default value of 3.} \item\label{tnote:cold}{\tiny We fix the temperature at the lower limit $0.5$ eV. In the cold phase we also consider the contribution of the dust.} \item\label{tnote:tot_c}{\tiny The total $C$-stat/dof of the fit is $28123/21949$} \item\label{tnote:2001}{\tiny Combination of obsId (1021, 1022) taken in 2001.} \item\label{tnote:2006}{\tiny Combination of obsId (6633, 6634 and 7032) taken in 2006.} \end{tablenotes} \end{threeparttable} \end{table*} We fit the HETG, LETG and RGS data simultaneously using the {\textsc{Spex}}\xspace X-ray spectral fitting package\footnote{\url{http://doi.org/10.5281/zenodo.2419563}} \citep[Version 3.05.00,][]{Kaastra18} with the $C$-statistic. Considering all the datasets, the observations cover the soft X-ray energy band between 4 and 35 {\AA} ($0.35 - 3.10$ keV). Given that the source was observed at different epochs, the continuum shape may differ significantly. To take into account continuum variability, we assign each dataset to a specific \texttt{sector} in {\textsc{Spex}}\xspace allowing the continuum parameters to vary freely for each sector. The broadband model with the cold and warm absorptions (described below) is fitted to the data adopting the \emph{C}-statistic test (see Figure \ref{fig:continuum}).\\ The source shows a Comptonised continuum together with a soft black-body component \citep[e.g.,][\citetalias{Costantini12}]{Sidoli01b}, which we reproduce in our modelling using the \texttt{comt} and \texttt{bb} components in {\textsc{Spex}}\xspace \citep[][respectively]{Titarchuk94,Kirchhoff60}. We first characterise the absorption by the neutral ISM gas. This is done by applying a collisionally ionised plasma model \citep[model \texttt{hot} in {\textsc{Spex}}\xspace,][]{dePlaa04,Steenbrugge05} with a temperature frozen to $k_{B}T= 0.5\ \rm eV$ in order to mimic the neutral gas. Continuum absorption by this neutral Galactic absorber takes into account most of the low-energy curvature of the spectrum especially for $\lambda \gtrsim 6 $ {\AA} ($E\lesssim 2$ keV). Due to the complexity of the oxygen K-edge region and to the uncertainties of its modelling we exclude from the fit the region extending between $22.2-24$ {\AA} ($\sim 516-558$ eV). Modelling this complex edge is beyond the scope of this paper.\\ Three other photoabsorption edges are present in the spectrum of the source\footnote{For the wavelength of the photoabsorption edges we refer to \cite{Gorczyca00} and \cite{Kortright00}.}: the neon K-edge at $\lambda = 14.3$ {\AA}, the iron L$_{2,3}$-edges at $\lambda_{2,3} = 17.2,\ 17.5 $ {\AA} and the nitrogen K-edge at $\lambda = 31.1$ {\AA}. While neon, nitrogen are expected to be mostly in gaseous form (especially neon due to its inert nature), iron is highly depleted from the gaseous form \citepalias{Costantini12}. To minimise the residual in the Fe L-edge band and to characterise the dust contribution, we add to our model the \texttt{AMOL} component \citep{Pinto10}. In particular, we adopt the Fe L-edges of the metallic iron. 
Coupling the abundance of the dust among all the datasets, we find that $(92\pm4)\%$ of the iron is locked into interstellar dust grains.\\ The detection of the \ion{Ne}{ii}\xspace, \ion{Ne}{iii}\xspace lines at 14.608 {\AA}, 14.508 {\AA} (848.74 eV and 854.59 eV), respectively, traces the presence of low-ionised gas along this line of sight. Therefore, to describe these low-ionisation lines we add a second collisionally ionised plasma model to our broadband model. \\ The best fit parameters of the broadband continuum including the cold and warm absorption are shown in Table \ref{tab:broadband}. The different fit of LETG obsID 98 might be caused by residuals at the metal edges, in particular the O K-edge, due to uncertainties in the {\it Chandra}\xspace response matrix \citep{Nicastro05}. Thus, the estimates of the hydrogen column densities are altered by these residuals and we do not consider them in the average computations. However, the high-ionisation absorption lines are not affected and we can reliably consider them in our analysis.\\ We also test the effect of dust scattering using the \texttt{ismdust} model developed by \cite{Corrales16} and implemented into Xspec \citep{Arnaud96}. Adding this component to the continuum model, we obtain about 10\% lower hydrogen column density.\\ \section{High-ionisation lines} \label{sec:hot_gas} The spectra of 4U~1820-30\xspace display several lines from highly ionized elements: \ion{Mg}{xi}\xspace, \ion{Ne}{ix}\xspace, \ion{Fe}{xvii}\xspace, \ion{O}{vii}\xspace and \ion{O}{viii}\xspace. In Table \ref{tab:high_lines}, we list all the lines with their detection significance. The effective area of RGS and LETG HRC should allow us to detect the absorption lines of \ion{C}{vi}\xspace, \ion{N}{vi}\xspace and \ion{N}{vii}\xspace. However, since in this region the cold-gas absorption of the continuum is severe, the signal-to-noise ratio is too low for a meaningful analysis. \\ \begin{table} \caption{High-ionisation lines detected in the XMM{\it-Newton}\xspace and {\it Chandra}\xspace spectrum.} \label{tab:high_lines} \centering \begin{tabular}{ c c c c } \hline\hline \noalign{\vskip 0.75mm} Line & $\lambda$ [\AA] & $E$ [keV] & Significance \\ \noalign{\vskip 0.75mm} \hline \noalign{\vskip 0.75mm} \ion{Mg}{xi}\xspace & $9.1688$ & $1.3522$ & $ 6.5\sigma$ \\ \ion{Ne}{ix}\xspace & $13.447$ & $0.9220$ & $ 3.9\sigma$ \\ \ion{Fe}{xvii}\xspace & $15.012$ & $0.8259$ & $ 10\sigma$ \\ \ion{O}{vii}\xspace He$\alpha$ & $21.590$ & $0.5743$ & $ 4.6\sigma$ \\ \ion{O}{vii}\xspace He$\beta$ & $18.626$ & $0.6657$ & $ 3.1\sigma$ \\ \ion{O}{viii}\xspace Ly$\alpha$ & $18.967$ & $0.6537$ & $ 8\sigma$ \\ \ion{O}{viii}\xspace Ly$\beta$ & $16.005$ & $0.7747$ & $ 11\sigma$ \\ \ion{C}{vi}\xspace & $33.734$ & $0.3675$ & $ <1\sigma$ \\ \ion{N}{vi}\xspace & $28.788$ & $0.4307$ & $ <1\sigma$ \\ \ion{N}{vii}\xspace & $24.779$ & $0.5004$ & $ 1.8\sigma$ \\ \noalign{\vskip 0.75mm} \hline \end{tabular} \end{table} \noindent These lines are produced by ionised plasma present along the line of sight of the source. 
To reveal the nature of this ionised gas we study the associated absorption features through two different physical models defined by us as \textit{ism} and \textit{photo} models:\\ \indent {\textit{ism model}} - To model the absorption by the hot coronal gas in the interstellar medium we use the collisionally ionised plasma model \texttt{hot} with a temperature above $10^{5.5}$ K ($E\gtrsim 0.05 $ keV).\\ \indent {\textit{photo model}} - To investigate the occurrence of photoionised gas intrinsic to the binary, we adopt the \texttt{xabs} model in {\textsc{Spex}}\xspace \citep{Steenbrugge03}. The \texttt{xabs} model calculates the transmission through a slab of photoionised gas where all ionic column densities are linked in a physically consistent fashion through a photoionisation model. To compute the ionisation balance for the \texttt{xabs} model, we use the photoionisation model of {\textsc{Spex}}\xspace, called \texttt{PION} \citep{Mehdipour16}. This was done for each dataset independently. For this calculation we adopt the proto-solar abundances of \cite{Lodders10}. We retrieve the spectral energy distribution (SED) for each dataset normalising the SED of \citetalias{Costantini12} to the soft X-ray band of the observations. In this energy band, the spectral shape remains consistent and only the normalisation varies. Since our observations do not cover the hard X-ray band, we assume that the shape of SED does not vary and remains consistent to the SED of \citetalias{Costantini12}. The \texttt{PION} calculations yield temperature and ionic column densities as a function of the ionisation parameter $\xi$ \citep{Tarter69}, which are used for fitting with the \texttt{xabs} model. The ionisation state of the absorber is measured through the ionisation parameter, which is defined as: \[ \xi = \frac{L_{\rm ion}}{n_{\rm{H}} r^2}\ , \] where $L_{\rm ion}$ is the source ionisation luminosity between 1 and 1000 Ryd, $n_{\rm{H}}$ the hydrogen density of the absorber and $r$ its distance from the ionising source. \\ We fit the two models using two different approaches: first the $C$-statistic, widely used in X-ray data analysis, and then the Bayesian parameter inference \citep{Gregory05,Gelman13}. The $C$-statistic can have some difficulties to identify multiple, separate, adequate solutions (i.e. local probability maxima) in the parameter space. On the contrary, the Bayesian approach explores the entire parameter space identifying the sub-volumes which constitute the bulk of the probability. Moreover, it performs optimisation and error estimation simultaneously, but requires large computation time depending on the number of free parameters. For each model, we evaluate the equivalent hydrogen column density (\ensuremath{N_{\mathrm{H}}}\xspace), the flow- ($v$) and turbulence-velocities ($\sigma_{\rm v}$ or velocity dispersion) of the absorber, together with the temperature ($k_{\rm B}T$) or the ionisation parameter ($\xi$), depending on the model.\\ \begin{figure*} \centering \includegraphics[width=.8\textwidth]{plot/all_lines.pdf} \caption{High ionisation lines detected in the spectra of 4U~1820-30\xspace. Here, the spectrum is shown in unit of transmittance: the observed counts are divided by the underlying continuum together with the cold and warm absorption. For clarity, we do not display all the datasets. 
We superimpose the \emph{ism} and \emph{photo} models (in red and blue, respectively) obtained with the Bayesian parameter inference.} \label{fig:all_lines} \end{figure*} \subsection{\textit{C}-statistic analysis} \label{sec:cstat} For each dataset, we apply separately the \textit{ism} and \textit{photo} models to the broadband continuum model. We constrain all the free parameters of the broadband fit to within $1\sigma$ uncertainties. We leave only the normalisations of the blackbody and Comptonisation components free to vary (see Table \ref{tab:broadband}). Then, we fit the \texttt{hot} and \texttt{xabs} parameters separately for each dataset and we report their best values in Table \ref{tab:xabs_hot} with the relative uncertainties. \\ For each epoch, a collisionally ionised gas with an average temperature of $0.16\pm0.01$ keV ($(1.9\pm0.1) \times 10^6$ K) better represents (with a total $\Delta C \rm{stat} = 312$) the high-ionisation absorption lines than a photoionised gas with an average ionisation parameter $\log \xi = 1.72\pm0.05$. We do not observe any significant flow velocity ($v$) associated with the absorber. The estimates of $v$ are, indeed, consistent within the uncertainties with a static gas. For the RGS and LETG observations, we fix the turbulent velocities to their default value of $100$ km/s to tackle the degeneracy between the fit parameters (in particular between $\sigma_{\rm v}$ and $N_{\rm H}$). Only with the HETG are we able to resolve the turbulent and flow velocities of the absorber. Moreover, the tabulated parameters, such as the temperature and \ensuremath{N_{\mathrm{H}}}\xspace, do not show significant variability among the different epochs, as shown in Figure \ref{fig:variance}.\\ Furthermore, we fit the high-ionisation lines with both the \textit{ism} and \textit{photo} models to test the possible coexistence of photoionised and collisionally ionised gases along the line of sight. In this scenario the $ism$-model dominates the fit and the contribution of the photoionised gas is negligible for each epoch analysed. Statistically, the fit does not improve significantly enough to support the coexistence of the two absorbers. For the $ism+photo$ model, we find a $C$-stat/dof = 27032/21906 with a tiny improvement in the $C$-stat ($\Delta C = 8 $) with respect to the fit with the $ism$ model alone. \begin{figure} \centering \includegraphics[width=\columnwidth]{plot/variability.pdf} \caption{Temperature and the hydrogen column density of the $ism$-model versus the $0.5-2$ keV unabsorbed flux for each dataset using the $C$-statistic fitting approach. The horizontal red line represents the inverse-variance weighted average and the coloured band indicates the $3\sigma$ confidence band.
Only RGS 008411 (the observation with a flux $\sim2.2\times 10^{-9}\ {\ensuremath{\rm{erg\ cm}^{-2}\ \rm{s}^{-1}}}\xspace$), shows a deviating column density value.} \label{fig:variance} \end{figure} \begin{table*}[t] \caption{Comparison of the photo-ionised and collisional-ionised best-fit models parameters obtained through the $C$-statistic analysis.} \label{tab:xabs_hot} \tiny \centering \begin{tabular}{ c c c c c c @{\hspace{0mm}}c@{\hspace{12mm}} c c c c c } \hline\hline \noalign{\vskip 0.75mm} \multirow{3}{*}{dataset} & \multicolumn{4}{c}{{\it ism} model (\texttt{hot})} & \multirow{3}{*}{$C$stat/dof} & & \multicolumn{4}{c}{{\it photo} model (\texttt{xabs})} & \multirow{3}{*}{$C$stat/dof} \\\cline{2-5}\cline{8-11} \noalign{\vskip 0.5mm} & \ensuremath{N_{\mathrm{H}}}\xspace & $k_{\rm B}T$ & $\sigma_{\rm v}$ & $v$ & & & \ensuremath{N_{\mathrm{H}}}\xspace & $\log \xi$ & $\sigma_{\rm v}$ & $v$ & \\ & $10^{19}\rm cm^{-2}$ & keV & km/s & km/s & & & $10^{19}\rm cm^{-2}$ & & km/s & km/s & \\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{12}{c}{RGS/XMM{\it-Newton}\xspace} \\ \hline \noalign{\vskip 0.75mm} 008411 & $ 8\pm1$ & $0.14\pm0.01$ & $ 100\ (f) $ & $-60\pm40 $ & $\bf5379/3697$& & $ 17\pm2$ & $1.86\pm0.01 $ & $100\ (f) $ & $-70^{+30}_{-70} $ & $\bf5429/3697$\\ \noalign{\vskip 0.5mm} 055134 & $ 5.5\pm0.7$ & $0.16\pm0.01$ & $ 100\ (f)$ & $50^{+40}_{-90} $ & $\bf5370/3918$& & $ 7\pm 1$ & $1.72\pm0.03 $ & $100\ (f) $ & $50\pm^{+40}_{-80}$ & $\bf5393/3918$\\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{12}{c}{LETG/{\it Chandra}\xspace} \\ \hline \noalign{\vskip 0.75mm} 98 & $ 5\pm1$ & $0.17\pm0.02$ & $ 100\ (f) $ & $20\pm60 $ & $\bf1994/1839$ & & $ 5\pm1$ & $1.66\pm0.11 $ & $100\ (f) $ & $70\pm{80} $ & $\bf2013/1839$ \\ \noalign{\vskip 0.5mm} 12444 & $ 3.6\pm0.4$ & $0.16\pm0.01$ & $ 100\ (f) $ & $-190\pm40 $ & $\bf1004/866$ & & $ 4.2\pm0.6$ & $1.57\pm0.09 $ & $100\ (f) $ & $-160\pm{50} $ & $\bf1042/866$ \\ \noalign{\vskip 0.75mm} \hline \rowcolor{Gray} \multicolumn{12}{c}{HETG/{\it Chandra}\xspace} \\ \hline \noalign{\vskip 0.75mm} 2001 & $ 7\pm2$ & $0.16\pm0.02$ & $ 40\pm30 $ & $-60\pm30 $ & $\bf7328/6624$ & & $ 10\pm2$ & $1.74\pm0.02 $ & $50\pm20 $ & $-60\pm30 $ & $\bf7362/6624$ \\ \noalign{\vskip 0.5mm} 2006 & $ 4.8\pm0.4$ & $0.16\pm0.01$ & $ 150\pm20$ & $-10^{+10}_{-20} $ & $\bf5962/4984$ & & $ 6.5\pm0.6$ & $1.72\pm0.01 $ & $130^{+20}_{-100} $ & $20^{+10}_{-40} $ & $\bf6113/4984$ \\ \noalign{\vskip 0.75mm} \hline \noalign{\vskip 0.75mm} Mean & $5.7\pm0.9$ & $0.16\pm0.01$ & $95\pm25$ & $-40^{+40}_{-50}$ & & & $8\pm1$ & $1.72\pm0.05$ & $90^{+20}_{-60}$ & $-25^{+40}_{-60}$ & \\ \noalign{\vskip 0.75mm} \hline \end{tabular} \tablefoot{We indicate with $(f)$ the frozen parameters of the model. The total $C$-stat/dof for the $ism$-model is $27040/21928$ whereas for the $photo$-model is $27352/21928$.} \end{table*} \subsection{Bayesian analysis} \label{sec:bayesian} \begin{figure*}[h] \centering \includegraphics[width=.75\textwidth]{plot/hot_params.pdf} \caption{The posterior distribution from MultiNest is plotted with two-dimensional histograms comparing each pair of free parameters \texttt{hot}-model (\ensuremath{N_{\mathrm{H}}}\xspace, $k_{\rm B}T$, $v$ and $\sigma_{v}$). The contours indicate the $1 \sigma$, $2 \sigma$, $3 \sigma$ and $4 \sigma$ confidence intervals in two dimensional space. The hydrogen column density \ensuremath{N_{\mathrm{H}}}\xspace is expressed in unit of $10^{24} cm^{-2}$. 
} \label{fig:hot_dist} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{plot/xabs_xi_params.pdf} \caption{The posterior distribution from MultiNest is plotted with two-dimensional histograms comparing each pair of free parameters of the \texttt{xabs}-model (\ensuremath{N_{\mathrm{H}}}\xspace, $v$, $\sigma_{v}$ and the $\log \xi$ for each observation. The number of observation is indicated in the distribution panel). The contours indicate the $1 \sigma$, $2 \sigma$, $3 \sigma$ and $4 \sigma$ confidence intervals in two dimensional space. The hydrogen column density \ensuremath{N_{\mathrm{H}}}\xspace is expressed in unit of $10^{24} cm^{-2}$.} \label{fig:xabs_dist} \end{figure*} For the spectral analysis using the Bayesian approach \citep{Bayes63}, we adopt the MultiNest algorithm \citep[version 3.10,][]{Feroz09,Feroz13}. This method is applicable to low-dimensional problems of X-ray spectral modelling. Its strength is the capability to identify local maxima without difficulty computing points of equal weighting similar to a Markov Chain. It provides values, error estimates and marginal probability distributions for each parameter. \\ To connect the X-ray fitting code {\textsc{Spex}}\xspace with the Bayesian methodology, we create a Python package \textsc{BaySpex}\footnote{\textsc{BaySpex} is publicly available on \url{ZenodoLINK}.}, which is a simplified and adapted version of the \texttt{PyMultiNest} and Bayesian X-ray Analysis (\texttt{BXA}) developed by \cite{Buchner14}. This script is adapted to PYSPEX, the python interface to {\textsc{Spex}}\xspace. The logic behind the script is that MultiNest suggests parameters on a unit hypercube which are translated into model parameters, readable by {\textsc{Spex}}\xspace, using the prior definitions. At this point, \textsc{BaySpex} computes a probability using the {\textsc{Spex}}\xspace likelihood implementation, which is passed back to the MultiNest algorithm.\\ \subsubsection{Parameter inference} \label{sec:parameter_inf} Similar to the $C$-statistic approach, we separately multiply the broadband continuum fit with the \emph{ism} and \emph{photo} models. To minimise the number of the free parameters in the fit, we freeze the broadband continuum shape calculated in Section \ref{sec:continuum}. Furthermore, we impose the parameters of the $ism$ model to be the same for all the datasets. Instead, for the $photo$ model, we fit separately the ionisation parameters of the different epochs since each dataset uses a different SED. We couple the velocities and the hydrogen column density to keep the model simple. The Bayesian data analysis is indeed limited by the computation power and time. Moreover, these arrangements are justified by the previous $C$-statistic analysis. The \texttt{hot} and \texttt{xabs} components do not modify the shape of the broadband model and their parameters are constant along all the epochs (see Table \ref{tab:xabs_hot}).\\ In Figures \ref{fig:hot_dist} and \ref{fig:xabs_dist}, we display, respectively, the normalised probability distributions of the \textit{ism} and \textit{photo} model parameters computed through the Bayesian parameter inference. The estimate values with their errors are reported on the top of each panel. We also illustrate the two-dimensional distribution of the probability pairing the free parameters with each other. In both fits, we do not observe any strong covariance between the free parameters of the fit. 
Only the ionisation parameters ($\xi$) and the hydrogen column density of the absorber show a weak correlation, visible in the $\log \xi - \log N_{\rm H}$ plots of Figure \ref{fig:xabs_dist}. \\ We also test the coexistence of collisionally ionised and photoionised gases along the line of sight by fitting the two models together (\emph{ism+photo} model). Since the Bayesian approach explores the full parameter space, it can return more robust results than the $C$-statistic approach. We show the probability distribution of the parameters for these models in Figure \ref{fig:hot_xabs_dist}. As an example, we show the ionisation parameter of observation 008411. While the parameters of the \texttt{hot} component (the four upper distributions) are well defined, the \texttt{xabs} quantities (the four lower distributions) are not constrained. Only the relative column density shows a peaked probability distribution, indicating that the fraction of the possible photoionised gas contributing to the high-ionisation lines is very small (less than 0.002). \\ Moreover, we investigate whether a multi-temperature interstellar hot gas (\emph{ism+ism} model) or multiple photoionised gases (\emph{photo+photo} model) can better describe the absorption features. In both cases we find that a single gaseous component dominates the fit and the contribution of the other can be considered negligible. \\ To understand which candidate among the \emph{ism}, \emph{photo}, \emph{ism+photo}, \emph{ism+ism}, and \emph{photo+photo} models better fits the high-ionisation lines of 4U~1820-30\xspace, we carry out the Bayesian model comparison presented in the following subsection. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{plot/hot_xabs_xi_params.pdf} \caption{The posterior distribution from MultiNest is plotted with two-dimensional histograms comparing each pair of free parameters of the \texttt{hot}-model (\ensuremath{N_{\mathrm{H}}}\xspace, $k_{\rm B}T$, $v$ and $\sigma_{\rm v}$), in the four upper distributions, together with the free parameters of the \texttt{xabs}-model (\ensuremath{N_{\mathrm{H}}}\xspace, $v$, $\sigma_{v}$ and $\log \xi$; for clarity, we show only the $\log \xi$ of observation 008411) in the four lower distributions. The contours indicate the $1 \sigma$, $2 \sigma$, $3 \sigma$ and $4 \sigma$ confidence intervals in two-dimensional space. The hydrogen column densities \ensuremath{N_{\mathrm{H}}}\xspace are expressed in units of $10^{24}\ \rm{cm}^{-2}$.} \label{fig:hot_xabs_dist} \end{figure*} \subsubsection{Model comparison} Bayesian model comparison is performed on the basis of the posterior probability of each model given the data. Using Bayes' rule, this posterior probability is proportional to the prior probability of the model, $p(M)$, multiplied by the likelihood of the data ($y$) given the model, $p(y|M)$, which is known as the \textit{Bayesian evidence} ($Z$). The choice between two models, $M_1$ and $M_2$, can be made on the basis of the ratio of their Bayesian evidences. This ratio is known as the Bayes factor \citep[$B_{1,2}$,][]{Wasserman00,Liddle07,Trotta08,Knuth14}. Since we assign equal prior probabilities to the different models in our analysis, the model comparison reduces to the Bayes factor: \[ B_{1,2} = \frac{Z_1}{Z_2} = \frac{p(y|M_1)}{p(y|M_2)} \ . \] A large value of this ratio gives support for $M_1$ over $M_2$. Specifically, we adopt the scale of \cite{Jeffreys61} and we rule out models which show $B_{1,2}>30$ ($\log B_{1,2}>1.5$). A Bayes factor above 30 represents, indeed, ``very strong evidence'' against $M_2$.
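As a minimal numerical sketch of this decision rule (with made-up evidence values, not the outputs of our fits):
\begin{verbatim}
import numpy as np

def log10_bayes_factor(lnZ1, lnZ2):
    """log10 of the Bayes factor B_{1,2} from the natural-log evidences
    returned by nested-sampling codes such as MultiNest."""
    return (lnZ1 - lnZ2) / np.log(10.0)

# Hypothetical nested-sampling outputs (natural-log evidences)
lnZ_M1, lnZ_M2 = -13520.0, -13527.0
print(f"log10 B_12 = {log10_bayes_factor(lnZ_M1, lnZ_M2):.1f}")
# On the Jeffreys (1961) scale adopted here, log10 B_12 > 1.5 (B > 30)
# constitutes very strong evidence against model M_2.
\end{verbatim}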
The strength of this method is that it does not require models to be nested (i.e. $M_{2}$ as a special case of $M_{1}$) nor does it make assumptions about the parameter space or the data. Moreover, it automatically introduces a penalty for including too much model complexity, guarding against overfitting the data \citep{Kass95}. \\
Bayesian statistics also includes an approximation to the Bayesian evidence known as the Bayesian Information Criterion \citep[BIC,][]{Schwarz78}, which is defined as ${\rm BIC} = -2\ln \mathcal{L} + k\ln N$, where $\mathcal{L}$ is the maximum likelihood, $k$ the number of parameters of the model and $N$ the number of data points used in the fit.\\
An alternative approach to model comparison is provided by information-theoretic methods, pioneered by \cite{Akaike74} with his Akaike Information Criterion (AIC), which is defined as ${\rm AIC} = -2\ln \mathcal{L} + 2k$. Similarly to Bayesian model selection, AIC and BIC can be used to decide which model is more probable to have produced the data: the most probable model corresponds to the fit with the smallest AIC or BIC value, respectively. Both AIC and BIC include a penalty term by which more complex models are disfavoured for their additional number of parameters. A description geared to astronomers can be found in \cite{Takeuchi00} and \cite{Liddle04}, while the full statistical procedure can be found in \cite{Burnham02}. \\
We compare all the models presented in Section \ref{sec:parameter_inf} (\emph{ism}, \emph{photo}, \emph{ism+photo}, \emph{ism+ism}, and \emph{photo+photo}) using the Bayesian evidence. We list the results of the model comparison in Table \ref{tab:comparison}, where we also report the relative AIC and BIC values. An interstellar hot gas with a single temperature is the favoured scenario for describing the high-ionisation lines observed in the spectra of 4U~1820-30\xspace. The result of the Bayesian analysis is, therefore, in agreement with the outcome of the $C$-statistic (see Section \ref{sec:cstat}).
\begin{table} \caption{Model selection results for 4U 1820-30.} \label{tab:comparison} \centering \begin{tabular}{ c c c c } \hline\hline \noalign{\vskip 0.75mm} Model & $\log B$ & $\Delta \rm{AIC}$ & $\Delta \rm{BIC}$ \\ \noalign{\vskip 0.75mm} \hline $ism$ & $0.0$ & $0$ & $0$ \\ $ism+photo$ & $+1.3$ & $+19$ & $+40$ \\ $ism+ism$ & $+25.5$ & $+156$ & $+204$ \\ $photo$ & $+77.3$ & $+351$ & $+363$ \\ $photo+photo$ & $+90.1$ & $+389$ & $+421$ \\ \noalign{\vskip 0.75mm} \hline \end{tabular} \tablefoot{The last three columns show the model comparison based on log-evidence, AIC and BIC. The log-evidence is normalised to the maximum value found, whereas both AIC and BIC are normalised to the minimum value, which indicates the preferred model.
Models with $\log B > 1.5$ or $\Delta AIC (\Delta BIC) >10$ can be ruled out as a plausible model that generates the data \citep{Jeffreys61,Burnham02}.} \end{table} \begin{table*}[t] \caption{Parameter values estimated through the Bayesian analysis.} \label{tab:xabs_hot_bayesian} \tiny \centering \begin{tabular}{ c c c c c @{\hspace{0mm}}c@{\hspace{5mm}} c c c c c c } \hline\hline \noalign{\vskip 0.75mm} \multirow{4}{*}{model} & \multicolumn{4}{c}{\texttt{hot}} & & \multicolumn{6}{c}{\texttt{xabs}} \\ \cline{2-5}\cline{7-12} \noalign{\vskip 0.5mm} & \ensuremath{N_{\mathrm{H}}}\xspace & $k_{\rm B}T$ & $\sigma_{\rm v}$ & $v$ & & \ensuremath{N_{\mathrm{H}}}\xspace & $\log \xi$ & $\log \xi$ & $\log \xi$ &$\sigma_{\rm v}$ & $v$ \\ & $10^{19}\rm cm^{-2}$ & keV & km/s & km/s & & $10^{19}\rm cm^{-2}$ & $008441$ & $98$ & $2001$ & km/s & km/s \\ & & & & & & & $0055134$ & $12444$ & $2006$ & & \\ \noalign{\vskip 0.75mm} \hline \noalign{\vskip 0.75mm} $ism$ & $3.9\pm0.3$ & $0.171\pm0.004$ & $132\pm16$ & $-41\pm14$ & & -- & -- & -- & -- & -- & -- \\ \noalign{\vskip 0.95mm} \multirow{2}{*}{$photo$} & \multirow{2}{*}{--} & \multirow{2}{*}{--} & \multirow{2}{*}{--} & \multirow{2}{*}{--} & & \multirow{2}{*}{ $4.1\pm0.3$ }& $1.79\pm0.04$ & $1.69\pm0.06$ & $1.76\pm0.04$ & \multirow{2}{*}{$90\pm26$} & \multirow{2}{*}{$-34\pm18$} \\ & & & & & & & $1.79\pm0.07$ & $1.70\pm0.07$ & $1.76\pm0.02$ & & \\ \noalign{\vskip 0.95mm} \multirow{2}{*}{$ism+photo$} & \multirow{2}{*}{$3.5\pm0.3$} & \multirow{2}{*}{$0.171\pm0.004$} & \multirow{2}{*}{$133\pm15$} & \multirow{2}{*}{$-42\pm12$} & & \multirow{2}{*}{$4^{+6}_{-3}\times10^{-3}$} & $1.1\pm1.2$ & $1.1\pm1.3$ & $1.0\pm1.3$ & \multirow{2}{*}{$100\pm60$} & \multirow{2}{*}{$-20\pm320$} \\ & & & & & & & $0.9\pm1.3$ & $1.1\pm1.2$ & $1.1\pm1.2$ & & \\ \noalign{\vskip 0.95mm} \multirow{2}{*}{$ism+ism$} & $<3.5$ & $0.3^{+6.8}_{-0.16}$ & $150^{+530}_{-29}$ & $-41^{+228}_{-125}$ & & \multirow{2}{*}{--} & \multirow{2}{*}{--} & \multirow{2}{*}{--} &\multirow{2}{*}{--} &\multirow{2}{*}{--} &\multirow{2}{*}{--} \\ \noalign{\vskip 0.25mm} & $<3.5$ & $0.18^{+6.37}_{-0.01}$ & $151^{+520}_{-29}$ & $-41^{+225}_{-123}$ & & & & & \\ \noalign{\vskip 0.95mm} \multirow{4}{*}{$photo+photo$} & \multirow{4}{*}{--} & \multirow{4}{*}{--} & \multirow{4}{*}{--} & \multirow{4}{*}{--} & & \multirow{2}{*}{$<3.8$} & $1.8\pm0.9$ & $1.7\pm0.3$ & $1.8\pm0.5$ &\multirow{2}{*}{$140^{+550}_{-50}$} & \multirow{2}{*}{$-30^{+300}_{-150}$} \\ & & & & & & & $1.8\pm1.2$ & $1.7\pm1.0$ & $1.7\pm0.6$ & \\ \noalign{\vskip 0.25mm} & & & & & & \multirow{2}{*}{$<4.0$} & $1.8\pm0.8$ & $1.7\pm0.6$ & $1.8\pm0.5$ & \multirow{2}{*}{$140^{+510}_{-40}$} & \multirow{2}{*}{$-40^{+170}_{-160}$} \\ & & & & & & & $1.8\pm1.0$ & $1.6\pm0.9$ & $1.8\pm0.8$ & \\ \noalign{\vskip 0.75mm} \hline \end{tabular} \end{table*} \section{Discussion} \label{sec:discussion} In order to determine the origin of the high-ionisation absorption lines observed in the spectra of 4U~1820-30\xspace, we fit them adopting both photo-ionisation and collisional-ionisation models. In Figure \ref{fig:all_lines}, we compare the fits of the two models obtained through the Bayesian parameter inference. The \emph{ism} model reproduces all the lines better than the \emph{photo} model, in particular the \ion{Fe}{xvii}\xspace which is detected with a significance of $6.5\sigma$. This line has been previously detected by \cite{Yao06a} and \cite{Luo14}. 
Whereas the former propose an interstellar origin for this line, the latter conclude that the majority of the \ion{Fe}{xvii}\xspace absorbing gas arises from gas intrinsic to the source. In our analysis, we notice that the \emph{photo} model does not describe the \ion{Fe}{xvii}\xspace line (see Figure \ref{fig:all_lines}), since the ionisation fraction of this ion predicted by the photoionised model is very low.\\
In Figure \ref{fig:both_col}, we plot the ionisation fractions of the ionised atoms for which we detected lines in the spectrum. In particular, on the left panel we show the relative column densities for the \emph{ism} model, whereas on the right we show the respective column densities for the \emph{photo} model, calculated for observation 008411. With the dashed vertical lines we indicate the best values of $k_{\rm B}T$ and $\xi$ obtained with the Bayesian analysis. The different distributions of the column densities of the ions are crucial to select which model best fits the spectral features. For a photoionised gas with a mild photoionisation parameter ($\log \xi \sim 1.8$), iron is divided among multiple ionisation states. Consequently, the ionisation fraction of each individual Fe ion becomes low. For a collisionally ionised gas, instead, the column densities are distributed differently: the ionisation fraction is more peaked at the preferred ionisation state, which is determined by the temperature of the gas. Therefore, the detection of the \ion{Fe}{xvii}\xspace line suggests an interstellar nature for the high-ionisation lines. \\
A photoionisation origin is, moreover, statistically ruled out (see Table \ref{tab:comparison}). All three model comparison criteria considered indicate the \emph{ism} as the preferred model. The evidence against the \emph{ism+photo} model is particularly strong with the BIC criterion, which severely penalises complex models. In Table \ref{tab:xabs_hot_bayesian} we list the values of the parameters obtained with the Bayesian analysis. In the following, we discuss in detail the results obtained for the \emph{ism} and \emph{photo} models.\\
\begin{figure*} \centering \includegraphics[width=\textwidth]{plot/both_columns.pdf} \caption{Ionisation fractions of oxygen, neon, magnesium and iron ions as a function of the electron temperature for collisional ionisation equilibrium plasma (CIE, $ism$-model, on the left panel) and as a function of the ionisation parameter for a gas in photoionisation equilibrium (PIE, $photo$-model, on the right panel). The vertical dashed lines indicate the best values for $k_{\rm B}T$ and $\xi$ (observation 008411) obtained through the Bayesian analysis.} \label{fig:both_col} \end{figure*}
\subsection{\it Photoionisation origin}
Both the Bayesian and $C$-statistic approaches show a lower statistical significance for the photoionisation modelling with respect to the collisional-ionisation one. The absorber is found to have a small outflow velocity, which does not support the presence of a disc wind. This velocity has to be corrected for the solar-system barycentric radial velocity of the satellite. During all the HETG observations (obsids 1021, 1022, 6633, 6634, and 7032) this velocity was $v_{\rm{bar}}\sim30$ km/s, which explains the low outflow velocity observed. Due to its high energy resolution, the HETG instrument puts more stringent constraints on a possible flow velocity.\\
Moreover, the presence of a local ionised gas, such as a disc atmosphere, would be difficult to justify given the physical parameters of the source.
Through the ionisation parameter, we can calculate the density, $n_{\rm{H}}$, if we know where the plasma is located with respect to the X-ray source. Taking an ionising luminosity of $L_{\rm ion} = 8 \times 10^{37}\ \rm erg\ s^{-1}$ \citepalias[computed using the SED of][for the range 1-1000 Ryd]{Costantini12}, a distance $r<1.3 \times 10^{10}$ cm \citep[which corresponds to the size of the system,][]{Stella87} and an ionisation parameter of $\xi = 62\ \rm erg\ s^{-1}\ cm$ (Section \ref{sec:bayesian}), the density of the plasma must be $n>7\times 10^{15}\ \rm{cm}^{-3}$, several orders of magnitude larger than the density of the typical disc atmosphere of a low-mass X-ray binary system \citep[e.g.,][]{vanPeet09}. This large density would also imply that the filling factor, $f=N_{\rm H}/(n r)$, is extremely low, $f < 4\times10^{-7}$, for $N_{\rm H} = 3.9 \times 10^{19}\ \rm{cm}^{-2}$. \cite{Cackett08} argued that a proper physical solution for such a small filling factor is the presence of dense structures above the disc, like high-density blobs, produced by thermal instabilities. For example, dense clumps, possibly part of the accretion bulge observed in dipping sources, have been observed \citep[e.g.,][]{Psaradaki18}. However, their density is $n_{H} \sim 10^{13}\ \rm{cm}^{-3}$, more than two orders of magnitude lower than the density expected for a possible locally photoionised material. \\
Another argument against the photoionisation origin is the lack of variability of the lines observed among the different epochs \citep[Figure \ref{fig:variance} and][]{Cackett08}.\\
However, the fact that the collisional-ionisation model best represents the ionised lines of the spectra may not exclude the existence of a second component intrinsic to the source. Thus, to test the possible presence of this photo-ionised gas besides the hot interstellar plasma, we compute a fit using both models. Our analysis, in particular the $C$-statistic, shows a lack of a significant contribution by photo-ionised gas towards the source for the epochs covered by the observations. Furthermore, the Bayesian parameter inference shows a negligible column density of the photo-ionised gas (see Figure \ref{fig:hot_xabs_dist}).\\
A question that arises is why we do not detect any significant absorption lines from a photoionised wind/atmosphere intrinsic to 4U~1820-30\xspace. Considering the SED of the ionisation source and the small size of the system, the disc/atmosphere could be fully ionised, and it would then be impossible to detect any absorption lines \citep{Stella87,Futamoto04}. The inclination angle of the system can also be a crucial factor for the non-detection of photoionised lines. \cite{DiazTrigo16} showed that absorbing photoionised plasma is preferentially detected in systems with a relatively high ($i > 50{\ensuremath{^{\circ}}}\xspace$) inclination. The lower inclination of the 4U~1820-30\xspace system \citep[$i=43{\ensuremath{^{\circ}}}\xspace^{+8}_{-9}$,][]{Anderson97} may imprint absorption lines too weak to be detected, as our line of sight would pass through a thinner layer of gas. \\
\subsection{\it Interstellar origin}
A single-temperature hot gas can describe the absorption lines well. In particular, we observe a gas with a temperature of $\sim 1.98\times10^6$ K, which is consistent with the values obtained by previous works \citep[e.g.,][]{Yao06a} and with the temperature observed for several Galactic low-mass X-ray binaries \citep{Yao05}.
Furthermore, we tested for the presence of multiple-temperature gas by adding a supplementary \texttt{hot} component. One may expect variations in the physical, and possibly chemical, properties of the hot gas along the line of sight to 4U~1820-30\xspace, which crosses several bubbles, such as the Local Bubble around the Sun \citep[with $T\sim10^6$ K,][]{Kuntz00} and the Loop I Bubble \citep[with $T\sim3.5\times10^{6}$ K,][]{Miller08}. However, from our modelling we do not find multiple gases with different temperatures. The opacity of the gas contained in the local bubbles is too low to be detected in the spectrum. Therefore, for the line of sight towards 4U~1820-30\xspace, we assume a single hot plasma component, which is consistent with the scenario suggested by \cite{Hagihara11}. In addition, this uniform hot coronal gas is also in agreement with the presence of a hot and thick interstellar disc, as suggested by extragalactic source observations \citep{Yao09,Hagihara10}. \\
Through the Bayesian data analysis we are able to evaluate the velocity dispersion of the hot coronal gas, $\sigma_{\rm v}= 132\pm16\ \ensuremath{\mathrm{km\ s^{-1}}}\xspace$. Constraining this quantity is not trivial and a proper collisional ionisation equilibrium model is necessary. \cite{Yao06b} found a dispersion velocity\footnote{\cite{Yao06b} defined the dispersion velocity ($b_{v}$) as $b_{v} = \sqrt{2}\cdot\sigma_{v}$.} between 117 \ensuremath{\mathrm{km\ s^{-1}}}\xspace and 262 \ensuremath{\mathrm{km\ s^{-1}}}\xspace for the hot gas along the line of sight towards 4U~1820-30\xspace. Our velocity dispersion estimate is also in agreement with the dispersion velocities observed in the \ion{O}{vi}\xspace survey ($\sigma_{\rm v} = 50-200\ \ensuremath{\mathrm{km\ s^{-1}}}\xspace$) carried out with the Far Ultraviolet Spectroscopic Explorer, {FUSE}\xspace \citep{Otte06}. The \ion{O}{vi}\xspace line is often used as a tracer of gas with a peak temperature of $T\approx3\times10^5$ K. Such intermediate-temperature gas is expected primarily at the interface between the cool/warm clouds and the hot coronal gas \citep{Sembach03}. Finally, similar values of velocity dispersion and flow velocity have also been found by \cite{Luo18}, who analysed the \ion{O}{vii}\xspace absorption line. \\
For a collisionally ionised gas with a temperature of $T\sim 1.98\times10^6$ K, the expected thermal broadening is $\sigma_{\rm v} = 0.0321\sqrt{T} = 43.5 \ \ensuremath{\mathrm{km\ s^{-1}}}\xspace$. This value is too low to explain the observed velocity dispersion. This velocity cannot be accounted for by differential Galactic rotation either. The lines are, indeed, far wider than expected if the absorption came from a smoothly distributed ISM corotating with the disc of the Milky Way \citep{Bowen08}. The observed velocity dispersion supports, instead, the picture of a turbulent hot coronal gas stirred up by shock-heated gas from multiple supernova explosions. The numerical simulations of \cite{deAvillez05} show that hot gas arises in bubbles around supernovae and is then sheared through turbulent diffusion, which destroys the bubbles and stretches the hot absorbing gas into filaments and vortices that dissipate with time. Furthermore, the hot gas is involved in systematic vertical motion as it streams to the halo at a speed of $100-200$ \ensuremath{\mathrm{km\ s^{-1}}}\xspace. This kinetic energy can be transformed into disordered, turbulent motions, resulting in higher turbulent velocities near the halo \citep{Kalberla98}.
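The line-width budget discussed above can be verified with a short calculation using the best-fit values quoted in the text. This is a simple sketch; the quadrature decomposition assumes that thermal and turbulent motions are independent.
\begin{verbatim}
# Quick check of the line-width budget, using the 0.0321*sqrt(T)
# scaling and the best-fit values quoted in the text.
import numpy as np

T = 1.98e6                               # K, best-fit temperature
sigma_thermal = 0.0321 * np.sqrt(T)      # km/s, purely thermal width
sigma_obs, sigma_obs_err = 132.0, 16.0   # km/s, Bayesian estimate

# the thermal term accounts for only a small fraction of the observed
# width; the remainder is attributed to turbulent (non-thermal) motion
sigma_turb = np.sqrt(sigma_obs**2 - sigma_thermal**2)
print(f"thermal: {sigma_thermal:.0f} km/s, turbulent: {sigma_turb:.0f} km/s")
\end{verbatim}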
Comparing the hydrogen column densities of the different components detected along the line of sight of the source, we obtain the following gas mass fractions: cold $\sim 89\%$, warm $\sim 8\%$ and hot $\sim 3\%$. These values are very similar to the mass fractions found by \cite{Gatuzz18} for 4U 1820-30 ($85\%$, $10\%$ and $5\%$) and they agree with findings for different lines of sight towards Galactic sources. For example, \cite{Pinto13}, analysing the high-resolution X-ray spectra of a sample of Galactic X-ray binaries, found that, on average, $\sim 90\%$, $\sim 7\%$ and $\sim 3\%$ of the ISM mass corresponds to cold, warm and hot plasma, respectively.
\subsection{Abundances of the hot coronal gas}
We further investigate the abundances of oxygen, neon, magnesium and iron for the hot phase of the interstellar medium. For these elements, we detect high-ionisation lines. Using the Bayesian parameter inference we obtain abundances close to their protosolar values: $A_{\rm O} = 0.90 \pm 0.06\ A_{\odot}$, $A_{\rm Ne} = 1.16 \pm 0.08\ A_{\odot}$, $A_{\rm Mg} = 1.9 \pm 0.4\ A_{\odot}$, and $A_{\rm Fe} = 1.07 \pm 0.11\ A_{\odot}$, taking as reference the abundances tabulated in \cite{Lodders10}. \\
The hot-phase abundances of oxygen and neon are consistent with the characteristic depletion of these elements in the interstellar medium. Since neon is a noble gas, it is very unlikely to be depleted into dust grains in any of the medium phases, and oxygen shows low depletion values in the interstellar medium \citep{Jenkins09}. Iron and magnesium are, instead, highly depleted from the gas phase in the interstellar medium \citep[e.g.][]{Jenkins09,Rogantini18,Rogantini19,Rogantini20}. The depletion of iron, in particular, remains high even in harsh environments \citep{Whittet02}. However, the observed abundances of magnesium and iron suggest that, in the hot coronal gas along the line of sight towards 4U~1820-30\xspace, these two elements are mostly present in the gas phase. This supports the scenario in which all the interstellar dust in this very hot phase ($T\gtrsim10^{5.5}$ K) is destroyed by frequent shocks during the dust grain processing in the interstellar medium. This interpretation may not be entirely unique. Reports of an overabundance of heavier elements relative to oxygen in neutral matter toward the Galactic centre would elevate the contribution of these elements in the hot phase as well. A comparison with different lines of sight, especially towards sources at high latitudes, is therefore essential before drawing conclusions.
\section{Summary} \label{sec:summary}
Motivated by the need to determine the origin of the high-ionisation absorption features present in the spectrum of the ultracompact system 4U~1820-30\xspace, we systematically analyse all the observations present in the {\it Chandra}\xspace and XMM{\it-Newton}\xspace archives: 5 {\it Chandra}\xspace/HETG spectra together with 2 {\it Chandra}\xspace/LETG and 2 XMM{\it-Newton}\xspace/RGS observations. We study the soft X-ray energy band covering the main high-ionisation absorption lines: namely, \ion{Mg}{xi}\xspace, \ion{Ne}{ix}\xspace, \ion{Fe}{xvii}\xspace, \ion{O}{vii}\xspace He$\alpha$, \ion{O}{vii}\xspace He$\beta$, \ion{O}{viii}\xspace Ly$\alpha$ and \ion{O}{viii}\xspace Ly$\beta$. We adopt realistic plasma models to fit the multiple lines simultaneously: in particular, we use the {\textsc{Spex}}\xspace model \texttt{xabs} to reproduce a photo-ionised gas and \texttt{hot} for a thermal gas in collisional ionisation equilibrium.
A Bayesian framework is used to model the spectral absorption features. Bayesian data analysis provides a robust approach to infer the model parameters and their uncertainties, and offers a solid basis for model comparison.\\
Both the $C$-statistic and Bayesian data analysis show the presence of hot coronal gas with a temperature of $T\sim1.98 \times 10^{6}\ \textrm{K}\ (k_{B}T=0.171\pm0.004\ \rm{keV})$ along the line of sight towards 4U~1820-30\xspace. This hot interstellar gas is responsible for the high-ionisation absorption lines detected in the spectra. We summarise our main results as follows:
\begin{itemize}
\item The mean centroids of the absorption lines are consistent with their rest-frame wavelengths. The small line-of-sight velocity observed is within the uncertainty due to the dynamic radial velocity of the observer. As previous works already concluded \citep[e.g.,][]{Futamoto04}, an outflowing disc wind can be ruled out as a possible ionised absorber.
\item The lack of variability of the lines throughout the multiple observations also shows that the absorber is independent of the activity of the source, suggesting a non-local origin for the gas.
\item In the spectra of 4U~1820-30\xspace we detect the \ion{Fe}{xvii}\xspace line at $15.012$ {\AA} with a significance of $6.5 \sigma$. This line is best reproduced by a gas in collisional ionisation equilibrium. In contrast, for an absorber photoionised by the source, the contribution of \ion{Fe}{xvii}\xspace is not high enough to justify the strength of the absorption line detected.
\item We constrain the turbulent velocity of the hot gas to be $\sigma_{\rm v}= 132\pm16\ \ensuremath{\mathrm{km\ s^{-1}}}\xspace$, which is likely driven by supernova shock waves.
\end{itemize}
\begin{acknowledgements} We would like to thank the anonymous referee for the useful suggestions. DR, EC, and MM are supported by the Netherlands Organisation for Scientific Research (NWO) through \emph{The Innovational Research Incentives Scheme Vidi} grant 639.042.525. The Space Research Organization of the Netherlands is supported financially by NWO. This research has made use of data obtained from the Chandra Transmission Grating Catalog and archive (\url{http://tgcat.mit.edu}), and software provided by the \emph{Chandra} X-ray Center in the application package CIAO. We also made use of the XMM{\it-Newton}\xspace Scientific Analysis System developed by a team of scientists located at ESA's XMM{\it-Newton}\xspace Science Operations Centre and at the XMM{\it-Newton}\xspace Survey Science Centre. We are grateful to M. Díaz Trigo for a useful discussion on the photoionised atmosphere of 4U~1820-30\xspace. We thank A. Dekker and D. Lena for reading an early draft of the manuscript and for providing valuable comments and suggestions. We also thank I. Psaradaki for her input on the XMM{\it-Newton}\xspace data reduction and J. de Plaa for his support on the development of the program. \end{acknowledgements}
\bibliographystyle{aa}
\section{Appendix} Here we describe the open-domain datasets we use and also list out the hyperparameter values for all of our experiments. \subsection{Open-Domain Datasets} We use SQuAD1.1 to train the synthetic example generator. SQuAD2.0 and Natural Questions are used to fine-tune the machine reading comprehension (MRC) model. Finally, we use Open-NQ to fine-tune the information retrieval (IR) model. \subsubsection{Natural Questions} NQ \cite{kwiatkowski2019natural} is an English MRC benchmark which contains questions from Google users, and requires systems to read and comprehend entire Wikipedia articles. The dataset contains 307,373 instances in the train set, 7,830 examples in the dev set and 7842 in a blind test set. \citeauthor{lee2019latent}~\shortcite{lee2019latent} create an open version of this dataset, called \textit{Open-NQ}, wherein they only keep questions with short answers and discard the given evidence document. This open version contains 79,168, 8,757 and 3,610 examples in the train, dev and test set, respectively. \subsubsection{SQuAD} SQuAD1.1 \cite{rajpurkar2016squad} is an extractive MRC dataset containing questions posed by crowdworkers on a set of Wikipedia articles. All the questions are answerable, with 87,599, 10,570 and 9,533 examples in the train, dev and test set, respectively. SQuAD2.0 \cite{rajpurkar2018know} combines the 100,000+ questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. It contains 130,319, 11,873 and 8,862 examples in the train, dev and test set, respectively. \subsection{Hyperparameters} Tables \ref{tab:hp_syn}, \ref{tab:hp_ir}, \ref{tab:hp_lm}, \ref{tab:hp_mrc} list the hyperparameters for training the example generation, IR, masked language modeling and MRC models, respectively. \begin{table}[h] \small \centering \begin{tabular}{c|c} Hyperparameter & Value \\ \hline Learning rate & 3e-5 \\ Epochs & 3 \\ Batch size & 24 \\ Max source + target sequence length & 1024 \\ \hline \end{tabular} \caption{Hyperparameter settings during training the synthetic example generator on SQuAD1.1.} \label{tab:hp_syn} \end{table} \begin{table}[h] \small \centering \begin{tabular}{c|cc} Hyperparameter & ICT & Adapted DPR \\ \hline Learning rate & 1e-5 & 1e-5 \\ Epochs & 6 & 6 \\ Batch size & 128 & 128 \\ Warm-up steps & 1237 & 1237 \\ Max sequence length & 350 & 350 \\ \hline \end{tabular} \caption{Hyperparameter settings for the IR experiments in fine-tuning the DPR model with different adaption strategies.} \label{tab:hp_ir} \end{table} \begin{table}[ht] \small \centering \begin{tabular}{c|c} Hyperparameter & Value \\ \hline Learning rate & 1.5e-4 \\ Epochs & 8 \\ Batch size & 256 \\ Max sequence length & 512 \\ Masking rate & 0.15 \\ \hline \end{tabular} \caption{Hyperparameter settings during masked language modeling on the CORD-19 collection.} \label{tab:hp_lm} \end{table} \begin{table}[ht] \small \centering \begin{tabular}{c|ccc} Hyperparameter & SQuAD2.0 & SynQ & NQ \\ \hline Learning rate & 3e-5 & 1.6e-5 & 1.6e-5 \\ Epochs & 2 & 1& 1\\ Batch size & 8 & 48 & 48\\ Max sequence length & 384 & 512 & 512 \\ Max question length & 64 & 18 & 18 \\ Document stride & 128 & 192 & 192 \\ \hline \end{tabular} \caption{Hyperparameter settings during MRC fine-tuning of the language model.} \label{tab:hp_mrc} \end{table} \subsection{Datasets} We use documents from the CORD-19 collection \cite{wang2020cord} to create our retrieval corpus. 
We split the abstract and the main body of text of each article into passages that (a)~contain no more than 120 words, and (b)~align with sentence boundaries. This leads to an inference-time retrieval corpus of around 3.5 million passages. To create the passages from which we generate synthetic training examples, we split the CORD-19 collection into larger chunks of at most 288 wordpieces using the BERT tokenizer, which results in about 1.8 million passages. This setup provides longer contexts for diverse example generation and also facilitates faster experiments due to a smaller number of passages. For our MRC experiments, we split the COVID-QA-2019 dataset into dev and test subsets of 203 and 1,816 examples, respectively. Additionally for retrieval and end-to-end QA experiments, we create an open version (Open-COVID-QA-2019 henceforth) wherein duplicate questions are de-duplicated and different answers to the same question are all included in the set of correct answers. This leaves 201 dev and 1,775 test examples in the open version. Finally, we use the COVID-QA-2019 dev set for all hyperparameter tuning experiments, COVID-QA-147 for MRC evaluation, and COVID-QA-111 for evaluating IR and end-to-end QA. \subsection{Synthetic Example Generation} To train the synthetic example generator, we use the MRC training examples of SQuAD1.1 \cite{rajpurkar2016squad}. We fine-tune BART for 3 epochs with a learning rate of 3e-5. Using this model, we generate 5 MRC examples from each of the 1.8 million passages in the CORD-19 collection. For top-$p$ top-$k$ sampling, we use $p$=$0.95$ and $k$=$10$. Since it is a generative model, we see cases where the answer text output by the model is not in the input passage. We discard such examples. Overall, the model generates about 7.9 million synthetic examples. We store these as passage-question-answer triples. \subsection{Neural IR} As our neural IR baseline, we use the DPR-Multi system---a state-of-the-art neural IR model---from the publicly available implementation\footnote{Code available at \href{https://github.com/facebookresearch/DPR}{https://github.com/facebookresearch/DPR}} provided by \citeauthor{karpukhin2020dense}~\shortcite{karpukhin2020dense}. This system comes pre-trained on the open versions of multiple MRC datasets: Natural Questions~\cite{kwiatkowski2019natural}, WebQuestions~\cite{berant2013semantic}, CuratedTrec~\cite{baudivs2015modeling} and TriviaQA~\cite{joshi2017triviaqa}. We fine-tune the DPR-Multi system for 6 epochs using the synthetic examples with a learning rate of 1e-5 and a batch size of 128. The resulting model is our Adapted DPR model. We also use a second neural IR baseline based on the Inverse Cloze Task (ICT) method proposed in \cite{lee2019latent}. ICT is an unsupervised training procedure wherein a sentence is randomly masked out from the passage with a probability $p$ and used as the query to create a query-passage synthetic training pair. We adopt ICT as an alternative approach to generating synthetic training examples. We set $p$=$0.9$, which \citeauthor{lee2019latent}~\shortcite{lee2019latent} have shown to work best, and use the 288 wordpiece passages from the CORD-19 collection to create 1.8 million training examples. We train for 6 epochs using these ICT examples with a learning rate of 1e-5 and a batch size of 128. 
We then follow \citeauthor{lee2019latent}~\shortcite{lee2019latent} to do a final round of fine-tuning wherein only the question encoder is trained for 10 epochs using questions from the open version of NQ. Since the above technique does not require any in-domain labeled data, we use it as a baseline domain adaptation approach. We call this model the \textit{ICT} model. \begin{table*}[t] \centering \small \begin{tabular}{p{3.7cm}|p{6.4cm}|p{6.4cm}} \multicolumn{1}{c|}{\textbf{Example}} & \multicolumn{1}{c|}{\textbf{Adapted DPR}} & \multicolumn{1}{c}{\textbf{BM25}} \\ \hline \textbf{Q:} What was the fatality rate for SARS-CoV? \textbf{A:} 10\% & The case fatality rate (CFR) of COVID-19 was 2.3\% (44/1023), much lower than that of SARS (\textbf{10\%}) and MERS (36\%) (de Wit et al. 2016; Wu and McGoogan 2020). Suspected COVID-19 patients (with symptoms) could be diagnosed ... & The analysis estimated that the case-fatality rate of COVID-19 in Europe would range between 4\% and 4.5\%. The case-fatality rate of SARS-COV, which was a similar outbreak, was \textbf{10\%}, while the case-fatality rate of MERS-CoV was over 35\% ...\\ \hline \textbf{Q:} What is the molecular structure of the Human metapneumovirus (HMPV)? \textbf{A:} single-stranded RNA virus & Human bocavirus: hMPV is a paramyxovirus first discovered by van den Hoogen and colleagues15 in 2001. Similar to RSV, hMPV is a \textbf{single-stranded RNA virus} belonging to the Pneumoviridae subfamily, and causes many of the same symptoms ... & ... in specimens from 1976 to 2001. Collectively, these studies show that HMPV has been circulating undetected for many decades. Genome organization and structure: HMPV is a negative-sense, non-segmented, \textbf{single-stranded RNA virus}. \\ \hline \end{tabular} \caption{Examples where Adapted DPR and BM25 both retrieve passages that are not returned by the other system (in the top 100 results).} \label{tab:Retrieved-Ex} \end{table*} \subsection{Machine Reading Comprehension} The baseline MRC system fine-tunes a pre-trained RoBERTa-large model for 3 epochs on SQuAD2.0 and then for 1 epoch on Natural Questions (NQ) training examples. It achieves a short answer EM of 59.4 on the NQ dev set, which is competitive with numbers reported in \cite{liu2020rikinet}. We use the Transformers library \cite{Wolf2019HuggingFacesTS} for all our MRC experiments. For masked LM fine-tuning of the pre-trained RoBERTa-large LM on the CORD-19 collection, we use approximately 1.5GB of text containing 225 million tokens. We train for 8 epochs with a learning rate of 1.5e-4 using the Fairseq toolkit \cite{ott2019fairseq}. For the downstream fine-tuning of this LM to the MRC task, we train for 3 epochs on SQuAD2.0, 1 epoch each on the filtered synthetic MRC examples and the NQ dataset. During roundtrip consistency filtering, we use a high answerability score threshold of $t$=$7.0$, and are left with around 380k synthetic MRC examples after filtering. \subsection{Metrics} We evaluate the IR models using the recall-based Match@20, Match@40 and Match@100 metrics, similar to \cite{karpukhin2020dense}. These metrics measure the top-$k$ retrieval accuracy, which is the fraction of questions for which the top $k$ retrieved passages contain a span that answers the question. For the MRC models, we use the standard Exact Match (EM) and F1 score for evaluation. Finally, we evaluate the end-to-end QA systems on Top-1 F1 and Top-5 F1. 
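A minimal sketch of the Match@$k$ computation used for the retrieval evaluation is shown below; the function and variable names are illustrative rather than taken from our code base.
\begin{verbatim}
# Illustrative implementation of the Match@k retrieval metric.
from typing import List

def match_at_k(retrieved: List[List[str]],
               answers: List[List[str]], k: int) -> float:
    """Fraction of questions whose top-k retrieved passages contain
    any of the gold answer strings."""
    hits = 0
    for passages, golds in zip(retrieved, answers):
        if any(g.lower() in p.lower()
               for p in passages[:k] for g in golds):
            hits += 1
    return hits / len(retrieved)

# e.g. match_at_k(all_rankings, all_answers, k=20) gives Match@20
\end{verbatim}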
\subsection{Dense Passage Retriever (DPR)} Given a collection of passages, the DPR model creates an index in a continuous space to retrieve passages relevant to an input question. It uses a Siamese neural network \cite{koch2015siamese} (aka. dual encoder) model with separate dense encoders $E_{Q}(.)$ and $E_{P}(.)$ for the question and passage, respectively. Each encoder is a BERT \cite{devlin2019bert} (base, uncased) model that produces the hidden representation of the [CLS] token as output. The similarity between a question and a passage is the dot product of their encoder outputs: \begin{equation}\label{eq:sim} sim(q,p) = E_Q(q)^TE_P(p) \end{equation} \noindent Since Eq.~\ref{eq:sim} is decomposable, representations of all passages in the collection are pre-computed and stored in an index using FAISS \cite{johnson2019billion}. Given an input question \textit{q}, the top \textit{k} passages with representations close to $E_{Q}(q)$ are then retrieved. \citeauthor{karpukhin2020dense}~\shortcite{karpukhin2020dense} show that training examples for such a dual-encoder model can be obtained from existing MRC datasets. Each training instance $(q_i, p_i^+, p_{i,1}^-, ..., p_{i,n}^-)$ contains a question $q_i$, one positive passage $p_i^+$ and $n$ negative passages $p_{i,j}^-$. The training loss is the negative log-likelihood of the positive passage: \begin{equation} L = -\log\frac{e^{sim(q_i, p_i^+)}}{e^{sim(q_i, p_i^+)} + \sum_{j=1}^n e^{sim(q_i, p_{i,j}^-)}} \end{equation} \noindent While negative passages for a given question can be simply sampled from the collection, \citeauthor{karpukhin2020dense}~\shortcite{karpukhin2020dense} show that having a top passage returned by BM25 among the negatives helps improve performance. To make the training process more efficient, the trick of in-batch negatives \cite{yih2011learning, gillick2019learning} is also used. Thus, for each question in a training mini-batch, the following passages are used as negatives: (1)~a passage returned by BM25 that is not labeled positive, (2)~positive passages as well as BM25-retrieved negatives for other questions in the mini-batch. In open-domain QA, DPR outperforms a strong Lucene-BM25 system by 9-19\% top-20 passage retrieval accuracy on a wide range of benchmark datasets. \subsection{Synthetic Training} We train our synthetic example generator on existing open-domain MRC data and give it target domain passages as input to produce multiple question-answer pairs per passage. For IR, we discard the generated answers to first construct positive question-passage pairs. To create negative examples, for each question, we select a BM25-retrieved passage which does not contain the answer text but has a high lexical overlap with the question. For each generated question, we create an IR training example by aggregating the question, the positive and the negative passage. During fine-tuning of the DPR model, at each iteration, a set of questions is randomly sampled from the generated dataset. Following \cite{karpukhin2020dense}, we use in-batch negatives while training. We call this final model the \emph{Adapted DPR} model. Prior to retrieval using Adapted DPR, we pre-compute the passage representations for the entire retrieval corpus, which for the work presented in this paper was obtained from the CORD-19 collection. The embeddings are indexed using FAISS for efficient run-time retrieval. Finally, given a question, the same inference procedure as in DPR is followed for retrieval. 
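For concreteness, the dot-product scoring and the in-batch-negative objective used both for DPR and for our Adapted DPR fine-tuning can be written compactly as follows. This is a simplified sketch rather than the exact DPR training code; the passage layout (one positive followed by the BM25 negatives for each question) is an assumption made for the illustration.
\begin{verbatim}
# Simplified sketch of dual-encoder scoring with in-batch negatives
# (not the exact DPR implementation).
import torch
import torch.nn.functional as F

def in_batch_nll(q_vecs: torch.Tensor, p_vecs: torch.Tensor) -> torch.Tensor:
    """q_vecs: [B, d] question encodings.
    p_vecs: [B * m, d] passage encodings, where each block of m rows
    holds the positive passage of a question followed by its BM25
    negatives; all other rows in the batch act as extra negatives."""
    scores = q_vecs @ p_vecs.t()                    # [B, B*m] dot products
    m = p_vecs.size(0) // q_vecs.size(0)
    positives = torch.arange(q_vecs.size(0), device=scores.device) * m
    # negative log-likelihood of the positive passage (the loss above)
    return F.cross_entropy(scores, positives)
\end{verbatim}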
\section{Introduction} \blfootnote{* Work done during AI Residency at IBM Research.} \input{introduction.tex} \section{Related Work} \input{related.tex} \section{COVID-19 Datasets} \input{datasets.tex} \section{Generating Synthetic Training Examples} \input{generator.tex} \section{Information Retrieval} \input{ir.tex} \section{Machine Reading Comprehension} \input{qa.tex} \section{Experimental Setup} \input{experiments.tex} \section{Results and Discussion} \input{results.tex}
\section{Conclusion}
We present an approach for zero-shot adaptation of an open-domain end-to-end question answering system to a target domain, in this case COVID-19. We propose a novel example generation model that can produce synthetic training examples for both information retrieval and machine reading comprehension. Importantly, our generation model, trained using open-domain supervised QA data, is used to generate synthetic question-answer pairs in the target domain. By running extensive evaluation experiments, we show that our end-to-end QA model as well as its individual IR and MRC components benefit from the synthetic examples. Low-resource target domains can present significant challenges for natural language processing systems. Our work shows that synthetic generation can be an effective domain adaptation approach for QA. Future work will explore semi-supervised and active learning approaches to determine how a small amount of supervision can further improve results.
\bibliographystyle{aaai21}
\subsection{CORD-19 Language Modeling}
\citeauthor{gururangan-etal-2020-dont}~\shortcite{gururangan-etal-2020-dont} have shown that adapting a pre-trained open-domain LM to unlabeled text in a target domain before task-specific fine-tuning can be beneficial for the target task. We begin with a pre-trained RoBERTa-large LM \cite{liu2019roberta} and continue masked LM training \cite{devlin2019bert} on the CORD-19 documents. This target domain LM serves as the starting point for the later MRC fine-tuning steps.
\subsection{MRC Model Architecture}
Before giving the details of MRC fine-tuning, we briefly discuss our MRC model architecture. We use a standard extractive MRC model \cite{devlin2019bert} that extracts a short answer from a passage given a question. The network uses two classification heads on top of a pre-trained RoBERTa LM, which point to the start and end positions of the answer span. For unanswerable examples, the classification heads point to the position of the [CLS] token. Let $start(.)$ and $end(.)$ be the outputs of the start and end classification heads. Then the MRC score of an answer span $(s,e)$, where $s$ is the start and $e$ is the end token, is defined as:
\begin{equation}\label{eq:mrc}
\begin{split}
Score(s,e) = {} & start(s) + end(e) \\
& - start(\mbox{[CLS]}) - end(\mbox{[CLS]})
\end{split}
\end{equation}
\subsection{Synthetic Training with Roundtrip Consistency}
To fine-tune the CORD-19 LM for the MRC task, we use both human-annotated data from two open-domain MRC datasets---SQuAD2.0 \cite{rajpurkar2018know} and Natural Questions (NQ) \cite{kwiatkowski2019natural}---and the synthetic question-answer pairs generated by our example generator from CORD-19 passages. For the synthetic training examples, we use a roundtrip consistency \cite{alberti2019synthetic} filter to remove noisy examples from the generated data. It utilizes a pre-trained MRC model to evaluate the quality of automatically generated question-answer pairs.
Specifically, following \citeauthor{alberti2019bert}~\shortcite{alberti2019bert}, we fine-tune a RoBERTa LM first on SQuAD2.0 and then on NQ. Given a synthetic question, this MRC model first computes the MRC scores of candidate answer spans (Eq.~\ref{eq:mrc}) in the passage. We take the highest score over all candidate spans as the \emph{answerability} score of the synthetic question, and filter the example out if this score is lower than a threshold (tuned on our dev fold of the COVID-QA-2019 dataset). This filter uses the answerability score of the question as a measure of noise in the generated example, since the question generated from the passage is expected to be answerable. \subsection{Fine-Tuning Sequence} We adapt the CORD-19 LM to the final MRC task using the following sequence of fine-tuning steps. First, we fine-tune on SQuAD2.0 examples, then on the roundtrip-consistent synthetic examples from the CORD-19 passages, and finally on the NQ examples. We experimented with other sequence orders too but found the above to yield the best performance on our dev fold of the COVID-QA-2019 \cite{mollercovid} dataset. We call this final model the \emph{Adapted MRC} model. \subsection{Information Retrieval} We evaluate our proposed system, Adapted DPR, against a number of traditional term matching and neural IR baselines. Specifically, we use BM25\footnote{\href{https://lucene.apache.org}{Lucene} Implementation. BM25 parameters $b=0.75$ (document length normalization) and $k_1=1.2$ (term frequency scaling) worked best.} as the term matching baseline, the DPR-Multi system as the zero-shot open-domain baseline and ICT as a domain adaptation baseline. Further, we also evaluate models that combine term matching and neural approaches. We take the top 2,000 passages retrieved by BM25 and neural models separately, and score each passage using a convex combination of its BM25 and neural IR scores after normalization (weight is tuned on the Open-COVID-QA-2019 dev set). We create two such combined systems: BM25+DPR-Multi and BM25+Adapted DPR. \subsubsection{Results} Table \ref{tab:IR} shows the performance of different IR systems on the Open-COVID-QA-2019 and COVID-QA-111 datasets. BM25 shows strong performance on both datasets, demonstrating the robustness of such term matching methods. While the neural DPR-multi system is competitive with BM25 on COVID-QA-111, it is considerably behind on the larger Open-COVID-QA-2019 dataset. The ICT model improves over DPR-multi, showing that domain adaption using such unsupervised techniques can be beneficial. Our Adapted DPR system achieves the best single system results on both datasets, demonstrating the effectiveness of using our synthetic example generation for domain adaptation. On the Open-COVID-QA-2019 test set, our model improves over the baseline DPR-Multi system by more than 100\%. Finally, we see that a combination of BM25 and the neural approaches can give considerable performance improvements. Combining DPR-Multi with BM25 does not lead to any gains on Open-COVID-QA-2019 likely due to the fact that DPR-Multi performs poorly on this dataset. However, we see large gains from combining BM25 and our Adapted DPR system, as both perform well individually on the two datasets. Our final BM25+Adapted DPR system is better than the next best baseline by about 14 points across all metrics on the test set of Open-COVID-QA-2019 and up to 7 points on Match@100 on COVID-QA-111. 
\subsubsection{Analysis} On a closer look at the passages retrieved individually by BM25 and Adapted DPR, we observe that the two sets of retrieved passages are very different. For the Open-COVID-QA-2019 dataset, only 5 passages are common on average among the top 100 passages retrieved separately by the two systems. This difference is also visible in the relevant passages (passages that contain an answer to the question) that are returned by the two systems. We observe many cases where the two systems retrieve mutually exclusive relevant passages in the top 100 retrieved results. Table \ref{tab:Retrieved-Ex} shows two such examples. This diversity in retrieval results demonstrates the complementary nature of the two systems and also explains why their combination leads to improved IR performance. \begin{table}[h] \centering \begin{tabular}{llll} \multicolumn{1}{c}{Model} & M@20 & M@40 & M@100 \\ \hline NQ-style SynQ & 20.4 & 23.9 & 27.9 \\ Squad-style SynQ & 28.0 & 31.8 & 39.0 \\ \hline \end{tabular} \caption{Retrieval performance on the dev fold of Open-COVID-QA-2019.} \label{tab:IR_Qtype} \end{table} \begin{table*}[!] \centering \begin{tabular}{l||ll|ll||ll} \multicolumn{1}{c||}{Model} & \multicolumn{4}{c||}{Open-COVID-QA-2019} & \multicolumn{2}{c}{COVID-QA-111} \\ \hline \multicolumn{1}{c||}{} & \multicolumn{2}{c|}{Dev} & \multicolumn{2}{c||}{Test} & \multicolumn{2}{c}{Test} \\ \multicolumn{1}{c||}{} & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 \\ \hline BM25 $\rightarrow$ Baseline MRC & 21.7 & 31.8 & 27.1 & 38.7 & 24.1 & 39.3\\ (BM25 + DPR-Multi) $\rightarrow$ Baseline MRC & 21.4 & 30.9 & 25.2 & 37.2 & 24.4 & 43.2 \\ (BM25 + Adapted DPR) $\rightarrow$ Baseline MRC & 24.2 \begin{scriptsize}(0.9)\end{scriptsize} & 35.6 \begin{scriptsize}(0.3)\end{scriptsize} & 29.5 \begin{scriptsize}(0.1)\end{scriptsize} & 44.2 \begin{scriptsize}(0.2)\end{scriptsize} & 25.0 \begin{scriptsize}(0.2)\end{scriptsize} & 45.9 \begin{scriptsize}(0.8)\end{scriptsize} \\ (BM25 + Adapted DPR) $\rightarrow$ Adapted MRC & \textbf{27.2} \begin{scriptsize}(0.9)\end{scriptsize} & \textbf{37.2} \begin{scriptsize}(0.2)\end{scriptsize} & \textbf{30.4} \begin{scriptsize}(0.3)\end{scriptsize} & \textbf{44.9} \begin{scriptsize}(0.1)\end{scriptsize} & \textbf{26.5} \begin{scriptsize}(0.5)\end{scriptsize} & \textbf{47.8} \begin{scriptsize}(0.8)\end{scriptsize} \\ \hline \end{tabular} \caption{End-to-end question answering F1 scores on the open version of COVID-QA-2019 and COVID-QA-111.} \label{tab:Open_QA} \end{table*} To further investigate synthetic example generation, besides SQuAD, we also train the generator on a second MRC dataset, namely, the Natural Questions (NQ) dataset \cite{kwiatkowski2019natural}. NQ contains information seeking questions from real-life users of Google search whereas SQuAD contains well-formed questions created by annotators after looking at the passage. Thus these two datasets contain distinct question styles; we explore which one yields a better synthetic example generator for our application. Table \ref{tab:IR_Qtype} compares the performance of the NQ-style synthetic examples with the SQuAD-style examples while adapting the DPR-Multi model. We can see from the results that using questions from a SQuAD-trained synthetic generator is considerably better. \subsection{Machine Reading Comprehension} Table \ref{tab:QA} shows results on the test sets of different MRC datasets. Input to the models is a question and an annotated document that contains an answer. 
The Adapted MRC model incorporates both language modeling on the CORD-19 collection and synthetic MRC training. Over our state-of-the-art open-domain MRC baseline, we see 2.0 and 3.7 F1 improvements on the test sets of COVID-QA-2019 and COVID-QA-147, respectively.
\begin{table}[h] \centering \small \begin{tabular}{lllll} \multicolumn{1}{c}{Model} & \multicolumn{2}{c}{COVID-QA-2019} & \multicolumn{2}{c}{COVID-QA-147} \\ \hline \multicolumn{1}{c}{} & EM & F1 & EM & F1 \\ \hline Baseline MRC & 34.7 & 62.7 & 8.8 & 31.0\\ Adapted MRC & \textbf{37.2} \begin{scriptsize}(0.4)\end{scriptsize} & \textbf{64.7} \begin{scriptsize}(0.1)\end{scriptsize} & \textbf{11.3} \begin{scriptsize}(0.6)\end{scriptsize} & \textbf{34.7} \begin{scriptsize}(1.1)\end{scriptsize}\\ \hline \end{tabular} \caption{MRC performance on the test folds of two datasets.} \label{tab:QA} \end{table}
To further demonstrate the improvements from language modeling and using synthetic examples, we present in Table \ref{tab:QA_dev} the results on the COVID-QA-2019 dev set from incrementally applying the two domain adaptation strategies. We see that both strategies yield performance gains. However, it can be seen that using the synthetic MRC examples from our generator contributes more, with a 3.1 EM and 2.6 F1 increase vs. a 1.5 EM and 0.8 F1 improvement from language modeling.
\begin{table}[h] \centering \begin{tabular}{lcc} \multicolumn{1}{c}{Model} & EM & F1 \\ \hline \hline Baseline MRC & 34.0 & 59.4 \\ + Language Modeling on CORD-19 & 35.5 & 60.2 \\ + Adding SynQ during MRC training & 38.6 & 62.8 \\ \hline \end{tabular} \caption{Machine reading comprehension performance on the dev split of COVID-QA-2019.} \label{tab:QA_dev} \end{table}
\subsection{End-to-End Question Answering}
Finally, we combine different IR and MRC systems to create end-to-end QA systems. We measure the improvements from our domain adaptation strategy in this setting, where only the question is given as input for QA over the entire corpus. In the retrieval phase, we take the top $K$ passages ($K$ tuned on dev) from the IR system. Each passage is then passed to the MRC model to get the top answer and its MRC score. Finally, we normalize the IR and MRC scores and combine them via a convex combination (IR weight = $0.7$, tuned on dev). We observe that using $K$=$100$ works best when IR is BM25 only and $K$=$40$ works best for BM25 + Neural IR systems. Table \ref{tab:Open_QA} shows the end-to-end F1 performance of the combinations of IR and MRC systems. We see that both having a better retriever (BM25+Adapted DPR) and a better MRC model (Adapted MRC) contribute to improvements in end-to-end QA performance. To verify the statistical significance of our end-to-end QA results, we perform a paired $t$-test \cite{hsu2005paired} on the Top-5 F1 scores for both datasets. Our final end-to-end QA system is significantly better than the baseline system at $p<0.01$.
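The answer-selection step described above can be summarised by the following sketch; min-max normalization is assumed here for illustration, and the names are not taken from our implementation.
\begin{verbatim}
# Simplified sketch of the end-to-end answer scoring: combine the
# normalized IR and MRC scores of the top-K retrieved passages.
import numpy as np

def minmax(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

def rank_answers(ir_scores, mrc_scores, answers, ir_weight=0.7):
    """Convex combination of normalized scores; ir_weight corresponds
    to the value tuned on the dev set."""
    combined = (ir_weight * minmax(ir_scores)
                + (1.0 - ir_weight) * minmax(mrc_scores))
    order = np.argsort(-combined)
    return [(answers[i], float(combined[i])) for i in order]
\end{verbatim}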
\section{Summary of Contributions} \label{sec:features} Brax trains locomotion and dexterous manipulation policies in seconds to minutes using just one modern accelerator. Brax achieves this by making extensive use of auto-vectorization, device-parallelism, just-in-time compilation, and auto-differentiation primitives of the JAX\cite{jax2018github} library. In doing so, it unlocks simulation of simple rigidbody physics systems in thousands of independent environments across hundreds of connected accelerators. For an individual accelerator, Brax reaches millions of simulation steps per second on environments like OpenAI Gym's MuJoCo Ant\cite{schulman2015high}. See Sec. \ref{sec:perf} for more details, or our Colab\cite{trainingurl} to train a policy interactively. The structure of the paper is as follows: we first provide motivation for our engine in Sec. \ref{sec:intro}. In Sec. \ref{sec:physics_loop}, we describe the architecture of Brax, starting from the low level physics primitives, how they interact, and how they can be extended for practitioners interested in physics based simulation. In Sec. \ref{sec:envs}, we review our ProtoBuf environment specification, and detail how it can be used to construct rich physically simulated tasks, including the suite of tasks bundled in this initial release. In Sec. \ref{sec:brax_rl}, we tour some of the reinforcement learning algorithms bundled with Brax. In Sec. \ref{sec:perf}, we catalog scaling behavior of Brax on accelerators as well as performance comparisons between Brax and MuJoCo on OpenAI Gym-style learning problems. Finally, in Sec. \ref{sec:limitations}, we discuss the limitations and possible extensions of our engine. \section{Motivation} \label{sec:intro} The reinforcement learning community has made significant progress on the study and control of physically simulated environments over the past several years. This progress stems from the confluence of strong algorithmic techniques~\citep{haarnoja2018softreview,haarnoja2018soft,schulman2017proximal,akkaya2019solving,levine2020offline,mania2018simple} with accessible simulation software~\citep{todorov2012mujoco,tassa2018deepmind,summers2020lyceum,fan2018surreal,juliani2018unity}. On the algorithmic side, model-free optimization techniques like proximal policy optimization (PPO)\cite{schulman2017proximal} and soft actor critic methods (SAC)\cite{haarnoja2018soft} have exploded in popularity and can easily solve many of the ``hard'' control problems of the previous decade. On the simulation side, practitioners have the choice of a variety of engine backends to power their study of simulated environments, including MuJoCo\cite{todorov2012mujoco}, pybullet\cite{coumans2021}, and physX, among many others, many of which are differentiable\cite{heiden2021neuralsim,hu2019taichi,hu2019difftaichi,werling2021fast,degrave2019differentiable,de2018end, gradu2021deluca, juliani2018unity}. While these engines and algorithms are quite powerful, and have provided the firmament of algorithmic innovation for many years, they do not come without drawbacks. Reinforcement learning, as it is practiced, remains prohibitively expensive and slow for many use cases due to its high sample complexity: environments with only hundreds of dimensions of state space require millions to billions of simulation steps during RL exploration. 
As environments increasingly require interactive physics calculations as part of the environment step, this problem will only grow worse\cite{vinyals2019grandmaster,jaderberg2019human,berner2019dota}. While some progress has been made to lower this sample complexity using off-policy algorithms\cite{levine2020offline,fu2020d4rl,agarwal2020optimistic, hoffman2020acme}, RL systems instead frequently address sample complexity by scaling out the environment simulation to massive distributed systems. These distributed simulation platforms yield impressive RL results at nearly interactive timescales\cite{espeholt2018impala, espeholt2019seed, openai_rapid, salimans2017evolution}, but their hardware and power costs make them inaccessible to most researchers. The design of the simulation engine contributes to this inaccessibility problem in three ways: First, most simulation engines in use today run on CPU, while the RL algorithm runs on GPU or TPU, in another process or another machine. Latency due to data marshalling and network traffic across machines becomes the dominant factor in the time it takes to run an RL experiment. Second, most simulation engines are black boxes: they do not offer a gradient for the sampled environment state, which makes them suitable only for model-free RL approaches. This lack of differentiability forces the researcher to use slower, less efficient optimization methods. Finally, most simulation engines are black boxes in another way: they are either closed source, or built on an entirely different technical stack than the reinforcement learning algorithms. This lack of introspectability not only harms productivity by limiting rapid iteration and debugging, but it prevents researchers from understanding the relationship between the environment’s state and action space, which is often critical to guiding new RL research. We submit Brax as a proposed solution to all three problems at once. Brax puts a physics engine and RL optimizer together on the same GPU/TPU chip, improving the speed/cost of RL training by 100-1000x. It is differentiable, opening the door to new optimization techniques. And it’s an open source library that is packaged to run in Colabs, so that anyone can do RL research for free. \section{Using Brax: The core physics loop} \label{sec:physics_loop} Brax simulates physical interactions in maximal coordinates\cite{featherstone2014rigid}, where every independent entity in a scene that can freely move is tracked separately. This data---position, rotational orientation, velocity, and angular velocity---is typically the \emph{only} data that changes dynamically in the course of a simulation. All other dynamical relationships, like joints, actuators, collisions, and integration steps are then built as transformations on this fundamental state data. This is codified in the data primitive \emph{QP}, implemented as a flax\cite{flax2020github} dataclass, and named whimsically after the canonical coordinates \emph{q} and \emph{p} that it tracks. To make vectorization easy, \emph{QP}s have leading batch dimensions for the number of parallel scenes as well as the number of bodies in a scene. For example the shape of \emph{QP.pos} for 4 parallel scenes with 10 bodies per scene would be $[4,10,3]$. 
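A minimal sketch of such a batched state container is given below. The field names follow the description above; this is an illustration in the spirit of the library rather than the exact Brax source.
\begin{lstlisting}
# Minimal sketch of a batched maximal-coordinate state container
# (illustrative; not the exact Brax QP implementation).
import jax.numpy as jnp
from flax import struct

@struct.dataclass
class QP:
  pos: jnp.ndarray  # [num_scenes, num_bodies, 3] positions
  rot: jnp.ndarray  # [num_scenes, num_bodies, 4] orientation quaternions
  vel: jnp.ndarray  # [num_scenes, num_bodies, 3] linear velocities
  ang: jnp.ndarray  # [num_scenes, num_bodies, 3] angular velocities

def zero_qp(num_scenes: int, num_bodies: int) -> QP:
  """All bodies at rest at the origin with identity rotation."""
  rot = jnp.tile(jnp.array([1.0, 0.0, 0.0, 0.0]),
                 (num_scenes, num_bodies, 1))
  zeros = jnp.zeros((num_scenes, num_bodies, 3))
  return QP(pos=zeros, rot=rot, vel=zeros, ang=zeros)
\end{lstlisting}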
\begin{algorithm} \begin{lstlisting}[escapeinside={(*}{*)}]
def pseudo_physics_step((*$qp$*), action, dt):
  (*$qp$*) = kinematic_integrator.apply((*$qp$*), dt)
  for jo in joints:
    (*$dp_j$*) += jo.apply((*$qp$*))
  for ac in actuators:
    (*$dp_a$*) += ac.apply((*$qp$*), action)
  for co in colliders:
    (*$dp_c$*) += co.apply((*$qp$*))
  (*$qp$*) = potential_integrator.apply((*$qp$*), (*$dp_j$*) + (*$dp_a$*), dt)
  (*$qp$*) = collision_integrator.apply((*$qp$*), (*$dp_c$*))
\end{lstlisting} \caption{Pseudocode for the structure of a physics step in Brax. Impulsive updates ($dp_i$) are collected in parallel for each type of joint, actuator, and collider. Integrator transformations then apply these updates to the \emph{qp}.} \label{alg:braxloop} \end{algorithm} A physically simulated object generally includes extra data, so we bundle other information---masses, inertias, dimensions of objects, etc.---in abstractions associated with particular QPs. These abstractions are \emph{bodies}, \emph{joints}, \emph{actuators}, and \emph{colliders}. As an example, a \emph{joint.revolute} class bundles together all of the relevant metadata that governs a 1-degree-of-freedom constraint for a pair of parent and child \emph{bodies}. The \emph{apply} function for this class then calculates the forces and torques---i.e., changes in velocity, angular velocity, position, and rotation---necessary to constrain the two bodies into a 1-degree-of-freedom joint configuration. These two bodies are associated with particular indices in the \emph{QP} object. Thus, calling \emph{joint.revolute.apply(qp)} gathers the relevant physical data from the full QP object---i.e., the two qp entities that are being constrained---and returns a vectorized, differential update to the full QP state. All Brax transformations follow this pattern, where an apply function transforms a \emph{QP} in this way. To complete the physics step, Brax then sums up all of the differential updates to \emph{QP} data in the course of a single short timestep, and transforms the system state via a second-order symplectic Euler update (extensions to higher-order integrators are straightforward, but see Sec.~\ref{sec:limitations} for more details). Throughout, we parallelize wherever possible, across actuators, joints, colliders, and even entire simulation scenes. See Alg. \ref{alg:braxloop} for pseudocode describing the structure of this loop, or \cite{brax_physics_loop} for the code of the loop. An overarching \emph{system} class handles the coordination and bookkeeping of all of these updates and physical metadata. This class also provides a way to perform a single simulation step via the \emph{step} function: $$qp_{t+\delta t} = \textrm{Brax\_system.step}(qp_{t}, \textrm{actions})$$ where \emph{actions} are any torques or target angles needed by any actuators in the system. Modifying or extending this control flow is as simple as implementing a new Brax\_transformation that conforms to this structure, and then appropriately inserting this transformation in the physics step function. To better visualize and explore Brax's core physics loop, please see our basics Colab \cite{basicsurl}. \section{Using Brax: Creating and evaluating environments} \label{sec:envs} Brax provides an additional abstraction layer for interacting with physically simulated scenes. In Sec. \ref{sec:sys_spec}, we describe the ProtoBuf specification for defining Brax systems---i.e., the lowest-level data that describes any physics constraints in a system. Next, in Sec.
\ref{sec:env_spec}, we motivate the \emph{env} class, which allows practitioners to construct gym-like decision problems on top of Brax \emph{systems}. Finally, we discuss the environments that have been prepackaged with Brax. \subsection{System specification} \label{sec:sys_spec} Our ProtoBuf text specification allows a user to define all of the \emph{bodies} in a scene, how they are connected to each other via \emph{joints}, as well as any \emph{actuators} or \emph{colliders} between objects, pairwise. For any tree of \emph{bodies} connected by \emph{joints}, Brax's \emph{system} class will automatically determine the position and rotation of the \emph{qp} that places each body in a valid joint configuration through the $\textrm{system.default\_qp}$ method. Reminiscent of, e.g., MuJoCo's XML-based system definition, users can define systems in text, or they can define systems programmatically. We provide an example configuration that defines a joint between a parent and child body in Appendix \ref{app:config}, both in the pure text form and the programmatic form. Similar configuration files define every system in the Brax repo within each respective environment file, e.g. \cite{antdef}. See our introductory Colab notebooks for an interactive tour of both of these APIs. \subsection{Gym-like environments} \label{sec:env_spec} For sequential decision problems, we must track extra metadata beyond what is necessary for a physics update. We provide an \emph{env} class that handles the bookkeeping of initialization, resets, observations, actions, and reward definitions required to fully specify a sequential decision problem. We also provide a wrapper around this class that exposes it directly through an OpenAI Gym-style interface. To illustrate the versatility of Brax as an engine, we include and solve several example environments in our initial release: MuJoCo-likes (Ant, Humanoid, Halfcheetah), Grasp (a dexterous manipulation environment), and Fetch (a goal-based locomotion environment). See Table \ref{tab:env_info} for the dimension data for these environments. \begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|} \hline \textbf{Env Name} & \textbf{Obs Dim} & \textbf{Act Dim} & \textbf{Type} \\ \hline Halfcheetah & 25 & 7 & continuous \\ \hline Ant & 87 & 8 & continuous \\ \hline Humanoid & 299 & 17 & continuous \\ \hline Grasp & 139 & 19 & continuous \\ \hline Fetch & 101 & 10 & continuous \\ \hline \end{tabular} \caption{Observation and action space data for the environments included in Brax.} \label{tab:env_info} \end{table} \subsubsection{MuJoCo Gym-Likes} The reinforcement learning and control communities have used the OpenAI Gym MuJoCo tasks as benchmarks for developing algorithms for the past several years. While these tasks are well-understood, and essentially solved, we provide our own fairly faithful reconstructions of three of these environments as a baseline point of comparison to help ground practitioner expectations. Owing to subtle engine differences, these environments are not perfectly identical to the MuJoCo scenes on which they are based, and we call out major differences in Appendix \ref{app:muj_compare}. \subsubsection{Grasp} Dexterous manipulation tasks have exploded in popularity as algorithmic and hardware advances have enabled robots to solve more complicated problems. \emph{Grasp} is a simple pick-and-place environment, where a 4-fingered claw hand must pick up and move a ball to a target location.
We include this environment primarily as a proof-of-concept to demonstrate that the contact physics of our engine are sufficient to support nontrivial manipulation tasks. For a representative sample trajectory of a successful policy, see Fig. \ref{fig:grasp_traj} in Appendix \ref{app:grasp_fig}. \subsubsection{Fetch} We performed extensive experimentation on a variety of goal-directed locomotion tasks. \emph{Fetch} represents a generally stable environment definition that is able to train a variety of morphologies to locomote within 50 million environment frames. For this release, we include a toy, boxy dog-like quadruped morphology as the base body, but it is straightforward to modify this scene for new body morphologies. \section{Using Brax: Solving locomotion and manipulation problems} \label{sec:brax_rl} To train performant policies on the environments included in this release and interactively evaluate them, see our training Colab\cite{trainingurl}. \subsection{Learning Algorithms Bundled with Brax} Brax includes several common reinforcement learning algorithms that have been implemented to leverage the parallelism and just-in-time-compilation capabilities of JAX. These algorithms are: \begin{itemize} \item Proximal Policy Optimization (PPO) \cite{schulman2017proximal} \item Soft Actor Critic (SAC) \cite{haarnoja2018softreview} \item Evolution Strategy (ES) \cite{salimans2017evolution} \item Analytic Policy Gradient (APG) \end{itemize} Each algorithm is unique in some respects. PPO is an on-policy RL algorithm, SAC is off-policy, ES is a black-box optimization algorithm, and APG exploits differentiability of the rewards provided by the environment. This breadth of algorithmic coverage demonstrates the flexibility of Brax, as well as its potential to accelerate research and reduce costs. For this work, we focus our experimental analysis on PPO and SAC (see, e.g., Sec.~\ref{sec:perf}), and defer analysis of ES and APG to future work. \subsubsection{Proximal Policy Optimization (PPO)} To capture the full benefit of a JAX-based batched environment running on one or more accelerators, we built a custom implementation of PPO. In particular, the environment data (rollouts) are generated on an accelerator and subsequently processed there by an SGD optimizer. This data never needs to leave the accelerator, nor is there any need for context switches between processes. The whole training loop (env rollouts + SGD updates) happens within a single uninterrupted jitted function. The training proceeds as follows: \begin{itemize} \item the batch is split evenly between every available accelerator core and environment rollouts are collected \item normalization statistics are computed based on this batch, stats are synced between all cores, and then observations are normalized \item each accelerator core splits the batch into an appropriate number of minibatches for which gradient updates are computed, synced between all cores, and then applied synchronously \end{itemize} The performance/throughput of the algorithm depends heavily on the hyperparameters (e.g., batch size, number of minibatches, number of optimization epochs). We noticed that for the best hyperparameters, our implementation of PPO is efficient enough that the primary bottleneck comes from the environment (e.g., 75\% of the time goes to running the env for Ant), even though the environment itself is quite fast. \subsubsection{Soft Actor Critic (SAC)} Unlike PPO, SAC samples its training batches from a replay buffer.
To use the full potential of Brax, we implemented a custom SAC with a replay buffer living completely on an accelerator. This allowed the whole training procedure to be compiled into a single jitted function and run without any interruptions. The training roughly proceeds as follows: \begin{itemize} \item each available accelerator core runs the environment for a few steps and adds this data to an individual per-core replay buffer \item normalization statistics are computed based on the newly generated data, and stats are synced between all cores \item several SGD updates are performed, where each accelerator core samples its part of a batch from its own replay buffer, computes gradient updates, and synchronizes the final update with other cores \end{itemize} SAC is much more sample-efficient than PPO, so training throughput instead becomes bottlenecked by SGD updates (12\% of the time for running the env, 10\% for working with the replay buffer, 78\% for SGD updates). Because of the poor scaling of SGD updates to multiple cores, using more than one accelerator core provided only marginal benefit, so the most cost-efficient setup was achieved with a single accelerator core. \subsubsection{Evolution Strategy (ES)} To implement ES we followed the same paradigm as for PPO/SAC: everything runs on the accelerator without interruption, with all processing contained on the accelerator. The training proceeds as follows: \begin{itemize} \item a lead accelerator core generates policy parameter perturbations \item the policy parameter perturbations are split evenly between all available accelerator cores for evaluation \item the lead core computes gradients based on the evaluation scores and updates the policy \end{itemize} The algorithm spends $>99\%$ of its running time evaluating environment steps. \subsubsection{Analytic Policy Gradient (APG)} As a proof of concept of how to leverage the differentiability of our engine, we provide an APG implementation. Training is significantly simpler than for the previous algorithms: \begin{itemize} \item compile a function that takes a gradient of the loss through a short trajectory \item perform gradient descent with this function \end{itemize} After compiling the gradient update, this algorithm spends the majority of the remaining time evaluating the gradient function. This algorithm is less mature than the previous three: it does not currently produce locomotive gaits, and instead seems prone to being trapped in local minima on the environments we provide. Differentiating through long trajectories is an active area of research\cite{toussaint2018differentiable, de2018end, hu2019difftaichi} and is known to be difficult to optimize\cite{williams1990efficient, metz2019understanding}, so we defer more advanced differentiable algorithms to future releases. \subsection{Training Performance} As part of our release, we include performant hyperparameters for all of our environments. These hyperparameters typically solve their environment with a standard accelerator in seconds to minutes. For exhaustive listings of our hyperparameter experiments, see our \textbf{\href{https://github.com/google/brax/tree/main/datasets}{repo}}\cite{datasets}. For plots of the performance of the best 20 hyperparameter settings for each environment from exhaustive hyperparameter sweeps over SAC and PPO, see Appendix \ref{app:hyperparam_sweeps}.
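The per-core gradient computation and cross-core synchronization described above for PPO and SAC follow a standard JAX device-parallelism pattern. The snippet below is a self-contained toy sketch of that pattern only: a linear model on random data stands in for the policy and rollout data, and none of the names here are Brax's actual training API.
\begin{lstlisting}
# Toy sketch of per-core gradients synchronized with a collective (not Brax's code).
import jax
import jax.numpy as jnp

def loss_fn(params, batch):
  pred = batch["obs"] @ params
  return jnp.mean((pred - batch["target"]) ** 2)

def update(params, batch):
  grads = jax.grad(loss_fn)(params, batch)
  grads = jax.lax.pmean(grads, axis_name="i")  # average gradients across cores
  return params - 1e-3 * grads                 # apply the same update on every core

p_update = jax.pmap(update, axis_name="i")

n = jax.local_device_count()
params = jnp.zeros((n, 4, 1))                      # parameters replicated per core
batch = {
    "obs": jax.random.normal(jax.random.PRNGKey(0), (n, 256, 4)),  # one shard per core
    "target": jnp.zeros((n, 256, 1)),
}
params = p_update(params, batch)
\end{lstlisting}
As described above, in Brax the environment rollouts and the optimizer updates additionally live inside the same jitted function, so no data leaves the accelerator between these stages.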
\section{Performance Benchmarking} \label{sec:perf} \subsection{Parallelizing over Accelerators} \begin{figure}[!htbp] \centering \begin{overpic}[width=0.75\textwidth]{brax_scaling.png} \end{overpic} % \caption{ (left) Scaling of the effective environment steps per second for each environment in this release on a 4x2 TPU v3. (right) Scaling of the effective environment steps per second for several accelerators on the Ant environment. Error bars are not visible at this scale. \label{fig:brax_scaling} } \end{figure} By leveraging JAX's vectorization and device parallelism primitives, we can easily scale Brax up to hundreds of millions of steps per second of performance by distributing environment computation within and across accelerators. Fig. \ref{fig:brax_scaling} depicts these scaling curves for the suite of environments included in this release on a particular fast, modern accelerator cluster (4x2 topology of TPUv3), as well as the performance scaling on the Ant environment for a variety of accelerators and TPU topologies. For reference, Colab TPU instances currently provide limited free usage of 2x2 TPUv2 accelerators. \subsection{Engine Comparisons} A perfectly apples-to-apples comparison between engines is difficult, primarily because the main way to parallelize the most widely used engines is either by custom multithreading harnesses over CPU, or by distributed aggregation of headless workers with attached accelerators---typically bespoke setups not available to most practitioners. Thus, it probably isn't fair to compare Brax's Ant environment compiled to and running on a TPUv3 8x8 accelerator (\textasciitilde{}hundreds of millions of steps per second) to the typical use case of a practitioner running the OpenAI Gym MuJoCo-Ant on a single-threaded machine (\textasciitilde{}thousands of steps per second). While we include Brax results from deployment on large clusters of TPUs, we emphasize that Brax performance on a single 1x1 TPUv2 is significantly better than what the vast majority of practitioners have, until now, been able to achieve, and at dramatically reduced cost. To make this performance gap clear, we first consider a qualitative comparison of training speed for the Ant environment with Brax's PPO implementation over a variety of architectures. We compare this to a traditional setup with a standard implementation of PPO\cite{hoffman2020acme}---i.e., neither compiled nor optimized for parallelism---visualized in Fig. \ref{fig:brax_wallclock}. Note that Brax reaches performant locomotion in ten seconds or so, whereas the standard PPO implementation takes close to half an hour. \begin{figure}[!h] \centering \begin{overpic}[width=0.75\textwidth]{brax_wallclock.png} \end{overpic} % \caption{Qualitative comparisons of training curves for Brax's compiled and optimized PPO implementation versus a standard PPO implementation\cite{hoffman2020acme}. Note the x-axis is log-wallclock-time in seconds. All curves with ``brax'' labels are Brax's version of Ant, whereas the MuJoCo curve is MuJoCo-Ant-v2. Both implementations of PPO were evaluated for 10 million environment steps. Shaded region indicates lowest and highest performing seeds over 5 replicas, and solid line indicates mean. See App. \ref{app:hyperparams} for hyperparameters used. \label{fig:brax_wallclock} } \end{figure} \begin{figure}[!htbp] \centering \begin{overpic}[width=1.0\textwidth]{brax_muj_compare.png} \end{overpic} % \caption{Qualitative comparisons of training curve trajectories in MuJoCo and Brax.
(left) Training curves for MuJoCo-Humanoid-v2 and brax-humanoid, (middle) MuJoCo-Ant-v2 and brax-ant, and (right) MuJoCo-HalfCheetah-v2 and brax-halfcheetah. All environments were evaluated with the same standard implementation of SAC\cite{hoffman2020acme}, with environment stepping on CPU and learning on a 2x2 TPUv2---i.e., \emph{not} Brax's accelerator-optimized implementation. Solid lines indicate average performance; envelopes indicate variance over random seeds. See App. \ref{app:hyperparams} for hyperparameters used. See Appendix \ref{app:muj_compare} for a short discussion of the gap in performance for halfcheetah. \label{fig:brax_muj_compare} } \end{figure} \begin{figure}[!htbp] \centering \begin{overpic}[width=1.0\textwidth]{brax_engine_compare.png} \end{overpic} % \caption{Linear momentum (left), angular momentum (middle), and energy (right) non-conservation scaling for Brax as well as several other engines. Non-Brax data was adapted with permission from the authors of \cite{erez2015simulation} and plotted here for comparison. Following Erez et al., in the momentum conservation scene we disabled damping, collisions, and gravity, and randomly actuated the limbs for 1 second with approximately $0.5$~N\,m of torque per actuator per step. For energy, we additionally disabled actuators, gave every body part a random $1$~m/s kick, and measured the energy drift after 1 second of simulation. All measurements averaged over 128 random seeds with single-precision floats. \label{fig:brax_engine_compare} } \end{figure} Next, to verify that Brax's versions of MuJoCo's environments are qualitatively similar to MuJoCo's environments, we depict training curves for a standard implementation of SAC on our environments side-by-side with training curves for MuJoCo's versions. Qualitatively, for a fixed set of SAC hyperparameters, Brax environments achieve similar reward in a similar number of environment steps compared to their MuJoCo counterparts. Note that this is not meant to be a claim that we facilitate ``higher reward'', because comparing different reward functions is somewhat theoretically fraught (though Brax's reward functions are very close to the MuJoCo Gym definitions; see Appendix \ref{app:muj_compare} for more details). We intend only to demonstrate that the progression of reward gain is similar, and that Brax environments achieve qualitatively similar performance over a similar number of learning steps. Finally, we consider the simulation quality of our engine by evaluating how it performs in the ``astronaut'' diagnostic introduced by \cite{erez2015simulation}---a modified version of the humanoid scene which measures momentum and energy nonconservation as a function of simulation fidelity, depicted in Fig. \ref{fig:brax_engine_compare}. Qualitatively, Brax achieves competitive linear momentum conservation scaling owing to its maximal Cartesian coordinate representation of positions and its symplectic integration scheme. Energy conservation performance is in line with Havok and MuJoCo's Euler integrator. Comparatively, Brax does exceptionally well at angular momentum conservation. \section{Limitations and Future Work} \label{sec:limitations} In this section, we detail several important limitations and frailties of our engine. \subsection{Spring Joints} It is well known that physics engines that rely on spring constraints instead of more sophisticated Featherstone-style methods can be brittle and can require careful tuning of damping forces.
Practically, these instabilities arise as a small radius of convergence in the integrator, necessitating small integration step sizes. Worse, these instabilities grow as a function of the difference in mass scale present in a problem. While relying on spring constraints has greatly simplified the core primitives of our engine, it does mean that ensuring stability in a new physics scene can require a fair amount of tuning of damping forces, mass and inertia scale, and integration step size. Additionally, because our systems are essentially large coupled spring-mass configurations, there is more ``jitter'' in our simulation traces than in a hypothetical corresponding Featherstone simulation. This can be mitigated by increasing the strength of joint spring constraints, but this comes at the cost of a reduced maximum stable integration step size. For the environments in this release, we chose these spring constants so as to maximize simulation speed while still retaining qualitatively smooth simulation, and we will investigate Featherstone methods in future work. \subsection{Collisions} Inspired by the Tiny Differentiable Simulator\cite{heiden2021neuralsim}, we use velocity-level collision updates with Baumgarte stabilization for all of our collision primitives. We did experiment with fully springy, impulsive collisions, but found the motion quality and stability to suffer. Because of this choice, we inherit the known tuning requirements and intrinsic non-physicality of these methods\cite{baumgarte1972stabilization}. We experimented with time-of-impact based collision detection, but, similar to the authors of DiffTaichi\cite{hu2019taichi}, we found it provided little accuracy advantage for the complexity penalty it added to the codebase. Additionally, we currently only use the quadratically-scaling, naive collision detection for any colliders included in a scene. Typical physics-based sequential decision problems don't involve enough colliders for this to be a significant bottleneck, given that we can still easily parallelize over all collision primitives in a scene without straining modern accelerator memory buffers, but we imagine this will become more strained over time as tasks grow in complexity. We leave more advanced collision physics, e.g. LCP-based solvers, and more efficient collision pruning to a future release. \subsection{Jitting, JAX, and XLA} While we tout our ability to compile pythonic physics environments and learning algorithms side-by-side to XLA as a strong comparative advantage that our library inherits from JAX, this does not come without any development friction. Of most salience for end-users of Brax, JIT compilation times can sometimes approach or exceed the training time for complicated environments (i.e., compilation can take minutes). We iterated extensively on the core design patterns of Brax to ameliorate this, and in some cases, collaborated directly with the JAX development team to adjust XLA compilation heuristics on TPU to improve compilation speed and performance. Ultimately, compilation time remains a small bottleneck, particularly for learning algorithms that leverage differentiability. \subsection{Algorithms} This work presents results for our PPO and SAC implementations. While we include APG and ES in this release, they have not been as thoroughly tested, nor have we performed as many hyperparameter explorations with them. We leave it to future work to fully leverage the differentiability of our engine. 
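To make the differentiability referred to above concrete, the following is a minimal, self-contained toy sketch of the gradient-through-a-short-trajectory pattern that APG uses (Sec.~\ref{sec:brax_rl}); the point-mass dynamics and linear policy here stand in for a Brax system and are purely illustrative.
\begin{lstlisting}
# Toy APG-style sketch: differentiate a loss through a short rollout (not Brax's code).
import jax
import jax.numpy as jnp

NUM_ENVS, OBS_DIM, UNROLL = 1024, 2, 10

def env_step(obs, action):
  next_obs = obs + 0.1 * action                 # toy differentiable dynamics
  reward = -jnp.sum(next_obs ** 2, axis=-1)     # reward: stay near the origin
  return next_obs, reward

def loss_fn(params, obs):
  def body(o, _):
    o, r = env_step(o, o @ params)              # linear policy: action = obs @ params
    return o, r
  _, rewards = jax.lax.scan(body, obs, None, length=UNROLL)
  return -jnp.mean(rewards)

@jax.jit
def train_step(params, obs):
  loss, grads = jax.value_and_grad(loss_fn)(params, obs)
  return params - 1e-2 * grads, loss

params = jnp.zeros((OBS_DIM, OBS_DIM))
obs = jax.random.normal(jax.random.PRNGKey(0), (NUM_ENVS, OBS_DIM))
for _ in range(100):
  params, loss = train_step(params, obs)
\end{lstlisting}
As noted above, optimizing through longer horizons is considerably harder, which is one reason APG remains the least mature of the bundled algorithms.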
\subsection{Social Impacts} \label{sec:social_impact} Producing another version of what practitioners commonly use almost by definition further complicates the landscape of existing benchmarks, but we hope that the development velocity unlocked by our library more than makes up for this extra friction. At the same time, the democratizing effect of releasing an engine that can solve control problems quickly can be double-edged: the difference between a piece of democratizing technology and a weapon depends entirely on who is wielding it. Mastery over the control of robots represents a society-transforming opportunity; thus we hope our engine only helps to improve and accelerate the equitable automation of our future. There remains a chance, however, that by releasing a significantly faster engine, we inadvertently dramatically increase the compute spent on reinforcement learning problems, in much the same way building a new highway in a city can counter-intuitively \emph{increase} traffic\cite{litman2017generated}. At least for our own energy expenditure, the experiments we performed were done in datacenters that are on track to be fully renewably sourced by 2030\cite{googlecarbonneutral}. \begin{ack} The authors thank Erwin Coumans for invaluable advice on the subtle implementation details of physics engines, Blake Hechtman and James Bradbury for answering the authors' numerous questions and providing optimization help with JAX and XLA, Luke Metz and Shane Gu for stimulating feedback and helpful discussions throughout the development of this project, and Yuval Tassa for exceptional feedback on an early draft of this manuscript. The authors further thank Vijay Sundaram, Wright Bagwell, Matthew Leffler, Gavin Dodd, Brad Mckee, and Logan Olson for helping to incubate this project. \end{ack} \small \newpage
\section{Introduction} \label{sec_intro} This is part of a series of papers \cite{KMX1,KMX2,kmx_approximation_low_p,kmx_rumin} on geometric mapping theory in Carnot groups, in which we establish regularity, rigidity, and partial rigidity results for bilipschitz, quasiconformal, or more generally, Sobolev mappings between Carnot groups. Our focus in this paper is on the Iwasawa $N$ group for $\mathfrak{sl}(n,\mathbb{F})$ and the associated flag manifold, for $\mathbb{F}\in \{\mathbb{R},\mathbb{C},\mathbb{H}\}$. To state our main theorem we first briefly recall a few facts; see Section~\ref{sec_preliminaries} for details and references. For simplicity we will stick to the case $\mathbb{F}=\mathbb{R}$ in this introduction. Fix $n\geq 3$. Let $N\subset \operatorname{GL}(n,\mathbb{R})$ denote the subgroup of upper triangular matrices with $1$s on the diagonal, with Lie algebra $\mathfrak{n}\subset \operatorname{\fg\fl}(n,\mathbb{R})$, and Carnot group structure given by the grading $\mathfrak{n}=\oplus_i V_i$, where $V_i=\{A\in \mathfrak{n}\mid A_{jk}=0\;\text{if}\; k\neq j+i\}$ corresponds to the $i$th superdiagonal. Let $\mathcal{F}$ be the manifold of complete flags in $\mathbb{R}^n$, i.e. the collection of nested families of linear subspaces $$ \{0\}\subsetneq W_1\subsetneq\ldots\subsetneq W_{n-1}\subsetneq\mathbb{R}^n\,. $$ We consider a standard subbundle $W$ of the tangent bundle $T\mathcal{F}$ which is invariant under both the natural action $\operatorname{GL}(n,\mathbb{R})\operatorname{\curvearrowright} \mathcal{F}$ and the diffeomorphism $\psi:\mathcal{F}\rightarrow\mathcal{F}$ induced by orthogonal complement: $$ \psi(W_1,\ldots,W_{n-1})=(W_{n-1}^\perp,\ldots,W_1^\perp)\,. $$ There is a dense open subset $\hat N\subset \mathcal{F}$ and a diffeomorphism $\alpha:N\rightarrow \hat N$ carrying the horizontal bundle $V_1\subset TN$ to $W\mbox{\Large \(|\)\normalsize}_{\hat N}$, i.e. $(\hat N,W\mbox{\Large \(|\)\normalsize}_{\hat N})$ is contact diffeomorphic to $(N,V_1)$. We equip $(\mathcal{F},W)$ with a Carnot-Caratheodory distance $d_{CC}$, and let $\nu$ denote the homogeneous dimension of $(\mathcal{F},W)$. Our main result is a rigidity theorem for Sobolev mappings satisfying a nondegeneracy condition: \begin{theorem} \label{thm_main} Suppose $n\geq 4$. Let $U\subset \mathcal{F}$ be a connected open subset, and $f:U\rightarrow \mathcal{F}$ be a $W^{1,p}_{\operatorname{loc}}$-mapping for $p>\nu$, such that the Pansu differential is an isomorphism almost everywhere. Then $f$ is the restriction of a diffeomorphism $\mathcal{F}\rightarrow \mathcal{F}$ of the form $\psi^{\varepsilon}\circ g$ where $g\in \operatorname{GL}(n,\mathbb{R})$, $\varepsilon\in \{0,1\}$. \end{theorem} In the theorem and below we use the convention that $\psi^0=\operatorname{id}$ and $\psi^1=\psi$. Note that the rigidity assertion is false when $n=3$, because $(\mathcal{F},W)$ is locally equivalent to the Heisenberg group, and hence has an infinite-dimensional group of contact diffeomorphisms, which are all Sobolev mappings with nondegenerate Pansu differential. Quasiconformal homeomorphisms are $W^{1,p}_{\operatorname{loc}}$-mappings for some $p>\nu$ \cite{heinonen_koskela}, and therefore we obtain: \begin{corollary} \label{cor_main_cor} When $n\geq 4$, any quasiconformal homeomorphism $\mathcal{F}\supset U\rightarrow U'\subset \mathcal{F}$ between connected open subsets is the restriction of a diffeomorphism of the form $\psi^{\varepsilon}\circ g$ for some $g\in \operatorname{GL}(n,\mathbb{R})$, $\varepsilon\in \{0,1\}$.
\end{corollary} Since $(N,V_1)$ is contact diffeomorphic to $(\hat N,W\mbox{\Large \(|\)\normalsize}_{\hat N})$, the above results give a classification of Sobolev mappings $N\supset U\rightarrow N$ with nondegenerate Pansu differential; this applies in particular to quasiconformal homeomorphisms and quasiregular maps by \cite[Section 5]{kmx_approximation_low_p}. Corollary~\ref{cor_main_cor} was previously known for real analytic diffeomorphisms \cite{tanaka_differential_systems_graded_lie_algebras,yamaguchi_differential_systems}, $C^2$ diffeomorphisms \cite{cowling_et_al_iwasawa_n_groups,contact_conformal_maps_parabolic_geometry}, and Euclidean bilipschitz homeomorphisms \cite{lelmi}. We also obtain more refined rigidity for global quasiconformal homeomorphisms of $N$: \begin{theorem} \label{thm_global_qc_rigidity} Any quasiconformal homeomorphism $N\rightarrow N$ is affine, i.e. the composition of a graded automorphism and a left translation. \end{theorem} \bigskip \subsection*{Historical notes} We give only the briefest indication of some of the context, and the references provided are only a sampling of the literature. See \cite{KMX1,KMX2} for more extensive discussion and references. For a diffeomorphism $\mathcal{F}\supset U\rightarrow U'\subset\mathcal{F}$, the quasiconformality condition reduces (locally) to the condition of being contact, i.e. preserving the subbundle $W\subset T\mathcal{F}$. The study of contact diffeomorphisms has a long history in differential geometry and exterior differential systems going back to Cartan \cite{cartan_1904,singer_sternberg_infinite_groups_lie_cartan,tanaka_differential_systems_graded_lie_algebras,yamaguchi_differential_systems,contact_conformal_maps_parabolic_geometry,ottazzi_warhurst}. The particular case considered in this paper -- the flag manifold $\mathcal{F}$ equipped with the subbundle $W$ -- was treated in the papers \cite{cowling_et_al_iwasawa_n_groups,contact_conformal_maps_parabolic_geometry} along with other parabolic geometries. Quasiconformal homeomorphisms in $\mathbb{R}^n$, $n\geq 3$, have been heavily studied since the 1960s. In the Carnot group setting they first appeared in Mostow's rigidity theorem \cite{mostow_strong_rigidity}, and have been investigated intensely since roughly 1990, due to influential contributions of Gromov and Pansu, and applications to geometric group theory. For almost 30 years, these historical threads have merged with other developments in analysis on metric spaces \cite{heinonen_koskela,cheeger_differentiability} and PDE \cite{capogna,capogna_cowling,iwaniec_martin_quasiregular_even_dimensions}. The rigidity (or regularity) problem for quasiconformal mappings of Carnot groups crystallized over time, first appearing implicitly in the seminal paper \cite{pansu}, then in \cite[Remark 6.12]{heinonen_calculus_carnot_groups}, \cite{koranyi_math_review}, and finally taking shape with Ottazzi-Warhurst's notion of a rigid Carnot group \cite[p.2]{ottazzi_warhurst}, \cite{ottazzi_warhurst_algebraic_prolongation}, explicitly upgraded to a conjecture in \cite{KMX1}. Corollary~\ref{cor_main_cor} settles the Regularity Conjecture for the Carnot group $N$.
Rigidity of quasiconformal mappings is an instance of rigidity for (solutions to) weak differential inclusions, and more generally partial differential relations, about which there is a vast literature, see for instance \cite{reshetnyak_space_mappings_bounded_distortion,iwaniec_martin_quasiregular_even_dimensions,vodopyanov_foundations,nash54, tartar79, murat81, gromov_pdr, scheffer93, dacorogna_marcellini99, muller99, muller_sverak03, delellis_szekelyhidi09, delellis_szekelyhidi16, isett18, buckmaster_vicol19}. \bigskip\bigskip \subsection*{Discussion of the proof}~ For every $1\leq j\leq n-1$, let $\pi_j:\mathcal{F}\rightarrow G(j,n)$ be the $\operatorname{GL}(n,\mathbb{R})$-equivariant fibration given by $\pi_j(W_1,\ldots,W_{n-1})=W_j$; here $G(j,n)$ is the Grassmannian of $j$-planes in $\mathbb{R}^n$. The first step of the proof of Theorem~\ref{thm_main}, which is implemented in Sections~\ref{sec_preliminaries} and \ref{sec_sobolev_mappings_preservation_foliations}, is to show that the mapping $f$ preserves the fibers of the fibration $\pi_j$ for all $1\leq j\leq n-1$ (possibly after first post-composing $f$ with the orthogonal complement mapping $\psi:\mathcal{F}\rightarrow \mathcal{F}$). The heart of the argument is to exclude a certain type of oscillatory behavior in the Pansu differential, and this is achieved by using the Pullback Theorem from \cite{KMX1}. The second step of the proof (Section~\ref{sec_rigidity_foliation_preserving_maps}) is to show that a continuous map $U\rightarrow\mathcal{F}$ which preserves the fibrations $\pi_j$, and satisfies a certain nondegeneracy condition, must agree with the diffeomorphism $\mathcal{F}\rightarrow \mathcal{F}$ induced by some element $g\in \operatorname{GL}(n,\mathbb{R})$. Our approach to this was inspired by incidence geometry -- the Fundamental Theorem of Projective Geometry and its generalization by Tits -- although the final form is rather different. See Section~\ref{subsec_fibration_preserving_n_3_case} for a sketch of this argument in the $n=3$ case. The approaches taken in \cite{tanaka_differential_systems_graded_lie_algebras,yamaguchi_differential_systems,cowling_et_al_iwasawa_n_groups,contact_conformal_maps_parabolic_geometry,lelmi} are all based on classifying the contact vector fields, and then using an ``integration'' argument to obtain rigidity for contact mappings \cite[Theorem 3.1]{kobayashi_transformation_groups}, \cite{palais_global_formulation}. To carry out the latter step one has to know that the pushforward of a smooth contact vector field under a contact mapping is still a contact vector field (although possibly of low regularity, a priori). This pushforward assertion is obvious for $C^2$ diffeomorphisms and was shown to hold for Euclidean bilipschitz homeomorphisms in \cite{lelmi}; this appears to be the minimal regularity needed to implement such an argument. \bigskip \subsection*{Organization of the paper}~ The proof of Theorem~\ref{thm_main} is carried out in Sections~\ref{sec_sobolev_mappings_preservation_foliations}-\ref{sec_proof_thm_main}. The statement and proofs of the analogous results in the complex and quaternionic cases are in Section~\ref{sec_complex_quaternionic}. Rigidity for global quasiconformal homeomorphisms (Theorem~\ref{thm_global_qc_rigidity}) is proven in Section~\ref{sec_global_qc_homeos}. \section{Preliminaries} \label{sec_preliminaries} \subsection{The Iwasawa $N$ group, and the manifold of complete flags}~ Most of this subsection is standard material from Lie theory. 
We have tried to give a more self-contained treatment accessible to readers less familiar with Lie theory; see for instance \cite{contact_conformal_maps_parabolic_geometry} for another approach. \subsection*{Notation} Let $n\geq 2$ be an integer. We will use the following notation: \begin{itemize} \item $P^+,P^-\subset \operatorname{GL}(n,\mathbb{R})$ will denote the subgroups of upper and lower triangular matrices, respectively. \item $N\subset P^+$ denotes the subgroup of upper triangular matrices with $1$s on the diagonal. \item $G:=\operatorname{PGL}(n,\mathbb{R})=\operatorname{GL}(n,\mathbb{R})/\{\lambda \operatorname{id}\mid\lambda\neq 0\}$ is projective linear group. \item We let $P:=P^-/\{\lambda\operatorname{id}\mid \lambda\neq 0\}\subset \operatorname{GL}(n,\mathbb{R})/\{\lambda\operatorname{id}\mid \lambda \neq 0\}=G$ be the image of $P^-\subset \operatorname{GL}(n,\mathbb{R})$ under quotient map $\operatorname{GL}(n,\mathbb{R})\rightarrow \operatorname{PGL}(n,\mathbb{R})$. \item Since $N\cap P^-=\{e\}$ the quotient map $\operatorname{GL}(n,\mathbb{R})\rightarrow G$ restricts to an embedding $N\hookrightarrow G$, and we identify $N$ with its image in $G$. \item $\mathfrak{n}$, $\mathfrak{p}^\pm$, $\mathfrak{p}$, $\operatorname{\fg\fl}(n,\mathbb{R})$, and $\mathfrak{g}$ denote the Lie algebras of $N$, $P^\pm$, $P$, $\operatorname{GL}(n,\mathbb{R})$, and $G$, respectively. \item $X_{ij}\in \operatorname{\fg\fl}(n,\mathbb{R})$ is the matrix with a $1$ in the $ij$-entry and $0$s elsewhere, so \begin{equation} \label{eqn_bracket_relations} \begin{aligned} X_{i_1,j_1}X_{i_2,j_2}&=\delta_{i_2,j_1}X_{i_1,j_2}\\ [X_{i_1,j_1},X_{i_2,j_2}]&=\delta_{i_2,j_1}X_{i_1,j_2}-\delta_{i_1,j_2}X_{i_2,j_1}\,. \end{aligned} \end{equation} \end{itemize} The Carnot group structure on the upper triangular matrices $N$ is defined by the grading $\mathfrak{n}=\oplus_iV_i$ where $$ V_i=\{A\in \operatorname{\fg\fl}(n,\mathbb{R})\mid A_{jk}=0\;\text{if}\;k-j\neq i\}\,. $$ For $r\neq 0$ the Carnot dilation $\delta_r:N\rightarrow N$ is given by conjugation with diagonal matrix $\operatorname{diag}(r^{-1},\ldots,r^{-n})$. \bigskip \subsection*{The flag manifold} The {\bf flag manifold $\mathcal{F}$} is the set of (complete) flags in $\mathbb{R}^n$, i.e. the collection of nested families of linear subspaces of $\mathbb{R}^n$ $$ W_1\subset\ldots \subset W_{n-1} $$ where $W_j$ has dimension $j$; this inherits a smooth structure as a smooth submanifold of the product of Grassmannians $G(1,n)\times\ldots\times G(n-1,n)$. \begin{lemma}[Properties of $\mathcal{F}$] \label{lem_properties_of_f} \mbox{} \begin{enumerate} \item\label{item_transitive_actions} The action of $\operatorname{GL}(n,\mathbb{R})\operatorname{\curvearrowright} G(j,n)$ induces transitive actions $$ \operatorname{GL}(n,\mathbb{R})\operatorname{\curvearrowright} \mathcal{F}\quad\text{and}\quad G\operatorname{\curvearrowright} \mathcal{F}\,. $$ \item Letting $W_j^+:=\operatorname{span}(e_1,\ldots,e_j)$, $W_j^-:=\operatorname{span}(e_n,\ldots,e_{n-j+1})$, the stabilizer of the flag $(W_j^\pm)_{1\leq j\leq n-1}\in \mathcal{F}$ in $\operatorname{GL}(n,\mathbb{R})$ is $P^\pm$. \item The stabilizer of $(W_j^-)$ in $G$ is $P$. Henceforth we identify $\mathcal{F}$ with the homogeneous space (coset space) $G/P$ using the orbit map $g\mapsto g\cdot (W_j^-)$. \item The differential of the fibration $G\rightarrow G/P\simeq \mathcal{F}$ at $e\in G$ yields an isomorphism $\mathfrak{g}/\mathfrak{p}\simeq T_P(G/P)\simeq T_{(W_j^-)}\mathcal{F}$. 
\item The fibration map $G\rightarrow G/P$ is $P$-equivariant w.r.t. the action of $P$ on $G$ by conjugacy and on $G/P$ by left translation. \item The differential at $e\in G$ induces an isomorphism of $P\stackrel{\operatorname{Ad}\mbox{\Large \(|\)\normalsize}_P}{\operatorname{\curvearrowright}}\mathfrak{g}/\mathfrak{p}$ to the (isotropy) representation $P\operatorname{\curvearrowright} T_P(G/P)$; here $G\stackrel{\operatorname{Ad}}{\operatorname{\curvearrowright}} \mathfrak{g}$ is the Adjoint representation and $\operatorname{Ad}\mbox{\Large \(|\)\normalsize}_P$ is the restriction to $P$. \item \label{item_fn_fg_fp_isomorphism} The composition $\mathfrak{n}\hookrightarrow \mathfrak{g}\rightarrow \mathfrak{g}/\mathfrak{p}$ is an isomorphism. \item \label{item_p_invariance_v_1} The image $\hat V_1$ of $V_1\subset\mathfrak{n}$ under $\mathfrak{n}\rightarrow \mathfrak{g}/\mathfrak{p}\simeq T_P(G/P)$ is a $P$-invariant subspace of $T_P(G/P)$. This defines a $G$-invariant subbundle $\mathcal{H}\subset T(G/P)\simeq T\mathcal{F}$. \item \label{item_orbit_map_n} The orbit map $\alpha:N\rightarrow \mathcal{F}$ given by $\alpha(g)= g\cdot (W_j^-)$ is an $N$-equivariant embedding of $N$ onto an open subset of $\mathcal{F}$, which we denote by $\hat N\subset\mathcal{F}$. The image $\hat N$ may be characterized as the collection of flags $(W_j)$ such that $W_j\cap W_{n-j}^+=\{0\}$ for every $1\leq j\leq n-1$. \item The embedding $\alpha$ is contact with respect to the subbundles $V_1\subset TN$ and $\mathcal{H}\subset T(G/P)$. \item \label{item_delta_hat_delta} For every $r\in (0,\infty)$ we have $\alpha\circ\delta_r=\hat \delta_r\circ\alpha$, where $\hat\delta_r:\mathcal{F}\rightarrow\mathcal{F}$ is given by $\hat\delta_r(x)=g_r\cdot x$ with $g_r=\operatorname{diag}(r^{-1},\ldots,r^{-n})$. \end{enumerate} \end{lemma} \begin{proof} (\ref{item_transitive_actions})-(\ref{item_fn_fg_fp_isomorphism}) follow readily from linear algebra or basic theory of manifolds. (\ref{item_p_invariance_v_1}). To see this, it suffices to show that the image of $V_1\subset \mathfrak{n}$ under $\mathfrak{n}\hookrightarrow \operatorname{\fg\fl}(n,\mathbb{R})/\mathfrak{p}^-$ is invariant under $\operatorname{Ad}\mbox{\Large \(|\)\normalsize}_{P^-}$. Since $P^-$ is connected, it therefore suffices to see that $[\mathfrak{p}^-,V_1]\subset V_1+\mathfrak{p}^-$; this follows readily by applying \eqref{eqn_bracket_relations}. (\ref{item_orbit_map_n}). The orbit map $\alpha:N\rightarrow \mathcal{F}$ is smooth and $N$-equivariant. Since the differential of $\alpha$ at $e\in N$ is an isomorphism $D\alpha(e):\mathfrak{n}\rightarrow \mathfrak{g}/\mathfrak{p}$, the map $\alpha$ is an $N$-equivariant immersion onto an open subset. But $N\cap P^-=\{e\}$ so it is also injective, and hence $\alpha$ is an embedding. Let $Z:=\{(W_j)\in \mathcal{F}\mid W_j\cap W_{n-j}^+=\{0\}\;\text{for all}\;1\leq j\leq n-1\}$. If $(W_j)$ belongs to the orbit $\hat N=N\cdot (W_j^-)$ then $(W_j)\in Z$ since $W_j^-\cap W_{n-j}^+=\{0\}$ and $N$ fixes $W_{n-j}^+$ for all $j$. Now suppose $(W_j)\in Z$. We prove by induction on $k$ that the $N$-orbit of $(W_j)$ contains a flag $(W_j^k)$ such that $W_j^k=W_j^-$ for all $j\leq k$. Since $W_1\cap W_{n-1}^+=\{0\}$ we may choose $v=(v_1,\ldots,v_n)\in W_1$ with $v_n=1$. 
If $n\in N$ is the block matrix \begin{equation*} \left[ \begin{matrix} I& -b\\0&1 \end{matrix} \right] \end{equation*} with $b=[v_1,\ldots,v_{n-1}]^t$ we have $n\cdot v\in W_1^-$, so letting $(W_j^1):=n\cdot (W_j)$ we have $W_1^1=W_1^-$, establishing the $k=1$ case. The inductive step follows similarly, by working in the quotient $\mathbb{R}^n/W_{k-1}^-$. (11). Both $V_1$ and $\mathcal{H}$ are $N$-invariant subbundles, and $\alpha$ is $N$-equivariant, so the contact property follows from the fact that $\mathcal{H}(P)\subset T_P(G/P)$ is the image of $V_1\subset \mathfrak{n}$ under $D\alpha(e)$. (\ref{item_delta_hat_delta}). Since $g_r\in P^-$ we have $$ \alpha(\delta_r(g))=\delta_r(g)\cdot P=g_rgg_r^{-1}\cdot P=g_rg\cdot P=\hat\delta_r(\alpha(g))\,. $$ \end{proof} \bigskip \subsection*{Automorphisms} We collect some properties of automorphisms of $G$ and graded automorphisms of $N$. \begin{lemma}[Properties of $\operatorname{Aut}(G)$] \label{lem_properties_aut_g} \mbox{} \begin{enumerate} \item Transpose inverse $T\mapsto (T^t)^{-1}$ descends to a Lie group automorphism of $G$. The group $\operatorname{Aut}(G)$ of Lie group automorphisms of $G$ is generated by inner automorphisms and transpose inverse. \item Let $\tau:\operatorname{GL}(n,\mathbb{R})\rightarrow \operatorname{GL}(n,\mathbb{R})$ be the automorphism given by $$ \tau(T)=\Pi (T^t)^{-1}\Pi^{-1} $$ where $\Pi\in\operatorname{GL}(n,\mathbb{R})$ is the permutation matrix with $\Pi e_i=e_{n-i+1}$. Then the induced automorphism $\operatorname{\fg\fl}(n,\mathbb{R})\rightarrow\operatorname{\fg\fl}(n,\mathbb{R})$ -- which we also denote by $\tau$ by abuse of notation -- is given by \begin{equation} \label{eqn_tau_on_fg} (\tau(A))_{i,j}=-A_{n-j+1,n-i+1}\,. \end{equation} Then $\tau$ preserves $N$, and induces a graded automorphism $\mathfrak{n}\rightarrow\mathfrak{n}$. \item The automorphism $\tau:\operatorname{GL}(n,\mathbb{R})\rightarrow\operatorname{GL}(n,\mathbb{R})$ descends to an automorphism $G\rightarrow G$, which we also denote by $\tau$ (by further abuse of notation). \end{enumerate} \end{lemma} \begin{proof}~ (1). This follows from Theorem~\ref{diedo} since the only automorphism of $\mathbb{R}$ is the identity map. Therefore there exists $\Phi_1\in \operatorname{Aut}(G)$ in the group generated by inner automorphisms and transpose-inverse, such that $D\Phi_1=D\Phi$. Hence $\Phi_2:=\Phi_1^{-1}\circ\Phi\in \operatorname{Aut}(G)$ is the identity on the connected component of $e\in G$. By considering centralizers of elements $g\in G$ of order $2$, it is not hard to see that $\Phi_2=\operatorname{id}$, and hence $\Phi=\Phi_1$. (2) and (3) are straightforward. \end{proof} \bigskip\bigskip We now consider the group $\operatorname{Aut}_{gr}(N)$ of graded automorphisms of $N$. \bigskip \begin{lemma}[Properties of $\operatorname{Aut}_{gr}(N)$] \label{lem_properites_aut_gr_n} Assume $n\geq 4$. \begin{enumerate} \item \label{item_phi_inducing_graded_automorphism} If $\Phi\in \operatorname{Aut}(G)$ and $\Phi(N)=N$, then $\Phi$ induces a graded automorphism of $N$ if and only if $\Phi\mbox{\Large \(|\)\normalsize}_N=\tau^\varepsilon\circ I_g\mbox{\Large \(|\)\normalsize}_N$ for some $\varepsilon\in \{0,1\}$, $g=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)\in G$. \item Every graded automorphism of $N$ arises as in (1). \item For $1\leq j\leq n-1$ let $\mathfrak{k}_j\subset \mathfrak{n}$ be the Lie subalgebra generated by $\{X_{i,i+1}\}_{i\neq n-j}$, and $K_j\subset N$ be the Lie subgroup with Lie algebra $\mathfrak{k}_j$. 
A graded automorphism $N\rightarrow N$ is induced by conjugation by some $g=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ if and only if it preserves the subgroups $K_j$ for $1\leq j\leq n-1$. \end{enumerate} \end{lemma} \begin{proof} (\ref{item_phi_inducing_graded_automorphism}). Suppose $\Phi=\tau^\varepsilon\circ I_g$ for some $g=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$. Note that $I_g$ induces a graded automorphism of $N$ since $$ I_g\circ \delta_r=I_g\circ I_{g_r}=I_{gg_r}=I_{g_rg}=I_{g_r}\circ I_g=\delta_r\circ I_g\,. $$ Combining with Lemma~\ref{lem_properties_aut_g}(2) we get that $\Phi$ induces a graded automorphism of $N$, proving the ``if'' direction. Now suppose $\Phi\in \operatorname{Aut}(G)$ preserves $N$, and induces a graded automorphism $N\rightarrow N$. By Lemma~\ref{lem_properties_aut_g}(1) we have $\Phi=\tau^{\varepsilon}\circ I_g$ for some $\varepsilon\in\{0,1\}$, $g\in G$. By Lemma~\ref{lem_properties_aut_g}(2), after postcomposing with $\tau$ if necessary, we may assume without loss of generality that $\Phi=I_g$ is an inner automorphism. The condition $\Phi(N)=N$ is equivalent to saying that $g$ belongs to the normalizer of $N$ in $G$. This is just the image of $P^+$ under the quotient map $\operatorname{GL}(n,\mathbb{R})\rightarrow G$. (To see this, note that if $\hat g\in \operatorname{GL}(n,\mathbb{R})$ is a lift of $g$, then $\hat g$ normalizes $N\subset\operatorname{GL}(n,\mathbb{R})$. But $\operatorname{span}(e_1)$ is the unique fixed point of the action $N\operatorname{\curvearrowright} G(1,n)$, so $\hat g$ fixes $\operatorname{span}(e_1)$; passing to the quotient $\mathbb{R}^n/\operatorname{span}(e_1)$ and arguing by induction, one gets that $\hat g\in P^+$. ) Furthermore, after multiplying $g$ by a diagonal matrix, we may assume without loss of generality that $g\in N$. Now $I_g:N\rightarrow N$ is a graded automorphism, so for all $r>0$ we have $I_g\circ \delta_r=\delta_r\circ I_g$ , i.e. \begin{align*} &I_g\circ I_{g_r}=I_{g_r}\circ I_g\\ \implies &I_{g_rgg_r^{-1}g^{-1}}=\operatorname{id}_N\\ \implies &g_rgg_r^{-1}g^{-1}\in \operatorname{Center}(N)\\ \implies &g_rXg_r^{-1}-X\in \operatorname{Center}(\mathfrak{n}) \end{align*} where $g=\exp X$ for $X\in \mathfrak{n}$; here we are using the fact that the exponential $\exp:\mathfrak{n}\rightarrow N$ is a diffeomorphism. Calculating in $\operatorname{\fg\fl}(n,\mathbb{R})$ and letting $X=\sum_{j>i}a_{ij}X_{ij}$ we have $$ g_rXg_r^{-1}-X=\sum_{j>i}a_{ij}(r^{j-i}-1)X_{ij}\in \operatorname{Center}(\mathfrak{n})\,. $$ Therefore $X\in \operatorname{Center}(\mathfrak{n})$, and $g=\exp X\in \operatorname{Center}(N)$, and $I_g=\operatorname{id}$. (2). Let $\phi:\mathfrak{n}\rightarrow \mathfrak{n}$ be a graded automorphism of $\mathfrak{n}$. We will use the fact that for any $x\in V_1$ and any $j\ge 1$, we have $\phi\circ (\operatorname{ad} x\mbox{\Large \(|\)\normalsize}_{V_j})=(\operatorname{ad}\, \phi x\mbox{\Large \(|\)\normalsize}_{V_j})\circ \phi$, and in particular $\operatorname{rank}(\operatorname{ad}\, x|_{V_j})= \operatorname{rank}(\operatorname{ad}\, \phi x|_{V_j})$. First suppose $n=4$. Set \begin{equation} \begin{aligned} \label{eqn_x_y_z_defs} X_0=X _{23}\,,\;X_1=X _{12}\,,\;X_2=X _{34}\,,\\ Y_1=X _{13}\,,\;Y_2=-X _{24}\,,\;Z=X _{14} \end{aligned} \end{equation} Then the following are the only nontrivial bracket relations between the basis elements: \begin{equation} \label{eqn_x_y_z_brackets} \begin{aligned}~ [X_0,X_1]=-Y_1, \;\; [X_0, X_2]=-Y_2,\\ [X_1, Y_2]=-Z=[X_2, Y_1]. 
\end{aligned} \end{equation} Let $W_0:=\operatorname{span}(X_0)$, and $W:=\operatorname{span}(X_1, X_2)$. If $x\in V_1$ then $\operatorname{rank}(\operatorname{ad} x\mbox{\Large \(|\)\normalsize}_{V_1})\leq 1$ iff $x\in W$, and $\operatorname{rank}(\operatorname{ad} x\mbox{\Large \(|\)\normalsize}_{V_2})=0$ iff $x\in W_0$. It follows that $\phi (W)=W$ and $\phi (W_0)=W_0$, and so $$\phi X_0=a_0X_0\,,\quad \phi X_1=a_{11}X_1+a_{21}X_2\,,\quad \phi X_2=a_{12}X_1+a_{22}X_2$$ for some $0\not=a_0\in \mathbb{R}$, $(a_{ij})\in \operatorname{GL}(2,\mathbb{R})$. Hence \begin{align} \notag \phi Y_1&=[\phi X_1, \phi X_0]=a_0a_{11}Y_1+a_0a_{21}Y_2\\ \notag \phi Y_2&=[\phi X_2, \phi X_0]=a_0a_{12}Y_1+a_0a_{22}Y_2\\ \label{eqn_ax1_ay1} 0&=[\phi X_1,\phi Y_1]=-2a_0a_{11}a_{21}Z\\ \label{eqn_ax2_ay2}0&=[\phi X_2,\phi Y_2]=-2a_0a_{12}a_{22}Z. \end{align} {\em Case 1. $a_{11}\neq 0$.} By (\ref{eqn_ax1_ay1}) we have $a_{21}=0$, and since $\det(a_{ij})\neq 0$ it follows that $a_{22}\neq 0$, and so $a_{12}=0$ by (\ref{eqn_ax2_ay2}). Thus $\phi \mbox{\Large \(|\)\normalsize}_{V_1}$ is diagonal in the basis $X_0,X_1,X_2$. Hence it is induced by an inner automorphism $I_g:G\rightarrow G$, where $g$ is diagonal. {\em Case 2. $a_{11}=0$.} Since $\det(a_{ij})\neq 0$ it follows that $a_{12}$ and $a_{21}$ are both nonzero. Then (\ref{eqn_ax2_ay2}) gives $a_{22}=0$. Therefore modulo post-composition with the automorphism $\tau$ we get that $\phi \mbox{\Large \(|\)\normalsize}_{V_1}$ is diagonal, and so (2) holds. Now suppose $n\geq 5$. First observe that $$\{x\in V_1\mid \text{rank}(\text{ad}\; x|_{V_1})=1\}=(\mathbb{R} X_{1,2}\cup \mathbb{R} X_{n-1,n})\backslash\{0\}.$$ So $\phi $ either preserves or switches the two lines $\mathbb{R} X_{1,2}$, $\mathbb{R} X_{n-1,n}$, and therefore after postcomposing with $\tau$ if necessary, we may assume without loss of generality that $\phi$ preserves them. Then we observe that $$\{x\in V_1\mid \text{rank}(\text{ad}\; x|_{V_{n-2}})=0\}=\text{span}\{X_{2,3}, \cdots, X_{n-2, n-1}\}.$$ So $\phi (\mathfrak h_0)=\mathfrak h_0$, where $\mathfrak h_0$ is the Lie subalgebra of $\mathfrak{n}$ generated by $X_{2,3}, \cdots, X_{n-2, n-1}$. Note $\mathfrak h_0$ is isomorphic to $\mathfrak{n}_{n-2}$, i.e. the strictly upper triangular matrices in $\operatorname{\fg\fl}(n-2,\mathbb{R})$. Suppose $n=5$. In this case $\mathfrak h_0{\cap V_1}=\text{span}\{X_{2,3}, X_{3,4}\}$. We have $$\{x\in \mathfrak h_0\cap V_1\mid [x, X_{1,2}]=0\}=\mathbb{R} X_{3,4}$$ and $$\{x\in \mathfrak h_0\cap V_1\mid [x, X_{4,5}]=0\}=\mathbb{R} X_{2,3}.$$ Because $\phi $ preserves the two lines $\mathbb{R} X_{1,2}$, $\mathbb{R} X_{4,5}$ then it also preserves the two lines $\mathbb{R} X_{2,3}$, $\mathbb{R} X_{3,4}$. Hence $\phi$ is diagonal in the basis $\{X_{i,i+1}\}_{1\leq i\leq 4}$, and therefore (2) holds. Now we assume $n\ge 6$ and that (2) holds for $\mathfrak{n}_k$ with $k\le n-2$. We already observed $\phi (\mathfrak h_0)=\mathfrak h_0$, and have reduced to the case that the two lines $\mathbb{R} X_{1,2}$, $\mathbb{R} X_{n-1,n}$ are preserved by $\phi$. Since $\mathfrak h_0$ is isomorphic to $\mathfrak{n}_{n-2}$, the induction hypothesis implies that either $\phi (\mathbb{R} X_{i, i+1})=\mathbb{R} X_{i, i+1}$ for all $i=2,\cdots, n-2$ or $\phi (\mathbb{R} X_{i, i+1})=\mathbb{R} X_{n-i, n+1-i}$ for all $i=2,\cdots, n-2$. 
Since $[X_{1,2}, X_{2,3}]\not=0$ and $[X_{1,2}, X_{n-2,n-1}]=0$, and $\phi $ is an automorphism of Lie algebra, we see that $\phi (\mathbb{R} X_{i, i+1})=\mathbb{R} X_{i, i+1}$ for all $i=1,\cdots, n-1$, and so (2) holds. (3). Let $\Phi$ be a graded automorphism, so by (2) we have $\Phi=\tau^{\varepsilon}\circ I_g$ for $\varepsilon\in \{0,1\}$ and some $g$ diagonal. If $\varepsilon=0$ then $D\Phi(X_{i,i+1})\in \mathbb{R} X_{i,i+1}$, so $\Phi$ preserves $K_j$ for all $j$. On the other hand, by \eqref{eqn_tau_on_fg} we have $\tau(K_j)=K_{n-j}$, so if $\Phi(K_j)=K_j$ for all $1\leq j\leq n-1$ then $\varepsilon=0$. \end{proof} \bigskip We now consider the action of $\operatorname{Aut}(G)$ on the flag manifold. \begin{lemma} \label{lem_action_aut_g_on_f} \mbox{} \begin{enumerate} \item There is a 1-1 correspondence between cosets of $P$ and their stabilizers in $G$ with respect to the action $G\operatorname{\curvearrowright} G/P$: if $g_1,g_2\in G$ then \begin{align*} \operatorname{Stab}(g_1P)=\operatorname{Stab}&(g_2P)\quad \iff \quad g_1Pg_1^{-1}=g_2Pg_2^{-1}\\ &\iff g_1P=g_2P\,. \end{align*} \item Every $\Phi\in\operatorname{Aut}(G)$ permutes the conjugates of $P$; by (1) we thereby obtain an action of $\operatorname{Aut}(G)\stackrel{\rho}{\operatorname{\curvearrowright}}G/P$ defined by \begin{equation} \label{eqn_aut_g_action_def} \operatorname{Stab}(\Phi\cdot gP)=\Phi(\operatorname{Stab}(gP))\,. \end{equation} More explicitly, if $\Phi(P)=hPh^{-1}$ then \begin{equation} \label{eqn_aut_g_action_def_2} \Phi\cdot gP=\Phi(g)hP\,; \end{equation} in particular $\rho(\Phi)$ defines a smooth diffeomorphism. \item For every $\Phi\in \operatorname{Aut}(G)$ the map $$ \rho(\Phi):G/P\longrightarrow G/P $$ is $G$-equivariant from $G\stackrel{\ell}{\operatorname{\curvearrowright}}G/P$ to $G\stackrel{\ell_\Phi}{\operatorname{\curvearrowright}}G/P$, where $\ell(\bar g)( gP):=\bar ggP$ and $\ell_\Phi(\bar g)(gP):=\Phi(\bar g)gP$. \item The action $\operatorname{Aut}(G)\stackrel{\rho}{\operatorname{\curvearrowright}} G/P$ preserves the horizontal subbundle $\mathcal{H}\subset T(G/P)$. \item If $\Phi_0\in \operatorname{Aut}(G)$ is transpose-inverse, then $\Phi_0\cdot (W_j)=(W_j^\perp)$ for every $(W_j)\in \mathcal{F}$, i.e. $\rho(\Phi_0)=\psi$. \end{enumerate} \end{lemma} \begin{proof}(1). Note that the normalizer of $P$ in $G$ is $P$ itself. Therefore \begin{align*} \operatorname{Stab}&(g_1P)=\operatorname{Stab}(g_2P)\quad \iff \quad g_1Pg_1^{-1}=g_2Pg_2^{-1}\\ &\iff (g_2^{-1}g_1)P(g_2^{-1}g_1)^{-1}=P\quad \iff\quad g_2^{-1}g_1\in P\\ &\iff g_1P=g_2P\,. \end{align*} (2). Note that $\tau(P)=P$ so $\tau$ permutes the conjugates of $P$; by Lemma~\ref{lem_properties_aut_g}(1) it follows that every $\Phi\in\operatorname{Aut}(G)$ permutes the conjugates of $P$. Since $\operatorname{Stab}(gP)=gPg^{-1}$, if $\Phi(P)=hPh^{-1}$, then for every $g\in G$ \begin{align*} \operatorname{Stab}(\Phi\cdot gp)=\Phi(\operatorname{Stab}(gP))=\Phi(gPg^{-1})\\ =\Phi(g)hPh^{-1}(\Phi(g))^{-1}=\operatorname{Stab}(\Phi(g)hP) \end{align*} so $\Phi\cdot gP=\Phi(g)hP$. (3). For $g,\bar g\in G$, by (2) we have \begin{align*} \Phi\cdot(\ell(\bar g)\cdot gP)=\Phi\cdot(\bar ggP)=\Phi(\bar gg)hP\\ =\ell(\Phi(\bar g))\cdot(\Phi(g)hP)=\ell_\Phi(\bar g)\cdot(\Phi\cdot gP) \end{align*} as claimed. (4). If $\Phi=I_{\bar g}$ is an inner automorphism, then $I_{\bar g}\cdot gP=\bar ggP$ by \eqref{eqn_aut_g_action_def_2}; hence $I_{\bar g}\cdot$ preserves $\mathcal{H}$ by Lemma~\ref{lem_properties_of_f}(9). 
In the case $\Phi=\tau$, we have $\tau(P)=P$, so by \eqref{eqn_aut_g_action_def_2} we have $\tau\cdot gP=\tau(g)P$, and since $\tau(X_{i,i+1})=X_{n-i,n-i+1}$ we have $\tau(V_1)=V_1$. It follows that the differential of $\tau\cdot : G/P\rightarrow G/P$ at $P$ preserves $\mathcal{H}(P)$. Using (3) and the fact that the actions $\ell$ and $\ell_\tau$ preserve $\mathcal{H}$, we conclude that $\tau\cdot$ preserves $\mathcal{H}$. By Lemma~\ref{lem_properties_aut_g} the group $\operatorname{Aut}(G)$ is generated by $\tau$ and the inner automorphisms, so (4) follows. (5). Let $\Pi\in \operatorname{GL}(n,\mathbb{R})$ be the permutation matrix with $\Pi e_i=e_{n-i}$, so $\Phi_0(P)=\Pi P\Pi^{-1}$. If $(W_j)=g\cdot (W_j^-)$, then using \eqref{eqn_aut_g_action_def_2} we have \begin{align*} \Phi_0\cdot (W_j)&=\Phi_0\cdot (g\cdot(W_j^-))=(\Phi_0(g)\Pi)\cdot(W_j^-)\\ &=((g^{-1})^t\Pi W_j^-)=((g^{-1})^tW_j^+)\,. \end{align*} Since $(g^{-1})^tW_j^+$ is orthogonal to $gW_{n-j}^-$ for every $1\leq j\leq n-1$, assertion (5) follows. \end{proof} \bigskip \subsection*{Fibrations between flag manifolds} We now consider partial flags and the associated flag manifolds. If $\Sigma\subset \{1,\ldots,n-1\}$, we let $\mathcal{F}_{\Sigma}$ be the collection of partial flags $(W_j)_{j\in \Sigma}$ where $W_j\subset \mathbb{R}^n$ has dimension $j$. Then, as in the case when $\Sigma=\{1,\ldots,n-1\}$, the set $\mathcal{F}_{\Sigma}$ is a smooth submanifold of the product $\prod_{j\in \Sigma}G(j,n)$ of Grassmannians, and the actions $G\operatorname{\curvearrowright} G(j,n)$ yield a transitive smooth action $G\operatorname{\curvearrowright} \mathcal{F}_{\Sigma}$. Taking $(W_j^-)_{j\in \Sigma}$ to be the basepoint, and letting $P_{\Sigma}=\operatorname{Stab}((W_j^-)_{j\in \Sigma})\subset G$ be its stabilizer, we obtain a $G$-equivariant diffeomorphism $G/P_{\Sigma}\rightarrow \mathcal{F}_{\Sigma}$; we identify $\mathcal{F}_{\Sigma}$ with $G/P_{\Sigma}$ accordingly. We will be interested in the case when $\Sigma=\{j\}$ for some $j\in \{1,\ldots,n-1\}$, so to simplify notation we let $\mathcal{F}_j:=\mathcal{F}_{\{j\}}=G(j,n)$. \begin{lemma} \label{lem_properties_fibration_between_flag_manifolds} \mbox{} \begin{enumerate} \item If $\Sigma_1\subset \Sigma_2\subset \{1,\ldots,n-1\}$ then we obtain a smooth $G$-equivariant fibration $\pi_{\Sigma_1,\Sigma_2}:\mathcal{F}_{\Sigma_2}\rightarrow \mathcal{F}_{\Sigma_1}$ by ``forgetting subspaces'', i.e. $\pi_{\Sigma_1,\Sigma_2}((W_j)_{j\in \Sigma_2}):=(W_j)_{j\in \Sigma_1}$. \item The fiber of $\pi_{\Sigma_1,\Sigma_2}$ passing through $(W_j^-)$ is the $P_{\Sigma_1}$ orbit $P_{\Sigma_1}\cdot (W_j^-)$, which is diffeomorphic to $P_{\Sigma_1}/P_{\Sigma_2}$. \item To simplify notation, for $j\in \{1,\ldots,n-1\}$ we let $\mathcal{F}_j:=\mathcal{F}_{\{j\}}=G(j,n)$ and $$\pi_j:=\pi_{\{j\},\{1,\ldots,n-1\}}:\mathcal{F}=\mathcal{F}_{\{1,\ldots,n-1\}}\rightarrow \mathcal{F}_j\,, $$ so the fiber of $\pi_j$ passing through the basepoint $P$ is $P_jP$. \item The intersection of the fiber $\pi_j^{-1}(W_j^-)$ with $\hat N=N\cdot (W_j^-)$ is $K_j\cdot (W_j^-)$, where $K_j\subset N$ is as in Lemma~\ref{lem_properites_aut_gr_n}(3). Hence the orbit map $\alpha:N\rightarrow \hat N$ carries the coset foliation of $K_j$ to the foliation defined by $\pi_j$.
\end{enumerate} \end{lemma} \begin{proof} (1) and (2) are a special case of a general fact: if $K$ is a Lie group and $H_1\subset H_2\subset K$ are closed subgroups, the quotient map $K/H_1\rightarrow K/H_2$ is a $K$-equivariant fibration; the fiber passing through $H_1$ is $H_2H_1\subset K/H_1$ and it is diffeomorphic to $H_2/H_1$. (4). The subbundle of $T(G/P)$ tangent to the fibers of the fibration $\pi_j:G/P\rightarrow G/P_j$ is $G$-invariant, and its value at the basepoint $P\in G/P$ is the subspace $\mathfrak{p}_j/\mathfrak{p}$. The subbundle of $TN$ tangent to the coset foliation of $K_j$ maps under the orbit map $\alpha:N\rightarrow \hat N$ to an $N$-invariant subbundle of $T\hat N$ whose value at $P$ is the subspace $(\mathfrak{k}_j+\mathfrak{p})/\mathfrak{p}$. But $\mathfrak{k}_j+\mathfrak{p}=\mathfrak{p}_j$, so the two subbundles are the same. It follows that $\pi_j^{-1}(W_j^-)\cap\hat N$ is the union of a closed set of $K_j$-orbits in $\hat N$; however, $\pi_j^{-1}(W_j^-)\cap\hat N$ is invariant under the action of $\hat \delta_r$, so it can contain only one $K_j$-orbit. \end{proof} \bigskip \subsection*{Dynamics of Carnot dilations}~ The following result is a special case of \cite[Lemma 3.9]{tits_alternative}; we give a self-contained proof for the reader's convenience. \begin{lemma} \label{lem_dilation_dynamics} Pick $1\leq j\leq n-1$. \begin{enumerate} \item Let $K_1\subset \mathcal{F}_1$ be a compact subset disjoint from $Z_1:=\{W_1\in \mathcal{F}_1\mid W_1\subset W_{n-j}^+\}$, and $U_1\subset\mathcal{F}_1$ be an open subset containing $Z_2:=\{W_1\in \mathcal{F}_1\mid W_1\subset W_j^-\}$. Then there exists $\bar r_1=\bar r_1(K_1,U_1)>0$ such that for every $r\leq \bar r_1$ we have $\hat \delta_r(K_1)\subset U_1$. \item Let $K\subset \mathcal{F}_j$ be a compact subset such that $W_j\cap W_{n-j}^+=\{0\}$ for every $W_j\in K$, and $U\subset\mathcal{F}_j$ be an open subset containing $W_j^-$. Then there exists $\bar r=\bar r(K,U)>0$ such that for every $r\leq \bar r$ we have $\hat \delta_r(K)\subset U$. \end{enumerate} \end{lemma} \begin{proof} (1). Let $\pi_{W_{n-j}^+}$ and $\pi_{W_j^-}$ be the projections onto the summands of the decomposition $\mathbb{R}^n=W_{n-j}^+\oplus W_j^-$. Define $\hat\beta:\mathbb{R}^n\setminus W_{n-j}^+\rightarrow [0,\infty)$ by $\hat\beta(v):=\frac{\|\pi_{W_{n-j}^+}v\|}{\|\pi_{W_j^-}v\|}$. Note that by homogeneity $\hat\beta$ descends to $\beta:\mathcal{F}_1\setminus Z_1\rightarrow [0,\infty)$. Since $K_1$ is compact and $U_1$ is open, we may choose $0<C_2\leq C_1<\infty$ such that $\beta(K_1)\subset [0,C_1]$ and $\beta^{-1}([0,C_2])\subset U_1$. If $v\in \mathbb{R}^n\setminus W_{n-j}^+$ and $r\leq 1$ then \begin{align*} \|\pi_{W_{n-j}^+}(\hat\delta_rv)\|&=\|\sum_{i=1}^{n-j}r^{-i}v_i\|\leq r^{-(n-j)}\|\pi_{W_{n-j}^+}v\|\\ \|\pi_{W_j^-}(\hat\delta_rv)\|&=\|\sum_{i=n-j+1}^nr^{-i}v_i\|\geq r^{-(n-j+1)}\|\pi_{W_j^-}v\|\\ \implies&\hat\beta(\hat\delta_rv)\leq r\hat\beta(v)\,. \end{align*} Hence for $W_1\in\mathcal{F}_1\setminus Z_1$ and $r\leq 1$ we have $\beta(\hat\delta_rW_1)\leq r\beta(W_1)$. Taking $\bar r_1:=C_2/C_1$, if $r\leq \bar r_1$ then $$ \hat\delta_r(K_1)\subset\hat \delta_r(\beta^{-1}([0,C_1]))\subset \beta^{-1}([0,C_2])\subset U_1\,. $$ (2).
Note that $$K_1:=\{W_1\in\mathcal{F}_1\mid \exists W_j\in K\;\text{s.t.}\;W_1\subset W_j\}$$ is a compact subset of $\mathcal{F}_1\setminus Z_1$, and there is an open subset $U_1\subset\mathcal{F}_1$ containing $Z_2$ such that if $W_j\in \mathcal{F}_j$ and $\{W_1\in\mathcal{F}_1\mid W_1\subset W_j\}\subset U_1$, then $W_j\in U$. Let $\bar r=\bar r_1(K_1,U_1)$ be as in (1), and choose $W_j\in K$, $r\leq \bar r$. Then by the choice of $r$, for every $W_1'\in\mathcal{F}_1$ with $W_1'\subset W_j$ we have $\hat\delta_r(W_1')\in U_1$. Therefore $\{W_1\in \mathcal{F}_1\mid W_1\subset \hat\delta_r(W_j)\}\subset U_1$, and hence $\hat \delta_r(W_j)\in U$, by our assumption on $U_1$. \end{proof} \bigskip\bigskip \subsection{Sobolev mappings and the Pullback Theorem} \label{subsec_sobolev_mappings_pullback_theorem}~ We give a very brief discussion here, and refer the reader to \cite[Section 2]{KMX1} and the references therein for more details. Let $H$, $H'$ be Carnot groups with gradings $\mathfrak{h}=\oplus_jV_j$, $\mathfrak{h}'=\oplus_j V_j'$, Carnot dilations denoted by $\delta_r$, and Carnot-Caratheodory distance functions $d_{CC}$, $d_{CC}'$. Let $\nu=\sum_jj\dim V_j$ be the homogeneous dimension of $H$. Choose $p>\nu$. If $U\subset H$ is open and $f:U\rightarrow H'$ is a continuous map then $f$ is a $W^{1,p}_{\operatorname{loc}}$-mapping if there exists $g\in L^p_{\operatorname{loc}}(U)$ such that for every $1$-Lipschitz function $\psi:H'\rightarrow \mathbb{R}$ and every unit length $X\in V_1$, the distribution derivative $X(\psi\circ f)$ satisfies $|X(\psi\circ f)|\leq g$ almost everywhere. If $f:H\supset U\rightarrow U'\subset H'$ is a quasiconformal homeomorphism then $f$ is a $W^{1,p}_{\operatorname{loc}}$-mapping for some $p>\nu$. If $f:U\rightarrow H'$ is a $W^{1,p}_{\operatorname{loc}}$-mapping then for a.e. $x\in U$ the map $f$ has a Pansu differential, i.e. letting $f_x:=\ell_{f(x)^{-1}}\circ f\circ \ell_x$, $f_{x,r}:= \delta_r^{-1}\circ f_x\circ \delta_r$ there is a graded group homomorphism $D_Pf(x):H\rightarrow H'$ such that $$ f_{x,r}\stackrel{C^0_{\operatorname{loc}}}{\longrightarrow} D_Pf(x) $$ as $r\rightarrow 0$. A Lie group homomorphism $\Phi: H\rightarrow H'$ is graded if $\delta_r\circ \Phi=\Phi\circ \delta_r$ for all $r>0$. We will often use $D_Pf(x)$ to denote the induced graded homomorphism of Lie algebras $\mathfrak{h}\rightarrow \mathfrak{h}'$. We let $\{X_j\}_{1\leq j\leq \dim H}$ be a graded basis for $\mathfrak{h}$, and $\{\th_j\}_{1\leq j\leq \dim H}$ be the dual basis. The {\em weight} of a subset $I\subset \{1,\ldots,\dim H\}$ is given by $$ \operatorname{wt} I:=-\sum_{i\in I}\deg i $$ where $\deg:\{1,\ldots,\dim H\}\rightarrow \{1,\ldots, s\}$ is defined by $\deg i=j$ iff $X_i\in V_j$. For a non-zero left-invariant form $\alpha = \sum_{I} a_I \theta_I$ we define $\operatorname{wt}(\alpha) = \max \{ \operatorname{wt} I : a_I \ne 0\}$ and set $\operatorname{wt}(0):=-\infty$; here $\th_I$ denotes the wedge product $\Lambda_{i\in I}\th_i$. We use primes for the objects associated with $H'$. Fix $p>\nu$, and let $f:U\rightarrow H'$ be a $W^{1,p}_{\operatorname{loc}}$-mapping for some open subset $U\subset H$. If $\omega\in\Omega^k(H')$ is a differential $k$-form with continuous coefficients, the {\em Pansu pullback of $\omega$} is the $k$-form with measurable coefficients $f_P^*\omega$ given by $$ f_P^*\omega(x):=(D_Pf(x))^*\omega(f(x))\,, $$ where $D_Pf(x):\mathfrak{h}\rightarrow \mathfrak{h}'$ is the Pansu differential of $f$ at $x\in U$. 
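As a simple illustration of these conventions (included only for orientation, and not used in the sequel): if $H$ is the Heisenberg group, so that $\mathfrak{h}=V_1\oplus V_2$ with $\dim V_1=2$ and $\dim V_2=1$, then $\nu=4$; if $\th_1,\th_2$ are dual to a basis of $V_1$ and $\th_3$ is dual to a basis of $V_2$, then $\operatorname{wt}(\th_1\wedge\th_2)=-2$ while $\operatorname{wt}(\th_1\wedge\th_3)=\operatorname{wt}(\th_2\wedge\th_3)=-3$. Thus two left invariant forms of the same degree may have different weights, and it is the weight, rather than the degree alone, that enters the hypotheses of the Pullback Theorem below.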
We will use the following special case of \cite[Theorem 4.2]{KMX1}: \begin{theorem}[Pullback Theorem (special case)] \label{co:pull_back2} Suppose $\varphi \in C_c^\infty(U)$ and that $\alpha$ and $\beta$ are closed left invariant forms which satisfy \begin{equation} \label{eq:weight_special} \deg \alpha + \deg \beta = N -1 \quad \hbox{and} \quad \operatorname{wt}(\alpha) + \operatorname{wt}(\beta) \le -\nu + 1. \end{equation} Then \begin{equation} \label{eq:pull_back_identity_special} \int_U f_P^*(\alpha) \wedge d(\varphi \beta) = 0. \end{equation} \end{theorem} \bigskip We now consider Sobolev mappings on the flag manifold. Let $f:\mathcal{F}\supset U\rightarrow \mathcal{F}$ be a continuous map. Then: \begin{itemize} \item $f$ is a {\bf $W^{1,p}_{\operatorname{loc}}$-mapping} if for every $x\in U$ there is an open neighborhood $V$ of $x$ and group elements $g,g'\in G=\operatorname{PGL}(n,\mathbb{R})$ such that $V\subset g\cdot \hat N$, $f(V)\subset g'\cdot\hat N$, and the composition \begin{equation} \label{eqn_f_in_charts} (\rho(g)\circ\alpha)^{-1}(V)\stackrel{\rho(g)\circ \alpha}{\longrightarrow}V\stackrel{f}{\longrightarrow}g'\cdot\hat N\stackrel{(\rho(g')\circ\alpha)^{-1}}{\longrightarrow}N \end{equation} is a $W^{1,p}_{\operatorname{loc}}$-mapping. \item The map $f$ is {\bf Pansu differentiable at $x\in U$} if for some $V$ and $g,g'\in G$ as above, the composition \eqref{eqn_f_in_charts} is Pansu differentiable at $(\rho(g)\circ\alpha)^{-1}(x)$. By the chain rule for Pansu differentials, Pansu differentiability is independent of the choice of $g,g'$, and the resulting Pansu differential $\mathfrak{n}\rightarrow \mathfrak{n}$ is well-defined up to pre/post composition with graded automorphisms. In particular, the property of being an isomorphism is well-defined. \end{itemize} Equivalently, one may work directly with the flag manifold as an equiregular subriemannian manifold, and use the notion of Pansu differential in that setting (see \cite[Section 1.4]{gromov_carnot_caratheodory_space_seen_from_within}, \cite{vodopyanov_carnot_manifolds}, and \cite[Appendix A]{KMX1}). \bigskip \section{Sobolev mappings and the (virtual) preservation of coset foliations}~ \label{sec_sobolev_mappings_preservation_foliations} The main result in this section is the following: \begin{lemma} \label{lem_main_no_oscillation} Let $n\ge 4$. Let $U\subset N$ be a connected open subset, and for some $p>\nu$ let $f:N\supset U\rightarrow N$ be a $W^{1,p}_{\operatorname{loc}}$-mapping whose Pansu differential is an isomorphism almost everywhere. Then after possibly composing with $\tau$, if necessary, for a.e. $x\in U$ the Pansu differential $D_Pf(x)$ preserves the subspace $\mathbb{R} X_{i,i+1}\subset V_1$ for every $1\leq i\leq n-1$. \end{lemma} \begin{corollary} \label{cor_preservation_coset_foliation} Let $n\ge 4$ and $f:U\rightarrow N$ be as in Lemma~\ref{lem_main_no_oscillation}, and $K_j$ be as in Lemma~\ref{lem_properites_aut_gr_n}(3). Then after possibly composing with $\tau$, if necessary, for every $1\leq j\leq n-1$, $f$ locally preserves the coset foliation of $K_j$, i.e. for every $x\in U$ there is an $r>0$ such that for every $g\in N$ the image of $gK_j\cap B(x,r)$ under $f$ is contained in a single coset of $K_j$. \end{corollary} The corollary follows from Lemma~\ref{lem_main_no_oscillation} and \cite[Lemma 2.30]{KMX2}. Note that Lemma~\ref{lem_main_no_oscillation} fails when $n=3$ (when $N$ is a copy of the Heisenberg group), even for automorphisms.
Examples from \cite{kmsx_infinitesimally_split_globally_split} show that when $n=3$ the conclusion of Lemma~\ref{lem_main_no_oscillation} can fail even for bilipschitz mappings whose Pansu differential preserves the splitting $V_1=\mathbb{R} X_{1,2}\oplus \mathbb{R} X_{2,3}$ for a.e. $x$. \bigskip \begin{proof}[Proof of Lemma~\ref{lem_main_no_oscillation}] We begin with the $n=4$ case of the lemma. As in the proof of Lemma~\ref{lem_properites_aut_gr_n}, we let \begin{equation*} \begin{aligned} X_0=X_{23}\,,\;X_1=X_{12}\,,\;X_2=X_{34}\,,\\ Y_1=X_{13}\,,\;Y_2=-X_{24}\,,\;Z=X_{14}\,. \end{aligned} \end{equation*} Let $\alpha_0, \alpha_1, \alpha_2$, $\beta_1, \beta_2$, $\gamma$ be the basis of left invariant $1$-forms that are dual to $X_0, X_1, X_2$, $Y_1, Y_2$, $Z$. Then using \eqref{eqn_x_y_z_brackets} we have \begin{align*} d\alpha_0=d\alpha_1=d\alpha_2=0;\\ d \beta_1=\alpha_0\wedge\alpha_1,\;\;\; d \beta_2=\alpha_0\wedge \alpha_2;\\ d\gamma=\alpha_1\wedge \beta_2+\alpha_2\wedge\beta_1. \end{align*} Let $\omega:=\alpha_1\wedge\beta_1\wedge\gamma$. By Lemma \ref{lem_properites_aut_gr_n}(2) on graded automorphisms, $$ f_P^*\omega=u_1\alpha_1\wedge \beta_1\wedge\gamma+u_2\alpha_2\wedge \beta_2\wedge \gamma $$ where $u_1$, $u_2$ are measurable and with $S_i:=\{u_i\neq 0\}$ the union $S_1\cup S_2$ has full measure and $S_1\cap S_2$ is null. Let $\eta=\alpha_2\wedge\beta_2$ and pick $\varphi\in C^\infty_c(U)$. Then $\omega$ and $\eta$ are closed left invariant forms and their degrees and weights satisfy the assumption of the pullback theorem (Theorem \ref{co:pull_back2}), and so $$\int_U f_P^* \omega\wedge d(\varphi \eta)=0\,.$$ As $ d(\varphi \eta)=d\varphi\wedge \eta$ we have $f_P^* \omega\wedge d(\varphi \eta)=\pm u_1(X_0\varphi) d\operatorname{vol}$ and therefore $\int_U u_1(X_0\varphi)d \operatorname{vol}=0$. Since $\varphi$ was arbitrary, it follows that $X_0u_1=0$ in the sense of distributions. Similarly by picking $\eta=\alpha_0\wedge\beta_2$ we obtain $X_2u_1=0$. It follows that $Y_2u_1=[X_2,X_0]u_1=X_2X_0u_1-X_0X_2u_1=0$. Let $H_i$ ($i=1, 2$) be the Lie subgroup of $N$ whose Lie algebra is generated by $X_0$ and $X_i$. Then we see that the restriction of $u_1$ to almost every left coset $gH_2$ is locally constant almost everywhere. Consequently $X_0\chi_{S_1}=X_2\chi_{S_1}=Y_2\chi_{S_1}=0$, where $\chi_S$ denotes the characteristic function of a subset $S\subset U$. Similarly by using $\eta=\alpha_1\wedge \beta_1$, $\alpha_0\wedge \beta_1$, we obtain $X_0u_2=X_1u_2=Y_1u_2=0$ and $X_0\chi_{S_2}=X_1\chi_{S_2}=Y_1\chi_{S_2}=0$. As $\chi_{S_1}+\chi_{S_2}=1$, we infer that $X_i\chi_{S_j}=0$ for all $0\le i\le 2$ and $j=1, 2$. Since $X_0, X_1, X_2$ generate $\mathfrak n$, this gives $X\chi_{S_j}=0$ for every $X\in \mathfrak{n}$, and hence $\chi_{S_j}$ is locally constant a.e. By the connectedness of $U$, $\chi_{S_j}$ is constant a.e. As $\chi_{S_1}$ takes on only two possible values ($0$ or $1$), we have $\chi_{S_1}=0$ a.e. or $\chi_{S_1}=1$ a.e. If $\chi_{S_1}=1$ a.e., then by the definition of $S_1$ we have $D_Pf(x)(\mathbb R X_{i})=\mathbb R X_{i}$ for all $0\le i\le 2$ and a.e. $x\in U$. If $\chi_{S_1}=0$ a.e., then $\tau\circ D_Pf(x)(\mathbb R X_{i})=\mathbb R X_{i}$ for a.e. $x\in U$. Now assume $n\geq 5$. Let $\{\th_{i,j}\mid 1\le i<j\le n\}$ be the basis of left invariant $1$-forms on $N$ that are dual to the basis $\{X_{i,j}\mid 1\le i<j\le n\}$ of $\mathfrak{n}$.
Then we have \begin{equation} \label{eqn_dthij} d \th_{i,j}=\left\{ \begin{array}{rl} 0& \text{ if } j=i+1\\ -\sum_{k=i+1}^{j-1}\th_{i, k}\wedge \th_{k,j} & \text{ if } j>i+1 \end{array}\right. \end{equation} Let $\omega_+=\th_{1,2}\wedge\th_{1,3}\wedge\cdots\wedge \th_{1,n}$ and $\omega_-=\th_{n-1,n}\wedge\th_{n-2,n}\wedge\cdots\wedge \th_{1,n}$. Note that $\omega_+$ is closed since by \eqref{eqn_dthij} we have $d\th_{1,2}=0$, and $d\th_{1,j}=-\sum_{k=2}^{j-1}\th_{1,k}\wedge\th_{k,j}$ for $j\ge 3$ and so $\th_{1,2} \wedge\cdots \wedge \th_{1, j-1}\wedge d\th_{1,j}=0$. Similarly $\omega_-$ is closed. By Lemma~\ref{lem_properites_aut_gr_n} we have $$f^*_P(\omega_+)=u_+\omega_++u_-\omega_-$$ for some measurable functions $u_+$, $u_-$. For $2\le k\le n-1$, let $$\eta_{k-}=\bigwedge_{2\le i<j\le n, (i,j)\not=(k, k+1)} \th_{i,j},$$ and for $1\le k\le n-2$, let $$\eta_{k+}=\bigwedge_{1\le i<j\le n-1, (i,j)\not=(k, k+1)} \th_{i,j}.$$ We show that $\eta_{k-}$ is closed. First we have $d\th_{l, l+1}=0$. For $m>l+1$, $d\th_{l, m}=-\sum_{s=l+1}^{m-1}\th_{l, s}\wedge\th_{s, m}$ and at least one of the two terms $\th_{l, s}$, $\th_{s, m}$ is already present in $$\bigwedge_{2\le i<j\le n, (i,j)\not=(k, k+1), (l,m)} \th_{i,j};$$ it follows that $d\th_{l, m}\wedge \bigwedge_{2\le i<j\le n, (i,j)\not=(k, k+1), (l,m)} \th_{i,j}=0$. Similarly $\eta_{k+}$ is closed. We next show that the degree and weight conditions in the pullback theorem (Theorem \ref{co:pull_back2}) are satisfied: for $\omega_+$ and $\eta=\eta_{k-}$ or $\eta_{k+}$ we have $\text{degree}(\omega_+)+\text{degree}(\eta)=N-1$ and $\text{wt}(\omega_+)+\text{wt}(\eta)=-\nu+1$. First these conditions are satisfied by $\omega_+$ and $\eta_-$ as $\omega_+\wedge \eta_-=\pm\bigwedge_{(i,j)\not={k, k+1}}\theta_{ij}$ and $\theta_{k, k+1}$ has degree one and weight $-1$. These conditions are also satisfied by $\omega_+$ and $\eta_+$ as $\tau(\eta_-)=\pm\eta_+$ and so $\eta_-$, $\eta_+$ have the same degree and weight. Hence by Theorem \ref{co:pull_back2}, for any $\varphi\in C^\infty_c(U)$, we have $\int_U f_P^*(\omega_+)\wedge d(\varphi \eta)=0$. As $d(\varphi \eta)=d\varphi\wedge \eta$, we get $f_P^*(\omega_+)\wedge d(\varphi \eta_{k-})=\pm u_+X_{k,k+1}\varphi \text{Vol}$ (for $2\le k\le n-1$) and $f_P^*(\omega_+)\wedge d(\varphi \eta_{k+})=\pm u_-X_{k,k+1}\varphi \text{Vol}$ (for $1\le k\le n-2$). It follows that distributionally we have $X_{k,k+1} u_+=0$ for all $2\le k\le n-1$ and $X_{k,k+1} u_-=0$ for all $1\le k\le n-2$. Let $S_+\subset U$ be the subset where the Pansu differential of $f$ fixes all the directions $\mathbb{R} X_{i, i+1}$. Similarly let $S_-\subset U$ be the set of points $x\in U$ such that $\tau\circ Df_x$ fixes all the directions $\mathbb{R} X_{i, i+1}$. Again by Lemma \ref{lem_properites_aut_gr_n} (2), $S_+\cup S_-$ has full measure in $U$ and $S_+\cap S_-$ is null. Notice that $S_+$ agrees with $\{x\in U\mid u_+(x)\not=0\}$ up to a set of measure $0$ and $S_-$ agrees with $\{x\in U\mid u_-(x)\not=0\}$ up to a set of measure $0$. Hence $X_{k,k+1} \chi_{S_+}=0$ for all $2\le k\le n-1$ and $X_{k,k+1} \chi_{S_-}=0$ for all $1\le k\le n-2$. Since $\chi_{S_+}+\chi_{S_-}=1$ we have $X_{k,k+1} \chi_{S_+}=0$ for all $1\le k\le n-2$. So $X_{k,k+1} \chi_{S_+}=0$ for all $1\le k\le n-1$. Since the $X_{k, k+1}$, $1\le k\le n-1$ generates the Lie algebra $\mathfrak n$, we see that $X \chi_{S_+}=0$ for any $X\in \mathfrak n$ and we conclude as in the $n=4$ case. 
\end{proof} \bigskip \section{Rigidity for fibration preserving maps} \label{sec_rigidity_foliation_preserving_maps} In this section we show that mappings of $\mathcal{F}$ which respect the fibrations $\pi_j:\mathcal{F}\rightarrow \mathcal{F}_j$ for all $1\leq j\leq n-1$ are locally projective. \begin{definition} A mapping $f:\mathcal{F}\supset U\rightarrow \mathcal{F}$ is {\bf fibration-preserving} if for every $1\leq j\leq n-1$, and every $W_j\in \mathcal{F}_j$, the image of $U\cap \pi_j^{-1}(W_j)$ under $f$ is contained in $\pi_j^{-1}(W_j')$ for some $W_j'\in\mathcal{F}_j$. Equivalently, letting $U_j:=\pi_j(U)$, there exist mappings $f_j:U_j\rightarrow \mathcal{F}_j$ such that $\pi_j\circ f=f_j\circ \pi_j$. A mapping $f:\mathcal{F}\supset U\rightarrow \mathcal{F}$ is {\bf locally fibration-preserving} if it is fibration-preserving near any point in $U$. \end{definition} \begin{proposition} \label{thm_fiber_preserving_projective}~ Suppose $n\geq 3$ and $f:U\rightarrow \mathcal{F}$ is a continuous, locally fibration-preserving mapping, where $U\subset \mathcal{F}$ is a connected open subset. If for some $x\in U$ the Pansu differential $D_Pf(x)$ exists and is an isomorphism (see Subsection~\ref{subsec_sobolev_mappings_pullback_theorem}) then $f$ agrees with $g:\mathcal{F}\rightarrow \mathcal{F}$ for some $g\in \operatorname{PGL}(n,\mathbb{R})$. \end{proposition} \bigskip \subsection{Proof of Proposition~\ref{thm_fiber_preserving_projective} in the $n=3$ case} \label{subsec_fibration_preserving_n_3_case} In this subsection we fix $n=3$, and without further mention $\mathcal{F}$ and $\mathcal{F}_i$, and $\pi_i:\mathcal{F}\rightarrow \mathcal{F}_i$ will be the objects associated with $\mathbb{R}^3$. \begin{definition} A {\bf projective frame} is an indexed collection $\{W_1^i\}_{0\leq i\leq 3}\subset \mathcal{F}_1$ where any three elements span $\mathbb{R}^3$. The {\bf standard projective frame} $\{\hat W_1^i\}_{0\leq i\leq 3}$ is given by $\hat W_1^i=\operatorname{span}(e_i)$ for $1\leq i\leq 3$, and $\hat W_1^0=\operatorname{span}(e_1+e_2+e_3)$. \end{definition} Note that for any projective frame $\{W_1^i\}_{0\leq i\leq 3}\subset \mathcal{F}_1$ there exists a unique $g\in \operatorname{PGL}(3,\mathbb{R})$ such that $g(\hat W_1^i)=W_1^i$ for $0\leq i\leq 3$, i.e. $\operatorname{PGL}(3,\mathbb{R})$ acts simply transitively on the collection of projective frames. The proof of Proposition~\ref{thm_fiber_preserving_projective} is inspired by the following elementary classical result: \begin{theorem}[Fundamental Theorem of Projective Geometry]~ \label{thm_fundamental_theorem_projective_geometry} Every fibration-preserving bijection $f:\mathcal{F}\rightarrow\mathcal{F}$ agrees with some $g\in \operatorname{PGL}(3,\mathbb{R})$. \end{theorem} Note that this is an equivalent reformulation of the classical result using the flag manifold rather than the (traditional) projective plane. \begin{proof}[Proof of Theorem~\ref{thm_fundamental_theorem_projective_geometry}] Since $f$ is a bijection, so are the maps $f_i:\mathcal{F}_i\rightarrow \mathcal{F}_i$ for $i\in \{1,2\}$. It follows that $f_1$ maps $\pi_1(\pi_2^{-1}(W_2))$ bijectively to $\pi_1(\pi_2^{-1}(f_2(W_2)))$ for every plane $W_2\in\mathcal{F}_2$. Therefore three elements of $\mathcal{F}_1$ lie in a plane if and only if their images under $f_1$ lie in a plane.
Hence $\{f_1(\hat W_1^i)\}_{0\leq i\leq 3}$ is a projective frame, so after composing $f$ with some element of $\operatorname{PGL}(3,\mathbb{R})$, we may assume that $f_1(\hat W_1^i)=\hat W_1^i$ for $0\leq i\leq 3$. It follows that $f_1$ also fixes $\hat W_1^4:=\hat W_2^{30}\cap \hat W_2^{12}$, where $\hat W_2^{ij}:=\operatorname{span}(\hat W_1^i,\hat W_1^j)$. We identify $\mathbb{R}^2$ with the affine plane $\{W_1\in \mathcal{F}_1\mid W_1\not\subset \hat W_2^{12}=\operatorname{span}(e_1,e_2)\}$ by $(x_1,x_2)\leftrightarrow \operatorname{span}(x_1,x_2,1)$. We may define a bijection $\phi:\mathbb{R}^2\rightarrow\mathbb{R}^2$ by $\operatorname{span}(\phi(x_1,x_2),1)=f_1(\operatorname{span}(x_1,x_2,1))$. Using the fact that $f_1(\hat W_1^i)=\hat W_1^i$ for $0\leq i\leq 4$, one sees that: \begin{enumerate}[label=(\alph*)] \item $\phi$ fixes $(0,0)$, $(1,1)$. \item $\phi$ maps lines bijectively to lines. \item For every $v\in \{e_1,e_2,e_1+e_2\}$ and every line $L$ parallel to $v$, the image $\phi(L)$ is a line parallel to $v$. \end{enumerate} Using a geometric construction (see Figure~\ref{addition1}) it follows that the restriction of $\phi$ to $\mathbb{R}\times\{0\}\simeq \mathbb{R}$ is a field automorphism. Since $\operatorname{id}_{\mathbb{R}}$ is the only automorphism of $\mathbb{R}$, it follows that $\phi$ fixes $\mathbb{R}\times\{0\}$. Hence it also fixes $\{0\}\times\mathbb{R}$, and then all of $\mathbb{R}^2$; it follows that $f_1$ fixes $\mathcal{F}_1$ pointwise, and $f$ fixes $\mathcal{F}$ pointwise. \begin{figure} \centering \includegraphics[width=90mm]{affine2} \caption{ Geometric addition. \label{addition1}} \end{figure} \end{proof} \bigskip The $n=3$ case of Proposition~\ref{thm_fiber_preserving_projective} may be viewed as a localized version of Theorem~\ref{thm_fundamental_theorem_projective_geometry} for mappings which are continuous, but not necessarily bijective. Before proceeding, we first give a rough idea of how the argument goes. The first step is to show that local projectivity of $f$ propagates: if $f$ locally agrees with some $g\in \operatorname{PGL}(3,\mathbb{R})$ near some point $x\in U$, then $f$ will locally agree with $g$ near points in the fibers of $\pi_1$, $\pi_2$ passing through $x$ (see Lemma~\ref{lem_projective_propagation}). This readily implies that the subset of $U$ where $f$ agrees with $g$ is a connected component of $U$. Thus we are reduced to finding a single point near which $f$ is locally projective. To that end, we consider a point of differentiability $(W_1,W_2)\in U$ at which the Pansu differential is an isomorphism. Using the definition of differentiability, we argue that the map $f$ is nondegenerate near $(W_1,W_2)$, in the sense that the map $f_1$ induced on $\mathcal{F}_1$ carries a projective frame localized near $W_1$ to a projective frame (see the claim in the proof of the $n=3$ case of Proposition~\ref{thm_fiber_preserving_projective} below). Then by pre/post-composing with suitable elements of $\operatorname{PGL}(3,\mathbb{R})$, and working in $\mathcal{F}_1$, we are able to reduce to a map $\phi:\mathbb{R}^2\supset U\rightarrow \mathbb{R}^2$ which satisfies a localized version of the conditions (a)-(c) appearing in the proof of Theorem~\ref{thm_fundamental_theorem_projective_geometry}; this is shown in Lemma~\ref{lem_special_affine} to be locally affine, which then implies that $f$ is locally projective near $(W_1,W_2)$. \bigskip We now return to the preparations for the proof of Proposition~\ref{thm_fiber_preserving_projective}.
\bigskip \begin{lemma}[Propagation of projectivity] \label{lem_projective_propagation} Suppose $U\subset \mathcal{F}$ is open and $f:U\rightarrow \mathcal{F}$ is continuous and fibration-preserving. \begin{enumerate} \item If $f$ agrees with $g\in \operatorname{PGL}(3,\mathbb{R})$ near $(\bar W_1,\bar W_2)\in U$, then $f$ agrees with $g$ near the fiber $\pi_i^{-1}(\bar W_i)\cap U$, for $i\in \{1,2\}$. \item For every $g\in \operatorname{PGL}(3,\mathbb{R})$, the set where $f$ locally agrees with $g$ is a connected component of $U$. \end{enumerate} \end{lemma} \begin{proof} (1). We prove the assertion for $i=2$; the case when $i=1$ is similar. After postcomposing with an element of $\operatorname{PGL}(3,\mathbb{R})$, we may assume without loss of generality that $f\equiv\operatorname{id}$ on an open set $V\subset U$ with $(\bar W_1,\bar W_2)\in V$. Pick $(W_1,W_2)\in U\cap \pi_2^{-1}(\bar W_2)$, i.e. $W_2=\bar W_2$. Choose $(W_1,W_2')\in U\setminus \{(W_1,W_2)\}$ such that $\pi_2^{-1}(W_2')\cap V\neq \emptyset$. Now suppose $\{(W_1^j,W_2^j)\}\subset U$ and $(W_1^j,W_2^j)\rightarrow (W_1,W_2)$. We may choose $\{W_2'^j\}\subset \mathcal{F}_2$ such that $W_1^j\subset W_2'^j$ and $W_2'^j\rightarrow W_2'$ as $j\rightarrow\infty$. Since $V$ is open, after dropping finitely many terms, we may assume that $\pi_2^{-1}(W_2^j)\cap V\neq \emptyset$ and $\pi_2^{-1}(W_2'^j)\cap V\neq\emptyset$. Since $f$ is fibration-preserving and $f\equiv\operatorname{id}$ on $V$, it follows that $f(\pi_2^{-1}(W_2^j)\cap U)\subset \pi_2^{-1}(W_2^j)$ and $f(\pi_2^{-1}(W_2'^j)\cap U)\subset \pi_2^{-1}(W_2'^j)$. The map $f$ is fibration-preserving, so $f(W_1^j,W_2^j), f(W_1^j,W_2'^j)\in \pi_1^{-1}(f_1(W_1^j))$. Therefore $\pi_1^{-1}(f_1(W_1^j))$ intersects both $\pi_2^{-1}(W_2^j)$ and $\pi_2^{-1}(W_2'^j)$ nontrivially, forcing $f_1(W_1^j)=W_2^j\cap W_2'^j=W_1^j$. Thus $f(W_1^j,W_2^j)=(W_1^j,W_2^j)$. Since the sequence $\{(W_1^j,W_2^j)\}$ was arbitrary, this proves (1). (2). Define an equivalence relation on $U$ where $x,x'\in U$ are equivalent if there is a path $\gamma:[0,1]\rightarrow U$ from $x$ to $x'$ which is piecewise contained in a fiber of one of the fibrations $\pi_i:\mathcal{F}\rightarrow \mathcal{F}_i$. We claim that the equivalence classes are open subsets of $U$. To see this, pick $x=g\cdot (W_1^-,W_2^-)\in U$, and note that the vector fields $X_{1,2}$, $X_{2,3}$ on $N$ are bracket generating and tangent to the cosets of $K_1$ and $K_2$, respectively; and hence their pushforwards $X_{1,2}'$, $X_{2,3}'$ under the composition $N\stackrel{\alpha}{\rightarrow}\hat N\stackrel{g}{\rightarrow}g\cdot\hat N$ are bracket generating vector fields on $g\cdot\hat N$ which are tangent to the fibers of $\pi_1$ and $\pi_2$ respectively, by Lemma~\ref{lem_properties_fibration_between_flag_manifolds}(4). By \cite[pp.50-52]{montgomery_book} the equivalence class of $x$ contains a neighborhood of $x$. It follows that the equivalence classes are connected components of $U$. Since the set where $f$ locally agrees with $g$ is a union of equivalence classes by (1), assertion (2) follows. \end{proof} \begin{lemma} \label{lem_special_affine} Suppose $U\subset \mathbb{R}^2$ is a connected open subset, and $$\phi =(\phi_1,\phi_2):\mathbb{R}^2\supset U\rightarrow\mathbb{R}^2$$ is a continuous map such that for every $v\in \{e_1,e_2,e_1+e_2\}$ and every line $L$ parallel to $v$, the image of $L\cap U$ is contained in a line parallel to $v$.
Then $\phi$ is of the form \begin{equation} \label{eqn_special_affine} \phi(x,y)=(mx+b_1,my+b_2) \end{equation} for some $m,b_1,b_2\in \mathbb{R}$. \end{lemma} \begin{proof} First assume that $\phi$ is smooth. Applying the hypothesis with $v\in \{e_1,e_2\}$ implies that $\partial_2\phi_1=\partial_1\phi_2\equiv 0$, so $\phi_i$ depends only on $x_i$. Applying the hypothesis when $v=e_1+e_2$ we have $$ 0\equiv (\partial_1+\partial_2)(\phi_2-\phi_1)=\partial_2\phi_2-\partial_1\phi_1\,. $$ But since $\partial_i\phi_i$ depends only on $x_i$ this forces $\partial_1\phi_1=\partial_2\phi_2=\mathrm{const}$ and so \eqref{eqn_special_affine} holds. Since the conditions are preserved by taking linear combinations and precomposing with translations, the general case follows by mollification. Alternatively, one may argue as follows. Without loss of generality, one may assume that $\phi(0,0)=(0,0)$. By a geometric construction (see Figure~\ref{addition1}), $\phi(x+x',0)=\phi(x,0)+\phi(x',0)$ when $x,x'\in (-r,r)$ for $r$ small. Hence $\phi(x_1,0)=(mx_1,0)$ for some $m\in\mathbb{R}$, for $x_1\in (-r,r)$. Invoking the hypotheses again, we get $\phi(x_1,x_2)=(mx_1,mx_2)$. Thus the lemma holds locally, i.e. for every $(x_1,x_2)\in U$ there is an open set $V_{x_1,x_2}$ containing $(x_1,x_2)$ and $m,b_1,b_2$ depending on $(x_1,x_2)$ such that \eqref{eqn_special_affine} holds in $V_{x_1,x_2}$; since $U$ is connected and $m,b_1,b_2$ are locally constant, it follows that they are independent of $x_1,x_2$, and so \eqref{eqn_special_affine} holds. \end{proof} \bigskip\bigskip \begin{definition} An indexed tuple $\{W_1^i\}_{0\leq i\leq 4}\subset\mathcal{F}_1$ is an {\bf augmented projective frame} if $\{W_1^i\}_{0\leq i\leq 3}$ is a projective frame and $$ W_1^4=\operatorname{span}(W_1^3,W_1^0)\cap\operatorname{span}(W_1^1,W_1^2)\,. $$ The {\bf standard augmented projective frame} $\{\hat W_1^i\}_{0\leq i\leq 4}$ is given by \begin{equation} \hat W_1^i= \begin{cases} \operatorname{span}(e_1+e_2+e_3),\quad &i=0\\ \operatorname{span}(e_i),\quad &1\leq i\leq 3\\ \operatorname{span}(e_1+e_2),\quad &i=4 \end{cases} \end{equation} \end{definition} Given a subset $\Sigma\subset \mathcal{F}_1$, we obtain (possibly empty) subsets of $\mathcal{F}_2$ and $\mathcal{F}$: \begin{equation} \label{eqn_f_sigma} \begin{aligned} \mathcal{F}_2(\Sigma)&:=\{\operatorname{span}(\sigma_1,\sigma_2)\mid\; \sigma_i\in\Sigma\,,\;\sigma_1\neq \sigma_2\}\,.\\ \mathcal{F}(\Sigma)&:=\{(W_1,W_2)\mid W_1\in\Sigma\,,\;W_2\in \mathcal{F}_2(\Sigma)\}\,. \end{aligned} \end{equation} \begin{lemma} \label{lem_small_projective_frame} There is an augmented projective frame $\{\tilde W_1^i\}_{0\leq i\leq 4}\subset \mathcal{F}_1$ such that $\mathcal{F}(\{\tilde W_1^i\}_{0\leq i\leq 4})\subset \hat N$ and $$ (\tilde W_1^3,\operatorname{span}(\tilde W_1^3,\tilde W_1^2))=(W_1^-,W_2^-)\,. $$ \end{lemma} \begin{proof} Since $\hat N$ is an open dense subset of $\mathcal{F}$ by Lemma~\ref{lem_properties_of_f}, we may choose $g\in G$ such that $$g\cdot\mathcal{F}(\{\hat W_1^i\}_{0\leq i\leq 4})=\mathcal{F}(\{g\cdot\hat W_1^i\}_{0\leq i\leq 4})$$ lies in $\hat N$. Then we may let $\tilde W_1^i:=(ng)\cdot \hat W_1^i$ for $0\leq i\leq 4$, where $n\in N$ satisfies $n\cdot (g\cdot(\hat W_1^3,\operatorname{span}(\hat W_1^3,\hat W_1^2)))=(W_1^-,W_2^-)$. \end{proof} \bigskip \begin{proof}[Proof of Proposition~\ref{thm_fiber_preserving_projective} in the $n=3$ case.] By Lemma~\ref{lem_projective_propagation}(2) and the connectedness of $U$, it suffices to show that $f$ locally agrees with some element of $\operatorname{PGL}(3,\mathbb{R})$ near $x$.
Therefore after shrinking $U$ we may assume that $f$ is fibration-preserving, not just locally fibration-preserving. After pre/postcomposing with elements of $\operatorname{PGL}(3,\mathbb{R})$, we assume without loss of generality that $x=(W_1^-,W_2^-)\in U$, $f(x)=x$, and $D_Pf(x)$ is defined and an isomorphism. Let $\{\tilde W_1^i\}_{0\leq i\leq 4}\subset\hat N$ be the augmented projective frame from Lemma~\ref{lem_small_projective_frame}. For $r>0$ let $\{W_1^i\}_{0\leq i\leq 4}$ be the image of $\{\tilde W_1^i\}_{0\leq i\leq 4}$ under $\hat\delta_r$; we may assume $r$ is small enough that $\mathcal{F}(\{W_1^i\}_{0\leq i\leq 4})\in U$. \begin{claim} If $r$ is sufficiently small, then $\{f_1(W_1^i)\}_{0\leq i\leq 3}$ is a projective frame. \end{claim} \begin{proof}~ Since $f$ is Pansu differentiable at $x$ and $D_Pf(x)$ is an isomorphism \begin{equation} \label{eqn_pansu_differential_f} \hat \delta_{r^{-1}}\circ f\circ \hat \delta_r\rightarrow D_Pf(x) \end{equation} uniformly on compact sets as $r\rightarrow 0$, where $D_Pf:\hat N\rightarrow \hat N$ is a graded automorphism; here we identify $N$ with $\hat N$. Note that $D_Pf(x):\hat N\rightarrow \hat N$ is fibration-preserving because it is a limit of fibration-preserving maps. By Lemma~\ref{lem_properites_aut_gr_n}(3) there exist $\lambda_1,\lambda_2,\lambda_3\neq 0$ such that $\hat\Phi=\operatorname{diag}(\lambda_1,\lambda_2,\lambda_3) \in \operatorname{PGL}(3,\mathbb{R})$ agrees with $D_Pf(x)$ on $\hat N$. Therefore $(\hat\Phi)_1$ maps $\{\tilde W_1^i\}_{0\leq i\leq 3}$ to a projective frame. It follows that for $r$ small both $\{(\hat \delta_{r^{-1}}\circ f\circ \hat \delta_r)_1(\tilde W_1^i)\}_{0\leq i\leq 3}$ and $\{f_1(W_1^i)=(f\circ\hat\delta_r)_1(\tilde W_1^i)\}_{0\leq i\leq 3}$ are projective frames. \end{proof} \bigskip Let $\{\hat W_1^i\}_{0\leq i\leq 4}$ be the standard augmented projective frame, and let $\hat W_2^{ij}:=\operatorname{span}(\hat W_1^i,\hat W_1^j)$ for $0\leq i\neq j\leq 4$. Since $\{W_1^i\}_{0\leq i\leq 3}$ and $\{f_1(W_1^i)\}_{0\leq i\leq 3}$ are both projective frames, there are elements $g_1,g_2\in \operatorname{PGL}(3,\mathbb{R})$ such that $g_1(\hat W_1^i)=W_1^i$ and $g_2(f_1(W_1^i))=\hat W_1^i$ for $0\leq i\leq 3$. We let $\hat U:=g_1^{-1}(U)\subset \mathcal{F}$ and $\hat f:=g_2\circ f\circ g_1:\hat U\rightarrow \mathcal{F}$. Then $\mathcal{F}(\{\hat W_1^i\}_{0\leq i\leq 4})\subset \hat U$, the map $\hat f$ is fibration-preserving, and $(\hat f)_1(\hat W_1^i)=\hat W_1^i$ for all $0\leq i\leq 3$. Since two distinct lines lie in a unique plane, and two distinct planes intersect in a line, the fact that $\hat f$ is fibration-preserving implies that: \begin{enumerate} \item $\hat f_2(\hat W_2^{ij})=\hat W_2^{ij}$ for $0\leq i\neq j\leq 3$. \item $\hat f_1(\hat W_1^4)=(\hat f)_1(\hat W_2^{30}\cap \hat W_2^{12})=\hat W_1^4$. \item $\hat f_2(\hat W_2^{ij})=\hat W_2^{ij}$ for $0\leq i\neq j\leq 4$. \item $\hat f(\hat W_1^i,\hat W_2^{ij})=(\hat W_1^i,\hat W_2^{ij})$ for $0\leq i\neq j\leq 4$. \end{enumerate} Since $\hat f_1([e_3])=[e_3]$, for small $r>0$ we may define $\phi:\mathbb{R}^2\supset B(0,r)\rightarrow \mathbb{R}^2$ by $\operatorname{span}(\phi(x_1,x_2),1)=\hat f_1(\operatorname{span}(x_1,x_2,1))$. The map $\hat f$ is fibration-preserving and it fixes the standard augmented projective frame, so the hypotheses of Lemma~\ref{lem_special_affine} hold for $\phi$. 
Applying Lemma~\ref{lem_special_affine} to $\phi$, and noting that $\phi(0,0)=(0,0)$ because $\hat f_1$ fixes $[e_3]$, we get that for some $m\in \mathbb{R}$ we have $\hat f_1([x_1,x_2,1])=[mx_1,mx_2,1]$ for $x_1,x_2\in \mathbb{R}$ small. Suppose $m=0$. Then $\hat f$, and hence also $f$, takes values in a single fiber of $\pi_1$ near $x=(W_1^-,W_2^-)$. It follows that the Pansu differential $D_Pf(x)=\lim_{r\rightarrow 0}\hat \delta_{r^{-1}}\circ f\circ\hat\delta_r$ takes values in a single fiber of $\pi_1$; this contradicts the nondegeneracy of the Pansu differential. Hence $m\neq 0$. Let $\hat g:=\operatorname{diag}(m,m,1)\in\operatorname{PGL}(3,\mathbb{R})$. Then $\hat g_1:\mathcal{F}_1\rightarrow\mathcal{F}_1$ agrees with $\hat f_1$ near $[e_3]$. Since $\hat g$ and $\hat f$ are both fibration-preserving, they agree near $x=(W_1^-,W_2^-)$. \end{proof} \bigskip\bigskip \subsection{Proof of Proposition~\ref{thm_fiber_preserving_projective}, general case} The $n\geq 4$ case is similar to the $n=3$ case. The replacement for Lemma~\ref{lem_special_affine} is: \begin{lemma} \label{lem_special_affine_n_geq_4} Suppose $V\subset \mathbb{R}^n$ is a connected open subset, and $$\phi=(\phi_1,\ldots,\phi_n):V\rightarrow\mathbb{R}^n$$ is a continuous map such that for every $v\in \{e_1,\ldots,e_n,e_1+\ldots+e_n\}$ and every line $L$ parallel to $v$, the image $\phi(L\cap V)$ is contained in a line parallel to $v$. Then $\phi$ is of the form \begin{equation} \label{eqn_special_affine_general_n} \phi(x_1,\ldots,x_n)=(mx_1+b_1,\ldots,mx_n+b_n) \end{equation} for some $m,b_1,\ldots,b_n\in \mathbb{R}$. \end{lemma} We omit the proof as it is similar to the proof of Lemma~\ref{lem_special_affine}. The $n\geq 4$ version of Lemma~\ref{lem_projective_propagation} is: \begin{lemma} \label{lem_projective_propagation_n_geq_4} Suppose $U\subset \mathcal{F}$ is open and $f:U\rightarrow \mathcal{F}$ is fibration-preserving. \begin{enumerate} \item Suppose $f$ agrees with $g\in \operatorname{PGL}(n,\mathbb{R})$ near $(\bar W_1,\ldots,\bar W_{n-1})\in U$. For $i\in \{1, \cdots, n-1\}$, let $V_i$ be the connected component of $\pi_i^{-1}(\bar W_i)\cap U$ containing $(\bar W_1,\ldots,\bar W_{n-1})$. Then $f$ agrees with $g$ near $V_i$. \item For every $g\in \operatorname{PGL}(n,\mathbb{R})$, the set where $f$ locally agrees with $g$ is a connected component of $U$. \end{enumerate} \end{lemma} \begin{proof} We prove the lemma by induction on the dimension $n\geq 3$. Lemma~\ref{lem_projective_propagation} covers the case $n=3$, so we may assume inductively that the lemma holds for dimensions strictly smaller than $n$. (1). We may assume without loss of generality that $g=\operatorname{id}$. Suppose $f\equiv \operatorname{id}$ on an open subset $V\subset U$ containing $(\bar W_1,\ldots,\bar W_{n-1})$. Arguing by contradiction, suppose $i\in \{1, \cdots, n-1\}$, and for some $(W_1,\ldots,W_{n-1})\in V_i$, there is a sequence $ \{(W_1^j,\ldots,W_{n-1}^j)\}\subset U$ which converges to $(W_1,\ldots,W_{n-1})$ as $j\rightarrow \infty$, but $$f(W_1^j,\ldots,W_{n-1}^j)\neq (W_1^j,\ldots,W_{n-1}^j)$$ for all $j$. After passing to a subsequence, we may assume that for all $j$ the connected component of $\pi_i^{-1}(W_i^j)\cap U$ containing $(W_1^j,\ldots,W_{n-1}^j)$ intersects $V$. Since $f$ is fibration-preserving and $f\equiv \operatorname{id}$ on $V$, it follows that $f$ maps $\pi_i^{-1}(W_i^j)\cap U$ into $\pi_i^{-1}(W_i^j)$.
Identifying $\pi_i^{-1}(W_i^j)$ with the flag manifold in $\mathbb{R}^{n-1}$, the restriction of $f$ to $\pi_i^{-1}(W_i^j)$ induces a fibration-preserving mapping; by the induction assumption, since $f$ fixes $V\cap\pi_i^{-1}(W_i^j)$ pointwise, it will also fix $(W_1^j,\ldots,W_{n-1}^j)$. This is a contradiction. Hence (1) holds. (2). Note that each element of the basis $X_{1,2},\ldots,X_{n-1,n}$ for $V_1\subset\mathfrak{n}$ is tangent to one of the subgroups $K_j$ for $1\leq j\leq n-1$. Hence we may use Lemma~\ref{lem_properties_fibration_between_flag_manifolds}(4) and argue as in Lemma~\ref{lem_projective_propagation}(2). \end{proof} \bigskip \begin{definition} An indexed tuple $\{W_1^i\}_{0\leq i\leq n}\subset\mathcal{F}_1$ is a {\bf projective frame} if any subset of $n$ elements spans $\mathbb{R}^n$. The {\bf standard projective frame} $\{\hat W_1^i\}_{0\leq i\leq n}$ is given by $\hat W_1^i=\operatorname{span}(e_i)$ for $1\leq i\leq n$ and $\hat W_1^0=\operatorname{span}(e_1+\ldots+e_n)$. An indexed tuple $\{W_1^i\}_{0\leq i\leq n+1}\subset\mathcal{F}_1$ is an {\bf augmented projective frame} if $\{W_1^i\}_{0\leq i\leq n}$ is a projective frame and $W_1^{n+1}=\operatorname{span}(W_1^n,W_1^0)\cap\operatorname{span}(W_1^1,\ldots,W_1^{n-1})$. The {\bf standard augmented projective frame} is $\{\hat W_1^i\}_{0\leq i\leq n+1}$ with $\hat W_1^{n+1}=\operatorname{span}(e_1+\ldots+e_{n-1})$. \end{definition} Given a subset $\Sigma\subset \mathcal{F}_1$, we obtain (possibly empty) subsets of $\mathcal{F}_j$ and $\mathcal{F}$: \begin{align*} \mathcal{F}_j(\Sigma)&:=\{\operatorname{span}(\Sigma')\mid\; \Sigma'\subset\Sigma,\; |\Sigma'|=j,\; \dim\operatorname{span}(\Sigma')=j\}\,.\\ &\mathcal{F}(\Sigma):=\{(W_1,\ldots,W_{n-1})\in \mathcal{F}\mid W_j\in \mathcal{F}_j(\Sigma)\}\,. \end{align*} \begin{lemma} There is an augmented projective frame $\{\tilde W_1^i\}_{0\leq i\leq n+1}\subset\mathcal{F}_1$ such that: \begin{itemize} \item $\mathcal{F}(\{\tilde W_1^i\}_{0\leq i\leq n+1})$ is contained in $\hat N$. \item $\operatorname{span}(\tilde W_1^n,\ldots,\tilde W_1^{n-j+1}) =\operatorname{span}(e_n,\ldots,e_{n-j+1})$ for all $1\leq j\leq n-1$. \end{itemize} \end{lemma} \begin{proof} This follows as in the proof of Lemma~\ref{lem_small_projective_frame}. \end{proof} \bigskip \begin{proof}[Proof of Proposition~\ref{thm_fiber_preserving_projective}, $n\geq 4$ case] The proof parallels the $n=3$ case closely, so we will be brief. It suffices to show that $f$ locally agrees with some element of $\operatorname{PGL}(n,\mathbb{R})$ near $x$. Also, we may assume without loss of generality that $f$ is fibration-preserving, $x=(W_1^-,\ldots,W_{n-1}^-)\in U$, $f(x)=x$, and that $D_Pf(x)$ is well-defined and an isomorphism. For $r>0$ let $W_1^i:=\hat\delta_r(\tilde W_1^i)$ for $0\leq i\leq n+1$; we take $r$ small enough that $\mathcal{F}(\{W_1^i\}_{0\leq i\leq n+1})\subset U$. \begin{claim} For $r$ small $\{f_1(W_1^i)\}_{0\leq i\leq n}\subset \mathcal{F}_1$ is a projective frame. \end{claim} We omit the proof, as it is similar to the claim in the proof of the $n=3$ case. Let $g_1,g_2\in \operatorname{PGL}(n,\mathbb{R})$ be such that $g_1(\hat W_1^i)=W_1^i$, $g_2(f_1(W_1^i))=\hat W_1^i$ for $0\leq i\leq n$. We now define $\hat U=g_1^{-1}(U)$ and $\hat f:=g_2\circ f\circ g_1:\hat U\rightarrow \mathcal{F}$. Arguing as in the $n=3$ case, one obtains that $\hat f$ is fibration-preserving, $ \mathcal{F}(\{\hat W_1^i\}_{0\leq i\leq n+1})\subset\hat U$, $\hat f_j$ fixes $\mathcal{F}_j(\{\hat W_1^i\}_{0\leq i\leq n+1})$ elementwise and $\hat f$ fixes $ \mathcal{F}(\{\hat W_1^i\}_{0\leq i\leq n+1})$ elementwise.
For $r>0$ small we define $\phi:\mathbb{R}^{n-1}\supset B(0,r)\rightarrow \mathbb{R}^{n-1}$ by $$ \operatorname{span}(\phi(x_1,\ldots,x_{n-1}),1)=\hat f_1(\operatorname{span}(x_1,\ldots,x_{n-1},1))\,. $$ Applying Lemma~\ref{lem_special_affine_n_geq_4} (with $n-1$ in place of $n$, and noting that $\phi(0)=0$ because $\hat f_1$ fixes $[e_n]$), for some $m\in \mathbb{R}$ we get $$ \phi(x_1,\ldots,x_{n-1})=(mx_1,\ldots,mx_{n-1})\,. $$ As in the $n=3$ case we see that $m\neq 0$, and that $\hat f_1$ agrees with $g:=\operatorname{diag}(m,\ldots,m,1)$ near $[e_n]$. This implies that $\hat f$ agrees with $g$ near $x=(W_1^-,\ldots,W_{n-1}^-)$. \end{proof} \bigskip \section{The proof of Theorem~\ref{thm_main}} \label{sec_proof_thm_main} For every $x\in U$ choose a connected open set $U_x\subset U$ containing $x$ and group elements $\bar g_x,\bar g_x'\in\operatorname{PGL}(n,\mathbb{R})$ such that $\bar g_x(U_x),\bar g_x'(f(U_x))\subset \hat N$, and let $\hat f_x:=\bar g_x'\circ f\circ \bar g_x^{-1}:\hat N\supset\hat V_x \rightarrow\hat V_x'\subset \hat N$, where $\hat V_x:=\bar g_x(U_x)$, $\hat V_x':=\bar g_x'(f(U_x))$. Let $f_x:=\alpha^{-1}\circ \hat f_x\circ\alpha:N\supset V_x\rightarrow V_x'\subset N$ where $V_x:=\alpha^{-1}(\hat V_x)$, $V_x':=\alpha^{-1}(\hat V_x')$. By Corollary~\ref{cor_preservation_coset_foliation}, for some $\varepsilon_x\in \{0,1\}$, the map $\tau^{\varepsilon_x}\circ f_x$ locally preserves the coset foliation of $K_j$ for all $1\leq j\leq n-1$. Now Lemma~\ref{lem_properties_fibration_between_flag_manifolds}(4) gives that the map $$ \alpha\circ(\tau^{\varepsilon_x}\circ f_x)\circ\alpha^{-1}=(\alpha\circ \tau^{\varepsilon_x}\circ\alpha^{-1})\circ \hat f_x=\rho(\tau^{\varepsilon_x})\circ \hat f_x $$ locally preserves the fibration $\pi_j$ for $1\leq j\leq n-1$; here we have used the fact that $\rho(\tau)\circ\alpha=\alpha\circ \tau$. Applying Proposition~\ref{thm_fiber_preserving_projective}, we see that $\rho(\tau^{\varepsilon_x})\circ\hat f_x$ agrees with some element of $G$ and therefore $f=\rho(\Phi_x)\mbox{\Large \(|\)\normalsize}_{U_x}$ for some $\Phi_x\in \operatorname{Aut}(G)$. Since $\Phi_x$ is locally constant as a function of $x$, by the connectedness of $U$, the automorphism $\Phi_x$ is independent of $x$, and so $f=\rho(\Phi)\mbox{\Large \(|\)\normalsize}_U$ for some $\Phi\in \operatorname{Aut}(G)$. By Lemma~\ref{lem_properties_aut_g}(1) we have $\Phi=\Phi_0^{\varepsilon}\circ I_g$ for some $\varepsilon\in \{0,1\}$, $g\in G$, where $\Phi_0$ denotes transpose-inverse; then $\rho(\Phi)=(\rho(\Phi_0))^{\varepsilon}\circ\rho(I_g)=\psi^\varepsilon\circ g$ using Lemma~\ref{lem_action_aut_g_on_f}(5). \bigskip \section{The complex and quaternionic cases}~ \label{sec_complex_quaternionic} The arguments from the previous sections are also valid in the complex and quaternion cases, with some straightforward modifications. In this section we indicate what modifications are needed in these cases. The necessity for these modifications is due to the presence of nontrivial automorphisms of $\mathbb C$ and $\mathbb H$ and the non-commutativity of the quaternions. We first recall some facts about quaternions. Given any quaternion $x=x_0+x_1i+x_2j+x_3k\in \mathbb H$ ($x_i\in \mathbb R$), the conjugate of $x$ is $\bar x=x_0-x_1i-x_2j-x_3k$. It is easy to check that $\overline{xy}=\bar y\bar x$ for any $x,y\in \mathbb H$. Let $\lambda, \mu$ be unit quaternions satisfying $\lambda^2=\mu^2=(\lambda \mu)^2=-1$. Set $\nu=\lambda\mu$. Then we have $\mu=\nu\lambda$ and $\lambda=\mu\nu$.
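For completeness, here is the short verification of the last two identities, using only the relations $\lambda^2=\mu^2=(\lambda\mu)^2=-1$: multiplying $\lambda\mu\lambda\mu=-1$ on the right by $\mu^{-1}=-\mu$ gives $\nu\lambda=\lambda\mu\lambda=\mu$, and multiplying it on the left by $\lambda^{-1}=-\lambda$ gives $\mu\nu=\mu\lambda\mu=\lambda$.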
Define a map $h=h_{\lambda, \mu, \nu}: \mathbb H\rightarrow \mathbb H$ by $$h(a_0+ia_1+ja_2+ka_3)=a_0+\lambda a_1+\mu a_2+\nu a_3.$$ Then it is easy to check that $h$ is an automorphism of $\mathbb H$: it is a real linear isomorphism and $h(xy)=h(x)h(y)$ for any $x, y\in \mathbb H$. Conversely, for any automorphism $h: \mathbb H\rightarrow \mathbb H$, if we set $\lambda:=h(i)$, $\mu:=h(j)$, $\nu:=h(k)$, then $\lambda^2=\mu^2=\nu^2=-1$, $\nu=\lambda\mu$ and $h=h_{\lambda, \mu, \nu}$. By the Skolem-Noether theorem, every automorphism of $\H$ is inner. \bigskip \subsection{Changes needed for Section 2}\label{changes2} Let $F=\mathbb C, \mathbb H$. Let $\operatorname{GL}(n,F)$ be the group of invertible elements in the ring $M_n(F)$ of $n\times n$ matrices with entries in $F$. The objects $P_F^+, P_F^-$ and $N_F$ are defined as before with $\mathbb R$ replaced by $F$. In the complex case the group $G_F$ is defined as before, with $\mathbb R$ replaced by $\mathbb C$: $$G_{\mathbb C}=\text{GL}(n, \mathbb C)/{\{\lambda\,\text{id}|0\not=\lambda\in \mathbb C\}}.$$ In the quaternion case it is defined by $$G_{\mathbb H}=\text{GL}(n, \mathbb H)/{\{\lambda\,\text{id}\mid 0\not=\lambda\in \mathbb R\}}.$$ Note that $\{aI\mid a\in \mathbb H\backslash \{0\}\}$ is not normal in $GL(n, \mathbb H)$ and so we cannot quotient out by this subgroup. Similarly, $P_{\mathbb C}=P_{\mathbb C}^-/{\{\lambda\,\text{id}\mid 0\not=\lambda\in \mathbb C\}} $ and $P_{\mathbb H}=P_{\mathbb H}^-/{\{\lambda\,\text{id}\mid 0\not=\lambda\in \mathbb R\}} . $ \bigskip \subsection*{The flag manifold} We view $F^n$ as a right $F$ module. The {\bf flag manifold $\mathcal{F}_{F}$} is the set of (complete) flags in $F^n$, i.e. the collection of nested families of submodules of $F^n$ $$ W_1\subset\ldots \subset W_{n-1} $$ where $W_j$ has dimension (rank) $j$. Matrix multiplication yields an action $\operatorname{GL}(n,F)\operatorname{\curvearrowright} F^n$ by $F$-module automorphisms in the usual way, which induces actions $\operatorname{GL}(n,F)\operatorname{\curvearrowright} \mathcal{F}_{k,F}$, where $ \mathcal{F}_{k,F}$ is the Grassmannian of submodules of dimension (rank) $k$. Lemma \ref{lem_properties_of_f} holds without changes. \bigskip \subsection*{Automorphisms} The map $A\mapsto (A^*)^{-1}$ is a Lie group automorphism of $\operatorname{GL}(n,F)$, where $A^*$ denotes the conjugate transpose of $A$. So the map $\tau: GL(n, F) \rightarrow GL(n, F)$ given by $\tau(A)=\Pi (A^*)^{-1}\Pi^{-1}$ is also an automorphism and induces an automorphism (still denoted by $\tau$) of $\mathfrak{gl}(n, F)$ which is given by $$(\tau(A))_{ij}=-\overline{A}_{n-j+1, n-i+1}.$$ For any automorphism $h$ of $F$, the automorphism $ GL(n, F) \rightarrow GL(n, F)$, $(a_{ij})\mapsto (h(a_{ij}))$ of $GL(n, F)$ induces an automorphism of $G_F$, which we denote by $\hat{h}$. \begin{theorem} (\cite[Theorems 1 and 2]{dieudonne}) \label{diedo} Every automorphism of $G_F$ is induced by an automorphism of $GL(n, F)$. The group $\text{Aut}(G_F)$ is generated by $\tau$, maps of the form $\hat h$ (with $h\in \text{Aut}(F)$) and the inner automorphisms. \end{theorem} Notice that if $\hat h$ is a Lie group automorphism, then the corresponding automorphism $h$ of $F$ is continuous. We recall that there are only two continuous automorphisms of $\mathbb C$: the identity map and the complex conjugation. On the other hand, by the Skolem-Noether theorem, every automorphism $h: \mathbb H\rightarrow \mathbb H$ is inner.
It follows that the automorphism $\hat{h}$ of $G_{\mathbb H}$ is also inner: if $h=I_a$ for some $a\in \mathbb H$, then $\hat{h}=I_g$ with $g=\text{diag}(a, \cdots, a)$. The following is the counterpart of Lemma \ref{lem_properites_aut_gr_n} in the quaternion and complex cases. \begin{lemma}\label{graded-quaternion} Let $F=\mathbb C, \mathbb H$. Let $n\ge 3$ if $F=\mathbb H$ and $n\ge 4$ if $F=\mathbb C$. \begin{enumerate} \item If $\Phi\in \operatorname{Aut}(G_F)$ and $\Phi(N_F)=N_F$, then $\Phi$ induces a graded automorphism of $N_F$ if and only if $\Phi\mbox{\Large \(|\)\normalsize}_{N_F}=\tau^\varepsilon\circ \hat h\circ I_g\mbox{\Large \(|\)\normalsize}_{N_F}$ for some $\varepsilon\in \{0,1\}$, some continuous automorphism $h$ of $F$ and $g=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)\in G_F$. \item Every graded automorphism of $N_F$ arises as in (1). \item For $1\leq j\leq n-1$ let $\mathfrak{k}_j\subset \mathfrak{n}_F$ be the Lie subalgebra generated by $\{a X_{i,i+1}|a\in F, i\neq n-j\}$, and $K_j\subset N_F$ be the Lie subgroup with Lie algebra $\mathfrak{k}_j$. A graded automorphism $N_F\rightarrow N_F$ is induced by conjugation by some $g=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ if and only if it preserves the subgroups $K_j$ for $1\leq j\leq n-1$. \end{enumerate} \end{lemma} We remark that Lemma \ref{graded-quaternion} (2) implies that every graded automorphism of ${\mathfrak n}_{n, \mathbb C}$ is either complex linear or complex antilinear. \begin{proof}[Proof of Lemma \ref{graded-quaternion} in the quaternion case] Let $n\ge 3$. The proof of (1) and (3) are the same (by using Theorem \ref{diedo}) as that of (1) and (3) in Lemma~\ref{lem_properites_aut_gr_n}. Here we prove (2). The proof is by induction on $n$. We first consider the case $n=3$. Denote $X=X_{12}$, $Y=X_{23}$ and $Z=X_{13}$. We have $[aX, bY]=abZ$ for $a, b\in \mathbb H$. Note that the Lie bracket is not linear over $\mathbb H$. Let $\mathbb H X=\{aX\mid a\in \mathbb H\}$ be the subspace spanned by $X$. It has dimension $4$ over $\mathbb R$. Similarly we have $\mathbb H Y$ and $\mathbb H Z$. The grading $\mathfrak n_{3,\H}=V_1\oplus V_2$ is given by $V_1=\mathbb H X\oplus \mathbb H Y$ and $V_2=\mathbb H Z$. Let $A:\mathfrak n_{3,\H}\rightarrow \mathfrak n_{3,\H}$ be a graded automorphism. We claim that $A$ satisfies either $A(\mathbb H X)= \mathbb H X$, $A(\mathbb H Y)= \mathbb H Y$ or $A(\mathbb H X)= \mathbb H Y$, $A(\mathbb H Y)= \mathbb H X$. There are $a, b\in \mathbb H$ such that $A(X)=aX+bY$. To prove the claim it suffices to show that $a=0$ or $b=0$. Suppose $a, b\not=0$ we shall get a contradiction. There are $c, d\in \mathbb H$ such that $A(iX)=cX+dY$. As $[X, iX]=0$ we have $0=[A(X), A(iX)]=(ad-cb)Z$ (being careful about the order in $cb$), which yields $a^{-1}c=db^{-1}$. Set $\lambda=a^{-1}c$. We have $c=a\lambda$ and $d=\lambda b$ and so $A(iX)=a\lambda X+\lambda b Y$. Similarly there are $\mu, \nu\in \mathbb H$ such that $A(jX)=a\mu X+\mu b Y$ and $A(kX)=a\nu X+\nu b Y$. By further considering the brackets between $A(iX)$, $A(jX)$, $A(kX)$ we get $a\lambda \mu b=a\mu \lambda b$, $a\lambda \nu b=a\nu \lambda b$ and $a \mu \nu b=a\nu \mu b$. It follows that $\lambda$, $\mu$ and $\nu$ commute with each other. Recall the fact that two quaternions commute with each other if and only if their imaginary parts are real multiples of each other. 
Hence there are real numbers $r_i, t_i$ ($i=1, 2,3$) and a purely imaginary quaternion $h$ such that $\lambda=r_1+t_1 h$, $\mu=r_2+t_2 h$, $\nu=r_3+t_3 h$. We then have $A(iX)=r_1(aX+bY)+t_1(ahX+hbY)$ and so $A(iX)$ lies in the $2$-dimensional real vector subspace spanned by $aX+bY$ and $ahX+hbY$. Similarly $A(jX)$ and $A(kX)$ also lie in this subspace, contradicting the fact that $A$ is an isomorphism. This finishes the proof of the claim. After possibly composing $A$ with $\tau$ we may assume that $A(\mathbb H X)=\mathbb H X$ and $A(\mathbb H Y)=\mathbb H Y$. There are $a_0, a_1, a_2, a_3, b_0, b_1, b_2, b_3\in \mathbb H$ such that $A(X)=a_0X$, $A(iX)=a_1X$, $A(jX)=a_2X$, $A(kX)=a_3X$ and $A(Y)=b_0Y$, $A(iY)=b_1Y$, $A(jY)=b_2Y$, $A(kY)=b_3Y$. By applying $A$ to $[X, iY]=[iX, Y]=[jX, kY]=-[kX, jY]$, $[X, jY]=[jX, Y]=[kX, iY]=-[iX, kY]$ and $[X, kY]=[kX, Y]=[iX, jY]=-[jX, iY]$ we obtain $$a_0b_1=a_1b_0=a_2b_3=-a_3b_2,$$ $$a_0b_2=a_2b_0=a_3b_1=-a_1b_3,$$ $$a_0b_3=a_3b_0=a_1b_2=-a_2b_1.$$ From these we get $$a_0^{-1}a_1=b_1b_0^{-1}=-b_2b_3^{-1}=b_3b_2^{-1},$$ $$a_0^{-1}a_2=b_1b_3^{-1}=b_2b_0^{-1}=-b_3b_1^{-1},$$ $$a_0^{-1}a_3=-b_1b_2^{-1}=b_2b_1^{-1}=b_3b_0^{-1}.$$ Set $\lambda=a_0^{-1}a_1$, $\mu=a_0^{-1}a_2$ and $\nu=a_0^{-1}a_3$. From $\lambda=-b_2b_3^{-1}=b_3b_2^{-1},$ we get $\lambda^2=-1$. Similarly from $\mu=b_1b_3^{-1}=-b_3b_1^{-1},$ we get $\mu^2=-1$. Finally $\nu=b_3b_0^{-1}=b_3b_2^{-1} b_2b_0^{-1}=\lambda\mu$. Also notice $a_1=a_0\lambda$, $a_2=a_0\mu$, $a_3=a_0\nu$ and $b_1=\lambda b_0$, $b_2=\mu b_0$, $b_3=\nu b_0$. So the automorphism $A$ is given by $A(X)=a_0X$, $A(iX)=a_0\lambda X$, $A(jX)=a_0\mu X$, $A(kX)=a_0\nu X$ and $A(Y)=b_0Y$, $A(iY)=\lambda b_0Y$, $A(jY)=\mu b_0 Y$, $A(kY)=\nu b_0 Y$. Now it is easy to check that $A=\text{Ad}_g\mbox{\Large \(|\)\normalsize}_{\mathfrak n_\H}\circ \hat{h}_{\lambda, \mu,\nu}$, where $g=\text{diag}(a_0, 1, b_0^{-1})$. Now assume $n\ge 4$. By an argument similar to the real case (using rank and the induction hypothesis), we get (after possibly composing with $\tau$) $A(\mathbb H X_{i, i+1})=\mathbb H X_{i, i+1}$ for each $1\le i\le n-1$. Then there are nonzero quaternions $a_1, \cdots, a_{n-1}$ such that $A(X_{i, i+1})=a_iX_{i, i+1}$. Set $b_n=1$ and $b_i=(a_i\cdots a_{n-1})^{-1}$ for $1\le i\le n-1$, and let $g=\text{diag}(b_1, \cdots, b_n)$. By composing $A$ with $\text{Ad}_g\mbox{\Large \(|\)\normalsize}_{\mathfrak n_\H}$ we may assume that $A(X_{i,i+1})=X_{i, i+1}$ for each $1\le i\le n-1$. Now for each $1\le i\le n-2$, $\mathbb H X_{i, i+1}$ and $\mathbb H X_{i+1, i+2}$ generate a Lie subalgebra isomorphic to $\mathfrak n_{3,\H}$. By the previous paragraph we see that there are $\lambda_i, \mu_i$ satisfying $\lambda_i^2=\mu_i^2=(\lambda_i\mu_i)^2=-1$ such that $A((a_0+a_1i+a_2j+a_3k)X)=(a_0+a_1\lambda_i+a_2\mu_i+a_3\nu_i)X$ for $X=X_{i,i+1}, X_{i+1, i+2}, X_{i, i+2}$, where $\nu_i=\lambda_i\mu_i$ and $a_0, a_1, a_2, a_3\in \mathbb R$. By considering the Lie subalgebra generated by $X_{i+1, i+2}$ and $X_{i+2, i+3}$ and comparing the values for $A((a_0+a_1i+a_2j+a_3k)X_{i+1, i+2})$ we get $\lambda_i=\lambda_{i+1}$, $\mu_i=\mu_{i+1}$ and $\nu_i=\nu_{i+1}$. It follows that $A=\hat{h}_{\lambda, \mu, \nu}$ with $\lambda=\lambda_1$, $\mu=\mu_1$ and $\nu=\lambda\mu$. \end{proof} \bigskip \begin{proof}[Proof of Lemma \ref{graded-quaternion} in the complex case] Let $n\ge 4$. The proofs of (1) and (3) are the same as in the real case by using Theorem \ref{diedo}.
As remarked above, the automorphism $\hat h$ is either the identity map or the complex conjugation. (2) The proof is by induction on $n$. We first consider the case $n=4$. Let $A: {\mathfrak n}_{4, \mathbb C} \rightarrow {\mathfrak n}_{4, \mathbb C}$ be a graded automorphism. We observe that ${\mathfrak n}_{n, \mathbb C}$ is the complexification of $\mathfrak n_n$. An easy calculation shows $\text{rank}(\text{ad}\, x)=\dim(\text{ad}\, x( {\mathfrak n}_{4, \mathbb C}))\ge 4$ for any nonzero element $x$ in the first layer of $ {\mathfrak n}_{4, \mathbb C} $. Clearly, $\text{rank}(\text{ad}\, x)\le 6$. So the condition $$\max\{\text{rank}(\text{ad}\, x)|x\in V_1\}<2 \min\{\text{rank}(\text{ad}\, x)|0\not=x\in V_1\}$$ in \cite[Lemma 4.7]{KMX1} is satisfied and we conclude that every graded automorphism of ${\mathfrak n}_{4, \mathbb C}$ is either complex linear or complex antilinear. So after possibly composing $A$ with the complex conjugation we may assume $A$ is complex linear. The rest of the argument in the case $n=4$ is the same as in the real case. Now let $n\ge 5$ and assume the statement holds for all integers less than $n$. Let $A: {\mathfrak n}_{n, \mathbb C} \rightarrow {\mathfrak n}_{n, \mathbb C}$ be a graded automorphism. Arguing as in the real case using rank, we see that after possibly composing $A$ with $\tau$ we have $A(\mathbb C X_{i, i+1})=\mathbb C X_{i, i+1}$ for each $i$. So $A (\mathfrak n_+)=\mathfrak n_+$ and $A (\mathfrak n_-)=\mathfrak n_-$, where $\mathfrak n_+$ is the Lie sub-algebra of $ {\mathfrak n}_{n, \mathbb C}$ generated by $\{X_{i, i+1}, 1\le i\le n-2\}$ and $\mathfrak n_-$ is the Lie sub-algebra of $ {\mathfrak n}_{n, \mathbb C}$ generated by $\{X_{i, i+1}, 2\le i\le n-1\}$. Since $\mathfrak n_+$ and $\mathfrak n_-$ are isomorphic to ${\mathfrak n}_{n-1, \mathbb C} $, the induction hypothesis implies that each of $A|_{\mathfrak n_+}$, $A|_{\mathfrak n_-}$ is either complex linear or complex antilinear. Since $\mathfrak n_+$ and $\mathfrak n_-$ have nontrivial intersection, either both $A|_{\mathfrak n_+}$, $A|_{\mathfrak n_-}$ are complex linear or both are complex antilinear. Hence after possibly composing $A$ with the complex conjugation we may assume $A$ is complex linear. As $A$ also satisfies $A(\mathbb C X_{i, i+1})=\mathbb C X_{i, i+1}$ for all $i$, we see that $A=\text{Ad}_g$ for some $g=\text{diag}(\lambda_1, \cdots, \lambda_n)\in G_{\mathbb C}$. This finishes the proof of (2). \end{proof} \bigskip The ``Hermitian product'' on $\mathbb H^n$ is defined by $<z,w>=\sum_i\bar{z_i}w_i$ for $z=(z_i), w=(w_i)\in \mathbb H^n$. Then one can check by direct calculation that $\overline{<z,w>}=<w,z>$ and $<A^*z, w>=<z, Aw>$ for $z,w\in \mathbb H^n$ and $A\in M_n(\mathbb H)$. For any $\mathbb H$-linear subspace (submodule) $W$ of $\mathbb H^n$, the ``orthogonal complement'' $W^\perp$ is defined by $W^\perp=\{z\in \mathbb H^n| <z,w>=0\; \forall w\in W\}$. Lemma \ref{lem_action_aut_g_on_f} holds for $F=\mathbb C, \mathbb H$ where in (5) the automorphism $\Phi_0$ is induced by the map $A\mapsto (A^*)^{-1}$. The proof of Lemma \ref{lem_action_aut_g_on_f} (5) goes through in the quaternion case since $$<(g^{-1})^*W^+_j, gW^-_{n-j}>=<W^+_j, g^{-1}gW^-_{n-j}>=0.$$ Of course, the proof is also valid in the complex case if we use the standard Hermitian product in $\mathbb C^n$. Lemmas~\ref{lem_properties_fibration_between_flag_manifolds} and \ref{lem_dilation_dynamics} hold in the complex and quaternion cases without change.
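We also record, for completeness, the one-line computations behind the stated properties of the ``Hermitian product'' above (with $A^*$ the conjugate transpose of $A$): using $\overline{ab}=\bar b\,\bar a$ for $a,b\in \mathbb H$,
$$\overline{<z,w>}=\sum_i\overline{\bar{z_i}w_i}=\sum_i\bar{w_i}z_i=<w,z>, \qquad <A^*z,w>=\sum_{i,j}\overline{\overline{a_{ji}}\,z_j}\,w_i=\sum_{i,j}\bar{z_j}a_{ji}w_i=<z,Aw>.$$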
\bigskip \subsection{Changes needed for Section 3}\label{changes3} Lemma~\ref{lem_main_no_oscillation} and Corollary \ref{cor_preservation_coset_foliation} hold in the complex case for $n\ge 4$ and in the quaternion case for $n\ge 3$. Note that the analog of Lemma~\ref{lem_main_no_oscillation} in the $F=\mathbb{C}$ case fails when $n=3$, as in the real $n=3$ case. We indicate the changes needed in the quaternion case below. We skip the complex case since it is similar. Alternatively the complex case also follows from Corollary 8.2 of \cite{KMX1} as by Lemma \ref{graded-quaternion} (2) every graded automorphism of $\mathfrak n_{n, \mathbb C}$ is either complex linear or complex antilinear. \begin{lemma} \label{no_oscillation_quarternion} Let $n\ge 3$. Let $U\subset N_{\mathbb H}$ be a connected open subset, and for some $p>\nu$ let $f:N_{\mathbb H}\supset U\rightarrow N_{\mathbb H}$ be a $W^{1,p}_{\operatorname{loc}}$-mapping whose Pansu differential is an isomorphism almost everywhere. Then after possibly composing with $\tau$, if necessary, for a.e. $x\in U$ the Pansu differential $D_Pf(x)$ preserves the subspace $\H X_{i,i+1}\subset V_1$ for every $1\leq i\leq n-1$. \end{lemma} Here we indicate the differential forms used in the calculations, the rest of the argument being the same. For $1\le s<t\le n$, denote $Y_{st}=iX_{st}$, $Z_{st}=jX_{st}$, $W_{st}=kX_{st}$. Then $\{X_{st}, Y_{st}, Z_{st}, W_{st} |1\le s< t\le n\}$ form a basis of left invariant vector fields on $N$. The only nontrivial bracket relations between the basis elements are given by (for $1\le s_1<s_2<s_3\le n$): $$-[X_{s_1 s_2}, X_{s_2 s_3}]=[Y_{s_1 s_2}, Y_{s_2 s_3}]=[Z_{s_1 s_2}, Z_{s_2 s_3}]=[W_{s_1 s_2}, W_{s_2 s_3}]=-X_{s_1s_3}$$ $$[X_{s_1 s_2}, Y_{s_2 s_3}]=[Y_{s_1 s_2}, X_{s_2 s_3}]=[Z_{s_1 s_2}, W_{s_2 s_3}]=-[W_{s_1 s_2}, Z_{s_2 s_3}]=Y_{s_1s_3}$$ $$[X_{s_1 s_2}, Z_{s_2 s_3}]=[Z_{s_1 s_2}, X_{s_2 s_3}]=[W_{s_1 s_2}, Y_{s_2 s_3}]=-[Y_{s_1 s_2}, W_{s_2 s_3}]=Z_{s_1s_3}$$ $$[X_{s_1 s_2}, W_{s_2 s_3}]=[W_{s_1 s_2}, X_{s_2 s_3}]=[Y_{s_1 s_2}, Z_{s_2 s_3}]=-[Z_{s_1 s_2}, Y_{s_2 s_3}]=W_{s_1s_3}. $$ Let $\alpha_{st}$, $\beta_{st}$, $\gamma_{st}$, $\eta_{st}$ be the dual basis of left invariant $1$-forms. We have $$d\alpha_{s_1s_3}=\sum_{s_1<s_2<s_3} (-\alpha_{s_1s_2}\wedge\alpha_{s_2s_3}+\beta_{s_1s_2}\wedge \beta_{s_2s_3}+\gamma_{s_1s_2}\wedge \gamma_{s_2s_3}+\eta_{s_1s_2}\wedge\eta_{s_2s_3} ) $$ $$d\beta_{s_1s_3}=\sum_{s_1<s_2<s_3} (-\alpha_{s_1s_2}\wedge\beta_{s_2s_3}-\beta_{s_1s_2}\wedge \alpha_{s_2s_3}-\gamma_{s_1s_2}\wedge \eta_{s_2s_3}+\eta_{s_1s_2}\wedge\gamma_{s_2s_3} ) $$ $$d\gamma_{s_1s_3}=\sum_{s_1<s_2<s_3} (-\alpha_{s_1s_2}\wedge\gamma_{s_2s_3}-\gamma_{s_1s_2}\wedge \alpha_{s_2s_3}-\eta_{s_1s_2}\wedge \beta_{s_2s_3}+\beta_{s_1s_2}\wedge\eta_{s_2s_3} ) $$ $$d\eta_{s_1s_3}=\sum_{s_1<s_2<s_3} (-\alpha_{s_1s_2}\wedge\eta_{s_2s_3}-\eta_{s_1s_2}\wedge \alpha_{s_2s_3}-\beta_{s_1s_2}\wedge \gamma_{s_2s_3}+\gamma_{s_1s_2}\wedge\beta_{s_2s_3} ). $$ We pull back the following closed left invariant forms $$\omega_+=\bigwedge_{2\le s\le n} (\alpha_{1s}\wedge\beta_{1s}\wedge\gamma_{1s}\wedge\eta_{1s})$$ $$\omega_-=\bigwedge_{1\le s\le n-1} (\alpha_{sn}\wedge\beta_{sn}\wedge\gamma_{sn}\wedge\eta_{sn}).$$ By Lemma \ref{graded-quaternion} on graded automorphisms, the pull-back has the form $f_P^*\omega_+=u_+\omega_++u_-\omega_-$ as before. 
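Note that $\omega_+$ and $\omega_-$ are indeed closed: by the formulae above, every summand of $d\alpha_{1s}$ (and likewise of $d\beta_{1s}$, $d\gamma_{1s}$, $d\eta_{1s}$) contains a factor $\alpha_{1r}$, $\beta_{1r}$, $\gamma_{1r}$ or $\eta_{1r}$ with $1<r<s$, and this factor already occurs in $\omega_+$; hence every term of $d\omega_+$ contains a repeated $1$-form and vanishes. The same argument applies to $\omega_-$, using the factors $\alpha_{rn}$, $\beta_{rn}$, $\gamma_{rn}$, $\eta_{rn}$ with $s<r<n$.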
Let $$\eta_-=\bigwedge_{2\le s<t\le n} (\alpha_{st}\wedge\beta_{st}\wedge \gamma_{st}\wedge\eta_{st})$$ $$\eta_+=\bigwedge_{1\le s<t\le n-1} (\alpha_{st}\wedge\beta_{st}\wedge \gamma_{st}\wedge\eta_{st}).$$ We apply the pull-back theorem to $f_P^*\omega_+$ and $\eta=i_X\eta_-$, $i_X\eta_+$, where $X\in\{X_{s (s+1)}, Y_{s(s+1)}, Z_{s(s+1)}, W_{s(s+1)}\}$ and $i_X$ denotes the interior product with respect to $X$. As before, this yields $Xu_+=0$ for $X\in\{X_{s (s+1)}, Y_{s(s+1)}, Z_{s(s+1)}, W_{s(s+1)}\}$ with $2\le s\le n-1$ and $Xu_-=0$ for $X\in\{X_{s (s+1)}, Y_{s(s+1)}, Z_{s(s+1)}, W_{s(s+1)}\}$ with $1\le s\le n-2$. The rest of the argument is the same as in the real case. \bigskip \subsection{Changes needed for Section 4}\label{changes4} In the quaternion case, we note that the action of $PGL(n, \mathbb H)$ on the projective frames is still transitive, but is no longer free. The reason is that $a I_n$ defines a nontrivial element in $PGL(n, \mathbb H)$ for $a\in \mathbb H\setminus \mathbb R$, but fixes the standard projective frame. Below is a version of Lemma \ref{lem_special_affine} for the quaternion case. A similar statement holds for the complex case. A line in $\mathbb H^2$ is a subset of the form $\{(a_1, a_2)x+(b_1, b_2)|x\in \mathbb H\}$ for some $(b_1, b_2)\in \mathbb H^2$, $(0,0)\not=(a_1, a_2)\in \mathbb H^2$. This line is said to be parallel to $(a_1, a_2)=a_1e_1+a_2e_2$. \begin{lemma} \label{lem_special_affine.quaternion} Suppose $U\subset \mathbb H^2$ is a connected open subset, and $$\phi =(\phi_1,\phi_2):\mathbb H^2\supset U\rightarrow\mathbb H^2$$ is a continuous map such that for every $v\in \{e_1,e_2,e_1+e_2\}$ and every line $L$ parallel to $v$, the image of $L\cap U$ is contained in a line parallel to $v$. Assume further that for each $q\in \{i, j, k\}$, lines parallel to $e_1+qe_2$ are mapped into lines (not necessarily parallel to $e_1+qe_2$). Then $\phi$ is of the form \begin{equation} \phi(x,y)=(a h(x)+b_1, a h(y)+b_2) \end{equation} where $a, b_1,b_2\in \mathbb H$ and $h: \mathbb H\rightarrow \mathbb H$ is either an automorphism of $\mathbb H$ or the zero map. \end{lemma} \begin{proof} The argument of Lemma \ref{lem_special_affine} yields that $\phi$ is of the form $\phi(x,y)=(m(x)+b_1, m(y)+b_2)$, where $b_1,b_2\in \mathbb H$ and $m:\mathbb H\rightarrow \mathbb H$ is a real linear map. We shall show that either $m$ is the zero map or there is some automorphism $h$ of $\mathbb H$ and some $a\in \mathbb H$ such that $m(x)=a h(x)$. We may assume $b_1=b_2=0$ after possibly composing $\phi$ with a translation. Then the assumption implies that the line $(1,i)\mathbb H$ is mapped by $\phi$ into a line $(a_1, a_2)\mathbb H$ (at least one of $a_1, a_2$ is nonzero) through the origin. There are $t_1, t_2\in \mathbb H$ such that $(m(1), m(i))=\phi(1,i)=(a_1t_1, a_2t_1)$ and $(m(i), m(-1))=\phi(i, -1)=(a_1t_2, a_2t_2)$. By comparing the components we get $m(i)=a_2t_1=a_1t_2$, $m(1)=a_1t_1=-a_2t_2$. Suppose $m(1)=0$. As at least one of $a_1, a_2$ is nonzero we have $t_1=0$ or $t_2=0$, which implies $m(i)=0$. Similarly $m(j)=m(k)=0$. In this case $m$ is the zero map. Now we assume $m(1)\not=0$. Then there is some $\lambda\in \mathbb H$ such that $\phi(i, -1)=\phi(1,i)\lambda$. By comparing the two components of both sides we get $m(i)=m(1)\lambda$ and $-m(1)=m(-1)=m(i)\lambda$, which yields $\lambda^2=-1$. 
Similarly by considering the lines $(1,j)\mathbb H$ and $(1,k)\mathbb H$ we see that there are $\mu$ and $\nu$ satisfying $\mu^2=\nu^2=-1$ such that $m(j)=m(1)\mu$, $-m(1)=m(-1)=m(j)\mu$, $m(k)=m(1)\nu$ and $-m(1)=m(-1)=m(k)\nu$. Since $(j,k)=(1,i)j\in (1,i)\mathbb H$, there is some $c\in \mathbb H$ such that $\phi(j,k)=\phi(1,i) c$. This gives us $m(j)= m(1) c$ and $m(k)=m(i)c$. As we also have $m(i)=m(1)\lambda$, $m(j)=m(1)\mu$ and $m(k)=m(1)\nu$, we conclude $c=\mu$ and $\nu=\lambda \mu$. The three numbers $\lambda, \mu,\nu$ satisfy $\lambda^2=\mu^2=\nu^2=-1$ and $\nu=\lambda\mu$. As $m$ is $\mathbb R$-linear, for any $x=x_0+ix_1+jx_2+kx_3$ ($x_i\in \mathbb R$) we get $m(x)=m(1) h_{\lambda, \mu, \nu}(x)$. \end{proof} The arguments in Section \ref{sec_rigidity_foliation_preserving_maps} show that we may assume the map $\phi$ sends lines parallel to $v\in \{e_1, e_2, e_1+e_2\}$ into lines parallel to $v$. We claim that for any $q\in \{i,j,k\}$, lines parallel to $e_1+qe_2$ are mapped by $\phi$ into a family of parallel lines (not necessarily parallel to $e_1+qe_2$). To see this, we notice that for a suitable diagonal matrix $g=\text{diag}(a_1, a_2, 1)\in GL(3, \mathbb H)$, $g\circ \hat{f}_1$ satisfies $g\circ \hat{f}_1(\text{span}(e_1+qe_2+e_3))=\text{span}(e_1+qe_2+e_3)$ and $g\circ \hat{f}_1(\hat{W}^i_1)=\hat{W}^i_1$ for $i=1,2,3$. Here, since $\H^3$ is a right $\H$-module, for any $(x_1, x_2, x_3)\in \H^3$ we write $\operatorname{span}(x_1,x_2, x_3)=\{(x_1x, x_2x, x_3x)|x\in \H\}$. Then it follows that $\bar g\circ \phi$ sends lines parallel to $e_1+qe_2$ to lines parallel to $e_1+qe_2$, where $\bar{g}:\mathbb H^2\rightarrow \mathbb H^2$ is the linear map given by the diagonal matrix $\text{diag}(a_1, a_2)$. Consequently $\phi$ sends lines parallel to $e_1+qe_2$ into a family of parallel lines. \bigskip \begin{proof}[Proof of Counterpart of Lemma \ref{thm_fiber_preserving_projective} in the case $n=3$, $F=\mathbb H$] Only the last paragraph and the third-to-last paragraph of the proof of Lemma \ref{thm_fiber_preserving_projective} need some changes. As before we have $\hat f_1([e_3])=[e_3]$. Hence for small $r>0$ we may define $\phi:\H^2\supset B(0,r)\rightarrow \H^2$ by $\operatorname{span}(\phi(x_1,x_2),1)=\hat f_1(\operatorname{span}(x_1,x_2,1))$. The fact $\hat f_1([e_3])=[e_3]$ implies $\phi(0,0)=(0,0)$. By Lemma \ref{lem_special_affine.quaternion} the map $\phi: \mathbb H^2\supset B(0,r) \rightarrow \mathbb H^2$ in this case has the form $\phi(x_1, x_2)=(a h(x_1)+b_1, a h(x_2)+b_2)$ where $a, b_1,b_2\in \mathbb H$ and $h: \mathbb H\rightarrow \mathbb H$ is either an automorphism of $\mathbb H$ or the zero map. As $\phi(0,0)=(0,0)$, we have $b_1=b_2=0$ and so $\phi(x_1, x_2)=(a h(x_1), a h(x_2))$. The argument for $m\not=0$ applies here and shows that $h$ is an automorphism rather than the zero map. Since any automorphism of $\mathbb H$ is inner, $h(x)=bxb^{-1}$ for some $0\not= b\in \mathbb H$ and so $\phi(x_1, x_2)=(a bx_1b^{-1}, a bx_2b^{-1})$. Let $\hat g:=\operatorname{diag}(ab, ab, b)\in\operatorname{PGL}(3,\H)$. Then $\hat g_1:\mathcal{F}_1\rightarrow\mathcal{F}_1$ agrees with $\hat f_1$ near $[e_3]$. Since $\hat g$ and $\hat f$ are both fibration-preserving, they agree near $x=(W_1^-,W_2^-)$.
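Here the agreement of $\hat g_1$ and $\hat f_1$ near $[e_3]$ is immediate in the affine chart used above: since spans are taken with respect to the right $\H$-module structure,
$$\hat g\bigl(\operatorname{span}(x_1,x_2,1)\bigr)=\operatorname{span}(abx_1, abx_2, b)=\operatorname{span}(abx_1b^{-1}, abx_2b^{-1}, 1)=\operatorname{span}(\phi(x_1,x_2),1),$$
which is precisely the defining relation for $\hat f_1$ near $[e_3]$.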
\end{proof} \bigskip \subsection{Changes needed for Section 5}\label{changes5} The counterpart of Theorem \ref{thm_main} for the complex and quaternion cases is: \begin{theorem}\label{thm_mainF} Let $U\subset \mathcal{F}_F$ be a connected open subset, and $f:U\rightarrow \mathcal{F}_F$ be a $W^{1,p}_{\operatorname{loc}}$-mapping for $p>\nu$, such that the Pansu differential is an isomorphism almost everywhere. \begin{enumerate} \item If $F=\mathbb C$ and $n\ge 4$, then $f$ is the restriction of a diffeomorphism $\mathcal{F}\rightarrow \mathcal{F}$ of the form $\psi^{\varepsilon_1}\circ \mathcal{C}^{\varepsilon_2} \circ g$ where $g\in \operatorname{PGL}(n,\mathbb{C})$, $\varepsilon_i\in \{0,1\}$ and $\mathcal C$ is complex conjugation. \newline \item If $F=\mathbb H$ and $n\ge 3$, then $f$ is the restriction of a diffeomorphism $\mathcal{F}\rightarrow \mathcal{F}$ of the form $\psi^{\varepsilon} \circ g$ where $g\in \operatorname{PGL}(n,\H)$, $\varepsilon\in \{0,1\}$. \end{enumerate} \end{theorem} The proof of Theorem \ref{thm_mainF} is the same as that of Theorem \ref{thm_main}, except that in the complex case we may need to compose $f$ with the complex conjugation. \bigskip \section{Global quasiconformal homeomorphisms} \label{sec_global_qc_homeos} In this section we identify all global nondegenerate Sobolev maps $N\rightarrow N$. These are exactly the graded affine maps of $N$. This result is an immediate consequence of Theorem \ref{thm_main}. An affine map of a Lie group $G$ is a map of the form $L_g\circ \phi$, where $\phi$ is an automorphism of $G$ and $L_g$ is left translation by $g\in G$. A graded affine map of a Carnot group $N$ is an affine map where the automorphism is a graded automorphism of $N$. The following result applies to global quasiconformal homeomorphisms since quasiconformal maps are nondegenerate Sobolev maps. \begin{theorem}\label{rigidity global qc}~ Let $N$ be the Iwasawa group of $\text{GL}(n, F)$, with $n\ge 4$ for $F=\mathbb R, \mathbb C$ and $n\ge 3$ for $F=\mathbb H$. Suppose $f:N\rightarrow N$ is a $W^{1,p}_{\operatorname{loc}}$-mapping for $p>\nu$, such that the Pansu differential is an isomorphism almost everywhere. Then $f$ is a graded affine map of $N$. \end{theorem} \begin{proof} Let $f$ be as above. We shall show that there is a graded affine map $f_0$ such that $f_0^{-1}\circ f$ is the identity map. After replacing $f$ with $L_{f(0)^{-1}}\circ f$ we may assume $f(0)=0$. We identify $N$ with $\hat N$ and view $f$ as a map $\mathcal F\supset \hat N\rightarrow \hat N\subset \mathcal F$. We first consider the case $F=\mathbb R$. By Theorem \ref{thm_main}, $f$ is the restriction to $\hat N$ of a map of the form $\psi^{\varepsilon}\circ g$ where $g\in \operatorname{GL}(n,\mathbb{R})$, $\varepsilon\in \{0,1\}$. Recall that the automorphism $\tau=I_{\Pi}\circ \Phi_0$ of $\text{GL}(n, \mathbb R)$ induces a graded automorphism (again denoted by $\tau$) of $N=\hat N$ and acts on $\mathcal F$ as $\Pi\circ \psi$. By replacing $f$ with $\tau\circ f$ if necessary (when $\varepsilon=1$), we may assume $\varepsilon=0$ and so $f=g|_{\hat N}$. This implies $g\in P^-$ as the stabilizer of $0$ is $P^-$. So $g=(g_{ij})$ is a lower triangular matrix. Now we can further assume that the entries on the diagonal of $g$ are $1$, after replacing $g$ with $D^{-1}g$, where $D=\text{diag}(g_{11}, \cdots, g_{nn})$. So now $g$ is a lower triangular matrix with $1$s on the diagonal and such that $g(\hat N)=\hat N$. We next show that $g=I_n$. Suppose $g\not=I_n$.
We shall find a flag $F\in \hat N$ such that $g(F)\notin \hat N$, contradicting $g(\hat N)=\hat N$. Let $1\le k\le n-1$ be the integer such that $g_{ij}=0$ for all $i>j>k$ and $g_{jk}\not=0$ for some $j>k$. Let $j_0>k$ be such that $g_{j_0k}\not=0$ and $g_{jk}=0$ for all $j>j_0$. Denote $v_{j_0}=e_k-\sum_{j=k+1}^{j_0} g_{jk} e_j$. Let $F=\{W_j\}$ be the flag defined by $W_j=W^-_j$ for $1\le j\le n-j_0$ and $n-k<j\le n$, $W_{n-j_0+1}=\text{span}\{e_n, \cdots, e_{j_0+1}, v_{j_0}\}$ and $W_j=\text{span}\{e_n, \cdots, e_{j_0+1}, v_{j_0}, e_{j_0-1}, \cdots, e_{n-j+1}\}$ for $n-j_0+2\le j\le n-k$. It is straightforward to check that $F\in \hat N$. However, $g(v_{j_0})=e_k$ and $g(W_{n-j_0+1})\cap W^+_{j_0-1}$ contains the nontrivial element $e_k$ and so $g(F)\notin \hat N$. The proofs in the complex and quaternion cases are the same except that we use Theorem \ref{thm_mainF} and in the complex case we may need to compose $f$ with the complex conjugation. \end{proof}
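For instance (keeping the notation of the proof and taking $n=4$), if $g$ is the unipotent lower triangular matrix whose only nonzero entry below the diagonal is $g_{31}\not=0$, then $k=1$, $j_0=3$ and $v_3=e_1-g_{31}e_3$. The flag constructed above is $W_1=W^-_1$, $W_2=\text{span}\{e_4, v_3\}$, $W_3=\text{span}\{e_4, v_3, e_2\}$, $W_4=\mathbb R^4$, and indeed $g(v_3)=(e_1+g_{31}e_3)-g_{31}e_3=e_1$, so that $g(W_2)=\text{span}\{e_4, e_1\}$ meets $W^+_2$ in the nontrivial element $e_1$ and $g(F)\notin \hat N$.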
\section{Introduction} In the mid-1970s, several years after the appearance of the singularity theorems of Penrose and Hawking (see e.g.\ \cite[Ch.\ 8]{HawEll73}), D.\ Gannon \cite{Gan75,Gan76} and C.W.\ Lee \cite{Lee76} independently derived a body of results that relate the singularities of a Lorentzian manifold to its topology. More precisely, in these results, often dubbed Gannon-Lee theorems, they established that under an appropriate asymptotic flatness assumption, a non-trivial fundamental group of a (partial) Cauchy surface $\Sigma$ necessarily leads to the existence of incomplete causal geodesics. Most of these early results assumed global hyperbolicity, with the notable exception of \cite[Thms.\ 2.1-2]{Gan75}. However, the proofs relied on the false deduction that maximizing geodesics in a covering spacetime project to maximizing geodesics of the base, as pointed out in \cite{Gal83}. In the same paper Galloway proved a Gannon-Lee theorem without assuming global hyperbolicity by invoking the Hawking-Penrose theorem and heavily using a result from geometric measure theory. The latter fact was also reflected in the formulation of the main theorem, which assumed an extrinsic condition on the three-surface $\Sigma$, and the topological condition was that $\Sigma$ is not a handlebody\footnote{Under these conditions \cite{MSY82} guarantees the existence of a trapped surface within $\Sigma$.}. A related recent result for the globally hyperbolic case in a setting compatible with a positive cosmological constant was given in \cite{GL18}, providing a precise connection between the topology of a future expanding compact Cauchy surface and the existence of past singularities. Another issue with the earlier results was that the nontrivial topology was confined to a compact region of a (partial) Cauchy surface bounded by a topological $2$-sphere $S$. In the context of topological censorship \cite{FSW95}, $S$ is naturally interpreted as a section of an event horizon and in four dimensions the $S^2$ topology is only natural in the light of Hawking's black hole topology theorem \cite[Sec.\ 9.2]{HawEll73}. However, since the latter fails to hold in higher dimensions, with more complicated horizon topologies occurring (see \cite{Emp08}, but also \cite{GalSch06} for corresponding restrictions), the demand for higher dimensional Gannon-Lee type results with more natural assumptions on the topology of $S$ arises. Such a result was indeed given by Costa e Silva in \cite{Sil10}, which also avoided the assumption of global hyperbolicity. More precisely, the result was given for causally simple spacetimes, i.e.\ causal spacetimes where the causality relation is closed. However, we have recently discovered that this proof relies on an analogous false deduction: It uses that causal simplicity lifts to coverings, which is not true in general, as was explicitly shown in \cite{MinSil20}. In the latter paper Minguzzi and Costa e Silva also present a corrected result, which replaces the condition of causal simplicity by the assumption of past reflectivity, which then also has to be assumed for certain covering spacetimes. The line of arguments using past reflectivity in the context of the Penrose singularity theorem was put forward in \cite{Min19a}, which also argues that this condition holds true in a black hole evaporation scenario.
In the causality part of our arguments we will follow this path\footnote{ In an earlier version of this manuscript we aimed at a different strategy, replacing causal simplicity by null pseudoconvexity, which \emph{does} lift to Lorentzian coverings. However, we have learned from Ettore Minguzzi that a crucial step in our argument, securing that maximal null pseudoconvexity plus strong causality implies causal simplicity, see also \cite[Thm.\ 2]{VMPRE19}, is false. An explicit counterexample was later given in \cite{HMS:21}. However, we do not know whether null pseudoconvexity plus strong causality implies causal simplicity, a fact which would allow for an argument avoiding assumptions on the causality of the cover.}. \medskip In this paper we extend the validity of the Gannon-Lee theorems to Lorentzian metrics of low regularity, in particular to (certain) $C^1$-spacetimes. Having low regularity singularity theorems at hand is especially favorable in the context of extending spacetimes, as already noted in \cite[Ch.\ 8]{HawEll73}. In particular, they rule out the possibility of extending the spacetime to a complete one, even under mild regularity assumptions on the metric. Related results on $C^0$- and Lipschitz non-extendability have recently been given in \cite{Sbie18,GSL18,CK18,Sbie20}. Indeed, during the last couple of years the classical singularity theorems of Penrose, Hawking and Hawking-Penrose have been extended to $C^{1,1}$-regularity in \cite{KSSV15}, \cite{KSV15}, and \cite{GGKS18}. These results built upon extensions of Lorentzian causality theory to low regularity \cite{CG12,KSSV14,Min15,Sam16}. Most recently, Graf in \cite{Graf19} was able to further lower the regularity assumptions for the Penrose and the Hawking theorem to $C^1$. We will extend her recent techniques, in particular to the non-globally hyperbolic setting, to prove our result. \medskip This note is organized in the following way: In the next section we discuss preliminaries for our work, especially the intricacies arising in $C^1$-regularity, and we state our results. In section \ref{sec:c1} we lay the analytical foundations of the proof of the main theorem. In particular, we will show that any causal maximizer in a $C^1$-spacetime is a geodesic and hence a $C^2$-curve. Finally, in section \ref{sec:proof} we provide new focusing results for null geodesics and collect all our results together to prove the main theorem. \section{Preliminaries and results}\label{sec:results} We begin by introducing our main notations and conventions, as well as some basic notions that are necessary to give a precise formulation of our results. Our main reference for all matters of Lorentzian geometry is \cite{ON83}. We will assume that all manifolds $M$ are smooth, Hausdorff, as well as second countable and of dimension $n$ with $n\geq 3$. We will consider Lorentzian metrics $g$ on $M$ that are of regularity at least $C^1$ with signature $(-,+,\dots,+)$. Another important regularity class is $g\in C^{1,1}$, which means that $g$ is $C^1$ and its first derivatives are locally Lipschitz continuous. A spacetime is a Lorentzian manifold with a time orientation, which we assume to be induced by a smooth timelike vector field. A curve $\gamma:I\to M$ defined on some interval $I$ is called timelike, causal, null, future or past directed, or spacelike, if $\gamma$ is locally Lipschitz continuous and its velocity vector $\dot\gamma$, which exists Lebesgue almost everywhere, has the respective property.
We denote the timelike and causal relation by $p\ll q$ and $p \leq q$, respectively and write $I^+(A)$ and $J^+(A)$ for the chronological and causal future of a set $A\subseteq M$. Finally we denote the future horismos of $A$ by $E^+(A):=J^+(A)\bs I^+(A)$. The respective past versions of these sets will be denoted by $I^-$, $J^-$ and $E^-$, respectively. When we refer to these sets with regard to a particular metric $g$, it will appear in subscript, e.g.\ $E_g^+(A)$ denotes the future horismos of $A$ w.r.t.\ $g$. For Lorentzian metrics $g_1$, $g_2$ one says that $g_1$ has narrower lightcones than $g_2$ (respectively, $g_2$ has wider lightcones than $g_1$), denoted as $g_1 \prec g_2$, if $g_1(X,X)\leq 0$ implies $g_2(X,X)<0$ for any $0\not=X\in TM$. Throughout we will fix a complete Riemannian background metric $h$ and use its induced norm $\|\ \|_h$ and distance $d_h$. All local estimates will be independent of the choice of $h$. We will denote the fundamental group of a manifold $M$ by $\pi_1(M)$. Also if $i:N \rightarrow M$ is a continuous map, the induced homomorphism of the fundamental groups is denoted by $i_\#: \pi_1(N) \to \pi_1(M)$. \subsection{Low regularity} During the last couple of years the bulk of Lorentzian causality theory has been transferred to $C^{1,1}$-spacetimes, where the exponential map and convex neighbourhoods are still available. While convexity fails below that regularity \cite{HW51,SS18}, nevertheless most aspects of causality theory can be maintained even under Lipschitz regularity of the metric. Further below some significant changes occur \cite{CG12,KGSS20}, while some robust features continue to hold even in more general settings \cite{Min18,KS18,BS18,GKSS19}. In particular, for $C^1$-spacetimes the push-up principle is still valid and $I^+(A)$ is open for any set $A\subseteq M$. However, there are two essential features one loses when going from the $C^{1,1}$-setting to $C^1$-spacetimes: uniqueness of solutions of the geodesic equation and the local boundedness of the curvature tensor. Given the first fact, one has to make a choice concerning the definition of geodesic completeness. We will follow the natural approach of \cite{Graf19} and define a spacetime to be timelike (respectively null or causal) geodesically complete if \emph{all}\footnote{The alternative would be to only demand the existence of \emph{one} complete geodesic for every timelike (or null or causal) initial condition to the geodesic equation.} inextendible timelike (respectively null or causal) geodesics are defined on $\mathbb{R}$. Concerning the second issue, first note that $C^1$ is well within the maximal class of spacetimes allowing for a (stable definition of) distributional curvature, which is $g$ locally in $H^1\cap L^\infty$ \cite{GT87,LM07,SV09}. The Riemann and the Ricci tensor are then tensor distributions in $\mathcal{D}'\mathcal{T}_{3}^{1}(M)$ and $\mathcal{D}'\mathcal{T}_{2}^{0}(M)$, respectively, where we recall that \begin{equation}\label{eq:3} \mathcal{D}'\mathcal{T}_{s}^{r}(M):=\Gamma_{c}\left(M,T_{r}^{s}M\otimes\vol(M)\right)' =\mathcal{D}'\left(M\right)\otimes_{\,\mathcal{C}^{\infty}}\mathcal{T}_{s}^{r}\left(M\right)\,. \end{equation} Here $\vol(M)$ is the volume bundle over $M$, $\Gamma_c$ denotes spaces of sections with compact support and $\mathcal{D}'\left(M\right)$ is the space of scalar distributions on $M$, i.e.\ the topological dual of the space of compactly supported volume densities $\Gamma_c(M,\vol(M))$. 
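Concretely, if $u$ is a continuous function on a chart domain and $\mu=\phi\,|dx|$ is a test density supported there (with $\phi$ smooth and compactly supported), then $\langle u,\mu\rangle=\int u\,\phi\, dx$ and the distributional derivative acts as
$$\langle \partial_m u,\mu\rangle=-\int u\,\partial_m\phi\, dx.$$
It is in this sense that the first-order derivatives of the (continuous) Christoffel symbols in the coordinate formulae below are to be understood.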
We remark that, containing derivatives of the continuous connection, the distributional Riemann and Ricci tensors are of order one and that the usual coordinate formulae \begin{align} \mathrm{Riem}^m_{\;\;\;ijk} &:= \partial_j \Gamma^m_{ik}-\partial_k \Gamma^m_{ij}+\Gamma^m_{js}\Gamma^s_{ik}-\Gamma^m_{ks}\Gamma^s_{ij}\,,\\ \mathrm{Ric}_{ij} &:= \partial_m \Gamma^m_{ij}-\partial_j \Gamma^m_{im}+\Gamma^m_{ij}\Gamma^k_{km}-\Gamma^m_{ik}\Gamma^k_{jm} \end{align} apply. For further details on tensor distributions see \cite[Ch.\ 3.1]{GKOS01}. \medskip Naturally, we define curvature bounds resp.\ energy conditions using the notion of positivity for distributions, i.e.\ $\mathcal{D}'\left(M\right)\ni u\geq 0$ ($u>0$) if $\langle u,\mu\rangle\geq 0$ ($>0$) for all non-negative (positive) volume densities $\mu\in \Gamma_c(M,\vol(M))$. Then $(M,g)$ is said to satisfy the strong energy condition (resp.\ to have non-negative Ricci curvature) if the scalar distribution $\mathrm{Ric}(\mathcal{X},\mathcal{X})$ is non-negative for all smooth timelike vector fields $\mathcal{X}$. In the case of $g$ being smooth this condition coincides with the classical one, $\mathrm{Ric}(X,X)\geq 0$ for all timelike $X\in T_pM$ and all $p\in M$, by the fact that all such $X$ can be extended to smooth timelike vector fields on $M$. For the same reason the condition for $g\in C^{1,1}$ is equivalent to the condition $\mathrm{Ric}(\mathcal{X},\mathcal{X})\geq 0$ almost everywhere for all smooth timelike vector fields $\mathcal{X}$, used in the context of the $C^{1,1}$-singularity theorems. However, generalizing the null energy condition is trickier due to the obstacles one encounters when extending null vectors. For a detailed discussion see \cite[Sec.\ 5]{Graf19}; following her, we define: \begin{definition}[Distributional null energy condition]\label{def:nec} A $C^1$-metric $g$ satisfies the \textit{distributional null energy condition} if for any compact set $K\subseteq M$ and any $\delta>0$ there exists $\epsilon(\delta,K)$ such that $\text{Ric}(\mathcal{X},\mathcal{X})>-\delta$ (in the sense of distributions) for any local smooth vector field $\mathcal{X} \in \mathfrak{X}(U)$, $U\subseteq K$ with $\|\mathcal{X}\|_h=1$ and which is $\epsilon(\delta, K)$-close to a $C^1$ $g$-null vector field $\mathcal{N}$ on $U$, i.e.\ $||\mathcal{X}-\mathcal{N}||_h<\epsilon(\delta, K)$ on $U$. \end{definition} Again this condition is equivalent to the classical null energy condition if the metric is smooth. Moreover, in case $g\in C^{1,1}$ it is equivalent to the condition used in the $C^{1,1}$-setting, i.e.\ $\text{Ric}(\mathcal{X},\mathcal{X})\geq 0$ for all Lipschitz-continuous local null vector fields $\mathcal{X}$. \medskip One key technique in low regularity Lorentzian geometry is regularization. More specifically, Chrusciel and Grant in \cite{CG12} have put forward a technique to regularize a continuous metric $g$ by smooth metrics $\check g_\varepsilon$ with narrower lightcones resp.\ by a net $\hat g_\varepsilon$ with wider lightcones than $g$. The basic operation (denoted by $*$) is chartwise convolution with a standard mollifier $\rho_\epsilon$, which is globalized using cut-offs and a partition of unity, cf.\ \cite[Thm.\ 3.2.10]{GKOS01}. To manipulate the lightcones in the desired way one has to add a ``spacelike correction term''. The most recent version of this construction, which also quantifies the rate of convergence in terms of $\epsilon$, is \cite[Lem.\ 4.2]{Graf19}, which we recall here.
\begin{Lemma}\label{lem:2.2} Let $(M,g)$ be a spacetime with a $C^1$-Lorentzian metric. Then for any $\epsilon>0$, there exist smooth Lorentzian metrics $\check{g}_\epsilon$ and $\hat{g}_\epsilon$ with $\check{g}_\epsilon \prec g \prec \hat{g}_\epsilon$, both converging to $g$ in $C^1_\text{loc}$. Additionally, on any compact set $K$ there is $c_K>0$ such that for all small $\epsilon$ \begin{equation}\label{eq:neweps} \| \check{g}_\epsilon - g * \rho_\epsilon\|_{\infty,K} \leq c_K \epsilon \quad\text{and}\quad \| \hat{g}_\epsilon - g * \rho_\epsilon\|_{\infty,K} \leq c_K \epsilon\,. \end{equation} \end{Lemma} A main step in the proof of singularity theorems in low regularity is to show that the energy condition (Definition \ref{def:nec}, in our case) implies that the regularized metrics $\hat g_\epsilon$ and/or $\check g_\epsilon$ violate the classical energy conditions (the NEC, in our case) only by a small amount --- small enough, such that null geodesics still tend to focus. Technically this is done by a Friedrichs-type lemma, which in the present case is \cite[Lem.\ 4.5]{Graf19}, and draws essentially from \eqref{eq:neweps}. The corresponding result is then \cite[Lem.\ 5.5]{Graf19}: \begin{Lemma}[Surrogate energy condition]\label{lem:sec} Let $M$ be a $C^1$-spacetime where the distributional null energy condition holds. Then for any compact set $K\subseteq M$, any $c_2>c_1>0$ and all $\delta>0$ there is $\epsilon_0>0$ such that $\forall \epsilon<\epsilon_0$ \begin{align}\nonumber \mathrm{Ric}[\check{g}_\epsilon](X,X) > -\delta\quad \forall X \in TM|_K &\text{ with } \check{g}_\epsilon (X,X)=0\\ &\text{ and } 0<c_1 \leq ||X||_h\leq c_2\,. \end{align} \end{Lemma} Here $\check{g}_\epsilon$ is as in Lemma \ref{lem:2.2} and $\mathrm{Ric}[\check{g}_\epsilon]$ is its Ricci tensor. We will use this result in an essential way when showing compactness of the horismos of a certain set, which is needed for the causal/analytic part of the proof of our Gannon-Lee theorem. A main difference to the arguments in \cite{Graf19} is that we do not assume global hyperbolicity, but are able to compensate for it by assuming non-branching of null maximizers. Formally we define: \begin{definition}\label{def:branch} Let $M$ be a spacetime and let $\gamma:[0,1]\to M$ be a maximizing null curve. We say that $\gamma$ branches if there exists another maximizing null curve $\sigma:[0,1]\to M$ such that, for some $0<a<1$, $\gamma(t)=\sigma(t)$ for all $0\leq t\leq a$ and $\gamma(t)\neq \sigma(t)$ for all $a<t\leq 1$. The point $\gamma(a)$ is called a branching point. If no maximizing null curve branches, we say that there is \textit{no null branching} in $M$. \end{definition} In light of the fact that (even) for $C^1$-metrics causal maximizers are geodesics (to be proven in Theorem \ref{thm:maxgeod}, below), we see that if null branching were to occur at $\gamma(a)$, there would be (at least) two different maximizing solutions to the geodesic equation with initial values $\gamma(a)$ and $\gamma'(a)$. Generally speaking, in the low regularity Riemannian setting, branching of maximizers is associated with sectional curvature unbounded from below. More precisely, in length spaces with a lower curvature bound branching does not occur \cite[Lem.\ 2.4]{Shi93}.
In smooth Lorentzian manifolds sectional curvature bounds, although more delicate to handle, are still characterized by triangle comparison \cite{AB08}. In the setting of Lorentzian length spaces \cite{KS18}, synthetic curvature bounds extending sectional curvature bounds for smooth spacetimes have been introduced. In \cite[Section 4]{KS18} the authors show that a synthetic curvature bound from below prevents the branching of timelike maximizers. Unfortunately it is not clear at the moment how one could extend such a result to triangles with null sides. However, in a merely $C^1$-spacetime the curvature is generically not locally bounded and so it seems natural that an additional condition as in Definition \ref{def:branch} is needed. Indeed, this condition enters in an essential way into our arguments. It will be a topic of future investigations to relate null branching to curvature bounds. \medskip \subsection{Results} Next we introduce the particular notions needed for the formulation of our results. In spirit they reflect Gannon's \cite{Gan75} assumptions on the spacetime, inspired by asymptotic flatness; however, we stay close to the formulations of \cite[Sec.\ 2]{Sil10}. \begin{definition} An \emph{asymptotically regular hypersurface} is a spacelike, smooth, connected partial Cauchy surface $\Sigma$ which possesses an enclosing surface $S$, i.e.\ a compact, connected submanifold of codim.\ 2 in $M$ with the properties \begin{enumerate} \item $S$ separates $\Sigma$ into two (open, sub-) manifolds $\Sigma_+, \Sigma_-$ such that $ \bar \Sigma_+ $ is non-compact and $\Sigma_-$ is connected, \item the map $h_\#:\pi_1(S) \to \pi_1 (\bar \Sigma_+ )$, induced by the inclusion $h: S \to \bar \Sigma_+$, is surjective, \item $k_- >0 $ on $S$, i.e.\ $S$ is inner trapped. \end{enumerate} We further say that a surface $\Sigma$ admits a \emph{piercing} if there exists a timelike vector field $X$ on $M$ such that every integral curve of $X$ meets $\Sigma$ exactly once. \end{definition} To elaborate on this definition, we first fix some notation. Throughout let us denote the future directed timelike unit vector field perpendicular to $\Sigma$ near $S$ by $U$. Further, as $S$ is a hypersurface in $\Sigma$, we denote by $N_\pm$ the unit normals to $S$ in $\Sigma$ such that $N_-$ points into $\Sigma_-$ and $N_+$ into $\Sigma_+$. We obtain future directed null normals to $S$ via $K_\pm := U|_S + N_\pm$. We will refer to $K_-(p)$ as the ingoing null vector. The convergences at a point $p\in S$ are defined via $k_\pm:=g(H_p,K_\pm(p))$, where $H_p$ is the mean-curvature vector field of $S$ at $p$. Observe that since $g\in C^1$, $H_p$ is still continuous and all corresponding formulae hold ``classically''. Also note that any piercing of $\Sigma$ induces a continuous, open map $\rx:M \to \Sigma$, which maps any point $p$ in $M$ to the unique intersection of the integral curve of $X$ through $p$ with $\Sigma$. However, existence of a piercing is a strictly weaker condition than global hyperbolicity, see e.g.\ \cite[p.\ 4, 2nd paragraph]{Sil10}, although it implies that $M$ is homeomorphic to the product $\R\times\Sigma$ as pointed out in \cite[below 3.2]{MinSil20}. \medskip We are now ready to state our main result, a Gannon-Lee theorem for non-globally hyperbolic $C^1$-spacetimes without null branching. We will discuss several of its special cases below.
\begin{Theorem}[$C^1$-Gannon-Lee theorem]\label{thm:glc1} Let $(M,g)$ be a past reflecting, null geodesically complete $C^{1}$-spacetime without null branching and such that the distributional null energy condition holds. Let $\Sigma$ be an asymptotically regular hypersurface (with enclosing surface $S$) admitting a piercing. Further let one of the following conditions hold: \begin{enumerate}[(i)] \item any covering spacetime of $(M,g)$ is past reflecting, or \item $S$ is simply connected and the universal covering spacetime of $(M,g)$ is past reflecting. \end{enumerate} Then the map $i_\# :\pi_1(S) \to \pi_1(\Sigma)$, induced by the inclusion $i: S \to \Sigma$, is surjective. \end{Theorem} Note that for $C^{1,1}$-metrics the geodesic equation is uniquely solvable and hence there can be no null branching. Moreover, using that the distributional null energy condition reduces to the ``almost everywhere condition'' for $C^{1,1}$-metrics, we immediately obtain the following $C^{1,1}$-Gannon-Lee theorem. \begin{Corollary}[$C^{1,1}$-Gannon-Lee theorem] Let $(M,g)$ be a past reflecting, null geodesically complete $C^{1,1}$-spacetime such that the null energy condition $\text{Ric}(\mathcal{X},\mathcal{X}) \geq 0 $ holds for all local Lipschitz-continuous null vector fields $\mathcal{X}$. Let $\Sigma$ be an asymptotically regular hypersurface (with enclosing surface $S$) admitting a piercing. Further let one of the assumptions (i) or (ii) of Theorem \ref{thm:glc1} hold. Then the map $i_\# :\pi_1(S) \to \pi_1(\Sigma)$ is surjective. \end{Corollary} Going back to $C^1$ and assuming global hyperbolicity, we can clearly skip past reflectivity and any assumption on the covering spacetime. Due to results in \cite{Graf19}, the assumption of no null branching is also superfluous. Moreover, in this case also some of the assumptions in \cite{Sil10} can be dropped, as they are implied by the existence of a Cauchy surface. \begin{Corollary} [Globally hyperbolic $C^1$-Gannon-Lee theorem]\label{cor:ghc1} Let $(M,g)$ be a globally hyperbolic, null geodesically complete $C^{1}$-spacetime such that the distributional null energy condition holds. Further, let $\Sigma$ be an asymptotically regular hypersurface (with enclosing surface $S$). Then the map $i_\# :\pi_1(S) \to \pi_1(\Sigma)$ is surjective. \end{Corollary} A simpler formulation is obtained assuming that $S$ is simply connected: the theorems then state that the entire spacetime is simply connected provided it is null complete. Originally, the theorem of Gannon was given in contrapositive form, saying that if $S$ is (topologically) a sphere and $\Sigma$ is not simply connected, then $M$ has to be null incomplete. \medskip In proving our results we will follow the general layout of \cite{Sil10}. The proof consists of a causal and an analytic part as well as a purely topological part. At the heart of the causal part lies Proposition 4.1 of \cite{Sil10}, which we will prove for past reflecting $C^1$-spacetimes without null branching in Proposition \ref{prop:cp-c1}. It essentially states that the inside region $\Sigma_-$ of an asymptotically regular hypersurface $\Sigma$ is relatively compact. We will first establish the needed causality properties for $C^1$-spacetimes. \section{Maximizers and Causality in $C^1$}\label{sec:c1} In this section we establish properties of geodesics and results on causality in $C^1$-spacetimes. Building on recent results of \cite{GraLin18} and \cite{Graf19} we will establish that causal maximizers in $C^1$-spacetimes are geodesics.
Note that this result was also independently discovered very recently in \cite{LLS20}. First, by \cite[Thm.\ 1.1]{GraLin18}, maximizers have a causal character, even in Lipschitz spacetimes. We start by showing that also in a $C^1$-spacetime broken causal geodesics are not maximizing. As a prerequisite we use a variational argument similar to the one in \cite{ON83}, 10.45-46, which still holds true in our setting. \begin{Lemma}\label{l3} Let $c:[0,1]\to M$ be a causal piecewise $C^2$-curve in a $C^1$-spacetime $M$ and let $X$ be a piecewise $C^1$-vector field along $c$. Then there is a piecewise $C^2$-variation $c_s$ of $c$ with variation vector field $X$. Moreover, if $g(X',c')<0$ along $c$, then for any variation $c_s$ of $c$ with variation vector field $X$ and small enough $s$, the longitudinal curve $c_s$ is timelike and longer than $c$. \end{Lemma} \begin{proof} First we extend $X$ to a vector field $\tilde X$ in a neighbourhood of $c$ and set $c_s(t):= \text{Fl}^{\tilde X}_s(c(t))$, which is a variation of $c$ with variation vector field $X$. For the second part note that $g(c_0'(t),c_0'(t))\leq 0$ for all $t$ (except possible break points) as $c$ is causal and further \begin{equation*} \frac{\partial}{\partial s}|_0 g(c_s'(t),c_s'(t)) = 2 g(\frac{\partial}{\partial s}|_0 \frac{\partial}{\partial t} c_s(t),c_0'(t)) = 2 g(\frac{\partial}{\partial t} X(t),c'(t)) <0 \end{equation*} by assumption. So for small $s>0$ we have $g(c_s'(t), c_s'(t))< g(c'(t),c'(t))\leq 0$ (for almost all $t$) and hence $L(c_s)= \int_0^1 (-g(c_s'(t), c_s'(t)))^{\frac{1}{2}}\, dt > \int_0^1 (-g(c_0'(t), c_0'(t)))^{\frac{1}{2}}\, dt =L(c)$. \end{proof} \begin{Lemma}\label{l4} In a $C^1$-spacetime no broken causal geodesic is maximizing. \end{Lemma} \begin{proof} Let $\gamma:[0,2] \to M$ be a broken causal geodesic with a break point at $\gamma(1)$. Hence $v:=\lim_{t \uparrow 1} \gamma'(t)$ and $w:=\lim_{t \downarrow 1} \gamma'(t)$ are linearly independent. Also, since $v$ and $w$ are either both future pointing or both past pointing, we have $\la v,w \ra <0$. Let us show that the parametrization of $\gamma$ can be chosen such that: \begin{equation}\label{eq:vw} \la v,v\ra - \la v, w\ra >0 \quad \text{as well as}\quad \la v,w \ra - \la w,w \ra <0. \end{equation} If both $v$ and $w$ are null, this is clear by $ \la v,w \ra <0$. If both vectors are timelike we can w.l.o.g.\ assume them to be unit vectors. By the reverse Cauchy-Schwarz inequality we then have $| \la v,w \ra | > 1$ (recall that $v$ and $w$ are not collinear) and hence $\la v,w \ra < -1$. So $\la v,v \ra - \la v,w \ra > 0$, and in the same way it follows that $\la v,w \ra - \la w,w\ra <0$. Finally, if exactly one of the two vectors is timelike, say $v$ (the other case is analogous), we normalize $v$ to be a unit vector and rescale $w$, i.e.\ affinely reparametrize the second segment, so that $\la v,w\ra<-1$; then both inequalities in \eqref{eq:vw} hold as well. We now show that there exists a timelike curve from $\gamma(0)$ to $\gamma(2)$ longer than $\gamma$. First note that by continuity of the Christoffel symbols we can solve the linear equations for parallel transport along $\gamma$ (uniquely) and the solution is a $C^1$-vector field. We set $y= w-v$ and let $Y_1$ and $Y_2$ be the vector fields along $\gamma|_{[0,1]}$ and $\gamma|_{[1,2]}$, obtained by parallel transport of $y$ along $\gamma|_{[0,1]}$ and $\gamma|_{[1,2]}$, respectively. Next we define a piecewise $C^1$-vector field $Y$ along $\gamma$ by setting $Y|_{[0,1]}=Y_1$ and $Y|_{[1,2]}=Y_2$. Since $\gamma$ is a geodesic and $Y$ is parallel we have on $[0,1]$ using \eqref{eq:vw} \begin{equation}\label{eq:vw1} \la \gamma'(t), Y(t) \ra = \la v,w-v \ra = \la v,w\ra -\la v,v\ra <0 \end{equation} and similarly on $[1,2]$ we have $\la \gamma'(t),Y \ra = \la w,w\ra - \la v,w\ra >0$.
Now let $f:[0,2]\to [0,\infty)$ be a continuous, piecewise linear function such that $f(0)=0=f(2)$, $f'|_{[0,1)}=1$, and $f'|_{(1,2]}=-1$ and set $X(t):= f(t) Y(t)$. Then $X$ is a piecewise $C^1$-vector field along $\gamma$. Consider the variation $\gamma_s$ of $\gamma$ with variation vector field $X$. Since $X'=f'Y$ almost everywhere, \eqref{eq:vw1} and its analogue on $[1,2]$ give $\la \gamma',X' \ra <0$, and hence by Lemma \ref{l3} there exists some small $s>0$ such that $\gamma_s$ is timelike and longer than $\gamma$. By the choice of $f$ the endpoints agree and we have shown the statement. \end{proof} \begin{Theorem}\label{thm:maxgeod} Let $(M,g)$ be a $C^1$-spacetime. Then any maximizing causal curve is a causal geodesic and hence a $C^2$-curve. \end{Theorem} \begin{proof} Observe that being a geodesic is a local property and that any part of a maximizing curve is maximizing. Moreover, since any point in a $C^1$-spacetime has a globally hyperbolic neighbourhood\footnote{Actually there exists a smooth metric with wider lightcones, which has a neighbourhood base of globally hyperbolic sets, but these are also globally hyperbolic for $g$.}, we can assume $M$ to be globally hyperbolic. Let $\gamma:[0,1]\to M$ be a maximizer and set $p=\gamma(0)$, $q=\gamma(1)$. By \cite{Graf19}, Proposition 2.13, there exists a maximizing causal geodesic from $p$ to $q$ of the same causal character as $\gamma$. Also there exist maximizing causal geodesics $\sigma_1$ from $p$ to $\gamma(\frac{1}{2})$ and $\sigma_2$ from $\gamma(\frac{1}{2})$ to $q$. Note that since $\sigma_i$ are maximizing, we have $L(\sigma_1)=L(\gamma|_{[0,\frac{1}{2}]})$ and $L(\sigma_2)=L(\gamma|_{[\frac{1}{2},1]})$. This means $L (\sigma_1 \circ \sigma_2)=L(\gamma)$. So the curve $\gamma_1:=\sigma_1 \circ \sigma_2$ is maximizing and hence by Lemma \ref{l4} it is an unbroken geodesic. This procedure can be iterated to obtain a sequence of maximizing causal geodesics $\gamma_n$ from $p$ to $q$, which meet $\gamma$ at all parameter values $\frac{k}{2^n}$, for $\mathbb{N}\ni k \leq 2^n$. Observe that $\gamma_n$ converge to $\gamma$ uniformly: First, for any $\epsilon>0$ we cover $\gamma$ by finitely many open, causally convex sets $V^\epsilon_{p_i}$ around $p_i\in\gamma$ of $h$-diameter at most $\epsilon$. The union $V^\epsilon=\bigcup_i V^\epsilon_{p_i}$ is a neighbourhood of $\gamma$, and there exist dyadic numbers $t_m$, $m=0,\ldots, k$, $t_0=0$ and $t_k=1$, such that $\gamma(t_m)$ and $\gamma(t_{m+1})$ lie in a single $V^\epsilon_{p_i}$ for some $i$. Moreover, there exists some $N(\epsilon)$ such that for all $n \geq N$ all curves $\gamma_n$ meet every $\gamma(t_m)$ and hence the segments of $\gamma_n$ from $t_m$ to $t_{m+1}$ are contained in $V^\epsilon_{p_i}$. So we conclude that $d_h(\gamma(t),\gamma_n(t))\leq \epsilon$, so $\gamma_n \to \gamma$ uniformly. We can now parametrize $\gamma_n$ such that $\|\gamma_n'(0)\|_h=1$ and hence pass to a subsequence (again denoted by $\gamma_n$) such that $\gamma_n'(0) \to v$. By \cite[Chap.\ II, Thm.\ 3.2]{Hart02}, there exists a subsequence of $\gamma_n$ which converges uniformly on compact sets to a geodesic $\sigma$ with initial values $\sigma(0)=p$ and $\sigma'(0)=v$. But as $\gamma_n$ converges to $\gamma$, so must any subsequence and hence $\gamma=\sigma$ on the entire domain of $\gamma$.
Finally, $\sigma$ reaches $q$: otherwise, $\sigma$ would agree with $\gamma$ on its entire maximal domain of definition and hence would be future inextendible and contained in the compact set $\gamma([0,1])$, a contradiction to non-imprisonment, which holds in any globally hyperbolic $C^1$-spacetime. \end{proof} Assuming non-branching, we are able to prove results on limits of maximizers needed in the following. \begin{Proposition}\label{Prop:nonbranch} Let $M$ be a $C^1$-spacetime without null branching. If two causal geodesic segments contained in an achronal set intersect, they are segments of the same geodesic or they intersect at the endpoints. \end{Proposition} \begin{proof} Suppose such a segment intersects a second one in the interior of its domain. Then either their tangents at the meeting point are not proportional and so by Lemma \ref{l4} their concatenation, which is a broken causal geodesic, stops maximizing, contradicting the fact that both segments are contained in an achronal set. Otherwise their tangents at the meeting point are proportional and hence null branching would occur, again a contradiction. \end{proof} Using Theorem \ref{thm:maxgeod} we may also give a slightly different formulation: If two different initially maximizing null curves starting at the same point meet again, they stop maximizing. \medskip The final result of this section will be essential in the proof of the main theorem and it is the main point at which we use the assumption that null branching does not occur. Again, $\check g_\epsilon$ is as in Lemma \ref{lem:2.2}. \begin{Corollary}\label{Cor:epslimitmax} Let $(M,g)$ be a $C^1$-spacetime without null branching. Let $S$ be an enclosing surface in a partial Cauchy surface $\Sigma$ and let $\gamma:[0,1]\to E_g^+(S)$ be a $g$-null curve. Then for any $1>\delta>0$, $\gamma|_{[0,1-\delta]}$ is a limit of $\check{g}_{\epsilon_n}$-null curves contained in $E_{\check{g}_{\epsilon_n}}^+(S)$ for an appropriate subsequence of $\check{g}_{\epsilon_n}$. \end{Corollary} \begin{proof} Let $\gamma$ be a future directed $g$-null $S$-maximizer starting at $p=\gamma(0)\in S$, so $\gamma\subseteq E^+(p)$. For any $1>\delta>0$ there exist points $ q_n^\delta \in \pt I^+_{n}(S):=\pt I^+_{\check g_{\epsilon_n}}(S)$ converging to $\gamma(1-\delta)=:q^\delta$: to see this, let $U_k$ be a sequence of connected nested neighbourhoods of $q^\delta$ with $\bigcap_k U_k=\{q^\delta \}$. Choose some $q^e_k \in U_k \bs \overline{J^+(S)} $ and $q^i_k \in U_k \cap I^+(S)$. For large $n$ we can achieve $q^i_k \in I^+_{{n}}(S)$ and since also $q^e_k \in U_k\bs \overline{J^+_{{n}}(S)}$ there exists a curve from $q^i_k$ to $q^e_k$ which starts in $I^+_{{n}}(S)$ and leaves it and hence must meet $\pt I^+_{n}(S)$ in a point which we call $q_n^\delta$. Hence there are future directed $\check{g}_{\epsilon_n}$-null maximizing geodesics $\gamma_n^\delta$ ending at $q_n^\delta$ and contained in $\pt I^+_{{n}}(S)$. Note that the $\gamma_n^\delta$ either meet $S$ or are past inextendible. Now by \cite[Chap.\ II, Thm.\ 3.2]{Hart02} there exists a subsequence of $\gamma_n^\delta$, denoted again by $\gamma_n^\delta$, converging to a maximizing $g$-null geodesic $\sigma^\delta$ ending at $q^\delta$ which is entirely contained in $\pt I^+(S)$. By Thm.\ \ref{thm:maxgeod} $\gamma$ is a geodesic and continues to be maximizing after $q^\delta$.
Also, by construction $\sigma^\delta$ is non-trivial, so $\gamma$ and $\sigma^\delta$ coincide on some $\gamma|_{[a,1-\delta]}$ (with $0\leq a<1-\delta$) by Proposition \ref{Prop:nonbranch}. There are two possibilities: Either there exists a subsequence of $\check{g}_{\epsilon_n}$ such that every $\gamma_n^\delta$ meets $S$, in which case, by passing to this subsequence, also $\sigma^\delta$ meets $S$ and hence $\sigma^\delta\supseteq \gamma|_{[0,1-\delta]}$ and we obtain the desired property. The other possibility is that there is no subsequence such that all $\gamma_n^\delta$ meet $S$. We show that this is impossible. If this were the case, we could choose a subsequence $\gamma_n^\delta$ of past-inextendible curves. Then also $\sigma^\delta$ is past-inextendible and hence must leave $\gamma([0,1-\delta])$. Thus again $\sigma^\delta\supseteq \gamma|_{[0,1-\delta]}$ and it remains to show that $\gamma_n^\delta \subseteq E^+_n(S)$ for large $n$. To this end let $U:=D(\Sigma)$, which is a globally hyperbolic, open, and causally convex neighbourhood of $p=\gamma(0)\in S$ for the metric $g$ \cite[Cor.\ 3.36, Prop.\ 3.43]{Min19}\footnote{The proofs carry over verbatim to $C^1$-metrics.} and hence also for $\check g_{\epsilon_n}$\footnote{This follows immediately since we approximate from the inside. However, global hyperbolicity is stable in the interval topology even for continuous metrics \cite{Sam16} and $C^1$-convergence is stronger.}. Then there are points $p_n \in \gamma_n^\delta$ with $p_n\to p$ and so for large $n$, $p_n \in \pt I^+_n(S)\cap U= \pt I^+_n(S,U)=E^+_n(S,U)$. Hence there is a $\check g_{\epsilon_n}$-null geodesic from $S$ to $p_n$ contained in $E^+_n(S,U)$ which by Proposition \ref{Prop:nonbranch} coincides with (a part of) $\gamma_n^\delta$. Hence $\gamma_n^\delta$ must meet $S$, a contradiction. \end{proof} Observe that Corollary \ref{Cor:epslimitmax} clearly holds true for $C^{1,1}$-spacetimes. For globally hyperbolic $C^1$-spacetimes (where, in principle, null branching can occur) a similar result was established in \cite[Prop.\ 2.16]{Graf19}, with slightly different assumptions on $S$ and a weaker conclusion, which only guarantees that for any $p\in E^+_g(S)$, there is a geodesic segment which is an appropriate limit of approximating $\check{g}_\epsilon$-maximizers. \section{Proof of the main result}\label{sec:proof} We will split the proof of Theorem \ref{thm:glc1} into two parts. The first, analytic part will be concerned with showing that the set $\Sigma_-$ is relatively compact. Here we will generalize (the proof of) \cite[Prop.\ 4.1]{Sil10} by proving new focusing statements for null geodesics using the results from section \ref{sec:c1}. The second, topological part uses the results from causality theory detailed in section \ref{sec:c1} to generalize (the proof of) \cite[Thm.\ 2.1]{Sil10}. To be self-contained we will also briefly sketch those parts of the original (smooth) proofs which do not need major revision. \subsection{Analytic aspects} The main analytical ingredient of our proof is a generalization of \cite[Prop.\ 4.1]{Sil10} to $C^1$-spacetimes, which we will give below in Proposition \ref{prop:cp-c1}. In order to do so we need a focusing result for null geodesics in smooth spacetimes which violate the null energy condition by a small margin $\delta$, as in Lemma \ref{lem:sec}. To this end we apply a result of \cite{FK19}\footnote{Observe the opposite signature convention used there.}, which itself is a generalization of \cite[Prop.\ 10.43]{ON83}.
\begin{Proposition} (\cite[Prop.\ 2.7]{FK19}) Let $S$ be a spacelike submanifold of co-dimension $2$ in a smooth spacetime and let $\gamma$ be a null geodesic joining $p\in S$ to $q\in J^+(S)$. If there exists a smooth function $f$ on $\gamma$ which is nonvanishing at $p$ but vanishes at $q$ and so that \begin{equation}\label{eqn:nullgen} \int_\gamma \left( (n-2)(f')^2 -f^2 \, \mathrm{Ric}(\gamma',\gamma') \right) \ \leq\ (n-2)\ \langle f^2 \, \gamma', H\rangle\,|_{p} \,, \end{equation} then there is a focal point to $S$ along $\gamma$. \end{Proposition} \begin{Lemma}\label{lem:deltafocusing} Let $S$ be a $C^2$-spacelike submanifold of codimension 2 in a smooth spacetime. Let $\gamma$ be a geodesic starting at some $p\in S$ such that $\nu:=\gamma'(0)$ is a future pointing null normal to $S$. Assume that the convergence satisfies \begin{equation}\label{eq:fk} c:=k_S(\nu):= \langle H_{\gamma(0)},\nu \rangle >0 \end{equation} and choose some $b>\frac{1}{c}$ and $0<\delta (b,c)=:\delta \leq \frac{3}{b^2} (n-2)(bc-1)$. Now, if $\text{Ric}(\gamma',\gamma') \geq - \delta$ along $\gamma$, then $\gamma|_{[0,b]}$ cannot be maximizing to $S$, provided it exists that long.\footnote{One can specify the choice of $b$ further to allow for bigger violations of $\delta$, but we do not need this here.} \end{Lemma} \begin{proof} We set $f(t):= 1-\frac{t}{b}$ and check condition \eqref{eqn:nullgen}. For our choice of $\delta$ we obtain \begin{align}\nonumber \int_{0}^{b}&(n-2)\,\frac{1}{b^2}\, dt - \int_{0}^{b} \left(1-2\, \frac{t}{b}+\frac{t^2}{b^2}\right)\, \text{Ric}(\gamma'(t),\gamma'(t)) \, dt \\ &\leq \frac{n-2}{b}+ \delta\, \frac{b}{3} \leq (n-2)\, c =(n-2)\, k_S(\nu) =(n-2)\, \langle f^2 \gamma', H\rangle\,|_{p} \,, \end{align} and hence $\gamma|_{[0,b]}$ cannot be maximizing. \end{proof} Recall that for $C^{1}$-metrics the mean curvature and the convergence of $S$ are still continuous. The core of the following proof largely follows \cite{MinSil20}. \begin{Proposition}\label{prop:cp-c1} Let $(M,g)$ be an $n$-dimensional (with $n\geq 3$), past reflecting, null geodesically complete \emph{$C^{1}$}-spacetime without null branching which satisfies the distributional null energy condition and admits an asymptotically regular hypersurface $\Sigma$ with a piercing. Then for any enclosing surface $S\subseteq\Sigma$ the closure of its inside, $\overline\Sigma_-=S\cup\Sigma_-$, is compact. \end{Proposition} We consider the closed set $T:=\partial I^+(\Sigma_+)\setminus\Sigma_+$. Also, by Theorem \ref{thm:maxgeod} the set $E^+(S)$ really consists of null \emph{geodesics} emanating from $S$ and perpendicular to $S$, which follows as in \cite[10.45, 10.50]{ON83} by using Lemma \ref{l3}. So we define $\mathcal{H}^+$ as the subset of all points $p\in E^+(S)$ on future directed null geodesics $\gamma:[0,1]\to M$ with $\gamma(0)\in S$, $\gamma'(0)=K_-(\gamma(0))$. Note that $S\subseteq\mathcal{H}^+$. Further, no point on $\mathcal{H}^+$ can lie on a null geodesic from $S$ in direction of $K_+$, see e.g.\ \cite[Lemma 1.1]{Gan75}.\footnote{The proof there is given for smooth spacetimes and one assumes $S$ to be simply connected; however, the method of proof also works in our case.} The proof consists in successively establishing the following three claims: \medskip \begin{enumerate} \item [(1)] $\mathcal{H}^+$ is relatively compact, \qquad (2) $T$ is compact, \qquad (3) $\rho_X(T)=\overline{\Sigma}_-$.
\end{enumerate} \medskip \noindent Steps (1) and (2) combine the causality part of the proof with the analytical arguments which we have to provide in $C^1$-regularity. Some arguments in steps (2) and (3) do not require changes from the original proofs put forward in \cite{MinSil20} resp.\ \cite{Sil10} but will be included as a sketch for the sake of completeness. \begin{proof} (1) Any point $p$ on $\mathcal{H}^+$ lies on a null geodesic emanating from $S$ and its initial tangent vector is inward pointing, i.e.\ proportional to $K_-(q)$ for some $q \in S$. By continuity $k_-$ possesses a minimum $c:= \min_{p\in S} k_-(p)= \min_{p\in S} \langle H_p,K_-(p) \rangle$ on $S$. Also, the set $K:= \{ (p,\lambda K_-(p)) \in TS ^\perp \, |\, 0 \leq \lambda \leq \frac{2}{c} \} \subseteq TM$ is compact and, by \cite[Prop.\ 2.11]{Graf19} (or rather a simplified version without $\epsilon$) the set \begin{equation} F:= \bigcup_{\dot \gamma \text{ with } \dot \gamma(0) \in K} \text{im}(\dot \gamma|_{[0,1]}) \end{equation} is relatively compact (Here $\dot \gamma$ denotes the trajectory of a geodesic in $TM$ with the specified initial conditions). We will show that $\mathcal{H}^+ \subseteq \pi( F)$, where $\pi:TM \to M$ is the projection. Assume the contrary and let $p \in \mathcal{H}^+ \bs \pi(F)$. Let $\gamma:[0,1]\to M$ be a null geodesic from $S$ to $p$ maximizing the distance to $S$. As $\gamma \subseteq \mathcal{H}^+$ we know that $\gamma'(0)= \mu K_-(\gamma(0))$ for some $\mu>\frac{2}{c}$, since $\gamma'(0) \not \in K$. This means that $k_S(\gamma'(0))= \langle H(\gamma(0)), \mu K_-(\gamma(0)) \rangle \geq \mu c > 2$. Let $\check g_{\epsilon_k}$ be as in Lemma \ref{lem:2.2}. By Corollary \ref{Cor:epslimitmax} $\gamma|_{[0,1-\delta]}$ for some arbitrarily small $\delta$ is the $C^1$-limit of $\check g_{\epsilon_k}$-null geodesics $\gamma_{\epsilon_k}:[0,b_k]\to M$ with $b_k\to 1-\delta$ contained in $E^+_{\check g_{ \epsilon_k}}(S)$. Further we can assume that all $\gamma_{\epsilon_k}$ are contained in a compact neighbourhood $\tilde K$ of $\gamma$ and that $c_1 < \| \gamma'_{\epsilon_k}\|_h < c_2$ for some $c_i>0$. Additionally for $k$ large enough we have $k_S^{\epsilon_k}(\gamma'_{\epsilon_k}(0)):=c_k>2$ and by Lemma \ref{lem:sec} we can also achieve $\text{Ric}[g_{\epsilon_k}](\gamma'_{\epsilon_k},\gamma'_{\epsilon_k}) \geq - 3(n-2)$. In order to apply Lemma \ref{lem:deltafocusing} for $b=1$ we set $\delta_k = \frac{3}{b^2}(n-2)(b \, c_k-1)=3(n-2)(c_k-1)$. Since $c_k>2$ for all large $k$, we have $-\delta_k<- 3 (n-2)$ and hence $\text{Ric}[g_{\epsilon_k}](\gamma'_{\epsilon_k},\gamma'_{\epsilon_k}) \geq -3(n-2) > -\delta_k$. So by Lemma \ref{lem:deltafocusing}, $\gamma_{\epsilon_k}$ cannot be maximizing up to $\frac{1}{c_k}< \frac{1}{2} <1-\delta<b$ but by construction it is maximizing up to $b_k\to 1-\delta$, a contradiction. \medskip (2) We prove the inclusion $T \subseteq \mathcal{H}^+$ which gives that $T$ is compact. Assume by contradiction that there were $q \in T\bs \mathcal{H}^+$. By \cite[Proof of 3.5]{MinSil20} we can find $q_n \in I^+(S)$ such that $q_n \to q$ and future directed, fututre inextendible timelike curves $\sigma_n:[0,\infty)\to M$ parametrized by $h$-arclength starting at $S$ with $\sigma_n(t_n)=q_n$. Further by the limit curve theorem, which is valid in $C^1$ spacetimes, see \cite[Thm.\ 14]{Min18} one obtains a future directed, future inextendible causal curve $\sigma$ starting at $S$. 
If $q$ were to lie on $\sigma$, it had to be a maximizing null geodesic perpendicular to $S$\footnote{by the same argument as above using \cite[10.45, 10.50]{ON83} and Lemma \ref{l3}}. It can however neither start inward going (in direction $K_-$) as then $q\in \mathcal{H}^+$ nor outward going (in direction $K_+$) as then $q \in I^+(\Sigma_+)$. Hence $q\not\in\sigma$ and so $t_n\to \infty$. Since by (1) there is no inward pointing $S$-null ray, there is $b\in(0,\infty)$ with $\sigma(b)\in I^+(S)$. But then, again by the limit curve theorem, $q \in \overline{I^+(\sigma(b))}$. By past-reflectivity one obtains $\sigma(b)\in \overline{I^-(q)}\cap I^+(S)$, implying $q\in I^+(S)$ and hence $q \in I^+(\Sigma_+)$, contradicting $q\in T$. \medskip (3) First $\rx(T) \subseteq \overline \Sigma_-$ (since otherwise $T \cap I^+(\Sigma_+)\not=\emptyset$) and $\overline \Sigma_- \bs \rx(T) \subseteq \Sigma_-$ (since $S = \rx(S) \subseteq \rx(T)$). Now assuming indirectly that $\overline\Sigma_-\setminus\rx(T)\not=\emptyset$, there is $p\in \pt_\Sigma\rx(T)\cap\Sigma_-$ and we will reach a contradiction by showing that $p \in \text{int}\rx(T)$. By compactness of $T$ there is $q\in T$ with $\rx(q)=p$, and $q\not\in S$ (otherwise $p=q\in S\cap\Sigma_-=\emptyset$). So $q\in T\setminus S=\pt I^+(\Sigma_+)\setminus\overline\Sigma_+$ which is a topological hypersurface ($S$ being the edge of the achronal set $T$). So there is $V_0$, an $M$-neighbourhood of $q$ with $V_0 \cap T$ open in $T\bs S$, $\rx(V_0) \subseteq \Sigma_-$, and $V_0 \cap \Sigma= \emptyset$. Next denote by $\Psi$ the local flow of $X$ and choose $\epsilon >0$ and $U_0$, an open $M$-neighbourhood of $q$, so small that $\Psi(U_0 \times (-\epsilon, \epsilon))\subseteq V_0$. Further set $\Psi_0 := \Psi_{|(U_0\cap T)\times (-\epsilon,\epsilon)}$, and $W:= \text{Im}\Psi_0$. By achronality of $T$ and invariance of domain $W$ is open and $\Psi_0$ is a homeomorphism. But then $p\in\rx(U_0\times(-\epsilon,\epsilon))=\rx(W)$ and the latter set is open by openness of $\rx$ (\cite[14.31]{ON83}) and so $p\in \text{int}\rx(T)$. \end{proof} \subsection{Topological aspects} Finally we invoke Proposition \ref{prop:cp-c1} to prove our main result. Here we will be brief on the topological aspects laid out already in the proof of \cite[Thm.\ 2.1]{Sil10}. \begin{proof}[Proof of theorem \ref{thm:glc1}] Let $\Phi:\tilde{M} \to M$ be a connected (smooth) covering with $\Phi_\# (\pi_1 (\tilde M)) = j_\#(\pi_1 (S))$, where $j$ is the inclusion of $S$ in $M$. Note that w.l.o.g.\ one can assume the vector field of the piercing to be complete and by properties of its flow map easily show that $M \cong \R \times \Sigma$. Hence the inclusion $m$ of $\Sigma$ in $M$ induces an isomorphism $m_\# : \pi_1(\Sigma) \to \pi_1(M)$. In particular, $\Phi_\Sigma := \Phi|_{\tilde{\Sigma}}: \Phi^{-1}(\Sigma):=\tilde{\Sigma} \to \Sigma$ is a Riemannian covering with $\tilde{\Sigma}$ connected. In Lemma \ref{lem:final} below we will establish that $\Phi_\Sigma$ is trivial. Accepting this for the moment, we will show the theorem, i.e.\ for every $y \in \pi_1(\Sigma)$ there is $x \in \pi_1(S)$ with $i_\#(x) =y$. 
From the diagrams\\ \begin{minipage}{0.4 \textwidth} \centering \begin{tikzcd} & M\\ S \arrow[r,hook, "i"] \arrow[ru,hook,"j"] &\Sigma \arrow[u,hook, "m"] & \tilde{M} \arrow[lu,"\Phi"] \\ & \tilde{\Sigma}\arrow[u,leftrightarrow, "\Phi_\Sigma"] \arrow[ru,"\tilde{m}"] \end{tikzcd} \end{minipage} \begin{minipage}{0.55 \textwidth} \centering \begin{tikzcd} & & j_\#(\pi_1(S)) \arrow[d,hook]\\ \pi_1(S) \arrow[r, "i_\#"] \arrow[urr, bend left, "j_\#"] & \pi_1(\Sigma) \arrow[r, leftrightarrow,"m_\#"]& \pi_1(M) \\ & \pi_1(\tilde{\Sigma}) \arrow[r] \arrow[u] & \pi_1(\tilde{M}) \arrow[u] \arrow[uu,bend right=70,"\Phi_\#"] \end{tikzcd} \end{minipage}\\ we see that \begin{equation} m_\#(y) = (\Phi \circ \tilde{m} \circ \Phi_\Sigma^{-1})_\#(y) = \Phi_\# (\tilde{m} \circ \Phi_\Sigma^{-1})_\# (y) \in \Phi_\#(\pi_1(\tilde{M})) = j_\#(\pi_1(S)). \end{equation} Since $j=m \circ i$, we have $m_\#(y) = j_\#(x) = m_\#(i_\#(x))$, and since $m_\#$ is an isomorphism, we are done. \end{proof} \begin{Lemma}\label{lem:final} $\Phi_\Sigma := \Phi|_{\tilde{\Sigma}}: \tilde{\Sigma}:=\Phi^{-1}(\Sigma) \to \Sigma$ is a trivial covering. \end{Lemma} \begin{proof} First, by definition there is a local deformation $F : S \times (-1,1) \to \Sigma$ of $S$. Further set $U_F^- := F(S \times (0,1))$ and $V:= U_F^- \cup S \cup \Sigma_+$. Then $\bar \Sigma_+$ is a deformation retract of $V$ and hence $\pi_1(\bar \Sigma_+)\cong \pi_1(V)$. \medskip Next we establish that on every connected component $\tilde{V}$ of $\Phi_\Sigma^{-1}(V)$ the map $\Phi_V:=\Phi|_{\tilde{V}} : \tilde{V} \to V$ is a diffeomorphism. We only have to show injectivity since $\Phi_V$ is a local diffeomorphism. Take $\tilde{p}, \tilde{q} \in \tilde{V}$ such that $\Phi_V(\tilde{p})= \Phi_V(\tilde{q})=:p \in V$. Let $\tilde{\alpha} :[0,1] \to \tilde{V}$ be a path connecting these two points; then $\alpha := \Phi \circ \tilde{\alpha}$ is a loop in $V$, homotopic to a loop in $S$ since $\pi_1(V) \cong \pi_1(\bar \Sigma_+) \cong \pi_1(S)$. Further, since $\Phi_\#(\pi_1(\tilde{M}))= j_\#(\pi_1(S))$, there is a loop $\tilde{\beta}$ in $\tilde{M}$ that is fixed-endpoint homotopic to $\tilde{\alpha}$, and so we must have $\tilde{p}=\tilde{q}$. \medskip In order to show that $\Phi_\Sigma$ is trivial we assume the contrary, in particular that $\Phi_\Sigma^{-1}(S)$ has more than one component. Since $S \subseteq V$, each of these components is diffeomorphic to $S$, and they separate $\tilde \Sigma$. Let $\tilde{S}_1, \tilde{S}_2$ be two such different components and let $\tilde{V}_1, \tilde{V}_2$ be the respective copies of $V$ containing them. Since $\bar \Sigma_+ \subseteq V$, each $\tilde{V}_i$ ($i=1,2$) contains a diffeomorphic copy of $\bar \Sigma_+$ called $\tilde{C}_i$, which are closed and non-compact. Further, since $\tilde{\Sigma}$ is connected and separated by $\tilde{S}_1$, the set $\tilde{C}_2$ is contained in $\tilde{S}_1 \cup \tilde{\Sigma}_-^{(1)}:= \tilde{S}_1 \cup (\tilde{\Sigma} \bs \tilde{C}_1)$, since otherwise $\tilde{V}_1 \cap \tilde{V}_2 \neq \emptyset$. So $\tilde{C}_2 \subseteq \tilde{S}_1 \cup \tilde{\Sigma}_-^{(1)}$. Being local properties, both null geodesic completeness and the distributional null convergence condition lift to $\tilde M$ and by assumption $\tilde M$ is past-reflecting. Further, as null branching is a local property as well, it also cannot occur in $\tilde M$.
Thus the assumptions of Proposition \ref{prop:cp-c1} are fulfilled for $\tilde{M}$, $\tilde{\Sigma}$, $\tilde{C}_1$ and $\tilde{\Sigma}\bs \tilde{C}_1$, implying that $\tilde{S}_1 \cup (\tilde{\Sigma} \bs \tilde{C}_1)$ is compact. However, this set contains the non-compact, closed set $\tilde{C}_2$, a contradiction, and we are done. \end{proof} Finally we \emph{sketch the proof of Corollary \ref{cor:ghc1}}, i.e.\ the globally hyperbolic $C^{1}$-Gannon-Lee theorem. We can proceed analogously to the proof of Theorem \ref{thm:glc1}, where most steps are even significantly easier. In fact, the topological part remains the same and the only aspect one has to pay attention to is proving that ${\mathcal H}^+$ is relatively compact, i.e.\ step (1) in Proposition \ref{prop:cp-c1}: We have to prove that any maximizing $g$-null geodesic is a limit of maximizing $\check g_\epsilon$-null geodesics. Corollary \ref{Cor:epslimitmax} does not apply, but we can replace it by the ``limiting result'' \cite[Prop.\ 2.13]{Graf19}. Note that, compared to Corollary \ref{Cor:epslimitmax}, the result in \cite[Prop.\ 2.13]{Graf19} only shows that there is one $g$-geodesic which can be approximated by $\check{g}_\epsilon$-maximizers, but this is sufficient for the proof to work out. \medskip \section*{Acknowledgement} We are grateful to Michael Kunzinger for sharing his experience and to Melanie Graf and Clemens Sämann for helpful discussions. We would especially like to thank Ettore Minguzzi for pointing out a mistake in an earlier version of this note, and the anonymous referees for their valuable comments. This work was supported by FWF-grants P28770, P33594 and the Uni:Docs program of the University of Vienna.
\section{Introduction} Among the classical theorems of mathematical analysis, the \textit{mean value theorem} stands out for its simplicity and wide applicability. It is, without doubt, one of the results best known to the mathematical community and, quite rightly, one of the bricks that form the foundations of Calculus and of Mathematical Analysis as a whole. The goal of this article is to present a range of other mean-value-type theorems that complement the classical one and are likewise highly applicable and simple, and that contribute to the advancement of Mathematics. There are nowadays several ways to approach the mean value theorem, each tied to a specific goal one intends to reach. In our case, since we want a panorama of the different mean-value-type theorems, we will use the classical approach. Our story begins in 1691, when Rolle\footnote{Michel Rolle (1652-1719), French mathematician.} used techniques of differential and integral Calculus to prove the following result, our first mean-value-type theorem: \begin{teo}[Rolle's theorem] Let $f: [a,b] \to \mathbb{R}$ be a function continuous on $[a,b]$ and differentiable on $(a,b)$. If $f(a)=f(b)$, then there exists $c \in (a,b)$ such that $f'(c)=0$, that is, the tangent line to the graph of $f$ at the point $(c, f(c))$ is horizontal. \end{teo} Although Rolle was deservedly honored by the formalization of the result, it is interesting to note that Bhaskara II\footnote{Bhaskara Akaria (1114-1185), Indian mathematician.} proved a particular case of Rolle's theorem long before, albeit with little or no formality. Rolle's theorem became better known after Drobisch\footnote{Moritz Wilhelm Drobisch (1802-1896), German mathematician.} used the term for the first time in 1834, followed by Bellavitis\footnote{Giusto Bellavitis (1803-1880), Italian mathematician.} in 1846. Further details can be found in \cite{HIST1}. The most famous mean-value-type theorem in history is the \emph{Lagrange mean value theorem:} \begin{teo}[Lagrange mean value theorem] If $f: [a,b] \to \mathbb{R}$ is a function continuous on $[a, b]$ and differentiable on $(a,b)$, then there exists $c \in (a,b)$ such that $$f'(c)=\frac{f(b)-f(a)}{b-a}.$$ \end{teo} This result was first discovered by Lagrange\footnote{Joseph Louis Lagrange (1736-1813), Italian mathematician.}, who proved it without, initially, making any mention of Rolle's theorem. However, the best-known derivation has as its main idea the application of Rolle's theorem to the auxiliary function \begin{align*} \varphi(x) = f(x)- \bigg[\frac{f(b)-f(a)}{b-a}(x-a) + f(a)\bigg], \end{align*} and this was done by Bonnet\footnote{Pierre Ossian Bonnet (1819-1892), French mathematician.}. Publicly, the Lagrange mean value theorem was cited for the first time in a work of the renowned physicist Ampère\footnote{André-Marie Ampère (1775-1836), French physicist.}. For further details, the reference \cite{SAHOO} may be consulted. Geometrically, the Lagrange mean value theorem says that there exists a point $c$ inside the interval $(a, b)$ such that the tangent line to the graph of $f$ at the point $(c, f(c))$ is parallel to the secant line through the points $(a, f(a))$ and $(b, f(b))$. Physically, the Lagrange mean value theorem guarantees that if a particle has a smooth trajectory $(t, f(t))$ on the time interval $[a,b]$, then there exists an instant $t_c \in (a,b)$ at which the instantaneous velocity of the particle (at $t = t_c$) coincides with the average velocity over the whole trip.
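As a simple illustration of this kinematic reading (the particular function below is chosen only for concreteness), consider a particle whose position is $f(t)=t^2$ on the time interval $[0,2]$. Its average velocity over the whole trip is
$$\frac{f(2)-f(0)}{2-0}=\frac{4-0}{2}=2,$$
while its instantaneous velocity is $f'(t)=2t$; the instant guaranteed by the Lagrange mean value theorem is therefore $t_c=1\in(0,2)$, at which $f'(t_c)=2$ coincides with the average velocity.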
Motivated by this application, Cauchy\footnote{Augustin-Louis Cauchy (1789-1857), French mathematician.} asked himself what could be said about a particle with a smooth trajectory $(f(t),g(t))$ on the time interval $[a,b]$. Cauchy then applied Rolle's theorem to the function $$\varphi(x)=[g(b)-g(a)] f(x)- [f(b)-f(a)]g(x)$$ and the result is what we know today as the \emph{Cauchy mean value theorem}. \begin{teo}[Cauchy mean value theorem] If $f,g: [a,b] \to \mathbb{R}$ are functions continuous on $[a, b]$ and differentiable on $(a,b)$, then there exists $c \in (a,b)$ such that \begin{align*} f'(c)[g(b)-g(a)]=g'(c)[f(b)-f(a)]. \end{align*} \end{teo} Geometrically, the Cauchy mean value theorem guarantees that, given a smooth trajectory $(f(t),g(t))$ with $t \in [a,b]$, there exists $c \in (a,b)$ such that the tangent line to the trajectory at the point $(f(c),g(c))$ is parallel to the line through the points $(f(a), g(a))$ and $(f(b), g(b))$. \medskip Detailed proofs of the mean-value-type theorems discussed in this section can be found in \cite[Theorem 6.2.4 and Theorem 6.3.2]{bartle} or in \cite[Theorem 2.2 and Theorem 2.17]{SAHOO}. The reader interested in variations of Lagrange's theorem may consult \cite{Lozada-C0}. For variations and applications of Cauchy's theorem, we recommend \cite{Lozada-C} and \cite{Lozada-C2}. \section{Flett's theorem and its variations}\label{def1} The following theorem is a version of the classical mean value theorem for integrals, and it is from observations about it that the motivation for the mean-value-type theorem treated next arises. \begin{teo}\label{teo:TVMIntegrals} If $f:[a,b] \to \mathbb{R}$ is a continuous function, then there exists $\eta \in [a,b]$ such that $$\int\limits_a^b f(x)dx = f(\eta)(b-a).$$ \end{teo} The proof of Theorem \ref{teo:TVMIntegrals} can be found in \cite[Theorem 7.1]{SAHOO}. \medskip Now consider a function $g: [a,b] \to \mathbb{R}$ as in Theorem \ref{teo:TVMIntegrals}; then there exists $\xi \in (a,b)$ such that $$g(\xi)=\dfrac{1}{b-a}\int\limits_a^b g(t)dt.$$ Moreover, if we take such a function $g$, continuous, and such that $$g(a)=0, \ \ \int\limits_a^b g(t)dt=0,$$ and define the function \begin{align*} \varphi(x)=\begin{cases} \dfrac{1}{x-a}\int\limits_a^x g(t)dt, & x \in (a,b] \\ 0, & x=a, \end{cases} \end{align*} then, since $\varphi$ is continuous on $[a,b]$, differentiable on $(a,b)$ and $\varphi(a)=0=\varphi(b)$, Rolle's theorem yields $\xi \in (a,b)$ such that $\varphi'(\xi)=0$. But, for $x \in (a,b)$, $\varphi'(x)=-\frac{1}{(x-a)^2} \int\limits_a^x g(t)dt + \frac{g(x)}{x-a}$. Thus, there exists $\xi \in (a,b)$ such that \begin{equation} g(\xi)=\frac{1}{\xi-a}\int\limits_a^{\xi}g(t)dt.\label{eq1} \end{equation} Note that \eqref{eq1} is exactly the mean value theorem for integrals with $b$ replaced by $\xi$. Recalling the fundamental theorem of calculus, we may rewrite equation \eqref{eq1} in the following form: \begin{equation}G'(\xi)=\dfrac{G(\xi)-G(a)}{\xi-a}\label{eq2}\end{equation} where $G$ is an antiderivative of $g$, that is, $G'=g.$ In this sense, given a function $g$, it is natural to ask whether we can replace the condition $\int\limits_a^b g(t)dt =0$ simply by $g(b)=0$. Below, as a consequence of the intermediate value theorem, we will see that this is indeed possible. These observations were made by Flett\footnote{Thomas Muirhead Flett (1923-1976), British mathematician.}, and the result (from 1958) bears his name in his honor.
\medskip Flett's theorem (see \cite{FLETT}) is a variation of Rolle's theorem in which the condition $f(a)=f(b)$ is replaced by $f'(a)=f'(b)$. For this reason, we say that Flett's theorem is a Lagrange-type theorem with a Rolle-type condition, or simply a mean-value-type theorem with a Rolle-type condition. \begin{teo}[Flett's mean value theorem \cite{FLETT}]\label{teo:Flett} Let $f:[a,b] \to \mathbb{R}$ be a function differentiable on $[a,b]$ with $f'(a)=f'(b)$. Then there exists $\xi \in (a,b)$ such that \begin{align}\label{eqn: TVMFlett} f'(\xi)=\frac{f(\xi)-f(a)}{\xi-a}. \end{align} \end{teo} \demo Without loss of generality we may assume $f'(a)=f'(b)=0$, since otherwise we set $\psi(x)=f(x)-xf'(a)$, which gives $\psi'(a)=\psi'(b)=0$. Define the function $\varphi:[a, b]\to \mathds{R}$ by \begin{align*} \varphi(x)=\begin{cases} \dfrac{f(x)-f(a)}{x-a}, & x \in (a,b] \\ f'(a), & x=a. \end{cases} \end{align*} The function $\varphi$ is continuous on $[a, b]$, differentiable on $(a, b]$ and, for $ x \in (a,b)$, $$ \varphi'(x) = \frac{f'(x)}{x-a}-\frac{\varphi(x)}{x-a}.$$ Observe that $\varphi(a)=0$. If $\varphi(b)=0$, then by Rolle's theorem there exists $\xi \in (a,b)$ such that $\varphi'(\xi)=0$ and the theorem is proved. \medskip Suppose $\varphi(b)\neq0$. If $\varphi(b)>0$, it follows that \begin{align*} \varphi'(b)=\frac{f'(b)-\varphi(b)}{b-a}=-\frac{\varphi(b)}{b-a} <0. \end{align*} Hence, for sufficiently small $\epsilon>0$ there exists $x_1 \in (b-\epsilon,b)$ such that $\varphi(b)<\varphi(x_1)$. Since $\varphi$ is continuous on $(a, x_1)$ and $0=\varphi(a)<\varphi(b)<\varphi(x_1)$, it follows from the intermediate value theorem that there exists $\eta \in (a,x_1)$ such that $\varphi(\eta)=\varphi(b)$. Then, by Rolle's theorem applied to the interval $[\eta, b]$, there exists $\xi \in (\eta, b) \subset (a,b)$ such that $\varphi'(\xi)=0$, that is, $f'(\xi)=\frac{f(\xi)-f(a)}{\xi-a}.$ \medskip The case $\varphi(b)<0$ is analogous.\fimdemo \medskip Geometrically, Flett's theorem says that if a curve $(t,f(t))$ is smooth on the interval $[a,b]$ and the tangent lines at the endpoints $(a,f(a))$ and $(b,f(b))$ are parallel, then there exists a point $\xi \in (a,b)$ such that the tangent line to the graph of $f$ through $(\xi, f(\xi))$ also passes through $(a,f(a)),$ as can be seen in Figure \ref{figflett}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{Flett.eps} \caption{Geometric interpretation of Flett's theorem} \label{figflett} \end{center} \end{figure} On the other hand, from the kinematic point of view, Flett concluded that if the initial and final velocities of a particle with smooth trajectory $(t,f(t))$ on the time interval $[a,b]$ are equal, then there exists an instant $t_j \in (a,b)$ at which the instantaneous velocity of the particle is exactly the average velocity of the trip up to the instant $t_j$. \medskip For convenience, given a function $f:[a,b] \to \mathbb{R}$, we will call a point $\xi\in (a, b)$ satisfying the conclusion of Flett's theorem simply a \textit{Flett point}. A basic example illustrating the use of Flett's theorem is obtained by considering the function $f:[-2,2] \to \mathds{R}$ given by $f(x)=x^3+2x-1$. Since $f$ is a polynomial, it is differentiable on $[-2,2]$, and using \eqref{eqn: TVMFlett} one easily checks that $\xi = 1\in (-2, 2)$ is a Flett point of $f$: indeed, $f'(1)=5$ and $\frac{f(1)-f(-2)}{1-(-2)}=\frac{2-(-13)}{3}=5$. Next we briefly treat the results presented by R. Meyers in 1977, which are variations of Flett's theorem.
The geometric and physical interpretations are analogous to those of Flett's theorem and will therefore not be discussed in words in this section. \begin{teo}[{\cite[Theorem $1'$]{MEYER}}] \label{flett2} Let $f:[a,b] \longrightarrow \mathbb{R}$ be a function differentiable on $[a,b]$ with $f'(a)=f'(b)$. Then there exists $\xi \in (a,b)$ such that \begin{align*} f'(\xi)=\frac{f(b)-f(\xi)}{b-\xi}. \end{align*} \end{teo} \begin{figure}[t] \begin{center} \includegraphics[scale=0.4]{M1.eps} \caption{Geometric interpretation of Theorem \ref{flett2}} \label{figM1} \end{center} \end{figure} \begin{teo}[{\cite[Theorem 2]{MEYER}}] Let $f:[a,b] \longrightarrow \mathbb{R}$ be a function differentiable on $[a,b]$ with $f'(a)=f'(b)$. Then there exists $\xi \in (a,b)$ such that $$f'(\xi)=\frac{f(b)-f(\xi)}{\xi-a}.$$ \label{flett3} \end{teo} \begin{figure}[b] \begin{center} \includegraphics[scale=0.4]{flett3.eps} \caption{Geometric interpretation of Theorem \ref{flett3}} \label{figflett3} \end{center} \end{figure} \begin{teo}[{\cite[Theorem $2'$]{MEYER}}] Let $f:[a,b] \longrightarrow \mathbb{R}$ be a function differentiable on $[a,b]$ with $f'(a)=f'(b)$. Then there exists $\xi \in (a,b)$ such that $$f'(\xi)=\frac{f(\xi)-f(a)}{b-\xi}.$$ \label{flett4} \end{teo} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{flett4.eps} \caption{Geometric interpretation of Theorem \ref{flett4}} \label{figflett4} \end{center} \end{figure} \begin{teo}[{\cite[Theorem 3]{MEYER}}] If $f$ is differentiable, $f'$ is continuous on $[a,b]$, and $[f(b)-f(a)][f(b)-f(a)-(b-a)f'(b)]<0$, then there exists $\xi \in (a,b)$ such that $$f'(\xi)=\frac{f(b)-f(a)}{\xi-a}.$$ \end{teo} \begin{teo}[{\cite[Theorem $3'$]{MEYER}}] If $f$ is differentiable, $f'$ is continuous on $[a,b]$, and $[f(b)-f(a)][f(b)-f(a)-(b-a)f'(a)]<0$, then there exists $\xi \in (a,b)$ such that $$f'(\xi)=\frac{f(b)-f(a)}{b-\xi}.$$ \end{teo} \begin{teo}[{\cite[Theorem 4]{MEYER}}] If $f$ is differentiable, $f'$ is continuous on $[a,b]$, and $f'(a)[f(b)-f(a)-(b-a)f'(b)]>0$, then there exists $\xi \in (a,b)$ such that $$f'(\xi)=\frac{f(\xi)-f(a)}{b-a}.$$ \end{teo} \begin{teo}[{\cite[Theorem $4'$]{MEYER}}] If $f$ is differentiable, $f'$ is continuous on $[a,b]$, and $f'(b)[f(b)-f(a)-(b-a)f'(a)]>0$, then there exists $\xi \in (a,b)$ such that \begin{align*} f'(\xi)=\frac{f(b)-f(\xi)}{b-a}. \end{align*} \end{teo} \section{Existence of Flett Points} The subject discussed in this section is motivated by the following example. Let $[a,b]$ be a closed interval containing $0$ in its interior and consider the function $f:[a,b] \longrightarrow \mathbb{R}$ given by $f(x)=|x|$, which is not differentiable at $x=0$. Nevertheless, assuming $a < x < 0$ we have \begin{align*} \dfrac{f(x)-f(a)}{x-a}=\dfrac{|x|-|a|}{x-a}=\dfrac{-x+a}{x-a} = -1 = f'(x), \ \forall x \in (a,0). \end{align*} Therefore, there are infinitely many Flett points in $(a,0) \subset (a,b)$. This example shows that the set of functions satisfying the hypotheses of Flett's theorem is strictly contained in the set of functions that have a Flett point. It is therefore natural to ask which other sufficient conditions guarantee the existence of Flett points. Originally, in 1958, T.M. Flett proved that Flett points exist under the hypotheses that $f$ is differentiable on the closed interval $[a,b]$ and that $f'(a)=f'(b)$. But this is not the only such condition. The first studies of T.M. Flett's results and their generalizations were carried out in 1966 by the mathematician Donald H. Trahan (see \cite{TRAHAN}).
He gave a new condition for the existence of a Flett point through certain inequalities, comparing the slope of the secant line to the graph of the function $f:[a,b] \to \mathbb{R}$ through the endpoints $(a,f(a))$ and $(b,f(b))$ with the slopes of the tangent lines to the graph at those same points. \medskip The following results are needed to understand Trahan's condition. \begin{lema}[{\cite[Lemma 1]{TRAHAN}}]\label{lema:General-Rolle} If $f: [a,b] \to \mathbb{R}$ is a continuous function, differentiable on $(a,b]$, with $f'(b)[f(b)-f(a)] \leqslant 0$, then there exists $c \in (a,b]$ such that $f'(c)=0$. \end{lema} \begin{lema}[{\cite[Lemma 2]{TRAHAN}}]\label{lema:General-Rolle2} If $f: [a,b] \to \mathbb{R}$ is a continuous function, differentiable on $(a,b]$, with $f'(b)[f(b)-f(a)] < 0$, then there exists $c \in (a,b)$ such that $f'(c)=0$. \end{lema} Observe that Lemmas \ref{lema:General-Rolle} and \ref{lema:General-Rolle2} are generalizations of Rolle's theorem. \begin{teo}[\textbf{Trahan's condition \cite{TRAHAN}}]\label{teo:Trahan66} Let $f:[a,b] \to \mathbb{R}$ be a differentiable function such that \begin{equation} \label{eqtr} \big(f'(b)-\tfrac{f(b)-f(a)}{b-a}\big) \big(f'(a)-\tfrac{f(b)-f(a)}{b-a}\big)\geqslant 0. \end{equation} Then there exists a Flett point in $(a,b]$. \end{teo} \demo Consider the function $\varphi:[a,b] \to \mathbb{R}$ defined by $$\varphi(x)=\begin{cases} \dfrac{f(x)-f(a)}{x-a}, & x \in (a,b] \\ f'(a) ,& x=a \end{cases}$$ Observe that $\varphi$ is continuous on $[a,b]$, differentiable on $(a,b]$, and that, by \eqref{eqtr}, $\varphi'(b)[\varphi(b)-\varphi(a)] \leqslant 0.$ Hence, by Lemma \ref{lema:General-Rolle}, there exists $\xi \in (a,b]$ such that $\varphi'(\xi) =0$, which means that $$f'(\xi)=\frac{f(\xi)-f(a)}{\xi-a},$$ that is, $\xi$ is a Flett point in $(a,b]$. \fimdemo In the following example we have a function that does not satisfy Flett's condition but does satisfy Trahan's, and therefore has a Flett point. \begin{ex} \label{trhflt} Consider the function $f:[-\tfrac{1}{2},1] \to \mathbb{R}$ given by $f(x)=x^3$. Note that $f$ is differentiable and that $f'(-\tfrac{1}{2}) \neq f'(1)$, so $f$ does not satisfy Flett's condition. However, $f$ satisfies Trahan's condition and therefore has a Flett point, namely $\xi = \frac{1}{4} \in \left(-\tfrac{1}{2},1\right]$; indeed, $f'(\tfrac14)=\tfrac{3}{16}=\frac{f(1/4)-f(-1/2)}{1/4-(-1/2)}$. \label{estricttrahan}\end{ex} Another sufficient condition for the existence of a Flett point was proved by J. Tong in \cite{TONG}. An interesting point of this condition is that Tong only requires differentiability of $f$ on $(a,b)$, but uses the notions of the arithmetic mean of $f$, $\mathscr{M}(f):= \frac{f(a)+ f(b)}{2}$, and the integral mean of $f$, $\mathscr{I}(f):=\frac{1}{b-a}\int\limits_a^b f(t) dt$. \begin{teo}[\textbf{Tong's condition \cite[Theorem 2]{TONG}}] \label{teot} Let $f:[a,b] \to \mathbb{R}$ be continuous on $[a,b]$ and differentiable on $(a,b)$. If $\mathscr{M}(f)= \mathscr{I}(f)$, then $f$ admits a Flett point in $(a,b)$. \end{teo} \demo It suffices to observe that the function $h$ given by $$\begin{array}{ccclc} h: &[a,b] & \to & \mathbb{R}\\ &x & \mapsto & h(x)=\tfrac{f(x)+ f(a)}{2} (x-a)- \int\limits_a^x f(t)dt \end{array}$$ is continuous on $[a, b]$ and differentiable on $(a, b)$ with derivative $$h'(x)=\frac{1}{2}f'(x)(x-a) + \frac{1}{2} (f(x)+ f(a)) -f(x).$$ Since $h(a)=0$ and $\mathscr{M}(f)= \mathscr{I}(f)$, it follows that $h(b)=0$. Now, using Rolle's theorem we obtain the conclusion of the theorem. \fimdemo \medskip The following example presents a function that satisfies neither Flett's condition nor Trahan's, but does satisfy Tong's.
\begin{ex} Consider the function $f(x)=\arcsin x$ on the interval $[-1,1].$ Note that $f$ satisfies neither Flett's condition nor Trahan's, since it is not differentiable at the endpoints. However, a simple computation using integration by parts shows that $\mathscr{M}(f)= \mathscr{I}(f),$ which proves that the function $\arcsin x$ satisfies Tong's condition on the interval $[-1,1]$ and therefore has a Flett point. \end{ex} \medskip The third condition is due to the mathematician B. Malesevic and is formulated in terms of an infinitesimal function. To this end, let $f:[a,b] \to \mathbb{R}$ be a function differentiable on $[a,b]$ and differentiable an arbitrary number of times in a right neighbourhood of the point $x=a$. Consider the first-order Taylor expansion with remainder, $$f(x)=f(a)+f'(a)(x-a)+\varphi(x)(x-a),$$ where $\lim\limits_{x \to a^+} \varphi(x)=0$. We then define the function $\varphi_1: [a,b] \to \mathbb{R}$ by \begin{equation} \label{T1} \varphi_1(x)= \begin{cases} \dfrac{f(x)-f(a)}{x-a}-f'(a), & x \in (a,b] \\ 0, & x=a. \end{cases} \end{equation} Based on this function $\varphi_1$, Malesevic proved the following result: \begin{teo}[Malesevic's condition \cite{MAL}]\label{MAL} Let $f:[a,b] \to \mathbb{R}$ be a differentiable function and $\varphi_1$ as in \eqref{T1}. If one of the following conditions \begin{align*} {\rm T}_1:&\; \; \varphi_1'(b)\, \varphi_1(b) < 0, \\ {\rm M}_1:&\; \; \varphi_1'(a)\, \varphi_1(b) < 0 \end{align*} is satisfied, then $f$ has a Flett point. \end{teo} \demo If condition ${\rm T}_1$ is satisfied, then $\varphi_1'(b)[\varphi_1(b)-\varphi_1(a)] < 0$. Hence, by Lemma \ref{lema:General-Rolle2} there exists $\xi_1 \in (a,b)$ such that $\varphi_1'(\xi_1)=0$, i.e., \begin{align*} \tfrac{1}{\xi_1-a} \big( f'(\xi_1)- \tfrac{f(\xi_1)-f(a)}{\xi_1-a} \big)=0 \Leftrightarrow f'(\xi_1)= \tfrac{f(\xi_1)-f(a)}{\xi_1-a}. \end{align*} If instead condition ${\rm M}_1$ is satisfied, then $\varphi_1'(a) \big[ \varphi_1 (b)-\varphi_1(a)\big] <0$. Hence, by Corollary 3 of Theorem 2 in \cite{Malesevic} there exists $\xi_2\in (a, b)$ such that $\varphi_1'(\xi_2)=0$, i.e., \begin{align*} \tfrac{1}{\xi_2-a} \big( f'(\xi_2)- \tfrac{f(\xi_2)-f(a)}{\xi_2-a} \big)=0 \Leftrightarrow f'(\xi_2)= \tfrac{f(\xi_2)-f(a)}{\xi_2-a}. \end{align*} \fimdemo \begin{Obs} If both conditions ${\rm T}_1$ and ${\rm M}_1$ of {\rm Theorem} $\ref{MAL}$ are satisfied, then there exist two $($distinct$)$ Flett points $($see \cite{Malesevic}$)$. \end{Obs} \medskip The relation between functions satisfying the conditions of Flett, Trahan, Tong and Malesevic is represented in Figure \ref{relac}. \begin{figure}[b] \begin{center} \includegraphics[scale=0.7]{quarto-malesevic.eps} \caption{Relation between the conditions of Flett, Tong, Trahan and Malesevic} \label{relac} \end{center} \end{figure} Observe that $\Delta_{12} \neq \emptyset$, since $f(x)= {\rm sgn}(x)$ is a function that satisfies none of the conditions, as it is not differentiable on $(a,b)$ for any interval $[a,b]$ of the real line containing zero. Nevertheless, it has infinitely many Flett points. Analogously, \noindent {\bf (i)} $f(x)=x^3$, $x \in [-1,1]$ is in $\Delta_1$. \medskip \noindent {\bf (ii)} $f(x)=\sin(x), x \in \left[-\frac{\pi}{2},\frac{5\pi}{2}\right]$ is in $\Delta_2.$ \medskip \noindent {\bf (iii)} $f(x)=x^3, x \in \left[-\frac{2}{3},1\right]$ is in $\Delta_3$. \medskip \noindent {\bf (iv)} $f(x)=\arcsin(x), x \in [-1,1]$ is in $\Delta_{12}$.
\medskip One can prove that all the sets $\Delta_i$, $i=1,2,...,12$, are nonempty, but constructing examples for $i=4,5,6,7,8,10$ and $11$ is beyond the scope of this work, and so they will not be discussed here. For further details of the proofs and examples seen in this section, we suggest the reference \cite{MOLNAROVA}. \section{Generalizations and Applications} \subsection{Some Generalizations} Next we treat some generalizations and consequences of Flett's theorem. The theorem below was proved in 1998. This result is a generalization of Flett's theorem that does not require the Rolle-type condition; in this sense the result is more general. Note that in the two theorems below, the case $f'(a)=f'(b)$ is exactly Flett's theorem. \begin{teo}[\textbf{Riedel-Sahoo \cite[Theorem 5.2]{SAHOO}}] If $f: [a,b] \to \mathbb{R}$ is differentiable on $[a,b]$, then there exists $\xi \in (a,b)$ such that \begin{align}\label{eqn:Rie-Sa} f(\xi)-f(a)=(\xi-a)f'(\xi)-\frac{1}{2}\frac{f'(b)-f'(a)}{b-a}(\xi-a)^2. \end{align} \end{teo} \demo Define the function $\varphi:[a,b] \to \mathbb{R}$ by $$\varphi(x)=f(x)-\frac{1}{2}\frac{f'(b)-f'(a)}{b-a}(x-a)^2.$$ One easily sees that $\varphi$ is differentiable on $[a, b]$ and $$\varphi'(x)=f'(x)-\frac{f'(b)-f'(a)}{b-a}(x-a).$$ Since $\varphi'(a)=f'(a)=\varphi'(b)$, it follows from Flett's theorem that there exists $\xi \in (a,b)$ such that $$\varphi'(\xi)=\frac{\varphi(\xi)-\varphi(a)}{\xi-a}.$$ Therefore, there exists $\xi \in (a,b)$ such that equation \eqref{eqn:Rie-Sa} is satisfied, which proves the theorem. \fimdemo \medskip Inspired by the statement of Theorem \ref{flett2}, one proves the result below. \begin{teo}[{\cite[Theorem 2.1]{Cakmak}}] If $f: [a,b] \to \mathbb{R}$ is differentiable on $[a,b]$, then there exists $\xi \in (a,b)$ such that $$f(b)-f(\xi)=(b-\xi)f'(\xi)+\frac{1}{2}\frac{f'(b)-f'(a)}{b-a}(b-\xi)^2.$$ \end{teo} The result below is also a generalization of Flett's theorem, with another Rolle-type condition, $f''(a)=f''(b)$. \begin{teo}[{\cite[Exercise 5.3.11(b)]{Radulescu}}]\label{teo:f2a=f2b} Let $f:[a,b] \to \mathbb{R}$ be twice differentiable with $f''(a)=f''(b)$. Then there exists $\xi \in (a,b)$ such that $$f(\xi)-f(a)=(\xi-a)f'(\xi)-\frac{(\xi-a)^2}{2}f''(\xi).$$ \end{teo} The result below is analogous to the previous theorem, again with the Rolle-type condition $f''(a)=f''(b)$. \begin{teo}\label{teo:f2da=f2db} Let $f:[a,b] \longrightarrow \mathbb{R}$ be twice differentiable with $f''(a)=f''(b)$. Then there exists $\xi \in (a,b)$ such that $$f(b)-f(\xi)=(b-\xi)f'(\xi)-\frac{(b-\xi)^2}{2}f''(\xi).$$ \end{teo} The proof of Theorem \ref{teo:f2da=f2db} is analogous to that of Theorem \ref{teo:f2a=f2b} and is therefore left as an exercise to the reader. Theorems \ref{teo:Flett} and \ref{teo:f2a=f2b} were generalized by I. Pawlikowska in \cite{Pawlikowska} to $n$ times differentiable functions, with the Rolle-type condition $f^{(n)}(a)=f^{(n)}(b)$. \begin{teo}[{\cite[Lemma 2.2]{Pawlikowska}}] Let $f:[a,b] \to \mathbb{R}$ be $n$ times differentiable with $f^{(n)}(a)=f^{(n)}(b)$. Then there exists $\xi \in (a,b)$ such that $$f(\xi)-f(a)=\sum\limits_{i=1}^{n}\dfrac{(-1)^{i+1}}{i!}(\xi-a)^if^{(i)}(\xi).$$ \end{teo} \subsection{Some Applications} Next we present some applications of Flett's theorem. We will mainly treat the work of C. Lupu and T. Lupu in \cite{LUPU} and of C. Lupu in \cite{LUPU2}. \medskip The goal of this section is to present some important properties of certain integral operators, such as the Volterra operator.
Recall that $C([0, 1])$ denotes the set of continuous real-valued functions defined on $[0, 1]$ and that $C^1([0, 1])$ is the set of all continuously differentiable real-valued functions defined on the same interval. Define the operators $T,S:C([0,1])\to C([0,1])$ by \begin{align*} (T\varphi)(t) &=\varphi(t)-\int\limits_{0}^{t}\varphi(x)dx\\ (S\psi)(t) &=t\psi(t)-\int\limits_{0}^{t}x\psi(x)dx. \end{align*} The following properties hold for the operators $T$ and $S$. \begin{teo}[{\cite[Theorem 2.11]{LUPU}}]\label{teo:OpInt} If $f,g:[0,1] \to \mathbb{R}$ are continuous functions, then there exist $\xi_{1},\xi_{2},\xi_{3}\in(0,1)$ such that \begin{align*} \int\limits_{0}^{1}f(x)dx\,(Tg)(\xi_{1}) &=\int\limits_{0}^{1}g(x)dx\, (Tf)(\xi_{1})\\ (Tf)(\xi_{2}) &=(Sf)(\xi_{2})\\ \int\limits_{0}^{1}f(x)dx\,(Sg)(\xi_{3}) &=\int\limits_{0}^{1}g(x)dx\, (Sf)(\xi_{3}). \end{align*} \end{teo} \begin{teo}[{\cite[Theorem 2.12]{LUPU}}]\label{teo:OpInt2} If $f,g:[0,1]\to\mathbb{R}$ are continuous functions, then there exist $\xi_{1},\xi_{2}\in(0,1)$ such that \begin{align*} \int\limits_{0}^{1}(1-x)f(x)dx\, (Tg)(\xi_{1}) &=\int\limits_{0}^{1}(1-x) g(x)\,dx \, (Tf)(\xi_{1})\\ \int\limits_{0}^{1}(1-x)f(x)dx\, (Sg)(\xi_{2}) &=\int\limits_{0}^{1}(1-x) g(x) \,dx\, (Sf)(\xi_{2}). \end{align*} \end{teo} Detailed proofs of Theorems \ref{teo:OpInt} and \ref{teo:OpInt2} can be found in \cite[Theorem 2.11 and Theorem 2.12]{LUPU}. \medskip The following result is the analogue of Flett's mean value theorem in the setting of the Cauchy mean value theorem and is used in an essential way to obtain the results that follow. \begin{teo}[{\cite[Lemma 2.1]{LUPU2}}] \label{flettcauchy} Let $f,g: [a,b] \to \mathbb{R}$ be differentiable functions on $[a,b]$ with $g'(x)\neq 0$ for all $x\in [a, b]$ and $\frac{f'(a)}{g'(a)}=\frac{f'(b)}{g'(b)}.$ Then there exists $\xi \in (a,b)$ such that $$\frac{f(\xi)-f(a)}{g(\xi)-g(a)}=\frac{f'(\xi)}{g'(\xi)}.$$ \end{teo} \medskip Let $L^2 ((0,1))$ denote the vector space of real-valued functions on $(0,1)$ that are square integrable in the Lebesgue sense, i.e., \begin{align*} L^2 ((0,1)) \!=\!\bigg\{\! f: (0,1)\to \mathds{R}\!: f \,\hbox{is Lebesgue measurable and} {\small \int\limits_0^1 f^2(x) dx<\infty}\bigg\}. \end{align*} Note that for continuous functions on the interval $(0, 1)$ the Lebesgue and Riemann integrals coincide, so at a first reading one may think of the space $L^2 ((0,1))$ as the space of continuous functions on $(0, 1)$ whose square is Riemann integrable on $(0,1)$. Moreover, the space $L^2 ((0,1))$ is a normed vector space with the norm $$\|f\|_{L^2((0,1))}=\left(\int_0^1 f^2(x) dx\right)^{1/2}, \ \ f \in L^2((0,1)).$$ \begin{defi}[Volterra operator] Let $f \in L^2((0,1))$ and $x \in (0,1)$. We define the Volterra operator $V$ by $$\begin{array}{ccclc} V: &L^2((0,1)) & \to & L^2((0,1))\\ &f & \to & V(f)(x)=\int\limits_0^x f(t)dt. \end{array}$$ \end{defi} Let $\Psi, \phi:[0,1] \to \mathbb{R}$ be functions with $\Psi$ continuous and $\phi$ differentiable with $\phi'(x) \neq 0$ for all $x \in (0,1)$. We define the \textit{weighted Volterra-type} operator $$V_{\phi}\Psi(t)=\int\limits_0^t \phi(x)\Psi(x)dx.$$ In what follows, we define the spaces \begin{align*} \mathfrak{C}([a,b])&:=\big\{\phi \in C^1([a,b]), \phi'(x)\neq0, x \in [a,b], \phi(a)\!=\!0\big\}\; \hbox{and}\\ C_{nula}([a,b]) &:= \Big\{ f\in C([a, b]): \int\limits_a^b f(x) dx =0\Big\}. \end{align*} \begin{teo}\label{apl0} Let $f \in C_{nula}([a,b])$ and $g \in C^1([a,b])$, with $g'(x) \neq 0$ for all $x \in [a,b]$.
Then there exists $\xi \in (a,b)$ such that $$V_g f(\xi)=g(a)\cdot Vf(\xi).$$ \end{teo} \demo Consider the functions $\varphi,\eta:[a,b] \to \mathbb{R}$ given by \begin{align*} \varphi(t) &=\int\limits_a^t f(x)g(x)dx -g(t)\int\limits_a^t f(x)dx \; \hbox{and}\\ \eta(t) &=g(t). \end{align*} Since $\varphi$ is differentiable, it follows that $$\varphi'(t)=f(t)g(t)-\Big(g'(t)\int\limits_a^t f(x)dx + g(t)f(t)\Big) = -g'(t)\int\limits_a^t f(x)dx.$$ Observe that $\varphi'(a)=0$, so $\dfrac{\varphi'(a)}{\eta'(a)}=0$. On the other hand, $\varphi'(b)=-g'(b)\int\limits_a^b f(x)dx=0$ since $f \in C_{nula}([a,b]),$ and hence $$\dfrac{\varphi'(b)}{\eta'(b)}=0.$$ Thus, by Theorem \ref{flettcauchy}, there exists $\xi \in (a,b)$ such that $$\frac{\varphi(\xi)-\varphi(a)}{\eta(\xi)-\eta(a)}=\frac{\varphi'(\xi)}{\eta'(\xi)}.$$ Equivalently, $$\frac{\int\limits_a^{\xi} f(x)g(x)dx - g(\xi)\int\limits_a^{\xi} f(x)dx}{g(\xi)-g(a)}=\frac{-g'(\xi)\int\limits_a^{\xi} f(x)dx}{g'(\xi)}.$$ From this it follows that $$\int\limits_a^{\xi} f(x)g(x)dx = g(a)\int\limits_a^{\xi} f(x)dx.$$ \fimdemo \begin{teo}\label{apl1} If $f,g$ are continuous real-valued functions on $[0,1]$ and $\phi \in \mathfrak{C}([0,1])$, then there exists $\xi \in (0,1)$ such that \begin{align*} & V_{\phi}f(\xi)\int\limits_0^1 g(x)dx - V_{\phi} g(\xi)\int\limits_0^1 f(x)dx \\ &\qquad \qquad \qquad \qquad = \phi(0)\bigg(Vf(\xi)\int\limits_0^1 g(x)dx - Vg(\xi)\int\limits_0^1 f(x)dx\bigg). \end{align*} \end{teo} The details of the proof of Theorem \ref{apl1} can be found in \cite[Theorem 2.4]{LUPU2}. Moreover, some remarks concerning this theorem are pertinent: \medskip \noindent ${\bf (i)}$ If $\phi(0)=0$, then there exists $\xi \in (0,1)$ such that $$V_{\phi}f(\xi)\int\limits_0^1 g(x)dx = V_{\phi} g(\xi)\int\limits_0^1 f(x)dx;$$ in particular, if $\phi(x)=x$, there exists $\xi \in(0,1)$ such that $$\int\limits_0^1 f(x)dx \int\limits_0^{\xi} xg(x)dx = \int\limits_0^1 g(x)dx \int\limits_0^\xi xf(x)dx.$$ \noindent ${\bf (ii)}$ Consider the weighted $L^2$ space given by $$L^2_{\phi}(0,\xi)=\Big\{ u: (0,\xi)\to \mathds{R}:\int_0^\xi u^2(x) \phi(x) \,dx<\infty\Big\}$$ equipped with the norm $$\|u\|_{L_{\phi}^2(0,\xi)} = \left(\int_0^\xi u^2(x)\phi(x)dx\right)^{1/2}, \ \ u \in L_{\phi}^2(0,\xi).$$ Replacing $f$ and $g$ by $f^2$ and $g^2$, we find that there exists $\xi \in (0,1)$ such that \begin{align*} & V_{\phi}f^2(\xi)\!\!\int\limits_0^1 g^2(x)dx \!-\! V_{\phi} g^2(\xi)\!\int\limits_0^1 f^2(x)dx \\ &\qquad \qquad \quad =\phi(0) \!\left(\!Vf^2(\xi)\!\!\int\limits_0^1 g^2(x)dx \!-\!\! Vg^2(\xi)\!\!\int\limits_0^1 f^2(x)dx\!\right), \end{align*} that is, \begin{align*} & ||f||_{L_{\phi}^2(0,\xi)}^2||g||_{L^2(0,1)}^2\!-||g||_{L_{\phi}^2(0,\xi)}^2||f||_{L^2(0,1)}^2\\ &\qquad \qquad \qquad \quad = \phi(0) \big(||f||_{L^2(0,\xi)}^2||g||_{L^2(0,1)}^2 \!-||g||_{L^2(0,\xi)}^2||f||_{L^2(0,1)}^2\big). \end{align*} From this last equality, if $\phi(0)=0$, we obtain \begin{equation}\label{volt} ||f||_{L_{\phi}^2(0,\xi)}||g||_{L^2(0,1)}=||g||_{L_{\phi}^2(0,\xi)}||f||_{L^2(0,1)}. \end{equation} Writing equation \eqref{volt} in the form \begin{equation} \frac{||f||_{L_{\phi}^2(0,\xi)}}{||g||_{L_{\phi}^2(0,\xi)}}=\frac{||f||_{L^2(0,1)}}{||g||_{L^2(0,1)}},\label{volt2} \end{equation} we arrive at the following interesting property: given two functions $f, g$ with equal $($or proportional$)$ norms in $L^2(0,1)$, and given a non-constant weight function $\phi$, there exists a number $\xi \in (0,1)$ at which the norms of the two functions are equal $($or proportional$)$ in $L^2_{\phi}(0,\xi)$.
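The identity of Theorem \ref{apl0} can also be checked numerically. The minimal sketch below (the choices $f(x)=\sin(2\pi x)$ and $g(x)=e^{x}$ on $[0,1]$ are made only for convenience and satisfy the hypotheses, since $\int_0^1\sin(2\pi x)\,dx=0$ and $g'(x)=e^{x}\neq 0$) approximates both sides by the trapezoidal rule and locates a point $\xi\in(0,1)$ where they coincide, through a sign change of their difference.
\begin{verbatim}
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 100001)
f = np.sin(2.0 * np.pi * x)   # f in C_nula([0,1]): its integral over [0,1] vanishes
g = np.exp(x)                 # g in C^1([0,1]) with g'(x) = e^x != 0

dx = x[1] - x[0]
def cumtrap(y):               # cumulative trapezoidal integral from a to each grid point
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

h = cumtrap(f * g) - g[0] * cumtrap(f)   # h(t) = V_g f(t) - g(a) * V f(t)

# Theorem apl0 guarantees an interior zero of h; find the first sign change
k = np.where(h[1:-1] * h[2:] <= 0.0)[0][0] + 1
print("xi ~= %.4f   h(xi) ~= %.2e" % (x[k], h[k]))
\end{verbatim}
Running the sketch yields a point $\xi$ strictly inside $(0,1)$, in agreement with the theorem.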
\section{Open problems} The study of necessary and sufficient conditions for the existence of Flett points (see Section \ref{def1}) is, as far as we know, not complete. Recall that the function $f(x) = \mbox{sgn}(x)$ belongs to the set $\Delta_{12}$ (see Figure \ref{relac}) and therefore satisfies none of the conditions discussed above; nevertheless, it has infinitely many Flett points. This observation alone makes three questions natural, which have not yet been answered in the literature: \begin{pgt} Besides those presented in this work, are there other sufficient conditions for the existence of Flett points? \end{pgt} \begin{pgt} Is there a necessary condition for the existence of Flett points? \end{pgt} \begin{pgt} Assuming that a function has at least one Flett point, under which conditions is it unique? \end{pgt} \medskip \noindent {\bf Acknowledgments:} The authors thank the referee for comments and observations that helped to significantly improve the presentation of this work. This work is the outcome of the undergraduate research project (Iniciação Científica) carried out by the first author, who acknowledges the support of FAPESP through grant 2013/03866-9.
\section{Introduction} The desire for mobility is a driving force in progressing technology, with \textit{autonomous driving} (AD) clearly being the next major step in automotive technology along with electromobility. An AD vehicle is a highly complex system with several sensors and subcomponents, one of them being \textit{vehicle-to-everything} (V2X) communication. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{./fig/fig1.pdf} \caption[toc entry]{An autonomous driving (AD) research vehicle equipped with radio detection and ranging (RaDAR, colored in orange), light detection and ranging (LiDAR, colored in yellow), and camera sensors (colored in purple). The sensors are placed at different locations to obtain an extensive environment sensing.}\vspace{-0.4cm} \label{fig:car} \end{figure} In the context of AD, V2X communication has several applications, e.g., path planning and decision making~\cite{Zeng2019}, or systems for localization and cooperative perception~\cite{kim2015impact}. All autonomous systems need a perception stage which constitutes the first step in the process chain of sensing the environment. The purpose of cooperative perception systems in AD is the exploitation of information stemming from other traffic participants to increase safety, efficiency and comfort aspects while driving~\cite{hobert2015enhancements}. The common concept lies in information transmission between various vehicles as well as between vehicles and back-end servers over any kind of (wireless) transmission channel. The transmitted information ranges from trajectories of the ego vehicle and other traffic participants over vehicle state information to sensor data coming from radio detection and ranging (RaDAR), light detection and ranging (LiDAR), and camera, and assists in constructing a more complete model of the physical world. Each decision of an AD vehicle is based on the underlying \textit{environment perception} and is intended to lead to an appropriate action. Hence, the proper perception of the environment is an essential ingredient for reducing road accidents to a bare minimum to foster public acceptance of AD. The most common sensors of a single AD vehicle's environment perception system (\cite{Bengler2014, Levinson2011, Wei2013}) are illustrated in Fig.~\ref{fig:car}. \begin{figure*}[t!]
\centering \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig2a}\vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig2b} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig2c} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig2d} \begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]} (a) & (b) & (c) & (d) \end{tabu} \caption[toc entry]{A simple adversarial attack using the iterative least-likely class method (LLCM)~\cite{Kurakin2017a} to fool the ICNet \cite{Zhao2018a} on a hand-picked image from the Cityscapes validation set; (a) clean input image, (b) semantic segmentation of clean input image, (c) adversarial example, and (d) semantic segmentation of adversarial example.}\vspace{-0.4cm} \label{fig:adv_llm_images} \end{figure*} Several external sensors, i.e., RaDAR, LiDAR, and camera, are mounted on an AD vehicle. RaDAR sensors are already widely used in multiple automotive functions and are considered to play a key role in enabling AD (\cite{Engels2017, Patole2017}). LiDAR sensors are capable of detecting obstacles \cite{Levinson2011} and were already used in numerous AD competitions \cite{Bengler2014}. Camera sensors on the other hand are mainly used for detecting lane markings or traffic signs \cite{Wei2013}, but can also be used for object detection and semantic segmentation \cite{Zhao2018a}. The data captured by the three sensor groups is gathered within a central processing unit to extract semantic information from the environment. Over the past few years, the interest in employing deep neural networks (DNNs) increased noticeably as they constantly achieved state-of-the-art performance in multiple vision-related tasks and benchmarks, including \textit{semantic segmentation} for AD (\cite{Cordts2016, Long2015}). Semantic segmentation is a classical computer vision task, where each pixel of an RGB image is assigned to a corresponding semantic class, see Fig.~\ref{fig:adv_llm_images} (a), (b). Since such camera-based technology is both cheaper and uses less data compared to LiDAR-based technology, it is of special interest for AD. Recent progress in semantic segmentation enables real-time processing \cite{Zhao2018a}, making this an even more promising technology for AD applications. Nevertheless, the environment perception system of an AD vehicle is a highly safety-relevant function. Any error can lead to catastrophic outcomes in the real world. While DNNs revealed promising functional performance in a wide variety of tasks, they show vulnerability to certain input patterns, denoted as \textit{adversarial examples} \cite{Szegedy2014}. Adversarial examples are almost imperceptibly altered versions of an image and are able to fool state-of-the-art DNNs in a highly robust manner, see Fig.~\ref{fig:adv_llm_images} (c), (d). Assion et al.~\cite{Assion2019} showed that a virtually unlimited set of adversarial examples can be created on each state-of-the-art machine learning model.
This intriguing property of DNNs is of special concern when looking at their applications in AD and needs to be addressed further by DNN certification methods (\cite{Dvijotham2018, Wu2018}) or means of uncertainty quantification \cite{Michelmore2019}. Cooperative perception, for example, can be seen as one of the weak spots in the data processing during the environment perception of an AD vehicle. It can be used as a loophole to inject adversarial examples to fool AD vehicles in range. Note that this is only one of many possible scenarios of how adversarial examples can find their way into the system. In this article, we will examine the vulnerability of DNNs towards adversarial attacks, while focusing on environment perception for AD. For this purpose we chose semantic segmentation as the underlying function we want to perform adversarial attacks on, since it is a promising technology for camera-based environment perception. The remainder of this article is structured as follows: First, we give a brief overview of semantic segmentation and introduce the ICNet \cite{Zhao2018a} as a potential network topology, which we will then adopt for our experiments. Second, we continue with adversarial attacks, starting with simple image classification and extending to adversarial attacks for semantic segmentation. We demonstrate several visual examples to raise awareness for DNNs' vulnerability towards adversarial attacks. Third, we examine techniques for defending against the adversarial attacks shown before and compare the obtained qualitative results. Lastly, we conclude by providing final remarks and discussing some future research directions, pointing out that certification is an important aspect to ensure a certain level of robustness when employing DNNs. The article is intended to sensitize the reader to vulnerability issues of DNNs in environment perception for AD and to stir interest in the development of new defense strategies for adversarial attacks. \section{Semantic Segmentation} An RGB image is a high-dimensional source of data, with pixels being the smallest units of semantic information. Semantic segmentation is a popular method to extract the semantic information from an RGB image, where each pixel is tagged with a label taken from a finite set of classes. Today's state of the art in semantic segmentation is dominated by convolutional neural networks (CNNs), a special form of DNNs. This section introduces some mathematical notation regarding CNNs and gives an overview of the CNN architecture used for semantic segmentation throughout this article. \subsection{Mathematical Notation} For the sake of simplicity, we first assume having a CNN which takes one input image and outputs only a corresponding class for the entire image.
Hence, we begin with simple image classification and then extend to semantic segmentation. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{./fig/fig3} \caption{Architectural overview of ICNet~\cite{Zhao2018a}. The ICNet takes different scales of an RGB image as inputs (left gray block) to output a semantic segmentation mask (right gray block). The encoder consists of three scale-dependent parts to extract multi-scale features from the inputs (\textcolor{tu5}{shades of blue}). Each of these three encoder parts performs a downsampling by a factor of eight during feature extraction. To save computational complexity, the bigger scales are limited to low-level and mid-level feature extraction. The extracted multi-scale features are then fused within the decoder by a multi-scale fusion block (\textcolor{tu3}{light magenta}), before performing final upsampling to obtain a full-resolution semantic segmentation mask with respect to the input.}\vspace{-0.4cm} \label{fig:icnet} \end{figure*} First of all, the input image is denoted as $\boldsymbol{x}\in\mathcal{X}\subset\mathbb{G}^{H\times W\times C}$, with image height in pixels $H$, image width in pixels $W$, number of color channels $C$, dataset $\mathcal{X}$, and the set of integer gray values $\mathbb{G}$. Each image contains gray values $x_i\in\mathbb{G}^{C}$ at each pixel position $i\in\mathcal{I}$, with $\mathcal{I}$ being the set of pixel positions, having the cardinality $|\mathcal{I}|=H\cdot W$. Smaller patches of an image are denoted as $\boldsymbol{x}_{\mathcal{I}_i} \in\mathbb{G}^{h\times w\times C}$, with patch height in pixels $h$, patch width in pixels $w$, and the set of pixel positions $\mathcal{I}_i \subseteq\mathcal{I}$ with $i$ being the center pixel and $|\mathcal{I}_i|=h\cdot w$. For the special case of $\mathcal{I}_i=\mathcal{I}$, we obtain $\boldsymbol{x}_{\mathcal{I}_i} = \boldsymbol{x}$. A CNN usually consists of several layers $\ell\in\mathcal{L}$ containing feature map activations $\boldsymbol{f}_\ell\!\left( \boldsymbol{x} \right) \in\mathbb{R}^{H_\ell\times W_\ell\times C_\ell}$ of the respective layer $\ell\in\mathcal{L}$, and 1st layer input image $\boldsymbol{x}$, with the set of layers $\mathcal{L}$, feature map height $H_\ell$, feature map width $W_\ell$, and number of feature maps $C_\ell$. Fed with the input image $\boldsymbol{x}$, a CNN \textit{for image classification} outputs a probability score $P\!\left( s | \boldsymbol{x} \right) \in\mathbb{I}$ for each class $s\in\mathcal{S}$, with $\mathbb{I}=\left[ 0,1\right]$, and the set of classes $\mathcal{S}$, with the number of classes $N=|\mathcal{S}|$, leading to \begin{equation} \mathfrak{F}_\text{classification}: \mathbb{G}^{H\times W\times C} \to \mathbb{I}^{N}. \end{equation} For better readability, the CNN parameters $\boldsymbol{\theta}$ are omitted in our notation. The predicted class $s^*\!\left( \boldsymbol{x} \right) \in\mathcal{S}$ for the input image $\boldsymbol{x}$ is then obtained by \begin{equation} s^*\!\left( \boldsymbol{x} \right) = \operatorname*{\text{argmax}}_{s\in\mathcal{S}} P\!\left( s| \boldsymbol{x} \right). \end{equation} From now on, a CNN is considered which is capable of performing \textit{semantic segmentation}.
The respective CNN outputs a probability $P\!\left(s | i,\boldsymbol{x} \right)$ for each pixel position $i\in\mathcal{I}$ of the input image $\boldsymbol{x}$ and class $s\in\mathcal{S}$. Altogether, it outputs class scores $\boldsymbol{p} \!\left( \boldsymbol{x} \right)=\mathfrak{F}_\text{segmentation}\!\left( \boldsymbol{x} \right) \in\mathbb{I}^{H\times W\times N}$ for all pixel positions $i\in\mathcal{I}$ and classes $s\in\mathcal{S}$, leading to
\begin{equation}
\mathfrak{F}_\text{segmentation}: \mathbb{G}^{H\times W\times C} \to \mathbb{I}^{H\times W\times N}.
\end{equation}
The semantic segmentation mask $\boldsymbol{m}\!\left( \boldsymbol{x} \right) \in\mathcal{S}^{H\times W}$ containing the predicted class $m_i\!\left( \boldsymbol{x} \right) = s^*_i\!\left( \boldsymbol{x} \right)$ at each pixel position $i\in\mathcal{I}$ of the input image $\boldsymbol{x}$ is then obtained by
\begin{equation}
\boldsymbol{m}\!\left( \boldsymbol{x} \right) = \operatorname*{\text{argmax}}_{s\in\mathcal{S}} \boldsymbol{p} \!\left( \boldsymbol{x} \right).
\end{equation}
The performance of such a CNN is measured by the \textit{mean intersection-over-union} (mIoU)
\begin{equation}
\text{mIoU} = \frac{1}{N} \sum_{s\in\mathcal{S}} \frac{\text{TP}\!\left( s \right)}{\text{TP}\!\left( s \right) + \text{FP}\!\left(s \right) + \text{FN}\! \left( s \right)},
\end{equation}
with the class-specific true positives $\text{TP}\!\left( s \right)$, false positives $\text{FP}\!\left( s \right)$, and false negatives $\text{FN}\!\left( s \right)$.
\subsection{Architecture for Semantic Segmentation}
Today's state-of-the-art CNN architectures for semantic segmentation are often based on the work of Long et al.~\cite{Long2015}. They proposed to use a CNN, pretrained on image classification, as a feature extractor and to further extend it to recover the original image resolution. The extended part is often referred to as the decoder and fulfills the task of gathering, reforming, and rescaling the extracted features for the task of semantic segmentation. One characteristic of this proposed network architecture is the absence of fully connected layers. Such CNNs are therefore called fully convolutional networks (FCNs). Especially for AD, a real-time capable state-of-the-art CNN being robust to minimal changes in the input is needed. Arnab et al.~\cite{Arnab2018} analyzed the robustness of various CNNs for semantic segmentation towards simple adversarial attacks (\cite{Goodfellow2015, Kurakin2017a}) and concluded that CNNs using the same input at different scales are often most robust. The ICNet developed by Zhao et al.~\cite{Zhao2018a} combines a lightweight CNN architecture with multi-scale inputs. The overall structure of the ICNet is depicted in Fig.~\ref{fig:icnet}. The ICNet is designed to extract multi-scale features by taking different scales of the image as inputs. The extracted multi-scale features are fused before being upsampled to obtain a full-resolution semantic segmentation mask. The ICNet mainly profits from the combination of high-resolution low-level features (i.e., edges) with low-resolution high-level features (i.e., spatial context).
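To make the notation above concrete, a minimal NumPy sketch of the pixel-wise argmax and the mIoU computation could look as follows. The variable names are ours and not taken from the ICNet reimplementation; note also that classes absent from both prediction and ground truth are skipped here, which is a common practical convention.
\begin{verbatim}
import numpy as np

def predicted_mask(p):
    """Pixel-wise argmax over class scores p of shape (H, W, N)."""
    return np.argmax(p, axis=-1)               # mask m(x) of shape (H, W)

def mean_iou(mask, gt, num_classes):
    """mIoU from class-specific TP, FP, FN of predicted mask vs. ground truth gt."""
    ious = []
    for s in range(num_classes):
        tp = np.sum((mask == s) & (gt == s))
        fp = np.sum((mask == s) & (gt != s))
        fn = np.sum((mask != s) & (gt == s))
        if tp + fp + fn > 0:                    # skip classes absent in both
            ious.append(tp / (tp + fp + fn))
    return float(np.mean(ious))
\end{verbatim}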
For the sake of reproducibility, an openly available reimplementation\footnote{https://github.com/hellochick/ICNet-tensorflow} of the ICNet based on TensorFlow is used and tested on the widely applied Cityscapes dataset \cite{Cordts2016}. Cityscapes serves as a good dataset for exploring CNNs using semantic segmentation for AD, having pixel-wise annotations for 5000 images (validation, training and test set combined), with relevant classes such as pedestrians and cars. The reimplementation of the ICNet achieves 67.26\,\% mIoU on the Cityscapes validation set and runs at about 19 fps on our \texttt{Nvidia Tesla P100} and about 26 fps on our \texttt{Nvidia Geforce GTX 1080Ti} with an input resolution of $1024\times 2048$. These numbers are promising and indicate that semantic segmentation could serve as a technology for the environment perception system of AD vehicles.
\section{Adversarial Attacks}\label{sec:adversarial_attacks}
Although CNNs exhibit state-of-the-art performance in several vision-related fields of research, Szegedy et al.~\cite{Szegedy2014} revealed their vulnerability towards certain input patterns. The CNN topologies they investigated were fooled by just adding small and imperceptible patterns to the input image. An algorithm producing such \textit{adversarial perturbations} is called an \textit{adversarial attack}, and a perturbed image is referred to as an \textit{adversarial example}. Based on the observations of Szegedy et al., new approaches arose for crafting adversarial examples more efficiently (\cite{Athalye2018, Carlini2017, Goodfellow2015, Kurakin2017a, Moosavi-Dezfooli2016}) and were even extended to dense prediction tasks, e.g., semantic segmentation (\cite{Assion2019, Metzen2017, Mopuri2018}). In the following, two types of adversarial attacks will be introduced: \textit{individual adversarial attacks}, aiming at fooling the network on the basis of one particular input image, as well as \textit{universal adversarial attacks}, aiming at fooling the network on a whole set of images at the same time.
\subsection{Individual Adversarial Perturbations}
For the sake of simplicity, CNNs \textit{for image classification} are considered in the following to describe the basic nature of targeted and non-targeted adversarial attacks using individual adversarial perturbations. As shown before, image classification can be easily extended to semantic segmentation. Common adversarial attacks aim at fooling a CNN, so that the predicted class $s^*\!\left(\boldsymbol{x}\right)$ does not match with the ground truth class $s\!\left(\boldsymbol{x}\right)\in\mathcal{S}$ of the input image $\boldsymbol{x}$. One example for such type of an adversarial attack is the fast gradient sign method (FGSM) introduced by Goodfellow et al.~\cite{Goodfellow2015}.
FGSM adopts the loss function $J\!\left(s^*\!\left(\boldsymbol{x}\right), s\!\left(\boldsymbol{x}\right)\right)$ that is used during training of the underlying CNN and computes the adversarial examples by
\begin{equation}\label{fgsm}
\begin{split}
\boldsymbol{x}^\text{adv} = \boldsymbol{x} + \boldsymbol{r} = \boldsymbol{x} + \lambda \, \text{sign}\!\left(\nabla_{\boldsymbol{x}}\, J\!\left(s^*\!\left(\boldsymbol{x}\right), s\!\left(\boldsymbol{x}\right)\right) \right),
\end{split}
\end{equation}
with the adversarial perturbation $\boldsymbol{r}\in\mathbb{R}^{H\times W\times C}$, the step size $\lambda\in\mathbb{R}^+$, and the gradient with respect to the input image $\nabla_{\boldsymbol{x}}\, J\left(s^*\!\left(\boldsymbol{x}\right), s\left(\boldsymbol{x}\right)\right)$. Note that $\text{sign}\!\left(\, \cdot\,\right)\in\lbrace \pm 1 \rbrace^{H\times W\times C}$. FGSM lets the perturbation $\boldsymbol{r}$ effectively \textit{in}crease the loss in each dimension by manipulating the input image into the positive (``+'') gradient direction. Thus, one is not limited to using the ground truth $s\!\left(\boldsymbol{x}\right)$ as depicted in (\ref{fgsm}), but can in fact use the output of the respective DNN $s^*\!\left(\boldsymbol{x}\right)$. Kurakin et al.~\cite{Kurakin2017a} extended FGSM to an iterative algorithm, changing the adversarial perturbation slightly in each iteration by a small $\lambda$. To prevent the adversarial perturbation's magnitude from getting too large, it is upper-bounded by
\begin{equation}
||\boldsymbol{r}||_\infty \le \epsilon,
\end{equation}
with $\epsilon\in\mathbb{R}^+$ being the upper bound of the infinity norm and $\epsilon \ge \lambda$. This way, the perceptibility of the adversarial perturbation is controlled by adjusting $\epsilon$ accordingly. For the iterative case, (\ref{fgsm}) extends to
\begin{equation}
\begin{split}
\boldsymbol{x}^\text{adv}_0 & = \boldsymbol{x},\\
\boldsymbol{x}^\text{adv}_{\tau+1} & = \boldsymbol{x}^\text{adv}_{\tau} + \boldsymbol{r}_{\tau+1}\\
& = \boldsymbol{x}^\text{adv}_{\tau} + \lambda \, \text{sign}\!\left( \nabla_{\boldsymbol{x}} J\!\left( s^*\!\left(\boldsymbol{x}^\text{adv}_{\tau}\right), s\!\left(\boldsymbol{x} \right) \right) \right),
\end{split}
\end{equation}
with $\tau\in\lbrace 0,1,2,... \rbrace$ being the current iteration index and therefore $\boldsymbol{x}^\text{adv}_{\tau}$ the adversarial example at iteration\footnote{The total number of iterations is set by flooring $\lfloor \text{min}\left(\epsilon + 4, 1.25\epsilon\right)\rfloor$.} $\tau$.
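As an illustration only, the following minimal sketch shows how the iterative FGSM variant described above could be realized. It assumes a user-supplied function \texttt{loss\_grad(x\_adv, target)} returning $\nabla_{\boldsymbol{x}} J$ for the model at hand (not part of the original ICNet code) and keeps the accumulated perturbation within the $\epsilon$-ball.
\begin{verbatim}
import numpy as np

def iterative_fgsm(x, target, loss_grad, eps=8.0, lam=1.0):
    """Iterative FGSM: repeatedly step into the signed gradient direction
    while keeping ||x_adv - x||_inf <= eps (gray values in [0, 255])."""
    num_iter = int(np.floor(min(eps + 4, 1.25 * eps)))
    x_adv = x.astype(np.float64).copy()
    for _ in range(num_iter):
        g = loss_grad(x_adv, target)              # dJ/dx at the current x_adv
        x_adv = x_adv + lam * np.sign(g)          # gradient-sign step (increase loss)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # enforce the eps-bound
        x_adv = np.clip(x_adv, 0, 255)            # stay in the valid gray-value range
    return x_adv
\end{verbatim}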
Considering AD vehicles, there exists no ground truth for the data being inferred. As already pointed out, a naive attacking idea in this setup would be finding an adversarial perturbation $\boldsymbol{r}$, such that (classification!)
\begin{equation}
s^*\!\left( \boldsymbol{x^\text{adv}} \right) \ne s^*\!\left( \boldsymbol{x} \right).
\end{equation}
Such an attack is the least-likely class method (LLCM) introduced by Kurakin et al.~\cite{Kurakin2017a}. LLCM aims at finding an adversarial perturbation $\boldsymbol{r}$ to obtain
\begin{equation}
s^\text{o}\!\left( \boldsymbol{x} \right) = \operatorname*{\text{argmax}}_{s\in\mathcal{S}} P\!\left( s | \boldsymbol{x}^\text{adv} \right) = \operatorname*{\text{argmin}}_{s\in\mathcal{S}} P\!\left( s | \boldsymbol{x} \right),
\end{equation}
with the least-likely class $s^\text{o}\!\left( \boldsymbol{x} \right)$ of the input image $\boldsymbol{x}$. Different from before, the adversarial example using LLCM is obtained by taking a step into the negative direction of the gradient with respect to the input image $\boldsymbol{x}$, according to
\begin{equation}
\boldsymbol{x}^\text{adv} = \boldsymbol{x} - \lambda \, \text{sign}\!\left( \nabla_{\boldsymbol{x}} J\!\left( s^*\!\left(\boldsymbol{x}\right), s^\text{o}\!\left( \boldsymbol{x} \right) \right) \right),
\end{equation}
\textit{minimizing} the loss function. Similar to FGSM, LLCM can also be performed in an iterative fashion, where in each step a small adversarial perturbation is added to the respective input image.
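Analogously to the FGSM sketch above, a hedged illustration of the iterative LLCM could look as follows; again, \texttt{loss\_grad} and \texttt{class\_probs} are assumed helper functions for the model at hand and are not taken from the cited works.
\begin{verbatim}
import numpy as np

def iterative_llcm(x, class_probs, loss_grad, eps=8.0, lam=1.0):
    """Iterative LLCM: step into the negative gradient direction towards
    the least-likely class, again keeping ||x_adv - x||_inf <= eps."""
    target = np.argmin(class_probs(x))            # least-likely class s_o(x)
    num_iter = int(np.floor(min(eps + 4, 1.25 * eps)))
    x_adv = x.astype(np.float64).copy()
    for _ in range(num_iter):
        g = loss_grad(x_adv, target)              # dJ/dx w.r.t. the least-likely class
        x_adv = x_adv - lam * np.sign(g)          # negative step: minimize the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0, 255)
    return x_adv
\end{verbatim}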
Another well-known approach for crafting adversarial examples is DeepFool \cite{Moosavi-Dezfooli2016}, introduced by Moosavi-Dezfooli and colleagues. Compared with FGSM and LLCM, DeepFool does not search for just any individual adversarial perturbation, but tries to find the minimal adversarial perturbation, with respect to an $l_p$-norm, that changes the network's output. This leads to
\begin{equation}
\boldsymbol{r}_\text{min} = \operatorname*{\text{argmin}}_{\boldsymbol{r}} ||\boldsymbol{r}||_p, \, \text{s.t.} \,\,\, s^*\!\left(\boldsymbol{x} + \boldsymbol{r}_\text{min}\right) \ne s^*\!\left( \boldsymbol{x} \right),
\end{equation}
with $|| \cdot ||_p$ being the $l_p$-norm restricting the magnitude of $\boldsymbol{r}_\text{min}$. Moosavi-Dezfooli et al.~primarily experimented with $p=2$, showing DeepFool's superiority in terms of speed and perturbation magnitude compared to FGSM, when targeting the same error rate for the respective CNN. We will not go further into detail here, but refer the interested reader to \cite{Moosavi-Dezfooli2016} for more information about DeepFool. Carlini and Wagner proposed an approach which proved to be extremely effective against adversarial example detection mechanisms \cite{Carlini2017}. They use
\begin{equation}
\begin{split}
\boldsymbol{r}_\text{min} = \operatorname*{\text{argmin}}_{\boldsymbol{r}} \left( ||\boldsymbol{r}||_2 + c \cdot f(\boldsymbol{x} + \boldsymbol{r}) \right), \\
\text{s.t.} \,\,\, s^*\!\left(\boldsymbol{x} + \boldsymbol{r}_\text{min}\right) \ne s^*\!\left( \boldsymbol{x} \right),
\end{split}
\end{equation}
as an objective function, with $c$ being a hyperparameter and $f(\boldsymbol{x} + \boldsymbol{r})$ being a loss function. Athalye et al.~\cite{Athalye2018} adopted this approach and as a result managed to circumvent several state-of-the-art defense mechanisms. We refer the interested reader to \cite{Athalye2018} and \cite{Carlini2017} for more fine-grained information about both approaches and their specific variations. So far, we introduced adversarial attacks that were successfully applied on image classification. Arnab et al.~\cite{Arnab2018} did the first extensive analysis of the behavior of different CNN architectures \textit{for semantic segmentation} using FGSM and LLCM, both performed iteratively and non-iteratively. They report results on a large variety of CNN architectures, including both lightweight and heavyweight ones. The main observation was that network models using residual connections are often more robust when it comes to adversarial attacks. In addition, lightweight CNN architectures tend to be almost equally robust as heavyweight CNN architectures. In summary, the results on the Cityscapes dataset demonstrated the vulnerability of CNNs in general. We show a typical attack in Fig.~\ref{fig:adv_llm_images} using the iterative LLCM on the ICNet with the hyperparameters $\lambda=1$ and $\epsilon=8$. Despite being mostly imperceptible for the human eye, the adversarial example leads to a dramatically altered network output. To show the overall effect on the Cityscapes validation set, we computed the mIoU ratio for the iterative LLCM and the non-iterative LLCM using different values for $\epsilon$. The mIoU ratio $Q$ is defined by
\begin{equation}
Q = \frac{\text{mIoU}_\text{adv}}{\text{mIoU}_\text{clean}},
\end{equation}
with $\text{mIoU}_\text{adv}$ being the mIoU on adversarially perturbed images $\boldsymbol{x}^\text{adv}$, and $\text{mIoU}_\text{clean}$ being the mIoU on clean images $\boldsymbol{x}$. The results are plotted in Fig.~\ref{fig:adv_llm_curves}. As expected, the stronger the adversarial perturbation (in terms of $\epsilon$), the lower the mIoU on adversarial examples $\text{mIoU}_\text{adv}$ and thus the lower the obtained mIoU ratio $Q$.
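For illustration only, the evaluation protocol behind $Q$ can be sketched by reusing the placeholder helpers introduced above (\texttt{iterative\_llcm}, \texttt{mean\_iou}) together with an assumed model function \texttt{segment} that returns the predicted mask; note that, for brevity, per-image mIoU values are averaged here, whereas the article computes the mIoU over the full validation set.
\begin{verbatim}
def miou_ratio(images, gts, segment, class_probs, loss_grad,
               num_classes, eps=8.0, lam=1.0):
    """Q = mIoU on attacked images divided by mIoU on clean images (sketch)."""
    miou_clean, miou_adv = [], []
    for x, gt in zip(images, gts):
        x_adv = iterative_llcm(x, class_probs, loss_grad, eps=eps, lam=lam)
        miou_clean.append(mean_iou(segment(x), gt, num_classes))
        miou_adv.append(mean_iou(segment(x_adv), gt, num_classes))
    return sum(miou_adv) / max(sum(miou_clean), 1e-12)
\end{verbatim}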
As pointed out by Arnab et al.~\cite{Arnab2018}, we also observe that the non-iterative LLCM is even stronger than its iterative counterpart, which contradicts the original observation made by Kurakin et al.~\cite{Kurakin2017a} on image classification. Arnab et al.~argue that this phenomenon might be a dataset property of Cityscapes, since the effect does not occur on their second dataset (Pascal VOC 2012). Nonetheless, we do not investigate this further, as we anyway want to focus on more realistically looking adversarial attacks in the following. Metzen et al.~\cite{Metzen2017} introduced new adversarial attacks for semantic segmentation. Instead of only fooling the CNN, they additionally wanted the respective CNN to output more realistically looking semantic segmentation masks. To do so, Metzen et al.~developed two methods. The first method uses a fake semantic segmentation mask $\boldsymbol{m}\!\left(\boldsymbol{z}\right)$ instead of the original semantic segmentation mask $\boldsymbol{m}\!\left( \boldsymbol{x} \right)$, with $\boldsymbol{x}\ne\boldsymbol{z}\in\mathcal{X}$, meaning that the fake segmentation mask refers to an existing image of the dataset $\mathcal{X}$. The overall assumption of Metzen et al.~is that a possible attacker might invest time to create a few uncorrelated fake semantic segmentation masks himself. Assuming that the attacker wants to use the same fake semantic segmentation mask to fool the respective CNN on several images, he is restricted to stationary situations to operate unnoticed, i.e., the AD vehicle does not move and thus the scenery captured by the camera sensor changes only slightly. Because of this operational constraint, we call this method the stationary segmentation mask method (SSMM). The second method modifies the CNN's original semantic segmentation mask $\boldsymbol{m}\!\left( \boldsymbol{x} \right)$ by replacing a predefined objective class $o = m_i\!\left( \boldsymbol{x} \right)\in\mathcal{S}$ at each corresponding pixel position $i\in\mathcal{I}_o\!\left( \boldsymbol{x} \right)\subset\mathcal{I}$ by the spatial nearest-neighbor class $n_i\!\left( \boldsymbol{x} \right) = m_j\!\left( \boldsymbol{x} \right) \in\mathcal{S}$, with $m_j\!\left( \boldsymbol{x} \right)\ne m_i\!\left( \boldsymbol{x} \right)$. Here, $\mathcal{I}_o\!\left( \boldsymbol{x} \right)$ is the set of all pixel positions where $m_i\!\left( \boldsymbol{x} \right)=o$ holds. By completely removing the objective class $o$ from the semantic segmentation mask, we obtain
\begin{equation}
m_i^{\text{DNNM}}\!\left( \boldsymbol{x} \right) =\begin{cases}
m_i\!\left( \boldsymbol{x} \right), & \text{if $m_i\!\left( \boldsymbol{x} \right)\ne o$}, \\
n_i\!\left( \boldsymbol{x} \right), & \text{otherwise},
\end{cases}
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{./fig/fig4}
\caption{Adversarial attacks on the ICNet using the iterative or the non-iterative least-likely class method (LLCM) from Kurakin et al.~\cite{Kurakin2017a} on the Cityscapes validation set with different values for $\epsilon$, an upper bound of the $l_\infty$-norm of the adversarial perturbation $\boldsymbol{r}$. We set $\lambda=\epsilon$ for the non-iterative LLCM and $\lambda=1$ for the iterative LLCM. A lower mIoU ratio $Q$ means a stronger adversarial attack. Note that the non-iterative LLCM appears to be even more aggressive than the iterative LLCM.}\vspace{-0.4cm}
\label{fig:adv_llm_curves}
\end{figure}
\ifthenelse{\equal{1}{0}}{}{
\begin{figure*}[t!]
\includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5a_1} \vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5b_1} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5c_1} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5d_1} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5a_2}\vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5b_2} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5c_2} \includegraphics[width=0.245\textwidth, height=0.125\textwidth] {./fig/fig5d_2} \begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]} {\small (a)} & (b) & (c) & (d) \end{tabu} \caption[toc entry]{Adversarial attacks on the ICNet using the dynamic nearest neighbor method (DNNM) \cite{Metzen2017} on \protect\ifthenelse{\equal{0}{1}}{\textcolor{red}{ two example images, one with pedestrians and one with cars, from }} {two example images, one with pedestrians and one with cars, from }the Cityscapes validation set. The adversarial examples aim at removing pedestrians (first row) and cars (second row) from the scene. (a) clean input image, (b) semantic segmentation output on clean input image, (c) adversarial example created by DNNM, and (d) semantic segmentation output on adversarial example created by DNNM.}\vspace{-0.4cm} \label{fig:adv_metzen_attack} \end{figure*}} with $m^{\text{DNNM}}_i\!\left( \boldsymbol{x} \right)$ being the new target class at pixel position $i\in\mathcal{I}$. Metzen et al.~suggested to use the Euclidean distance of two pixel positions $i$ and $j$ in order to find the nearest-neighbor class \ifthenelse{\equal{0}{1}}{\textcolor{red}{satisfying}}{satisfying} $n_i\!\left( \boldsymbol{x}\right) = m_j\!\left( \boldsymbol{x} \right)\ne m_i\!\left( \boldsymbol{x} \right)=o$. In contrast to SSMM, the created realistically looking fake semantic segmentation mask using this method is now unique for each real semantic segmentation output. Additionally, specific properties, such as the correlation between two consecutive real semantic segmentation outputs, are transferred to the created fake ones. Altogether, a possible attacker is able to create a sequence of correlated realistically looking fake semantic segmentation masks making this kind of attack suitable for situations, where the respective AD vehicle moves. Due to these properties we call this method dynamic nearest neighbor method (DNNM). The application of DNNM has the potential to create safety-relevant perception errors for AD. This can be seen in Fig.~\ref{fig:adv_metzen_attack}, where DNNM is used to remove pedestrians or cars from the scene. The adversarial examples were created by setting $\epsilon=10$ followed by the same procedure as with iterative LLCM. Astonishingly, the semantic classes different from the objective class are completely preserved and the nearest neighbor class seems to be a good estimate for the regions occluded by the objective class, thereby dangerously providing a plausible but wrong semantic segmentation mask. \subsection{Universal Adversarial Perturbations} So far, we \ifthenelse{\equal{0}{1}}{\textcolor{red}{discussed}}{discussed} approaches that generate adversarial perturbations for single input images. In reality, however, it is hard for a possible attacker to generate adversarial examples for each incoming image of an environment perception system, considering a camera running at 20 fps. 
Therefore, in AD applications a special interest lies in single adversarial perturbations being capable of fooling a CNN on a set of input images, e.g., a video sequence. This class of adversarial perturbations is called universal adversarial perturbation (UAP). One of the first works towards finding UAPs was done by Moosavi-Dezfooli et al.~\cite{Moosavi-Dezfooli2017}. Their idea was to find a UAP $\boldsymbol{r}_\text{uni}$ that fools almost all images in some image set $\mathcal{T}$ in an image classification task (again, only one class per image). To achieve this, they used the DeepFool algorithm in an iterative fashion to solve the optimization problem \newcommand\mystrut{\rule{0pt}{7.5pt}} \begin{equation}\label{eq:moos_uni} s^*\!\left( \boldsymbol{x} + \boldsymbol{r}_\text{uni} \right) \ne s^*\!\left( \boldsymbol{x} \right) \quad \forall \boldsymbol{x}\in \mathcal{T}^{\prime} \subset\mathcal{T}, \end{equation} with the subset of respective images $\mathcal{T}\mystrut^{\prime}$ for which the CNN is fooled, and the set of all respective images $\mathcal{T}$ the UAP is optimized on. The UAP is again constrained by \begin{equation}\label{eq:moos_uni_2} ||\boldsymbol{r}_\text{uni}||_p \le \epsilon, \end{equation} with $|| \cdot ||_p$ being the $l_p$-norm of $\boldsymbol{r}_\text{uni}$, and $\epsilon$ being its upper bound. In their experiments, Moosavi-Dezfooli et al.~obtained the best results setting $p=\infty$ and $\epsilon=10$. Different from all the attacks shown before, the UAP optimized on $\mathcal{T}$ generalizes well, meaning the UAP can even fool a respective system on a disjoint set of images $\mathcal{V}$, with $\mathcal{V}\cap\mathcal{T}=\emptyset$, on which the UAP was not optimized on. \ifthenelse{\equal{1}{0}}{} {\begin{figure*}[t!] \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6a_1} \vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6b_1} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6c_1} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6d_1} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6a_2} \vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6b_2} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6c_2} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6d_2} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6a_3} \vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6b_3} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6c_3} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6d_3} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6a_4} \vspace{0.1cm} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6b_4} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6c_4} \includegraphics[width=0.245\textwidth, height=0.125\textwidth]{./fig/fig6d_4} \begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]} (a) & (b) & (c) & (d) \end{tabu} \caption[toc entry]{Adversarial attacks on the ICNet using a (single) universal adversarial perturbation $\boldsymbol{r}_\text{uni}$ created by Fast Feature Fool (FFF) \cite{Mopuri2018}. 
We show its effectiveness in fooling the ICNet on four example images with cars or pedestrians from the Cityscapes validation set. Each row corresponds to one attack scenario; (a) clean input image, (b) semantic segmentation of clean input image, (c) adversarial example created by FFF, and (d) semantic segmentation of adversarial example created by FFF.}\vspace{-0.4cm}
\label{fig:adv_mopuri_attack}
\end{figure*}}
While Moosavi-Dezfooli et al.~use samples from a set of images $\mathcal{T}$ to craft UAPs, Mopuri et al.~\cite{Mopuri2017} introduced a dataset-independent method named Fast Feature Fool (FFF). In the following, we consider the formulation of FFF from their extended work \cite{Mopuri2018}. Adopting the overall objective in (\ref{eq:moos_uni}), FFF aims at finding a UAP that increases the mean activation in each layer $\ell\in\mathcal{L}$, without any knowledge about the respective images $\boldsymbol{x}\in\mathcal{T}\mystrut^{\prime}$ to fool. This is done by minimizing the loss function
\begin{equation}
J\!\left( \boldsymbol{r}_\text{uni} \right) = -\text{log}\!\left( \prod_{\ell\in\mathcal{L}} || \boldsymbol{f}_\ell\!\left( \boldsymbol{r}_\text{uni} \right) ||_2 \right),
\end{equation}
with respect to $\boldsymbol{r}_\text{uni}$ (as 1st layer input image), constrained by (\ref{eq:moos_uni_2}) with $p=\infty$. We used FFF on the ICNet to show the effectiveness and transferability of UAPs on several images taken from the Cityscapes validation set by following Mopuri et al.~in choosing $\epsilon=10$. The obtained results for some images are illustrated in Fig.~\ref{fig:adv_mopuri_attack}. While not generating realistically looking semantic segmentation masks as DNNM does, FFF still completely fools the ICNet on several diverse images and needs to be computed only once to obtain $\boldsymbol{r}_\text{uni}$. Moreover, safety-critical classes such as pedestrians and cars are removed from the scene in all examples, underlining again the risk of adversarial attacks for AD. Note that the particular danger of this method for AD lies in the fact that it just requires a generic adversarial pattern to be added to any unknown sensorial data $\left(\boldsymbol{x} + \boldsymbol{r}_\text{uni} \right)$ during driving, causing major errors in the output segmentation mask.
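To illustrate the data-free character of this objective, the following sketch outlines a simple projected gradient descent on the FFF loss; \texttt{fff\_loss\_grad} is an assumed helper returning $\nabla_{\boldsymbol{r}} J\!\left(\boldsymbol{r}_\text{uni}\right)$ for the network at hand (e.g., obtained via automatic differentiation), and the optimization schedule is ours, not the one of Mopuri et al.
\begin{verbatim}
import numpy as np

def fast_feature_fool(shape, fff_loss_grad, eps=10.0, lr=0.1, num_iter=400):
    """Data-free UAP sketch: minimize J(r) = -log(prod_l ||f_l(r)||_2),
    i.e., maximize mean layer activations, while keeping ||r||_inf <= eps."""
    r_uni = np.random.uniform(-eps, eps, size=shape)  # random initialization
    for _ in range(num_iter):
        grad = fff_loss_grad(r_uni)        # dJ/dr from the DNN
        r_uni = r_uni - lr * grad          # gradient descent step on J
        r_uni = np.clip(r_uni, -eps, eps)  # project back onto the eps-ball
    return r_uni
\end{verbatim}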
\section{Adversarial Defense}
So far, we demonstrated that DNNs can be fooled in many different ways by means of almost imperceptible modifications of the input image. This behavior of DNNs puts challenges to their application within environment perception in AD. Therefore, appropriate adversarial defense strategies are needed to decrease the risk of DNNs being completely fooled by adversarial examples. In this section, some adversarial defense strategies are presented that have been hypothesized and developed to defend against adversarial attacks. In general, adversarial defense strategies can be distinguished as being specific or agnostic to a model at hand. In the following, we will provide a brief introduction to model-specific defense techniques, but then we will focus on model-agnostic ones.
\subsection{Model-Specific Defense Techniques}
Model-specific defense techniques aim at modifying the behavior of a specific DNN in a way that the respective DNN becomes more robust towards adversarial examples. Note that such a technique can most often be applied to numerous DNN topologies; however, once applied, it always defends only the specific DNN at hand. One well-known and intuitive model-specific defense technique is \textit{adversarial training}. In adversarial training, the original training samples of the DNN are extended with their adversarial counterparts, e.g., created by FGSM from Goodfellow et al.~as shown before, and the DNN is then retrained with this set of clean and adversarially perturbed images. Whereas the performance of the DNN on adversarial examples increases (\cite{Goodfellow2015, Moosavi-Dezfooli2017}), the effect is still marginal \cite{Moosavi-Dezfooli2016}. More importantly, it is also not clear which amount or type of adversarial examples is sufficient to increase the DNN's robustness up to a desired level. Xie et al.~\cite{Xie2019} investigated the effect of adversarial examples on the feature maps in several layers. Their observation was that adversarial examples create noise-like patterns in the feature maps. To counter this, they proposed to add trainable denoising layers containing a denoising operation followed by a convolution operation. Xie et al.~obtained the best results by using the non-local means algorithm (NLM)~\cite{Buades2005} for feature denoising. Bär et al.~\cite{Baer2019} explored the effectiveness of teacher-student approaches in defending against adversarial attacks. Here, an additional student DNN is included to increase the robustness against adversarial attacks, assuming that the potential attacker has a hard time dealing with a constantly adapting student DNN. It was concluded that in combination with simple output voting schemes this approach could be a promising model-specific defense technique. Nevertheless, a major drawback of model-specific defense techniques is that the respective DNN has to be retrained, or one has to modify the network architecture, which is not always possible when using pre-trained DNNs.
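Returning to adversarial training mentioned above, the following minimal sketch illustrates the basic idea of extending the training data with single-step FGSM counterparts; \texttt{train\_step} and \texttt{loss\_grad} are assumed placeholders for the training framework at hand and are not taken from the cited works.
\begin{verbatim}
import numpy as np

def adversarial_training_epoch(images, labels, train_step, loss_grad,
                               eps=8.0, lam=1.0):
    """One epoch of adversarial training (sketch): each clean sample is
    paired with an FGSM-perturbed counterpart before the parameter update."""
    for x, y in zip(images, labels):
        g = loss_grad(x, y)                            # gradient of the training loss
        x_adv = np.clip(x + lam * np.sign(g), 0, 255)  # single-step FGSM example
        x_adv = np.clip(x_adv, x - eps, x + eps)
        train_step(x, y)                               # update on the clean sample
        train_step(x_adv, y)                           # update on the adversarial sample
\end{verbatim}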
\subsection{Model-Agnostic Defense Techniques}
In contrast to model-specific defense techniques, model-agnostic defense techniques, once developed, can be applied in conjunction with any model, as they do not modify the model itself but rather the input data. In particular, the model does not need to be retrained. Hence, they serve as an image pre-processing, where the adversary is removed from the input image.
\begin{figure*}[t!]
\begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]X[c]}
{\fontsize{10}{12}\selectfont Clean output} & {\fontsize{10}{12}\selectfont DNNM attack ...} & {\fontsize{10}{12}\selectfont ... defended by NLM} & {\fontsize{10}{12}\selectfont ... by IQ} & {\fontsize{10}{12}\selectfont ... by NLM+IQ}
\end{tabu}
\includegraphics[width=\textwidth]{./fig/fig7.pdf}\vspace{0.1cm}
\begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]X[c]}
avg. mIoU $= 67.3$ \% & $... = 57.6$ \% & $... = 60.2$ \% & $... = 61.2$ \% & $... = 63.3$ \%\\
(a) & (b) & (c) & (d) & (e)
\end{tabu}
\caption[toc entry]{Adversarial attacks on the ICNet using the dynamic nearest neighbor method (DNNM) \cite{Metzen2017}, defended by image quilting (IQ) \cite{Guo2018} and the non-local means algorithm (NLM) \cite{Buades2005}. Both image rows correspond to the examples shown in Fig.~\ref{fig:adv_metzen_attack}. The first row contains an example where DNNM was used to remove pedestrians from the scene, while the second row contains an example where DNNM was used to remove cars instead; (a) clean output, (b) adversarial output using DNNM, (c) adversarial output using DNNM defended by NLM, (d) adversarial output using DNNM defended by IQ, and (e) adversarial output using DNNM defended by NLM and IQ combined. The mIoU values in the bottom line refer to the \textit{average} mIoU over the entire Cityscapes validation set.}\vspace{-0.4cm}
\label{fig:adv_metzen_defense}
\end{figure*}
Guo et al.~\cite{Guo2018} analyzed the effectiveness of non-differentiable input transformations in destroying adversarial examples. Non-differentiability is an important property of adversarial defense strategies, considering that the majority of adversarial attacks is built on gradient-based optimization. Guo et al.~used image quilting (IQ) amongst some other input transformation techniques and observed IQ to be an effective way of performing model-agnostic defense against several adversarial attacks. IQ is a technique wherein the input image $\boldsymbol{x}$ is viewed as a puzzle of small patches $\boldsymbol{x}_{\mathcal{I}_i}$, with $i$ being the position of the center pixel. To remove potential adversaries from an image, each of its patches $\boldsymbol{x}_{\mathcal{I}_i}$, irrelevant of being adversarially perturbed or not, is replaced by a nearest neighbor patch $\hat{\boldsymbol{x}}_{\mathcal{I}_i}\in\mathcal{P}\subset\mathbb{G}^{h\times w\times C}$ to obtain a quilted image $\boldsymbol{x}^\text{IQ}$, with $\mathcal{P}$ being a large set of patches created beforehand from random samples of clean images. The aim is to synthetically construct an adversary-free image having the original semantic content.
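As a rough, simplified illustration of the quilting idea (not the exact procedure of Guo et al., who additionally work with overlapping patches and randomized neighbor selection), the following sketch replaces each non-overlapping patch by its nearest neighbor from a pre-computed patch set; \texttt{patch\_set} is our own placeholder for $\mathcal{P}$.
\begin{verbatim}
import numpy as np

def image_quilting(x, patch_set, patch=5):
    """Replace every (non-overlapping) patch of x by its nearest neighbor
    from patch_set of shape (K, patch, patch, C), a simplified sketch.
    A brute-force search is used; real implementations rely on
    approximate nearest-neighbor indices for speed."""
    x_iq = x.astype(np.float64).copy()
    flat_set = patch_set.reshape(len(patch_set), -1).astype(np.float64)
    H, W = x.shape[:2]
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = x_iq[i:i+patch, j:j+patch].reshape(1, -1)
            d = np.sum((flat_set - p) ** 2, axis=1)    # squared distances to P
            x_iq[i:i+patch, j:j+patch] = patch_set[np.argmin(d)]
    return x_iq
\end{verbatim}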
\begin{figure*}[t!]
\begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]X[c]}
{\fontsize{10}{12}\selectfont Clean output} & {\fontsize{10}{12}\selectfont FFF attack ...} & {\fontsize{10}{12}\selectfont ... defended by NLM} & {\fontsize{10}{12}\selectfont ... by IQ} & {\fontsize{10}{12}\selectfont ... by NLM+IQ}
\end{tabu}
\includegraphics[width=\textwidth]{./fig/fig8.pdf}\vspace{0.1cm}
\begin{tabu} to \textwidth {X[c]X[c]X[c]X[c]X[c]}
avg. mIoU $= 67.3$ \% & $... = 4.6$ \% & $... = 25.7$ \% & $... = 18.8$ \% & $... = 46.0$ \%\\
(a) & (b) & (c) & (d) & (e)
\end{tabu}
\caption[toc entry]{Adversarial attacks on the ICNet using Fast Feature Fool (FFF) \cite{Mopuri2018}, defended by image quilting (IQ) \cite{Guo2018} and the non-local means algorithm (NLM) \cite{Buades2005}. We show results on the four semantic segmentation outputs from Fig.~\ref{fig:adv_mopuri_attack} using the Cityscapes validation set. Each row corresponds to an attack scenario to be defended by NLM, IQ, or a combination of both; (a) clean output, (b) adversarial output using FFF, (c) adversarial output using FFF defended by NLM, (d) adversarial output using FFF defended by IQ, and (e) adversarial output using FFF defended by NLM and IQ combined. The mIoU values in the bottom line refer to the \textit{average} mIoU over the entire Cityscapes validation set.}\vspace{-0.4cm}
\label{fig:adv_mopuri_defense}
\end{figure*}
Another model-agnostic defense technique is the non-local means algorithm (NLM) from Buades et al.~\cite{Buades2005}. NLM aims at denoising the input image. To accomplish this, NLM replaces each pixel value $x_i$ by
\begin{equation}\label{eq:NLM_1}
x^\text{NLM}_i =\sum_{j\in\mathcal{I}} w_{i,j}\, x_j,
\end{equation}
with the NLM-denoised pixel $x^\text{NLM}_i$, the inter-pixel weighting factor $w_{i,j}\in [0, 1]$, for which $\sum_{j\in\mathcal{I}} w_{i,j} = 1$ holds, and the pixel value $x_j$ at position $j$. The inter-pixel weighting factor $w_{i,j}$ relates the respective pixel $x_i$ at pixel position $i$ to the pixel $x_j$ at pixel position $j$. It is defined by
\begin{equation}
w_{i,j} = \frac{1}{\alpha_i} \,\text{exp} \left( -\frac{||\boldsymbol{x}_{\mathcal{I}_i} - \boldsymbol{x}_{\mathcal{I}_j} ||^2_{2,a}}{h^2}\right),
\end{equation}
with the patches $\boldsymbol{x}_{\mathcal{I}_i}$ and $\boldsymbol{x}_{\mathcal{I}_j}$ centered at pixel positions $i$ and $j$, the squared Gaussian-weighted Euclidean distance $||\cdot ||^2_{2, a}$, with $a>0$ as the standard deviation of the Gaussian kernel, the hyperparameter for the degree of filtering $h$, and the normalizing factor $\alpha_i$. By incorporating the squared Gaussian-weighted Euclidean distance, a large weight is put on pixels $x_j$ whose neighborhood $\boldsymbol{x}_{\mathcal{I}_j}$ looks similar to $\boldsymbol{x}_{\mathcal{I}_i}$ (the neighborhood of the respective pixel $x_i$ to be denoised). The idea behind NLM is to remove the high local dependency of adversarial perturbations. Nevertheless, applying NLM to the complete input image, as stated in (\ref{eq:NLM_1}), can be computationally demanding. Thus, the search window is often reduced to an image region $\mathcal{R}_i\subset\mathcal{I}$ of size $|\mathcal{R}_i|=R\times R$. Note that $\mathcal{I}_i\subset\mathcal{R}_i$, with $|\mathcal{I}_i|=|\mathcal{I}_j|<|\mathcal{R}_i|$.
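For illustration, a naive grayscale implementation of the restricted-window NLM described above could look as follows; the Gaussian patch weighting is omitted for brevity, i.e., a plain mean squared patch distance is used.
\begin{verbatim}
import numpy as np

def nlm_denoise(x, patch=3, window=4, h=10.0):
    """Naive non-local means for a grayscale image x (2D array).
    patch/window are half-sizes of the similarity patch and search region."""
    H, W = x.shape
    pad = patch + window
    xp = np.pad(x.astype(np.float64), pad, mode="reflect")
    out = np.zeros_like(xp)
    for i in range(pad, pad + H):
        for j in range(pad, pad + W):
            ref = xp[i-patch:i+patch+1, j-patch:j+patch+1]
            weights, values = [], []
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    cand = xp[i+di-patch:i+di+patch+1, j+dj-patch:j+dj+patch+1]
                    d2 = np.mean((ref - cand) ** 2)      # patch distance
                    weights.append(np.exp(-d2 / h**2))
                    values.append(xp[i + di, j + dj])
            w = np.array(weights) / np.sum(weights)      # normalization (alpha_i)
            out[i, j] = np.dot(w, values)
    return out[pad:pad + H, pad:pad + W]
\end{verbatim}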
Now let us look into the results of both model-agnostic defense methods, IQ and NLM, on the adversarial examples shown before. For IQ, the patch dataset $\mathcal{P}$ was created using samples from the Cityscapes training set. Here, we followed Guo et al.~and collected $|\mathcal{P}|=1,000,000$ patches of size $5\times 5$ pixels in total. Increasing the size of the patch dataset will lead to better approximations of the patches, but on the other hand also increases the search space. The same holds when decreasing the size of the patches down to a certain level. For NLM, patches $\mathcal{I}_i$ and $\mathcal{I}_j$ of size $7\times 7$ were used, and the search region for neighboring pixels was restricted according to $|\mathcal{R}_i|=9\times9$ to keep an adequate algorithm complexity. The degree of filtering $h$ was computed by $h=2.15\,\tilde{\sigma}\!\left( \boldsymbol{x}\right)$, with $\tilde{\sigma}\!\left( \boldsymbol{x}\right)$ being an estimate of the Gaussian noise standard deviation of the input image $\boldsymbol{x}$. Using these settings, we tested IQ, NLM, as well as a combined version of both, denoted as NLM+IQ, on the adversarial attacks shown in Section \ref{sec:adversarial_attacks} (see Fig.~\ref{fig:adv_metzen_attack} and Fig.~\ref{fig:adv_mopuri_attack}). It is important to note that we applied both defense methods without any extensive hyperparameter search. The adversarial defenses on \textit{DNNM-attacked images} are depicted in Fig.~\ref{fig:adv_metzen_defense}. From left to right, the original semantic segmentation mask is reconstructed better and better, with the combination of NLM and IQ showing the best results (Fig.~\ref{fig:adv_metzen_defense}\,(e)). Comparing NLM and IQ separately, it can be seen that IQ is able to reconstruct the original semantic segmentation mask even more precisely.
The same behavior can be observed when looking at the mIoU values in Fig.~\ref{fig:adv_metzen_defense}, where we report averages over the entire Cityscapes validation set. Altogether, the results show that by combining NLM with IQ one can largely mitigate the destructiveness of DNNM, which is an important and relieving observation. The adversarial defenses on \textit{FFF-attacked images} are illustrated and supported by the corresponding \textit{average} mIoU values on the Cityscapes validation set in Fig.~\ref{fig:adv_mopuri_defense}. Here, it is not trivial to judge by only looking at the images which defense is superior, IQ or NLM. In some cases, NLM seems to lead to the better results, whereas in other cases IQ seems to outperform NLM. Yet, looking at the average mIoU values for the entire Cityscapes validation set leads to the conclusion that overall NLM is superior to IQ. Moreover, combining NLM with IQ again shows the best results, leading to an overall significant improvement in the restoration of the segmentation masks. This observation is both extremely important and relieving, as the existence of UAPs is particularly dangerous for the use case of DNNs in AD. Even though we observe a certain level of effectiveness in using model-agnostic defense methods, there is still room left for improvement in defending against adversarial attacks. The works of Carlini and Wagner \cite{Carlini2017} and Athalye et al.~\cite{Athalye2018} are just two of many representative examples.
Carlini and Wagner bypassed several state-of-the-art detection systems for adversarial examples with their approach, whereas Athalye et al.~circumvented the non-differentiability property of some state-of-the-art defenses by different gradient approximation methods.
\section{Summary and Future Directions}
Deep neural networks (DNNs) are one of the most promising technologies for the use case of environment perception in autonomous driving (AD). Assuming the environment perception system consists of several camera sensors, a DNN trained for semantic segmentation can be used to perform extensive environment sensing in real time. Nevertheless, today's state-of-the-art DNNs still unveil flaws when fed with specifically crafted inputs, denoted as adversarial examples. It was demonstrated step by step that it is quite easy and intuitive to craft adversarial examples for individual input images using the least-likely class method (LLCM) or the dynamic nearest neighbor method (DNNM) by simply performing gradient updates on the clean input image. It is even possible to craft adversarial examples to fool not only one but a set of images using the Fast Feature Fool (FFF) method, without any knowledge of the respective input image to be perturbed. This in turn highlights the importance of appropriate defense strategies. From a safety-concerned perspective, the lack of robustness shown by DNNs is a highly relevant and important challenge to deal with before AD vehicles are released for public use. DNNs' lack of robustness evoked the need for defense strategies and other fallback strategies regarding the safety relevance for AD applications. Model-agnostic defense strategies only modify the potentially perturbed input image to decrease the effect of adversarial attacks. This way, an already pretrained DNN can be used without the need of retraining or modifying the DNN itself. We explored two model-agnostic defense strategies, namely image quilting (IQ) and the non-local means algorithm (NLM), both on DNNM and FFF attacks, where the combination of IQ and NLM shows the best results on almost all images. Nevertheless, although clearly robustifying the DNNs towards adversarial attacks, the current state of research in model-agnostic defense strategies also showed that the vulnerability of DNNs is not entirely solved yet. However, ensembles of model-agnostic defenses could be promising for tackling adversarial attacks, as well as intelligent redundancy, e.g., by teacher-student approaches. We would also like to point out that certification methods (\cite{Dvijotham2018, Wu2018}) should be further investigated to really obtain provable robustness. What does this mean regarding the application of DNNs for AD? Are today's DNNs not suitable for safety-critical applications in AD? We would argue that this is to some extent true, if we only consider applying model-agnostic defenses without certification. DNN training and DNN understandability are two highly dynamic academic fields of research.
Research so far mainly focused on increasing the performance of DNNs, widely neglecting their robustness and certification. In order to develop employable machine learning-based functions that are realistically usable in a real-world setting, it is extremely important to establish their robustness against slight input alterations in addition to improving the task performance. Furthermore, new mature defense and certification strategies are needed, including fusion approaches, redundancy concepts, and modern fallback strategies. We especially recommend automotive companies to focus on certification of DNNs. Otherwise, doors would open for potentially fatal attacks which in turn would have consequences on the public acceptance of AD.
\section*{Acknowledgement}
The authors gratefully acknowledge support of this work by Volkswagen Group Automation, Wolfsburg, Germany, and would like to thank Nico M. Schmidt and Zeyun Zhong for their help in setting up final experiments.
\section*{Authors}
\textbf{Andreas Bär} (andreas.baer@tu-bs.de) received his B.Eng. degree from Ostfalia University of Applied Sciences, Wolfenbüttel, Germany, in 2016, and his M.Sc. degree from Technische Universität Braunschweig, Braunschweig, Germany, in 2018, where he is currently a Ph.D. degree candidate in the Faculty of Electrical Engineering, Information Technology, and Physics. His research interests include convolutional neural networks for camera-based environment perception and the robustness of neural networks to adversarial attacks. In 2020, he won the Best Paper Award at the Workshop on Safe Artificial Intelligence for Automated Driving, held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition, along with coauthors Serin John Varghese, Fabian Hüger, Peter Schlicht, and Tim Fingscheidt.
\textbf{Jonas Löhdefink} (j.loehdefink@tu-bs.de) received his B.Eng. degree from Ostfalia University of Applied Sciences, Wolfenbüttel, Germany, in 2015, and his M.Sc. degree from Technische Universität Braunschweig, Braunschweig, Germany, in 2018, where he is currently a Ph.D. degree candidate in the Faculty of Electrical Engineering, Information Technology, and Physics. His research interests include learned image compression and quantization approaches by means of convolutional neural networks and generative adversarial networks.
\textbf{Nikhil Kapoor} (nikhil.kapoor@volkswagen.de) received his B.Eng. degree from the Army Institute of Technology, Pune, India, in 2012, and his M.Sc. degree from RWTH Aachen University, Germany, in 2018. Currently, he is a Ph.D. degree candidate at Technische Universität Braunschweig, Braunschweig, Germany, in cooperation with Volkswagen Group Research. His research focuses on training strategies that range from improving the robustness of neural networks for camera-based perception tasks to augmentations and adversarial perturbations using concept-based learning.
\textbf{Serin John Varghese} (john.serin.varghese@volkswagen.de) received his B.Eng. degree from the University of Pune, India, in 2013, and his M.Sc.
degree from Technische Universität Chemnitz, Germany, in 2018. Currently, he is a Ph.D. degree candidate at Technische Universität Braunschweig, Braunschweig, Germany, in cooperation with Volkswagen Group Research. His research is focused on compression techniques for convolutional neural networks used for perception modules in automated driving, with a focus on not only inference times but also maintaining, and even improving, the robustness of neural networks.
\textbf{Fabian Hüger} (fabian.hueger@volkswagen.de) received his M.Sc. degree in electrical and computer engineering from the University of California, Santa Barbara, as a Fulbright scholar in 2009. He received his Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from the University of Kassel, Germany, in 2010 and 2014, respectively. He joined Volkswagen Group Research, Germany, in 2010, and his current research is focused on safe and efficient use of artificial intelligence for autonomous driving.
\textbf{Peter Schlicht} (peter.schlicht@volkswagen.de) received his Ph.D. degree in mathematics from the University of Leipzig, Germany. After a two-year research stay at the École Polytechnique Fédérale de Lausanne, Switzerland, he joined Volkswagen Group Research, Wolfsburg, Germany, in 2016 as an artificial intelligence (AI) architect. There he deals with research questions on AI technologies for automatic driving. His research interests include methods for monitoring, explaining, and robustifying deep neural networks as well as securing them.
\textbf{Tim Fingscheidt} (t.fingscheidt@tu-bs.de) received his Dipl.-Ing. and Ph.D. degrees in electrical engineering, both from RWTH Aachen University, Germany, in 1993 and 1998, respectively. Since 2006, he has been a full professor with the Institute for Communications Technology, Technische Universität Braunschweig, Braunschweig, Germany. He received the Vodafone Mobile Communications Foundation prize in 1999 and the 2002 prize of the Information Technology branch of the Association of German Electrical Engineers (VDE ITG). In 2017, he coauthored the ITG award-winning publication, “Turbo Automatic Speech Recognition.” He has been the speaker of the Speech Acoustics Committee ITG AT3 since 2015. He served as an associate editor of IEEE Transactions on Audio, Speech, and Language Processing (2008–2010) and was a member of the IEEE Speech and Language Processing Technical Committee (2011–2018). His research interests include speech technology and vision for autonomous driving. He is a Senior Member of IEEE.
{\small
\bibliographystyle{ieee}
\section{Introduction} The idea of percolation was first conceived by Flory in 1941 in the context of the gelation transition \cite{ref.Flory}. However, percolation as a mathematical model was first formulated by Broadbent and Hammersley in 1957 to understand the motion of gas molecules through the maze of pores in carbon granules filling a gas mask \cite{ref.Broadbent}. Since then, it has remained one of the most studied topics in statistical physics. To study percolation one first has to choose a skeleton, an empty lattice or a graph, which has two entities, namely sites or nodes and bonds or links. One of these entities, depending on whether the percolation is of bond or site type, is occupied with probability $p$ independently of the state of its neighbors \citep{ref.Stauffer}. As the occupation probability $p$ is tuned upward from $p=0$, clusters, i.e., sets of contiguous occupied sites, gradually form, merge and grow. Remarkably, in the process there appears a cluster that spans the entire linear size of the lattice at a certain non-trivial threshold value $p_c$. When it happens, it happens so abruptly that many observable quantities diverge at $p_c$. This is reminiscent of a continuous thermal phase transition, where physical properties like the susceptibility, the specific heat, etc.\ diverge in a similar fashion \cite{ref.Stanley}. Phase transitions are classified according to how an order parameter (OP), a quantity which is zero in one phase and non-zero in the other, varies in the immediate vicinity of the critical point. For instance, phase transitions are called discontinuous (or first order) if the OP itself is discontinuous at $p_c$, and they are called continuous (or second order) if the OP is continuous across the whole range of $p$. The percolation transition is well known as a paradigmatic model of a second-order phase transition since the OP, the relative size of the spanning cluster $P$, grows from zero at $p_c$ following a power law $P\sim (p-p_c)^\beta$, which is exactly how the magnetization behaves in the ferromagnetic transition. Insights into percolation theory therefore facilitate the understanding of phase transitions and critical phenomena, one of the most elegant fields of research in statistical and condensed matter physics \cite{ref.Schwabl}. In 2009 Achlioptas {\it et al.} proposed a biased occupation rule, known as the Achlioptas process (AP), that encourages slower growth of the larger clusters and faster growth of the smaller clusters instead of the random occupation of classical percolation \cite{ref.Achlioptas}. According to this rule, a pair of bonds is first picked uniformly at random from all possible distinct links. However, of the two, only the one that satisfies the pre-selected rule is finally occupied, and the other one is discarded. The preset rule is usually chosen so that it discourages the growth of the larger clusters and encourages the growth of the smaller clusters. As a result, the percolation threshold is delayed, and hence the corresponding $p_c$ is always higher than in the case where only one bond is selected at a time. Furthermore, it is natural to expect that close to $p_c$ the nearly equal-sized clusters, waiting to merge, are so great in number that the occupation of a few bonds results in an abrupt global connection; hence the name ``Explosive Percolation'' (EP). Through their seminal paper Achlioptas {\it et al.} claimed for the first time that EP can describe a first-order phase transition (see the recent reviews in \cite{ref.saberi, ref.review}).
Their results jolted the scientific community and triggered a series of claims and counter-claims \cite{ref.Ziff_1, ref.Ziff_2, ref.Radicchi, ref.Souza, ref.Cho_1, ref.Cho_2, ref.Raissa, ref.Hayasaka, ref.Costa_1, ref.Costa_2}. However, recent studies of the percolation transition under the original AP rule and its various variants suggest that the transition is actually continuous in character \cite{ref.Riordan, ref.Grassberger}. Moreover, there are also claims that, albeit continuous, the transition exhibits some unusual behaviors \cite{ref.Ziff_2, ref.Cho_2, ref.Grassberger, ref.Tian, ref.Bastas}. For instance, the critical exponent $\beta$ of the order parameter is so small in comparison with that of random percolation that it can easily be mistaken for zero, which can lead to the conclusion that the OP suffers a jump \cite{ref.da_Costa}. The EP model was first implemented on the Erd\H{o}s--R\'enyi (ER) network. The idea was then extended to other planar lattices and to scale-free networks \cite{ref.Ziff_1, ref.Choi}. As many variants of the EP model were introduced, it became more apparent that EP actually describes a continuous phase transition. Recently, it has been further generalized by picking a fixed number $m\geq 2$ of candidate bonds at each step instead of a pair of bonds only. It has been claimed that the AP in the limit $m\rightarrow \infty$ on a lattice can still yield a discontinuous percolation transition at $p_c$ \cite{ref.Qian}. It is noteworthy to mention that the extent of connectivity, which depends on how far the state of the system is from $p_c$, is highly important in many systems. There are systems where large-scale connectivity is desired, and there are systems where it can be a liability too. For instance, in the case of a virus spreading on a social or computer network, a higher $p_c$ is desired so that even if $p$ is high the spread of the virus can still be contained in small, isolated clusters. However, in the case of a communication network, a smaller $p_c$ is desired so that the system can have large-scale connectivity even at small $p$. Note that the smaller the $p_c$, the better the connectivity even at small $p$. The flexibility in controlling the location of the percolation threshold $p_c$ can therefore be of great interest. One of the advantages of the EP model is that we can either raise or lower the $p_c$ value simply by inverting the condition of the AP rule. We can also tune $p_c$ by using various variants of the AP rule. Besides, finding the critical exponents of the EP model is also of significant interest, since most of the studies on EP have primarily been focused on resolving the debate on whether it describes a continuous or a discontinuous transition. This is in sharp contrast to random percolation (RP), for which we know the critical exponents for a wide range of regular and random lattices (see Refs. \cite{ref.Wiki, ref.Hsu} and references therein). One of the extraordinary findings of RP is that the critical exponents are found to be universal in the sense that they depend only on the dimension of the lattice. That is, regardless of whether the skeleton is a square, triangular or honeycomb lattice, as long as it is planar in the sense that its dimension coincides with the dimension of the space where it is embedded, it will share the same critical exponents regardless of whether the percolation is of bond or site type.
However, recently we performed site and bond RP on a multifractal scale-free weighted planar stochastic lattice (WPSL) and found an exception for the first time \cite{ref.Hassan_Rahman_1,ref.Hassan_Rahman_2}. To be precise, we found that both site and bond percolation on the WPSL belong to the same universality class, which is different from the universality class to which all the known planar lattices belong. It is noteworthy to mention that there have not been enough efforts to classify EP into universality classes, since most studies on EP focused on resolving the issue of whether it describes a continuous or a discontinuous phase transition. Having just overcome that transient phase, it is now time to focus on finding the critical exponents for various lattices or graphs so as to classify them into universality classes. The focus of this article is on finding the critical exponents of explosive bond percolation (EBP) on the WPSL and comparing the results with those of random bond percolation (RBP) on the same lattice. The WPSL is a special lattice with some unique features that no other known lattice has. For instance, on one hand, unlike a network or graph, it has the properties of a lattice since its sites are spatially embedded. On the other hand, unlike lattices, its dual displays the properties of a network since its coordination number distribution follows a power law. Besides, unlike a regular lattice, the sizes of its cells are not equal; rather, the distribution of the block areas obeys dynamic scaling \cite{ref.Hassan_Dayeen}. Moreover, the dynamics of the growth of this lattice are governed by infinitely many conservation laws, one of which is the trivial conservation of the total area. One more interesting property of the WPSL is that each of the non-trivial conserved quantities can be used as a multifractal measure, and hence it is also a multi-multifractal \cite{ref.Hassan_Dayeen}. Krapivsky and Ben-Naim also showed that it exhibits multiscaling \cite{ref.Krapivsky_Naim}. Yet another property of the WPSL is that it can be mapped onto a network if we consider the center of each block as a node and the common border between blocks as the link between the corresponding nodes. Interestingly, the degree distribution of the corresponding network exhibits a power law \cite{ref.Hassan_Pavel}. Considering these links as bonds, we perform percolation on the WPSL and find numerically the values of the critical exponents $\beta, \gamma, \nu$ as well as the exponent $\tau$ that characterizes the cluster size distribution function $n_s(p_c)$ and the fractal dimension $d_f$ that characterizes the spanning cluster. One of the advantages that the WPSL has over a network or graph is that we can identify the spanning cluster. Note that networks or graphs do not have edges, sides or boundaries, and hence the relative size of the largest cluster is defined as the order parameter instead of that of the spanning cluster. We compare the results of explosive bond percolation with those of RP and find a distinct set of exponents. In particular, we find that the exponent $\beta$ of EP is remarkably smaller than that of RP on the WPSL, which justifies the name explosive. We also show that the scaling functions of EP are different from those of RP on the WPSL. To the best of our knowledge, this is the first comprehensive study of EP in which all the usual critical exponents are obtained. We show that these values satisfy all the scaling and hyperscaling relations among themselves, as they do in the case of RP.
To test our values, we further use the idea of data collapse, which stands as an ultimate test of their accuracy. These results reveal that the EP model is just another variant of percolation theory. The rest of this article is organized as follows. In section II we briefly discuss the construction and the properties of the WPSL. In section III, we first find the percolation threshold $p_c$ for EP using the idea of the spanning probability $W(p)$, the probability that there is a cluster that spans the entire lattice at $p$. Second, using the same $W(p)$ we also find an estimate for the critical exponent $\nu$. Third, we use the idea of the percolation probability (the order parameter), the ratio of the size of the spanning cluster to the size of the lattice, and the idea of the mean cluster size to find numerical estimates for the critical exponents $\beta$ and $\gamma$, respectively. Besides, we find the exponent $\tau$ of the cluster size distribution function $n_s(p)$ and the fractal dimension $d_f$ of the spanning cluster at $p_c$. Finally, in section IV we summarize our findings. \section{WPSL and its properties} We start by giving a brief description of how we construct the WPSL \cite{ref.Hassan_Pavel}. It starts with a square of unit area, which we regard as an initiator. The generator then divides the initiator, in the first step, randomly with uniform probability into four smaller blocks. In the second step and thereafter, the generator is applied to only one of the blocks. The question is: how do we pick that block when there is more than one block? The most generic choice would be to pick preferentially according to their areas, so that the larger the area, the higher the probability of being picked. For instance, in step one, the generator divides the initiator randomly into four smaller blocks. Let us label their areas, starting from the top left corner and moving clockwise, as $a_1,a_2,a_3$ and $a_4$. Of course the way we label is totally arbitrary and bears no consequence on the final results of any observable quantities. Note that $a_i$ is the area of the $i$th block, which can be regarded as the probability of picking the $i$th block. Interestingly, these probabilities are naturally normalized, $\sum_i a_i=1$, since we choose the area of the initiator equal to one. In step two, we pick one of the four blocks preferentially with respect to their areas. Consider that we pick block $3$ and apply the generator onto it to divide it randomly into four smaller blocks. The label $3$ is now redundant and hence we recycle it to label the top left corner, while the remaining three new blocks are labelled $a_5, a_6$ and $a_7$ in a clockwise fashion. In general, in the $j$th step, we pick one out of $3j-2$ blocks preferentially with respect to area and divide it randomly into four blocks. The detailed algorithm can be found in Refs. \cite{ref.Hassan_Dayeen, ref.Hassan_Pavel}. \begin{figure} \includegraphics[width=6.5cm,height=6.0cm,clip=true]{./c_3001.eps} \caption{A snapshot of the weighted planar stochastic lattice. } \label{fig:1} \end{figure} The creation of the WPSL can also be described by the following processes. First, two mutually perpendicular cuts grow upon random sequential nucleation of a seed in the initiator. Second, the tips of the two cuts move with constant velocity until they are hit or intercepted either by another cut or by the boundary.
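For concreteness, the preferential subdivision rule described above can be summarized in a short numerical sketch; the sketch below is only illustrative (the function name, the data layout and the uniform placement of the nucleating seed inside the chosen block are our assumptions for the illustration, not the actual code used in this work).
\begin{verbatim}
import random

def generate_wpsl(steps, seed=None):
    """Sketch of the WPSL construction: start from the unit square and, at
    each step, pick one existing block with probability proportional to its
    area and split it into four smaller blocks by a random horizontal and a
    random vertical cut through a uniformly nucleated seed."""
    rng = random.Random(seed)
    blocks = [(0.0, 0.0, 1.0, 1.0)]   # (x, y, width, height); the initiator
    for _ in range(steps):
        areas = [w * h for (_, _, w, h) in blocks]
        i = rng.choices(range(len(blocks)), weights=areas, k=1)[0]
        x, y, w, h = blocks.pop(i)
        cx, cy = rng.uniform(0.0, w), rng.uniform(0.0, h)  # seed in the block
        blocks += [(x,      y,      cx,     cy),           # four new blocks
                   (x + cx, y,      w - cx, cy),
                   (x,      y + cy, cx,     h - cy),
                   (x + cx, y + cy, w - cx, h - cy)]
    return blocks

lattice = generate_wpsl(steps=1000, seed=1)   # 3*1000 + 1 = 3001 blocks
\end{verbatim}
After $j$ steps the lattice consists of $3j+1$ blocks, consistent with picking one block out of $3j-2$ at the $j$th step.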
The algorithm can also describe the kinetics of fragmentation of planar objects, in which the effects of size and shape can be dealt with in a minimalist way \cite{ref.Hassan_Rodgers, ref.redner_book}. Despite its simplicity, the process yields a lattice that looks seemingly complex, manifestly intricate and inextricably intertwined, which makes it an interesting candidate in which to look for scaling and order (geometrical or topological). Perhaps a representative snapshot of the lattice, see Fig. \ref{fig:1}, can give a better impression of the lattice than a mere description. In this work we shall treat it as a random lattice, which has the following non-trivial properties. \begin{itemize} \item One of the interesting observable physical quantities for the WPSL is the block size distribution function $C(a,t)$, where $a$ represents the area of the blocks. It describes the concentration of blocks whose area lies within the range $a$ to $a+da$ at time $t$. We have recently shown that it exhibits dynamic scaling \cite{ref.Hassan_Dayeen}. Note that the WPSL is a disordered lattice that emerges through evolution, and hence it can only be useful if the snapshots taken at different late stages are similar. In physics, similarity and self-similarity have a specific meaning. Two snapshots of the WPSL taken at two very different times can be similar if they differ in the numerical values of the dimensional quantities while the numerical values of the corresponding dimensionless quantities coincide. \item The dynamics of the process are governed by infinitely many non-trivial conservation laws, in addition to the trivial conservation of the total area of all the blocks of the lattice. That is, if the $i$th block is described by its length $x_i$ and width $y_i$, then the numerical value of the quantity $M_m=\sum_i^N x_i^{(4/m)-1}y_i^{m-1}$ remains the same regardless of the size of the lattice for any value of $m$, where $m=2$ corresponds to the total area (the trivial conservation law). \item Each of the non-trivial conserved quantities $M_m$ is distributed over the WPSL such that the fraction of this quantity carried by the $i$th block is $p_i\sim x_i^{(4/m)-1}y_i^{m-1}$. After constructing the partition function, the $q$th moment of $p_i$, and measuring it with a square of side $\delta$ equal to the mean block size of the WPSL, we find that it follows a power law with exponent $\tau(q,m)=(1-q)D_q(m)$. The Legendre transform of $\tau(q,m)$ gives the multifractal $f(\alpha)$ spectrum, revealing that each non-trivial conserved quantity is a multifractal measure and hence the WPSL is a multi-multifractal \cite{ ref.Hassan_Pavel}. \item We can map the WPSL onto a network if we regard each block of the WPSL as a node and the common border between blocks as a link. The fraction of the total nodes (blocks) that have degree $k$ defines the degree distribution $P(k)$, the probability that a node picked at random has degree $k$. This is equivalent to the coordination number distribution of the WPSL. We find that $P(k)$ decays obeying a power law \cite{ref.Hassan_Dayeen, ref.Hassan_Pavel}. Thus we see that the WPSL on one hand has the properties of a network since, unlike a typical lattice, its coordination number distribution follows a power law. On the other hand, unlike networks, its nodes are spatially embedded and have edges or boundaries. \item It has a mixture of the properties of both a lattice and a graph.
On one hand, like a lattice, its cells are embedded spatially in a space of dimension $D=2$, and on the other, like a scale-free network, its coordination number distribution follows a power law. \item It also has interesting neighborhood statistics. For instance, the mean area $\langle A\rangle_k$ of only those blocks which have $k$ neighbours obeys the Lewis law, i.e., $\langle A\rangle_k \propto k$, for up to $k=8$, and beyond that it approaches a constant exponentially. Besides, if we regard $m_k$ as the mean or typical number of neighbors of only those blocks which have exactly $k$ neighbours, then we find that $km_k$ is a constant (in the statistical sense). It implies that the Aboav-Weaire law, $km_k\propto k$, is violated in the WPSL over the entire range of $k$ \cite{ref.Hassan_Dayeen}. \end{itemize} \section{Explosive bond percolation on the WPSL} In this article, we investigate explosive and random bond percolation on the WPSL. In either case we first have to understand what a bond is in the context of the WPSL and, second, how many bonds there are in a WPSL of $N$ blocks. To understand what a bond is in the WPSL, we first map the WPSL onto a network which we call the dual of the WPSL. This is obtained by replacing each block by a node at its center and the common border between two blocks by a link connecting the corresponding nodes. It is noteworthy to mention that the network corresponding to the dual of the WPSL has surface sites, while generic networks or graphs do not have such surface sites. It is this feature of the dual of the WPSL which gives the spanning probability a meaning in this case. As regards the second question, the number of bonds in a WPSL of fixed size varies in each independent realization. Interestingly, its average over many, say $\cal{N}$, independent realizations reaches a constant value as we let $\cal{N}\rightarrow \infty$. Initially, the dual of the WPSL consisting of $N$ nodes (blocks) has exactly $N$ clusters of size one. A bond $e_{mn}$ connects two sites $m$ and $n$, which belong to clusters of sizes, say, $s_m$ and $s_n$, respectively. Then, in explosive bond percolation (EBP), according to the AP rule we pick a pair of bonds $e_{ij}$ and $e_{kl}$ at random from all possible distinct bonds. However, this is only a trial attempt, from which the link that minimizes the product of the sizes of the two clusters it attaches is finally occupied and the attempt to occupy the other is discarded. That is, if $s_is_j<s_ks_l$ then the bond $e_{ij}$ is occupied, and if $s_ks_l<s_is_j$ then $e_{kl}$ is occupied, while the attempt to occupy $e_{kl}$ is discarded in the former case and $e_{ij}$ in the latter. On the other hand, in random bond percolation (RBP) only one bond is picked at random and occupied regardless of the sizes of the clusters it attaches. In either case, each time we occupy a bond, a cluster of size two or more is formed. The size of a cluster in the case of bond percolation, be it EBP or RBP, is measured by the number of sites connected by occupied bonds. Understanding the nature of the percolation transition and accurately predicting the percolation threshold are of fundamental importance, and this is one of the central tasks in the study of percolation \citep{ref.ziff_pc_1, ref.ziff_pc_2}.
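To make the occupation rule concrete, a single EBP step can be sketched as follows using a standard union-find structure to keep track of cluster sizes; the sketch is only illustrative (the data structures and names are ours, and the bond list is assumed to hold the bonds of the dual of the WPSL as pairs of site indices).
\begin{verbatim}
import random

class UnionFind:
    """Tracks which cluster each site belongs to and the cluster sizes."""
    def __init__(self, n_sites):
        self.parent = list(range(n_sites))
        self.size = [1] * n_sites       # initially N clusters of size one
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            if self.size[ri] < self.size[rj]:
                ri, rj = rj, ri
            self.parent[rj] = ri
            self.size[ri] += self.size[rj]

def ebp_step(uf, unoccupied_bonds, rng=random):
    """One product-rule (AP) step: of two randomly picked candidate bonds,
    occupy the one whose end clusters have the smaller size product."""
    (i, j), (k, l) = rng.sample(unoccupied_bonds, 2)
    prod_ij = uf.size[uf.find(i)] * uf.size[uf.find(j)]
    prod_kl = uf.size[uf.find(k)] * uf.size[uf.find(l)]
    chosen = (i, j) if prod_ij <= prod_kl else (k, l)
    unoccupied_bonds.remove(chosen)     # the rejected candidate stays unoccupied
    uf.union(*chosen)
    return chosen
\end{verbatim}
The corresponding RBP step is recovered by picking a single bond uniformly at random and occupying it unconditionally.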
On the other hand, it is thought that if finding the critical exponents in random percolation is hard, then finding them in explosive percolation is even harder, especially the $\beta$ value. In this article we will find all the critical exponents and verify them using the scaling and hyperscaling relations. \subsection{Spanning probability $W(p)$} We first attempt to find the percolation threshold $p_c$ and the critical exponent $\nu$ for explosive percolation. The best observable quantity for finding both is the spanning probability $W(p)$. The spanning probability $W(p)$ describes the likelihood of finding a cluster that spans the entire system either horizontally or vertically at the occupation probability $p$. To find how $W(p)$ behaves with the control parameter $p$ we perform many, say $M$, independent realizations under identical conditions. In each realization for a given finite system size we record the $p_c$ value at which the spanning cluster appears for the first time. Thus there is a spanning cluster for all $p>p_c$, and hence we assign zero to all $p<p_c$ and one to each $p\geq p_c$ value. We then count all the $1$s, whose sum can at best be equal to $M$, the number of independent realizations. To find a regularity or pattern among all the $M$ numbers for a given $p$ value, we count all the realizations in which a spanning cluster exists at that $p$. We use these data to obtain the relative frequency of occurrence at a given $p$, which we regard as the spanning probability $W(p)$. In Figs. \ref{fig:2a} and \ref{fig:2b}, we show a set of plots of $W(p)$ for explosive and random bond percolation, respectively, as a function of $p$, where distinct curves represent different system sizes $L=\sqrt{N}$. One of the significant features of such plots is that all the distinct plots for different sizes $L$ meet at one particular $p$ value. Each curve represents a polynomial equation in $p$ for a given $L$. The significance of the meeting point is that it is the common root of all the polynomial equations, and it is actually the critical point $p_c$. In the case of explosive bond percolation, we find $p_c=0.4021$, which is higher than $p_c=0.3457$ for random bond percolation, as expected, since the AP rule systematically delays the emergence of the spanning cluster. We can further tune the value of $p_c$ to get a better estimate using finite-size scaling for $W(p)$. \begin{figure} \centering \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true,angle=-90] {explosive_wpsl_sp.eps} \label{fig:2a} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {wpsl_bond_sp_final.eps} \label{fig:2b} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_nu.eps} \label{fig:2c} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_sp_datacollapse.eps} \label{fig:2d} } \caption{Spanning probability $W(p,L)$ vs $p$ in the WPSL for (a) explosive bond and (b) random bond percolation. The simulation result for the percolation threshold is $p_{c}=0.4021$ for EBP and $0.3457$ for random bond percolation. In (c) we plot $\log(p-p_c)$ vs $\log L$ for both cases. The two lines have slopes $1/\nu=0.8801 \pm 0.0049$ and $0.6117\pm 0.0074$ for explosive and random bond percolation respectively. In (d) we plot the dimensionless quantities $W$ vs $(p-p_c)L^{1/\nu}$ and find that the distinct plots in (a) and (b) collapse superbly onto their own scaling functions.
} \label{fig:2abcd} \end{figure} It is interesting to note that the idea of the spanning probability can also be used to find the critical exponent $\nu$. In pursuit of this, we find it worthwhile to observe the direction of shift of the $W(p)$ vs $p$ curves on either side of $p_c$ as the system size $L$ increases. This shift shows a clear sign of the curves marching towards $p_c$ from either side, revealing that $W(p)$ will ultimately become like a step function in the limit $L\rightarrow \infty$. In other words, it is expected that $W(p)=0$ for $p\leq p_c$ and $W(p)=1$ for $p>p_c$, which is the hallmark of the percolation transition. We can quantify the extent to which they are marching by measuring the magnitude of the difference $(p_c- p)$ for different $L$ at a fixed $W(p)$ value. We do this by drawing a horizontal line at a given value of $W$, preferably at the position where this difference is largest so as to minimize the error, and recording the difference $p_c-p$ as a function of the system size $L$. Plotting the resulting data after taking the log of both variables we find a straight line, see Fig. \ref{fig:2c}, with slope $0.8801\pm 0.0049$ for explosive and $0.6135 \pm 0.0038$ for random bond percolation. The slopes are actually equal to the inverse of $\nu$ and hence we can write \begin{equation} \label{eq:10} p_c- p\sim L^{-{{1}\over{\nu}}}. \end{equation} Indeed, it implies that in the limit $L\rightarrow \infty$ all the $p$ values take the value $p_c$, revealing that $W(p)$ will ultimately become a step function. To further test and verify the value of $\nu$ we use the finite-size scaling hypothesis \begin{equation} \label{eq:fss1} W(p,L)=L^{-a/\nu}\phi_\nu((p-p_c)L^{1/\nu}). \end{equation} Now, $W(p)$ being a step function means $a=0$, and hence if we plot $W(p)$ vs $(p_c- p)L^{{{1}\over{\nu}}}$ then all the distinct plots of $W(p)$ vs $p$ should collapse onto a single universal curve. Indeed, we find an excellent collapse of all the distinct plots for different sizes, which is shown in Fig. \ref{fig:2d}. The quality of the data collapse also provides a test that the values of $p_c$ and $\nu$ obtained numerically are highly accurate. \subsection{Order parameter: Percolation probability $P(p)$} To find the critical exponent $\beta$ we have to consider the equivalent counterpart of the order parameter in percolation. In percolation, the percolation probability $P(p)$ (also sometimes called the percolation strength) is defined as the ratio of the size of the spanning cluster $A_{{\rm span}}$ to the size of the largest possible cluster $N$ (which is actually the size of the lattice). In the case of percolation on a graph or network we, however, use the largest cluster $A_{{\rm largest}}$ in place of the spanning cluster since for a network the notion of spanning does not exist. In the present case, we can use the former since we can recognize the spanning cluster in the WPSL. We plot the percolation probability $P$ in Figs. \ref{fig:3a} and \ref{fig:3b} as a function of $p$ for explosive and random bond percolation respectively. Looking at the plots, one may think that all the plots for different $L$ meet at a single unique point as they do for the $W(p)$ vs $p$ plots. However, if one zooms in, it becomes apparent that this is not so, and hence the $p_c$ value from this plot will not be as accurate as that from the $W(p)$ vs $p$ plot.
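Before analysing $P(p)$ further, the finite-size-scaling steps applied above to $W(p)$, namely extracting $1/\nu$ from the shift of the curves and testing the estimates through data collapse, can be summarized in a short numerical sketch; the array names and the crossing level are illustrative assumptions, not taken from the code used for this work.
\begin{verbatim}
import numpy as np

def estimate_inverse_nu(p_grid, W_by_L, p_c, level=0.5):
    """For each size L find the p at which W(p) crosses a fixed level and fit
    log|p_c - p| against log L; the magnitude of the slope estimates 1/nu."""
    sizes, gaps = [], []
    for L, W in W_by_L.items():      # W assumed monotonically increasing in p
        p_cross = np.interp(level, W, p_grid)
        sizes.append(L)
        gaps.append(abs(p_c - p_cross))
    slope, _ = np.polyfit(np.log(sizes), np.log(gaps), 1)
    return -slope

def collapse(p_grid, W_by_L, p_c, inv_nu):
    """Rescale the abscissa to (p - p_c) L^{1/nu}; with the right p_c and nu
    the curves for all L should fall onto a single scaling function."""
    return {L: ((p_grid - p_c) * L**inv_nu, W) for L, W in W_by_L.items()}
\end{verbatim}
The same recipe is reused below for $P(p)$ and $S(p)$, with the vertical axis additionally rescaled by the appropriate power of $L$.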
We also find that $P(p)$ is not strictly equal to zero at $p<p_c$; rather, there is always a non-zero chance of finding a spanning cluster even at $p<p_c$ as long as the system size $L$ is finite. However, the plots of $P$ vs $p$ also give a clear indication that the chance of getting a spanning cluster at $p<p_c$ diminishes with increasing $L$. There is also a lateral shift of the $P$ vs $p$ plot to the left for $p>p_c$. The extent of this shift decreases with $L$, but $P$ never becomes a step function as in the case of the $W(p)$ vs $p$ plot. In contrast to RBP, the rise of $P$ in EBP is much sharper. Indeed, the growth of $P(p)$ for EBP in the Erd\H{o}s--R\'enyi network is so sharp that it can be mistaken for a step function, in which case the critical exponent would be zero. It was later found that the $\beta$ value in that case is actually just very small. One of the goals of this work is to find the $\beta$ value for explosive percolation on the WPSL and compare its value with that of RBP. \begin{figure} \centering \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true,angle=-90] {explosive_wpsl_pp.eps} \label{fig:3a} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {wpsl_bond_pp_final.eps} \label{fig:3b} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_beta.eps} \label{fig:3c} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_pp_datacollapse.eps} \label{fig:3d} } \caption{Percolation strength or percolation probability $P(p,L)$ in the WPSL for (a) explosive bond and (b) random bond percolation. In (c) we plot $\log P$ vs $\log L$ using data at a fixed value of $(p-p_c)L^{1/\nu}$ and find almost parallel lines with slopes $\beta/\nu=0.059835 \pm 0.00038$ and $0.1357 \pm 0.0002$ for explosive and random bond percolation respectively, which clearly implies that the critical exponent $\beta=0.0679$ for explosive percolation is negligibly small compared to $\beta=0.222$ for random bond percolation on the same lattice. In (d) we plot $PL^{\beta/\nu}$ vs $(p-p_c)L^{1/\nu}$ and find that the distinct plots of (a) and (b) collapse onto their own scaling functions. } \label{fig:3abcd} \end{figure} To show that the percolation probability $P$ does not suffer a jump or discontinuity we need to show that it behaves like $P\sim (p-p_c)^\beta$ with $\beta>0$, since $\beta=0$ would mean a first-order transition. In order to check whether the exponent $\beta=0$ or $\beta>0$ for infinite system size $L$, we again apply the idea of finite-size scaling \begin{equation} \label{eq:fss2} P(p,L)\sim L^{-\beta/\nu}\phi_\beta((p-p_c)L^{1/\nu}). \end{equation} We already know the $\nu$ value from the $W(p)$ vs $p$ curves. To find $\beta/\nu$ we first plot $P(p)$ vs $(p- p_c)L^{{{1}\over{\nu}}}$ and find that, unlike $W(p)$ vs $(p- p_c)L^{{{1}\over{\nu}}}$, it does not collapse. This immediately implies that $\beta\neq 0$ and hence $P(p)$ does not suffer a jump, revealing that EP is not first order. To find the value of $\beta/\nu$, we measure the heights $P_{{\rm height}}$ at a given value of $(p-p_c)L^{1/\nu}$ for different $L$. We then plot $\log (P_{{\rm height}})$ vs $\log (L)$ as shown in Fig. \ref{fig:3c} and find straight lines with slopes $\beta/\nu=0.0598 \pm 0.0003$ for EBP and $0.1357 \pm 0.0002$ for RBP, revealing that \begin{equation} \label{eq:11} P(p,L)\sim L^{-\beta/\nu}. \end{equation} Now according to Eq.
\ref{eq:fss2}, if we now plot $PL^{\beta/\nu}$ vs $(p-p_c)L^{1/\nu}$ all the distinct plots of $P$ vs $p$ should collapse onto a single universal curve. Indeed, we see that all the distinct plots of Figs. \ref{fig:3a} and \ref{fig:3b} collapse superbly onto their own universal scaling curves (see Fig. \ref{fig:3d}). Now using Eq. (\ref{eq:10}) in Eq. (\ref{eq:11}) to eliminate $L$ in favor of $p-p_c$ we get \begin{equation} \label{eq:pp4} P\sim (p-p_c)^\beta, \end{equation} where $\beta=0.0679$ and $\beta=0.222$ for explosive and random bond percolation respectively. It is clear that the $\beta$ value for explosive percolation is unusually small compared to its value for random bond percolation. \subsection{Susceptibility: Mean cluster size S(p)} \begin{figure} \centering \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true,angle=-90] {explosive_wpsl_mcs_1.eps} \label{fig:4a} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {wpsl_bond_mcs_final.eps} \label{fig:4b} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_gamma.eps} \label{fig:4c} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_mcs_datacollapse.eps} \label{fig:4d} } \caption{The mean cluster size $S(p,L)$ for (a) explosive bond and (b) random bond percolation as a function of $p$ for different sizes of the WPSL. In the case of bond percolation the cluster size is measured by the number of sites each cluster contains, and in the case of site percolation it is the total area of the contiguous blocks that belong to the same cluster. In (c) we plot $\log S$ vs $\log L$ using the value of $S$ at a fixed value of $(p-p_c)L^{1/\nu}$ and find almost parallel lines with slope $\gamma/\nu$ equal to $1.8818 \pm 0.0069$ and $1.7315\pm 0.0019$ for explosive and random bond percolation respectively. In order to obtain a better estimate of the $\gamma$ value we also plot the same data of (a) and (b) in the self-similar coordinates, namely $SL^{-\gamma/\nu}$ vs $(p-p_c)L^{1/\nu}$, in (d). We again find that all the distinct plots of (a) and (b) collapse onto their respective universal curves. } \label{fig:4abcd} \end{figure} The mean cluster size is regarded as the equivalent counterpart of the susceptibility. Using the idea of the cluster size distribution function $n_s(p)$, the number of clusters of size $s$ per site, we can define the mean cluster size $S(p)$ as \begin{equation} \label{eq:nsp3} S(p)=\sum_s sf_s={{\sum_s s^2n_s}\over{\sum_s sn_s}}, \end{equation} where $f_s=sn_s/\sum_s sn_s$ is the probability that a randomly chosen occupied site belongs to a cluster of size $s$, and the sums run over the finite clusters only, i.e., the spanning cluster is excluded from the enumeration of $S$. In Figs. \ref{fig:4a} and \ref{fig:4b} we show the plots of $S(p)$, for both explosive and random bond percolation, as a function of $p$ for different lattice sizes $L=\sqrt{N}$. We observe that in either case the peak height grows markedly with $L$ in the vicinity of $p_c$. To find the critical exponent $\gamma$ we first plot $S$ vs $(p_c-p)L^{1/\nu}$ and find that the peak heights $S_{{\rm peak}}$ lie along the same line. We then measure the size of $S_{{\rm peak}}$ for different $L$. Plotting $\log[S_{{\rm peak}}]$ vs $\log(L)$ in Fig. (\ref{fig:4c}) we find a straight line for both explosive and random bond percolation, revealing that \begin{equation} \label{eq:nsp5} S_{{\rm peak}}\sim L^{\gamma/\nu}, \end{equation} where we find that $\gamma/\nu$ equals $1.8818 \pm 0.0069$ and $1.7280 \pm 0.0019$ for explosive and random bond percolation respectively.
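For completeness, a minimal sketch of how $S(p)$ can be accumulated from the individual cluster sizes tracked by the union-find bookkeeping sketched earlier is given below; the spanning cluster, when present, is excluded from both sums, and the names are illustrative rather than taken from the code used here.
\begin{verbatim}
import numpy as np

def mean_cluster_size(uf, spanning_root=None):
    """S = sum_s s^2 n_s / sum_s s n_s computed directly from the individual
    cluster sizes; the spanning cluster is left out of both sums."""
    roots = {uf.find(i) for i in range(len(uf.parent))}
    sizes = np.array([uf.size[r] for r in roots if r != spanning_root], float)
    return (sizes ** 2).sum() / sizes.sum()
\end{verbatim}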
Plotting now $SL^{-\gamma/\nu}$ vs $(p_c-p)L^{1/\nu}$ in Fig. (\ref{fig:4d}) we find that all the distinct plots of Figs. (\ref{fig:4a}) and (\ref{fig:4b}) collapse superbly onto universal curves. Such a data collapse is a clear testament that the mean cluster size too exhibits finite-size scaling \begin{equation} S(p,L)\sim L^{\gamma/\nu}\phi_\gamma((p-p_c)L^{1/\nu}). \end{equation} Eliminating $L$ in favor of $(p_c-p)$ using Eq. (\ref{eq:10}), i.e., $(p_c-p)\sim L^{-1/\nu}$, we find that the mean cluster size diverges as \begin{equation} \label{eq:nsp7} S\sim (p_c-p)^{-\gamma}, \end{equation} where $\gamma=2.13816$ and $\gamma=2.825$ for explosive and random bond percolation respectively. These values are significantly different from the known value $\gamma= 2.389$ for all the regular planar lattices. \begin{figure} \centering \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true,angle=-90] {combine_distribution.eps} \label{fig:5a} } \subfloat[] { \includegraphics[height=4.0 cm, width=2.4 cm, clip=true, angle=-90] {combine_fractal.eps} \label{fig:5b} } \caption{(a) The cluster size distribution function $\log(n_s(p_c))$ vs $\log s$ for different sizes of the WPSL; we find almost parallel lines with slopes $2.030$ and $2.0728$ for explosive and random bond percolation respectively, implying that the $\tau$ value depends only weakly on the type of percolation. (b) The mass of the spanning cluster $M$, the total area in the case of site percolation and the number of sites in the case of bond percolation, is shown as a function of the system size $L$. The two lines, with slopes $d_f=1.9415 \pm 0.0055$ for explosive and $1.8643 \pm 0.0014$ for random bond percolation, show that the fractal dimension of the spanning cluster depends only weakly on the type of percolation. } \label{fig:5ab} \end{figure} It is well known that the cluster size distribution function $n_s(p)$ obeys \begin{equation} \label{eq:nsp8} n_s(p)\sim s^{-\tau}\phi((p-p_c)^{1/\sigma}s), \end{equation} and hence at $p=p_c$ it becomes \begin{equation} \label{eq:tau} n_s(p_c)\sim s^{-\tau}, \end{equation} where $\tau$ is called the Fisher exponent. To obtain the value of $\tau$ numerically we plot $\log[n_s(p)]$ vs $\log(s)$ at $p_c$ in Fig. (\ref{fig:5a}) for both explosive and random bond percolation. The resulting plots in both cases are straight lines with a hump near the tail due to finite-size effects. However, we also observe that as the lattice size $L$ increases, the extent up to which we obtain a straight line increases too. It implies that if the size $L$ were infinitely large, we would have a perfect straight line obeying Eq. (\ref{eq:tau}). The slopes of the lines are $\tau=2.030$ for explosive and $\tau=2.0725$ for random bond percolation. Thus the exponent $\tau$ is nearly the same for explosive and random bond percolation on the WPSL, and its value is different from the value $\tau=2.0549$ for all known planar lattices. \subsection{Fractal dimension of the spanning cluster} Let $M(L)$ denote the mass or size of the percolating cluster at $p_c$ in a system of linear size $L$. Now we check the geometric nature of the spanning cluster. First, if the cluster were a Euclidean object, then its mass $M(L)$ would grow as $M(L)\sim L^d$ with $d=2$, since the dimension of the embedding space of the WPSL is $d=2$. Now, a litmus test of whether the spanning cluster is a fractal or not is to check whether the exponent $d=2$ or $d<2$.
If we find $d<2$, that would mean the density of occupied sites decreases as $L$ increases, which would essentially mean that the spanning cluster is a ramified or stringy object. To find the value of $d$ in the present case we plot the size or mass of the spanning cluster $M$ as a function of the lattice size $L$ on a log-log scale, as shown in Fig. (\ref{fig:5b}). Indeed, we find a straight line with slope $d_f=1.9415\pm 0.0055$ for EBP and $d_f=1.8637 \pm 0.0224$ for RBP. The difference between the two values may appear small, but it is important to remember that even a small difference in the fractal dimension has a large impact on the degree of ramification. It is well known that the numerical values of the various exponents $\beta, \gamma, \tau, d_f$, etc.\ for RBP cannot just assume arbitrary values; rather, they are bound by scaling and hyperscaling relations. We find that the same is true for EBP, as its exponents too are bound by the same scaling and hyperscaling relations, such as $\tau=3-\gamma \sigma$, $\tau=1+d/d_f$, $\beta=\nu(d-d_f)$, $\gamma=\nu(2d_f-d)$, etc. We find that our estimates for the various critical exponents satisfy these relations to quite a good extent, regardless of whether we consider EBP or RBP. \begin{table}[h!] \centering \begin{tabular}{| l | l | l |} \hline Exponents & RBP on WPSL & EBP on WPSL \\ \hline $\nu$ & 1.635 & 1.136 \\ \hline $\beta$ & 0.222 & 0.0679 \\ \hline $\gamma$ & 2.825 & 2.137 \\ \hline $ \tau$ & 2.0728 & 2.03 \\ \hline $d_f$ & 1.864 & 1.941 \\ \hline \end{tabular} \caption{The characteristic exponents for explosive and random bond percolation on the WPSL.} \label{table:1} \end{table} \section{Summary and discussion} In this article, we have studied explosive bond percolation on the WPSL using extensive Monte Carlo simulations. The primary goal of this article is to study explosive bond percolation on the WPSL. To this end, we have first obtained the percolation threshold $p_c=0.4021$ for EBP, which is greater than the value $p_c=0.3457$ of random bond percolation, as expected. We studied numerically the spanning probability $W(p)$, the percolation strength $P(p)$ and the mean cluster size $S(p)$ using the Newman--Ziff (NZ) algorithm. The resulting data were then used within finite-size scaling theory to obtain the critical exponents $\nu, \beta$, $\gamma$ as well as other related exponents like $\tau$ and $d_f$. To that end, we obtained them numerically for EBP and compared them with those for RBP (see Table \ref{table:1} for a detailed comparison). Note that in all cases we found excellent data collapse. The quality of the data collapses provides a clear testament that the estimated values of the various exponents are exceedingly close to the exact values. Besides, we found that these values obey all the scaling and hyperscaling relations, as they do in random percolation. It implies that EP is nothing special except for the fact that the $\beta$ value is extremely small compared to the value found in random percolation. Such a small $\beta$ value makes it difficult to tell whether the order parameter really suffers a jump or varies continuously. This was exactly the reason why explosive percolation was initially considered to describe a first-order transition. Note that a comprehensive study of EBP to find critical exponents and to classify it into universality classes has not yet even begun. In contrast, the classification of random percolation has been extensively studied, and the results are quite interesting.
For instance, it has been found that the numerical values of the critical exponents are universal in the sense that they depend only on the dimension of the lattice. Their values depend neither on the detailed structure of the lattice nor on the type of percolation, i.e., whether the percolation is of site or bond type. Remarkably, a similar classification has also been found to hold for models of continuous thermal phase transitions. Indeed, it has been found that the corresponding critical exponents of thermal phase transitions depend neither on the lattice structure nor on the nature of the interaction, but only on the spatial dimensionality, the spin dimensionality and the range of the interactions. Recently, we have shown that random percolation on the WPSL does not belong to the same universality class to which all the known planar lattices belong, despite the fact that the dimension of the WPSL and that of the space in which it is embedded are the same. This is an exceptional case, which is not so surprising given that the WPSL is itself an exceptional lattice. We hope that our findings will have a significant impact on future studies of percolation theory, especially in classifying explosive percolation into universality classes.
\section{Introduction} Synchronization among populations of the same species is a widely observed collective phenomenon in studies of ecological networks \cite{blasius1999complex,HoHa08}. Synchrony in the dynamics of populations creates interdependence in their abundance, and simultaneous low abundance can lead to simultaneous extinction. Thus, synchrony increases the risk of network-wide extinction and correspondingly reduces species persistence. Population synchrony can be driven by several factors, including dispersal network structures or connectivity patterns \cite{HoHa08,gupta2017increased}. Even though species connectivity via dispersal has attracted much attention due to both its positive and negative effects on the persistence and stability of spatially separated populations \citep{koelle2005dispersal,gravel2011persistence,fox2017population,dutta2015spatial}, how temporal changes in the connectivity can influence species persistence has received much less attention. Specifically, how temporal changes in species connectivity influence the nonlinear dynamics of a metacommunity is still unclear \citep{pilosof2017multilayer}. \citet{leibold2004metacommunity} has defined a {\em metacommunity} as ``a set of local communities that are linked by dispersal of multiple potentially interacting species". Connectivity between spatially separated habitat patches is an integral component of metacommunity ecology \citep{walther2002ecological,HoHa08,hodgson2009climate,senior2019global}. \citet{taylor1993connectivity} described `connectivity' among habitat patches as ``$\dots$connectivity is the degree to which the landscape facilitates or impedes movement among resource patches''. Over time, many approaches have led to alternative definitions of population connectivity, e.g.,~structural, genetic, and functional connectivity \citep{kool2013population}. Though population connectivity can be defined in various ways under diverse ecological circumstances, these definitions share a common characteristic: they all correspond to spatial linkages/dependencies between populations or individuals. Many studies have shown that population connectivity via dispersal is as important to population viability as the distribution of resources \citep{fahrig1988effect}; however, connectivity patterns in fragmented landscapes are in general ignored. Landscapes, where species movements occur, can vary temporally through the distribution and quality of habitat over time \citep{zeigler2014transient}. As a result, species dynamics can vary along the complex spectrum from `static' to `dynamic' environments \citep{levins1969some,Han99}. Studies of metacommunity dynamics in static environments have mostly focused on static networks, where links offer a permanent connectivity pattern between habitat patches \citep{moilanen1998long,HoHa08}. However, habitats are disturbance-driven in dynamic landscapes, and the links between them are best described as forming `temporal' networks, where connectivity may change across different timescales \citep{bishop2018evaluating}. For example, the marsh fritillary butterfly {\em Euphydryas aurinia} in Finland, inhabiting dynamic landscapes, exhibits patch networks that vary over time \citep{wahlberg2002dynamic}. In a temporal patch network, the links/connectance between species habitat patches varies over time \citep{holme2012temporal,pilosof2017multilayer,sundaresan2007network}.
More specifically, a temporal patch network is a `sequence of separate networks' on the same set of patches/nodes, where each such snapshot is characterized by an adjacency matrix (i.e., a square matrix representing the structure of a finite graph/network) for a particular time duration \citep{li2017fundamental}. Therefore, temporal connectivity can also be recognized as having a `transient' character. For dynamic environments, \citet{zeigler2014transient} argue that the structure of connectivity should be seen as time-varying (transient) rather than static, due to changes in biotic and abiotic conditions influencing metacommunity dynamics. A temporal connectivity pattern is also known to create a short window during which opportunities for movement between particular patches increase, depending on a species' generation time or life history. Nonetheless, to understand metacommunity dynamics governed by changes in the species interaction patterns due to life history or anthropogenic factors, temporal networks could provide a useful framework \citep{olesen2008temporal,mucha2010community,olesen2011strong}. For static network structures, it is known that an increased dispersal strength inevitably induces a higher degree of synchrony and ultimately reduces metapopulation persistence \citep{hastings1993complex,blasius1999complex}. \citet{koelle2005dispersal} have shown that by adopting a metacommunity framework, this pattern of persistence can be altered, resulting in dispersal-induced de-synchronization. Exploring the nature of synchronization/de-synchronization dynamics under the framework of temporal networks has led to exciting observations in generic networks of nonlinear units \cite{Boccaletti06,Sorrentino08,Ghosh22}. The basin stability measure \cite{menck2013basin} has been used to determine the stability of the synchronous state in temporal networks \cite{kohar2014}. \citet{masuda2013temporal} report that synchronization is more challenging to achieve in temporal networks than in the corresponding aggregate networks. Surprisingly, most of the studies deal with networks whose structure changes faster than the characteristic timescale of their individual units. For such a fast-changing network structure, the dynamics of the system may be treated as if the network were static in terms of synchronization stability, under the adiabatic approximation \cite{Stilwell06,Porfiri06,kohar2014,Petit17}. However, in ecological networks, changes in network structure occur at a much slower rate, which is best set by the dominant period of oscillation of the individual nodes or the corresponding harmonics. Motivated by the above arguments, in this paper we study the synchronous/asynchronous dynamics of an ecological time-varying network whose time rate of change (or rewiring frequency) is comparable with the natural frequency of the constituent nodes or its subharmonics. We consider the small-world network topology \cite{strogatz2001} as the core network structure, and the uncoupled dynamics of the nodes are governed by the chaotic Hastings-Powell model of a three-species food chain \cite{hastings1991chaos}. Here, we employ the wavelet transform method to identify each node's dominant period of oscillation and its harmonics, and the network is rewired following those periods. Moreover, appropriate coupling strengths are chosen based on the master stability function approach \cite{pecora1998master}.
Importantly, for suitable coupling strength, average degree, and rewiring period, we find that an increase in the rewiring probability drives the network from an asynchronous to a synchronous state; however, a further increase of the rewiring probability eventually leads back to asynchronous dynamics. We also find that temporal networks with a higher average degree and a small rewiring period can propel the asynchronous dynamics to synchronous ones and, therefore, reduce species persistence. Our results are supported by measures from the master stability function \cite{pecora1998master} and the basin stability \cite{menck2013basin}. We further corroborate our results using the concept of clustering frequency and the transient time to synchronization. Finally, we demonstrate the generality of our study through another temporal network, where the species dynamics in each node are governed by the Blasius-Huppert-Stone foodweb model \cite{blasius1999complex}. \section{Models and Methods \label{s:model}} \subsection{A metacommunity model} We study the dynamics of a metacommunity model consisting of $N$ spatially separated patches connected by dispersal that follows a time-varying network topology. In each patch, the uncoupled dynamics are governed by a chaotic three-species food chain model \citep{hastings1991chaos}, with a basal resource population ($x$), an intermediate consumer population ($y$), and a top predator population ($z$). Within a patch, the dynamics of the food chain are characterized by logistic growth and type-II functional responses. Further, diffusive dispersal connects the interacting patches, which form a time-varying network described by the following set of differential equations: \begin{subequations}\label{eq1} \begin{align} \frac{dx_i}{dt} & = x_i(1-x_i)-\frac{a_1 x_i y_i}{1+b_1 x_i},\\ \frac{dy_i}{dt} & = \frac{a_1 x_i y_i}{1+b_1 x_i}- \frac{a_2 y_i z_i}{1+b_2 y_i}-d_1 y_i +\epsilon_1 \sum_{j=1}^{N} L_{ij} y_j,\\ \frac{dz_i}{dt} & = \frac{a_2 y_i z_i}{1+b_2 y_i}-d_2 z_i+\epsilon_2 \sum_{j=1}^{N} L_{ij} z_j, \end{align} \end{subequations} where $i (=1,2,...,N)$ denotes the node/patch index. Here, the consumer $y$ depends on the resource $x$ for its survival, and the predator $z$ at the top level depends on the consumer $y$. The system parameters of the uncoupled model (i.e., when $\epsilon_1=0$ and $\epsilon_2=0$) are $a_1$, $a_2$, $b_1$, $b_2$, $d_1$ and $d_2$. Unless stated otherwise, throughout this paper we consider the parameter values of the uncoupled model as $a_1 = 5$, $a_2 = 0.1$, $b_1 = 3$, $b_2 = 2$, $d_1 = 0.4$, and $d_2 = 0.01$ \citep{hastings1991chaos}. The diffusive dispersal connects the patches with dispersal rates $\epsilon_1$ and $\epsilon_2$ for the consumer ($y$) and the top predator ($z$), respectively. For simplicity, in this study we have assumed $\epsilon_1 = \epsilon_2 = \epsilon$. Here, both species immigration and emigration are described by the Laplacian matrix ($L_{ij}$) obtained from the adjacency matrix ($A_{ij}$) of the considered network. In particular, the elements of the adjacency matrix are defined as $A_{ij}=1$ if patches $i$ and $j$ are connected via dispersal, and $A_{ij}=0$ otherwise. The diagonal elements of the Laplacian matrix are the column (or row) sums of the adjacency matrix with a negative sign, representing the emigration from the $i$-th patch to the other connected patches.
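As a minimal illustration of how the coupled system \eqref{eq1} can be assembled numerically from an adjacency matrix, consider the sketch below; the parameter values are those quoted above, while the function names and the data layout are ours and merely illustrative, not the code used in this study.
\begin{verbatim}
import numpy as np

a1, a2, b1, b2, d1, d2 = 5.0, 0.1, 3.0, 2.0, 0.4, 0.01  # uncoupled parameters

def laplacian(A):
    """Coupling matrix: off-diagonal entries equal the adjacency matrix,
    diagonal entries equal minus the patch degrees."""
    return A - np.diag(A.sum(axis=1))

def metacommunity_rhs(t, u, L, eps):
    """Right-hand side of the coupled model; u stacks (x_i, y_i, z_i) over
    all N patches, and dispersal of strength eps acts on y and z."""
    N = L.shape[0]
    x, y, z = u[:N], u[N:2 * N], u[2 * N:]
    f1 = a1 * x * y / (1 + b1 * x)        # resource -> consumer
    f2 = a2 * y * z / (1 + b2 * y)        # consumer -> predator
    dx = x * (1 - x) - f1
    dy = f1 - f2 - d1 * y + eps * L.dot(y)
    dz = f2 - d2 * z + eps * L.dot(z)
    return np.concatenate([dx, dy, dz])
\end{verbatim}
The $(t,u)$ call signature is the one expected by standard ODE integrators (e.g., \texttt{scipy.integrate.solve\_ivp} with \texttt{args=(L, eps)}); when the network is rewired at period $T$, the Laplacian passed to the integrator is simply replaced by that of the new snapshot.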
The diagonal elements of the Laplacian matrix thus equal the degree of the $i$-th patch with a negative sign, i.e., $L_{ii} = -\sum_{j\neq i} A_{ij}$ for each $i$, while $L_{ij}=A_{ij}$ when $i \neq j$. \subsection{Temporal network with each snapshot following a small-world network topology} Various network structures, such as regular, small-world, and random networks, can be considered in the metacommunity model (\ref{eq1}), depending on the connectivity pattern between spatially separated patches. These network structures are widely used in ecology and other fields to study the collective dynamics of coupled oscillators \cite{ranta2007population,HoHa08,stankovski2017coupling,ArDu18,arumugam2019dynamic}. Each of these network structures can be generated by the Watts-Strogatz algorithm \citep{WaSt98} for different values of a rewiring probability ($p$). For example, a network is regular if $p=0$, completely random when $p=1$, and follows a small-world structure if $0<p<1$. \begin{figure}[!ht] \centering \includegraphics[width=0.98\columnwidth,angle=0]{Fig1PSD.ps} \caption{Schematic representation of a time-varying network composed of a `sequence of separate networks'. Each sub-figure represents a snapshot that follows a small-world network topology associated with a rewiring probability ($p$). After a fixed period (say $T$), there is a change in the network structure, keeping the rewiring probability unaltered. A chaotic dynamical system governs the uncoupled dynamics in each node.} \label{f:schme} \end{figure} Traditional research on ecological networks has considered small-world and random network structures under the framework of static networks \cite{ranta2007population,HoHa08}. In a static network, the connectivity structure is invariant over time. However, in a temporal network, the connectivity evolves, involving two key mechanisms, i.e., when and how the connectivity changes. Here, we study the collective dynamics of the metacommunity model (\ref{eq1}) that follows a temporal network structure and is composed of a chaotic oscillator at each patch. For the sake of completeness and comparison, we also study the system's dynamics for a static network structure. Figure~\ref{f:schme} shows a schematic representation of our modeling framework. The initial network is chosen by rewiring a regular network with probability $p$. The patch connectivity is rewired at each fixed period ($T$), keeping the rewiring probability unaltered. Although we allow connectivity between patches to evolve at a fixed time interval $T$, the average degree of the network remains unaltered. Hence, for a time-varying network, the Laplacian matrix ($L_{ij}$) in model \eqref{eq1} changes intermittently at each period $T$; otherwise, in the intermediate time, it remains unaltered. The following section discusses the choice of the rewiring period ($T$). \begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fig2PSD.eps} \caption{Wavelet analysis of a chaotic time series of the model \eqref{eq1} in the absence of coupling. Chaotic time series of (a) the resource, (b) the consumer, and (c) the top predator; corresponding (d) phase portrait of the chaotic attractor, (e) wavelet power spectra, and (f) wavelet global spectrum.
Model parameters are $a_1=5$, $a_2=0.1$, $b_1=3$, $b_2=2$, $d_1=0.4$, and $d_2=0.01$.} \label{f:time_phase} \end{figure*} \subsection{Wavelet analysis of a chaotic time series} Unlike most studies on temporal networks, here we do not consider changes in the network structure at every integration step. We rewire the network structure with a period $T$, which is determined by the characteristic time of the nodal oscillators. To determine $T$, we apply the wavelet transform to a chaotic time series of the uncoupled Hastings-Powell model. The chaotic time series, consisting of multiple cycles (see Figs.~\ref{f:time_phase}(a)-\ref{f:time_phase}(c)), is analyzed through the wavelet transform, which determines the localized variations within the time series \citep{torrence1998}. From Figs.~\ref{f:time_phase}(e) and \ref{f:time_phase}(f), the dominant period and the associated subharmonics of the chaotic time series can be found. From a practical point of view, the dominant period is comparable to the life cycle of a species; keeping this in mind, we rewire the networks at a subharmonic of the dominant period, assuming that a species may change its dispersal network structure a few times in a life cycle. Later in this paper, we show the importance of choosing the rewiring period ($T$). \subsection{Linear stability analysis of synchronized solutions \label{M:MSF}} The interaction/coupling strength plays a crucial role in governing the collective dynamics of a system of coupled oscillators. It is known that in the weak coupling regime, decreasing the coupling strength may weaken synchrony. Therefore, it is essential to know the suitable coupling range in which the synchronous solution is stable. To determine the appropriate coupling range for the model \eqref{eq1}, here we follow the master stability function (MSF) approach \cite{pecora1998master}. Below, we briefly describe the MSF approach for temporal networks. Consider a coupled system of identical oscillators written as $\dot{X_{i}}=F(X_{i})$, $i=1,2,\dots,N$, with $X_{i}\in R^{d}$ and $F:R^{d}\rightarrow R^{d}$, where $X_{i}$ represents the $d$-dimensional vector which describes the dynamics at the $i$-th node. At each isolated node of the network, the dynamics are governed by the function $F(X_{i})$. If each node interacts with its neighbours, then the dynamics of the $i$-th node can be written as: \begin{eqnarray} \dot{X_{i}} &= & F(X_{i})+\epsilon \sum_{j=1}^{j=N} A_{ij}(t)[H(X_{j})-H(X_{i})],\nonumber \\ & = & F(X_{i})+\epsilon \sum_{j=1}^{j=N} L_{ij}(t)H(X_{j});\; i=1,..,N, \label{Eq:2C} \end{eqnarray} where $\epsilon$ represents the coupling strength, $L_{ij}$ is the Laplacian matrix, and $H: R^{d}\rightarrow R^{d}$ defines the coupling function representing the interaction between different nodes. Further, we calculate the local asymptotic stability of the oscillators along the synchronization manifold $X_{1}=X_{2}=X_{3}=\dots=X_{N}=X_{0}$. The variational equation of \eqref{Eq:2C} is given by: \begin{equation}\label{eq:3P} \dot{\xi}=[I_{N}\otimes DF+\epsilon L(t)\otimes DH]\xi, \end{equation} where $\xi=(X_{1}-X_{0}, X_{2}-X_{0}, X_{3}-X_{0}, \dots, X_{N}-X_{0})^{T}$ is the perturbation vector, $I_{N}$ is the $N \times N$ identity matrix, $\otimes$ represents the Kronecker product, and $DF$ and $DH$ are the Jacobians of $F$ and $H$, respectively, evaluated on the synchronous solution ($X_{0}$).
If the Laplacian matrices $L(t)$ and $L(t')$ commute for any $t$ and $t'$, then we can find an orthogonal matrix $Q$ such that $Q^{T}L(t)Q$ is diagonal for all $t$, where $Q^T$ stands for the transpose of $Q$. Using the block-diagonalized form of \eqref{eq:3P}, we obtain $N$ independent $d$-dimensional equations: \begin{equation}\label{eq:4P} \dot{\delta_{i}}=[DF+\epsilon \lambda_{i}(t) DH] \delta_{i}, ~~i=1,\dots,N, \end{equation} where $(\delta_{1},\delta_{2},\dots, \delta_{N})^{T}=(Q^{T}\otimes I_{d})\xi$, and $\lambda_{i}$ are the eigenvalues of $L$. The synchronous solution is stable if all perturbation modes transverse to the synchronization manifold decay asymptotically to zero. The decoupled variational equations \eqref{eq:4P} differ only in $\lambda_{i}(t)$; all other terms are identical. To study the stability of the synchronous state, it is therefore enough to study the maximum Lyapunov exponent of the generic variational equation \begin{equation} \dot{\zeta}=[DF+\alpha DH] \zeta, \end{equation} where $\alpha$ stands for the products $\epsilon\lambda_{i}$ of the coupling strength and the eigenvalues. The maximum Lyapunov exponent, viewed as a function of $\alpha$, is called the MSF and is denoted by $\Lambda(\alpha)$. The synchronous solution is stable if the MSF $\Lambda(\alpha)$ is negative for all transverse modes ($i\geq 2$). Further, there are mainly three possible cases for $\Lambda(\alpha)<0$: (i) no such $\alpha$ exists, i.e., $\Lambda(\alpha)$ has no crossing point; (ii) $\alpha_{1}<\epsilon \lambda_{i}$, i.e., $\Lambda(\alpha)$ has one crossing point; and (iii) $\alpha_{1}<\epsilon \lambda_{i}<\alpha_{2}$, i.e., $\Lambda(\alpha)$ has two crossing points \citep{huang2009generic}. Structural evolution in complex temporal networks has most often been studied via different rewiring schemes, such as slow switching (rewiring links after longer periods) and fast switching (more frequent rewiring). The condition for a stable synchronous state differs between slow and fast switching. Let a network switch among $M$ different configurations (snapshots) $\{L_{1}, L_{2},\dots, L_{M}\}$ after a fixed rewiring period $T$; then the necessary condition for achieving a stable synchronous state is \cite{zhou2016synchronization}: \begin{align*} \sum_{k=1}^{M} \dfrac{1}{M}\Lambda(\epsilon \lambda_{k}^{i_{k}})<0. \end{align*} If the network switches on a fast scale, yielding $M$ arbitrary sequential structures, then the condition for a stable synchronous state is as follows: \begin{align*} \Lambda(\dfrac{1}{M}\sum_{k=1}^{M} \epsilon \lambda_{k}^{i_{k}})<0. \end{align*} In the fast-switching case, the stability of the synchronous state in a network with time-varying topology can thus be obtained by calculating the MSF for the static time-averaged network. Hence, when the network structure evolves via fast switching, calculating the MSF from the time-averaged matrix $\bar{L}=\dfrac{1}{M}\sum_{k=1}^{M} L_{k}$ is sufficient \citep{stilwell2006sufficient}. Thus, the type of switching scheme favorable for synchronization can be anticipated from the MSF approach: a concave (convex) MSF shape indicates that the network supports synchronization under fast (slow) switching \cite{zhou2016synchronization}. \subsection{Basin stability} The basin stability (BS) is a non-local and nonlinear measure of stability related to the basin volume of multistable systems, including higher-dimensional complex networks \cite{menck2013basin}. The BS measure is known to complement the linear stability analysis.
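Before describing the BS computation, we note how the MSF introduced above is evaluated in practice: $\Lambda(\alpha)$ is obtained numerically by integrating the nodal dynamics together with the generic variational equation and extracting the largest Lyapunov exponent for each value of $\alpha$. The following minimal Python sketch illustrates this procedure; the vector field \texttt{F}, its Jacobian \texttt{DF}, and the Jacobian \texttt{DH} of the coupling function are placeholders to be supplied from the model \eqref{eq1}, and the step size, integration length, and renormalization interval are illustrative choices rather than those used to produce the figures.
\begin{verbatim}
import numpy as np

def msf(F, DF, DH, alpha, x0, dt=1e-3, n_steps=2_000_000, renorm_every=100):
    """Estimate Lambda(alpha): the largest Lyapunov exponent of
    zeta' = [DF(X0(t)) + alpha * DH(X0(t))] zeta, where X0(t) follows the
    uncoupled node dynamics X' = F(X) (a transient should be discarded
    from x0 beforehand)."""
    x = np.asarray(x0, dtype=float)
    zeta = np.random.default_rng(0).standard_normal(x.size)
    zeta /= np.linalg.norm(zeta)
    log_growth = 0.0
    for step in range(1, n_steps + 1):
        # crude explicit Euler steps for the trajectory and the perturbation
        zeta = zeta + dt * (DF(x) + alpha * DH(x)) @ zeta
        x = x + dt * F(x)
        if step % renorm_every == 0:
            norm = np.linalg.norm(zeta)
            log_growth += np.log(norm)
            zeta /= norm
    return log_growth / (n_steps * dt)
\end{verbatim}
Scanning $\alpha$ over a grid and locating the zero crossings of the estimated $\Lambda(\alpha)$ then yields the stability interval used in the results below.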
To determine the BS of the considered system \eqref{eq1}, we numerically simulate it for a large number $I$ of initial conditions, chosen uniformly from the region $[0,1]\times[0,0.5]\times[7.5,11.5]$ (which has been chosen from the existence region of the chaotic attractor depicted in Fig.~\ref{f:time_phase}). If $I_{s}$ is the number of initial conditions that arrive at the synchronous state, then we define $\mbox{BS} = \dfrac{I_{s}}{I}$. Whether or not an initial condition converges to the synchronous state is determined by an order parameter, namely the synchrony measure ($\sigma_m$), evaluated over a large enough time $\hat t$. The synchrony measure ($\sigma_m$) is defined as below \citep{KoMu11,ArDu18}: \begin{eqnarray*}\label{syncmeaseq} \sigma_m &= &\sqrt{1 - \left \langle \frac{\sum_{i=1}^{N}[X_i(t)-\overline{X(t)}~]^2}{\sum_{i=1}^{N}{X_i(t)}^2} \right \rangle}, \end{eqnarray*} where $\overline{X(t)}=\frac{1}{N} \sum_{i=1}^{N}{X_i(t)}$, and $ \langle \dots \rangle$ denotes the average over the time period $\hat t$. The synchrony measure $\sigma_m$ varies between $0$ and $1$. In particular, $\sigma_m=1$ denotes complete synchronization (perfect synchrony), $\sigma_m=0$ denotes no synchrony, and $0<\sigma_m<1$ marks partial synchrony. The BS can change depending on the coupling strength and the structural properties of a network. For each set of parameters, using $10^4$ initial conditions, we compute the BS in static and time-varying networks. In each case, after removing the transients, we use the measure $\sigma_m$ to identify whether the metacommunity is synchronized or not. \subsection{Cluster identification} Cluster analysis \cite{HoHa08,gupta2017increased} is used to study the coherence dynamics between a pair of patches $(i,j)$ of the metacommunity model \eqref{eq1}. Specifically, we calculate the linear correlation coefficient ($\rho_{ij}$) to compare the dynamics of a pair of patches $(i,j)$. Here, by considering the top predator populations ($z$) from patches $i$ and $j$, the pairwise linear correlation coefficient ($\rho_{ij}$) is computed as: \begin{eqnarray*} \rho_{ij} &=& \frac{\langle z_i z_j\rangle - \langle z_i\rangle \langle z_j\rangle}{\sqrt{\langle z_i^2\rangle - \langle z_i\rangle^2} \sqrt{\langle z_j^2\rangle - \langle z_j\rangle^2}}, \end{eqnarray*} where $\langle \dots \rangle$ denotes the average over the time interval $[t,~t+{\hat t}]$, with $\hat t$ a long enough fixed time period. The $i$-th and $j$-th patches form a cluster whenever $\rho_{ij} \approx 1$. By calculating $\rho_{ij}$ for all pairs of patches, the number of clusters in each simulation of the time-varying network (with $N$ nodes) can be identified. Here, a 1-cluster denotes global (perfect) synchrony, whereas an $N$-cluster denotes complete asynchrony. The time-varying network might also exhibit ${n}$-clusters, where $1\leq n \leq N$. Using these, we compute the frequency of the $n$-cluster solution, where the frequency at time $t$ is defined as: \[\mbox{Frequency of the}~ n\mbox{-cluster solution} = \frac{\mbox{No. of simulations with} \leq n~\mbox{clusters}}{\mbox{No. of simulations}}\;.\] This will be useful, in particular, to understand the intermediate solutions (number of clusters between $2$ and $N-1$), other than complete synchrony and complete asynchrony. The degree of metacommunity persistence can be understood from the cluster identification.
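Both diagnostics are straightforward to evaluate from simulated trajectories. The following minimal Python sketch computes the synchrony measure $\sigma_m$ and counts clusters from the pairwise correlations of the top-predator time series; the array layout and the numerical tolerance used to declare $\rho_{ij}\approx 1$ are illustrative assumptions rather than choices prescribed in the text.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def synchrony_measure(X):
    """X: array of shape (n_times, N); one state variable per patch.
    Returns sigma_m in [0, 1] as defined above."""
    mean_field = X.mean(axis=1, keepdims=True)
    ratio = ((X - mean_field) ** 2).sum(axis=1) / (X ** 2).sum(axis=1)
    return np.sqrt(1.0 - ratio.mean())

def count_clusters(Z, tol=1e-3):
    """Z: array of shape (n_times, N) of top-predator time series.
    Patches i and j are placed in the same cluster when rho_ij > 1 - tol;
    clusters are the connected components of the resulting graph."""
    rho = np.corrcoef(Z.T)              # N x N pairwise correlation matrix
    adjacency = csr_matrix(rho > 1.0 - tol)
    n_clusters, _ = connected_components(adjacency, directed=False)
    return n_clusters
\end{verbatim}
Applying \texttt{count\_clusters} to each independent simulation and tabulating the outcomes gives the cluster frequencies reported below, while \texttt{synchrony\_measure} (evaluated after discarding transients) underlies the BS and synchronization-time computations.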
Note that the linear correlation coefficient $\rho_{ij}$ is different from the synchrony measure $\sigma_m$, in the sense that the synchrony measure characterizes the coherent behavior among all the interacting patches, whereas the correlation coefficient characterizes the coherent behavior between two patches ($i$ and $j$). \begin{figure*} \centering \includegraphics[width=0.85\textwidth,angle=0]{Fig3PSD.eps} \caption{(a) Master stability function for the metacommunity model \eqref{eq1}. The black dashed line marks the neutral line. Using the MSF approach, the coupling range of the stable synchronized solution is calculated as the rewiring probability varies, for both the static and averaged networks; (b) $k = 2$ and (c) $k = 8$. The region bounded below by the dashed (solid) curve marks the region of a stable synchronous state for the averaged (static) network. As observed, on increasing the average degree ($k$), a stable synchronous state is achieved even for lower coupling strengths.} \label{f:smt1} \end{figure*} \subsection{Synchronization time} For a fixed rewiring period ($T$), the rewiring probability ($p$) and the coupling strength ($\epsilon$) simultaneously affect the coherence dynamics of a temporal network. The effect of varying $p$ and $\epsilon$ on the occurrence of complete synchrony can be determined by calculating the synchronization time \cite{kohar2014}. The time to reach the synchronous state in a complex network is known as the synchronization time ($S_t$). Indeed, the synchronization time divides the network dynamics into transient and asymptotic states. In the transient state (i.e., $t \in [0,~S_t]$), the dynamics of a time-varying network fluctuate between synchronous and asynchronous states, whereas in the asymptotic regime (i.e., $t>S_t$), only synchronized dynamics exists. Hence, we compute the synchronization time whenever the network shows complete synchrony. While calculating $S_t$, network synchrony is determined using the synchrony measure ($\sigma_m$) on small sub-intervals of the time series. In particular, when successive values of $\sigma_m$ in the sub-intervals reach the maximum value ($\sigma_m=1$), the time of the first such sub-interval is taken as the synchronization time of that particular network. \section{Results} \subsection{Determining the coupling range of the stable synchronous solution using the MSF approach} We start our analysis by calculating the coupling range in which the synchronous solution of model \eqref{eq1} is stable according to the MSF approach (discussed in Subsection~\ref{M:MSF}). From the MSF depicted in Fig.~\ref{f:smt1}(a), we find that the temporal network can synchronize stably below a critical coupling strength, after the MSF crosses the zero line. For different values of the average degree $k$, Figs.~\ref{f:smt1}(b)-\ref{f:smt1}(c) illustrate the coupling range in which the synchronous solution for the static as well as the averaged network is stable, as the rewiring probability varies. We see that, for the averaged network, the range of the stable synchronous state is broader than that of the static network. This also holds for other metacommunity models (see Fig.~\ref{f:smt3} in the Appendix). Hence, the temporal network outperforms the static network in terms of synchronization stability. Further, with an increase in the average degree, the coupling range widens, i.e., the minimum coupling strength at which the synchronized state is stable decreases with an increase in the average degree.
The difference between the coupling ranges of the static and averaged networks shrinks when the average degree increases (see Fig.~\ref{f:smt1}(c)). \subsection{Synchronous and asynchronous dynamics in static and time-varying networks following the BS measure} \begin{figure*} \centering \includegraphics[width=0.38\textwidth,angle=0]{Fig4aPSD.eps} \hspace{0.2in} \includegraphics[width=0.38\textwidth,angle=0]{Fig4bPSD.eps} \caption{Basin stability of (a) static and (b) time-varying networks with variations in the rewiring probability ($p$), for different coupling strengths ($\epsilon$). At each value of $p$, the basin stability is computed using $10^4$ independent simulations in the time interval $[0,10^4]$. Other parameters are $N=100$, $k=2$, and $T=16$.} \label{f:fig3} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.925\textwidth]{Fig5PSD.eps} \caption{Spatiotemporal dynamics and corresponding time series of the metacommunity model \eqref{eq1} with a temporal network structure: (a),(d) asynchronous oscillations for $p=0.01$; (b),(e) synchronous oscillations for $p=0.5$; and (c),(f) asynchronous oscillations for $p=0.99$. The other parameter values are: $N=100$, $k=2$, $\epsilon=0.005$, and $T=16$.} \label{f:hp_tv_ts} \end{figure*} To understand the influence of the network structure on the collective dynamics of the metacommunity model \eqref{eq1}, we start by studying static networks. We consider static networks (i.e., $L_{ij}$ remains unchanged with time) that follow the Watts-Strogatz (WS) network topology with a rewiring probability ($p$), with the value of $p$ ranging from $p=0$ (regular) to $p=1$ (completely random). For each $p$ value, $10^3$ networks are generated, and the corresponding synchronous dynamics are analyzed in the time interval $[0,10^4]$ for a given coupling strength $\epsilon$. We calculate the BS measure to analyze synchrony in the metacommunity (see Fig.~\ref{f:fig3}(a)). We find that, as the $p$ value increases, for moderate values of $\epsilon$, the BS first increases and eventually decreases to zero (see Fig.~\ref{f:fig3}(a)). Therefore, for moderate values of $\epsilon$, random networks yield smaller synchronization regions than a regular network, increasing the metacommunity persistence. However, as expected, for weak $\epsilon$ values the BS remains at zero, and there is no synchronization region. Our results are in agreement with the previous literature: increasing randomness in a static network structure through the rewiring probability $p$ decreases metacommunity synchronization and hence increases species persistence \citep{ranta2007population}. \begin{figure*} \centering \includegraphics[width=0.38\textwidth]{Fig6aPSD.eps}\hspace{0.24in} \includegraphics[width=0.375\textwidth]{Fig6bPSD.eps} \caption{Effects of changes in: (a) the average degree $k$ (with $\epsilon=0.005$ and $T=16$), and (b) the rewiring period $T$ (with $\epsilon=0.005$ and $k=2$), on the BS measure of temporal networks. For networks with a high average degree ($k = 20$) (i.e., for more connected networks), the BS is almost one irrespective of the chosen rewiring probability $p$. On increasing $T$ from $T=16$, the BS decreases. However, decreasing $T$ has the reverse effect, resulting in a higher BS.} \label{f:smt} \end{figure*} Next, we consider a time-varying network structure of the metacommunity with a rewiring period ($T$), where each snapshot of the network follows the WS topology.
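A minimal Python sketch of how such a sequence of snapshots can be generated is given below; it uses the \texttt{networkx} package and follows the sign convention stated earlier for the Laplacian ($L_{ij}=A_{ij}$ for $i\neq j$ and $L_{ii}=-\,$degree of node $i$). The function names and the way the snapshots are passed to the integrator are illustrative assumptions.
\begin{verbatim}
import numpy as np
import networkx as nx

def snapshot_laplacian(N, k, p, seed=None):
    """One Watts-Strogatz snapshot with N nodes, average degree k and
    rewiring probability p, returned as the Laplacian used in the text:
    L = A - diag(degree), i.e. L_ii = -degree(i) and L_ij = A_ij otherwise."""
    G = nx.watts_strogatz_graph(N, k, p, seed=seed)
    A = nx.to_numpy_array(G)
    return A - np.diag(A.sum(axis=1))

def temporal_laplacians(N, k, p, n_snapshots, seed=0):
    """Independent snapshots; the m-th Laplacian is held fixed on the time
    window [m*T, (m+1)*T) while integrating the metacommunity model."""
    rng = np.random.default_rng(seed)
    return [snapshot_laplacian(N, k, p, seed=int(rng.integers(10**6)))
            for _ in range(n_snapshots)]
\end{verbatim}
For the static-network simulations, a single such snapshot is generated and kept fixed over the whole integration interval.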
Here, the rewiring period is taken as $T=16$, which is a subharmonic of the Hastings-Powell model's dominant period determined using the wavelet analysis. The BS is computed for varying rewiring probability ($p$) at different values of $\epsilon$. At each value of $p$, a total of $10^3$ simulations is performed with a fixed $\epsilon$ in the time interval $[0,10^4]$. Figure~\ref{f:fig3}(b) shows the BS of the time-varying networks computed for different $p$ values. For a range of $\epsilon$ values, the BS increases as the rewiring probability increases and then decreases upon a further increase in $p$. In other words, the temporal network with $T=16$ exhibits larger synchronization regions for intermediate values of $p$ and smaller synchronization regions for low and high $p$ values. Hence, in the proximity of regular and completely random network structures, a metacommunity will exhibit higher species persistence by reducing the synchronization region. Figure~\ref{f:hp_tv_ts} shows the spatial dynamics of the temporal network for different values of $p$. In accordance with the results depicted in Fig.~\ref{f:fig3}(b), the model displays either asynchronous or synchronous dynamics depending upon the rewiring probability $p$. For $p=0.01$ and $p=0.99$, the temporal network exhibits asynchrony (see Figs.~\ref{f:hp_tv_ts}(a) and \ref{f:hp_tv_ts}(d) and Figs.~\ref{f:hp_tv_ts}(c) and \ref{f:hp_tv_ts}(f)). However, for $p=0.5$, the synchronous dynamics of the system is clearly visible in Figs.~\ref{f:hp_tv_ts}(b) and \ref{f:hp_tv_ts}(e). \subsection{Effects of the average degree ($k$) and the rewiring period ($T$) on metacommunity persistence} In this section, we discuss the impact of the average degree $k$ and the rewiring period $T$ on the collective dynamics of the network. Both of these factors influence the connectivity structure of the metacommunity and hence can significantly affect population persistence. Figure~\ref{f:fig3}(b) shows that at $\epsilon=0.005$, $k=2$, and $T=16$, the network can exhibit both synchronous and asynchronous dynamics depending upon the rewiring probability $p$. Next, we show that this result depends significantly upon the choices of $k$ and $T$. To start with, we fix the dispersal rate at $\epsilon=0.005$ and the rewiring period at $T=16$ and determine the BS measure for different values of $k$. With an increase in $k$, the BS increases, resulting in larger synchronization regions (see Fig.~\ref{f:smt}(a)). Eventually, the BS reaches $1$ for a large enough $k$, irrespective of the rewiring probability $p$. While at higher $k$ the BS no longer depends on $p$ and the network achieves global synchronization, at lower $k$ values the BS exhibits a unimodal dependence on $p$. Thus, the chance of reaching the synchronous state is higher for more connected networks. The BS shown in Fig.~\ref{f:smt}(a) is calculated from $10^3$ independent simulations, estimating the frequency of reaching the synchronized state. The result shown in Fig.~\ref{f:smt}(a) also holds for different values of the coupling strength $\epsilon$. Until now, we have considered the rewiring period as $T=16=2^4$, which is a subharmonic of the dominant period $2^7$ as determined by the wavelet analysis (see Fig.~\ref{f:time_phase}). Here, we address how the synchronization region changes with variations in $T$. To calculate the BS measure, we fix $\epsilon=0.005$ and $k=2$.
On increasing $T$ (slower rewiring) from $T=16$, the synchronization regions shrink for different $p$. However, the synchronization regions grow upon decreasing $T$ (faster rewiring). This result is depicted in Fig.~\ref{f:smt}(b). Further, we see that when $T=128$, the BS is almost zero irrespective of the rewiring probability $p$. This suggests that if we give the network more time to adapt to the changes in the structure (by increasing the rewiring period $T$), the synchronization regions shrink, resulting in species persistence via asynchrony. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth,angle=0]{Fig7PSD.eps} \caption{Distribution of clustering: (a)-(c) at a weak ($\epsilon=0.001$) and (d)-(f) at a moderate ($\epsilon=0.005$) dispersal rate, with increasing connectivity from left to right. The frequency of each cluster is computed using $10^4$ independent simulations. Other parameters are $p=0.2$ and $T=16$.} \label{f:sm2} \end{figure} \subsection{Multi-clustering in time-varying networks} For various combinations of the average degree and the coupling strength, clusters in the metacommunity \eqref{eq1} are computed over time (Fig.~\ref{f:sm2}). The $N$-patch metacommunity with the time-varying network structure can show $n$-clusters, $1 \leq n \leq N$, which vary over time depending on the nearest-neighbor connections ($k$). Here, an $N$-cluster represents complete asynchrony (supporting species persistence), and a $1$-cluster represents complete synchrony (which can trigger community collapse and reduce species persistence). Using $10^4$ independent simulations, the frequency of the clusters has been computed and is shown in Figs.~\ref{f:sm2}(a)-\ref{f:sm2}(f) for a weak dispersal rate (top panel) and a moderate dispersal rate (bottom panel), with a small number of connections ($k=2$), followed by larger numbers of connections ($k=4$ and $k=8$). The frequency of $n\,(\geq 11)$-clusters is high when $k=2$. With increasing $k$, the patches become more synchronous, and we see more $n\,(\leq 10)$-clusters. Similarly, for a fixed average degree $k$, an increase in the dispersal rate $\epsilon$ increases the synchrony in the system, and the frequency of the 1-cluster solution increases. Hence, a higher dispersal rate and a higher average degree are detrimental to metacommunity persistence, as they increase the frequency of $n\,(\leq 10)$-clusters and of 1-clusters (global synchrony). \subsection{Synchronization time of time-varying networks} We calculate the synchronization time to assess the influence of network properties in driving the system to synchrony and to determine the time after which it is completely synchronized. The computed synchronization times are represented by violin plots corresponding to each rewiring probability ($p=0.2, 0.4, 0.6$, and $0.8$) (see Fig.~\ref{f:sm3}). By sorting the synchronization times from least to greatest, we determine the average time within which most networks reach the synchronized state. \begin{figure}[!h] \centering \includegraphics[width=0.95\columnwidth,angle=0]{Fig8PSD.eps} \caption{Violin plots of the synchronization time, corresponding to different $p$ values, for varying coupling strength $\epsilon$. With an increase in $\epsilon$, the number of networks synchronizing within a smaller synchronization time increases. Also, irrespective of the change in $p$, the mean synchronization time decreases as $\epsilon$ increases. For each coupling strength, the central black mark indicates the mean.
A white dot denotes the median, and each violin's bottom and top edges indicate the 25th and 75th percentiles, respectively. The size of a violin represents the number of initial conditions for which the network synchronizes within the considered time interval. Here, we have considered $5 \times 10^3$ initial conditions to study the synchronization time for the four different rewiring probabilities ($p=0.2, 0.4, 0.6, 0.8$). Other parameters are $N=100$, $k=2$, and $T=16$.} \label{f:sm3} \end{figure} In Fig.~\ref{f:sm3}, the minimum and maximum synchronization times are indicated by the lower and upper extremes, respectively. At each rewiring probability, the mean synchronization time of the networks is indicated by the central black mark in each violin. The majority of the networks require less synchronization time as the rewiring probability increases; upon a further increase in the rewiring probability ($p=0.8$), the synchronization time increases again. The number of synchronized networks increases with the rewiring probability and decreases again at high rewiring probability. A clear implication of the synchronization-time calculation is that, at the extreme (very low and very high) rewiring probabilities, the number of networks reaching the synchronized state is smaller, in agreement with our basin stability results in Fig.~\ref{f:fig3}(b). The effect is most prominent at low coupling strength ($\epsilon=0.005$), but the results are qualitatively similar and hold for other coupling strengths. \section{Conclusions and Discussion} The dispersal network structure is an essential factor determining the fate of ecological communities amidst environmental degradation \cite{ranta2007population}. Species may switch interactions and opt for a more viable choice owing to unfavorable habitat conditions in a dynamic environment. Interestingly, these changes can be envisioned in networks at different time scales \cite{zhou2016synchronization}. However, to the best of our knowledge, the dynamics of ecological networks under the framework of a time-varying network topology remain less explored. Extinction in ecological networks has been associated with synchronous dynamics, which further increases the risk of a community collapse \cite{earn2000coherence}. Against this backdrop, we study the dynamics of time-varying ecological networks and their impact on metacommunity persistence. Here we take the novel approach of evolving the network structure for a range of rewiring probabilities with varying rewiring time scales. We obtain an interesting yet alarming result: the time scale of rewiring and the rewiring probability interact in inducing or suppressing synchrony in the system. Our key results indicate that the coupling strength has a positive effect on synchrony for a certain range of rewiring probabilities $p$. Beyond a critical threshold value of $p$, networks tend to be more random, and the system reaches an asynchronous state. One of the main results of our study is that slower rewiring promotes asynchrony in the system. We observe that, on increasing the rewiring period, the BS decreases irrespective of the rewiring probability, eventually pushing the system to an asynchronous state. Apart from the basin stability measure, the estimated synchronization time and the multi-cluster frequency analysis support our key findings.
Our work presents an in-depth study of collective population dynamics in temporal networks using the MSF approach and the BS measure, which aid in investigating local and global synchrony, respectively. Certainly, quantifying the stability of the synchronous manifold is of great ecological importance. While, in the face of global environmental change, the evolution of the species dispersal network structure is inevitable, our results indicate that slowing the evolutionary time scale can serve as a mitigation strategy to prevent synchrony -- thus reducing global extinction risk. We believe our results have much broader implications for managing real ecological networks and demand further in-depth research in this direction. We validate the robustness of our results for another important ecological model, namely, the Blasius-Huppert-Stone model \cite{blasius1999complex} (see Appendix). We obtain qualitatively similar findings for both models. Our approach provides intriguing results, albeit requiring future investigation in a large class of other ecological networks. While structural evolution is ubiquitous across networks of diverse origin, such as biogeochemical networks \cite{falkowski1998biogeochemical}, food-trade networks \cite{wang2021evolution}, and other socio-economic networks \cite{schweitzer2009economic,liu2012structure}, further work along this direction can provide practical mitigation policies towards a sustainable future. \section*{Acknowledgments} P.S.D. acknowledges financial support from SERB, Department of Science and Technology (DST), India (Grant number: CRG/2019/002402). S.B. acknowledges Ramesh Arumugam for helpful discussion. \section*{APPENDIX: The Blasius-Huppert-Stone metacommunity model} \begin{figure}[htpb] \centering \includegraphics[width=0.9\columnwidth,angle=0]{Fig9PSD.eps} \caption{Wavelet analysis of a chaotic time series of the metacommunity model \eqref{eqA1}: (a) Phase portrait depicting a chaotic trajectory, (b) corresponding wavelet power spectra, and (c) the wavelet global spectrum. Model parameters are $a=1$, $b=1$, $c=10$, $\beta_1=0.2$, $\beta_2=1$, $K_{1}=0.05$, and $w^*=0.006$.} \label{f:sm4} \end{figure} \begin{figure*} \centering \includegraphics[width=0.85\textwidth,angle=0]{Fig10PSD.eps} \caption{(a) Master stability function for the coupled Blasius-Huppert-Stone model \eqref{eqA1}. The black dashed line marks the neutral line. The region of the stable synchronized solution is plotted as a function of $p$ and $\epsilon$ as a solid (dashed) curve corresponding to the static (averaged) network, calculated using the MSF approach; (b) $k = 2$ and (c) $k = 8$. The region between the solid (dashed) curves corresponds to the range of coupling strength $\epsilon$ where the synchronous state is stable for the static (averaged) network.} \label{f:smt3} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.38\textwidth,angle=0]{Fig11aPSD.eps} \hspace{0.2in} \includegraphics[width=0.3748\textwidth,angle=0]{Fig11bPSD.eps} \caption{Basin stability (BS) of the time-varying network \eqref{eqA1} across different values of the rewiring probability $p$: (a) for different values of $\epsilon$ (with $k=2$ and $T=16$), and (b) for different values of $T$ (with $k=2$ and $\epsilon=0.1$).} \label{f:BSAp} \end{figure*} We demonstrate results for the MSF approach and the BS regions for another temporal ecological network model, the Blasius-Huppert-Stone model \cite{blasius1999complex}.
The results obtained are qualitatively similar to those for the Hastings-Powell model and add to the generality of our study. The coupled Blasius-Huppert-Stone network model is represented as follows: \begin{subequations}\label{eqA1} \begin{align} \frac{dx_i}{dt}&= a x_i-\beta_{1}\frac{x_i y_i}{1+K_1 x_i},\\ \frac{dy_i}{dt}&= \beta_{1}\frac{x_i y_i}{1+K_1 x_i}-\beta_{2} y_i z_i-b y_i +\epsilon \sum_{j=1}^{N} L_{ij}y_j,\\ \frac{dz_i}{dt}&= -c(z_{i}-w^{*})+ \beta_{2} y_i z_i + \epsilon \sum_{j=1}^{N} L_{ij} z_j, \end{align} \end{subequations} where $x_{i}$, $y_{i}$, and $z_{i}$ represent the vegetation, herbivore, and predator populations, respectively, in the $i$-th patch. The growth rates of each trophic species in the absence of interspecific interaction are represented by the parameters $a$, $b$, and $c$, respectively. The predator-prey and consumer-resource interactions are incorporated into the equations via the Lotka-Volterra term or the Holling type-II interaction term, and $\epsilon$ denotes the dispersal rate. When $\epsilon=0$, for a specific set of parameters, the dynamics of the model \eqref{eqA1} are chaotic, and the attractor is displayed in Fig.~\ref{f:sm4}(a). Corresponding wavelet analyses, used to determine the rewiring period $T$, are presented in Figs.~\ref{f:sm4}(b)-\ref{f:sm4}(c). Parameters of the uncoupled model \eqref{eqA1} (when $\epsilon=0$) used for numerical simulations are $a=1$, $b=1$, $c=10$, $\beta_1=0.2$, $\beta_2=1$, $K_{1}=0.05$, and $w^*=0.006$. We have calculated the stability regions of the synchronous state using the MSF approach for static and temporal networks, as shown in Fig.~\ref{f:smt3}. Figure~\ref{f:smt3}(a) shows the expected stability interval as a function of the normalized coupling strength ($\alpha$). The range of the coupling strength ($\epsilon$) in which a synchronous solution is stable, for different rewiring probabilities and average degrees ($k=2$ and $k=8$), is plotted in Figs.~\ref{f:smt3}(b)-\ref{f:smt3}(c). We observe that the synchronous state is stable for $\alpha_{1}<\epsilon \lambda_{k}< \alpha_{2}$, where $\alpha_{1}=0.13$ and $\alpha_{2}=2.62$. One can also conclude that the expected regions of the stable synchronous solution shrink with decreasing rewiring probability $p$, and the result is similar to the one shown in Fig.~\ref{f:smt1}. While in Fig.~\ref{f:smt1} the stability region is bounded only from below, in Fig.~\ref{f:smt3} it is bounded both from below and above. Figure~\ref{f:BSAp} shows changes in the BS for different coupling strengths $\epsilon$ and rewiring periods $T$, with $k=2$. In Fig.~\ref{f:BSAp}(a), we observe that for moderate $\epsilon$ values, the synchronous solution is stable for intermediate rewiring probabilities. However, for low and high $\epsilon$ values, the BS is zero irrespective of the choice of rewiring probability $p$, resulting in complete asynchrony in the system. This result is in agreement with the synchronization region calculated using the MSF approach (see Fig.~\ref{f:smt3}(b)). Further, increasing $T$ lowers the BS of the time-varying networks (Fig.~\ref{f:BSAp}(b)), and eventually the BS becomes zero for all values of $p$ at a high rewiring period $T$. These results are in line with our previous findings illustrated in Fig.~\ref{f:fig3}(b) and Fig.~\ref{f:smt}(b) for the Hastings-Powell model.
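For reproducibility, a short Python sketch of how the coupled model \eqref{eqA1} can be integrated over one rewiring window is given below; it takes a Laplacian snapshot (for instance, one generated by the Watts-Strogatz construction described in the main text) and uses the parameter values listed above. The choice of integrator and tolerances is illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# parameters of the Blasius-Huppert-Stone model, as used for Fig. 9
a, b, c = 1.0, 1.0, 10.0
beta1, beta2, K1, w_star = 0.2, 1.0, 0.05, 0.006

def bhs_rhs(t, u, L, eps):
    """Right-hand side of the coupled model (A1);
    u = [x_1..x_N, y_1..y_N, z_1..z_N]."""
    N = L.shape[0]
    x, y, z = u[:N], u[N:2 * N], u[2 * N:]
    uptake = beta1 * x * y / (1.0 + K1 * x)
    dx = a * x - uptake
    dy = uptake - beta2 * y * z - b * y + eps * L @ y
    dz = -c * (z - w_star) + beta2 * y * z + eps * L @ z
    return np.concatenate([dx, dy, dz])

def integrate_window(u0, L, eps, T):
    """Integrate the metacommunity over one rewiring window of length T
    and return the final state (used as the initial condition for the
    next window, with a freshly rewired Laplacian)."""
    sol = solve_ivp(bhs_rhs, (0.0, T), u0, args=(L, eps),
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]
\end{verbatim}
Setting \texttt{eps = 0} and $N=1$ recovers the uncoupled chaotic dynamics underlying Fig.~\ref{f:sm4}(a); concatenating such windows, with a new Laplacian snapshot at each one, reproduces the time-varying simulations.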
\section{Introduction} The study of directed polymers in random environments has witnessed important progress in recent years. A common feature of the models is an interaction between a reference path measure, which is typically given by a random walk or a Brownian motion, and a background random environment. The polymer measure is then formulated as the Gibbs measure with a Hamiltonian describing the accumulated energy collected along the path in the random environment. The physically interesting quantities include the fluctuations of the free energy, the typical behaviors of the paths, and so on, see e.g. the books \cite{comets2017directed,den2009random,giacomin2007random} and the references therein. While the random walk/Brownian motion is diffusive, the random environment can change the polymer's behavior drastically. Indeed, when the temperature is low, the typical path is expected to be super-diffusive, and the transverse displacement of the path of length $T$ is expected to be of order $T^{\xi}$ with an exponent $\xi>\tfrac12$. In $d=1$, it has been conjectured that $\xi=\tfrac23$ and the directed polymer model falls into the KPZ universality class. The proofs of this conjecture in several settings can be found, for example, in \cite{amir2011probability,balazs2011fluctuation,barraquand2017random,borodin2014macdonald,borodin2014free,borodin2013log,corwin2016kpz,ferrari2006scaling,johansson2000shape,landim2004superdiffusivity,quastel2007t1,seppalainen2012scaling}. For a more complete list, we refer to the review articles \cite{corwin2012kardar,quastel2015one}. Another feature of the polymer paths in low temperatures is that they localize and concentrate in small regions, see e.g. \cite{bates2016endpoint,carmona2006strong,comets2003directed,vargas2007strong} and the references therein. In this paper, we consider the Wiener measure as the reference path measure and a generalized spacetime Gaussian random field as the environment, and our focus will be on the endpoint distribution of the polymer path. The results presented in the sequel offer a possibly new perspective of this problem. We develop a deterministic hierarchy governing the evolution of the endpoint distribution. \gu{From the hierarchy, we compute the generator of the Markov process associated with the endpoint distribution, for linear functionals.} In $d\geq 3$ and for high temperatures, we make use of the hierarchy to quantify the diffusive behavior of the polymer path. \gu{We also study a nonlocal reaction-diffusion equation motivated by the generator, and establish a super-diffusive behavior with exponent $\tfrac23$; that is, we establish spreading on spatial scales $O(t^{2/3})$.} Our arguments are based on a mixture of tools from both probability and partial differential equations. \subsection{Main result} Let $\eta$ be a spacetime white noise built on the probability space $(\Omega,\mathcal{F},\mathbf{P})$ and the expectation with respect to $\eta$ is denoted by $\mathbf{E}$. Fix a mollifier $0 \leq \phi\in \mathcal{C}_c^{\infty}(\mathbb{R}^d)$ with $\int \phi=1$. We smooth $\eta$ in the $x-$variable and define \[ \eta_\phi(t,x)=\int_{\mathbb{R}^d}\phi(x-y)\eta(t,y)dy, \] which is a generalized Gaussian random field. The covariance function of $\eta_\phi$ is then given by \[ \mathbf{E}[\eta_\phi(t,x)\eta_\phi(s,y)]=\delta(t-s)R(x-y), \] with the spatial covariance function \[ R(x)=\int_{\mathbb{R}^d} \phi(x+y)\phi(y)dy \in C_c^\infty(\mathbb{R}^d). 
\] Let $(\Sigma,\mathcal{A},\mathbb{P})$ be another probability space and $w$ be a Brownian motion built on it. The initial location $w_0$ is distributed according to $\mu_0(dx)$. Throughout the paper, we assume \[ \mu_0(dx)=q_0(x)dx, \quad\quad\quad\quad q_0\geq 0, \quad \int_{\mathbb{R}^d} q_0(x)dx=1, \] and consider the two cases (i) $q_0\in C_c(\mathbb{R}^d)$ (ii) $q_0(x)=\delta(x)$. While other initial distributions can be considered as well, our main focus here is on $q_0(x)=\delta(x)$. We denote the expectation with respect to $w$ by $\mathbb{E}_{\mu_0}$, and define the energy of any Brownian path $w:[0,T]\to\mathbb{R}^d$ in the Gaussian environment $\eta_\phi$ by \begin{equation}\label{e.defHt} H(T,w)=\int_0^T \eta_\phi(t,w_t)dt. \end{equation} In this paper, we study the endpoint distribution of the random polymer, obtained by tilting the Brownian motion by a factor of $e^{\beta H(T,w)}$, where $\beta>0$ is the inverse temperature; that is, for any $T\geq 0$, we are interested in: \begin{equation}\label{e.endpoint} \begin{aligned} &\mu_T(dx)=q(T,x)dx, \\ &q(T,x)=Z(T)^{-1}\mathbb{E}_{\mu_0}[\delta(w_T-x)\exp(\beta H(T,w)-\tfrac12\beta^2R(0)T)],\ \ \text{ and}\\ &Z(T)=\mathbb{E}_{\mu_0}[\exp(\beta H(T,w)-\tfrac12\beta^2R(0)T)]. \end{aligned} \end{equation} In $d=1$, we also consider the random environment given by the spacetime white noise $\eta$, without any mollification in the spatial variable. To unify the notation, we allow $\phi$ to be the Dirac function $\phi(x)=\delta(x)$ in $d=1$, and the spatial covariance function in this case is \[ R(x)=\int_{\mathbb{R}} \phi(x+y)\phi(y)dy=\delta(x). \] We define \begin{equation}\label{e.endpointwhite} \begin{aligned} &q(T,x)=Z(T)^{-1}\mathbb{E}_{\mu_0}[\delta(w_T-x):\exp(\beta\textstyle\int_0^T \eta(s,w_s)ds):],\\ &Z(T)=\mathbb{E}_{\mu_0}[:\exp(\beta \textstyle\int_0^T \eta(s,w_s)ds):]. \end{aligned} \end{equation} Here $:\exp:$ is the Wick-ordered exponential, see e.g. \cite{nualart1997weighted}. In both \eqref{e.endpoint} and \eqref{e.endpointwhite}, the endpoint density $q$ is related to a stochastic heat equation with a multiplicative noise, which we will make more precise in Section~\ref{s.poshe}. We emphasize that the case of $\phi(x)=R(x)=\delta(x)$ is restricted to $d=1$. For any $t\geq0, n\geq 1$ and $\mathbf{ x}_{1:n}:=(x_1,\ldots,x_n)\in\mathbb{R}^{nd}$, define the $n-$point density by \begin{equation}\label{e.ndensity} Q_n(t,\mathbf{x}_{1:n})=\mathbf{E}[q(t,x_1)\ldots q(t,x_n)]. \end{equation} For any two functions $f,g$, we define $\langle f,g\rangle:=\int f(x)g(x)dx$ as long as the integral is well-defined in a standard way. If $f$ and $g$ also depend on the $t$ variable, then we write $\langle f(t),g(t)\rangle=\int f(t,x)g(t,x)dx$; see \Cref{s.notation} for more details on the notations and conventions. We are now able to state the first main result. 
\begin{theorem}\label{t.bbgky} For any $n\geq1$ and $T>0$, if $f\in C_b^{1,2}([0,T]\times \mathbb{R}^{nd})$, we have \begin{equation}\label{e.eqP} \begin{aligned} \langle f(T), Q_n(T)\rangle=\langle f(0), q_0^{\otimes n}\rangle&+\textstyle\int_0^T \langle (\partial_t+\tfrac12\Delta) f(t), Q_n(t)\rangle dt\\ &+\beta^2\sum_{k=0}^2\int_0^T\langle f_{k,R}(t),Q_{n+k}(t) \rangle dt, \end{aligned} \end{equation} where the functions $f_{k,R}: [0,T]\times\mathbb{R}^{(n+k)d}\to \mathbb{R}$ are given by \begin{equation}\label{e.defFR} \begin{aligned} &f_{0,R}(t,\mathbf{x}_{1:n})=f(t,\mathbf{x}_{1:n})\textstyle\sum_{1\leq i<j\leq n}R(x_i-x_j),\\ &f_{1,R}(t,\mathbf{x}_{1:n},x_{n+1})=-nf(t,\mathbf{x}_{1:n})\textstyle\sum_{i=1}^n R(x_i-x_{n+1}),\\ &f_{2,R}(t,\mathbf{x}_{1:n},x_{n+1},x_{n+2})=\tfrac12n(n+1)f(t,\mathbf{x}_{1:n})R(x_{n+1}-x_{n+2}). \end{aligned} \end{equation} In other words, $\{Q_n\}_{n\geq 1}$ is a weak solution to the following hierarchy: \begin{equation}\label{e.hierarchy} \begin{split} \partial_tQ_n&(t,\mathbf{x}_{1:n})=\tfrac12\Delta Q_n(t,\mathbf{x}_{1:n})+\beta^2Q_n(t,\mathbf{x}_{1:n})\textstyle\sum_{1\leq i<j\leq n}R(x_i-x_j)\\ &-\beta^2n\textstyle\int_{\mathbb{R}^d}Q_{n+1}(t,\mathbf{x}_{1:n},x_{n+1})\sum_{i=1}^nR(x_i-x_{n+1})dx_{n+1}\\ &+\beta^2\tfrac{n(n+1)}{2}\textstyle\int_{\mathbb{R}^{2d}} Q_{n+2}(t,\mathbf{x}_{1:n},x_{n+1},x_{n+2})R(x_{n+1}-x_{n+2})dx_{n+1}dx_{n+2}, \end{split} \end{equation} with the initial condition $Q_n(0,\cdot)=q_0^{\otimes n}$. \end{theorem} \subsection{Applications of the PDE hierarchy} Let $\mathcal{M}_1(\mathbb{R}^d)$ be the space of probability measures on $\mathbb{R}^d$. Due to white-in-time correlation of $\eta$, $\{\mu_T\}_{T\geq0}$ is a Markov process taking values in $\mathcal{M}_1(\mathbb{R}^d)$. For any $f\in C_b(\mathbb{R}^d)$, we associate it in the natural way with a functional $\mathrm{F}_f:\mathcal{M}_1(\mathbb{R}^d)\to \mathbb{R}$ given by \[ \mathrm{F}_f(\mu)=\langle f,\mu\rangle =\int_{\mathbb{R}^d} f(x) \mu(dx), \] where we abused the notation to also let $\langle\cdot,\cdot\rangle$ denote the pairing between $C_b(\mathbb{R}^d)$ and $\mathcal{M}_1(\mathbb{R}^d)$. Denote the generator of $\{\mu_T\}_{T\geq0}$ by $\mathcal{L}$, and let $\star$ denote the convolution. An immediate consequence of Theorem~\ref{t.bbgky} is \begin{corollary}\label{c.generator} Assume $\mu_0(dx)=q_0(x)dx$ with $q_0\in C_c(\mathbb{R}^d)$. For any $f\in C_b^2(\mathbb{R}^d)$, \begin{equation}\label{e.generatorcolor} \mathcal{L} \mathrm{F}_f(\mu_0)=\langle \tfrac12\Delta f,q_0\rangle+\beta^2\langle f, \mathcal{T} q_0\rangle, \end{equation} with the operator $\mathcal{T}$ defined as \begin{equation}\label{e.defLR} \begin{aligned} \mathcal{T} q_0(x)= \langle R\star q_0,q_0\rangle q_0(x)-q_0(x)R\star q_0(x). \end{aligned} \end{equation} \end{corollary} Another application of Theorem~\ref{t.bbgky} is to study the diffusive behavior of the polymer endpoint in a high temperature regime when $d\geq3$. It is well-known from the classical work \cite{albeverio,bolt,commets,spencer,Song} that in this case and under a diffusive rescaling, the polymer endpoint converges to a standard normal distribution in the quenched sense.
In our notation, as $q(T,\cdot)$ denotes the quenched density of the endpoint of the polymer of length $T$, the result says that for almost every realization of the random environment, and any $h\in C_b(\mathbb{R}^d)$, we have \begin{equation}\label{e.qinv} \int_{\mathbb{R}^d} h(x) T^{d/2}q(T,\sqrt{T}x) dx\to \int_{\mathbb{R}^d} h(x) G_1(x)dx, \quad\quad \mbox{ as }T\to\infty. \end{equation} Here $G_1(\cdot)$ is the density of $N(0,\mathrm{I}_d)$. The results in \cite{bolt,commets,spencer} are for a discrete i.i.d. random environment. In the setting of the continuous Gaussian environment considered in this paper, the same result was proved in \cite{chiranjib}. The above quenched central limit theorem~\eqref{e.qinv} immediately implies the annealed one (recall that $Q_1=\mathbb{E}[q]$) \begin{equation}\label{e.ainv} \int_{\mathbb{R}^d} h(x) T^{d/2}Q_1(T,\sqrt{T}x) dx\to \int_{\mathbb{R}^d} h(x) G_1(x)dx, \quad\quad \mbox{ as }T\to\infty. \end{equation} As $\{Q_n\}_{n\geq1}$ solves the PDE hierarchy \eqref{e.hierarchy}, which can be viewed as a ``perturbation'' of the heat equation, it is natural to ask if we can analyze the system of equations and show that the ``perturbation'' is indeed small in this asymptotic regime. It turns out that the hierarchy provides a nice analytic framework for us to give a simple proof of \eqref{e.ainv} and to also quantify the convergence rate. Let $X_T$ denote a random variable with the density $T^{d/2}Q_1(T,\sqrt{T}x)$, then the Wasserstein distance between $X_T$ and $N(0,\mathrm{I}_d)$ is defined as \[ d_{\mathrm{W}}(X_T,N(0,\mathrm{I}_d)):=\sup_{h\in\mathrm{Lip}(1)}\left|\int_{\mathbb{R}^d} h(x)T^{d/2}Q_1(T,\sqrt{T}x) dx-\int_{\mathbb{R}^d} h(x) G_1(x)dx\right|, \] where $\mathrm{Lip}(1)=\{h\in C(\mathbb{R}^d): |h(x)-h(y)|\leq |x-y|\}$. \begin{theorem}\label{t.qclt} Assume $q_0(\cdot)=\delta(\cdot)$. In $d\geq3$, there exist positive constants $\beta_0(d,R),C(d,R,\beta)$ such that if $\beta<\beta_0$, we have for all $T>0$ that \begin{equation}\label{e.wdbd} \begin{aligned} d_{\mathrm{W}}(X_T,N(0,\mathrm{I}_d)) \leq C\left(\tfrac{\log T}{\sqrt{T}}\mathbbm{1}_{d=3}+\tfrac{1}{\sqrt{T}}\mathbbm{1}_{d\geq 4}\right). \end{aligned} \end{equation} \end{theorem} \gu{Similar results were obtained in \cite[Theorem 4]{bold}, and these suggest that the error estimates above are sharp when $d\geq 5$. Of particular interest is the mean square displacement of the polymer endpoint. We have the following error bound \begin{theorem}\label{t.msd} Under the same assumption of Theorem~\ref{t.qclt}, it holds for all $T>0$ that \begin{equation}\label{e.msdbd} \big|\tfrac{1}{T}\int_{\mathbb{R}^d} |x|^2 Q_1(T,x)dx-d\big|\leq C\left(\tfrac{1}{\sqrt{T}}\mathbbm{1}_{d=3}+\tfrac{\log T}{T}\mathbbm{1}_{d=4}+\tfrac{1}{T}\mathbbm{1}_{d\geq5}\right). \end{equation} \end{theorem} There are several results on error estimates for the mean square displacement \cite{albeverio,spencer,Song}, and Theorem~\ref{t.msd} seems to provide the best rate. We discuss the relation of Theorem~\ref{t.msd} to the previous results in more details in Remark~\ref{r.err} below. \gu{\subsection{A nonlocal reaction-diffusion equation motivated by $\mathcal{L}$} Since $\{\mu_T\}_{T\geq0}$ is a Markov process, we can consider the forward Kolmogorov equation associated with it. Recall that $\mathcal{L}$ denotes its generator. 
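For linear functionals, the action of $\mathcal{L}$ can be evaluated explicitly from Corollary~\ref{c.generator}. The following short Python sketch computes the right-hand side of \eqref{e.generatorcolor} on a one-dimensional periodic grid; the Gaussian choices of $R$, $q_0$, and $f$ are purely illustrative and are not tied to any statement in the paper.
\begin{verbatim}
import numpy as np

def generator_on_linear_functional(f, q0, R, dx, beta):
    """Discretization of the formula
         L F_f(q0) = <(1/2) f'', q0> + beta^2 <f, T q0>,
       with  T q0 = <R*q0, q0> q0 - q0 (R*q0),
       all quantities sampled on a uniform periodic 1-d grid of spacing dx."""
    lap_f = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2
    Rq = np.convolve(q0, R, mode="same") * dx   # (R * q0)(x), R centered on the grid
    T_q0 = (Rq @ q0) * dx * q0 - q0 * Rq        # the operator T from the corollary
    return 0.5 * (lap_f @ q0) * dx + beta**2 * (f @ T_q0) * dx

# illustrative data: all functions sampled on the same grid
x = np.linspace(-10.0, 10.0, 2001); dx = x[1] - x[0]
q0 = np.exp(-x**2 / 2.0); q0 /= q0.sum() * dx   # probability density
R = np.exp(-x**2)                               # stand-in covariance function
f = np.tanh(x)
print(generator_on_linear_functional(f, q0, R, dx, beta=1.0))
\end{verbatim}
In the case $R(\cdot)=\delta(\cdot)$, the convolution $R\star q_0$ is simply replaced by $q_0$ itself.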
For any $f\in C_b(\mathbb{R}^d)$, let \[ \mathscr{U}:[0,\infty)\times \mathcal{M}_1(\mathbb{R}^d)\to \mathbb{R} \] be the solution to \begin{equation}\label{e.abseq} \begin{aligned} &\partial_t \mathscr{U}(t,\mu)=\mathcal{L} \mathscr{U}(t,\mu), \quad \quad t>0,\\ &\mathscr{U}(0,\mu)=\langle f,\mu\rangle. \end{aligned} \end{equation} With $\mu_0(dx)=q_0(x)dx$, the solution can be written as \[ \mathscr{U}(t,\mu_0)=\mathbf{E}[\langle f,\mu_t\rangle]=\int_{\mathbb{R}} f(x)Q_1(t,x)dx, \] where we used the fact that $\mu_t(dx)=q(t,x)dx$ and $\mathbf{E}[q(t,x)]=Q_1(t,x)$. Thus, one can try to study the asymptotic behavior of $Q_1(t,\cdot)$ as $t\to\infty$ by considering the equation \eqref{e.abseq}, rather than the system \eqref{e.bbgky}.} \gu{Nevertheless, computing the generator $\mathcal{L}$ for general functionals is difficult, and, in addition, the obtained formula might not be easy to manipulate. In Corollary~\ref{c.generator}, we considered the simplest possible linear functionals, and the equation \eqref{e.generatorcolor} shows that, for these functionals on $\mathcal{M}_1(\mathbb{R}^d)$, the action of $\mathcal{L}$ is equivalent to that of the differential operator $\tfrac12\Delta +\beta^2\mathcal{T}$ acting on the corresponding density. This motivates us to consider the following deterministic PDE associated with the operator $\mathcal{T}$. Take the case of $R(\cdot)=\delta(\cdot)$ in $d=1$, and denote the $L^2(\mathbb{R}^d)$ norm by $\|\cdot\|$. For any $f\in \gu{L^2(\mathbb{R})}$, we have \[ \mathcal{T} f(x)= \|f\|^2 f(x)-f(x)^2. \] Consider the following equation \begin{equation}\label{e.maineq} \begin{aligned} \partial_t g(t,x)&=\tfrac12\Delta g(t,x)+\beta^2\mathcal{T} g(t,\cdot)\\ &=\tfrac12\Delta g(t,x)+\beta^2\|g(t,\cdot)\|^2 g(t,x)-\beta^2g(t,x)^2, \quad\quad t>0, x\in\mathbb{R},\\ g(0,x)&=q_0(x). \end{aligned} \end{equation} We see in the sequel that this describes the evolution of a probability density.} \gu{It is unclear whether \eqref{e.maineq} has anything to do with \eqref{e.abseq}, since we only know that the two evolutions match at $t=0$, which is due to the fact that $\mathscr{U}(0,\mu)=\langle f,\mu\rangle$ is a linear functional on $\mathcal{M}_1(\mathbb{R})$. Nevertheless, the following result shows a super-diffusive behavior of $g$ with the exponent $\tfrac23$. \begin{theorem}\label{t.23} In $d=1$, assume $0\leq\, q_0\in C_c(\mathbb{R})$ and $\int_{\mathbb{R}} q_0(x)dx = 1$. For any $p\geq1$, there exists a constant $C=C(p,\beta,q_0)>0$ such that \[ C^{-1}T^{\frac{2p}{3}}\leq \int_{\mathbb{R}}|x|^pg(T,x)dx \leq C T^{\frac{2p}{3}}, \quad\quad \mbox{ for all } T\geq1. \] \end{theorem}} \gu{It is unclear to us whether the above super-diffusion with the ``right'' exponent $\tfrac{2}{3}$ is a coincidence or not. In the case of the spacetime white noise environment in $d=1$, the function $Q_1$ is the annealed density of the endpoint of the continuum directed random polymer \cite{alberts2014continuum}, and $\{Q_n\}_{n\geq 1}$ is a weak solution of the infinite system: \begin{equation}\label{e.bbgky} \begin{aligned} \partial_t Q_n(t,x_1,\ldots,x_n)=&\tfrac12\Delta Q_n(t,x_1,\ldots,x_n)\\ &+\beta^2Q_n(t,x_1,\ldots,x_n)\textstyle\sum_{1\leq i<j\leq n}\delta(x_i-x_j)\\ &-\beta^2n\textstyle\sum_{i=1}^nQ_{n+1}(x_1,\ldots,x_n,x_i)\\ &+\beta^2\tfrac{n(n+1)}{2}\textstyle\int_{\mathbb{R}}Q_{n+2}(x_1,\ldots,x_n,\tilde{x},\tilde{x})d\tilde{x}, \quad\quad n\geq1. 
\end{aligned} \end{equation} For this model, the super-diffusion with exponent $\tfrac{2}{3}$ was shown in \cite[Theorem 1.11]{corwin2016kpz}. Taking $n=1$, the above system yields \begin{equation}\label{e.bbgky1} \partial_t Q_1(t,x)=\tfrac12\Delta Q_1(t,x)+\beta^2\textstyle\int_{\mathbb{R}} Q_3(t,x,\tilde{x},\tilde{x})d\tilde{x}-\beta^2Q_2(t,x,x). \end{equation} Thus, the question reduces to justifying if \eqref{e.maineq} is a reasonable approximation of \eqref{e.bbgky1} or not. Incidentally, if we make the assumption of a factorized joint density to close the hierarchy \begin{equation}\label{e.ass} Q_2(t,x_1,x_2)\approx Q_1(t,x_1)Q_1(t,x_2), \quad \quad Q_3(t,x_1,x_2,x_3)\approx \prod_{j=1}^3 Q_1(t,x_j), \end{equation} which is similar in spirit to the molecular chaos assumption in the BBGKY hierarchy of kinetic theory \cite{cercignani2013mathematical}, then \eqref{e.bbgky1} reduces to \eqref{e.maineq}: \begin{equation}\label{e.ass1} \partial_t Q_1(t,x)\approx \tfrac12\Delta Q_1(t,x)+\beta^2\|Q_1(t,\cdot)\|^2Q_1(t,x)-\beta^2 Q_1^2(t,x). \end{equation} However, we do not claim either \eqref{e.ass} or \eqref{e.ass1} in this paper.} \subsection{Discussions} As the endpoint density of the Brownian motion solves the standard heat equation, we look for a counterpart when the Brownian motion is weighted by a random environment. For each fixed realization of the random environment, it is known that the polymer model is equivalent to a diffusion in a (different) random environment \cite[Theorem 2]{bakhtin2018global}. Thus, in the quenched setting, the analogue of the standard heat equation we are looking for is a Fokker-Planck equation with a random coefficient, describing the evolution of the density of the aforementioned diffusion. However, studying either the solution to the Fokker-Planck equation or its ensemble average seems to be as complicated as the polymer model itself; hence, the main message we wish to convey here is the following: rather than studying the single point distribution, one could instead look at the multipoint distributions defined in \eqref{e.ndensity}. By definition, for each $T\geq 0$, $Q_n(T,\cdot)$ is a probability density on $\mathbb{R}^{nd}$. While we do not have an underlying dynamics that reproduces the evolution of $Q_n$, heuristically, it can be viewed as the joint density of $n$ particles, interacting indirectly through their separate individual interaction with the common random environment, similar to ``independent walkers in the same environment''. Theorem~\ref{t.bbgky}, which comes from a straightforward application of It\^o's formula, shows that $\{Q_n\}_{n\geq1}$ solves a hierarchical PDE system. In this way, the study of the endpoint distribution of the random polymer, in the annealed setting, may be reduced to the study of $Q_1$ and the analysis of the deterministic PDE system satisfied by $\{Q_n\}_{n\geq1}$. We make a few remarks. \begin{remark} A result similar to Theorem~\ref{t.bbgky} was proved in \cite[Theorem 3.1]{carmona2006strong} for a different polymer model (albeit not formulated as a PDE hierarchy), with a random walk reference path measure and an environment on $\mathbb{R}_+\times \mathbb{Z}^d$ made of i.i.d. copies of Brownian motions. 
\end{remark} \begin{remark} We show in the sequel (see \Cref{l.timereversal}) that \begin{equation}\label{e.numerator} Q_n(t,x_1,\ldots,x_n)=\mathbf{E}\bigg[\frac{u(t,x_1)\ldots u(t,x_n)}{(\int_{\mathbb{R}} u(t,x)dx)^n}\bigg], \end{equation} with $u$ solving the stochastic heat equation \[ \partial_t u=\tfrac12\Delta u+\beta u\eta_\phi, \quad\quad u(0,\cdot)=q_0(\cdot). \] If we only consider the numerator of the r.h.s.~of \eqref{e.numerator}, and define \[ \tilde{Q}_n(t,x_1,\ldots,x_n)=\mathbf{E}[u(t,x_1)\ldots u(t,x_n)], \] it is well-known that $\tilde{Q}_n$ solves \begin{equation}\label{e.tildeQ} \partial_t \tilde{Q}_n=\tfrac12\Delta \tilde{Q}_n+\beta^2\tilde{Q}_n\textstyle\sum_{1\leq i<j\leq n} R(x_i-x_j)=:\mathscr{H}_n\tilde{Q}_n. \end{equation} In this case there is no coupling between $\tilde{Q}_n$ for different values of $n$, and the Hamiltonian $\mathscr{H}_n$ is the so-called Delta-Bose gas if we have a contact interaction $R(\cdot)=\delta(\cdot)$ in $d=1$. There are many studies on the moments of the stochastic heat equation, either relying on the Feynman-Kac representation of the solution to \eqref{e.tildeQ} or the spectral property of $\mathscr{H}_n$, and we refer to the monograph \cite{davar} and the references therein. The equation \eqref{e.tildeQ} should be compared with \eqref{e.hierarchy}, in which we have additional terms related to $Q_{n+1}$ and $Q_{n+2}$ \end{remark} \begin{remark} For the \emph{annealed} endpoint distribution considered in the present paper, with the density given by $\mathbf{E}[q(T,\cdot)]$, it is conjectured that in $d=1$, the rescaled density $T^{\frac23}\mathbf{E}[q(T,T^{\frac23}\cdot)]$ converges weakly in space to some universal limit as $T\to\infty$, \gu{see the result on a related last passage percolation model \cite{johansson2003discrete}.} The limit was further identified in \cite{flores2013endpoint,schehr2012extremes}. \end{remark} \gu{\begin{remark} The asymptotics of the solution of~\eqref{e.maineq} are not obvious. In order to understand the exponent $\frac23$ in Theorem~\ref{t.23}, one can make the following back-of-the-envelope computation. If we assume spreading at spatial scales $O(t^p)$, then to preserve the fact that $g$ is a probability measure, we must have $\sup g \sim O(t^{-p})$. This yields $\|g\|^2 \sim O(t^{-p})$. In order to use this, we linearize the equation around zero to obtain \[ \partial_t g \approx \frac{1}{2} \Delta g + \beta^2 \|g\|^2 g \approx \frac{1}{2} \Delta g + O(t^{-p}) g. \] Then, using the large $x$ asymptotics of the heat equation and the fact that the last term yields an integrating factor, we find \[ g(t,x) \approx e^{\int_0^t O(s^{-p}) ds - \frac{x^2}{2t} + O(\log(t))} \approx e^{O(t^{1-p}) - \frac{x^2}{2t}}. \] For consistency with our assumption of spreading in $x$ like $O(t^p)$, we require that $g$ is ``large'' to the left of $O(t^p)$ and ``small'' to the right of $O(t^p)$. This means that the two terms in the exponent should cancel at $x \sim O(t^p)$. In other words, we require $O(t^{1-p}) = (t^p)^2/2t$. Solving this yields $p=\frac23$. Unfortunately, this argument is far from rigorous. Instead, as with the heat equation, in order to establish the spreading behavior of $g$, the key estimate is an upper bound on the $L^\infty$ norm that yields decay to zero at the sharp rate, which is $O(t^{-\frac23})$. Since the Laplacian (diffusion) can only cause decay like $O(t^{-\frac12})$ in $d=1$, the nonlinear terms have to provide the mechanism for this decay. 
Our proof proceeds by establishing a functional inequality relating the two nonlinear terms at any maximum of $g$. \gu{This, combined with} a differential inequality satisfied by the maximum, shows that $g(t) \lesssim t^{-\frac23}$. From there we obtain the upper bound in \Cref{t.23} via the construction of a supersolution and the lower bound via a simple variational argument. \end{remark}} \gu{\begin{remark} It is natural to investigate further questions on the PDE \eqref{e.maineq}, including the asymptotics of $T^{\frac{2}{3}}g(T,T^{\frac23}\cdot)$ as $T\to\infty$ and the behavior of $g$ in high dimensions. One can study the equation corresponding to the spatially correlated noise: \[ \partial_t g(t,x)=\tfrac12\Delta g(t,x)+\beta^2\langle R\star g(t,\cdot),g(t,\cdot)\rangle g(t,x)-\beta^2g(t,x)R\star g(t,x) \] where $R\in C_c^\infty(\mathbb{R}^d)$ is the spatial covariance function of the noise. Compared to \eqref{e.maineq} which is the case of $R(\cdot)=\delta(\cdot)$, the above equation is ``more nonlocal'', and the analytic tools used in this paper do not seem to apply. Indeed, the functional inequalities and delicate identities used in the proof of \Cref{t.23} either are not true or do not make sense and do not have obvious analogues in this more general setting. These are presented in \cite{gh}. \end{remark}} \begin{remark} We note that the proof of \Cref{t.23} does not use any regularity of $q_0$, and would apply equally well to $q_0$ that is a localized probability measure such as $\delta$; however, it is not immediately obvious that~\eqref{e.maineq} is well-posed with measure initial data. Hence, in this work, we impose the condition that $q_0 \in C_c(\mathbb{R})$ in order to avoid technical issues. In a future work, we show that this condition may be relaxed; that is, the estimates established here are sufficient to establish such a well-posedness result for localized probability measures. \end{remark} \begin{remark} In $d\geq 3$ and the high temperature regime, Theorems~\ref{t.qclt} and \ref{t.msd} concern the annealed distribution $Q_1$, which is obtained after taking an average with respect to the random environment. It is natural to ask about the extra error induced by the random fluctuations of the environment. As the focus of the paper is on the PDE hierarchy, which involves $Q_1$ rather than $q$, we do not study this problem here, but note that it is not very hard to extract error bounds on the random fluctuations from the proof of Theorem~\ref{t.bbgky}. \gu{We refer to Remark~\ref{r.781} below for more details.} \end{remark} \begin{remark} In Theorems~\ref{t.qclt} and \ref{t.msd}, we are deep in the high temperature regime $\beta\ll1$ to obtain the error estimate in the annealed central limit theorem. The quenched central limit theorem actually holds in the full weak disorder regime $\beta<\beta_c$ for some critical value of $\beta_c$ \cite{commets}. As we strive for a more quantitative estimate here, it is unclear to us whether similar error estimates can be proved for all $\beta<\beta_c$. \end{remark} \begin{remark}\label{r.err} The first error bound on the convergence of the mean square displacement was proved in \cite[Eq (1.7)]{spencer}, and their result in the discrete setting translates to our case as \[ \big|\tfrac{1}{T}\int_{\mathbb{R}^d} |x|^2 Q_1(T,x)dx-d\big|\leq C\tfrac{1}{T^{\theta-\delta}}. \] Here $\theta=\min(\tfrac{d-2}{4},\tfrac{3}{4})$ and $\delta>0$ can be arbitrarily small. 
In \cite{albeverio,Song}, similar results were shown which corresponds to the following in our setting \[ \big|\tfrac{1}{T}\int_{\mathbb{R}^d} |x|^2 Q_1(T,x)dx-d\big| \leq C \Big( \tfrac{1}{T^{(d-2)/4}}\mathbbm{1}_{2<d<6}+\tfrac{\sqrt{\log T}}{T}\mathbbm{1}_{d=6}+\tfrac{1}{T}\mathbbm{1}_{d>6}\Big). \] It is worth mentioning that in these works, the quenched error estimates were also proved, while we only focus on the annealed case here. \end{remark} \subsubsection*{Organization of the paper} In Section~\ref{s.poshe}, we briefly discuss the connection between the directed polymer and the stochastic heat equation, which will be used later in our proof. Sections~\ref{s.bbgky}, \ref{s.qclt} and \ref{s.23} are devoted to the proofs of Theorem~\ref{t.bbgky}, \ref{t.qclt} and \ref{t.23} respectively. In Appendix~\ref{s.she}, we review some basics about stochastic heat equations for the convenience of readers. The proofs of some technical lemmas are presented in Appendix~\ref{s.lem}. \subsection{Notation and conventions} \label{s.notation} We recall and define some notation (i) The expectation with respect to the Gaussian random environment is denoted by $\mathbf{E}$, and the expectation with respect to Brownian motions is $\mathbb{E}$. (ii) We consider two cases of spatial covariance functions of the Gaussian environment (a) $R(\cdot)\in C_c^\infty(\mathbb{R}^d), d\geq 1$ and (b) $R(\cdot)=\delta(\cdot), d=1$. (iii) The initial distribution $\mu_0(dx)=q_0(x)dx$ is fixed, and we include the two cases (a) $q_0(\cdot)\in C_c(\mathbb{R}^d)$ and (b) $q_0(\cdot)=\delta(\cdot)$. (iv) For functions $f,g$ and measure $\mu$, we write $\langle f,g\rangle=\int fg$, $\langle f,\mu\rangle=\int f(x)\mu(dx)$, and $\|f\|^2=\langle f,f\rangle$. (v) We use $\star$ to denote convolution in the spatial variable $x$, and the standard heat kernel of $\partial_t -\tfrac12\Delta$ is $G_t(x)=(2\pi t)^{-d/2}\exp(-|x|^2/(2t))$. \subsubsection*{Acknowledgement} YG was partially supported by the NSF through DMS-1907928 and the Center for Nonlinear Analysis of CMU. CH was partially supported by NSF grant DMS-2003110. YG would like to thank Xi Geng for several discussions. \section{Directed polymer and stochastic heat equation} \label{s.poshe} In this section, we briefly discuss the relationship between directed polymers and the stochastic heat equation with a multiplicative noise. First, we define the time reversal of $\eta$ and $\eta_\phi$: \begin{equation}\label{e.deftimereversal} \xi(t,x)=\eta(-t,x), \quad\quad \xi_\phi(t,x)=\eta_\phi(-t,x)=\int_{\mathbb{R}^d} \phi(x-y)\xi(t,y)dy. \end{equation} Fix the inverse temperature $\beta>0$. For any $s\in\mathbb{R}$ and $x\in\mathbb{R}^d$, define $U(s,x;t,y)$ as the solution of \begin{equation}\label{e.shephi} \begin{aligned} \partial_t U(s,x;t,y)&=\tfrac12\Delta_y U(s,x;t,y)+\beta\, U(s,x;t,y)\xi_\phi(t,y), \quad\quad t>s, y\in\mathbb{R}^d,\\ U(s,x;s,y)&=\delta(y-x). \end{aligned} \end{equation} Here the product between $U$ and $\xi_\phi$ is interpreted in the It\^o-Walsh sense \cite{walsh1986introduction}, and we have included the spacetime white noise case of $\phi(\cdot)=\delta(\cdot)$ in $d=1$. Then the quenched endpoint density $q(T,x)$, defined in \eqref{e.endpoint} and \eqref{e.endpointwhite}, is also given by \begin{equation}\label{e.defq} q(T,x)=\frac{\int_{\mathbb{R}^d}U(-T,x;0,y)q_0(y)dy}{\int_{\mathbb{R}^{2d}} U(-T,\tilde{x};0,y)q_0(y)dyd\tilde{x}}. \end{equation} To see this, consider the spatially correlated case with $\phi\in C_c^\infty(\mathbb{R}^d)$. 
We only need to use Feynman-Kac formula \cite{bertini1995stochastic} to rewrite \begin{equation}\label{e.fk} \begin{aligned} \int_{\mathbb{R}^d} U(-T,x;0,y)q_0(y)dy=&\mathbb{E}_{\mu_0}[\delta(w_T-x)e^{\beta\int_0^T\xi_\phi(-s,w_s)ds-\frac12\beta^2R(0)T}]\\ =&\mathbb{E}_{\mu_0}[\delta(w_T-x)e^{\beta H(T,w)-\frac12\beta^2R(0)T}]. \end{aligned} \end{equation} Here we recall that $w$ is a standard Brownian motion that is independent from $\xi$, and $\mathbb{E}_{\mu_0}$ denotes the expectation with respect to $w$, with the starting point \[ w_0\sim \mu_0(dx)=q_0(x)dx. \] For any $y\in\mathbb{R}^d$, let $\mathbb{E}_y$ denote the expectation with respect to the Brownian motion starting at $w_0=y$. Then we can rewrite \eqref{e.fk} as \[ \int_{\mathbb{R}^d} U(-T,x;0,y) q_0(y)dy=\int_{\mathbb{R}^d} q_0(y)\mathbb{E}_y[\delta(w_T-x)e^{\beta \int_0^T\xi_\phi(-s,w_s)ds-\frac12\beta^2R(0)T}]dy. \] For the case of $\phi(\cdot)=\delta(\cdot)$ in $d=1$, by the definition of the Wick-ordered exponential, we have \[ \begin{aligned} &\mathbb{E}_{\mu_0}[\delta(w_T-x):\exp(\beta\textstyle\int_0^T \eta(s,w_s)ds):]\\ &=\mathbb{E}_{\mu_0}[\delta(w_T-x):\exp(\beta\textstyle\int_0^T \xi(-s,w_s)ds):]=\int_{\mathbb{R}^d}U(-T,x;0,y)q_0(y)dy. \end{aligned} \] \section{Proof of Theorem~\ref{t.bbgky}} \label{s.bbgky} We make use of the Feynman-Kac representation in \eqref{e.endpoint} to study the case of spatially correlated noise, i.e., when the spatial covariance function $R(\cdot)\in C_c^\infty(\mathbb{R}^d)$. Through an approximation argument, we derive the corresponding result for the case of spacetime white noise. \subsection{Colored noise environment: $R(\cdot)\in C_c^\infty(\mathbb{R}^d), d\geq1$} We first introduce some notation. Fix $n\geq1,T>0$ and a $C_b^{1,2}$ function $f:[0,T]\times \mathbb{R}^{nd}\to \mathbb{R}$, we define \begin{equation}\label{e.defYZ} \begin{aligned} &M(t,w)=\exp(\beta H(t,w)-\tfrac12\beta^2R(0)t),\\ &Y_f(t)=f(t,w_t^1,\ldots,w_t^n)\textstyle\prod_{j=1}^nM(t,w^j),\ \ \ \text{ and}\\ &X_f(t)=\mathbb{E}_{\mu_0}[Y_f(t)], \end{aligned} \end{equation} where $\{w^j\}_{j=1,\ldots,n}$ are independent copies of Brownian motions built on $(\Sigma,\mathcal{A},\mathbb{P})$. Thus, we have \[ X_{\mathbbm{1}}(t)=Z(t)^n, \] where $Z(t)$ is the partition function defined in \eqref{e.endpoint} and $\mathbbm{1}$ stands for the constant function $\mathbbm{1}(x)\equiv 1$. With the new notation, we define \[ \mathcal{X}_f(t) :=\langle f(t,\cdot),\mu_t^{\otimes n}\rangle =\langle f(t,\cdot), q(t,\cdot)^{\otimes n}\rangle=\frac{X_f(t)}{X_{\mathbbm{1}}(t)}=\frac{X_f(t)}{Z(t)^n}. \] \begin{proof}[Proof of \eqref{e.eqP}] In the following, the differential $d$ is the full stochastic differential with respect to both the Gaussian environment and the Brownian motions. We first note that for each fixed $w$, \[ H(t,w)=\int_0^t \eta_\phi(s,w_s)ds \] is a Brownian motion with variance $R(0)$, and for $w^i,w^j$, we have the bracket process \[ \langle H(\cdot,w^i),H(\cdot,w^j)\rangle_t=\int_0^t R(w^i_s-w^j_s)ds. \]This implies \begin{equation}\label{e.gbm} \begin{aligned} &dM(t,w)=\beta M(t,w)dH(t,w),\\ &d\langle M(\cdot,w^i),M(\cdot,w^j)\rangle_t=\beta^2 M(t,w^i)M(t,w^j) R(w^i_t-w^j_t)dt. \end{aligned} \end{equation} We also know that $H(t,w)$ is independent of $w$. 
Now we apply It\^o's formula to $Y_f(t)$: \[ \begin{aligned} dY_f(t) =&d[f(t,w_t^1,\ldots,w_t^n)\textstyle\prod_{j=1}^n M(t,w^j)]\\ =&f(t,w_t^1,\ldots,w_t^n)d[\textstyle\prod_{j=1}^n M(t,w^j)]+\textstyle\prod_{j=1}^n M(t,w^j)d[f(t,w_t^1,\ldots,w_t^n)]\\ &+d\langle f(\cdot,w_{\cdot}^1,\ldots,w_{\cdot}^n),\textstyle\prod_{j=1}^n M(\cdot,w^j)\rangle_t. \end{aligned} \] By \eqref{e.gbm}, we have \[ \begin{aligned} d\textstyle\prod_{j=1}^n M(t,w^j)=&\beta \textstyle\sum_{k=1}^n\textstyle\prod_{j=1}^n M(t,w^j)dH(t,w^k)\\ &+\beta^2\textstyle\sum_{1\leq k<l\leq n} \textstyle\prod_{j=1}^n M(t,w^j)R(w_t^k-w_t^l)dt. \end{aligned} \] We also have \[ df(t,w_t^1,\ldots,w_t^n)=\textstyle\sum_{j=1}^n \nabla_j f(t,w_t^1,\ldots,w_t^n)dw_t^j+(\partial_t +\tfrac12\Delta )f(t,w_t^1,\ldots,w_t^n)dt, \] where $\nabla_j$ denotes the gradient with respect to the $j$-th variable and $\Delta=\textstyle\sum_{j=1}^n \nabla_j\cdot\nabla_j$. Since $H(t,w)$ is independent of $w$, by \eqref{e.gbm}, we have \[ \langle f(\cdot,w_{\cdot}^1,\ldots,w_{\cdot}^n),\textstyle\prod_{j=1}^n M(\cdot,w^j)\rangle_t\equiv 0. \] Thus, we have \[ \begin{aligned} Y_f(T)=f(0,w_0^1,\ldots,w_0^n)&+\beta \sum_{k=1}^n\int_0^TY_f(t) dH(t,w^k)+\beta^2\sum_{1\leq k<l\leq n}\int_0^T Y_f(t) R(w_t^k-w_t^l) dt\\ &+\sum_{k=1}^n \int_0^T \prod_{j=1}^n M(t,w^j)\nabla_k f(t,w_t^1,\ldots,w_t^n)dw_t^k\\ &+\int_0^T \prod_{j=1}^nM(t,w^j)(\partial_t+\tfrac12\Delta) f(t,w_t^1,\ldots,w_t^n)dt. \end{aligned} \] Taking the expectation with respect to the Brownian motions, we have \begin{equation}\label{e.dZ} \begin{aligned} X_f(T)=\mathbb{E}_{\mu_0}[Y_f(T)]=&\mathbb{E}_{\mu_0}[f(0,w_0^1,\ldots,w_0^n)]+\beta\sum_{k=1}^n\mathbb{E}_{\mu_0}\big[\int_0^TY_f(t)dH(t,w^k)\big]\\ &+\beta^2\sum_{1\leq k<l\leq n}\int_0^T\mathbb{E}_{\mu_0}[ Y_f(t) R(w_t^k-w_t^l)] dt\\ &+\int_0^T \mathbb{E}_{\mu_0}\big[\prod_{j=1}^nM(t,w^j)(\partial_t+\tfrac12\Delta) f(t,w_t^1,\ldots,w_t^n)\big]dt. \end{aligned} \end{equation} For the second term on the r.h.s., recalling that $\eta_\phi(t,x)=\int_{\mathbb{R}^d}\phi(x-y)\eta(t,y)dy$ with $\eta$ a spacetime white noise and using \eqref{e.defHt}, we can rewrite it as \begin{equation}\label{e.dZma} \begin{aligned} \beta \textstyle\sum_{k=1}^n\mathbb{E}_{\mu_0}[\int_0^TY_f(t)dH(t,w^k)]=\beta\textstyle\sum_{k=1}^n \int_0^T \int_{\mathbb{R}^d} \mathbb{E}_{\mu_0}[Y_f(t) \phi(w^k_t-y)] \eta(t,y)dydt. \end{aligned} \end{equation} Now we apply It\^o's formula to $\mathcal{X}_f(t)=X_f(t)Z(t)^{-n}$: \begin{equation}\label{e.itoX} d\mathcal{X}_f(t)=\frac{1}{Z(t)^n}dX_f(t)-\frac{nX_f(t)}{Z(t)^{n+1}}dZ(t)+\frac{n(n+1)X_f(t)}{2Z(t)^{n+2}} d\langle Z,Z\rangle_t-\frac{n}{Z(t)^{n+1}}d\langle X_f,Z\rangle_t. \end{equation} By \eqref{e.dZ}, the martingale component of $X_f$ is given by \eqref{e.dZma}. A simpler version of \eqref{e.dZ} also gives \[ Z(T)=1+\beta\int_0^T\int_{\mathbb{R}^d}\mathbb{E}_{\mu_0}[M(t,w)\phi(w_t-y)]\eta(t,y)dydt, \] which implies \begin{equation}\label{e.braket} \begin{aligned} d\langle X_{f},Z\rangle_t=&\beta^2\sum_{k=1}^n\big(\int_{\mathbb{R}^{d}}\mathbb{E}_{\mu_0}\big[f(t,w_t^1,\ldots,w_t^n) \prod_{j=1}^{n+1} M(t,w^j) \phi(w_t^k-y)\phi(w_t^{n+1}-y)\big] dy \big)dt\\ =&\beta^2\sum_{k=1}^n \mathbb{E}_{\mu_0}\big[f(t,w_t^1,\ldots,w_t^n)\prod_{j=1}^{n+1} M(t,w^j) R(w_t^k-w_t^{n+1})\big] dt.
\end{aligned} \end{equation} Combining \eqref{e.dZ}, \eqref{e.dZma}, \eqref{e.itoX}, \eqref{e.braket}, we have \begin{equation}\label{e.781} \mathcal{X}_f(T)=\mathcal{X}_f(0)+\sum_{i=1}^4 I_i(T), \end{equation} with \[ \begin{aligned} I_1(T)=\int_0^T\frac{1}{Z(t)^n}dX_f(t)=&\beta\sum_{k=1}^n\int_0^T\int_{\mathbb{R}^d}\frac{1}{Z(t)^n}\mathbb{E}_{\mu_0}[Y_f(t)\phi(w_t^k-y)] \eta(t,y)dydt\\ &+\beta^2\sum_{1\leq k<l\leq n}\int_0^T\frac{1}{Z(t)^n}\mathbb{E}_{\mu_0}[ Y_f(t) R(w_t^k-w_t^l)] dt\\ &+\int_0^T \frac{1}{Z(t)^n}\mathbb{E}_{\mu_0}[\prod_{j=1}^n M(t,w^j)(\partial_t+\tfrac12\Delta) f(t, w_t^1,\ldots,w_t^n)]dt, \end{aligned} \] \[ \begin{aligned} I_2(T)&=-n\int_0^T\frac{X_f(t)}{Z(t)^{n+1}}dZ(t)\\ &=-n\beta\int_0^T\int_{\mathbb{R}^d}\frac{X_f(t)}{Z(t)^{n+1}}\mathbb{E}_{\mu_0}[M(t,w)\phi(w_t-y)] \eta(t,y)dydt, \end{aligned} \] \[ \begin{aligned} I_3(T)&=\frac{n(n+1)}{2}\int_0^T \frac{X_f(t)}{Z(t)^{n+2}} d\langle Z,Z\rangle_t\\ &=\frac{\beta^2n(n+1)}{2}\int_0^T \frac{X_f(t)}{Z(t)^{n+2}}\mathbb{E}_{\mu_0}\big[\prod_{j=1}^2M(t,w^j)\cdot R(w^1_t-w^2_t)\big] dt, \end{aligned} \] and \[ \begin{aligned} I_4(T)&=-n\int_0^T \frac{1}{Z(t)^{n+1}}d\langle X_f,Z\rangle_t\\ &=-n\beta^2\sum_{k=1}^n\int_0^T\frac{1}{Z(t)^{n+1}}\mathbb{E}_{\mu_0}\big[f(t,w_t^1,\ldots,w_t^n)\prod_{j=1}^{n+1} M(t,w^j) R(w_t^k-w_t^{n+1})\big] dt. \end{aligned} \] Taking the expectation with respect to $\eta$, we have \begin{equation} \begin{aligned} \mathbf{E}[I_1(T)]&=\int_0^T \langle \beta^2 f_{0,R}(t)+(\partial_t+\tfrac12\Delta) f(t),Q_n(t) \rangle dt, \quad\quad \mathbf{E}[I_2(T)]=0,\\ \mathbf{E}[I_3(T)]&=\int_0^T \langle \beta^2 f_{2,R}(t), Q_{n+2}(t)\rangle dt, \quad\quad \mathbf{E}[I_4(T)]=\int_0^T \langle \beta^2f_{1,R}(t),Q_{n+1}(t)\rangle dt. \end{aligned} \end{equation} It suffices to note that $\mathbf{E}[\mathcal{X}_f(T)]=\langle f(T), Q_n(T)\rangle$ and $\mathcal{X}_f(0)=\langle f(0),q_0^{\otimes n}\rangle$ to complete the proof of \eqref{e.eqP}. \end{proof} \gu{ \begin{remark}\label{r.781} The equation \eqref{e.eqP} results from taking expectation with respect to the noise on both sides of \eqref{e.781}, in which the martingale terms disappear. If we are interested in the size of the random fluctuations, it suffices to study the martingale terms in $I_1(T),I_2(T)$. Actually, an SPDE satisfied by $q(t,x)$ was derived in \cite[Proposition 3.2]{gk}. \end{remark} } \subsection{White noise environment: $R(\cdot)=\delta(\cdot), d=1$} The proof in this case is through an approximation of the spacetime white noise by colored noise. Recall that we only consider $d=1$ in this case. For each $\varepsilon>0$, define \begin{equation}\label{e.defxieps} \xi_\varepsilon(t,x)=\int_{\mathbb{R}}\phi_\varepsilon(x-y)\xi(t,y)dy, \quad \phi_\varepsilon(x)=\tfrac{1}{\varepsilon}\phi(\tfrac{x}{\varepsilon}) \end{equation} and the spatial covariance function \[ R_\varepsilon(x)=\tfrac{1}{\varepsilon}R(\tfrac{x}{\varepsilon})=\int_{\mathbb{R}} \phi_\varepsilon(x+y)\phi_\varepsilon(y)dy. \] For any $s\in\mathbb{R},x\in\mathbb{R}$, let $U_\varepsilon$ be the solution to \begin{equation}\label{e.sheeps} \begin{aligned} \partial_t U_\varepsilon(s,x;t,y)&=\tfrac12\Delta_y U_\varepsilon(s,x;t,y)+\beta\,U_\varepsilon(s,x;t,y)\xi_\varepsilon(t,y), \quad\quad t>s,y\in\mathbb{R},\\ U_\varepsilon(s,x;s,y)&=\delta(y-x). \end{aligned} \end{equation} In other words, $U_\varepsilon$ solves \eqref{e.shephi} with $\xi_\phi$ replaced by $\xi_\varepsilon$. 
Similarly, the quenched endpoint distribution of the polymer in the environment $\xi_\varepsilon$, with the starting point distributed as $\mu_{0,\varepsilon}(dx)=q_0(x)dx$, is defined as \begin{equation} \mu_{T,\varepsilon}(dx)=q_\varepsilon(T,x)dx, \quad q_\varepsilon(T,x)=\frac{\int_{\mathbb{R}}U_\varepsilon(-T,x;0,y)q_0(y)dy}{\int_{\mathbb{R}^2} U_\varepsilon(-T,\tilde{x};0,y)q_0(y)dyd\tilde{x}}. \end{equation} For any $n\geq 1,T\geq0$ and $\mathbf{x}_{1:n}=(x_1,\ldots,x_n)\in\mathbb{R}^n$, define \[ Q_{n,\varepsilon}(T,\mathbf{x}_{1:n})=\mathbf{E}[q_\varepsilon(T,x_1)\ldots q_\varepsilon(T,x_n)]. \] By \eqref{e.eqP} (for the case of $\phi \in C_c^\infty(\mathbb{R}^d)$), we have, for any $f\in C_b^{1,2}([0,T]\times \mathbb{R}^n)$, \begin{equation}\label{e.eqQeps} \begin{aligned} \langle f(T),Q_{n,\varepsilon}(T)\rangle=\langle f(0),q_0^{\otimes n}\rangle&+\int_0^T \langle (\partial_t+\tfrac12\Delta) f(t), Q_{n,\varepsilon}(t)\rangle dt\\ &+\beta^2\sum_{k=0}^2\int_0^T\langle f_{k,\varepsilon}(t),Q_{n+k,\varepsilon}(t) \rangle dt, \end{aligned} \end{equation} with the shorthand notation $f_{k,\varepsilon}:=f_{k,R_\varepsilon}$. Recall that for any $R\in C_c^\infty(\mathbb{R}^d)$, the functions $f_{k,R}$ were defined in \eqref{e.defFR}, for $k=0,1,2$. To prove \eqref{e.eqP} for the case of $\phi(\cdot)=\delta(\cdot)$, we need the following two technical lemmas: \begin{lemma}\label{l.bdrhoeps} For any $p\geq 1, t>0,x\in\mathbb{R}$, $q_\varepsilon(t,x)\to q(t,x)$ in $L^p(\Omega,\mathcal{F},\mathbf{P})$, as $\varepsilon\to0$. The convergence is uniform for $x\in\mathbb{R}$ and $t$ in compact subsets of $(0,\infty)$. In addition, there exists $C=C(p,\beta,T)>0$ such that for all $\varepsilon\in(0,1), t\in (0,T],x\in\mathbb{R}$, \begin{equation}\label{e.bdrhoeps} \mathbf{E}[|q_\varepsilon(t,x)|^p]+\mathbf{E}[|q(t,x)|^p] \leq C (G_t\star q_0(x))^p . \end{equation} \end{lemma} \begin{lemma}\label{l.conQ} $Q_n$ is continuous on $(0,\infty)\times \mathbb{R}^n$. \end{lemma} The proof of Lemmas~\ref{l.bdrhoeps} and \ref{l.conQ} is given in Appendix~\ref{s.she}. \begin{corollary}\label{c.conQ} There exist $C=C(\beta,T)>0$ such that for all $ t\in(0,T],(x_1,\ldots,x_n)\in\mathbb{R}^n$ and $\varepsilon\in(0,1)$, \[ Q_{n,\varepsilon}(t,x_1,\ldots,x_n)+Q_n(t,x_1,\ldots,x_n)\leq C\prod_{j=1}^n G_t\star q_0(x_j). \] In addition, $Q_{n,\varepsilon}(t,x_1,\ldots,x_n)\to Q_n(t,x_1,\ldots,x_n)$ as $\varepsilon\to0$, and the convergence is uniform for $(x_1,\ldots,x_n)\in\mathbb{R}^n$ and $t$ in compact subsets of $(0,\infty)$. \end{corollary} Now we can finish the proof of Theorem~\ref{t.bbgky}. \begin{proof} We start from \eqref{e.eqQeps} and pass to the limit of $\varepsilon\to0$ for each term. First, by Corollary~\ref{c.conQ}, we have \[ \langle f(T),Q_{n,\varepsilon}(T)\rangle\to \langle f(T),Q_n(T)\rangle, \] and \[ \int_0^T \langle (\partial_t+ \tfrac12\Delta) f(t), Q_{n,\varepsilon}(t)\rangle dt\to \int_0^T \langle (\partial_t+\tfrac12\Delta)f(t), Q_{n}(t)\rangle dt. \] The rest of the $\varepsilon-$dependent terms in \eqref{e.eqQeps} are treated in the same way, so we take $\int_0^T\langle f_{0,\varepsilon}(t),Q_{n,\varepsilon}(t)\rangle dt$ as an example: for any $t$, \[ \begin{aligned} \langle f_{0,\varepsilon}(t),Q_{n,\varepsilon}(t)\rangle=&\sum_{1\leq i<j\leq n}\int_{\mathbb{R}^n} f(t,\mathbf{x}_{1:n})R_\varepsilon(x_i-x_j)Q_{n,\varepsilon}(t,\mathbf{x}_{1:n}) d\mathbf{x}_{1:n}. \end{aligned} \] It suffices to consider fixed $i,j$ from the summation. 
By the change of variable $x_i\mapsto x_i, x_j\mapsto x_i-\varepsilon x_j$, the integral equals to \[ \begin{aligned} \int_{\mathbb{R}^n}& f(t,x_1,\ldots,x_i,\ldots,x_i-\varepsilon x_j,\ldots x_n)R(x_j)\\ &\times Q_{n,\varepsilon}(t,x_1,\ldots,x_i,\ldots,x_i-\varepsilon x_j,\ldots x_n) d\mathbf{x}_{1:n}. \end{aligned} \] By Corollary~\ref{c.conQ}, we have \[ Q_{n,\varepsilon}(t,x_1,\ldots,x_i,\ldots,x_i-\varepsilon x_j,\ldots x_n) \leq Ct^{-\frac{1}{2}}\prod_{\ell:\,\ell \neq j} G_t\star q_0(x_\ell) \] where we also used the elementary estimate $G_t\star q_0(x_i-\varepsilon x_j) \leq Ct^{-\frac12}$, which clearly holds for the two cases of $q_0$ we considered in the paper: $q_0\in C_c(\mathbb{R})$ or $q_0(x)=\delta(x)$. For fixed $t\in(0,T)$ and $(x_1,\ldots,x_n)\in\mathbb{R}^n$, by the continuity of $f$, Lemma~\ref{l.conQ} and Corollary~\ref{c.conQ}, we obtain \[ \begin{aligned} &f(t,x_1,\ldots,x_i,\ldots,x_i-\varepsilon x_j,\ldots x_n)Q_{n,\varepsilon}(t,x_1,\ldots,x_i,\ldots,x_i-\varepsilon x_j,\ldots x_n)\\ &\to f(t,x_1,\ldots,x_i,\ldots,x_i,\ldots x_n)Q_{n}(t,x_1,\ldots,x_i,\ldots,x_i,\ldots x_n), \quad \mbox{ as } \varepsilon\to0. \end{aligned} \] Note that $\int R=1$, we can apply dominated convergence theorem to conclude that \[ \int_0^T\langle f_{0,\varepsilon}(t),Q_{n,\varepsilon}(t)\rangle dt\to \int_0^T\langle f_{0,\delta(\cdot)}(t),Q_{n}(t)\rangle dt, \quad\mbox{ as }\varepsilon\to0. \] Here we recall from \eqref{e.defFR} that \[ f_{0,\delta(\cdot)}(t,\mathbf{x}_{1:n})=f(t,\mathbf{x}_{1:n})\sum_{1\leq i<j\leq n}\delta(x_i-x_j). \] The proof is complete. \end{proof} \subsection{Proof of Corollary~\ref{c.generator}} The proof of the cases $R(\cdot)\in C_c^\infty(\mathbb{R}^d)$ and $R(\cdot)=\delta(\cdot)$ are similar, and we only deal with the latter. Fix $n=1$, $q_0\in C_c(\mathbb{R}^d)$ and $f\in C_b^2(\mathbb{R}^{nd})$, by Theorem~\ref{t.bbgky}, we have \[ \langle f, Q_1(T)\rangle=\langle f, q_0\rangle+\int_0^T \langle \tfrac12\Delta f, Q_1(t)\rangle dt+\beta^2\sum_{k=0}^2\int_0^T\langle f_{k,\delta(\cdot)},Q_{1+k}(t) \rangle dt. \] Recall that $\mathrm{F}_f(\mu_T)=\langle f, \mu_T\rangle$, so we have \[ \begin{aligned} \frac{\mathbf{E}[\mathrm{F}_f(\mu_T)]-\mathrm{F}_f(\mu_0)}{T}=& \frac{\langle f, Q_1(T)\rangle-\langle f, q_0\rangle}{T}\\ =&\frac{1}{T}\int_0^T \langle \tfrac12\Delta f, Q_1(t)\rangle dt+\beta^2\sum_{k=0}^2 \frac{1}{T}\int_0^T\langle f_{k,\delta(\cdot)},Q_{1+k}(t) \rangle dt. \end{aligned} \] By definition $f_{0,\delta(\cdot)}=0$ when $n=1$. For $k=1$, we have \[ \begin{aligned} \frac{1}{T}\int_0^T\langle f_{k,\delta(\cdot)},Q_{1+k}(t) \rangle dt=&-\frac{1}{T}\int_0^T \int_{\mathbb{R}^2} f(x_1)\delta(x_1-x_2) Q_2(t,x_1,x_2)dx_1dx_2dt\\ =&-\frac{1}{T}\int_0^T \int_\mathbb{R} f(x_1)Q_2(t,x_1,x_1) dx_1dt\\ =&-\int_0^1\int_{\mathbb{R}} f(x_1)Q_2(Tt,x_1,x_1)dx_1dt. \end{aligned} \] Similarly, when $k=2$, we have \[ \begin{aligned} \frac{1}{T}\int_0^T\langle f_{k,\delta(\cdot)},Q_{1+k}(t) \rangle dt=&\frac{1}{T}\int_0^T \int_{\mathbb{R}^3} f(x_1)\delta(x_2-x_3)Q_3(t,x_1,x_2,x_3) dx_1dx_2dx_3dt\\ =&\frac{1}{T}\int_0^T \int_{\mathbb{R}^2} f(x_1)Q_3(t,x_1,x_2,x_2) dx_1dx_2dt\\ =&\int_0^1 \int_{\mathbb{R}^2} f(x_1)Q_3(Tt,x_1,x_2,x_2)dx_1dx_2dt. 
\end{aligned} \] By applying Corollary~\ref{c.conQ} and Lemma~\ref{l.tzero} below, we have \[ \begin{aligned} \frac{\mathbf{E}[\mathrm{F}_f(\mu_T)]-\mathrm{F}_f(\mu_0)}{T}\to \langle \tfrac12\Delta f,q_0\rangle&-\beta^2\int_{\mathbb{R}}f(x_1)q_0(x_1)^2dx_1\\ &+\beta^2\int_{\mathbb{R}^2} f(x_1)q_0(x_1)q_0(x_2)^2dx_1dx_2 \end{aligned} \] as $T\to0$. The r.h.s.\ equals \[ \langle \tfrac12\Delta f,q_0\rangle+ \beta^2\langle f_{1,\delta(\cdot)},q_0^{\otimes 2}\rangle+\beta^2\langle f_{2,\delta(\cdot)},q_0^{\otimes 3}\rangle=\langle \tfrac12\Delta f,q_0\rangle+\beta^2\langle f, \mathcal{T} q_0\rangle, \] which completes the proof of \eqref{e.generatorcolor}. \begin{lemma}\label{l.tzero} Assume $q_0\in C_c(\mathbb{R}^d)$. For any $n\geq1$ and $(x_1,\ldots,x_n)\in\mathbb{R}^{nd}$, as $t\to0$, \[ \begin{aligned} Q_n(t,x_1,\ldots,x_n)&\to \prod_{j=1}^n q_0(x_j). \end{aligned} \] \end{lemma} The proof of Lemma~\ref{l.tzero} is given in Appendix~\ref{s.she}. \section{Quantitative central limit theorem: proofs of Theorems~\ref{t.qclt} and \ref{t.msd}} \label{s.qclt} In this section, we consider high dimensions $d\geq3$ and a high temperature regime with $\beta\ll1$. The goal is to prove Theorems~\ref{t.qclt} and \ref{t.msd}. With a change of variable, \eqref{e.wdbd} and \eqref{e.msdbd} are equivalent to, for any $h\in \mathrm{Lip}(1)$, \begin{equation}\label{e.201} \left|\int_{\mathbb{R}^d} h(\varepsilon x) Q_1(\tfrac{1}{\varepsilon^2},x)dx-\int_{\mathbb{R}^d} h(x)G_1(x)dx \right| \leq C\Big( \varepsilon|\log \varepsilon|\mathbbm{1}_{d=3}+\varepsilon\mathbbm{1}_{d\geq 4} \Big), \end{equation} and \begin{equation}\label{e.202} \left| \int_{\mathbb{R}^d} |\varepsilon x|^2 Q_1(\tfrac{1}{\varepsilon^2},x)dx - d\right| \leq C \Big(\varepsilon\mathbbm{1}_{d=3}+\varepsilon^2|\log \varepsilon|\mathbbm{1}_{d=4}+\varepsilon^2\mathbbm{1}_{d\geq5}\Big). \end{equation} To unify the notation, we view~\eqref{e.202} as a special case of~\eqref{e.201} with $h(x) = |x|^2$ even though this choice of $h$ is not an element of $\mathrm{Lip}(1)$. Note that, although the function $h$ here is not necessarily bounded, it grows at most polynomially at infinity. Thus, by Corollary~\ref{c.conQ}, it is easy to see that the two integrals in \eqref{e.201} are both well-defined. Our proof below is based on selecting appropriate test functions in the equation satisfied by $Q_1$ in order to quantify the cancellation between the $Q_2$ and $Q_3$ terms. Throughout the section, we assume that $q_0(\cdot)=\delta(\cdot)$; that is, the starting point of the polymer path is at the origin. Recall that in high dimensions we assumed the random environment is smooth in the spatial variable and $R(\cdot)\in C_c^\infty(\mathbb{R}^d)$ is the spatial covariance function. \subsection{Error form} The first step is to derive an exact error expression in \eqref{e.201} using the hierarchical PDE system. We define an auxiliary test function as follows: for any $\varepsilon>0$ and a function $h$, let $f_\varepsilon(t,x)$ be the solution to the backward heat equation \begin{equation}\label{e.backheat} \begin{aligned} &\partial_t f_\varepsilon(t,x)+\tfrac12\Delta f_\varepsilon(t,x)=0,\quad\quad t<\tfrac{1}{\varepsilon^2},x\in\mathbb{R}^d,\\ &f_\varepsilon(\tfrac{1}{\varepsilon^2},x)=h(\varepsilon x).
\end{aligned} \end{equation} Then we have \begin{lemma}\label{l.errde} For a continuous function $h$ with at most polynomial growth at infinity, we have \begin{equation}\label{e.203} \begin{aligned} \mathcal{E}_\varepsilon(h)&:=\int_{\mathbb{R}^d} h(\varepsilon x) Q_1(\tfrac{1}{\varepsilon^2},x) dx- \int_{\mathbb{R}^d} h(x) G_1(x)dx\\ &=\beta^2\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}}[f_\varepsilon(t,x)-f_\varepsilon(t,y)]R(y-z)Q_3(t,x,y,z)dxdydzdt. \end{aligned} \end{equation} \end{lemma} \begin{proof} We first assume in addition that $h\in C_b^2(\mathbb{R}^d)$. As $f_\varepsilon$ solves the backward heat equation, it holds that $f_\varepsilon\in C_b^{1,2}([0,\varepsilon^{-2}]\times\mathbb{R}^d)$. In the hierarchical PDE system \eqref{e.eqP}, we take $n=1,T=\varepsilon^{-2}$, and the test function to be $f_\varepsilon$ to obtain \begin{equation}\label{e.131} \begin{split} \int_{\mathbb{R}^d} f_\varepsilon(\tfrac{1}{\varepsilon^2},x)&Q_1(\tfrac{1}{\varepsilon^2},x)dx=\int_{\mathbb{R}^d} f_\varepsilon(0,x)Q_1(0,x)dx\\ &-\beta^2\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{2d}}f_\varepsilon(t,x)R(x-y)Q_2(t,x,y)dxdydt\\ &+\beta^2\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}}f_\varepsilon(t,x)R(y-z)Q_3(t,x,y,z)dxdydzdt. \end{split} \end{equation} As $Q_1(0,x)=q_0(x)=\delta(x)$, the first term on the r.h.s.\ of \eqref{e.131} is \begin{equation}\label{e.204} \begin{aligned} \int_{\mathbb{R}^d}f_\varepsilon(0,x)Q_1(0,x)dx=f_\varepsilon(0,0)=\int_{\mathbb{R}^{d}}h(\varepsilon x)G_{\varepsilon^{-2}}(x)dx=\int_{\mathbb{R}^{d}} h(x)G_1(x)dx, \end{aligned} \end{equation} where the last step is through a change of variable and using the scaling property of the heat kernel. By definition, $Q_n(t,x_1,\ldots,x_n)$ is symmetric in the $x$-variables and \[ Q_2(t,x,y)=\int_{\mathbb{R}^d} Q_3(t,x,y,z)dz. \] Thus, \eqref{e.131} can be rewritten as \[ \begin{aligned} \mathcal{E}_\varepsilon(h) =\beta^2\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}}[f_\varepsilon(t,x)-f_\varepsilon(t,y)]R(y-z)Q_3(t,x,y,z)dxdydzdt, \end{aligned} \] where we also used the fact that $R(\cdot)$ is even. Through an approximation and the bound on $Q_n$ given in Corollary~\ref{c.conQ}, the above identity extends to the case of $h$ having at most polynomial growth at infinity, which completes the proof. \end{proof} \begin{remark} For the case of $q_0(\cdot)\in C_c(\mathbb{R}^d)$, a similar error decomposition as \eqref{e.203} can be derived. The only change to make in the proof is in \eqref{e.204}, where an extra error term comes out of the weak convergence of $\tfrac{1}{\varepsilon^d} q_0(\tfrac{x}{\varepsilon})\to \delta(x)$. \end{remark} \subsection{Estimating $\mathcal{E}_\varepsilon(h)$} The proofs of Theorems~\ref{t.qclt} and \ref{t.msd} reduce to the estimate of $\mathcal{E}_\varepsilon(h)$ for $h\in \mathrm{Lip}(1)$ and $h(x)=|x|^2$ respectively. By using a probabilistic representation, the following bounds on $Q_n$ hold in the high temperature regime in $d\geq3$: \begin{lemma}\label{l.bdQn} For any $d\geq3$ and $n\geq1$, there exist constants $\beta_0(d,n,R)>0$ and $C(d,n,R,\beta)$ such that if $\beta<\beta_0(d,n,R)$, we have \[ Q_n(t,x_1,\ldots,x_n)\leq C \prod_{j=1}^n G_t(x_j), \quad\quad \mbox{ for all } t>0,x_1,\ldots,x_n\in\mathbb{R}^d. \] \end{lemma} The proof of Lemma~\ref{l.bdQn} is in Appendix~\ref{s.lem}.
Before undertaking the proofs of \Cref{t.qclt,t.msd}, we provide a heuristic argument that shows how the diffusive behavior of the polymer endpoint follows from the convergence of the error $\mathcal{E}_\varepsilon(h)$ to zero. Recall that \[ \mathcal{E}_\varepsilon(h)=\beta^2\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}}[f_\varepsilon(t,x)-f_\varepsilon(t,y)]R(y-z)Q_3(t,x,y,z)dxdydzdt. \] Consider the simple case of $h\in C_b(\mathbb{R}^d)$, so $|f_\varepsilon|\leq \sup_x|h(x)|$. The key point here is that, under the assumptions $\beta\ll1$ and $d\geq3$, \[ \int_{\mathbb{R}^{3d}}R(y-z)Q_3(t,x,y,z)dxdydz=\int_{\mathbb{R}^{2d}}R(y-z) Q_2(t,y,z)dydz \leq C t^{-d/2}, \] and is therefore integrable in $t$ over $[1,\infty)$. This is ultimately related to the fast decay of the heat kernel in high dimensions $d\geq3$, with the smallness of $\beta$ ensuring that the effect of the random environment is ``summable'' in the limit (hidden in the proof of Lemma~\ref{l.bdQn}). Therefore, the main contribution to $\mathcal{E}_\varepsilon(h)$ actually comes from the time integration in a \emph{microscopically} large domain $[0,M]$ for $1 \ll M \ll \varepsilon^{-2}$; that is, \[ \mathcal{E}_\varepsilon(h)\approx \beta^2\int_0^{M}\int_{\mathbb{R}^{3d}}[f_\varepsilon(t,x)-f_\varepsilon(t,y)]R(y-z)Q_3(t,x,y,z)dxdydzdt. \] On the other hand, when $t \leq M \ll \varepsilon^{-2}$, the $f$ terms cancel for the following reason: for any fixed $(t,x)$, it is straightforward to check that \[ f_\varepsilon(t,x)=\int_{\mathbb{R}^d} h(\varepsilon z) G_{\varepsilon^{-2}-t}(x-z)dz=\int_{\mathbb{R}^d} h(z) G_{1-\varepsilon^2t}(\varepsilon x-z)dz\to \int_{\mathbb{R}^d} h(z)G_1(z)dz, \] which is independent of $x$. Thus, by the dominated convergence theorem, we obtain that $\mathcal{E}_\varepsilon(h)\to0$. The proofs of Theorems~\ref{t.qclt} and \ref{t.msd} then rely on quantifying this argument, which we do now. In the proofs below, $C$ is a constant independent of $\varepsilon$ that may change from line to line. \begin{proof}[Proof of Theorem~\ref{t.qclt}] Fix $h\in \mathrm{Lip}(1)$. We have \[ |f_\varepsilon(t,x)-f_\varepsilon(t,y)|\leq \int_{\mathbb{R}^d}G_{\varepsilon^{-2}-t}(z) |h(\varepsilon(x-z))-h(\varepsilon(y-z))| dz \leq \varepsilon \,|x-y|, \] which implies \begin{equation}\label{e.bdEeps} \begin{aligned} |\mathcal{E}_\varepsilon(h)| \leq\, &\beta^2\varepsilon\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}} |x-y|R(y-z)Q_3(t,x,y,z)dxdydzdt\\ \leq\, & C\varepsilon\int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}} |x-y| R(y-z) G_t(x)G_t(y)G_t(z)dxdydzdt. \end{aligned} \end{equation} While the above integral can be estimated directly (as in the proof of Theorem~\ref{t.msd} below), we present a simple probabilistic argument here. Let $B^1,B^2,B^3$ be three independent Brownian motions; then the integral can be written as \[ \int_{\mathbb{R}^{3d}} |x-y| R(y-z) G_t(x)G_t(y)G_t(z)dxdydz=\mathbb{E}[|B^2_t-B^1_t|\cdot R(B^2_t-B^3_t)]. \] Since $B^2-B^1$ and $B^2-B^3$ are correlated Brownian motions with variance $2$ and covariance $1$, we can rewrite the expectation as \[ \mathbb{E}[|B^2_t-B^1_t|\cdot R(B^2_t-B^3_t)] =\mathbb{E}\Big[\Big|\sqrt{\tfrac{1}{2}}W^1_t-\sqrt{\tfrac32}W^2_t\Big| \cdot R(\sqrt{2}W^1_t)\Big], \] with $W^1,W^2$ independent Brownian motions.
Since $R\in C_c^\infty$, we estimate the above expectation by \[ \mathbb{E}\Big[\Big|\sqrt{\tfrac{1}{2}}W^1_t-\sqrt{\tfrac32}W^2_t\Big| \cdot R(\sqrt{2}W^1_t)\Big] \leq C \mathbb{E}\Big[\Big|\sqrt{\tfrac12}W^1_t-\sqrt{\tfrac32}W^2_t\Big|\cdot \mathbbm{1}_{\{|W^1_t|\leq M\}}\Big] \] for some constant $M>0$. For $t\leq 1$, we have the obvious bound \[ \mathbb{E}\Big[\Big|\sqrt{\tfrac12}W^1_t-\sqrt{\tfrac32}W^2_t\Big|\cdot\mathbbm{1}_{\{|W^1_t|\leq M\}}\Big]\leq C. \] For $t>1$, by first averaging over $W^2$, we have \[ \mathbb{E}\Big[\Big|\sqrt{\tfrac{1}{2}}W^1_t-\sqrt{\tfrac{3}{2}}W^2_t\Big|\cdot \mathbbm{1}_{\{|W^1_t|\leq M\}}\Big] \leq C\sqrt{t}\,\mathbb{P}[|W^1_t|\leq M] \leq C t^{-\frac{d-1}{2}}. \] Combining the two cases and plugging into \eqref{e.bdEeps}, we derive \[ |\mathcal{E}_\varepsilon(h)| \leq C\varepsilon\, \Big(1+\int_1^{\varepsilon^{-2}}t^{-\frac{d-1}{2}}dt\Big) \leq C\Big(\varepsilon\, |\log \varepsilon|\mathbbm{1}_{d=3} +\varepsilon\mathbbm{1}_{d\geq4}\Big). \] The proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{t.msd}] Let $h(x)=|x|^2$; then \[ f_\varepsilon(t,x)=\int_{\mathbb{R}^d} G_{\varepsilon^{-2}-t}(y)|\varepsilon (x-y)|^2 dy=\varepsilon^2|x|^2+(1-\varepsilon^2 t)d. \] Applying Lemmas~\ref{l.errde} and \ref{l.bdQn}, we have \begin{equation}\label{e.205} \begin{aligned} |\mathcal{E}_\varepsilon(h)|=\left|\beta^2\varepsilon^2 \int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}}(|x|^2-|y|^2)R(y-z)Q_3(t,x,y,z)dxdydzdt\right|\\ \leq C \varepsilon^2 \int_0^{\varepsilon^{-2}}\int_{\mathbb{R}^{3d}} ||x|^2-|y|^2| R(y-z) G_t(x)G_t(y)G_t(z)dxdydzdt. \end{aligned} \end{equation} To estimate the above integral, as $R$ is compactly supported (suppose its support has radius $M>0$), we have \[ \int_{\mathbb{R}^d} R(y-z) G_t(z) dz \leq C \int_{\mathbb{R}^d} \mathbbm{1}_{\{|y-z| \leq M\}} G_t(z)dz \leq C\Big(\mathbbm{1}_{t\leq 1}+\mathbbm{1}_{t>1} \sup_{z:|z-y|\leq M} G_t(z)\Big). \] For the case of $t>1$, by considering $|y|\leq 2M$ and $|y|>2M$ separately, we derive \[ \sup_{z:|z-y|\leq M} G_t(z) \leq C G_{c_1t}(y), \quad \mbox{ for some } c_1>0. \] From \eqref{e.205} and the above estimate, the following bound holds: \[ \begin{aligned} |\mathcal{E}_\varepsilon(h)| \leq& C\varepsilon^2 \int_0^1 \int_{\mathbb{R}^{2d}}||x|^2-|y|^2| G_t(x)G_t(y) dxdydt\\ &+C\varepsilon^2\int_1^{\varepsilon^{-2}}\int_{\mathbb{R}^{2d}}||x|^2-|y|^2| G_t(x)G_t(y)G_{c_1t}(y)dxdydt=A_1+A_2. \end{aligned} \] We first get $A_1\leq C\varepsilon^2$. For $A_2$, using the fact that \[ G_t(y)G_{c_1t}(y) \leq C t^{-d/2}G_{c_2t}(y), \quad \mbox{ for some } c_2>0, \] we have \[ \begin{aligned} A_2&\leq C\varepsilon^2\int_1^{\varepsilon^{-2}}\int_{\mathbb{R}^{2d}}t^{-d/2}||x|^2-|y|^2| G_{t}(x)G_{c_2t}(y)dxdydt \\ &\leq C \varepsilon^2\int_1^{\varepsilon^{-2}}t^{1-d/2}dt\leq C\Big( \varepsilon\mathbbm{1}_{d=3}+\varepsilon^2|\log \varepsilon|\mathbbm{1}_{d=4}+\varepsilon^2\mathbbm{1}_{d\geq5}\Big). \end{aligned} \] This completes the proof. \end{proof} \section{Growth of moments: proof of Theorem~\ref{t.23}} \label{s.23} In the interest of the simplest presentation, we remove the parameter $\beta$ by scaling. Indeed, let $\overline g(t,x) = \beta^{-2} g(t \beta^{-4}, x \beta^{-2})$, and observe that \[ \overline g_t = \tfrac12\overline g_{xx} + \overline g (\|\overline g\|^2 - \overline g).
\] Hence, for the remainder of the section, we set $\beta = 1$; that is, we are interested in \begin{equation}\label{e.maineq1} \begin{aligned} \partial_t g(t,x) &=\tfrac12\Delta g(t,x)+ \|g(t,\cdot)\|^2 g(t,x)- g(t,x)^2, \quad\quad t>0, x\in\mathbb{R},\\ g(0,x)&=q_0(x). \end{aligned} \end{equation} Here we abused notation by reverting to $g$ as opposed to using $\overline g$. Undoing this simple scaling reveals the dependence of our results on $\beta$. In order to control the moments of $g$, it is necessary to understand the asymptotic behavior of $\|g\|^2$ as $t\to\infty$. By interpolation and the fact that $g$ is a probability density (noted below), it is enough to control the maximum of $g$. In the following subsection, we state the main estimate on the decay of the maximum of $g$. Afterwards, we show how to use this to obtain upper and lower bounds on the moments of $g$ by constructing sharp sub- and supersolutions of $g$. In \Cref{s.max_decay}, we show how to obtain the correct asymptotics on the maximum of $g$. \subsection{Statement of the main inequality and its application to the moments of $g$} In order to streamline the argument, we define a few quantities that play key roles in the proof. For any $t \geq 0$, let \begin{equation}\label{e.M_E_D} M(t) = \max_{x\in\mathbb{R}} g(t,x), \quad E(t) = \int_{\mathbb{R}} g(t,x)^2 dx, \quad \text{ and }\quad D(t) = \int_{\mathbb{R}} |g_x(t,x)|^2 dx. \end{equation} The key inequality that we require is stated in the following proposition, proved in \Cref{s.max_decay}. \begin{proposition}\label{p.max_decay} There is a universal constant $C_0$, independent of the initial data, such that \[ M(t) \leq \frac{C_0}{t^{2/3}} \qquad\text{ for all } t >0. \] \end{proposition} Two more useful facts are the following. Integrating~\eqref{e.maineq1}, we see that \[ \frac{d}{dt} \int_{\mathbb{R}} g(t,x) dx = E(t) \left(\int_{\mathbb{R}} g(t,x)dx - 1\right). \] Since $\int g(0,x) dx =\int q_0(x)dx=1$ by assumption, a simple ODE argument yields, for any $t\geq 0$, \begin{equation}\label{e.mass_one} \int_{\mathbb{R}} g(t,x) dx = \int_{\mathbb{R}} q_0(x) dx = 1. \end{equation} Thus $g(t,\cdot)$ is a probability density (note that $g\geq0$ by the comparison principle and the fact that $q_0\geq0$). This is unsurprising given the derivation of the model~\eqref{e.maineq1}; however, it is crucial in our analysis. Indeed, we immediately deduce the following useful inequality: \begin{equation}\label{e.E_less_M} E(t) \leq M(t) \int_{\mathbb{R}} g(t,x) dx = M(t). \end{equation} We now show how to conclude \Cref{t.23} assuming \Cref{p.max_decay}. We begin with the upper bound. \begin{proof}[Proof of the upper bound in \Cref{t.23}] The first step is to replace the $\|g\|^2 = E$ term in~\eqref{e.maineq1}. From \Cref{p.max_decay} and~\eqref{e.E_less_M}, we see that \[ E(t) \leq \frac{C_0}{t^{2/3}}. \] This, along with~\eqref{e.maineq1}, implies that \[ \partial_t g - \frac{1}{2} \Delta g - \frac{C_0}{t^{2/3}} g \leq 0. \] The comparison principle implies that $g\leq \overline g$, where $\overline g$ solves \begin{equation}\label{e.supersoln} \begin{cases} \partial_t \overline g - \frac{1}{2} \Delta \overline g - \frac{C_0}{t^{2/3}} \overline g = 0 \qquad &\text{ in } (0,\infty) \times \mathbb{R},\\ \overline g = q_0 &\text{ on } \{0\}\times \mathbb{R}. \end{cases} \end{equation} The second step is to obtain a bound on $\overline g$, and, hence, on $g$, for large $x$.
The first thing to notice is that \[ \overline h(t,x) = \exp\left\{- \int_0^t \frac{C_0}{s^{2/3}} ds\right\} \overline g(t,x) \] solves the heat equation, $\partial_t\overline h = \frac{1}{2} \Delta \overline h$. It follows that \[ \begin{split} \overline g(t,x) &= \exp\left\{\int_0^t \frac{C_0}{s^{2/3}} ds\right\} \overline h(t,x)\\ &= \exp\left\{\int_0^t \frac{C_0}{s^{2/3}} ds\right\} \int_{\mathbb{R}} \frac{1}{\sqrt{2\pi t}} e^{- \frac{y^2}{2t}} q_0(x-y) dy. \end{split} \] By assumption, $q_0$ is compactly supported. A straightforward estimate of the convolution, as well as a simple evaluation of the time integral, yields, for any $t \geq 1$ and any $x$, \[ g(t,x) \leq \overline g(t,x) \leq \frac{C}{\sqrt t} e^{3C_0 t^{1/3} - \frac{x^2}{2t}} \] for some positive constant $C$ depending only on the initial data. We now conclude the bound on the moments of $g$. Pairing the above arguments with~\eqref{p.max_decay}, we have established that, for all $t\geq 1$, \[ g(t,x) \leq C \min\left\{\frac{1}{t^{2/3}}, \frac{1}{t^{1/2}} \exp\left\{ 3C_0 t^{1/3} - \frac{x^2}{2t}\right\}\right\}. \] We now use this to conclude the proof. Indeed, for any $p\geq 1$, we find \[\begin{split} \int_{\mathbb{R}} |x|^p g(t,x) dx &\leq \int_{|x| \leq 6 C_0 t^{2/3}} |x|^p \frac{C}{t^{2/3}} dx + \int_{|x| > 6 C_0 t^{2/3}} |x|^p\frac{C}{t^{1/2}} e^{3 C_0 t^{1/3} - \frac{x^2}{2t}} dx\\ &\leq C t^\frac{2p}{3} + Ct^{p/2} \int_{|y| > 6 C_0 t^{1/6}} |y|^p e^{3 C_0 t^{1/3} - \frac{y^2}{2}} dy\\ &\leq C t^\frac{2p}{3} + C t^{p/2} t^{(p-1)/6} e^{ - (2(3C_0)^2 - (3C_0)) t^{1/3}}. \end{split}\] where $C$ is a constant depending only on $q_0$ and $p$ that changes line-by-line. The second term clearly tends to zero. This completes the proof. \end{proof} It is now possible to deduce the lower bound using the upper bound \gu{given in Proposition~\ref{p.max_decay}} and~\eqref{e.mass_one}. We require one lemma. \begin{lemma}\label{l.minimizer} Assume that $\lambda > 0$, $d\geq 1$, and $w: [0,\infty) \to \mathbb{R}$ is an increasing function. Then \[ \min_{ \substack{ \int g(x) dx = 1,\\ 0 \leq g \leq \lambda } } \int_{\mathbb{R}^d} w(|x|) g(x)dx = \lambda \int_{B_{(\lambda \omega_d)^{-1/d}}} w(|x|) dx. \] where $B_r$ denotes the ball centered at the origin with radius $r$ and $\omega_d$ is the volume of the unit ball in $\mathbb{R}^d$. \end{lemma} This lemma is elementary and follows from the fact that the minimizer is clearly $\lambda \mathbbm{1}_{B_{(\lambda \omega_d)^{-1/d}}}(x)$. Hence, we omit its proof. We now conclude the proof of \Cref{t.23}. \begin{proof}[Proof of the lower bound in \Cref{t.23}] From \Cref{p.max_decay}, we know that $g(t,x)\leq \frac{C_0}{(1+t)^{2/3}}=:\lambda$ for all $t$. Hence \[ \int_{\mathbb{R}} |x|^p g(t,x)dx \geq \min_{\substack{\int \overline g(x) dx = 1,\\ 0 \leq \overline g \leq \lambda}} \int_\mathbb{R} |x|^p \overline g(x) dx. \] Applying \Cref{l.minimizer}, we have \[ \min_{\substack{\int \overline g(x) dx = 1,\\ 0 \leq \overline g \leq \lambda}} \int_\mathbb{R} |x|^p \overline g(x) dx \geq \lambda \int_{-1/2\lambda}^{1/2\lambda} |x|^p dx = \frac{2^{-(p+1)}}{p+1} \lambda^{-p} = \frac{2^{-(p+1)}}{p+1} \left(\frac{(1+t)^{2/3}}{C_0}\right)^p, \] which concludes the proof. \end{proof} \subsection{Decay of the maximum of $g$}\label{s.max_decay} Classical techniques for decay of parabolic equations are often based on Nash's inequality, which relates the $L^2$ norm of the gradient of $g$ with the $L^2$ norm of $g$. 
Such an estimate necessarily gives decay like $O(t^{-1/2})$ in $d=1$, which is slower than the rate of decay we prove below. Hence, such a strategy is not useful here. In other words, the Laplacian term (and the related Dirichlet energy $D$) is not sufficient to obtain decay like $O(t^{-2/3})$. The only other term in the equation is $g(E-g)$, and, hence, our proof must be based on this term. The key observation is that near the maximum of $g$, we expect $g(E - g) \approx M(E-M) < 0$. As such, we require an estimate that quantifies how negative this term is. In fact, our argument is more subtle than this. We use the decay induced by both terms $-D$ and $-M(M-E)$. Indeed, if $M-E$ is large, then the nonlinear term $-M(M-E)$ is a large negative number. On the other hand, if $M \approx E$, it must be that $g$ ``flattens'' quickly after reaching the maximum, making $D$ large (recall that $g$ is a probability measure so if $M \approx E$ then $g$ is near the optimal case in H\"older's inequality, which, in turn, implies that $g$ is nearly an indicator function). In both cases, we get a large decay term. The key estimate quantifying this heuristic is the following, which is proved at the end of the section. \begin{lemma}\label{l.dissipation} There is a universal constant $C_1>0$ such that \[ M - E \geq \frac{M^4}{C_1 D}. \] \end{lemma} Before beginning the proof of \Cref{p.max_decay}, we collect two more inequalities. The first is that, for any $0 < t_1 < t_2$, \begin{equation}\label{e:mp} M(t_2) \leq M(t_1) + \int_{t_1}^{t_2} (E(s) M(s) - M(s)^2) ds. \end{equation} Informally, this can be seen by noting that, at a maximum, $\Delta g \leq 0$, so that~\eqref{e.maineq1} reads $\dot M \leq EM - M^2$, where we used the physics notation $\cdot$ to denote the time derivative. This differential inequality has to be interpreted in the suitable weak sense, but this purely technical issue is standard in parabolic theory and, hence, we omit the details. The second inequality is, for all $t_1 < t_2$, \begin{equation}\label{e:e_i2} E(t_2) \leq E(t_1) - \int_{t_1}^{t_2} D(s) ds. \end{equation} In order to see this, simply multiply~\eqref{e.maineq1} by $g$ and integrate in $x$ in order to obtain \[ \frac{1}{2} \dot E + \frac{1}{2} D \leq E^2 - \int g^3 . \] Since \[ E(t) = \int_{\mathbb{R}} g^{3/2}(t,x) g^{1/2}(t,x) dx \leq \left( \int_\mathbb{R} g^3(t,x) dx \right)^{1/2} \left( \int_\mathbb{R} g(t,x) dx \right)^{1/2}, \] then $\frac{1}{2} \dot E + \frac{1}{2} D \leq 0$. Integrating this in time yields~\eqref{e:e_i2}. We now proceed with the proof of \Cref{p.max_decay}. \begin{proof}[Proof of \Cref{p.max_decay}] Let \[ t_0 = \sup\big\{ t >0 : \sup_{s \in [0,t]} s^{2/3} M(s) < A\big\}, \quad\quad A=2C_1^{1/3}, \] with the $C_1$ from Lemma~\ref{l.dissipation}. It is clear that if $t_0 = \infty$ then the proof is finished. We proceed by contradiction assuming that $t_0$ is finite. By continuity, it is also clear that $M(t_0) = A t_0^{-2/3}$. There are two cases to consider. {\bf Case one: $M(t_0/2) > 2 M(t_0)$.} Since $t_0/2 < t_0$, then, using the definition of $t_0$, we have \[ \frac{2A}{t_0^{2/3}} = 2M(t_0) < M(t_0/2) < \frac{A}{(t_0/2)^{2/3}} = \frac{2^{2/3} A}{t_0^{2/3}}. \] This is a contradiction since $2 > 2^{2/3}$. Hence, this case cannot occur. {\bf Case two: $M(t_0/2) \leq 2 M(t_0)$.} We first combine \Cref{l.dissipation} and~\eqref{e:mp} to find \[ M(t) \leq M(t_0/2) - \frac{1}{C_1} \int_{t_0/2}^t \frac{M(s)^5}{D(s)} ds. 
\] Since this is true for all $t$, it follows that \[ M(t_0) \leq \overline M(t_0), \] where $\dot{\overline M} = - C_1^{-1} \overline M^5 D^{-1}$ and $\overline M(t_0/2) = M(t_0/2)$. Elementary calculus yields \begin{equation}\label{e:c1171} M(t_0) \leq \overline M(t_0) = \left( M(t_0/2)^{-4} + \frac{4}{C_1} \int_{t_0/2}^{t_0} D(s)^{-1} ds \right)^{-1/4}. \end{equation} Then, using (in order) Cauchy-Schwarz, \eqref{e:e_i2}, \eqref{e.E_less_M}, and the assumption that $M(t_0/2) \leq 2 M(t_0)$, we find \begin{equation} \begin{split} \frac{1}{4} t_0^2 &= \left(\int_{t_0/2}^{t_0} \sqrt{D(s)}\frac{1}{\sqrt{D(s)}} ds\right)^2 \leq (E(t_0/2) - E(t_0)) \int_{t_0/2}^{t_0} D(s)^{-1} ds\\ &\leq E(t_0/2) \int_{t_0/2}^{t_0} D(s)^{-1} ds \leq M(t_0/2) \int_{t_0/2}^{t_0} D(s)^{-1} ds\\ &\leq 2 M(t_0) \int_{t_0/2}^{t_0} D(s)^{-1} ds. \end{split} \end{equation} Using this inequality in~\eqref{e:c1171}, we obtain \begin{equation} M(t_0) \leq \left( \frac{1}{M(\frac{t_0}{2})^{4}} + \frac{1}{2C_1}\frac{t_0^2}{M(t_0)} \right)^{-\frac14} \leq \left( \frac{1}{2C_1}\frac{t_0^2}{M(t_0)} \right)^{-\frac14} = \left(2C_1 \frac{M(t_0)}{t_0^2}\right)^{\frac14}. \end{equation} Re-arranging this yields \begin{equation} M(t_0) < \frac{(2C_1)^{1/3}}{t_0^{2/3}}. \end{equation} However, by the construction of $t_0$, we have that $M(t_0) = A t_0^{-2/3}=2C_1^{1/3}t_0^{-2/3}$. Hence, we have reached a contradiction, and we conclude that case two cannot occur either. Since both cases yield a contradiction, it follows that $t_0 = \infty$, which completes the proof. \end{proof} It only remains to establish \Cref{l.dissipation}. We do this now. The idea of the proof is to re-write $M-E$ in terms of a single integral term and then use the proof of a lemma of Constantin, Kiselev, Oberman, and Ryzhik~\cite[Lemma 2]{Constantin_2000}. This lemma was originally used to establish the key inequality in a proof of lower bounds on the speed of Fisher-KPP fronts in the presence of shear flows in a cylinder. \begin{proof}[Proof of \Cref{l.dissipation}] As time plays no role in this lemma, we omit it notationally. First, observe that, due to~\eqref{e.mass_one}, we have \[ M - E = M \int_\mathbb{R} g(x) dx - \int_\mathbb{R} g(x)^2 dx = \int_\mathbb{R} g(x) (M - g(x)) dx. \] Notice that the integrand $g(M - g)$ is nonnegative. Since $M$ is the maximum of $g$ and $\lim_{x\to-\infty} g(x) = 0$, we can find $x_1 < x_2$ such that \begin{equation} g(x_1) = \frac{M}{3}, \quad g(x_2) = \frac{2M}{3}, \quad \text{and} \quad \frac{M}{3} \leq g(x) \leq \frac{2M}{3} \text{ for all } x \in (x_1,x_2). \end{equation} Then we have that \begin{equation}\label{e:c1172} \frac{M}{3} = \int_{x_1}^{x_2} g_x\ dx \leq \sqrt{x_2 - x_1} \left( \int_\mathbb{R} |g_x|^2 dx\right)^{1/2}. \end{equation} On the other hand, since $M/3 \leq g(x) \leq 2M/3$ for all $x \in (x_1,x_2)$, then $g(M-g) \geq M^2/9$ on $(x_1,x_2)$. It follows that \begin{equation}\label{e:c1173} \int_\mathbb{R} g(x)(M - g(x)) dx \geq \int_{x_1}^{x_2} g(x) (M - g(x)) dx \geq \int_{x_1}^{x_2} \frac{M^2}{9} dx = \frac{M^2}{9} |x_2-x_1|. \end{equation} After squaring~\eqref{e:c1172} and inserting~\eqref{e:c1173} into it, we find \begin{equation} \frac{M^2}{9} \leq \frac{9}{M^2} \int_\mathbb{R} g(x) (M - g(x)) dx \int_\mathbb{R} |g_x|^2 dx, \end{equation} which yields the claim. \end{proof}
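\begin{remark}
The following is a purely illustrative numerical sanity check of the $t^{-2/3}$ decay in \Cref{p.max_decay}; it is not used anywhere in the arguments above. The short Python sketch below discretizes \eqref{e.maineq1} (with $\beta=1$) by an explicit Euler scheme; the domain size, grid, time step, and time horizon are ad hoc choices made only for this illustration, and the printed quantity $t^{2/3}\max_x g(t,x)$ should remain of order one if the predicted decay rate is correct.
\begin{verbatim}
import numpy as np

# Illustrative check of the t^{-2/3} decay of max_x g(t,x) for
#   dg/dt = (1/2) g_xx + g * (E - g),   E = int g^2 dx   (beta = 1).
# All discretization choices below are ad hoc and only for illustration.
L, n = 400.0, 4000                    # domain [-L/2, L/2] with n grid points
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2                    # small step for explicit Euler stability

g = np.where(np.abs(x) < 1.0, 1.0 - np.abs(x), 0.0)  # compactly supported q_0
g /= g.sum() * dx                     # normalize so that int g dx = 1

t, report = 0.0, 1.0
while t < 200.0:
    lap = np.zeros_like(g)
    lap[1:-1] = (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx ** 2
    E = (g ** 2).sum() * dx           # ||g(t,.)||^2
    g = g + dt * (0.5 * lap + g * (E - g))
    t += dt
    if t >= report:                   # t^(2/3) * max g should stay O(1)
        print(f"t = {t:7.2f}   t^(2/3) * max g = {t ** (2 / 3) * g.max():.3f}")
        report *= 2.0
\end{verbatim}
\end{remark}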
\section{Introduction} Live commenting is an emerging feature of online video sites that allows real-time comments to fly across the screen or roll on the right side of the videos, so that viewers can see comments and videos at the same time. Automatic live commenting aims to provide additional opinions on videos and to respond to live comments from other viewers, which encourages user engagement on online video sites. Automatic live commenting is also a good testbed for a model's ability to deal with multimodal information \cite{ma2018livebot}. It requires the model to understand vision, text, and audio, and to organize language to produce comments on the videos. Therefore, it is an interesting and important task for human-AI interaction. Although great progress has been made in multimodal learning \cite{liu2018simnet,wang2018video,whitehead2018incorporating}, live commenting is still a challenging task. Recent work on live commenting implements an encoder-decoder model to generate the comments \cite{ma2018livebot}. However, these methods do not model the interaction between the videos and the comments explicitly. Therefore, the generated comments are often so general that they could apply to any video, and they are irrelevant to the specific input video. Figure~\ref{fig:example} shows an example of the comments generated by an encoder-decoder model. It shows that the encoder-decoder model tends to output popular sentences, such as ``Oh my God !'', while the reference comment is much more informative and relevant to the video. The reason is that the encoder-decoder model focuses more on language modeling than on the interaction between the videos and the comments, so generating popular comments is a safe way for the model to reduce the empirical risk. As a result, the encoder-decoder model is more likely to generate a frequent sentence, rather than an informative and relevant comment. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{catlive_1.png}\\ \begin{tabular}{c p{4cm}} \\ \hline Case 1 & Oh My God !!!!!\\ Case 2 & So am I.\\ \hline Reference & The cat is afraid of spicy.\\ \hline \end{tabular} \caption{An example of the comments generated by the encoder-decoder model. Above is a frame extracted from a selected video. Below are two cases generated by the encoder-decoder model around the above frame, as well as a reference comment written by a human.} \label{fig:example} \end{figure} Another problem with current state-of-the-art live commenting models is that they do not take the audio into consideration. Audio, as an important part of videos, carries information that may not appear in the visual frames or the text. For example, when the video is about playing the piano, it is difficult to make a proper comment without the audio. The audio also includes dialogues or background music, which helps the model understand the story in the videos. Therefore, the audio should not be neglected if the model is to fully understand videos and make informative comments. In this work, we build a novel live commenting model to make more relevant comments. Based on these observations, we propose a multimodal matching transformer to learn the cross-modal interaction between videos and comments explicitly. The proposed multimodal matching network matches the given video with the most relevant comments from a candidate set, so it encourages the produced comments to be more informative and less general.
Our model is based on the transformer architecture, and it jointly learns the cross-modal representations of text, vision, and audio. We evaluate our model on a live commenting dataset~\cite{ma2018livebot}. Experiments show that the proposed multimodal matching transformer model is effective and significantly outperforms state-of-the-art methods. The contributions of this paper can be summarized as follows: \begin{itemize} \item We propose using the audio information, which is neglected by previous work, for the task of live commenting. \item We propose a novel multimodal matching network to capture the relationship among text, vision, and audio, based on the state-of-the-art transformer framework. \item Experiments show that the proposed multimodal matching model significantly outperforms the state-of-the-art methods. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{attn.pdf}\\ \caption{Architecture of the multi-head attention.} \label{fig:mult-head-attn} \end{figure} \section{Background} In this work, our proposed model is based on the Transformer network \cite{vaswani2017attention}. This section gives an introduction to the core modules of the Transformer network. \subsection{Input Representation} In the Transformer network architecture, there is no recurrence and no convolution. To leverage the order of the sequence, it introduces positional embeddings, each of which is computed based on the token's position in the sequence: \begin{align} \begin{split} \bm{PE}_{(pos,2i)}=\sin(pos/10000^{2i/d_{model}}) \\ \bm{PE}_{(pos,2i+1)}=\cos(pos/10000^{2i/d_{model}}) \end{split} \end{align} where $pos$ is the position in the sequence and $i$ is the dimension. Namely, the input representation of the Transformer network contains two parts: word embeddings $\{e_{w_{1}}, e_{w_{2}}, ..., e_{w_{n}}\}$ and positional embeddings $\{e_{p_{1}}, e_{p_{2}}, ..., e_{p_{n}}\}$, where $n$ is the length of the input sentence. The two parts are fused by an addition operation. \subsection{Multi-Head Attention} After obtaining outputs of previous layers, the Transformer network uses a multi-head attention mechanism to learn the context-aware representation for the sequence. Figure~\ref{fig:mult-head-attn} shows the architecture of the multi-head attention. The three matrices $Q$, $K$, and $V$ are inputs derived from previous layers, and $H$ is the output. The multi-head attention can be denoted as: \begin{align} \begin{split} H&=MultiHead(Q, K, V) \\ &=Concat(O_{1}, O_{2}, ..., O_{h})W^{O} \end{split} \end{align} where $W^{O}$ is a trainable parameter and $h$ is the number of parallel attention layers. $O_{i}$ is computed by Eq.~(\ref{eq:attn}). \begin{align} \label{eq:attn} O_{i}=Attention(QW^{Q}_{i}, KW^{K}_{i}, VW^{V}_{i}) \end{align} where $W^{Q}_{i}$, $W^{K}_{i}$ and $W^{V}_{i}$ are trainable parameters and $Attention(\cdot,\cdot,\cdot)$ is the scaled dot-product attention, which can be denoted as: \begin{align} Attention(Q, K, V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \end{align} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{model.pdf}\\ \caption{Architecture of the matching transformer. Part (a) illustrates the whole architecture. In this model, the matching layer consists of $N$ matching blocks and part (b) illustrates the structure of a matching block.} \label{fig:model} \end{figure*} \section{Multimodal Matching Transformer} Automatic live commenting aims to make comments on a video clip.
According to the analysis in \cite{ma2018livebot}, the live comments are relevant to not only the video clip, but also the surrounding comments from other viewers. In this work, we find it helpful to incorporate the audio information into the live commenting model. However, the surrounding comments, the vision part, and the audio part of the videos are from different modalities, and not trivial to model their relationships. To address these issues, we propose a multimodal matching model, which we denote as Matching Transformer. The model is based on the popular transformer architecture \cite{vaswani2017attention,devlin2018bert}. Our model can jointly learn the cross-modal representations of textual context, visual context, audio context, and model the relationships among them. \subsection{Task Definition} We formulate the automatic live commenting as a ranking problem. Formally, given a video $\bm{V}$ and a time-stamp $\bm{t}$, automatic live commenting aims to select a comment $\bm{y^{*}}$ from a candidate set $\bm{Y}$, which is most relevant to the video clip near the time-stamp based on the surrounding comments $\bm{C}$, the visual part $\bm{F}$ and the audio part $\bm{A}$. Concretely, we extract $N_{c}$ comments near the time-stamp $\bm{t}$ as $\bm{C}=\{c_{1}, c_{2},...,c_{N_{c}}\}$, where $c_{i}$ is a comment. For the vision part $\bm{F}$, we sample $N_{f}$ video frames near the time-stamp $\bm{t}$ as $\bm{F}=\{f_{1},f_{2},...,f_{N_{f}}\}$, where $f_{i}$ is an image and the interval between two images is 1 second. We convert a 5-second audio clip surrounding the time-stamp $\bm{t}$ to a log magnitude mel-frequency spectrogram, and it can be denoted as $\bm{A}=\{a_{1}, a_{2}, ..., a_{N_{a}}\}$, where $a_{i}$ is a vector and $N_{a}$ is the length of the audio clip. The candidate comment can be denoted as $\bm{y}=\{y_{1},y_{2},...,y_{k}\}$, where $y_{i}$ is a word and $k$ is the number of words. In this way, the task can be formulated as searching the most relevant comment to the video clip in the multimodal semantic space: \begin{align} \bm{y^*}=\argmax_{\bm{y}\in\bm{Y}}{S_{m}(\bm{C}, \bm{F}, \bm{A}, \bm{y})} \end{align} where $S_{m}$ is a model to produce the similarity between $(\bm{C}, \bm{F}, \bm{A})$ and $\bm{y}$. \subsection{Model Overview} Figure~\ref{fig:model} shows the architecture of the Matching Transformer. The model consists of three components: (1) \textbf{an encoder layer} converts different modalities of a video clip (including surrounding comments, the vision part of the video, the audio part of the video) and a candidate comment into vectors; (2) \textbf{a matching layer} iteratively learns the attention-aware representation for each modality; (3) \textbf{a prediction layer} outputs a score measuring the matching degree between a video clip and a comment. Formally, given different contexts of a video clip $(\bm{C}, \bm{F}, \bm{A})$ and a comment $\bm{y}$, the model can be denoted as: \begin{align} s=S_{m}(\bm{C}, \bm{F}, \bm{A}, \bm{y}) \end{align} Next, we introduce each layer in detail. \subsection{Encoder Layer} As shown in part (a) of Figure~\ref{fig:model}, our model contains three kinds of encoder: a comment encoder, a vision encoder and an audio encoder. These encoders convert a comment, a vision clip and an audio clip into vectors respectively. In our model, the existing surrounding comments $\bm{C}$ and the candidate comment $\bm{y}$ share the same comment encoder. 
\paragraph{Comment Encoder} In our model, $N_{c}$ comments near the time-stamp, $\bm{C}=\{c_{1}, c_{2},...,c_{N_{C}}\}$, are first concatenated into one comment $\bm{C}=\{w_{1},w_{2},...,w_{l_{c}}\}$, where $w_{i}$ is the i-th word in the comment and $l_{c}$ is the total number of words. Then, the comment encoder converts words of the comment into vectors $\{e_{c_{1}},e_{c_{2}},...,e_{c_{l_{c}}}\}$ by looking up $M$, where $M\in \bm{R}^{d\times |V|}$ is the embedding table. $d$ is the dimension of the embedding and $|V|$ is the size of the vocabulary. Similarly, the comment encoder also converts the candidate comment $\bm{y}$ into vectors: $\{e_{y_{1}},e_{y_{2}},...,e_{y_{k}}\}$. \paragraph{Vision Encoder} The vision encoder converts a vision clip $\bm{F}=\{f_{1},f_{2},...,f_{N_{F}}\}$ into vectors $\{e_{f_{1}},e_{f_{2}},...,e_{f_{l_{f}}}\}$ by a pre-trained model, where $l_{f}$ is equal to $N_{f}$. Similar to \cite{ma2018livebot}, we leverage a pre-trained 18-layer ResNet \cite{he2016deep} to encode the frames within a vision clip. It can be denoted as: \begin{align} e_{f_{i}}=ResNet(f_{i}) \end{align} \paragraph{Audio Encoder} For audio encoding, we first slice a 5-second audio clip $\bm{A}=\{a_{1}, a_{2}, ..., a_{N_{a}}\}$ into five audio frame sets, $\{\{a_{1}^t, a_{2}^t, ...,a_{l_{a}^{t}}^t\}\}_{t=1}^5$, based on the timestamp. Then, we use a GRU \cite{chung2014empirical} to encode each set. It can be denoted as: \begin{align} h_{i}^{t}=GRU(a_{i}^{t}, h_{i-1}^{t}) \end{align} At last, we use the last hidden state of each set $\{h_{l_{a}^{1}}^{1}, h_{l_{a}^{2}}^{2}, ..., h_{l_{a}^{5}}^{5}\}$ as the representation of the audio clip: $\{e_{a_{1}},e_{a_{2}},...,e_{a_{l_{a}}}\}$. \paragraph{Positional Embedding} To exploit the temporal information in each modality, following \cite{vaswani2017attention}, we also use positional embedding (PE) by adding it to the output of each encoder. \subsection{Matching Layer} Inspired by the recent successful deep learning frameworks \cite{he2016deep,vaswani2017attention}, we adopt a matching layer which consists of $N$ matching blocks to iteratively learn the attention-aware representation for each modality. The structure of a matching block is shown in part (b) of Figure~\ref{fig:model}. Each matching block is composed of four parts: a multi-head self-attention, a multi-head cross attention and two position-wise FNN. Compared to the basic block defined in \cite{vaswani2017attention}, our matching block adds a multi-head cross attention and a position-wise FNN. We use these auxiliary mechanisms to learn attention-aware representation from other modalities. For simplicity, we take the candidate comment as the example to illustrate the matching layer. Formally, in the $t$-th block, given the output of previous matching block corresponding to the candidate comment: $H_{y}^{t-1}=\{h_{y_{1}}^{t-1}, h_{y_{2}}^{t-1},...,h_{y_{k}}^{t-1}\}$, we first utilize a multi-head self-attention and a position-wise FNN to learn the context of the candidate comment $\widehat{H}_{y}^{t}$: \begin{align} \label{eq:multihead} \bar{H}_{y}^{t}=&MultiHead(H_{y}^{t-1},H_{y}^{t-1},H_{y}^{t-1}) \\ \label{eq:mlp} \widehat{H}_{y}^{t}&=MLP(ReLU(MLP(\bar{H}_{y}^{t}))) \end{align} Similar to Eq.~(\ref{eq:multihead}) and Eq.~(\ref{eq:mlp}), we also compute the context vectors of surrounding comment $\widehat{H}_{c}^{t}$, visual clip $\widehat{H}_{f}^{t}$, and audio clip $\widehat{H}_{a}^{t}$. 
Then we employ a multi-head cross attention to learn the attention-aware representation from each modality:
\begin{align}
\widetilde{H}_{yc}^{t}=MultiHead(\widehat{H}_{y}^{t},\widehat{H}_{c}^{t},\widehat{H}_{c}^{t}) \\
\widetilde{H}_{yf}^{t}=MultiHead(\widehat{H}_{y}^{t},\widehat{H}_{f}^{t},\widehat{H}_{f}^{t}) \\
\widetilde{H}_{ya}^{t}=MultiHead(\widehat{H}_{y}^{t},\widehat{H}_{a}^{t},\widehat{H}_{a}^{t})
\end{align}
After getting these three attention-aware representations, we use an MLP to build a fusional gate and combine them with a weighted sum:
\begin{align}
&g_{c}=MLP(\widetilde{H}_{yc}^{t},\widetilde{H}_{yf}^{t},\widetilde{H}_{ya}^{t}) \\
\begin{split}
\widetilde{H}_{y}^{t}&=g_{c_{[:d]}}\odot \widetilde{H}_{yc}^{t}+g_{c_{[d:2d]}}\odot \widetilde{H}_{yf}^{t}\\
&+g_{c_{[2d:]}}\odot \widetilde{H}_{ya}^{t}
\end{split}
\end{align}
where $\odot$ denotes the element-wise product and $d$ is the dimension of $\widetilde{H}_{yc}^{t}$, $\widetilde{H}_{yf}^{t}$, and $\widetilde{H}_{ya}^{t}$. Finally, we feed $\widetilde{H}_{y}^{t}$ into a position-wise FNN to produce the output of the $t$-th matching block corresponding to the candidate comment:
\begin{align}
\label{eq:pos-fnn}
H_{y}^{t}=MLP(ReLU(MLP(\widetilde{H}_{y}^{t})))
\end{align}
As described above, Eq.~(\ref{eq:multihead})-Eq.~(\ref{eq:pos-fnn}) illustrate how to compute the representation of the candidate comment $H_{y}^{t}$. In our implementation, we compute the representations of the surrounding comment $H_{c}^{t}$, the vision clip $H_{f}^{t}$ and the audio clip $H_{a}^{t}$ in the same way.
\subsection{Prediction Layer}
The prediction layer outputs a score measuring the matching degree between $(\bm{C}, \bm{F},\bm{A})$ and $\bm{y}$. In this layer, we first employ a weighted pooling to convert the output of the last matching block to a fixed-length vector:
\begin{align}
&V_{y}=A_{y}^{p}H_{y}^{N} \\
A_{y}^{p}=softmax(&ReLU(H_{y}^{N}\bm{W}_{1}^{p}+\bm{b}_{1}^{p})\bm{W}_{2}^{p}+\bm{b}_{2}^{p})
\end{align}
where $\bm{W}_{1}^{p}$, $\bm{W}_{2}^{p}$, $\bm{b}_{1}^{p}$ and $\bm{b}_{2}^{p}$ are trainable parameters. Similarly, we get the vectors $V_{c}$, $V_{f}$ and $V_{a}$ for $\bm{C}$, $\bm{F}$ and $\bm{A}$ respectively. Then, we adopt a fusional gate to combine $V_{c}$, $V_{f}$ and $V_{a}$ into $V_{context}$:
\begin{align}
g_{v}&=MLP(V_{c}, V_{f}, V_{a}) \\
\begin{split}
V_{context}&=g_{v_{[:d]}}\odot V_{c}+g_{v_{[d:2d]}}\odot V_{f}\\
&+g_{v_{[2d:]}}\odot V_{a}
\end{split}
\end{align}
where $d$ is the dimension of $V_{c}$, $V_{f}$ and $V_{a}$. Finally, we use the cosine distance to measure the similarity between $V_{context}$ and $V_{y}$:
\begin{align}
s=\cos{(V_{context}, V_{y})}
\end{align}
\subsection{Training}
To learn $S_{m}(\cdot, \cdot, \cdot, \cdot)$, we use the max-margin loss function, which can be formulated as:
\begin{align}
\begin{split}
L(\theta)&=\frac{1}{N}\sum_{i=1}^{N}\max(0, M \\
&+S_{m}(\bm{F}^{(i)},\bm{A}^{(i)},\bm{C}^{(i)},{\bm{y}^{(i)}}^{-};\theta) \\
&-S_{m}(\bm{F}^{(i)},\bm{A}^{(i)},\bm{C}^{(i)},{\bm{y}^{(i)}}^{+};\theta))
\end{split}
\end{align}
where $N$ is the number of instances in the training set, $(\bm{F}^{(i)},\bm{A}^{(i)},\bm{C}^{(i)},{\bm{y}^{(i)}}^{-})$ is a negative sample and $(\bm{F}^{(i)},\bm{A}^{(i)},\bm{C}^{(i)},{\bm{y}^{(i)}}^{+})$ is a positive sample. $M$ is the margin, which needs to be specified manually, and $\theta$ denotes all the trainable parameters of our model. During training, we employ Adam \cite{kingma2014adam} as the optimizer.
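To make the gating and the training objective concrete, a minimal PyTorch-style sketch is given below. It is only illustrative: the margin value follows the experimental settings, while the sigmoid activation and all names are our own choices for exposition.
\begin{verbatim}
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Gate g = MLP([v_c; v_f; v_a]) split into three d-dim chunks,
    used to weight the three modality vectors (illustrative sketch)."""
    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Linear(3 * d, 3 * d)
        self.d = d

    def forward(self, v_c, v_f, v_a):
        g = torch.sigmoid(self.mlp(torch.cat([v_c, v_f, v_a], dim=-1)))
        g_c, g_f, g_a = g.split(self.d, dim=-1)
        return g_c * v_c + g_f * v_f + g_a * v_a

def max_margin_loss(pos_scores, neg_scores, margin=0.1):
    """Hinge loss: pushes s(positive) above s(negative) by at least the margin."""
    return torch.clamp(margin + neg_scores - pos_scores, min=0).mean()
\end{verbatim}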
\section{Experiments} \subsection{Dataset} We evaluate our model on a live commenting dataset\footnote{\url{https://github.com/lancopku/livebot}} that is released by \cite{ma2018livebot}. The live commenting dataset is a large-scale video-comment dataset. It contains 2,361 videos and 895,929 comments. The data is collected from a popular Chinese video streaming website, Bilibili. Therefore, it has strong authenticity and practicability. In our experiment, we use the same partition as in \cite{ma2018livebot}. The detailed statistics of the dataset is shown in Table~\ref{tab:dataset}. \begin{table} \centering \caption{Statistics of the Live Comment Dataset.} \begin{tabular}{lcccc} \toprule &\textbf{Train}&\textbf{Dev}&\textbf{Test}&\textbf{Total} \\ \midrule \#Video&2,161&100&100&2,361 \\ \#Comment&820k&42k&34k&896k \\ \#Word&4,419k&248k&193k&4,860k \\ Avg. Words&5.39&5.85&5.58&5.42 \\ Hours&103.81&5.02&5.01&113.84 \\ \bottomrule \end{tabular} \label{tab:dataset} \end{table} \begin{table*}[t] \centering \caption{The performance comparison on the live commenting dataset (\textbf{Recall@k}, \textbf{MRR}: higher is better; \textbf{MR}: lower is better). Our matching transformer significantly outperforms the baselines in terms of all metrics. Meanwhile, our model achieves better performance than baselines by using the same two modalities.} \begin{tabular}{l|ccc|ccccc} \toprule \textbf{Model}&\textbf{Text}&\textbf{Vision}&\textbf{Audio}&\textbf{Recall@1}&\textbf{Recall@5}&\textbf{Recall@10}&\textbf{MR}&\textbf{MRR} \\ \midrule S2S& \checkmark & \checkmark&&12.89&33.78&50.29&17.05&0.2454 \\ Fusional RNN& \checkmark& \checkmark&&17.25&37.96&56.10&16.14&0.2710 \\ Unified Transformer& \checkmark & \checkmark&&18.01&38.12&55.78&16.01&0.2753 \\ \hline Matching Transformer-C&\checkmark & & &18.02&42.83&59.37&12.28&0.3087 \\ Matching Transformer-CF&\checkmark & \checkmark &&22.77&46.71&62.87&11.19&0.3519 \\ Matching Transformer-CFA& \checkmark & \checkmark & \checkmark &\textbf{23.52}&\textbf{46.99}&\textbf{64.24}&\textbf{11.05}&\textbf{0.3596} \\ \bottomrule \end{tabular} \label{tab:result} \end{table*} \begin{table*}[t] \centering \caption{Effect of different modalities used in the Matching Transformer (\textbf{Recall@k}, \textbf{MRR}: higher is better; \textbf{MR}: lower is better). It shows that more modalities always lead to better performance, which indicates that the proposed model can capture the semantic information of different modalities to help the live commenting task.} \begin{tabular}{c|ccc|ccccc} \toprule &\textbf{Text}&\textbf{Vision}&\textbf{Audio}&\textbf{Recall@1}&\textbf{Recall@5}&\textbf{Recall@10}&\textbf{MR}&\textbf{MRR} \\ \midrule \multirow{3}{*}{Single-Modal} & \checkmark & & &18.02&42.83&59.37&12.28&0.3087 \\ & & \checkmark & &18.55&38.38&50.98&16.33&0.2920 \\ & & & \checkmark &17.95&36.89&50.52&15.33&0.2861 \\ \hline \multirow{3}{*}{Double-Modal} & \checkmark & \checkmark & & 22.77&46.71&62.87&11.19&0.3519 \\ & \checkmark & & \checkmark& 19.93&44.39&59.68&12.21&0.3276 \\ & & \checkmark & \checkmark&18.03&39.00&52.77&15.60&0.2933 \\ \hline Triple-Modal & \checkmark& \checkmark & \checkmark &\textbf{23.52}&\textbf{46.99}&\textbf{64.24}&\textbf{11.05}&\textbf{0.3596} \\ \bottomrule \end{tabular} \label{tab:feature} \end{table*} \begin{table} \centering \caption{Human evaluation results of different models (\textbf{Rel} refers to the relevance score; \textbf{Cor} refers to the correctness score; \textbf{Human} means the natural comments in the dataset.). 
It shows that the produced comments of our model are more relevant than those of the baseline models. Besides, our model can produce more correct and proper comments.}
\begin{tabular}{lcc}
\toprule
\textbf{Model}&\textbf{Rel}&\textbf{Cor} \\
\hline
S2S&2.23&2.91 \\
Fusional RNN&2.95&3.34 \\
Unified Transformer&3.07&3.45 \\
Matching Transformer&\textbf{3.25}&\textbf{3.57} \\
\hline
Human&3.31&4.11 \\
\bottomrule
\end{tabular}
\label{tab:human}
\end{table}
\subsection{Evaluation Metric}
Following the previous work~\cite{das2017visual,ma2018livebot}, we adopt \textbf{Recall@k}, \textbf{Mean Rank} (\textbf{MR}) and \textbf{Mean Reciprocal Rank} (\textbf{MRR}) for automatic evaluation, which are standard evaluation metrics for the ranking task. For testing, we construct a candidate comment set in which each video clip has 100 candidate comments, exactly the same as in the previous work~\cite{ma2018livebot} for fair comparison. The comments in the candidate comment set comprise three parts: (1) the ground-truth comments; (2) the top 20 popular comments; (3) randomly selected comments. We evaluate our model on the testing set. In addition, we also test the performance of our model by human evaluation. Following \cite{ma2018livebot}, we use the metrics of \textbf{relevance} (\textbf{Rel}) and \textbf{correctness} (\textbf{Cor}) to evaluate our model. Relevance measures the relevance between the produced comments and the videos, and correctness measures the confidence that the produced comments are made by a human in the context of the video. We do not evaluate the fluency of the produced comments, because our model selects a comment from a candidate comment set, which is naturally fluent. For both relevance and correctness, we use a score $s\in\{1, 2, 3, 4, 5\}$ to denote the degree, the higher the better. When testing, three human annotators are asked to give a score to evaluate the top one comment produced by our model, and we use the average score as the final result.
\begin{figure*}[t]
\centering
\subcaptionbox{Frame 1.}{\includegraphics[width=0.31\linewidth]{4.png}}
\subcaptionbox{Frame 2.}{\includegraphics[width=0.31\linewidth]{5.png}}
\subcaptionbox{Frame 3.}{\includegraphics[width=0.31\linewidth]{6.png}}\\
\begin{tabular}{cl}
\includegraphics[width=0.8\linewidth]{case.pdf}
\end{tabular}
\caption{An example of the produced comments of different models on a video. Above are three selected frames in the video. Below are the existing comments in the video and the produced comments of different models.}
\label{fig:exp_pic}
\end{figure*}
\subsection{Settings}
In our experiments, the word embeddings and video frame vectors have 512 dimensions, while the audio frame vectors have 64 dimensions. The GRU for the audio encoder has 512 dimensions. For the positional embedding, we use the fixed sinusoidal positional embedding and set its dimension to 512. The word embeddings are randomly initialized and updated during training, while the video frame vectors and audio frame vectors are fixed. There are 6 matching blocks in the matching layer. In each matching block, the number of heads in the multi-head attention is 8 and the dimension of the position-wise FNN is 2,048. The margin $M$ is set to 0.1 in our experiments. We employ Adam \cite{kingma2014adam} for training, whose default hyper-parameters $\beta_{1}$ and $\beta_{2}$ are set to 0.9 and 0.999 respectively. The initial learning rate of Adam is set to 0.00009. The learning rate is halved when the accuracy on the development set drops.
We also employ a dropout strategy \cite{srivastava2014dropout} and layer normalization \cite{ba2016layer} to reduce the risk of over-fitting. The dropout rate is set to 0.2 and the batch size is 64. For pre-processing, we use the Stanford tokenizer \cite{manning2014stanford} to tokenize the comments and audio feature extractor\footnote{\url{https://github.com/tensorflow/models/tree/master/research/audioset}} released by \cite{gemmeke2017audio} to process the audio clip. During training, we draw 1 negative sample for each video clip. \subsection{Baselines} \begin{itemize} \item \textbf{S2S} \cite{venugopalan2015sequence} is a traditional sequence to sequence model without the attention mechanism. Specifically, the model uses two encoders to encode visual and textual information respectively. During decoding, the decoder uses the concatenation of the outputs from the two encoders as input. \item \textbf{Fusional RNN} \cite{ma2018livebot} consists of three parts: a video encoder, a comment encoder and a comment decoder. The three parts are all RNN-based networks and they are related by an attention layer. This model uses the visual and textual context as input. \item \textbf{Unified Transformer} \cite{ma2018livebot} is a transformer-based generative model. Similar to Fusional RNN, this model is comprised of three parts: a video encoder, a comment encoder and a comment decoder. The difference is that these three parts are all stacked attention-based transformer blocks. \end{itemize} \subsection{Overall Results} Table~\ref{tab:result} shows the automatic evaluation results of the baseline models and our proposed models. The baselines (S2S, Fusional RNN and Unified Transformer) only use the text and vision of the videos. Our matching transformer leverages three modalities (text, vision, and audio) and significantly outperforms the baselines in terms of all metrics. Moreover, we also report the results of our model with only one modality (less than baselines) and two modalities (equal to baselines). It shows that the matching transformer achieves comparable performance with the baselines using only one modality. Meanwhile, our model achieves better performance than baselines by using the same two modalities, which verifies the efficiency of the proposed model. Finally, the triple-modality model significantly outperforms the baselines, achieving +5.51 points on Recall@1, +8.87 points on Recall@5 and +8.46 points on Recall@10. \subsection{Effect of Different Modalities} We also would like to know how different kinds of modalities contribute to our proposed model. Therefore, we conduct ablation experiments by removing different modalities from our model. Table~\ref{tab:feature} summarizes the results of the ablation experiments. Under the single-modality setting, it shows that the model with the modality of text achieves better performance over the other two modalities. Among the possible alternatives of double-modality, the combination of text and vision obtains the best performance. Finally, the model with triple-modalities get the highest scores in terms of all the automatic metrics. Besides, it is observed that more modalities always lead to better performance, which indicates that the proposed model can capture the semantic information of different modalities to help the live commenting task. \subsection{Human Evaluation} We randomly sample 100 video clips from the test set to evaluate our model in terms of the relevance and the correctness. 
For both metrics, we use a score range from 1 to 5 to denote the degree, where higher is better. We ask three human annotators to give a score that evaluates the top one comment produced by our model, and we use the average score as the reported result. The result is shown in Table~\ref{tab:human}. It shows that the produced comments of our model are more relevant than those of the baseline models. Besides, our model can produce more correct and proper comments. The relevance and correctness scores are also closer to those of the comments made by humans. This result indicates that our model is able to produce comments relevant to the videos by modeling the relevance among different modalities.
\subsection{Case Study}
To further compare our model with the baselines, we provide an example as a case study. This example is about a Chinese food called soup dumplings. As illustrated in Figure~\ref{fig:exp_pic}, this example consists of three frames, three surrounding comments and three target comments. Since the audio cannot be displayed, we do not show the audio part of the video. The surrounding comments are in the first row of the table below the three frames. The second row contains three target comments, which are naturally made by human viewers and correspond to a specific time-stamp. We compare the produced comments of different models with the target comments. When we select the top one output as the produced comment, both the unified transformer and the matching transformer can produce a comment relevant to the target comments. However, the produced comments of both the fusional RNN and S2S are of low relevance to the video. The output of S2S is about eggs and the output of the fusional RNN is about dancing, both of which are far removed from the video clip. Furthermore, we compare the produced comments of the matching transformer and the unified transformer. According to the case in Figure~\ref{fig:exp_pic}, the comment from the matching transformer is clearly more relevant to the video clip. The matching transformer can make comments about the soup dumplings, which are exactly the key point of the video clip, while the unified transformer only produces comments about how to handle dirty soup dumplings that fell on the ground, a situation that does not appear in the video. In conclusion, the comments made by our matching transformer are more relevant and correct than those of the other baselines.
\section{Related Work}
Automatic live commenting aims to comment on a video clip based on the surrounding comments, the video clip itself and the corresponding audio clip. This task is similar to image captioning and video captioning, both of which have attracted much attention for a long time.
\paragraph{Image Captioning} Image captioning involves taking an image, analyzing its visual content, and generating a textual description \cite{bernardi2016automatic}. \cite{xu2015show} adopt a retrieval-based model to produce a description of an image from a multimodal space. \cite{yagcioglu2015distributed} propose a retrieval approach based on the features extracted by VGG. \cite{vinyals2015show} use a CNN-based model to encode the image and an LSTM to generate the description. \cite{liu2018simnet} utilize a merging gate to merge the information in the image and the topics.
\paragraph{Video Captioning} Video captioning aims to automatically generate natural language sentences that describe the content of a video \cite{aafaq2018video}.
\cite{venugopalan2014translating} present a CNN-LSTM architecture for generating natural language descriptions of videos. \cite{srivastava2015unsupervised} use one LSTM to extract features from video frames and then pass the feature vector through another LSTM for decoding. \cite{wang2018video} propose a different neural network architecture based on reinforcement learning for video captioning. \cite{whitehead2018incorporating} release a knowledge-rich video captioning dataset and propose a new knowledge-aware video description network.
\paragraph{Matching Model} A matching model aims to compute the relation between two objects. For text-to-text matching, \cite{chen2017enhanced} adopt an LSTM-based model with cross-attention to predict the relation between two sentences. Inspired by the Transformer, \cite{yu2018qanet} use self-attention and cross-attention to encode two sentences and model the relation between them. For text-to-image matching, \cite{gan2019multi} propose a multi-step reasoning model for visual dialog, which measures the similarity between text-image pairs. \cite{anderson2018bottom} present a bottom-up and top-down attention mechanism for image captioning and visual question answering. For text-to-audio matching, \cite{aytar2017see} use transfer learning to learn aligned representations for image, sound and text. \cite{elizalde2019cross} propose a framework that learns joint embeddings from a shared lexico-acoustic space for text and audio. Despite its similarity to image captioning and video captioning, automatic live commenting has its own characteristics. Compared to existing research, it has more diverse contexts, including textual context, visual context and audio context, which makes it more difficult to tackle. To this end, we propose a multimodal matching transformer model. It can jointly learn the representations of the three modalities and the relations among them. Therefore, the proposed model can better integrate information from different angles.
\section{Conclusion and Future Work}
In this paper, we propose a multimodal matching transformer model for automatic live commenting. It can jointly learn the representations of the visual context, audio context, and textual context. In addition, the matching transformer model also explicitly leverages the relations among the three modalities to enrich the representation of each one. We evaluate our model on a publicly available live commenting dataset. Experiments show that the proposed multimodal matching transformer significantly outperforms the state-of-the-art approaches. For future research, we will further investigate the multimodal interactions among vision, audio, and text in real-world applications. Moreover, we believe multimodal pre-training is a promising direction to explore, where tasks like image captioning and video captioning will benefit from pre-trained models.
\section{Introduction}
It is quite evident that the universe has undergone a smooth transition from a decelerated phase to its present accelerated phase of expansion \cite{Perlmutter}-\cite{Reiss}. Discovering the source of cosmic acceleration is one of the biggest challenges of modern cosmology. This remarkable discovery has led cosmologists to hypothesize the presence of an unknown form of energy called dark energy (DE), which is an exotic form of matter with negative pressure \cite{copeland}. This surprising finding has now been confirmed by more recent data coming from SNeIa surveys \cite{Knop03,Tonry03,Barris04,Riess04,R06,SNLS,ESSENCE,D07}, large scale structure \cite{Dode02,Perci02,Szal03,Hawk03,pope04} and the cosmic microwave background (CMBR) anisotropy spectrum \cite{Boom,Stomp01,Netter02,Rebo04,wmap,WMAP,WMAP3}. All current observations are consistent with a cosmological constant (CC); while this is in some sense the most economical possibility, the CC has its own theoretical and naturalness problems \cite{Weinberg1}-\cite{Martin}, so it is worthwhile to consider alternatives. The above observational data complement each other and indicate that dark energy (DE) is the dominant component of the present universe, occupying about $73\%$ of the energy of our universe, while dark matter (DM) occupies $23\%$, and the usual baryonic matter about $4\%$. There are prominent candidates for DE such as the cosmological constant \cite{Sahni, Weinberg}, a dynamically evolving scalar field (like quintessence) \cite{Caldwell, Zlatev} or a phantom (a field with negative energy) \cite{Caldwell2} that explain the accelerating cosmic expansion. Meanwhile, the accelerating expansion of the universe can also be obtained through modified gravity \cite{Zhu}, brane cosmology and so on \cite{Zhu1}--\cite{set10}. The DE can track the evolution of the background matter in the early stage, and only recently does it acquire negative pressure and become dominant. Thus, its current state is nearly independent of the initial conditions \cite{Lyth}--\cite{Easson}. On the other hand, to explain the early and late time acceleration of the universe, it is most often the case that such fields interact with matter: directly due to a matter Lagrangian coupling, indirectly through a coupling to the Ricci scalar, or as the result of quantum loop corrections \cite{Damouri}--\cite{Biswass}. If the scalar field self-interactions are negligible, then the experimental bounds on such a field are very strong, requiring it to either couple to matter much more weakly than gravity does, or to be very heavy \cite{Uzan}--\cite{Damourm}. Unfortunately, such a scalar field is usually very light and its coupling to matter should be tuned to extremely small values in order not to conflict with the Equivalence Principle \cite{nojiri}. The Brans-Dicke theory of gravity is one of the most popular modified gravity theories; it was introduced by Brans and Dicke \cite{b1} and was related to earlier work of Jordan and Fierz \cite{JFBD} on developing an alternative to GR. It is widely used to describe a modification of Einstein's original formulation of General Relativity. This theory can be formulated in the Einstein and Jordan frames, which are related by a conformal transformation. Although the two frames describe the same physics and are equivalent, the stability of the field equations is not the same. Here we employ a dynamical system and phase space approach as a robust tool to investigate this issue.
We concentrate on the Brans-Dicke theory, but the results can easily be generalized. \section{Mapping between Brans-Dicke, chameleon field and general scalar tensor theory} Scalar-tensor theories are usually formulated in two different frameworks, the Jordan Frame (JF) and the Einstein Frame (EF). It is easier to work in the EF. We start with the usual Scalar Tensor Theory (STT) action in (JF) \cite{Gilles} \begin{align}\label{S_JF} &S={1\over 16\pi G_*} \int d^4x \sqrt{-g} \Bigl(F(\Phi)~R - Z(\Phi)~g^{\mu\nu} \partial_{\mu}\Phi \partial_{\nu}\Phi\\ \nonumber &- 2U(\Phi) \Bigr) + S_m[\psi_m; g_{\mu\nu}]\ . \end{align} Here, $G_*$ denotes the bare gravitational coupling constant , $R$ is the scalar curvature of $g_{\mu\nu}$, and $g$ its determinant. The above equations are written in the so-called Jordan frame (JF). By conformal transformation of the metric and a redefinition of the scalar it is possible to obtain field equations in (EF) . Let us call $g^*_{\mu\nu}$ and $\varphi$ the new variables, and define \begin{mathletters} \begin{eqnarray} g^*_{\mu\nu} &\equiv& F(\Phi)~g_{\mu\nu}\ , \label{g*}\\ \left({d\varphi\over d\Phi}\right)^2 &\equiv& {3\over 4}\left({d\ln F(\Phi)\over d\Phi}\right)^2 + {Z(\Phi)\over 2F(\Phi)}\, \label{varphi}\\ A(\varphi) &\equiv& F^{-1/2}(\Phi)\ ,\label{A}\\ 2V(\varphi) &\equiv& U(\Phi)~F^{-2}(\Phi)\ .\label{V} \end{eqnarray} \label{2.4} \end{mathletters} Action (\ref{S_JF}) then takes the form \begin{align}\label{S_EF} &S={1\over 4\pi G_*} \int d^4x \sqrt{-g_*} \left({R^*\over 4} - {1\over 2} g_*^{\mu\nu} \partial_{\mu}\varphi \partial_{\nu}\varphi - V(\varphi) \right)\\ \nonumber &+ S_m[\psi_m; A^2(\varphi)~g^*_{\mu\nu}]\ , \end{align} where $g_*$ is the determinant of $g^*_{\mu\nu}$, $g_*^{\mu\nu}$ its inverse, and $R^*$ its scalar curvature. Note that the above action looks like the action of chameleon gravity \cite{Hees} where originally proposed by \cite{Khoury}. Note that matter is explicitly coupled to the scalar field $\varphi$ through the conformal factor $A^2(\varphi)$. Brans-Dicke theory as a particular case of scalar tensor theory of gravity can be derived by considering, $F(\Phi) = \Phi$ , $Z(\Phi) = \omega_{BD}/\Phi$ and $ 2ZF+3(dF/d\Phi)^2=2\omega_{BD} + 3$. 
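As a quick symbolic check of this specialization, the following minimal SymPy sketch (purely illustrative) verifies that these choices indeed give $2ZF+3(dF/d\Phi)^{2}=2\omega_{BD}+3$ and, via Eq.~(\ref{varphi}), $(d\varphi/d\Phi)^{2}=(2\omega_{BD}+3)/(4\Phi^{2})$:
\begin{verbatim}
import sympy as sp

Phi, omega = sp.symbols('Phi omega_BD', positive=True)
F = Phi                       # F(Phi) = Phi
Z = omega / Phi               # Z(Phi) = omega_BD / Phi

# the combination quoted in the text
print(sp.simplify(2*Z*F + 3*sp.diff(F, Phi)**2))        # 2*omega_BD + 3

# Eq. (varphi): (dvarphi/dPhi)^2 = (3/4)(d ln F/dPhi)^2 + Z/(2F)
dphi_sq = sp.Rational(3, 4)*sp.diff(sp.log(F), Phi)**2 + Z/(2*F)
print(sp.simplify(dphi_sq - (2*omega + 3)/(4*Phi**2)))  # 0
\end{verbatim}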
Thus the field equations in (JF) will be
\begin{eqnarray}\label{fried1}
3H^2=\frac{8\pi G_*\rho}{\Phi}-3H\frac{\dot{\Phi}}{\Phi} +\frac{\omega_{BD}}{2}\frac{\dot{\Phi}^{2}}{\Phi^{2}}+\frac{U(\Phi)}{\Phi},
\end{eqnarray}
\begin{eqnarray}\label{fried2}
\dot{H}=-\frac{4\pi G_*(\rho+P)}{\Phi}+H\frac{\dot{\Phi}}{2\Phi} -\frac{\omega_{BD}}{2}\frac{\dot{\Phi}^{2}}{\Phi^{2}}-\frac{\ddot{\Phi}}{2\Phi}
\end{eqnarray}
\begin{eqnarray}\label{phiequatio}
\ddot{\Phi}+3H\dot{\Phi}=\frac{3\Phi(\dot{H}+2H^{2})}{\omega_{BD}}+\frac{\dot{\Phi}}{2\Phi}-\frac{\Phi}{\omega_{BD}}\frac{dU(\Phi)}{ d\Phi}
\end{eqnarray}
\begin{eqnarray}\label{phiequation}
\dot{\rho}+3H(\rho+P)=0
\end{eqnarray}
The variables of Brans-Dicke theory in (EF) can be related to their counterparts in (JF) as
\begin{eqnarray}\label{conformal1}
H_{*} &=&\Phi^{\frac{-1}{2}}(H+\frac{\dot{\Phi}}{2\Phi})\\
\rho_{*}& =&\frac{\rho}{\Phi^{2}}\\
\frac{d\varphi}{dt_{*}}&=&-\frac{\dot{\Phi}}{2\beta\Phi^{\frac{3}{2}}}\label{conformal3}
\end{eqnarray}
where
\begin{equation}\label{bdp}
\beta=({2\omega_{BD}+3})^{\frac{-1}{2}}
\end{equation}
Hence the field equations in (EF) would be
\begin{eqnarray}\label{tm2}
3H^{2}_{*} = 8 \pi G_{*}\rho_*+\dot{\varphi}^{2}+2V(\varphi)
\end{eqnarray}
\begin{eqnarray}\label{tm3}
&&\dot{H_{*}} = -4\pi G_{*}(\rho_*+P_{*})-\dot{\varphi}^{2}\\
&&\ddot{\varphi}+3H_{*}\dot{\varphi}+\frac{d V(\varphi)}{d\varphi} = -4\pi G_{*}\beta(\rho_*-3P_{*})\label{tm4}
\end{eqnarray}
Here, a dot denotes a derivative with respect to $t_{*}$. Note that the field equations (\ref{tm2}) to (\ref{tm4}) are similar to those obtained for chameleon gravity \cite{Hees}. We can also derive the equations by replacing the physical time $t_{*}$ with the conformal time $\eta_{*}$. Since $d\eta=\frac{dt}{a}$ and $d\eta_{*}=\frac{dt_{*}}{a_{*}}$, we have $\mathcal{H}=\frac{d\ln a}{d\eta}=aH$ and $\mathcal{H_{*}}=\frac{d\ln a_{*}}{d\eta_{*}}=a_{*}H_*$. Also
\begin{eqnarray}\label{hh}
\mathcal{H}_*=\frac{a'_*}{a_*}=\mathcal{H}-\frac{d \ln (A)}{ d\varphi}\,\varphi^{'}=\mathcal{H}-\beta\varphi^{'}, \label{changedeltaEFJF}
\end{eqnarray}
where a prime denotes a derivative with respect to the conformal time $\eta$. The conformal time is the same in both frames, $\eta_{*}\equiv\eta$. Thus, the field equations for Brans-Dicke theory in (EF) will be
\begin{eqnarray}
&3\mathcal{H}_*^2-\varphi'^2\,=\,2\tilde{\rho_*} +2V(\varphi) a_{*}^2, \label{g_comps_0}\\
&\mathcal{H}_*^2-\mathcal{H}_*'-\varphi'^2\,=\, \tilde{\rho_*}(1+c_s^2), \label{g_comps_j}\\
&\varphi''+2\mathcal{H}_*\varphi'+a_{*}^{2}\frac{dV}{d\varphi} \,=\,- \tilde{\rho_*}(1-3c_s^2)\beta \label{scalar_comps}
\end{eqnarray}
\begin{eqnarray}
\frac{\rho_*'}{\rho_*}=-3\mathcal{H}_*(1+c_s^2)+\beta(1-3c_s^2)\varphi'. \label{cons_comps}
\end{eqnarray}
where $c^{2}_{s}=\frac{P_*}{\rho_*}$ and $\tilde{\rho_*}=4\pi G_{*}\rho_*a_{*}^{2}$. The structure of the field equations is simplified by defining a few variables.
\section{Stability analysis of Brans-Dicke theory in (EF)}
In this section we investigate the stability of Brans-Dicke theory in (EF). We consider the power-law potential $U(\Phi)=U_{0}\Phi^{m}$ in (JF), which is mapped to the exponential potential $V=V_{0}e^{\alpha\varphi}$ in (EF), where $\alpha=2\beta(2-m)$.
The system of equations (\ref{g_comps_0}) to (\ref{cons_comps}) can be transformed into an autonomous system of differential equations by means of the transformations
\begin{eqnarray}\label{defe}
\Omega_{1}^{2}=\frac{\tilde{\rho_*}}{3\mathcal{H}_*^{2}},\quad \Omega_{2}^{2}=\frac{\varphi'^{2}}{3\mathcal{H}_*^{2}},\quad \Omega_{3}^{2}=\frac{2V(\varphi) a_{*}^{2}}{3\mathcal{H}_*^{2}}
\end{eqnarray}
Equation (\ref{g_comps_0}) gives the following constraint between the variables
\begin{eqnarray}\label{const}
\Omega_{3}^{2}=1-2\Omega_{1}^{2}-\Omega_{2}^{2}
\end{eqnarray}
For the autonomous equations of motion, we obtain
\begin{eqnarray}
\frac{d\Omega_{1}}{dN_{*}}&=&-\frac{1}{2}\left( 1+3c_s^2 \right)\Omega_{1} + \frac{1}{2}\sqrt{3}\beta\left(1-3c_s^2 \right)\Omega_{1}\Omega_{2}\\
&&-\Omega_{1} \left( 1-3\Omega_{2}^2 - 3 \left( 1+c_s^2 \right)\Omega_{1}^2 \right)\nonumber
\end{eqnarray}
\begin{eqnarray}
\frac{d\Omega_{2}}{dN_{*}}&=&-3\Omega_{2}-\sqrt{3}\beta\left(1-3c_s^2 \right)\Omega_{1}^2 +3\Omega_{2}^{3} +3\Omega_{2}\Omega_{1}^{2}\\
&&+3c_s^2\Omega_{2}\Omega_{1}^{2}-\frac{\sqrt{3}\alpha}{2}(1-2\Omega_{1}^{2}-\Omega_{2}^{2})\nonumber
\end{eqnarray}
where $N_{*}=\ln a_{*}$. In order to investigate the evolution of the universe, we need the essential parameter $\frac{\mathcal{H}_*^{'}}{\mathcal{H}_*^2}$. In terms of the new variables it reads
\begin{eqnarray}
\frac{\mathcal{H}_*^{'}}{\mathcal{H}_*^2}=1-3(1+c_{s}^{2})\Omega_{1}^{2}-3\Omega_{2}^{2}
\end{eqnarray}
from which one can obtain the deceleration parameter $q_{*}$ in (EF) as
\begin{eqnarray}\label{qe}
q_{*}=-\Big(1+\frac{\dot{H_{*}}}{H_{*}^{2}}\Big)=-\frac{\mathcal{H}_*^{'}}{\mathcal{H}_*^2}=-1+3(1+c_{s}^{2})\Omega_{1}^{2}+3\Omega_{2}^{2}
\end{eqnarray}
\begin{table}
\caption{\label{tmodel} Critical points in (EF) }
\begin{tabular}{cccccc}
Points & $\Omega_{1}$ &$\Omega_{2}$ \\
\hline \hline
$P_{1}$ &0 & $-\frac{\alpha\sqrt{3}}{6}$ \\
$P_{2}$ & 0 & 1 \\
$P_{3}$ & 0 & -1 \\
$P_{4}$ & $\frac{+ \frac{1}{\sqrt{6}}\left(3c_s^4 - 9\beta^2 c_s^4 -6c_s^2 + 6\beta^2 c_s^2 + 3 - \beta^2 \right)^{\frac{1}{2}}}{ \left( 1-c_s^2 \right)}$ & $-\frac{1}{3}\frac{\left( 3c_s^2 -1 \right)\beta \sqrt{3} }{-1+c_s^2}$ \\
$P_{5}$ & $\frac{- \frac{1}{\sqrt{6}}\left(3c_s^4 - 9\beta^2 c_s^4 -6c_s^2 + 6\beta^2 c_s^2 + 3 - \beta^2 \right)^{\frac{1}{2}}}{ \left( 1-c_s^2 \right)}$ & $-\frac{1}{3}\frac{\left( 3c_s^2 -1 \right)\beta \sqrt{3} }{-1+c_s^2}$ \\
$P_{6}$ & $\frac{1}{2}\frac{(-12+2\alpha^2+6c_s^2\beta\alpha-12c_s^2-2\beta\alpha)^{\frac{1}{2}}}{-\beta+\alpha+3c_s^2\beta} $&$-\frac{\sqrt{3}(1+c_s^2)}{-\beta+\alpha+3c_s^2\beta}$ \\
$P_{7}$ & $\frac{-1}{2}\frac{(-12+2\alpha^2+6c_s^2\beta\alpha-12c_s^2-2\beta\alpha)^{\frac{1}{2}}}{-\beta+\alpha+3c_s^2\beta} $&$-\frac{\sqrt{3}(1+c_s^2)}{-\beta+\alpha+3c_s^2\beta}$ \\
\hline \hline
\end{tabular}
\end{table}
In the following discussions, we use the Jacobi stability of a dynamical system as the robustness of the system to small perturbations of the whole trajectory. Jacobi stability analysis offers a powerful and simple method for constraining the physical properties of different systems described by second order differential equations \cite{Sabau}. It is especially important in oscillatory systems where the phase paths can ``spiral in'' towards zero, ``spiral out'' towards infinity, or reach neutrally stable situations called centers. The eigenvalues of the Jacobian matrix can be used to determine the stability of periodic orbits or limit cycles and to predict whether the system oscillates near the critical point.
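As an illustration of this procedure, the fixed points and the Jacobian eigenvalues of the above system can be checked directly. The following minimal SymPy sketch (purely illustrative; the variable names are arbitrary) verifies, for dust ($c_s^2=0$) and the example values $\beta=\alpha=1$ considered below, that $P_{1}$ is a fixed point with two negative eigenvalues, i.e. a stable node:
\begin{verbatim}
import sympy as sp

O1, O2, alpha, beta = sp.symbols('Omega_1 Omega_2 alpha beta', real=True)

# right-hand sides of the autonomous system for dust (c_s^2 = 0)
f1 = -O1/2 + sp.sqrt(3)/2*beta*O1*O2 - O1*(1 - 3*O2**2 - 3*O1**2)
f2 = (-3*O2 - sp.sqrt(3)*beta*O1**2 + 3*O2**3 + 3*O2*O1**2
      - sp.sqrt(3)*alpha/2*(1 - 2*O1**2 - O2**2))

J = sp.Matrix([f1, f2]).jacobian([O1, O2])

# critical point P1 = (0, -alpha*sqrt(3)/6) for alpha = beta = 1
vals = {alpha: 1, beta: 1, O1: 0, O2: -sp.sqrt(3)/6}
print([sp.simplify(f.subs(vals)) for f in (f1, f2)])   # [0, 0]  -> P1 is a fixed point
print(list(J.subs(vals).eigenvals()))                   # -3/2 and -11/4 -> stable node
\end{verbatim}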
In cosmology where there is the problem of initial conditions, phase space analysis gives us the possibility of studying all of the evolution paths admissible for all initial conditions \cite{Salehi1}-\cite{Salehi4}. It is useful in visualizing the behavior of the system. In previous section the critical points of the system have been obtained in term of important parameters$(\beta,\alpha)$. The nature of these points can be determined by the corresponding eigenvalues. Here the eigenvalues of the system are as follows \begin{align*} Ev_1= \begin{bmatrix} -3+\frac{\alpha^{2}}{4} \\ -\frac{3}{2}-\frac{3}{2}c_s^2+\frac{\alpha^{2}}{4}-\frac{1}{4}\beta\alpha+\frac{3}{4}c_s^2\beta\alpha \end{bmatrix} \end{align*} \begin{align*} Ev_2= \begin{bmatrix} 6+\alpha\sqrt{3} \\ \frac{3}{2}-\frac{3}{2}c_s^2+\frac{1}{2}\beta \sqrt{3} -\frac{3}{2}\beta \sqrt{3}c_s^2 \end{bmatrix} \end{align*} \begin{align*} Ev_3= \begin{bmatrix} 6 -\alpha\sqrt{3}\\ \frac{3}{2}-\frac{3}{2}c_s^2-\frac{1}{2}\beta \sqrt{3}+\frac{3}{2}\beta \sqrt{3}c_s^2 \end{bmatrix} \end{align*} \begin{align*} Ev_4= \begin{bmatrix} -\frac{1}{2} \frac{-3c_s^4 +9\beta^2c_s^4 + 6c_s^2 -6\beta^2c_s^2 -3 +\beta^2}{-1+c_s^2}\\ -\frac{1}{3}\frac{-18\beta^2 c_s^2-9c_s^4+9+\beta^2+27\beta^2c_s^4+9c_s^2\beta\alpha-3\beta\alpha}{-1+c_s^2} \end{bmatrix}\\ \end{align*} \begin{align*} Ev_5= \begin{bmatrix} -\frac{1}{2} \frac{-3c_s^4 +9\beta^2c_s^4 + 6c_s^2 -6\beta^2c_s^2 -3 +\beta^2}{-1+c_s^2}\\ -\frac{1}{3}\frac{-18\beta^2 c_s^2-9c_s^4+9+\beta^2+27\beta^2c_s^4+9c_s^2\beta\alpha-3\beta\alpha}{-1+c_s^2} \end{bmatrix}\\ \end{align*} \begin{align*} Ev_6= \begin{bmatrix} \frac{6\beta-18c_s^2\beta+3c_s^2\alpha-3\alpha +\sqrt{D}}{-4\beta+4\alpha+12c_s^2\beta}\\ \frac{6\beta-18c_s^2\beta+3c_s^2\alpha-3\alpha -\sqrt{A}}{-4\beta+4\alpha+12c_s^2\beta} \end{bmatrix}\\ Ev_7= \begin{bmatrix} \frac{6\beta-18c_s^2\beta+3c_s^2\alpha-3\alpha +\sqrt{D}}{-4\beta+4\alpha+12c_s^2\beta}\\ \frac{6\beta-18c_s^2\beta+3c_s^2\alpha-3\alpha -\sqrt{A}}{-4\beta+4\alpha+12c_s^2\beta} \end{bmatrix}\\ \end{align*} Where, \\$D=(432-72\alpha^3c_s^2\beta-432c_s^4\beta^2\alpha^2+288c_s^2\beta^2\alpha^2+81c_s^4\alpha^2 +432c_s^2+216c_s^2\beta\alpha-216c_s^2\beta^3\alpha+648c_s^4\beta^3\alpha -648c_s^6\beta^3\alpha+216c_s^6\beta\alpha-48\beta^2\alpha^2+24\alpha^3\beta+756\beta^2c_s^4-936\beta^2c_s^2-432c_s^4 -432c_s^6+1296c_s^6\beta^2+180\beta^2-108\beta\alpha-63\alpha^2+252c_s^4\beta\alpha+24\beta^3\alpha-18c_s^2\alpha^2).$\\ Generally speaking, the trajectories of the phase space approach to a fixed point if all eigenvalues get negative values. This fixed point is called stable point, also the trajectories recede from a fixed point if all eigenvalues have positive values. This fixed point is called unstable point. The fixed points with both positive and negative eigenvalues are called saddle points, and those trajectories which approach to a saddle fixed point along some eigenvectors may recede from it along some other eigenvectors. The behavior of the system near a critical point is spiral if and only if its eigenvalue be complex as $\lambda_{1,2}=\lambda_{r} \pm i\lambda_{I}$. Because of reality of parameters $\beta$ and $\alpha$, it is obvious that only the eigenvalues $Ev_{6}$ and $Ev_{7}$ can be complex. Thus we can expect the spiral behavior near the points $p_{6}$ and $p_{7}$. We investigate the properties of each of the fixed points for the baro tropic equation of state $c_s^2=0$, i.e., dust. \\ \textbf{A:Critical point} $P_{1}$$( \Omega_{1}=0,\Omega_{2}=-\frac{\alpha\sqrt{3}}{6})$. 
This critical point corresponds to a solution where the constraint Eqs. (\ref{const}) and (\ref{g_comps_0}) is dominated by \emph{potential-kinetic-scaling solution}. This solution exists for all potentials and only depends on slope of potential $\alpha$. This scaling solution has two eigenvalues which depend on the slope of potential $\alpha$ and coupling constant $\beta$. \begin{align} Ev_1= \begin{bmatrix} -3+\frac{\alpha^{2}}{4} \\ -\frac{3}{2}+\frac{\alpha^{2}}{4}-\frac{1}{4}\beta\alpha \end{bmatrix} \end{align} The eigenvalue shows that the critical point is stable under the condition \\ $ C I:\left\{ \begin{array}{ll} \beta<\frac{-6+\alpha^{2}}{\alpha}, -2\sqrt{3}<\alpha<0 \\ \beta>\frac{-6+\alpha^{2}}{\alpha}, 2\sqrt{3}>\alpha>0\\ \end{array} \right. $\\\\ \begin{figure} \centering \includegraphics[scale=.3]{p1.eps}\hspace{0.1 cm}\\ FiG.1. The behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$\\ phase plane for $\beta=1$ and $\alpha=1$. As can bee seen \\ $p_{1}$ is stable, $p_{2}$ and $p_{3}$ are unstable points, $p_{4}$ and $p_{5}$ are saddle points and $p_{6}$ and $p_{7}$ don't exist \end{figure} Fig.1 shows the behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=1$ and $\alpha=1$. As can bee seen under the condition $C I $, $p_{1}$ is stable, $p_{2}$ and $p_{3}$ are unstable points, $p_{4}$ and $p_{5}$ are saddle points and $p_{6}$ and $p_{7}$ don't exist. The non complexity of the eigenvalues implies that the the system has no spiral behavior near this critical point \textbf{B:Critical point} $P_{2}$$( \Omega_{1}=0,\Omega_{2}=1)$,corresponds to a \emph{kinetic-scaling solution}. This solution exists for all potentials and is independent of slope of potential $\alpha$ and coupling constant $\beta$. This scaling solution has two eigenvalues which depend on the slope of potential $\alpha$ and coupling constant $\beta$. \begin{align} Ev_2= \begin{bmatrix} 6 +\alpha\sqrt{3}\\ \frac{3}{2}+\frac{1}{2}\beta \sqrt{3} \end{bmatrix} \end{align} \begin{figure} \centering \includegraphics[scale=.3]{p33.eps}\hspace{0.1 cm}\\ Fig.2.The behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=-2$ and $\alpha=-5$. As can be seen $p_{1}$ and $p_{3}$ are unstable, $p_{2}$ is stable, $p_{4}$ , $p_{5}$, $p_{6}$ and $p_{7}$ are saddle points.\\ \end{figure} The eigenvalues show that the critical point is stable for \\ CII:($\beta<-\sqrt{3},\alpha<-2\sqrt{3}$)\\ Fig.2 shows the behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=-2$ and $\alpha=-5$. As can be seen $p_{1}$ and $p_{3}$ are unstable, $p_{2}$ is stable, $p_{4}$ , $p_{5}$, $p_{6}$ and $p_{7}$ are saddle points.\\ \textbf{C:Critical point} $P_{3}$$( \Omega_{1}=0,\Omega_{2}=-1)$, corresponds to a \emph{kinetic-scaling solution}. This solution exists for all potentials and is independent of slope of potential $\alpha$ and coupling constant $\beta$ however its eigenvalues are depend on slope of potential $\alpha$ and coupling constant $\beta$. \begin{align} Ev_3= \begin{bmatrix} 6 -\alpha\sqrt{3}\\ \frac{3}{2}-\frac{1}{2}\beta \sqrt{3} \end{bmatrix} \end{align} \begin{figure} \centering \includegraphics[scale=.3]{p3.eps}\hspace{0.1 cm}\\ Fig.3.The behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=2$ and $\alpha=5$. 
As can be seen $p_{1}$ and $p_{2}$ are unstable, $p_{3}$ is stable, $p_{4}$ , $p_{5}$ don't exist and $p_{6}$ and $p_{7}$ are saddle points.\\ \end{figure} The eigenvalues show that the critical point is stable for\\ CIII: ($\beta>\sqrt{3},\alpha>2\sqrt{3}$)\\.Fig.3 shows the behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=2$ and $\alpha=5$. As can be seen $p_{1}$ and $p_{2}$ are unstable, $p_{3}$ is stable, $p_{4}$ , $p_{5}$ don't exist and $p_{6}$ and $p_{7}$ are saddle points.\\ \textbf{D:Critical points} $P_{4},P_{5}$$( \Omega_{1}=\pm\sqrt{\frac{3-\beta^{2}}{6}},~\Omega_{2}=-\frac{\sqrt{3}}{3}\beta)$. These critical points are mirror images of each other . These solution exists for $\beta^{2}<3$ and all potentials. The solution has two eigenvalues which depend on slope of potential $\alpha$ and coupling constant $\beta$.\\ \begin{align} Ev_{4,5}= \begin{bmatrix} -\frac{3}{2}+\frac{\beta^{2}}{2} \\ 3+\beta^{2}-\beta\alpha \end{bmatrix} \end{align}\\ \begin{figure} \centering \includegraphics[scale=.3]{p54.eps}\hspace{0.1 cm}\\ Fig.4.The behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=-1$ and $\alpha=-6$. As can be seen $p_{1}$ and $p_{3}$ are unstable, $p_{2}$, $p_{6}$ and $p_{7}$ are saddle points and $p_{4}$ and $p_{5}$ are stable points.\\ \end{figure} The eigenvalues show that the critical point is stable for \\ CIV:$ \left\{ \begin{array}{ll} \alpha<\frac{3+\beta^{2}}{\beta}, -\sqrt{3}<\beta<0 \\ \alpha>\frac{3+\beta^{2}}{\beta}, \sqrt{3}>\beta>0\\ \end{array} \right. $\\ Fig.4 shows the behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=-1$ and $\alpha=-6$. As can be seen $p_{1}$ and $p_{3}$ are unstable, $p_{2}$, $p_{6}$ and $p_{7}$ are saddle points and $p_{4}$ and $p_{5}$ are stable points.\\ \textbf{E:Critical point} $P_{6},P_{7}$($\Omega_{1}= \pm\frac{1}{2}\frac{\sqrt{-12+2\alpha^2-2\beta\alpha}}{-\beta+\alpha}), \Omega_{2}=\frac{\sqrt{3}}{\beta-\alpha}$).\\ These critical points are mirror images of each other . The solution exists for \\ $ \left\{ \begin{array}{ll} \beta<\frac{-6+\alpha^{2}}{\alpha}, \alpha>0 \\ \beta>\frac{-6+\alpha^{2}}{\alpha}, \alpha<0\\ \end{array} \right. $\\ The solution has two eigenvalues which depend on slope of potential $\alpha$ and coupling constant $\beta$.\\ \begin{align} Ev_{6,7}: \begin{bmatrix} \frac{-6\beta+3\alpha+\sqrt{180\beta^2-108\beta\alpha-63\alpha^2-48\beta^2\alpha^2+24\beta^3\alpha+24\beta\alpha^3+432}}{4(\beta-\alpha)} \\ \frac{-6\beta+3\alpha-\sqrt{180\beta^2-108\beta\alpha-63\alpha^2-48\beta^2\alpha^2+24\beta^3\alpha+24\beta\alpha^3+432}}{4(\beta-\alpha)} \end{bmatrix} \end{align} \begin{figure} \centering \includegraphics[scale=.3]{p67.eps}\hspace{0.1 cm}\\ Fig.5.The behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=6$ and $\alpha=-5$.\\ As can be seen $p_{1}$ is unstable $p_{2}$ and $p_{3}$ are saddle points \\ $p_{4}$ and $p_{5}$ dont exist and $p_{6}$ and $p_{7}$ are stable focus.\\ \end{figure} Fig.5. shows the behavior of the dynamical system in the $\Omega_{1},\Omega_{2}$ phase plane for $\beta=6$ and $\alpha=-5$. As can be seen $p_{1}$ is unstable $p_{2}$ and $p_{3}$ are saddle points, , $p_{4}$ and $p_{5}$ don't exist and $p_{6}$ and $p_{7}$ are stable focus.\\ \begin{figure} \centering \includegraphics[scale=.4]{region.eps}\hspace{0.1 cm}\\ FiG.6. 
The region of stability for different critical points \end{figure} \\ \section{Mapping of stability analysis to (JF) } In this section using same procedure in (EF), the field equations (\ref{fried1}) to (\ref{phiequation}) can be transformed to an autonomous system of differential equations by introducing the following dimensionless variables, \begin{eqnarray}\label{newj} \Gamma_{1}^{2}=\frac{4 \pi G_{*}\rho}{3H^{2}\Phi},\Gamma_{2}=\frac{\dot{\Phi}}{3\Phi H} ,\Gamma_{3}^{2}=\frac{U(\Phi)}{3\Phi H^{2}} \end{eqnarray} However, equations (\ref{conformal1}) to (\ref{conformal3}) are more complicate than equations \ref{g_comps_0}-\ref{cons_comps}, hence in order to derive the autonomous deferential equations in (JF), it is more appropriate to implement equations (\ref{conformal1}) to (\ref{conformal3}), to make relation between the new variables (\ref{newj}) in (JF) and variables (\ref{defe}) in (EF) as \begin{eqnarray}\label{conf1} &&\Gamma_{2}=\frac{2\beta\Omega_{2}}{\sqrt{3}-3\beta\Omega_{2}}=\frac{2\Omega_{2}}{\sqrt{3}(2\omega_{BD}+3)^{\frac{1}{2}}-3\Omega_{2}}\\ &&\Gamma_{1}=\frac{\sqrt{3}\Omega_{1}}{\sqrt{3}-3\beta\Omega_{2}}=\frac{\sqrt{3}\Omega_{1}(2\omega_{BD}+3)^{\frac{1}{2}}}{\sqrt{3}(2+3)^{\frac{1}{2}}-3\Omega_{2}}\\ &&\Gamma_{3}=\frac{\sqrt{3}\Omega_{3}}{\sqrt{3}-3\beta\Omega_{2}}=\frac{\sqrt{3}\Omega_{3}(2\omega_{BD}+3)^{\frac{1}{2}}}{\sqrt{3}(2\omega_{BD}+3)^{\frac{1}{2}}-3\Omega_{2}}\label{conf3} \end{eqnarray} Note that the equations (\ref{conf1}) to (\ref{conf3}) confirm that \begin{eqnarray}\label{co} 2\Gamma_{1}^{2}-3\Gamma_{2}+\frac{3\omega_{BD}}{2}\Gamma_{2}^{2}+\Gamma_{3}^{2}=1 \end{eqnarray} Which can be derived from equation (\ref{fried1}) directly. Also \begin{eqnarray}\label{coo} \frac{dN_{*}}{dN}=\frac{\mathcal{H}_*}{\mathcal{H}}=\frac{2+3\Gamma_{2}}{2} \end{eqnarray} Now, for the autonomous equations of motions in (JF), we obtain \begin{eqnarray}\label{coo1} \frac{d\Gamma_{i}}{dN}=\frac{d\Gamma_{i}}{dN_{*}}\frac{dN_{*}}{dN}=\frac{2+3\Gamma_{2}}{2}\frac{d\Gamma_{i}}{dN_{*}} \end{eqnarray} Hence using equation (\ref{coo1}) and equations (\ref{conf1}) to (\ref{conf3}), the autonomous equations of motions in (JF) can be related to the corresponding equations in (EF) as \begin{eqnarray}\label{au} &&\frac{d\Gamma_{1}}{dN}=\frac{3(2+3\Gamma_{2})}{2(\sqrt{3}-3\beta\Omega_{2})^{2}}\Big(\frac{d\Omega_{1}}{dN_{*}}-\sqrt{3}\beta(\Omega_{2}\frac{d\Omega_{1}}{dN_{*}}-\Omega_{1}\frac{d\Omega_{2}}{dN_{*}})\Big)\\ &&\frac{d\Gamma_{2}}{dN}=\frac{2+3\Gamma_{2}}{2}\frac{2\sqrt{3}\beta}{(\sqrt{3}-3\beta\Omega_{2})^{2}}\frac{d\Omega_{2}}{dN_{*}}\\ &&\frac{d\Gamma_{3}}{dN}=\frac{3(2+3\Gamma_{2})}{2(\sqrt{3}-3\beta\Omega_{2})^{2}}\Big(\frac{d\Omega_{3}}{dN_{*}}-\sqrt{3}\beta(\Omega_{2}\frac{d\Omega_{3}}{dN_{*}}-\Omega_{3}\frac{d\Omega_{2}}{dN_{*}})\Big)\label{au3} \end{eqnarray} Equations (\ref{au}) to (\ref{au3}) indicate that when $(\frac{d\Omega_{1}}{dN_{*}}=\frac{d\Omega_{2}}{dN_{*}}=\frac{d\Omega_{3}}{dN_{*}}=0)$ then their corresponding in (JF) would also be zero $(\frac{d\Gamma_{1}}{dN}=\frac{d\Gamma_{2}}{dN}=\frac{d\Gamma_{3}}{dN}=0)$. This implies that critical points of dynamical system in (EF) would be mapped to their corresponding in (JF) by transformation relations (\ref{conf1}) to (\ref{conf3})(see table.I and II). 
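These relations can also be checked symbolically. The following minimal SymPy sketch (purely illustrative) verifies that the map (\ref{conf1}) to (\ref{conf3}) sends any point satisfying the (EF) constraint (\ref{const}) to a point satisfying the (JF) constraint (\ref{co}), and reproduces, as an example, the Table II entry for $P_{2}$:
\begin{verbatim}
import sympy as sp

O1, O2, beta = sp.symbols('Omega_1 Omega_2 beta', real=True)
omega = (1/beta**2 - 3)/2            # from beta = (2*omega_BD + 3)**(-1/2)
O3sq = 1 - 2*O1**2 - O2**2           # EF constraint

D = sp.sqrt(3) - 3*beta*O2
G1 = sp.sqrt(3)*O1/D                 # Gamma_1 in terms of Omega_i
G2 = 2*beta*O2/D                     # Gamma_2
G3sq = 3*O3sq/D**2                   # Gamma_3^2

# JF constraint: 2*Gamma_1^2 - 3*Gamma_2 + (3*omega/2)*Gamma_2^2 + Gamma_3^2 = 1
lhs = 2*G1**2 - 3*G2 + sp.Rational(3, 2)*omega*G2**2 + G3sq
print(sp.simplify(lhs - 1))          # 0: the constraint holds identically

# the EF point P2 = (0, 1) maps to Gamma_2 = 2*beta/(sqrt(3) - 3*beta), as in Table II
print(sp.simplify(G2.subs({O1: 0, O2: 1})))
\end{verbatim}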
\begin{table} \caption{\label{tmodel} Critical points in (JF) } \begin{tabular}{cccccc} Points & $\Gamma_{1}$ &$\Gamma_{2}$ \\ \hline \hline $P_{1}$ &0 & $-\frac{2}{3}\frac{\alpha\beta}{\alpha\beta+2}$ \\ $P_{2}$ & 0 & $\frac{2\beta}{\sqrt{3}-3\beta}$ \\ $P_{3}$ & 0 & $\frac{-2\beta}{\sqrt{3}+3\beta}$ \\ $P_{4}$ & $-\frac{\sqrt{18-6\beta^{2}}}{6(\beta^{2}+1}$ & $\frac{-2}{3}\frac{\beta^{2}}{\beta^{2}+1}$ \\ $P_{5}$ & $\frac{\sqrt{18-6\beta^{2}}}{6(\beta^{2}+1}$ & $\frac{-2}{3}\frac{\beta^{2}}{\beta^{2}+1}$ \\ $P_{6}$ & $-\frac{\sqrt{-2\beta\alpha+2\alpha^2-12}}{2(2\beta+\alpha)} $&$-\frac{-2\beta}{(2\beta+\alpha)}$ \\ $P_{7}$ & $\frac{\sqrt{-2\beta\alpha+2\alpha^2-12}}{2(2\beta+\alpha)} $&$-\frac{-2\beta}{(2\beta+\alpha)}$ \\ \hline \hline\end{tabular} \end{table} \begin{figure*} \centering \includegraphics[scale=.45]{conform7.eps}\hspace{0.1 cm}\includegraphics[scale=.33]{p1.eps}\hspace{0.1 cm}\\ \end{figure*} \begin{figure*} \centering \includegraphics[scale=.8]{comp.eps}\hspace{0.1 cm}\\ FiG.7. The phase space mapping of (EF) to (JF); The right graph in ($\Gamma_{1},\Gamma_{2}$) phase space in(JF) is mapping of Left ones in ($\Omega_{1},\Omega_{2}$) phase space in (EF) for $\beta=1$ and $\alpha=1$. The lower panel shows the corresponding regions in two frames. \end{figure*} Here the eigenvalues of the system are as follows \begin{align*} Ev_1= \begin{bmatrix} - \frac{1}{2}\frac{\alpha\beta-\alpha^2+6}{\beta\alpha+2}\\ \frac{1}{2}\frac{-12+\alpha^2}{\beta\alpha+2} \end{bmatrix} \end{align*} \begin{align*} Ev_2= \begin{bmatrix} \frac{3}{2}\frac{\sqrt{3}-\beta}{3\beta+\sqrt{3}}\\ \frac{3(2\sqrt{3}-\beta)}{3\beta+\sqrt{3}} \end{bmatrix} \end{align*} \begin{align*} Ev_3= \begin{bmatrix} \frac{3}{2}\frac{\sqrt{3}-17\beta-12\alpha\beta^2}{(3\beta+\sqrt{3})}\\ \frac{3(2\sqrt{3}-24\beta+ \alpha-18\alpha\beta^{2})}{(3\beta+\sqrt{3})} \end{bmatrix} \end{align*} \begin{align*} Ev_4= \begin{bmatrix} \frac{1}{2}\frac{\beta^2-3}{(\beta^{2}+1)}\\ \frac{\beta^2-\alpha\beta+3}{(\beta^{2}+1)} \end{bmatrix} \end{align*} \begin{align*} Ev_5= \begin{bmatrix} \frac{1}{2}\frac{\beta^2-3}{(\beta^{2}+1)}\\ \frac{\beta^2-\alpha\beta+3}{(\beta^{2}+1)} \end{bmatrix} \end{align*} \begin{align*} Ev_{6,7}: \begin{bmatrix} \frac{6\beta-3\alpha+\sqrt{180\beta^2-108\beta\alpha-63\alpha^2-48\beta^2\alpha^2+24\beta^3\alpha+24\beta\alpha^3+432}}{4(2\beta+\alpha)} \\ \frac{6\beta-3\alpha-\sqrt{180\beta^2-108\beta\alpha-63\alpha^2-48\beta^2\alpha^2+24\beta^3\alpha+24\beta\alpha^3+432}}{4(2\beta+\alpha)} \end{bmatrix} \end{align*} As can be seen, while there is one-to-one correspondence between critical points in two frames and each critical point in one frame is mapped to its corresponds in other frame , the eigenvalues in (JF) in some critical points are different from those obtained in (EF). This implies that while the critical points in (EF) will be mapped to their corresponding in (JF), however the nature of the critical points may be changed under the transformation and stability of a critical points in one frame does not grantee the stability in other frame. In Fig.7 the behavior of dynamical system in phase space have been shown in (EF) and its map in (JF)for the same values of $(\alpha=1,\beta=1)$. 
For these values, the critical points in the two frames are:\\
EF$ \left\{ \begin{array}{ll} P_{1}=(0,-\frac{\sqrt{3}}{6}):stable \\ P_{2}=(0,1):unstable\\ P_{3}=(0,-1):unstable\\ P_{4}=(\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}): saddle \\ P_{5}=(-\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}):saddle\\ \end{array} \right.\\ $JF$ \left\{ \begin{array}{ll} P_{1}=(0,-0.2):stable \\ P_{2}=(0,-1.6):stable\\ P_{3}=(0,-0.4):unstable\\ P_{4}=(0.3,-0.3):saddle\\ P_{5}=(-0.3,-0.3):saddle\\ \end{array} \right. $\\
The eigenvalues in (EF) are as follows
\begin{align*}
Ev_1=
\begin{bmatrix}
- \frac{3}{2}\\
\frac{-11}{4}
\end{bmatrix}
, Ev_2=
\begin{bmatrix}
\frac{3}{2}+\frac{\sqrt{3}}{2}\\
\sqrt{3}+6
\end{bmatrix},
Ev_3=
\begin{bmatrix}
\frac{3}{2}-\frac{\sqrt{3}}{2}\\
-\sqrt{3}+6
\end{bmatrix}
\end{align*}
\begin{align*}
Ev_4=
\begin{bmatrix}
3\\
-1
\end{bmatrix}
, Ev_5=
\begin{bmatrix}
3\\
-1
\end{bmatrix}
\end{align*}
Their corresponding eigenvalues in (JF) are as follows
\begin{align*}
Ev_1=
\begin{bmatrix}
- 3.2\\
-10.6
\end{bmatrix}
, Ev_2=
\begin{bmatrix}
-1.\\
-1.8
\end{bmatrix},
Ev_3=
\begin{bmatrix}
2\\
1.6
\end{bmatrix}
\end{align*}
\begin{align*}
Ev_4=
\begin{bmatrix}
1.5\\
-.5
\end{bmatrix}
, Ev_5=
\begin{bmatrix}
1.5\\
-.5
\end{bmatrix}
\end{align*}
As can be seen, the critical point $P_{2}$ is unstable in (EF) while its counterpart is stable in (JF). It is also interesting to note that the dynamics of the deceleration parameter is different in the two frames. From equation (\ref{hh}), $\mathcal{H}_*^{'}=\mathcal{H}^{'}-\beta\varphi^{''}$. Hence, the deceleration parameter in (JF) can be derived as
\begin{eqnarray}\label{hh2}
q=-\frac{\mathcal{H}^{'}}{\mathcal{H}^{2}}=-\frac{\mathcal{H}_*^{'}+\beta\varphi^{''}}{(\mathcal{H}_*+\beta\varphi^{'})^{2}} =\frac{q_{*}-\beta\frac{\varphi^{''}}{\mathcal{H}_*^{2}}}{(1+\beta\frac{\varphi^{'}}{\mathcal{H}_*})^{2}}
\end{eqnarray}
where, using equations (\ref{scalar_comps}) and (\ref{defe}), it simplifies to
\begin{eqnarray}\label{hh3}
q=\frac{q_{*}+2\sqrt{3}\beta\Omega_{2}+\frac{\alpha\beta}{2}\Omega_{3}^{2}+3\beta^{2}(1-3c_{s}^{2})\Omega_{1}^{2}}{(1+\sqrt{3}\beta\Omega_{2})^{2}}
\end{eqnarray}
This is an important point to remember: although we are looking for cosmological FRW backgrounds whose expansion is accelerating, equations (\ref{hh3}) and (\ref{qe}) indicate that an accelerating universe in (JF) may correspond to a decelerating universe in (EF). For example, a vanishing potential in (EF) implies that $q_{*}>0$, while the deceleration parameter $q$ in (JF) may be negative (this can be shown from equations (\ref{qe}), (\ref{const}) and (\ref{hh3})). As another straightforward example, at the critical point $P_{2}$ in (EF) with ($\Omega_{1}=0,\Omega_{2}=1,\Omega_{3}=0$), the deceleration parameter in (EF) is $q_{*}=2$, while from equation (\ref{hh3}), at this critical point $q=\frac{2+2\sqrt{3}\beta}{(1+\sqrt{3}\beta\Omega_{2})^{2}}$. This indicates that for $\beta<-\frac{\sqrt{3}}{3}$, the deceleration parameter $q<0$.
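As a small numerical illustration of this frame dependence (the parameter values are chosen only for illustration), the following snippet evaluates equation (\ref{hh3}) at the kinetic-dominated point $P_{2}=(\Omega_{1},\Omega_{2},\Omega_{3})=(0,1,0)$, where $q_{*}=2$, for a few values of $\beta$:
\begin{verbatim}
import numpy as np

def q_JF(q_star, O1, O2, O3sq, beta, alpha, cs2=0.0):
    # Jordan-frame deceleration parameter, Eq. (hh3)
    num = (q_star + 2*np.sqrt(3)*beta*O2 + 0.5*alpha*beta*O3sq
           + 3*beta**2*(1 - 3*cs2)*O1**2)
    return num / (1 + np.sqrt(3)*beta*O2)**2

for beta in (-1.0, -0.5, 0.1):
    print(beta, q_JF(q_star=2.0, O1=0.0, O2=1.0, O3sq=0.0, beta=beta, alpha=1.0))
# only beta = -1.0 < -sqrt(3)/3 yields q < 0: accelerating in (JF) although q_* = 2 in (EF)
\end{verbatim}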
As a particular example, we discuss the equivalence between Brans-Dicke theory and chameleon gravity as well-known models of scalar-tensor theories in the two different frames. As pointed out in equation (\ref{S_EF}), it is possible to reconstruct the chameleon field equations by transforming the Brans-Dicke equations from the Jordan frame (JF) to the Einstein frame (EF) under the conformal metric $g^*_{\mu\nu}=e^{-2\beta\varphi}g_{\mu\nu}$, where $g^*_{\mu\nu}$ and $g_{\mu\nu}$ are the metrics in the Einstein and Jordan frames respectively, and $\beta$ is the chameleon-matter coupling parameter, which is related to the Brans-Dicke parameter $\omega_{BD}$ by $\beta=(2\omega_{BD}+3)^{-\frac{1}{2}}$. The mathematical equivalence of the models in the two different frames has clear advantages for our cosmological studies. In principle, for those features of the chameleon study which focus on observational measurements, it is more appropriate to use the corresponding Brans-Dicke theory in (JF), where experimental data have their usual interpretation. For example, the consistency between the two theories makes it possible to derive confidence regions for the chameleon-matter coupling constant $\beta$ (which is still controversial) from the corresponding coupling constant $\omega_{BD}$, which has been severely constrained by observations in (JF) Brans-Dicke theory. Solar System data put very strong constraints on the $\omega_{BD}$ parameter. The measurement of the Parameterized Post-Newtonian parameter $\gamma$ (see \cite{Will},\cite{Will2}) from the Cassini mission gives $\omega_{BD}> 40000$ at the $2\sigma$ confidence level \cite{Will2},\cite{Bertotti}. This enables us to find the confidence region for the chameleon-matter coupling parameter as $|\beta|<5\times10^{-3}$ in the Solar System. On cosmological scales, a wide range of lower bounds, from $\omega_{BD}>50$ to $\omega_{BD}>2000$, has been reported in different studies \cite{Nagata}-\cite{Chen}, which determine different confidence regions for the parameter $\beta$ on cosmological scales. An improvement of previous studies was made in \cite{Avilez} using Cosmic Microwave Background data from Planck. They implemented two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength today as the one measured on the Earth. In this case they find that $\omega_{BD}>692$ at the $99\%$ confidence level. In the second type, by considering that the initial condition for the scalar is a free parameter, they find $\omega_{BD}>890$ at the same confidence level. These confidence regions for $\omega_{BD}$ put new constraints on the parameter $\beta$, namely $\beta<0.027$ and $\beta<0.024$ respectively, on cosmological scales.\\ However, the important point to note is that the evolution of dynamical cosmological parameters, such as the deceleration parameter, is not equivalent in the two frames. \section{Conclusion} In this paper we used a dynamical system and phase space approach to show that stability of the Brans-Dicke theory in (EF) does not guarantee stability in (JF). We have concentrated on the Brans-Dicke theory, but the results can easily be generalized. Our analysis shows that, while there is a one-to-one correspondence between the critical points in the two frames and each critical point in one frame is mapped to its counterpart in the other frame, stability of a critical point in one frame does not guarantee stability in the other frame. Hence an unstable point in one frame may be mapped to a stable point in the other frame.
All trajectories between two critical points in phase space in one frame are different from their counterparts in the other frame. This indicates that the dynamical behavior of the variables and of the cosmological parameters is different in the two frames. Hence cosmological parameters such as the deceleration parameter have different dynamics in the two frames, where a decelerating universe in (EF) may correspond to an accelerating universe in (JF) and vice versa. Therefore, for those features of the study which focus on observational measurements we must use (JF), where experimental data have their usual interpretation. However, we can benefit from the equivalence of the equations in the two frames. As a particular case we discussed the equivalence between Brans-Dicke theory and chameleon gravity as well-known models of scalar-tensor theories in two different frames. We explained how one can put constraints on some parameters of chameleon gravity in (EF) using their correspondence in Brans-Dicke theory in (JF).
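As an illustration of this last point, the translation from bounds on $\omega_{BD}$ in (JF) to bounds on the chameleon-matter coupling $\beta$ in (EF) is elementary. The short Python sketch below is illustrative only: it uses the relation $\beta=(2\omega_{BD}+3)^{-1/2}$ recalled in the previous section together with the representative observational bounds cited there, and the rounding of the printed values may differ slightly from the figures quoted in the text.
\begin{verbatim}
import math

def beta_bound(omega_bd):
    # beta = (2*omega_BD + 3)^(-1/2); a lower bound on omega_BD
    # translates into an upper bound on |beta|.
    return 1.0 / math.sqrt(2.0 * omega_bd + 3.0)

bounds = [
    ("Cassini (Solar System)", 4.0e4),
    ("Planck, fixed initial condition", 692.0),
    ("Planck, free initial condition", 890.0),
]
for label, omega in bounds:
    print(f"{label}: omega_BD > {omega:g}  =>  |beta| < {beta_bound(omega):.4f}")
# Prints approximately 0.0035, 0.0269 and 0.0237 respectively.
\end{verbatim}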
\def\section{\setcounter{equation}{0}\oldsection} \renewcommand\thesection{\arabic{section}} \renewcommand\theequation{\thesection.\arabic{equation}} \allowdisplaybreaks \def\pf{\it{Proof.}\rm\quad} \newcommand\divg{{\text{div}}} \newtheorem{defn}{Definition}[section] \newtheorem{thm}{Theorem}[section] \newtheorem{pro}{Proposition}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{re}{Remark}[section] \newtheorem{ac}{Acknowledgement} \begin{document} \title{Lifespan of Solution to MHD Boundary Layer Equations with Analytic Perturbation of General Shear Flow} \author{{\bf Feng Xie}\\[1mm] \small School of Mathematical Sciences, and LSC-MOE,\\[1mm] \small Shanghai Jiao Tong University, Shanghai 200240, P.R.China\\[1mm] {\bf Tong Yang}\\[1mm] \small Department of Mathematics, City University of Hong Kong,\\[1mm] \small Tat Chee Avenue, Kowloon, Hong Kong } \date{} \maketitle \begin{center} Dedicated to Professor Philippe G. Ciarlet on the Occasion of his 80th Birthday \end{center} \bigskip \begin{abstract} In this paper, we consider the lifespan of the solution to the MHD boundary layer system as an analytic perturbation of a general shear flow. By using the cancellation mechanism in the system observed in \cite{LXY1}, the lifespan of the solution is shown to have a lower bound of the order of $\varepsilon^{-2+}$ if the strength of the perturbation is of the order of $\varepsilon$. Since there is no restriction on the strength of the shear flow and the lifespan estimate is larger than the one obtained for the classical Prandtl system in this setting, it reveals the stabilizing effect of the magnetic field on the electrically conducting fluid near the boundary. \end{abstract} \footnotetext[1]{{\it E-mail address:} tzxief@sjtu.edu.cn (F. Xie)} \footnotetext[2]{{\it E-mail address:} matyang@cityu.edu.hk (T. Yang)} \vskip 2mm \noindent {\bf 2000 Mathematical Subject Classification}: 76N20, 35Q35, 76N10, 35M33. \vskip 2mm \noindent {\bf Keywords}: MHD boundary layer, analytic perturbation, lifespan estimate, shear flow. \section{Introduction} Considering the high Reynolds number limit of the MHD system near a no-slip boundary, the following MHD boundary layer system was derived in \cite{LXY1} when both the Reynolds number and the magnetic Reynolds number have the same order, in two space dimensions. Precisely, consider the MHD system in the domain $\{(x,Y)|x\in\mathbb{R}, Y\in\mathbb{R}_+\}$ with $Y=0$ being the boundary, \begin{align} \label{V1} \left\{ \begin{array}{ll} \partial_t u^\epsilon+(u^\epsilon\partial_x+v^\epsilon\partial_Y)u^\epsilon+\partial_xp^\epsilon-(h^\epsilon\partial_x+g^\epsilon\partial_Y)h^\epsilon=\epsilon(\partial_x^2 u^\epsilon+\partial_Y^2 u^\epsilon),\\ \partial_t v^\epsilon+(u^\epsilon\partial_x+v^\epsilon\partial_Y)v^\epsilon+\partial_Yp^\epsilon-(h^\epsilon\partial_x+g^\epsilon\partial_Y)g^\epsilon=\epsilon(\partial_x^2 v^\epsilon+\partial_Y^2 v^\epsilon),\\ \partial_t h^\epsilon+(u^\epsilon\partial_x+v^\epsilon\partial_Y)h^\epsilon-(h^\epsilon\partial_x+g^\epsilon\partial_Y)u^\epsilon=\kappa\epsilon(\partial_x^2 h^\epsilon+\partial_Y^2 h^\epsilon),\\ \partial_t g^\epsilon+(u^\epsilon\partial_x+v^\epsilon\partial_Y)g^\epsilon-(h^\epsilon\partial_x+g^\epsilon\partial_Y)v^\epsilon=\kappa\epsilon(\partial_x^2 g^\epsilon+\partial_Y^2 g^\epsilon),\\ \partial_x u^\epsilon+\partial_Y v^\epsilon=0,\qquad \partial_x h^\epsilon+\partial_Y g^\epsilon=0, \end{array} \right.
\end{align} where both the viscosity and resistivity coefficients are denoted by a small positive parameter $\epsilon$, and $(u^\epsilon, v^\epsilon)$ and $(h^\epsilon, g^\epsilon)$ represent the velocity and the magnetic field respectively. The no-slip boundary condition is imposed on the velocity field \begin{align} \label{BCV} (u^\epsilon, v^\epsilon)|_{Y=0}={\bf{0}}, \end{align} and the perfectly conducting boundary condition is given for the magnetic field \begin{align} \label{BCM} (\partial_Yh^\epsilon, g^\epsilon)|_{Y=0}={\bf{0}}. \end{align} Formally, when $\epsilon=0$, (\ref{V1}) is reduced to the following incompressible ideal MHD system \begin{align} \label{ILE0} \left\{ \begin{array}{ll} \partial_t u^0_e+(u^0_e\partial_x+v^0_e\partial_Y)u^0_e+\partial_xp^0_e-(h^0_e\partial_x+g^0_e\partial_Y)h^0_e=0,\\ \partial_t v^0_e+(u^0_e\partial_x+v^0_e\partial_Y)v^0_e+\partial_Yp^0_e-(h^0_e\partial_x+g^0_e\partial_Y)g^0_e=0,\\ \partial_t h^0_e+(u^0_e\partial_x+v^0_e\partial_Y)h^0_e-(h^0_e\partial_x+g^0_e\partial_Y)u^0_e=0,\\ \partial_t g^0_e+(u^0_e\partial_x+v^0_e\partial_Y)g^0_e-(h^0_e\partial_x+g^0_e\partial_Y)v^0_e=0,\\ \partial_x u^0_e+\partial_Y v^0_e=0,\qquad \partial_x h^0_e+\partial_Y g^0_e=0. \end{array} \right. \end{align} Since the solvability of the system (\ref{ILE0}) requires only the normal components of the velocity and magnetic fields $(v_e^0, g^0_e)$ on the boundary \begin{align} \label{IBE} (v^0_e, g^0_e)|_{Y=0}={\bf 0}, \end{align} in the limit from (\ref{V1}) to (\ref{ILE0}), a Prandtl-type boundary layer can be derived to resolve the mismatch of the tangential components between the viscous flow $(u^\epsilon, h^\epsilon)$ and the inviscid flow $(u^0_e, h^0_e)$ on the boundary $\{Y=0\}$. This system, governing the fluid behavior at the leading order of approximation near the boundary, is derived in \cite{GP,LXY1,LXY2}: \begin{align} \label{1.1} \left\{ \begin{array}{ll} \partial_tu_1+u_1\partial_xu_1+u_2\partial_yu_1=b_1\partial_xb_1+b_2\partial_yb_1+\partial^2_yu_1,\\ \partial_tb_1+\partial_y(u_2b_1-u_1b_2)=\kappa\partial_y^2b_1,\\ \partial_xu_1+\partial_yu_2=0,\quad \partial_xb_1+\partial_yb_2=0 \end{array} \right. \end{align} in $\mathbb{H}=\{(x,y)\in\mathbb{R}^2| y\geq0\}$ with the fast variable $y=Y/\sqrt{\epsilon}$. Here, the trace of the horizontal ideal MHD flow (\ref{ILE0}) on the boundary $\{Y=0\}$ is assumed to be a constant vector so that the pressure term $\partial_xp^0_e(t,x,0)$ vanishes by Bernoulli's law. Consider the system $(\ref{1.1})$ with initial data \begin{align} \label{ID} u_1(t,x,y)|_{t=0}=u_0(x,y),\qquad\qquad b_1(t,x,y)|_{t=0}=b_0(x,y), \end{align} and the boundary conditions \begin{align} \label{1.2} \left\{ \begin{array}{ll} u_1|_{y=0}=0,\\ u_2|_{y=0}=0, \end{array} \right. \qquad\hbox{and}\qquad \left\{ \begin{array}{ll} \partial_yb_1|_{y=0}=0,\\ b_2|_{y=0}=0. \end{array} \right. \end{align} The far field state is denoted by $(\bar{u}, \bar{b})$: \begin{align} \label{1.3} \lim_{y\rightarrow+\infty}u_1=u^0_e(t,x,0)\triangleq\bar{u},\qquad \lim_{y\rightarrow+\infty}b_1=h^0_e(t,x,0)\triangleq\bar{b}. \end{align} First of all, a shear flow $(u_s(t,y),0,\bar{b}, 0)$ is a trivial solution to the system \eqref{1.1} with $u_s(t,y)$ solving \begin{align} \label{GS} \left\{ \begin{array}{ll} \partial_tu_s(t,y)-\partial_y^2u_s(t,y)=0, \quad (t, y)\in\mathbb{R}_+\times\mathbb{R}_+,\\ u_s(t,y=0)=0, \qquad \lim_{y\rightarrow\infty}u_s(t,y)=\bar{u},\\ u_{s}(t=0, y)=u_{s0}(y). \end{array} \right.
\end{align} In the following discussion, we assume the shear flow $u_s(t,y)$ has the following properties: \begin{align*} {\bf (H)}\quad \|\partial_y^iu_s(t,\cdot)\|_{L^\infty_y}\leq \frac{C}{\langle t \rangle^{i/2}}\quad (i=1,2),\quad \int_0^\infty|\partial_yu_s(t,y)|dy<C,\quad\|\theta_\alpha\partial_y^2u_s(t,\cdot)\|_{L^2_y}\leq \frac{C}{\langle t \rangle^{3/4}}, \end{align*} for some generic constant $C$. \begin{re} \label{RK2} The assumption (H) on the shear flow holds for a large class of initial data $u_{s0}$. For example, it holds for the initial data $u_{s0}=\chi(y)$ with $\chi(y)\in C^\infty(\mathbb{R})$, $\chi(y)=0$ for $y\leq 1$ and $\chi(y)=\bar{u}$ for $y\geq 2$ considered in \cite{ZZ} for the Prandtl system. Note that here we do not assume the smallness of the shear flow. In addition, it also holds when $u_{s0}(y)=\frac{1}{\sqrt{\pi}}\int_0^{y}\exp(-\frac{z^2}{4})dz$, which is considered in \cite{IV} for the Prandtl system, where an almost global solution is obtained. Note that for the classical Prandtl equations, such a shear flow in the form of the Gaussian error function yields a time-decay damping term in the time evolution equation of $u_1$; however, it does not lead to any damping effect in the MHD boundary layer system \eqref{1.1}. \end{re} To define the function space of the solution considered in this paper, the following Gaussian weighted function $\theta_\alpha$ will be used: \begin{align*} \theta_{\alpha}(t,y)=\exp{(\frac{\alpha z(t,y)^2}{4})},\quad \hbox{with}\quad z(t,y)=\frac{y}{\sqrt{\langle t\rangle}},\quad \langle t\rangle=1+t\quad\hbox{and}\quad \alpha\in[1/4,1/2]. \end{align*} With this and \begin{align*} M_m=\frac{\sqrt{m+1}}{m!}, \end{align*} define the Sobolev weighted semi-norms by \begin{align} \label{NM1} &X_m=X_m(f,\tau)=\|\theta_{\alpha}\partial_x^mf\|_{L^2}\tau^mM_m,\quad D_m=D_m(f,\tau)=\|\theta_{\alpha}\partial_y\partial_x^mf\|_{L^2}\tau^mM_m,\nonumber\\ &Z_m=Z_m(f,\tau)=\|z\theta_{\alpha}\partial_x^mf\|_{L^2}\tau^mM_m,\quad Y_m=Y_m(f,\tau)=\|\theta_{\alpha}\partial_x^mf\|_{L^2}\tau^{m-1}mM_m. \end{align} Then the following space of analytic functions in the tangential variable $x$ and Sobolev weighted in the normal variable $y$ is defined by \begin{align*} X_{\tau, \alpha}=\{f(t,x,y)\in L^2(\mathbb{H}; \theta_{\alpha}dxdy): \|f\|_{X_{\tau, \alpha}}<\infty\} \end{align*} with $\tau>0$ and the norm \begin{align*} \|f\|_{X_{\tau, \alpha}}=\sum_{m\geq 0}X_m(f,\tau). \end{align*} In addition, the following two semi-norms will also be used: \begin{align*} \|f\|_{D_{\tau, \alpha}}=\sum_{m\geq 0}D_m(f,\tau)=\|\partial_yf\|_{X_{\tau, \alpha}},\quad \|f\|_{Y_{\tau, \alpha}}=\sum_{m\geq 1}Y_m(f,\tau). \end{align*} Here, the summation over $m$ is considered in the $l^1$ sense, which is similar to the definition used in \cite{IV,ZZ}, rather than in the $l^2$ sense used in \cite{KV}. With the above notations, we are now ready to state the main theorem as follows. \begin{thm} \label{THM} For any $\lambda\in [3/2, 2)$, there exists a small positive constant $\varepsilon_*$ depending on $2-\lambda$. Under the assumption (H) on the background shear flow $(u_s(t,y), 0, \bar{b},0)$ with $\bar{b}\neq 0$, assume the initial data $u_0$ and $b_0$ satisfy \begin{align} \label{THM1} \|u_0-u_s(0,y)\|_{X_{2\tau_0,1/2}}\leq \varepsilon, \quad \|b_0-\bar{b}\|_{X_{2\tau_0,1/2}}\leq \varepsilon, \end{align} for some given $\varepsilon\in (0, \varepsilon_*]$.
Then there exists a unique solution $(u_1, u_2, b_1, b_2)$ to the MHD boundary layer equations (\ref{1.1})-(\ref{1.3}) such that \begin{align*} (u_1-u_s(t,y), b_1-\bar{b})\in X_{\tau,\alpha},\ \alpha\in[1/4,1/2], \end{align*} with analyticity radius $\tau$ larger than $\tau_0/4$ in the time interval $[0, T_\varepsilon]$. The lifespan $T_\varepsilon$ has the following lower bound estimate \begin{align} \label{THM3} T_\varepsilon\geq {C}\varepsilon^{-\lambda}, \end{align} where the constant ${C}$ is independent of $\varepsilon$. \end{thm} It is well known that the leading order characteristic boundary layer for the incompressible Navier-Stokes equations with no-slip boundary condition is described by the classical Prandtl equations derived by Prandtl \cite{P} in 1904. In two space dimensions, under the monotonicity assumption on the tangential velocity in the normal direction, Oleinik first obtained the local existence of classical solutions by using the Crocco transformation, cf. \cite{O} and Oleinik-Samokhin's classical book \cite{OS}. Recently, this well-posedness result was re-proved by using an energy method in the framework of Sobolev spaces in \cite{AWXY} and \cite{MW1} independently by observing the cancellation mechanism in the convection terms. By imposing an additional favorable condition on the pressure, a global in time weak solution was obtained in \cite{XZ}. When the monotonicity condition is violated, singularity formation or separation of the boundary layer is well expected and observed. For this, E-Engquist constructed a finite time blowup solution to the Prandtl equations in \cite{EE}. Recently, when the background shear flow has a non-degenerate critical point, some interesting ill-posedness (or instability) phenomena of solutions to both linear and nonlinear classical Prandtl equations around shear flows have been studied, cf. \cite{GD,GN,GN1}. All these results show that the monotonicity assumption on the tangential velocity plays a key role in the well-posedness theory except in the frameworks of analytic functions and Gevrey regularity classes. Indeed, in the framework of analytic functions, Sammartino and Caflisch \cite{SC,CS} established the local well-posedness theory of the Prandtl system in three space dimensions and also justified the Prandtl ansatz in this setting by applying the abstract Cauchy-Kowalewskaya (CK) theorem initiated by Asano's unpublished work. Later, the analyticity requirement in the normal variable $y$ was removed by Lombardo, Cannone and Sammartino in \cite{LCS} because of the viscous effect in the normal direction. Recently, Zhang and Zhang obtained the lifespan of the small analytic solution to the classical Prandtl equations with small analytic initial data in \cite{ZZ}. Precisely, when the strength of the background shear flow is of the order of $\varepsilon^{5/3}$ and the perturbation is of the order of $\varepsilon$, they showed that the classical Prandtl system has a unique solution with a lower bound estimate on the lifespan of the order of $\varepsilon^{-4/3}$. Furthermore, if the initial data is a small analytic perturbation of the Gaussian error function (\ref{GS}), an almost global existence result for the Prandtl boundary layer equations is proved in \cite{IV}.
On the other hand, to study the high Reynolds number limits for the MHD equations (\ref{V1}) with the no-slip boundary condition on the velocity (\ref{BCV}) and the perfectly conducting boundary condition (\ref{BCM}) on the magnetic field, one can apply the Prandtl ansatz to derive the boundary layer system (\ref{1.1}) as the leading order description of the flow near the boundary. For this, readers can refer to \cite{GP,LXY1,LXY2,LXY3,XY} for the formal derivation of (\ref{1.1}), the well-posedness theory of the system and the justification of the Prandtl ansatz locally in time. This paper is about the long time existence of solutions to (\ref{1.1})-(\ref{1.3}). Precisely, we will show that if the initial data is a small analytic perturbation of a shear flow of the order of $\varepsilon$, then there exists a unique solution to (\ref{1.1})-(\ref{1.3}) with lifespan $T_\varepsilon$ of the order of $\varepsilon^{-2+}$. Compared with the estimate on the lifespan of solutions to the classical Prandtl system studied in \cite{ZZ}, the lower bound estimate is larger and there is no requirement on the smallness of the background shear flow, because the cancellation mechanism in the system, which relies on the non-degeneracy of the tangential magnetic field, is used. However, it is not known whether one can obtain a global or almost global in time solution like the work on the Prandtl system when the background shear velocity is taken to be a Gaussian error function in \cite{IV}. We mention that even though Lin and Zhang showed the almost global existence of solutions to the MHD boundary layer equations with the zero Dirichlet boundary condition on the magnetic field in \cite{LZ} when the components of both the background velocity and magnetic fields are Gaussian error functions, it is not clear whether the system (\ref{1.1}) holds with the zero Dirichlet boundary condition even at the level of the formal derivation. The analysis on the lifespan of the perturbed system in this paper relies on the introduction of some new unknown functions that capture the cancellation of some linear terms. Unlike the work in \cite{IV} on the Prandtl system for which the cancellation yields a damping term in the time evolution of the perturbation of the tangential velocity field, there is no such damping effect observed for the MHD boundary layer system. Finally, the rest of the paper is organized as follows. After giving some preliminary estimates, a uniform estimate on the solution will be proved in the next section. Based on this uniform estimate, a lower bound on the lifespan of the solution is derived in Section 3. The uniqueness part is done in Section 4. Throughout the paper, constants denoted by $C$, $\bar{C}, C_0, C_1$ and $C_2$ are generic and independent of the small parameter $\varepsilon$. \section{Uniform Estimate} We first list the following two preliminary estimates on the functions in the norms defined in the previous section. The first estimate is indeed from Lemma 3.3 in \cite{IV} (see also \cite{H}). \begin{lem} \label{LEM2.1} (Poincar\'e type inequality with Gaussian weight) Let $f$ be a function such that $f|_{y=0}=0\ (or\ \partial_yf|_{y=0}=0)$ and $f|_{y=\infty}=0$. Then, for $\alpha\in[1/4,1/2], m\geq 0$ and $t\geq 0$, it holds that \begin{align} \label{2.1} \frac{\alpha}{\langle t\rangle}\|\theta_{\alpha}\partial_x^mf\|_{L^2_y}^2\leq\|\theta_{\alpha}\partial_y\partial_x^mf\|_{L^2_y}^2. \end{align} \end{lem} The second lemma is used in \cite{IV} and we include it here with a short proof for the convenience of readers.
\begin{lem}\label{LEM2.2} Let $f$ be a function such that $f|_{y=0}=0\ (or\ \partial_yf|_{y=0})$ and $f|_{y=\infty}=0$. Then \begin{align} \label{LL} \sum_{m\geq 0}\frac{\|\theta_{\alpha}\partial_y\partial_x^mf\|^2_{L^2}}{\|\theta_{\alpha}\partial_x^mf\|_{L^2}}\tau^mM_m\geq \frac{\alpha^{1/2}\beta}{2\langle t\rangle^{1/2}}\|f\|_{D_{\tau,\alpha}}+\frac{\alpha(1-\beta)}{\langle t\rangle}\|f\|_{X_{\tau,\alpha}}, \end{align} for $\beta\in(0, 1/2)$. \end{lem} \begin{pf} In fact, by Lemma \ref{LEM2.1}, one has \begin{align*} \frac{\|\theta_{\alpha}\partial_y\partial_x^mf\|^2_{L^2}}{\|\theta_{\alpha}\partial_x^mf\|_{L^2}}\geq & \frac{\beta}{2}\frac{\|\theta_{\alpha}\partial_y\partial_x^mf\|^2_{L^2}}{\|\theta_{\alpha}\partial_x^mf\|_{L^2}}+\frac{2-\beta}{2}\frac{\alpha^{1/2}}{\langle t\rangle^{1/2}}\|\theta_{\alpha}\partial_y\partial_x^mf\|_{L^2}\\ \geq &\frac{\beta}{2}\frac{\|\theta_{\alpha}\partial_y\partial_x^mf\|^2_{L^2}}{\|\theta_{\alpha}\partial_x^mf\|_{L^2}}+\frac{\beta\alpha^{1/2}}{2\langle t\rangle^{1/2}}\|\theta_{\alpha}\partial_y\partial_x^mf\|_{L^2}+\frac{\alpha(1-\beta)}{\langle t\rangle}\|\theta_{\alpha}\partial_x^mf\|_{L^2}\\ \geq& \frac{\beta\alpha^{1/2}}{2\langle t\rangle^{1/2}}\|\theta_{\alpha}\partial_y\partial_x^mf\|_{L^2}+\frac{\alpha(1-\beta)}{\langle t\rangle}\|\theta_{\alpha}\partial_x^mf\|_{L^2}. \end{align*} Multiplying the above inequality by $\tau^mM_m$ and summing up in $m\geq 0$ give (\ref{LL}). \end{pf} We are now ready to study a uniform estimate on the solution. For this, we first rewrite the solution to (\ref{1.1})-(\ref{1.3}) as a perturbation $(u, v, b, g)$ of the $(u_s(t,y), 0, \bar{b}, 0)$ by denoting \begin{align} \label{EXP} \left\{ \begin{array}{ll} u_1=u_s(t,y)+u,\\ u_2=v, \end{array} \right. \qquad\qquad \left\{ \begin{array}{ll} b_1=\bar{b}+b,\\ b_2=g. \end{array} \right. \end{align} Without loss of generality, take $\bar{b}=1$ and $\kappa=1$. Then (\ref{1.1}) yields \begin{align} \label{1.4} \left\{ \begin{array}{ll} \partial_tu+(u_s+u)\partial_xu+v\partial_y(u_s+u)-(1+b)\partial_xb-g\partial_yb-\partial_y^2u=0,\\ \partial_tb-(1+b)\partial_xu-g\partial_y(u_s+u)+(u_s+u)\partial_xb+v\partial_yb-\partial_y^2b=0. \end{array} \right. \end{align} And the initial and boundary data of $(u, v)$ and $(b, g)$ are given by \begin{align} \label{NID} u(t,x,y)|_{t=0}=u_0(x,y)-u_s(0,y),\qquad b(t, x,y)|_{t=0}=b_0(x,y)-1, \end{align} \begin{align} \label{NBC} \left\{ \begin{array}{ll} u|_{y=0}=0,\\ v|_{y=0}=0, \end{array} \right. \qquad\hbox{and}\qquad \left\{ \begin{array}{ll} \partial_yb|_{y=0}=0,\\ g|_{y=0}=0, \end{array} \right. \end{align} with the corresponding far field condition \begin{align} \label{NFC} \lim_{y\rightarrow+\infty}u=0,\qquad \lim_{y\rightarrow+\infty}b=0. \end{align} It suffices to establish the long time existence of solutions to (\ref{1.4})-(\ref{NFC}). In this section, we focus on the uniform a priori estimate on the solution to (\ref{1.4}) in the analytical framework defined in Section 1. Integrating equation $(\ref{1.4})_2$ over $[0,y]$ gives that \begin{align} \label{3.1} \partial_t\int_0^ybd\tilde{y}+v(1+b)-(u_s+u)g=\partial^2_y\int_0^ybd\tilde{y}, \end{align} where the boundary conditions that $\partial_yb|_{y=0}=v|_{y=0}=g|_{y=0}=0$ are used.\\ Define \begin{align*} \psi(t,y)=\int_0^ybd\tilde{y}, \end{align*} one has \begin{align} \label{3.2} \partial_t\psi+v(1+b)-(u_s+u)g=\partial^2_y\psi. 
\end{align} Now introduce new unknown functions, which take care of the cancellation mechanism in the system as observed in \cite{LXY1}, as follows \begin{align} \label{3.3} \tilde{u}=u-\partial_yu_s\psi,\qquad \tilde{b}=b. \end{align} Then $(\tilde{u}, \tilde{b})$ satisfies the following equations. \begin{align} \label{3.4} \left\{ \begin{array}{ll} \partial_t\tilde{u}-\partial_y^2\tilde{u}+(u_s+u)\partial_x\tilde{u}+v\partial_y\tilde{u}-(1+b)\partial_x\tilde{b}-g\partial_y\tilde{b}-2\partial_y^2u_s\tilde{b}+v\partial_y^2u_s\psi=0,\\ \partial_t\tilde{b}-\partial_y^2\tilde{b}-(1+b)\partial_x\tilde{u}-g\partial_y\tilde{u}+(u_s+u)\partial_x\tilde{b}+v\partial_y\tilde{b}-g\partial_y^2u_s\psi=0. \end{array} \right. \end{align} Here we have used the fact that $u_s$ is a solution to the heat equation, that is, \begin{align*} \partial_tu_s-\partial_y^2u_s=0,\qquad \partial_t\partial_yu_s-\partial_y^3u_s=0. \end{align*} By a direct calculation, the boundary conditions of $(\tilde{u}, \tilde{b})$ are given by \begin{align} \label{BC} \tilde{u}|_{y=0}=0,\qquad \partial_y\tilde{b}|_{y=0}=0, \end{align} \begin{align} \label{FBC} \tilde{u}|_{y=\infty}=0,\qquad \tilde{b}|_{y=\infty}=0. \end{align} We then turn to show the existence of the solution $(\tilde{u}, \tilde{b})$ to (\ref{3.4})-(\ref{FBC}) with the corresponding initial data. \begin{align} \label{IIII} \tilde{u}(0,x,y)=u(0,x,y)-\partial_yu_s(0,y)\int_0^yb(0,x,\tilde{y})d\tilde{y},\qquad \tilde{b}(0,x,y)=b(0,x,y). \end{align} Note that \begin{align} \label{IIIIE} \|\tilde{u}(0,x,y)\|_{X_{2\tau_0, \alpha}}\leq \|u(0,x,y)\|_{X_{2\tau_0, \alpha}}+C\|b(0,x,y)\|_{X_{2\tau_0, \alpha}}, \end{align} for $\alpha\in[1/4, 1/2]$. Moreover, once the existence of the solution $(\tilde{u}, \tilde{b})$ to (\ref{3.4})-(\ref{IIII}) is obtained, one can define $(u, b)$ by \begin{align} \label{RS} u(t,x,y)=\tilde{u}(t,x,y)+\partial_yu_s(t,y)\int_0^y\tilde{b}(t,x,\tilde{y})d\tilde{y},\qquad b(t,x,y)=\tilde{b}(t,x,y). \end{align} It is straightforward to check that $(u, b)$ is a solution to (\ref{1.4})-(\ref{NFC}) with the following estimates \begin{align*} \|u\|_{X_{\tau, \alpha}}\leq \|\tilde{u}\|_{X_{\tau, \alpha}}+C\|\tilde{b}\|_{X_{\tau, \alpha}},\qquad \|b\|_{X_{\tau, \alpha}}=\|\tilde{b}\|_{X_{\tau, \alpha}}. \end{align*} Therefore, we only need to estimate the solution $(\tilde{u}, \tilde{b})$ to (\ref{3.4})-(\ref{IIII}) in the analytic norms as shown in the next two subsections. \subsection{A priori estimate on velocity field} For $m\geq 0$, by applying the tangential derivative operator $\partial_x^m$ to $(\ref{3.4})_1$ and multiplying it by $\theta_{\alpha}^2\partial_x^m\tilde{u}$, the integration over $\mathbb{H}$ yields \begin{align} \label{3.5} \int_{\mathbb{H}}\partial_x^m(\partial_t\tilde{u}-\partial_y^2\tilde{u}+(u_s+u)\partial_x\tilde{u}+v\partial_y\tilde{u}- (1+b)\partial_x\tilde{b}-g\partial_y\tilde{b}-2\partial_y^2u_s\tilde{b}+v\psi\partial_y^2u_s)\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy=0. \end{align} We now estimate each term in $(\ref{3.5})$ as follows.
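Before doing so, we record two elementary points; both are brief sketches rather than complete arguments. (i) The reason the substitution (\ref{3.3}) removes the non-decaying linear term $v\partial_yu_s$ coming from $v\partial_y(u_s+u)$ in $(\ref{1.4})_1$ is the following cancellation: writing $u=\tilde{u}+\partial_yu_s\psi$ and using $\partial_y\psi=b$, $\partial_x\psi=-g$ (from $\partial_xb+\partial_yg=0$ and $g|_{y=0}=0$), the heat equation for $u_s$ and (\ref{3.2}),
\begin{align*}
\partial_tu-\partial_y^2u
&=\partial_t\tilde{u}-\partial_y^2\tilde{u}
+\big(\partial_t\partial_yu_s-\partial_y^3u_s\big)\psi
+\partial_yu_s\big(\partial_t\psi-\partial_y^2\psi\big)-2\partial_y^2u_s\,\tilde{b}\\
&=\partial_t\tilde{u}-\partial_y^2\tilde{u}
+\partial_yu_s\big(-v(1+b)+(u_s+u)g\big)-2\partial_y^2u_s\,\tilde{b},
\end{align*}
and the term $-v(1+b)\partial_yu_s$ cancels exactly with $v\partial_yu_s+vb\,\partial_yu_s$ produced by $v\partial_y(u_s+u)$, while $(u_s+u)g\,\partial_yu_s$ cancels the term $-(u_s+u)\partial_yu_s\,g$ coming from $(u_s+u)\partial_xu$ via $\partial_x\psi=-g$; collecting what remains gives exactly $(\ref{3.4})_1$. (ii) We will repeatedly use the following elementary weighted interpolation inequalities: since $\theta_{\alpha}\geq1$ and the functions decay as $y\rightarrow\infty$, for fixed $x$,
\begin{align*}
|f(x,y)|^2\leq 2\int_0^\infty|f||\partial_yf|\,d\tilde{y}
\leq 2\|\theta_{\alpha}f(x,\cdot)\|_{L^2_y}\|\theta_{\alpha}\partial_yf(x,\cdot)\|_{L^2_y},
\end{align*}
so that, after taking the supremum in $y$ and integrating in $x$ with the Cauchy-Schwarz inequality, $\|f\|_{L^2_xL^\infty_y}\leq C\|\theta_{\alpha}f\|^{1/2}_{L^2}\|\theta_{\alpha}\partial_yf\|^{1/2}_{L^2}$; the bounds in $L^\infty_xL^2_y$ and $L^\infty_{xy}$ used below follow in the same way, with one extra tangential derivative in $x$ playing the role of $\partial_y$.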
Firstly, note that \begin{align} \label{3.6} &\int_\mathbb{H}\partial_t\partial_x^m\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy\nonumber\\ = &\frac12\frac{d}{dt}\int_\mathbb{H}(\partial_x^m\tilde{u})^2\theta_{\alpha}^2dxdy-\int_\mathbb{H}(\partial_x^m\tilde{u})^2\theta_{\alpha}\frac{d}{dt}\theta_{\alpha}dxdy\\ = &\frac12\frac{d}{dt}\|\theta_{\alpha}\partial_x^m\tilde{u}\|^2_{L^2}+\frac{\alpha}{4\langle t\rangle}\|\theta_{\alpha}z\partial_x^m\tilde{u}\|^2_{L^2},\nonumber \end{align} and \begin{align*} -\int_\mathbb{H}\partial_y^2\partial_x^m\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy = \|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2+\int_\mathbb{H}\partial_y\partial_x^m\tilde{u}\partial_y(\theta_{\alpha}^2)\partial_x^m\tilde{u}dxdy. \end{align*} The boundary term vanishes because of the boundary condition $\partial_x^m\tilde{u}|_{y=0}=0$. Furthermore, \begin{align*} &\int_\mathbb{H}\partial_y\partial_x^m\tilde{u}\partial_y(\theta_{\alpha}^2)\partial_x^m\tilde{u}dxdy =-\frac12\int_\mathbb{H}(\partial_x^m\tilde{u})^2\partial_y^2(\theta_{\alpha}^2)dxdy\\ =&-\frac{\alpha}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}\partial_x^m\tilde{u}\|^2_{L^2}-\frac{\alpha^2}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}z\partial_x^m\tilde{u}\|^2_{L^2}, \end{align*} where we have used \begin{align*} \partial_y^2(\theta_{\alpha}^2)=\frac{\alpha}{\langle t\rangle}\theta_{\alpha}^2+\frac{\alpha^2}{\langle t\rangle}z^2(t,y)\theta_{\alpha}^2. \end{align*} Consequently, \begin{align} \label{3.7} -\int_\mathbb{H}\partial_y^2\partial_x^m\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy = \|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2-\frac{\alpha}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}\partial_x^m\tilde{u}\|^2_{L^2}-\frac{\alpha^2}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}z\partial_x^m\tilde{u}\|^2_{L^2}. \end{align} For the nonlinear terms in (\ref{3.5}), we have \begin{align*} \int_{\mathbb{H}}\partial_x^m((u_s+u)\partial_x\tilde{u})\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy =\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_xu\partial_x^{j+1}\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy \triangleq R_1 \end{align*} and \begin{align*} |R_1|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xu\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xu\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, by (\ref{RS}), one has \begin{align*} &\|\partial^{m-j}_xu\|_{L^2_xL^\infty_y}=\|\partial^{m-j}_x(\tilde{u}+\partial_yu_s\psi)\|_{L^2_xL^\infty_y}\\ \leq &\|\partial^{m-j}_x\tilde{u}\|_{L^2_xL^\infty_y}+\|\partial_yu_s\partial^{m-j}_x\psi\|_{L^2_xL^\infty_y}\\ \leq &C\|\theta_{\alpha}\partial^{m-j}_x\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j}_x\partial_y\tilde{u}\|^{1/2}_{L^2}+C\langle t\rangle^{-1/4}\|\theta_{\alpha}\partial^{m-j}_x\tilde{b}\|_{L^2}, \end{align*} where in the last inequality, we have used $$ \|\partial_yu_s\|_{L_y^\infty}\leq \frac{C}{\sqrt{\langle t\rangle}}, $$ according to the assumption (H). 
Moreover, \begin{align*} &\|\partial^{m-j}_x\psi\|_{L^2_xL^\infty_y}=\|\int_0^y\partial^{m-j}_x\tilde{b}d\tilde{y}\|_{L^2_xL^\infty_y}\\ =&\|\int_0^y\theta_{\alpha}\partial^{m-j}_x\tilde{b}\exp(-\frac{\alpha}{4}z^2)d\tilde{y}\|_{L^2_xL^\infty_y} \leq C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j}_x\tilde{b}\|_{L^2}. \end{align*} And \begin{align*} \|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|_{L^\infty_xL^2_y}\leq C\|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+2}_x\tilde{u}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, we have \begin{align*} \|\partial^{m-j}_xu\|_{L^\infty_{xy}}\leq &\|\partial^{m-j}_x\tilde{u}\|_{L_{xy}^\infty}+\|\partial_yu_s\partial^{m-j}_x\psi\|_{L_{xy}^\infty}\\ \leq &C\|\theta_{\alpha}\partial^{m-j}_x\tilde{u}\|^{1/4}_{L^2}\|\theta_{\alpha}\partial^{m-j}_x\partial_y\tilde{u}\|^{1/4}_{L^2} \|\theta_{\alpha}\partial^{m-j+1}_x\tilde{u}\|^{1/4}_{L^2}\|\theta_{\alpha}\partial^{m-j+1}_x\partial_y\tilde{u}\|^{1/4}_{L^2}\\ &+C\langle t\rangle^{-1/4}\|\theta_{\alpha}\partial^{m-j}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} Hence, \begin{align} \label{NE1} \frac{|R_1|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{u}\|_{L^2}}\leq& \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}(X^{1/2}_{m-j}D^{1/2}_{m-j}+\langle t\rangle^{-1/4}\bar{X}_{m-j})Y^{1/2}_{j+1}Y^{1/2}_{j+2} \right.\\ &\left.+\sum_{j=[m/2]+1}^{m}(X^{1/4}_{m-j}X^{1/4}_{m-j+1}D^{1/4}_{m-j}D^{1/4}_{m-j+1}+\langle t\rangle^{-1/4}\bar{X}_{m-j}^{1/2}\bar{X}_{m-j+1}^{1/2})Y_{j+1}\right\}.\nonumber \end{align} From now on, we use $X_i, D_i, Y_i$ to denote the semi-norms of function $\tilde{u}$ defined in (\ref{NM1}), and $\bar{X}_i, \bar{D}_i$ and $\bar{Y}_i$ for the corresponding semi-norms for $\tilde{b}$. Note that \begin{align*} \int_{\mathbb{H}}\partial_x^m(v\partial_y\tilde{u})\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy =\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_xv\partial_x^{j}\partial_y\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy \triangleq R_2 \end{align*} and \begin{align*} |R_2|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xv\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xv\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, \begin{align*} \|\partial^{m-j}_xv\|_{L^2_xL^\infty_y}=&\|\int_0^y\partial^{m-j+1}_xud\tilde{y}\|_{L^2_xL^\infty_y}\\ \leq&\|\int_0^y\partial^{m-j+1}_x\tilde{u}d\tilde{y}\|_{L^2_xL^\infty_y}+\|\int_0^y\partial_yu_s(\int_0^{\tilde{y}}\partial^{m-j+1}_x\tilde{b}ds)d\tilde{y}\|_{L^2_xL^\infty_y}\\ \leq&\|\int_0^y\partial^{m-j+1}_x\tilde{u}d\tilde{y}\|_{L^2_xL^\infty_y}+\|\int_0^y\partial_yu_s d\tilde{y}\|_{L^\infty_y}\|\int_0^{\tilde{y}}\partial^{m-j+1}_x\tilde{b}ds\|_{L^2_xL^\infty_y}\\ \leq&C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{u}\|_{L^2}+C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2}, \end{align*} where we have used $\int_0^\infty|\partial_yu_s(t,y)|dy<C$ by the assumption (H). 
Note that \begin{align*} \|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|_{L^\infty_xL^2_y}\leq C\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+1}_x\partial_y\tilde{u}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, \begin{align*} \|\partial^{m-j}_xv\|_{L^\infty_{xy}}\leq & \|\int_0^y\partial^{m-j+1}_x\tilde{u}d\tilde{y}\|_{L^\infty_{xy}}+\|\int_0^y\partial_yu_s(\int_0^{\tilde{y}}\partial^{m-j+1}_x\tilde{b}ds)d\tilde{y}\|_{L^\infty_{xy}}\\ \leq & \|\int_0^y\partial^{m-j+1}_x\tilde{u}d\tilde{y}\|_{L^\infty_{xy}}+\|\int_0^y\partial_yu_s d\tilde{y}\|_{L^\infty_{y}}\|\int_0^{\tilde{y}}\partial^{m-j+1}_x\tilde{b}ds\|_{L^\infty_{xy}}\\ \leq & C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{u}\|_{L^2_yL^\infty_x}+C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2_yL^\infty_x}\\ \leq &C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{u}\|^{1/2}_{L^2}+C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} Hence, \begin{align} \label{NE2} \frac{|R_2|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{u}\|_{L^2}}\leq& \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}(\langle t\rangle^{1/4}Y_{m-j+1}+\langle t\rangle^{1/4}\bar{Y}_{m-j+1})D^{1/2}_{j}D^{1/2}_{j+1} \right.\\ &\left.+\sum_{j=[m/2]+1}^{m}(\langle t\rangle^{1/4}Y_{m-j+1}^{1/2}Y_{m-j+2}^{1/2}+\langle t\rangle^{1/4}\bar{Y}_{m-j+1}^{1/2}\bar{Y}_{m-j+2}^{1/2})D_{j}\right\}.\nonumber \end{align} Recall $b=\tilde{b}$ so that \begin{align*} R_3\triangleq\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_x\tilde{b}\partial_x^{j+1}\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy, \end{align*} and \begin{align*} |R_3|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_x\tilde{b}\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j+1}_x\tilde{b}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_x\tilde{b}\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j+1}_x\tilde{b}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, \begin{align*} \|\partial^{m-j}_x\tilde{b}\|_{L^2_xL^\infty_y} \leq C\|\theta_{\alpha}\partial^{m-j}_x\tilde{b}\|^{1/2}_{L^2}\|\partial^{m-j}_x\partial_y\tilde{b}\|^{1/2}_{L^2}, \end{align*} and \begin{align*} \|\partial^{j+1}_x\tilde{b}\|_{L^\infty_xL^2_y}\leq C\|\partial^{j+1}_x\tilde{b}\|^{1/2}_{L^2}\|\partial^{j+2}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, \begin{align*} \|\partial^{m-j}_x\tilde{b}\|_{L^\infty_{xy}} \leq C\|\partial^{m-j}_x\tilde{b}\|^{1/4}_{L^2}\|\partial^{m-j}_x\partial_y\tilde{b}\|^{1/4}_{L^2}\|\partial^{m-j+1}_x\tilde{b}\|^{1/4}_{L^2}\|\partial^{m-j+1}_x\partial_y\tilde{b}\|^{1/4}_{L^2}. 
\end{align*} Therefore, \begin{align} \label{NE3} \frac{|R_3|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{u}\|_{L^2}}\leq& \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}\bar{X}_{m-j}^{1/2}\bar{D}_{m-j}^{1/2}\bar{Y}_{j+1}^{1/2}\bar{Y}_{j+2}^{1/2}\right.\\ &\left.+\sum_{j=[m/2]+1}^{m}\bar{X}_{m-j}^{1/4}\bar{X}_{m-j+1}^{1/4}\bar{D}_{m-j}^{1/4}\bar{D}_{m-j+1}^{1/4}\bar{Y}_{j+1}\right\}.\nonumber \end{align} Note that \begin{align*} \int_{\mathbb{H}}\partial_x^m(g\partial_y\tilde{b})\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy =\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_xg\partial_x^{j}\partial_y\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy \triangleq R_4 \end{align*} and \begin{align*} |R_4|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xg\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{b}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xg\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{b}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, \begin{align*} \|\partial^{m-j}_xg\|_{L^2_xL^\infty_y}\leq C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2}, \end{align*} and \begin{align*} \|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{b}\|_{L^\infty_xL^2_y}\leq C\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+1}_x\partial_y\tilde{b}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, \begin{align*} \|\partial^{m-j}_xg\|_{L^\infty_{xy}}\leq & C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2_yL^\infty_x}\\ \leq & C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} As a consequence, we have \begin{align} \label{NE4} \frac{|R_4|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{u}\|_{L^2}}\leq \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}\langle t\rangle^{1/4}\bar{Y}_{m-j+1}\bar{D}^{1/2}_{j}\bar{D}^{1/2}_{j+1} +\sum_{j=[m/2]+1}^{m}\langle t\rangle^{1/4}\bar{Y}^{1/2}_{m-j+1}\bar{Y}^{1/2}_{m-j+2}\bar{D}_{j}\right\}. \end{align} And \begin{align*} |\int_\mathbb{H}\partial_y^2u_s\partial_x^m\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy| \leq&\|\partial_y^2u_s\|_{L^\infty_y}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}\\ \leq& C\langle t\rangle^{-1}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}, \end{align*} that is, \begin{align} \label{NE5} \frac{|\int_\mathbb{H}\partial_y^2u_s\partial_x^m\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy|}{\|\theta_{\alpha}\partial_x^m\tilde{u}\|_{L^2}}\leq C\langle t\rangle^{-1}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}, \end{align} where we have used $\|\partial_y^2u_s\|_{L_y^\infty}\leq \frac{C}{\langle t\rangle}$ by the assumption (H). We now consider \begin{align*} R_5\triangleq\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{H}\partial^{m-j}_xv \partial_y^2u_s \partial_x^{j}\psi\theta_{\alpha}^2\partial_x^m\tilde{u}dxdy. 
\end{align*} Note that \begin{align*} |R_5|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xv\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial_y^2u_s\|_{L^2_y} \|\partial^{j}_x\psi\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xv\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial_y^2u_s\|_{L^2_y} \|\partial^{j}_x\psi\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{m}_x\tilde{u}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, we have \begin{align*} \|\partial^{m-j}_xv\|_{L^2_xL^\infty_y} \leq C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{u}\|_{L^2} + C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2}, \end{align*} and \begin{align*} \|\theta_{\alpha}\partial_y^2u_s\|_{L^2_y}\leq \frac{C}{\langle t\rangle^{3/4}}, \end{align*} provided that $\alpha<1$ by the assumption (H). And \begin{align*} \|\partial^{j}_x\psi\|_{L^\infty_{xy}}\leq C\langle t\rangle^{1/4} \|\theta_{\alpha}\partial^{j}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+1}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, we have \begin{align*} \|\partial^{m-j}_xv\|_{L^\infty_{xy}} \leq& C\langle t\rangle^{1/4} \|\theta_{\alpha}\partial^{m-j+1}_x\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{u}\|^{1/2}_{L^2}\\ &+C\langle t\rangle^{1/4} \|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} And \begin{align*} \|\partial^{j}_x\psi\|_{L^2_xL^\infty_y}\leq C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial_x^j\tilde{b}\|_{L^2}. \end{align*} Hence, \begin{align} \label{NE6} \frac{|R_5|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{u}\|_{L^2}}\leq & \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}(\langle t\rangle^{-1/4}Y_{m-j+1}+\langle t\rangle^{-1/4}\bar{Y}_{m-j+1})\bar{X}^{1/2}_{j}\bar{X}^{1/2}_{j+1} \right.\\ &\left.+\sum_{j=[m/2]+1}^{m}(\langle t\rangle^{-1/4}Y_{m-j+1}^{1/2}Y_{m-j+2}^{1/2}+\langle t\rangle^{-1/4}\bar{Y}_{m-j+1}^{1/2}\bar{Y}_{m-j+2}^{1/2})\bar{X}_{j}\right\}.\nonumber \end{align} Combining the estimates (\ref{3.6})-(\ref{NE6}) and summing over $m\geq 0$ give \begin{align} \label{NEU} &\frac{d}{dt}\|\tilde{u}\|_{X_{\tau,\alpha}}+\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{u}\|_{L^2}}+\frac{\alpha(1-2\alpha)}{4\langle t\rangle}\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}z\partial_x^m\tilde{u}\|^2_{L^2}}{\|\theta_\alpha\partial_x^m\tilde{u}\|_{L^2}}\nonumber\\ &-\frac{\alpha}{2\langle t\rangle}\|\tilde{u}\|_{X_{\tau,\alpha}}-\frac{C}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}} \leq\dot{\tau}(t)\|\tilde{u}\|_{Y_{\tau,\alpha}}\\&+\frac{C_0}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}})\right)(\|\tilde{u}\|_{Y_{\tau,\alpha}}+\|\tilde{b}\|_{Y_{\tau,\alpha}}),\nonumber \end{align} where we have used the fact that for any positive sequences $\{a_j\}_{j\geq 0}$ and $\{b_j\}_{j\geq 0}$, \begin{align*} \sum_{m\geq 0}\sum_{j=0}^{m}a_jb_{m-j}\leq \sum_{j\geq 0}a_j\sum_{j\geq 0}b_j. 
\end{align*} Choosing $\alpha\leq1/2$ in (\ref{NEU}) yields \begin{align} \label{NEUU} &\frac{d}{dt}\|\tilde{u}\|_{X_{\tau,\alpha}}+\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{u}\|_{L^2}}-\frac{\alpha}{2\langle t\rangle}\|\tilde{u}\|_{X_{\tau,\alpha}}-\frac{C}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}\nonumber\\ \leq& \dot{\tau}(t)\|\tilde{u}\|_{Y_{\tau,\alpha}}+\frac{C_0}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}})\right)\nonumber\\ &\times(\|\tilde{u}\|_{Y_{\tau,\alpha}}+\|\tilde{b}\|_{Y_{\tau,\alpha}}). \end{align} \subsection{A priori estimate on magnetic field} Similarly, for $m\geq 0$, by applying the tangential derivative operator $\partial_x^m$ to $(\ref{3.4})_2$ and multiplying it by $\theta_{\alpha}^2\partial_x^m\tilde{b}$, the integration over $\mathbb{H}$ gives \begin{align} \label{3.8} \int_{\mathbb{H}}\partial_x^m(\partial_t\tilde{b}-\partial_y^2\tilde{b}-(1+b)\partial_x\tilde{u}-g\partial_y\tilde{u}+ (u_s+u)\partial_x\tilde{b}+v\partial_y\tilde{b}-g\partial_y^2u_s\psi)\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy=0. \end{align} We now estimate $(\ref{3.8})$ term by term as follows. Firstly, \begin{align} \label{3.19} &\int_\mathbb{H}\partial_t\partial_x^m\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy = \frac12\frac{d}{dt}\int_\mathbb{H}(\partial_x^m\tilde{b})^2\theta_{\alpha}^2dxdy-\int_\mathbb{H}(\partial_x^m\tilde{b})^2\theta_{\alpha}\frac{d}{dt}\theta_{\alpha}dxdy\\ = &\frac12\frac{d}{dt}\|\theta_{\alpha}\partial_x^m\tilde{b}\|^2_{L^2}+\frac{\alpha}{4\langle t\rangle}\|\theta_{\alpha}z\partial_x^m\tilde{b}\|^2_{L^2}.\nonumber \end{align} And \begin{align*} -\int_\mathbb{H}\partial_y^2\partial_x^m\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy =\|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2+\int_\mathbb{H}\partial_y\partial_x^m\tilde{b}\partial_y(\theta_{\alpha}^2)\partial_x^m\tilde{b}dxdy, \end{align*} where we have used the boundary condition $\partial_y\partial_x^m\tilde{b}|_{y=0}=0$. Moreover, \begin{align*} &\int_\mathbb{H}\partial_y\partial_x^m\tilde{b}\partial_y(\theta_{\alpha}^2)\partial_x^m\tilde{b}dxdy =-\frac12\int_\mathbb{H}(\partial_x^m\tilde{b})^2\partial_y^2(\theta_{\alpha}^2)dxdy\\ =&-\frac{\alpha}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}\partial_x^m\tilde{b}\|^2_{L^2}-\frac{\alpha^2}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}z\partial_x^m\tilde{b}\|^2_{L^2}.\nonumber \end{align*} Hence, \begin{align} \label{3.7} -\int_\mathbb{H}\partial_y^2\partial_x^m\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy = \|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2-\frac{\alpha}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}\partial_x^m\tilde{b}\|^2_{L^2}-\frac{\alpha^2}{2}\frac{1}{\langle t\rangle}\|\theta_{\alpha}z\partial_x^m\tilde{b}\|^2_{L^2}. \end{align} Similar to Subsection 2.1, the nonlinear terms can be estimated as follows. 
Firstly, \begin{align*} \int_{\mathbb{H}}\partial_x^m((1+b)\partial_x\tilde{u})\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy =\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{H}\partial^{m-j}_x\tilde{b}\partial_x^{j+1}\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy \triangleq R_6, \end{align*} and \begin{align*} |R_6|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_x\tilde{b}\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_x\tilde{b}\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, \begin{align*} \|\partial^{m-j}_x\tilde{b}\|_{L^2_xL^\infty_y} \leq C\|\theta_{\alpha}\partial^{m-j}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j}_x\partial_y\tilde{b}\|^{1/2}_{L^2}, \end{align*} and \begin{align*} \|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|_{L^\infty_xL^2_y}\leq C\|\theta_{\alpha}\partial^{j+1}_x\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+2}_x\tilde{u}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, \begin{align*} \|\partial^{m-j}_x\tilde{b}\|_{L^\infty_{xy}}\leq C\|\theta_{\alpha}\partial^{m-j}_x\tilde{b}\|^{1/4}_{L^2}\|\theta_{\alpha}\partial^{m-j}_x\partial_y\tilde{b}\|^{1/4}_{L^2} \|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/4}_{L^2}\|\theta_{\alpha}\partial^{m-j+1}_x\partial_y\tilde{b}\|^{1/4}_{L^2}. \end{align*} Hence, \begin{align} \label{NE7} \frac{|R_6|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{b}\|_{L^2}}\leq \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}\bar{X}_{m-j}^{1/2}\bar{D}_{m-j}^{1/2}Y_{j+1}^{1/2}Y_{j+2}^{1/2}+\sum_{j=[m/2]+1}^{m}\bar{X}^{1/4}_{m-j}\bar{X}^{1/4}_{m-j+1}\bar{D}^{1/4}_{m-j}\bar{D}^{1/4}_{m-j+1}Y_{j+1}\right\}. \end{align} Moreover, \begin{align*} \int_{\mathbb{H}}\partial_x^m(g\partial_y\tilde{u})\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy =\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_xg\partial_x^{j}\partial_y\tilde{u}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy \triangleq R_7, \end{align*} and \begin{align*} |R_7|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xg\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xg\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}. \end{align*} For $0\leq j\leq [m/2]$, \begin{align*} \|\partial^{m-j}_xg\|_{L^2_xL^\infty_y}\leq C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2}, \end{align*} and \begin{align*} \|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|_{L^\infty_xL^2_y}\leq C\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{u}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+1}_x\partial_y\tilde{u}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, \begin{align*} \|\partial^{m-j}_xg\|_{L^\infty_{xy}}\leq C \langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{b}\|^{1/2}_{L^2}. 
\end{align*} Therefore, \begin{align} \label{NE8} \frac{|R_7|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{b}\|_{L^2}}\leq \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}\langle t\rangle^{1/4}\bar{Y}_{m-j+1}D_{j}^{1/2}D_{j+1}^{1/2}+\sum_{j=[m/2]+1}^{m}\langle t\rangle^{1/4}\bar{Y}_{m-j+1}^{1/2}\bar{Y}_{m-j+2}^{1/2}D_{j}\right\}. \end{align} Denote \begin{align*} R_8\triangleq\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_xu\partial_x^{j+1}\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy, \end{align*} then \begin{align*} |R_8|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xu\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j+1}_x\tilde{b}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xu\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j+1}_x\tilde{b}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}. \end{align*} Similar to the estimation on $R_1$, we can obtain \begin{align} \label{NE9} \frac{|R_8|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{b}\|_{L^2}} \leq& \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}(X_{m-j}^{1/2}D_{m-j}^{1/2}+\langle t\rangle^{-1/4}\bar{X}_{m-j})\bar{Y}_{j+1}^{1/2}\bar{Y}_{j+2}^{1/2}\right.\\ &\left.+\sum_{j=[m/2]+1}^{m}(X_{m-j}^{1/4}X_{m-j+1}^{1/4}D_{m-j}^{1/4}D_{m-j+1}^{1/4}+\langle t\rangle^{-1/4}\bar{X}^{1/2}_{m-j}\bar{X}^{1/2}_{m-j+1})\bar{Y}_{j+1}\right\}.\nonumber \end{align} And \begin{align*} \int_{\mathbb{H}}\partial_x^m(v\partial_y\tilde{b})\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy =\sum_{j=0}^m(\begin{array}{ll}m\\j\end{array})\int_{\mathbb{H}}\partial^{m-j}_xv\partial_x^{j}\partial_y\tilde{b}\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy \triangleq R_9. \end{align*} Thus \begin{align*} |R_9|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xv\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{b}\|_{L^\infty_xL^2_y}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xv\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{j}_x\partial_y\tilde{b}\|_{L^2}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}. \end{align*} Similar to the estimation on $R_2$, we have \begin{align} \label{NE10} \frac{|R_9|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{b}\|_{L^2}} \leq& \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}(\langle t\rangle^{1/4}Y_{m-j+1}+\langle t\rangle^{1/4}\bar{Y}_{m-j+1})\bar{D}_{j}^{1/2}\bar{D}_{j+1}^{1/2}\right.\\ &\left.+\sum_{j=[m/2]+1}^{m}(\langle t\rangle^{1/4}Y_{m-j+1}^{1/2}Y_{m-j+2}^{1/2}+\langle t\rangle^{1/4}\bar{Y}_{m-j+1}^{1/2}\bar{Y}_{m-j+2}^{1/2})\bar{D}_{j}\right\}.\nonumber \end{align} Denote \begin{align*} R_{10} \triangleq \int_\mathbb{H}\partial_x^m(g\partial_y^2u_s\psi)\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy =\sum_{j=0}^m\int_\mathbb{H}\partial_x^{m-j}g\partial_y^2u_s\partial_x^j\psi\theta_{\alpha}^2\partial_x^m\tilde{b}dxdy. \end{align*} Then \begin{align*} |R_{10}|\leq& \sum_{j=0}^{[m/2]}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xg\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial_y^2u_s\|_{L^2_y} \|\partial^{j}_x\psi\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}\\ +&\sum_{j=[m/2]+1}^{m}(\begin{array}{ll}m\\j\end{array})\|\partial^{m-j}_xg\|_{L^\infty_{xy}}\|\theta_{\alpha}\partial_y^2u_s\|_{L^2_y} \|\partial^{j}_x\psi\|_{L^2_xL^\infty_y}\|\theta_{\alpha}\partial^{m}_x\tilde{b}\|_{L^2}. 
\end{align*} For $0\leq j\leq [m/2]$, \begin{align*} \|\partial^{m-j}_xg\|_{L^2_xL^\infty_y} \leq C\langle t\rangle^{1/4}\|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|_{L^2}, \end{align*} and \begin{align*} \|\theta_{\alpha}\partial_y^2u_s\|_{L^2_y}\leq \frac{C}{\langle t\rangle^{3/4}}, \end{align*} provided that $\alpha<1$ by the assumption (H). Moreover, \begin{align*} \|\partial^{j}_x\psi\|_{L^\infty_{xy}}\leq C\langle t\rangle^{1/4} \|\theta_{\alpha}\partial^{j}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{j+1}_x\tilde{b}\|^{1/2}_{L^2}. \end{align*} For $[m/2]+1\leq j\leq m$, \begin{align*} \|\partial^{m-j}_xg\|_{L^\infty_{xy}} \leq C\langle t\rangle^{1/4} \|\theta_{\alpha}\partial^{m-j+1}_x\tilde{b}\|^{1/2}_{L^2}\|\theta_{\alpha}\partial^{m-j+2}_x\tilde{b}\|^{1/2}_{L^2}, \end{align*} and \begin{align*} \|\partial^{j}_x\psi\|_{L^2_xL^\infty_y}\leq C\langle t\rangle^{1/4}\|\theta_\alpha\partial_x^j\tilde{b}\|_{L^2}. \end{align*} Consequently, \begin{align} \label{NE11} \frac{|R_{10}|\tau^mM_m}{\|\theta_{\alpha}\partial_x^m\tilde{b}\|_{L^2}} \leq& \frac{C}{(\tau(t))^{1/2}}\left\{\sum_{j=0}^{[m/2]}\langle t\rangle^{-1/4}\bar{Y}_{m-j+1}\bar{X}_j^{1/2}\bar{X}_{j+1}^{1/2}\right.\\ &\left.+\sum_{j=[m/2]+1}^{m}\langle t\rangle^{-1/4}\bar{Y}^{1/2}_{m-j+1}\bar{Y}^{1/2}_{m-j+2}\bar{X}_j\right\}.\nonumber \end{align} From the estimates (\ref{3.19})-(\ref{NE11}), summing over $m\geq 0$ yields \begin{align} \label{NEB} &\frac{d}{dt}\|\tilde{b}\|_{X_{\tau,\alpha}}+\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{b}\|_{L^2}}+\frac{\alpha(1-2\alpha)}{4\langle t\rangle}\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}z\partial_x^m\tilde{b}\|^2_{L^2}}{\|\theta_\alpha\partial_x^m\tilde{b}\|_{L^2}}-\frac{\alpha}{2\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}\nonumber\\ \leq&\dot{\tau}(t)\|\tilde{b}\|_{Y_{\tau,\alpha}}+\frac{C_0}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}})\right)\nonumber\\ &\times(\|\tilde{u}\|_{Y_{\tau,\alpha}}+\|\tilde{b}\|_{Y_{\tau,\alpha}}). \end{align} Similarly, by choosing $\alpha\leq1/2$, we have \begin{align} \label{NEBB} &\frac{d}{dt}\|\tilde{b}\|_{X_{\tau,\alpha}}+\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{b}\|_{L^2}}-\frac{\alpha}{2\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}\nonumber\\ \leq& \dot{\tau}(t)\|\tilde{b}\|_{Y_{\tau,\alpha}}+\frac{C_0}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}})\right)\nonumber\\ &\times(\|\tilde{u}\|_{Y_{\tau,\alpha}}+\|\tilde{b}\|_{Y_{\tau,\alpha}}). \end{align} \section{Proof of the Lifespan Estimate in Theorem \ref{THM}} By the uniform {\it a priori} estimates obtained in Section 2, we now derive the lower bound on the lifespan of the solution.
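Before carrying this out, let us indicate, as a rough sketch, the role of the weight $K$ introduced below: the term $2\partial_y^2u_s\tilde{b}$ in $(\ref{3.4})_1$, created by the substitution (\ref{3.3}), produces in (\ref{NEUU}) the linear contribution $\frac{C}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}$ (see (\ref{NE5})), which carries no smallness; it can only be absorbed by the weighted Poincar\'e gain of the magnetic field equation once the latter is multiplied by a sufficiently large $K$. Schematically, one needs
\begin{align*}
\frac{C}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}\leq \frac{K}{2}\,\frac{\alpha(1-2\beta_2)}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}},
\qquad\mbox{i.e.}\qquad K\geq\frac{2C}{\alpha(1-2\beta_2)},
\end{align*}
which, up to the precise constants, is the constraint behind the choice of $K$ made below.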
Consider $(\ref{NEUU})+K\times(\ref{NEBB})$, with $K>1$ to be determined later: \begin{align} \label{NEUB} &\frac{d}{dt}(\|\tilde{u}\|_{X_{\tau,\alpha}}+K\|\tilde{b}\|_{X_{\tau,\alpha}})+\sum_{m\geq 0}\tau^mM_m(\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{u}\|_{L^2}} +K\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{b}\|_{L^2}})\nonumber\\ &-\frac{\alpha}{2\langle t\rangle}\|\tilde{u}\|_{X_{\tau,\alpha}}-(C+\frac{K\alpha}{2})\frac{1}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}\\ \leq&\left(\dot{\tau}(t)+\frac{C_0(K+1)}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}})\right)\right)\nonumber\\ &\times(\|\tilde{u}\|_{Y_{\tau,\alpha}}+K\|\tilde{b}\|_{Y_{\tau,\alpha}}).\nonumber \end{align} Choose the function $\tau(t)$ to satisfy the following ODE: \begin{align} \label{ODE} \frac{d}{dt}(\tau(t))^{3/2}+\frac{3C_0(K+1)}{2}\left(\langle t\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}})\right)=0. \end{align} From (\ref{NEUB}) and (\ref{ODE}), one has \begin{align} \label{NEUBU} \frac{d}{dt}(\|\tilde{u}\|_{X_{\tau,\alpha}}+K\|\tilde{b}\|_{X_{\tau,\alpha}})&+\sum_{m\geq 0}\tau^mM_m\left(\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{u}\|_{L^2}} +K\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{b}\|_{L^2}}\right)\nonumber\\ &-\frac{\alpha}{2\langle t\rangle}\|\tilde{u}\|_{X_{\tau,\alpha}}-(C+\frac{K\alpha}{2})\frac{1}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}} \leq 0. \end{align} By Lemma \ref{LEM2.2}, we have \begin{align} \label{ESTU} \sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{u}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{u}\|_{L^2}}\geq \frac{\alpha^{1/2}\beta_1}{2\langle t\rangle^{1/2}}\|\tilde{u}\|_{D_{\tau,\alpha}}+\frac{\alpha(1-\beta_1)}{\langle t\rangle}\|\tilde{u}\|_{X_{\tau,\alpha}}, \end{align} and \begin{align} \label{ESTB} \sum_{m\geq 0}\tau^mM_m\frac{\|\theta_{\alpha}\partial_x^m\partial_y\tilde{b}\|_{L^2}^2}{\|\theta_\alpha\partial_x^m\tilde{b}\|_{L^2}}\geq \frac{\alpha^{1/2}\beta_2}{2\langle t\rangle^{1/2}}\|\tilde{b}\|_{D_{\tau,\alpha}}+\frac{\alpha(1-\beta_2)}{\langle t\rangle}\|\tilde{b}\|_{X_{\tau,\alpha}}, \end{align} for $\beta_1, \beta_2\in(0,1/2)$. From (\ref{NEUBU}), (\ref{ESTU}) and (\ref{ESTB}), it follows that \begin{align} \label{ESTT} \frac{d}{dt}(\|\tilde{u}\|_{X_{\tau,\alpha}}+K\|\tilde{b}\|_{X_{\tau,\alpha}})&+\frac12(\alpha(1-2\beta_1))\frac{1}{\langle t\rangle}\|\tilde{u}\|_{X_{\tau,\alpha}}+ \frac12(\alpha(1-2\beta_2)-\frac{2C}{K})\frac{1}{\langle t\rangle}K\|\tilde{b}\|_{X_{\tau,\alpha}}\nonumber\\ &\qquad\qquad+\frac{\alpha^{1/2}\beta_1}{2\langle t\rangle^{1/2}}\|\tilde{u}\|_{D_{\tau,\alpha}}+\frac{K\alpha^{1/2}\beta_2}{2\langle t\rangle^{1/2}}\|\tilde{b}\|_{D_{\tau,\alpha}}\leq 0.
\end{align} Choose \begin{align*} \alpha=\frac12-\delta,\quad \beta_1=\frac{\delta}{2},\quad \beta_2=\frac{\delta}{2},\quad K=\frac{4C}{\delta}, \end{align*} where $0<\delta<1/4$ is sufficiently small to be determined later, then \begin{align*} \alpha(1-2\beta_1)=\frac12-\frac32\delta+\delta^2, \end{align*} and \begin{align*} \alpha(1-2\beta_2)-\frac{2C}{K}=\frac12-2\delta+\delta^2. \end{align*} Then, there exist small positive constants $\eta_1=\delta$ and $\eta_2=\frac{\delta}{8}$ such that \begin{align} \label{ESTTT} \frac{d}{dt}(\|\tilde{u}\|_{X_{\tau,\alpha}}+K\|\tilde{b}\|_{X_{\tau,\alpha}})+\frac{1/4-\eta_1}{\langle t\rangle}\left(\|\tilde{u}\|_{X_{\tau,\alpha}}+ K\|\tilde{b}\|_{X_{\tau,\alpha}}\right)+\frac{\eta_2}{\langle t\rangle^{1/2}}(\|\tilde{u}\|_{D_{\tau,\alpha}}+K\|\tilde{b}\|_{D_{\tau,\alpha}})\leq 0. \end{align} It implies that \begin{align} \label{ESTTTT} \frac{d}{dt}(\|\tilde{u}\|_{X_{\tau,\alpha}}+K\|\tilde{b}\|_{X_{\tau,\alpha}})\langle t\rangle^{1/4-\eta_1}&+\frac{1/4-\eta_1}{\langle t\rangle^{3/4+\eta_1}}\left(\|\tilde{u}\|_{X_{\tau,\alpha}}+ K\|\tilde{b}\|_{X_{\tau,\alpha}}\right)\nonumber\\ &+\frac{\eta_2}{\langle t\rangle^{1/4+\eta_1}}(\|\tilde{u}\|_{D_{\tau,\alpha}}+K\|\tilde{b}\|_{D_{\tau,\alpha}})\leq 0. \end{align} As a consequence, \begin{align} \label{ESTTTTL} (\|\tilde{u}\|_{X_{\tau,\alpha}}+K\|\tilde{b}\|_{X_{\tau,\alpha}})\langle t\rangle^{1/4-\eta_1}&+\int_0^t\frac{\eta_2}{\langle s\rangle^{1/4+\eta_1}}(\|\tilde{u}(s)\|_{D_{\tau,\alpha}}+K\|\tilde{b}(s)\|_{D_{\tau,\alpha}})ds\nonumber\\ &\qquad\leq (\|\tilde{u}(0)\|_{X_{\tau,\alpha}}+K\|\tilde{b}(0)\|_{X_{\tau,\alpha}})\leq C(1+K)\varepsilon, \end{align} where we have used (\ref{IIIIE}). Then, by noting that $K=\frac{4C}{\delta}$, one has \begin{align} \label{T1} &\frac{3C_0}{2}(K+1)\int_0^t\langle s\rangle^{-1/4}(\|\tilde{u}(s)\|_{X_{\tau,\alpha}}+\|\tilde{b}(s)\|_{X_{\tau,\alpha}})ds\nonumber\\ =&\frac{3C_0}{2}(\frac{4C}{\delta}+1)\int_0^t\langle s\rangle^{-1/4}(\|\tilde{u}(s)\|_{X_{\tau,\alpha}}+\|\tilde{b}(s)\|_{X_{\tau,\alpha}})ds\nonumber\\ \leq &\frac{3CC_0\varepsilon}{2}(\frac{4C}{\delta}+1)^2\int_0^t\langle s\rangle^{-1/2+\eta_1}ds \leq 3CC_0\varepsilon(\frac{4C}{\delta}+1)^2\langle t\rangle^{1/2+\eta_1}, \end{align} and \begin{align} \label{T2} &\frac{3C_0}{2}(K+1)\int_0^t\langle s\rangle^{1/4}(\|\tilde{u}(s)\|_{D_{\tau,\alpha}}+\|\tilde{b}(s)\|_{D_{\tau,\alpha}})ds\nonumber\\ =&\frac{3C_0}{2}(\frac{4C}{\delta}+1)\int_0^t\langle s\rangle^{1/4}(\|\tilde{u}(s)\|_{D_{\tau,\alpha}}+\|\tilde{b}(s)\|_{D_{\tau,\alpha}})ds\nonumber\\ =&\frac{3C_0}{2}(\frac{4C}{\delta}+1)\frac{8}{\delta}\int_0^t\langle s\rangle^{1/2+\eta_1}\frac{\eta_2}{\langle s\rangle^{1/4+\eta_1}}(\|\tilde{u}(s)\|_{D_{\tau,\alpha}}+K\|\tilde{b}(s)\|_{D_{\tau,\alpha}})ds\nonumber\\ \leq&(\frac{4C}{\delta}+1)^2\frac{12CC_0}{\delta}\langle t\rangle^{1/2+\eta_1}\varepsilon. \end{align} On the other hand, (\ref{ODE}) implies that \begin{align} \label{ODES} \tau(t)^{3/2} =\tau(0)^{3/2}-\frac{3C_0(K+1)}{2}\int_0^t(\langle s\rangle^{-1/4}(\|\tilde{u}\|_{X_{\tau,\alpha}}+\|\tilde{b}\|_{X_{\tau,\alpha}})+\langle s\rangle^{1/4}(\|\tilde{u}\|_{D_{\tau,\alpha}}+\|\tilde{b}\|_{D_{\tau,\alpha}}))ds. \end{align} From (\ref{T1}), (\ref{T2}) and (\ref{ODES}), one has \begin{align*} \tau(t)^{3/2}\geq \tau_0^{3/2}-\max\{3CC_0(\frac{4C}{\delta}+1)^2\langle t\rangle^{1/2+\eta_1}\varepsilon,\quad (\frac{4C}{\delta}+1)^2\frac{12CC_0}{\delta}\langle t\rangle^{1/2+\eta_1}\varepsilon \}, \end{align*} for all $t\geq 0$. 
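To make the choice of $T_\varepsilon$ below transparent, we record the elementary arithmetic behind it; this is only a sketch of the bookkeeping, with $C_1$ and $C_2$ denoting generic positive constants depending only on $C$, $C_0$ and $\tau_0$ (they do not appear in the estimates above). Keeping $\tau(t)\geq \tau_0/4$ amounts to requiring
\begin{align*}
\left(\frac{4C}{\delta}+1\right)^2\frac{12CC_0}{\delta}\,\varepsilon\,\langle t\rangle^{1/2+\eta_1}\leq \tau_0^{3/2}-\left(\frac{\tau_0}{4}\right)^{3/2}.
\end{align*}
With the choice $\delta=\eta_1=\frac{1}{\ln(1/\varepsilon)}$ made below, the prefactor on the left is bounded by $C_1(\ln(1/\varepsilon))^{3}$, so the requirement reduces to
\begin{align*}
\langle t\rangle^{1/2+\eta_1}\leq \frac{C_2}{\varepsilon(\ln(1/\varepsilon))^{3}},\qquad\mbox{that is,}\qquad \langle t\rangle\leq \left(\frac{C_2}{\varepsilon(\ln(1/\varepsilon))^{3}}\right)^{\frac{1}{1/2+\eta_1}},\qquad \frac{1}{1/2+\eta_1}=\frac{2}{1+2\delta}=2-\frac{4}{\ln(1/\varepsilon)+2},
\end{align*}
which is exactly the exponent appearing in (\ref{LS}) below.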
Choose $\delta=\frac{1}{\ln(1/\varepsilon)}$. It is straightforward to show that \begin{align*} \tau(t)\geq \frac{\tau_0}{4} \end{align*} in the time interval $[0, T_\varepsilon]$, where $T_\varepsilon$ satisfies \begin{align} \label{LS} T_\varepsilon= \bar{C}\left(\frac{1}{\varepsilon(\ln(1/\varepsilon))^3}\right)^{2-4/(\ln(1/\varepsilon)+2)}-1. \end{align} This gives the estimate on the lifespan of the solution stated in (\ref{THM3}). \section{The Proof of the Uniqueness Part in Theorem \ref{THM}} Assume there are two solutions $(\tilde{u}_1, \tilde{b}_1)$ and $(\tilde{u}_2, \tilde{b}_2)$ to (\ref{3.4}) with the same initial data $(\tilde{u}_0, \tilde{b}_0)$, which satisfies $\|(\tilde{u}_0, \tilde{b}_0)\|_{X_{2\tau_0,\alpha}}\leq \varepsilon$, and let the tangential radii of analyticity of $(\tilde{u}_1, \tilde{b}_1)$ and $(\tilde{u}_2, \tilde{b}_2)$ be $\tau_1(t)$ and $\tau_2(t)$, respectively. Define $\tau(t)$ by \begin{align} \label{U1} \frac{d(\tau(t))^{3/2}}{dt}+\frac{3C_0(K+1)}{2}&\left(\langle t\rangle^{-1/4}(\|\tilde{u}_1\|_{X_{\tau_1(t),\alpha}}+\|\tilde{b}_1\|_{X_{\tau_1(t),\alpha}})\right.\nonumber\\ &\left.+\langle t\rangle^{1/4}(\|\tilde{u}_1\|_{D_{\tau_1(t),\alpha}}+\|\tilde{b}_1\|_{D_{\tau_1(t),\alpha}})\right)=0, \end{align} with initial data \begin{align} \label{U2} \tau(0)=\frac{\tau_0}{8}. \end{align} By the estimates obtained in Section 2, there exists a time interval $[0, T_0]$ with $T_0\leq T_\varepsilon$ such that \begin{align} \label{U3} \frac{\tau_0}{16}\leq \tau(t)\leq \frac{\tau_0}{8}\leq \frac{\min\{\tau_1,\tau_2\}}{2} \end{align} for all $t\in[0,T_0]$. Set $U=\tilde{u}_1-\tilde{u}_2$ and $B=\tilde{b}_1-\tilde{b}_2$. Then \begin{align} \label{U4} \partial_tU-\partial_y^2U+(u_s+u_1)\partial_xU&+(v_1-v_2)\partial_y\tilde{u}_1-(1+b_1)\partial_xB-(g_1-g_2)\partial_y\tilde{b}_1\nonumber\\ &\qquad\qquad-2\partial_y^2u_s B+(v_1-v_2)\partial_y^2u_s\psi_1+R_{s1}=0, \end{align} and \begin{align} \label{U5} \partial_tB-\partial_y^2B-(1+b_1)\partial_xU-(g_1-g_2)\partial_y\tilde{u}_1&+(u_s+u_1)\partial_xB+(v_1-v_2)\partial_y\tilde{b}_1\nonumber\\ &\qquad-(g_1-g_2)\partial_y^2u_s\psi_1+R_{s2}=0, \end{align} with the source terms $R_{s1}$ and $R_{s2}$ given by \begin{align} \label{U6} R_{s1}=(u_1-u_2)\partial_x\tilde{u}_2+v_2\partial_yU-(b_1-b_2)\partial_x\tilde{b}_2-g_2\partial_yB+v_2\partial_y^2u_s(\psi_1-\psi_2), \end{align} and \begin{align} \label{U7} R_{s2}=-(b_1-b_2)\partial_x\tilde{u}_2-g_2\partial_yU+(u_1-u_2)\partial_x\tilde{b}_2+v_2\partial_yB-g_2\partial_y^2u_s(\psi_1-\psi_2). \end{align} Note that the initial data and the boundary conditions are \begin{align} \label{DEI} U(t,x,y)|_{t=0}=0,\qquad B(t, x,y)|_{t=0}=0, \end{align} and \begin{align} \label{DE} \left\{ \begin{array}{ll} U|_{y=0}=0,\\ U|_{y=\infty}=0, \end{array} \right. \qquad\hbox{and}\qquad \left\{ \begin{array}{ll} \partial_yB|_{y=0}=0,\\ B|_{y=\infty}=0. \end{array} \right.
\end{align} Similar to Section 2, we have \begin{align} \label{U8} &\frac{d}{dt}\|U\|_{X_{\tau,\alpha}}+\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_\alpha\partial_y\partial_x^m U\|^2_{L^2}}{\|\theta_\alpha\partial_x^m U\|_{L^2}}-\frac{\alpha}{2\langle t\rangle}\|U\|_{X_{\tau,\alpha}}-\frac{C}{\langle t\rangle}\|B\|_{X_{\tau,\alpha}}\\ \leq&\dot{\tau}(t)\|U\|_{Y_{\tau,\alpha}}+\frac{C_0}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}_1\|_{X_{\tau(t),\alpha}}+\|\tilde{b}_1\|_{X_{\tau(t),\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}_1\|_{D_{\tau(t),\alpha}}+\|\tilde{b}_1\|_{D_{\tau(t),\alpha}})\right)\nonumber\\ &\times (\|U\|_{Y_{\tau,\alpha}}+\|B\|_{Y_{\tau,\alpha}})\nonumber\\ &+\frac{C_0}{(\tau(t))^{1/2}}(\|\tilde{u}_2\|_{Y_{\tau,\alpha}}+\|\tilde{b}_2\|_{Y_{\tau,\alpha}}) (\langle t\rangle^{-1/4}(\|U\|_{X_{\tau,\alpha}}+ \|B\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|U\|_{D_{\tau,\alpha}}+\|B\|_{D_{\tau,\alpha}}))\nonumber, \end{align} and \begin{align} \label{U9} &\frac{d}{dt}\|B\|_{X_{\tau,\alpha}}+\sum_{m\geq 0}\tau^mM_m\frac{\|\theta_\alpha\partial_y\partial_x^m B\|^2_{L^2}}{\|\theta_\alpha\partial_x^m B\|_{L^2}}-\frac{\alpha}{2\langle t\rangle}\|B\|_{X_{\tau,\alpha}}\\ \leq&\dot{\tau}(t)\|B\|_{Y_{\tau,\alpha}}+\frac{C_0}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}_1\|_{X_{\tau(t),\alpha}}+\|\tilde{b}_1\|_{X_{\tau(t),\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}_1\|_{D_{\tau(t),\alpha}}+\|\tilde{b}_1\|_{D_{\tau(t),\alpha}})\right)\nonumber\\ &\times (\|U\|_{Y_{\tau,\alpha}}+\|B\|_{Y_{\tau,\alpha}})\nonumber\\ &+\frac{C_0}{(\tau(t))^{1/2}}(\|\tilde{u}_2\|_{Y_{\tau,\alpha}}+\|\tilde{b}_2\|_{Y_{\tau,\alpha}}) (\langle t\rangle^{-1/4}(\|U\|_{X_{\tau,\alpha}}+ \|B\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|U\|_{D_{\tau,\alpha}}+\|B\|_{D_{\tau,\alpha}}))\nonumber. \end{align} Then, we have \begin{align} \label{U10} &\frac{d}{dt}(\|U\|_{X_{\tau,\alpha}}+K\|B\|_{X_{\tau,\alpha}})+\sum_{m\geq 0}\tau^mM_m\left(\frac{\|\theta_\alpha\partial_y\partial_x^m U\|^2_{L^2}}{\|\theta_\alpha\partial_x^m U\|_{L^2}}+K\frac{\|\theta_\alpha\partial_y\partial_x^m B\|^2_{L^2}}{\|\theta_\alpha\partial_x^m B\|_{L^2}}\right)\nonumber\\ &-\frac{\alpha}{2\langle t\rangle}\|U\|_{X_{\tau,\alpha}}-\frac{2C+K\alpha}{2\langle t\rangle}\|B\|_{X_{\tau,\alpha}}\nonumber\\ \leq&\left(\dot{\tau}(t)+\frac{C_0(1+K)}{(\tau(t))^{1/2}}\left(\langle t\rangle^{-1/4}(\|\tilde{u}_1\|_{X_{\tau(t),\alpha}}+\|\tilde{b}_1\|_{X_{\tau(t),\alpha}})+\langle t\rangle^{1/4}(\|\tilde{u}_1\|_{D_{\tau(t),\alpha}}+\|\tilde{b}_1\|_{D_{\tau(t),\alpha}})\right)\right.\nonumber\\ &\left.\times(\|U\|_{Y_{\tau,\alpha}}+K\|B\|_{Y_{\tau,\alpha}})\right)\\ &+\frac{C_0(1+K)}{(\tau(t))^{1/2}}(\|\tilde{u}_2\|_{Y_{\tau,\alpha}}+\|\tilde{b}_2\|_{Y_{\tau,\alpha}}) (\langle t\rangle^{-1/4}(\|U\|_{X_{\tau,\alpha}}+ \|B\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|U\|_{D_{\tau,\alpha}}+\|B\|_{D_{\tau,\alpha}}))\nonumber. \end{align} From (\ref{U1}), one has \begin{align} \label{U11} \dot{\tau}(t)+\frac{C_0(K+1)}{(\tau(t))^{1/2}}&\left(\langle t\rangle^{-1/4}(\|\tilde{u}_1\|_{X_{\tau,\alpha}}+ \|\tilde{b}_1\|_{X_{\tau,\alpha}})\right.\nonumber\\ &\left.+\langle t\rangle^{1/4}(\|\tilde{u}_1\|_{D_{\tau,\alpha}}+\|\tilde{b}_1\|_{D_{\tau,\alpha}})\right)(\|U\|_{Y_{\tau,\alpha}}+\|B\|_{Y_{\tau,\alpha}}) \leq 0, \end{align} because $\tau(t)\leq \tau_1(t)$ and the norms $X_{\tau, \alpha}$ and $D_{\tau, \alpha}$ are increasing in $\tau$.
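For later use we record the elementary inequality behind (\ref{U13}) and (\ref{U14}) below. Assuming, as in Section 2, that the $Y_{\tau,\alpha}$-norm carries the weight $m\tau^{m-1}M_m$ (that convention is not restated here), the bound $m\leq 2^m$ gives
\begin{align*}
\|f\|_{Y_{\tau,\alpha}}=\sum_{m\geq 1}m\tau^{m-1}M_m\|\theta_\alpha\partial_x^mf\|_{L^2}\leq \frac{1}{\tau}\sum_{m\geq 1}(2\tau)^mM_m\|\theta_\alpha\partial_x^mf\|_{L^2}\leq \frac{1}{\tau}\|f\|_{X_{2\tau,\alpha}}.
\end{align*}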
By the inequalities (\ref{ESTU}), (\ref{ESTB}) and (\ref{U11}), one has \begin{align} \label{U12} &\frac{d}{dt}(\|U\|_{X_{\tau,\alpha}}+K\|B\|_{X_{\tau,\alpha}})+\frac{\alpha(1-2\beta_1)}{2\langle t\rangle}\|U\|_{X_{\tau,\alpha}} +\frac{\alpha(1-2\beta_2)-\frac{2C}{K}}{2\langle t\rangle}K\|B\|_{X_{\tau,\alpha}}\nonumber\\ &+\frac{\alpha^{1/2}\beta_1}{2\langle t\rangle^{1/2}}\|U\|_{D_{\tau,\alpha}}+\frac{\alpha^{1/2}\beta_2}{2\langle t\rangle^{1/2}}K\|B\|_{D_{\tau,\alpha}}\\ \leq &\frac{C_0(1+K)}{(\tau(t))^{1/2}}(\|\tilde{u}_2\|_{Y_{\tau,\alpha}}+\|\tilde{b}_2\|_{Y_{\tau,\alpha}}) (\langle t\rangle^{-1/4}(\|U\|_{X_{\tau,\alpha}}+ \|B\|_{X_{\tau,\alpha}})+\langle t\rangle^{1/4}(\|U\|_{D_{\tau,\alpha}}+\|B\|_{D_{\tau,\alpha}}))\nonumber, \end{align} for $\beta_1, \beta_2\in (0, 1/2)$. Since \begin{align} \label{U13} \|\tilde{u}_2\|_{Y_{\tau,\alpha}}\leq \frac{1}{\tau}\|\tilde{u}_2\|_{X_{2\tau,\alpha}}\leq \frac{1}{\tau}\|\tilde{u}_2\|_{X_{\tau_2,\alpha}} \leq \frac{C(1+K)}{\tau}\varepsilon\langle t\rangle^{-1/4+\eta_1} \end{align} and \begin{align} \label{U14} \|\tilde{b}_2\|_{Y_{\tau,\alpha}}\leq \frac{1}{\tau}\|\tilde{b}_2\|_{X_{2\tau,\alpha}}\leq \frac{1}{\tau}\|\tilde{b}_2\|_{X_{\tau_2,\alpha}} \leq \frac{C(1+K)}{\tau}\varepsilon\langle t\rangle^{-1/4+\eta_1}, \end{align} we have \begin{align} \label{U15} \frac{C_0(1+K)}{(\tau(t))^{1/2}}(\|\tilde{u}_2\|_{Y_{\tau,\alpha}}+\|\tilde{b}_2\|_{Y_{\tau,\alpha}})\leq \frac{2(1+K)^2CC_0\varepsilon}{(\tau(t))^{3/2}\langle t\rangle^{1/4-\eta_1}}. \end{align} Notice that $t\in [0,T_\varepsilon]$ with $T_\varepsilon=\varepsilon^{-2+\delta_0}$, where $\delta_0$ is a fixed small positive constant. As in Section 3, we can choose $\alpha=1/2-\delta, \beta_1=\beta_2=\frac{\delta}{2}, K=\frac{4C}{\delta}$ and $\delta=1/\ln(1/\varepsilon)$, then $\eta_1$ can be chosen to be $\delta$. Let $\varepsilon$ be suitably small so that \begin{align*} \frac{\alpha(1-2\beta_1)}{2}>\frac{2(1+K)^2CC_0\varepsilon\langle t\rangle^{1/2+\eta_1}}{(\tau(t))^{3/2}},\qquad \frac{\alpha(1-2\beta_2)-2C/K}{2}>\frac{2(1+K)^2CC_0\varepsilon\langle t\rangle^{1/2+\eta_1}}{(\tau(t))^{3/2}K}, \end{align*} and \begin{align*} \frac{\alpha^{1/2}\beta_1}{2}>\frac{2(1+K)^2CC_0\varepsilon\langle t\rangle^{1/2+\eta_1}}{(\tau(t))^{3/2}},\qquad \frac{\alpha^{1/2}\beta_2}{2}>\frac{2(1+K)^2CC_0\varepsilon\langle t\rangle^{1/2+\eta_1}}{(\tau(t))^{3/2}K}. \end{align*} Then (\ref{U12}) and (\ref{U15}) imply that \begin{align} \label{U16} \frac{d}{dt}(\|U\|_{X_{\tau,\alpha}}+K\|B\|_{X_{\tau,\alpha}})+\eta_3(\|U\|_{X_{\tau,\alpha}} +K\|B\|_{X_{\tau,\alpha}})\leq 0 \end{align} for suitably small $\eta_3>0$ and any $t\in [0,T_\varepsilon]$. This implies the uniqueness of the solution to (\ref{3.4}) in the time interval $[0, T_\varepsilon]$.\\ \noindent{\bf Acknowledgement:} Feng Xie's research was supported by National Natural Science Foundation of China 11571231, the China Scholarship Council and Shanghai Jiao Tong University SMC(A). Tong Yang's research was supported by internal research funding of City University of Hong Kong, 7004847.
\section{Introduction} This paper studies extended irreducible Goppa codes. Our interest in Goppa codes stems from the fact they are of practical value as they form the backbone of the McEliece cryptosystem. In the McEliece cryptosystem, one chooses a random Goppa code as a key hence it is important that we know the number of Goppa codes for any given set of parameters. This will help in the assessment of how secure the McEliece cryptosystem is against an enumerative attack. An enumerative attack on the McEliece cryptosystem finds all Goppa codes for the given set of parameters and tests their equivalence with the public code. Research has clearly shown that many Goppa codes become equivalent when extended by a parity check (see \cite{Musukwa} and \cite{Ryan4}) and so it has been suggested that an enumerative attack can be mounted through extended Goppa codes. In this paper we obtain an upper bound on the number of extended irreducible $q$-ary Goppa codes of degree $r$ and length $q^n+1$, where $q=p^t$ and $n$ and $r>2$ are prime numbers. Methods similar to the ones used in the present paper have been used in \cite{Musukwa} and \cite{Ryan4} to count the number of extended irreducible Goppa codes of length $2^n+1$ of degree $2^m$ and degree 4, respectively. However, in this work, we obtain our count by exploiting the action of the projective linear group on the set of elements of degree $r$. \section{Preliminaries} We begin this section by defining an irreducible Goppa code. \subsection{Irreducible Goppa codes} \begin{definition} Let $n$ be a positive integer, $q$ be a power of a prime number and $g(z)\in \mathbb F_{q^n}[z]$ be irreducible of degree $r$. Let $L=\mathbb F_{q^n}=\{\zeta_i: 0\leq i \leq q^n-1 \}$. Then an irreducible Goppa code $\Gamma(L,g)$ is defined as the set of all vectors $\underline{c}=(c_0,c_1,\ldots,c_{q^n-1})$ with components in $\mathbb F_q$ which satisfy the condition \begin{equation} \sum_{i=0}^{q^n-1}\frac{c_i}{z-\zeta_i}\equiv 0~ \mbox{mod}~g(z). \end{equation} \end{definition} The polynomial $g(z)$ is called the Goppa polynomial. Since $g(z)$ is irreducible and of degree $r$ over $\mathbb F_{q^n}$, $g(z)$ does not have any root in $L$ and the code is called an irreducible Goppa code of degree $r$. In this paper $g(z)$ is always irreducible of degree $r$ over $\mathbb F_{q^n}$.\\ It can be shown, see \cite{Chen}, that if $\alpha$ is any root of the Goppa polynomial $g(z)$ then $\Gamma(L, g)$ is completely described by any root $\alpha$ of $g(z)$ and a parity check matrix $\bf{H}(\alpha)$ is given by \begin{equation} \bf{H}(\alpha)=\left(\frac{1}{\alpha-\zeta_{0}}\frac{1}{\alpha-\zeta_{1}}\cdots \frac{1}{\alpha-\zeta_{q^n-1}}\right) \end{equation} where $L=\mathbb F_{q^n}=\{\zeta_i: 0\leq i \leq q^n-1 \}$. This code is usually denoted $C(\alpha)$. \subsection{Extended Irreducible Goppa codes} Next we give the definition of an extended irreducible Goppa code. \begin{definition} Let $\Gamma(L,g)$ be an irreducible Goppa code of length $q^{n}$. Then the extended code $\overline{\Gamma(L,g)}$ is defined by $\overline{\Gamma(L,g)}=\{(c_{0},c_{1},...,c_{q^{n}}): (c_{0},c_{1},...,c_{q^{n}-1})\in \Gamma(L,g)\hspace{0.2cm} \mbox{and}~ \sum_{i=0}^{q^{n}}c_{i}=0\}$. \end{definition} \subsection{Enumeration of Matrices of a Given Order in $GL(2,q^n)$}\label{Matriculation} In this section we obtain an enumeration of matrices of a given order $k$ in $GL(2,q^n)$, where $GL(2,q^n)$ is the group of $2\times 2$-invertible matrices over $\mathbb F_{q^n}$. 
Invertible matrices of a given order will be a powerful tool that will be used in the enumeration of extended irreducible Goppa codes. We first review concepts in matrix algebra. We know that a $2\times 2$ matrix $A$ is conjugate (or similar) to a $2\times 2$ matrix $B$ if there exists a non-singular $2\times 2$ matrix $P$ such that $A=P^{-1}BP$. It is well known that the conjugacy of square matrices defines an equivalence relation on and partitions $GL(2,q^n)$ into disjoint sets of equivalence classes called conjugacy classes. We also know that conjugate matrices have the same order. Thus the number of matrices in $GL(2,q^n)$ of order $k$ is given by the product of the number of conjugacy classes containing elements of order $k$ and the number of elements in each conjugacy class. Now to find the number of conjugacy classes in $GL(2,q^n)$ we use the fact that all matrices in a given conjugacy class have the same minimal polynomial. Since conjugacy classes partition $GL(2,q^n)$ it follows that the number of distinct minimal polynomials will give us the number of conjugacy classes. We explain this below. \begin{definition} Let $A\in GL(2,q^n)$. Then the characteristic polynomial of a matrix $A$ is defined by $\chi_A(x)=det(A-xI_2)$ where $I_2=\left( \begin{array}{cc} 1&0\\ 0&1 \end{array}\right)$ and $det(B)$ is the determinant function. \end{definition} \begin{definition} We say that $A\in GL(2,q^n)$ is a matrix root of the polynomial\\ $f(x)=a_kx^k+\cdots+a_0\in \mathbb F_{q^n}[x]$ if $$f(A)=a_kA^k+\cdots+a_0I_2=0,$$ where $A^i$ is the ith power of $A$ under matrix multiplication. \end{definition} \begin{definition} The minimal polynomial of $A\in GL(2,q^n)$ denoted $m_A(x)$ is the monic polynomial $m_A(x)\in \mathbb F_{q^n}[x]$ of least degree having $A$ as a matrix root. \end{definition} By the Cayley-Hamilton Theorem, $m_A(x)$ is a factor of $\chi_A(x)$. Since elements in a conjugacy class have the same order and minimal polynomial, minimal polynomials are in a one-to-one correspondence with conjugacy classes in $GL(2,q^n)$. Now, the matrices $A$ are of size $2\times 2$ so $\chi_A(x)$ is a quadratic polynomial, as such $m_A(x)$ is either linear, or a product of two like or unlike linear factors or an irreducible monic quadratic polynomial, see \cite{Alperin}. In this work we are interested in matrices whose minimal polynomial is quadratic and irreducible over $\mathbb F_{q^n}$. The following lemma gives the relationship between such minimal polynomials and conjugacy classes of elements of $GL(2,q^n)$. \begin{lemma}\label{lemma_T4} Let $A\in GL(2,q^n)$. If $m_A(x)=x^2-\xi x-\zeta$, where $\xi,\zeta \in \mathbb F_{q^n}$, is irreducible then $A$ is conjugate with $\left( \begin{array}{cc} 0&1\\ \zeta&\xi \end{array}\right)$ and there are $\frac{q^n(q^n-1)}{2}$ conjugacy classes of length $q^n(q^n-1)$ each. {\it Proof.} See \cite{Alperin} and \cite{Basheer}. \end{lemma} Now to find the number of matrices of order $k$ in $GL(2,q^n)$ we exploit the fact that if $A\in GL(2,q^n)$ has finite order $k$, then its minimal polynomial $m_A(x)$ divides $x^k-1 \in \mathbb F_{q^n}[x]$, see \cite{Koo}. So to find all possible minimal polynomials we consider the factorization of $x^k-1$ over $\mathbb F_{q^n}$ which can be found in \cite{Lidl} and is stated below. \begin{theorem}[Lidl and Niederreiter, 1983]{\cite{Lidl}}\label{Lidl} Let $k$ be a positive integer and $\mathbb F_{q^n}=\mathbb F_{p^{nt}}$. 
If $p\nmid k$, then $$x^k-1=\prod_{s\mid k} Q_s(x),$$ where $Q_s(x)$ is the $k-\mbox{th}$ cyclotomic polynomial over $\mathbb F_{q^n}$. Furthermore, $Q_k(x)$ factors into $\phi(k)/d$ distinct monic irreducible polynomials of the same degree $d$, where $d$ is the least positive integer such that $q^{nd}\equiv 1 \pmod{k}$. \end{theorem} Now, if $k$ is prime then $x^k-1=(x-1)Q_k(x)$. It is easy to see that if there are irreducible quadratic factors in the factorization of $x^k-1$ then they will occur in the factorization of $Q_k(x)$. In the following theorem we find the number of matrices $A$ of order $k$ in $GL(2,q^n)$ whose minimal polynomial $m_A(x)$ is an irreducible quadratic polynomial over $\mathbb F_{q^n}$. \begin{theorem}\label{Thm_Yanga} Let $k$ be a positive integer such that $(q^n,k)=1$ and $k \mid (q^{n}+1)$ but $k \nmid (q^n-1)$. Then the number of matrices $A$ of order $k$ in $GL(2,q^n)$, where $m_A(x)$ is an irreducible quadratic polynomial over $\mathbb F_{q^n}$, is $\frac{\phi(k)q^n(q^n-1)}{2}$. \\ {\rm {\it Proof.} Let $\rho$ be the number of irreducible quadratic polynomials in the factorization of the cyclotomic polynomial $Q_k(x)$ where $(q^n,k)=1$ and $k \mid (q^{n}+1)$ but $k \nmid (q^n-1)$. By Theorem \ref{Lidl}, the cyclotomic polynomial $Q_k(x)$ factors into $\rho=\frac{\phi(k)}{2}$ distinct irreducible quadratic factors. Since each minimal polynomial gives rise to a conjugacy class, the number of conjugacy classes containing elements of order $k$ is equal to $\rho$. From Lemma \ref{lemma_T4}, we know that there are $q^n(q^n-1)$ matrices in a conjugacy class in this case. Thus the number of matrices in $GL(2,q^n)$ of order $k$ whose minimal polynomial $m_A(x)$ is an irreducible quadratic polynomial over $\mathbb F_{q^n}$ is $\frac{\phi(k)q^n(q^n-1)}{2}$.} \end{theorem} \begin{example}\label{Ex_1} Let's take $q=3$, $n=3$ and $r=7$. Suppose we want to find the number of matrices $A$ of order $7$ in $GL(2,3^3)$, where $m_A(x)$ is an irreducible quadratic polynomial over $\mathbb F_{3^3}$. Here $k=7$ and $7|(3^3+1)$. Clearly, $\rho=3$ and $\mu=3^3\times (3^3-1)=702$. So there are $3\times 702=2,106$ matrices in $GL(2,3^3)$ of the required form of order $7$. \end{example} \subsection{Tools for Counting Extended Irreducible Goppa codes} \subsubsection{The set $\mathbb S=\mathbb S(n,r)$} An irreducible Goppa code can be defined by any root of its Goppa polynomial. As such the set of all roots of such polynomials is important and we make the following definition. \begin{definition} The set $\mathbb S=\mathbb S(n,r)$ is the set of all elements in $\mathbb F_{q^{nr}}$ of degree $r$ over $\mathbb F_{q^{n}} $. \end{definition} \subsubsection{Maps on $\mathbb S$}\label{maps_on_S} We define the following maps on $\mathbb S$. The action of the groups arising from these maps will help us to count the number of irreducible Goppa codes and their extended versions. 
\begin{enumerate} \item $\sigma^i:\alpha \to \alpha^{q^i}$ where $\sigma$ denotes the Frobenius automorphism of $\mathbb F_{q^{nr}}$ leaving $\mathbb F_q$ fixed and $0 \leq i<nr$.\label{uyu1} \item $\pi_A:\alpha\to a\alpha+b$ where $A=\left( \begin{array}{cc} a&b\\ 0&1 \end{array}\right) \in GL(2,q^n)$.\label{uyu2} \item $\pi_{B}:\alpha \to \frac{a\alpha+b}{c\alpha+d}$, where $B=\left( \begin{array}{cc} a&b\\ c&d \end{array}\right) \in GL(2,q^n)$.\label{uyu3} \end{enumerate} \begin{remark} The composition of maps \ref{uyu1} and \ref{uyu2} sends irreducible Goppa codes into equivalent irreducible Goppa codes and \ref{uyu1} and \ref{uyu3} sends extended irreducible Goppa codes into equivalent extended irreducible Goppa codes (see \cite{Berger}). \end{remark} \subsubsection{Groups Acting on $\mathbb S$} \begin{definition} Let $G$ denote the set of all maps $\{\sigma^{i}:1\leq i\leq nr\}$. $G$ is a group under the composition of mappings. It is the group of Frobenius automorphisms. It is shown in \cite{Ryan3} that $G$ acts on $\mathbb S$. \end{definition} \begin{definition} Let $F$ denote the set of all maps $\left\lbrace\pi_{A}:A=\left( \begin{array}{cc} a&b\\ 0&1 \end{array}\right) \in GL(2,q^n) \right\rbrace$. $F$ is a group under the composition of mappings. It is the affine group of linear transformations. \end{definition} \begin{definition} We define the set of all maps $\left\lbrace\pi_{B}:B=\left( \begin{array}{cc} a&b\\ c&d \end{array}\right) \in GL(2,q^n)\right\rbrace$. This set of maps is a group under the composition of mappings and is isomorphic to the projective linear group which is denoted by $PGL(2,q^n)$. \end{definition} Next we show that $PGL(2,q^n)$ acts on $\mathbb S$. Denote by $[A]$ the image of $A=\left( \begin{array}{cc} a&b\\ c&d \end{array}\right) \in GL(2,q^n)$ in $PGL(2,q^n)$. Then, for $\alpha \in \mathbb S$ and $[A]\in PGL(2,q^n)$ we define the map $[A](\alpha)=\frac{a\alpha+b}{c\alpha+d}$. Denote by $I_2$ the $2 \times 2$ identity matrix $\left( \begin{array}{cc} 1&0\\ 0&1 \end{array}\right) $. Then we have $[I_2](\alpha)=\frac{1\alpha+0}{0\alpha+1}=\alpha$.\\ Also, suppose $B=\left( \begin{array}{cc} a_1&b_1\\ a_2&b_2 \end{array}\right)$ and $C=\left( \begin{array}{cc} a_3&b_3\\ a_4&b_4 \end{array}\right)$ then \begin{align*} [B]([C](\alpha))=&[B]\left(\frac{a_3\alpha+b_3}{a_4\alpha+b_4}\right)\\ =&\frac{a_1\left(\frac{a_3\alpha+b_3}{a_4\alpha+b_4}\right)+b_1}{a_2\left(\frac{a_3\alpha+b_3}{a_4\alpha+b_4}\right)+b_2}\\ =&\frac{(a_1a_3+a_4b_1)\alpha+a_1b_3+b_1b_4}{(a_2a_3+a_4b_2)\alpha+a_2b_3+b_2b_4}\\ =&[BC](\alpha). \end{align*} Thus, the map $[A](\alpha)=\frac{a\alpha+b}{c\alpha+d}$ defines a group action of $PGL(2,q^n)$ on $\mathbb S$. This can also be found in \cite{Stich}. \subsubsection{Actions of $F$, $PGL(2,q^n)$ and $G$}\label{Actions} We first consider the action of the affine group $F$ on $\mathbb S$. For each $\alpha \in \mathbb S$, the action of $F$ on $\mathbb S$ induces orbits denoted $A(\alpha)$ where $A(\alpha)=\{a\alpha+b:a \neq 0, b \in \mathbb F_{q^n}\}$, and called the affine set containing $\alpha$. We denote the set of all affine sets, $ \{A(\alpha):\alpha \in \mathbb S \}$, by $\mathbb A$. Since $|A(\alpha)|=q^n(q^n-1)$ then $|\mathbb A|=|\mathbb S|/q^n(q^n-1)$. It can be shown that $G$ acts on the set $\mathbb A$, see \cite{Ryan2}. We will then consider the action of $G$ on $\mathbb A$ to obtain orbits in $\mathbb S$ of $FG$. 
The number of orbits in $\mathbb S$ under $FG$ will then give us an upper bound on the number of irreducible Goppa codes. Next we consider the action of the group $E=PGL(2,q^n)$ on $\mathbb S$. The action of $PGL(2,q^n)$ on $\mathbb S$ induces orbits denoted by $O(\alpha)$ where $O(\alpha)=\{\frac{a\alpha +b}{c\alpha +d}: a,b,c,d\in \mathbb{F}_{q^{n}}, ad-bc \neq 0\}$. We will refer to $O(\alpha)$ as a projective linear set. Next we calculate the cardinality of $O(\alpha)$. \begin{theorem} For any $\alpha \in \mathbb S$, $|O(\alpha)|=q^{3n}-q^n$.\label{Card_O}\\ {\it Proof.} We know from the orbit-stabilizer theorem that $|O(\alpha)|=|E|/|E_\alpha|$ where $E_\alpha$ is the stabilizer of $\alpha$ in $E=PGL(2,q^n)$. Observe that $E_\alpha$ is trivial, for if some $[B]\in E$, where $B=\left( \begin{array}{cc} a&b\\ c&d \end{array}\right) $, fixes $\alpha \in \mathbb S$ then $[B](\alpha)=\alpha$. That is, $\frac{a\alpha+b}{c\alpha+d}=\alpha$. So, $a\alpha+b=c\alpha^2+d\alpha$, this implies $c\alpha^2+(d-a)\alpha-b=0$. Since the minimal polynomial of $\alpha$ over $\mathbb F_{q^n}$ is of degree $r \geq 3$, we conclude that $b=c=0$ and $a=d$, hence $[B]=[I_2]$. As such, $|O(\alpha)|=|E|=q^{3n}-q^n$. \end{theorem} We denote the set of all projective linear sets in $\mathbb{S}$ under the action of $PGL(2,q^n)$ by {\rm $\mathbb{O}$. That is, $\mathbb{O}=\{O(\alpha): \alpha \in \mathbb{S}\}$. Observe that $\mathbb{O}$ partitions the set $\mathbb{S}$} and that $G$ acts on the set $\mathbb{O}$ \cite{Ryan3}. It is shown in \cite{Ryan3} that each projective linear set $O(\alpha)$ in $\mathbb{O}$ can be partitioned into $q^{n}+1$ affine sets. See the theorem below. \begin{theorem}\label{O} For $\alpha \in \mathbb{S}, O(\alpha)=A(\alpha)\cup A(\frac{1}{\alpha})\cup A(\frac{1}{\alpha + 1})\cup A(\frac{1}{\alpha + \xi _{1}}) \cup A(\frac{1}{\alpha + \xi _{2}})\cup \dots \cup A(\frac{1}{\alpha + \xi _{q^{n}-2}})$ where $\mathbb{F}_{q^{n}}=\{0,1,\xi_{1}, \xi_{2},\ldots, \xi_{q^{n}-2}\}$. \end{theorem} Observe that the sets $\mathbb{O}$ and $\mathbb{A}$ are different. $\mathbb O$ and $\mathbb{A}$ are both partitions of $\mathbb{S}$ but $|\mathbb{A}|=(q^{n}+1) \times |\mathbb{O}|$. We will use the actions of $PGL(2,q^n)$ and $G$ on $\mathbb S$ to find an upper bound on the number of extended irreducible Goppa codes. Firstly, we will apply the action of $PGL(2,q^n)$ on $\mathbb S$ to obtain projective linear sets $O(\alpha)$. Then we will consider the action of $G$ on $\mathbb O$. The number of orbits in $\mathbb O$ under the action of $G$ will give an upper bound on the number of extended irreducible Goppa codes. To find the number of orbits we will use the Cauchy-Frobenius theorem which is stated below. \begin{theorem}\label{Cauchy-Frobenius} Let $G$ be a finite group acting on a set $X$. For any $g\in G$, let $X(g)$ denote the set of elements of $X$ fixed by $g$. Then the number of orbits in $X$ under the action of $G$ is $\frac{1}{|G|}\sum_{g\in G}|X(g)|$. \end{theorem} \subsubsection{Factorization of the polynomial $F_s(x)=cx^{q^s+1}+dx^{q^s}-ax-b$}\label{F_s} Another tool we shall use when it comes to counting extended irreducible Goppa codes is counting how many roots of the polynomial $F_s(x)=cx^{q^s+1}+dx^{q^s}-ax-b\in \mathbb F_{q^n}[x]$ where $ad-bc \neq 0$ lie in $\mathbb S$. We will do this by counting the number of irreducible polynomials of degree $r$ in the factorization of $F_s(x)=cx^{q^s+1}+dx^{q^s}-ax-b$. 
This problem was considered in \cite{Stich} for $F_s(x)=cx^{q^s+1}+dx^{q^s}-ax-b\in \mathbb F_{q}[x]$, where $ad-bc \neq 0$ and $s\geq 1$. \begin{theorem}[Stichtenoth, H., and Topuzo$\breve{\mbox{g}}$lu, 2012]{\cite{Stich}}\label{Stich_Bho} Let $A=\left( \begin{array}{cc} a&b\\ c&d \end{array}\right)\in GL(2,q)$ which is not a multiple of the identity matrix. Let the order of $[A]$ in $PGL(2,q)$ be $D$. Then the irreducible factors of $F_s(x)=cx^{q^s+1}+dx^{q^s}-ax-b\in \mathbb F_{q}[x]$, $s\geq 1$ are as follows: \begin{enumerate} \item irreducible factors of degree $Ds$, \item irreducible factors of degree $Dk$ with $k<s$, $s=km$ and $\mbox{gcd}(m,D)=1$, \item irreducible factors of degree $\leq 2$. \end{enumerate} \end{theorem} \section{Extended Irreducible Goppa codes of degree $r$ and length $q^n+1$, where $n=r$ is prime} In this section we obtain an upper bound on the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$, where $n=r$ is prime. We first obtain $\mathbb S(n,n)$. \subsection{$\mathbb S(n,r)$ where $n=r$ is prime} We use a lattice of subfields, as proposed in \cite{Magamba}, to show where elements of $\mathbb S(n,n)$ lie and find $|\mathbb S(n,n)|$. Figure~\ref{242} shows a lattice of subfields corresponding to $q$ and $n=r$. \begin{figure}[htb] \centering \begin{tikzpicture}[node distance=1.5cm] \node (F1) {$\mathbb F_q$}; \node (Fn) [above of=F1]{$\mathbb F_{q^{n}}$}; \node (Fnr) [above of=Fn]{$\mathbb F_{q^{n^2}}$}; \draw (F1) -- (Fn); \draw (Fn) -- (Fnr); \end{tikzpicture} \caption{Lattice of subfields of $\mathbb F_{q^{n^2}}$} \label{242} \end{figure} \begin{remark} The number of elements of degree $n$ over $\mathbb F_{q^n}$ is $|\mathbb S(n,n)|=q^{n^2}-q^{n}.$ \end{remark} \subsection{Action of $G$ on $\mathbb A$} In this section we find the number of affine sets fixed by the subgroups $\langle\sigma\rangle$, $\langle\sigma^{n}\rangle$ and $\langle\sigma^{n^2}\rangle$ of $G$. It is easy to see that the trivial subgroup $\langle\sigma^{n^2}\rangle$, containing the identity, fixes every affine set in $\mathbb A$. By an argument similar to the one in Section III of \cite{Ryan4} corresponding to the actions of the subgroups $\langle\sigma^n\rangle$ and $\langle\sigma\rangle$ we obtain the following result: \begin{theorem}\label{Theorem_Puliya_1} {\rm The number of affine sets fixed under $\langle\sigma^{n}\rangle$ and $\langle\sigma\rangle$ is $$\left\{ \begin{array}{r l} n-1,& \quad \mbox{if}~ n\neq p~\mbox{and}~n\mid q^n-1 \\ 0,& \quad \mbox{if}~ ~ n\neq p~\mbox{and}~n\nmid q^n-1\\ 1,& \quad \mbox{if}~ n=p \end{array} \right..$$ } \end{theorem} \subsection{Action of G on $\mathbb O$} In this section we obtain an upper bound on the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$ where $n=r$ is prime. As already stated, we will do this by considering the action of $G$ on $\mathbb O$ and applying the Cauchy-Frobenius Theorem. We begin by finding the number of projective linear sets $O(\alpha)$ which are in $\mathbb O$. By Theorem \ref{Card_O}, $|O(\alpha)|=q^{3n}-q^n$. We also know that $|\mathbb S(n,n)|=q^{n^2}-q^{n}$. Therefore, there are $\frac{q^{n^2}-q^{n}}{q^{3n}-q^n}$ projective linear sets in $\mathbb O$. The number of orbits in $\mathbb O$ under the action of $G$ will give us an upper bound on the number of extended irreducible Goppa codes. Clearly, the trivial subgroup $\langle\sigma^{n^2}\rangle$, containing the identity, fixes every projective linear set in $\mathbb O$. 
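As a quick numerical sanity check (our own illustration, not needed for the argument), consider $q=2$ and $n=r=5$:
\begin{align*}
|\mathbb O|=\frac{q^{n^2}-q^{n}}{q^{3n}-q^n}=\frac{2^{25}-2^{5}}{2^{15}-2^{5}}=\frac{2^{20}-1}{2^{10}-1}=2^{10}+1=1025,
\end{align*}
so there are $1025$ projective linear sets in this case; this is the quantity that reappears in Example \ref{Ex_2} below.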
We now consider the actions of $\langle\sigma^{n}\rangle$ and $\langle\sigma\rangle$. \subsubsection{Action of $\langle\sigma^{n}\rangle$, a subgroup of order $n$}\label{Action_O_n_n=r} Suppose $O(\alpha)\in \mathbb O$ is fixed under $\langle\sigma^{n}\rangle$. Then $\langle\sigma^{n}\rangle$ acts on $O(\alpha)=A(\alpha)\cup A(\frac{1}{\alpha})\cup A(\frac{1}{\alpha + 1})\cup A(\frac{1}{\alpha + \xi _{1}}) \cup A(\frac{1}{\alpha + \xi _{2}})\cup \dots \cup A(\frac{1}{\alpha + \xi _{q^{n}-2}})$ which can be seen as a set of $q^{n}+1$ affine sets. $\langle\sigma^{n}\rangle$ partitions this set of $q^{n}+1$ affine sets. The possible length of orbits are $1$ and $n$. We will consider two cases: $n=p$ and $n\neq p$. First suppose that $n=p$. Then $q^n+1\equiv 1 \pmod{n}$ since $q=p^t$. By Theorem \ref{Theorem_Puliya_1}, we know that there is only one affine set fixed under $\langle\sigma^{n}\rangle$ so we conclude that there is one projective linear set fixed under $\langle\sigma^{n}\rangle$ in this case. Next suppose that $n\neq p$. We consider two possibilities; $n\mid (q^n-1)$ and $n\nmid (q^n-1)$. Now, if $n\mid (q^n-1)$ then $q^n+1=q^n-1+2\equiv 2 \pmod{n}$. So orbits of length $n$ only are not possible since $n=r>2$. The fact that $q^n+1\equiv 2 \pmod{n}$ implies that a projective linear set fixed under $\langle\sigma^{n}\rangle$ contains $jn+2$ affine sets that are fixed under $\langle\sigma^{n}\rangle$ where $j$ is a non-negative integer. By Theorem \ref{Theorem_Puliya_1}, there are only $n-1$ affine sets fixed under $\langle\sigma^{n}\rangle$. Thus $j =0$. We conclude that there are $\frac{n-1}{2}$ projective linear sets fixed under $\langle\sigma^{n}\rangle$, 2 fixed affine sets in each. Now suppose that $n\nmid (q^n-1)$. By Theorem \ref{Theorem_Puliya_1} there is no affine set fixed in this case. If $O(\alpha)\in \mathbb O$ is fixed under $\langle\sigma^{n}\rangle$ then $\sigma ^{n}(O(\alpha))=O(\alpha)$. So we have $\sigma^{n}(\alpha)=\alpha ^{q^{n}}=[A](\alpha)$ where $A\in GL(2,q^n)$ and $m_A(x)$ is an irreducible quadratic polynomial over $\mathbb F_{q^n}$. When we apply $\sigma^{n}$ to $\alpha$ $n$-times we obtain $$\alpha=\sigma^{n^2}(\alpha)=[A^n](\alpha)=[I_2](\alpha).$$ We conclude that $A^n=I_2$. Thus $A$ is a matrix of order $n$ over $\mathbb{F}_{q^{n}}$. Next we count the number of such matrices. We consider two possibilities; $n\nmid (q^n+1)$ and $n\mid (q^n+1)$. If $n\nmid (q^n+1)$ matrices of order $n$ do not exist hence there is no projective linear set fixed. Now suppose that $n\mid (q^n+1)$. Then $q^{2n}\equiv 1 \pmod{n}$ so, by Theorem \ref{Lidl} with $d=2$, the factorization of the cyclotomic polynomial $Q_n(x)$ over $\mathbb F_{q^n}$ contains $\phi(n)/2=(n-1)/2$ irreducible quadratic factors. Hence, by Theorem \ref{Thm_Yanga}, there are $\frac{q^n(q^n-1)(n-1)}{2}$ matrices of order $n$. For each matrix of order $n$, we have $\alpha^{q^n}=[A](\alpha)$ where $A=\left( \begin{array}{cc} a&b\\ c&d \end{array}\right)\in GL(2,q^n)$ is of order $n$. This gives $\alpha^{q^n}=\frac{a\alpha+b}{c\alpha+d}$. That is, $c\alpha^{q^n+1}+d\alpha^{q^n}-a\alpha-b=0$. So we may assume that $\alpha$ satisfies an equation of the form \begin{equation}\label{Eqn_Main} F_n(x)=cx^{q^{n}+1}+dx^{q^n}-ax-b=0, \end{equation} where the coefficients $a,b,c,d \in \mathbb F_{q^n}$ come from a matrix of order $n$. Next we count the number of roots of polynomials of the form $F_n(x)$ which lie in $\mathbb S$. 
Let $\mathbb S_F$ be the set of roots of all the polynomials $F_n(x)$ which lie in $\mathbb S$. Clearly, $|\mathbb S_F|$ depends on $a,b,c,d \in \mathbb F_{q^n}$ where $ad-bc \neq 0$. We know that the number of matrices in $GL(2,q^n)$ of order $n$ is $\frac{q^n(q^n-1)(n-1)}{2}$. By Theorem \ref{Stich_Bho}, with $s=1$ and $D=n$ we see that $F_n(x)$ factors into polynomials of the same degree $n$. Note that we are taking $s=1$ since $F_n(x)$ is a polynomial over $F_{q^n}$. Thus all $q^n+1$ roots of $F_n(x)$ lie in $\mathbb S_F$. As such $|\mathbb S_F|=\frac{q^n(q^n-1)(q^n+1)(n-1)}{2}$. Now, since every element of a fixed projective linear set is a root of some polynomial of the form $F_n(x)$ then the number of projective linear sets fixed under $\langle\sigma^{n}\rangle$ in this case is $$\frac{|\mathbb S_F|}{|O(\alpha)|}=\frac{q^n(q^n-1)(q^n+1)(n-1)}{2q^n(q^n-1)(q^n+1)}=\frac{n-1}{2}.$$ From the foregoing discussion, we have established the following. \begin{theorem}\label{main_theorem} Let $\rho$ be the number of irreducible quadratic polynomials in the factorization of the cyclotomic polynomial $Q_n(x)$ over $\mathbb F_{q^n}$. Then the number of projective linear sets fixed under $\langle\sigma^{n}\rangle$ where $n\nmid (q^n-1)$ but $n\mid (q^n+1)$ and all roots of $F_n(x)$ lie in $\mathbb S_F$ is $\rho$. \end{theorem} \subsubsection{Action of $\langle\sigma\rangle$, a subgroup of order $n^2$} Suppose $O(\alpha)\in \mathbb O$ is fixed under $\langle\sigma\rangle$. Then $\langle\sigma\rangle$ acts on $O(\alpha)=A(\alpha)\cup A(\frac{1}{\alpha})\cup A(\frac{1}{\alpha + 1})\cup A(\frac{1}{\alpha + \xi _{1}}) \cup A(\frac{1}{\alpha + \xi _{2}})\cup \dots \cup A(\frac{1}{\alpha + \xi _{q^{n}-2}})$ which can be seen as a set of $q^{n}+1$ affine sets. $\langle\sigma\rangle$ partitions this set of $q^{n}+1$ affine sets. The possible length of orbits are $1$, $n$ and $n^2$. We consider the following cases: $n=p$ and $n\neq p$. First suppose that $n=p$. Then $q^n+1\equiv 1 \pmod{n}$ since $q=p^t$. By Theorem \ref{Theorem_Puliya_1}, we know that there is one affine set fixed under $\langle\sigma\rangle$ so we conclude that there is one projective linear set fixed under $\langle\sigma\rangle$. Next suppose that $n\neq p$. We consider two possibilities: $n\mid (q^n-1)$ and $n\nmid (q^n-1)$. Now, if $n\mid (q^n-1)$ then $q^n+1=q^n-1+2\equiv 2 \pmod{n}$. So orbits of length $n$ only are not possible since $n=r>2$. As such there must be at least one orbit of length 1, that is, $O(\alpha)$ must contain an affine set that is fixed under $\langle\sigma^{n}\rangle$. By Theorem \ref{Theorem_Puliya_1}, there are $n-1$ affine sets fixed under $\langle\sigma^{n}\rangle$. We conclude that there are $\frac{n-1}{2}$ projective linear sets fixed under $\langle\sigma\rangle$, 2 fixed affine sets in each. Now suppose that $n\nmid (q^n-1)$. By Theorem \ref{Theorem_Puliya_1}, we know that there is no affine set fixed under $\langle\sigma\rangle$. So orbits of length 1 are not possible. Additionally, if $n\nmid (q^n+1)$ then it is easy to see that there is no projective linear set fixed. If $n\nmid (q^n-1)$ but $n\mid (q^n+1)$ (or $n\mid (q+1)$ since $q^n+1\equiv 0 \pmod{n}$ and $q^n+1\equiv q+1 \pmod{n}$ implies $n\mid (q+1)$) then orbits of length $n$ are possible. By an argument similar to the one in Section \ref{Action_O_n_n=r}, we need to consider how many roots of $F_1(x)=cx^{q+1}+dx^{q}-ax-b=0$ where $a,b,c,d \in \mathbb F_{q^n}$ and $ad-bc \neq 0$ lie in $\mathbb S$. 
We see that all $q+1$ roots of $F_1(x)$ lie in $\mathbb S_F$ and so by Theorem \ref{main_theorem} the number of projective linear sets fixed under $\langle\sigma\rangle$ given that $n\nmid (q-1)$ but $n\mid (q+1)$ is $\rho$ where $\rho=\frac{n-1}{2}$. \subsection{Applying the Cauchy-Frobenius Theorem} Table \ref{Length_2^n+1_n=r} shows the number of projective linear sets which are fixed under the action of various subgroups of $G$. The subgroups are listed in ascending order of the number of elements in the subgroup. We list in column 2 the number of elements in a subgroup which are not already counted in subgroups in the rows above it in the table. \begin{table}[htbp] \caption{Number of Projective Linear Sets Fixed} \label{Length_2^n+1_n=r} \centering \begin{tabular}{cccccc} \hline Subgroup & No. of elements& No. of fixed & No. of fixed & No. of fixed & No. of fixed\\ of $G$ & not in previous & projective linear & projective linear & projective linear & projective linear\\ & subgroup &sets & sets & sets & sets\\ & &if $n\neq p$ &if $n\neq p$ &if $n\neq p$ &if $n=p$\\ & &and $n\mid (q^n-1)$ &$n\nmid (q^n-1)$ &$n\nmid (q^n-1)$ &\\ & & &and $n\mid (q^n+1)$& and $n\nmid (q^n+1)$ & \\ \hline \hline $\langle \sigma^{n^2} \rangle$ & 1 & $\frac{q^{n^2}-q^n}{q^{3n}-q^n}$ &$\frac{q^{n^2}-q^n}{q^{3n}-q^n}$&$\frac{q^{n^2}-q^n}{q^{3n}-q^n}$&$\frac{q^{n^2}-q^n}{q^{3n}-q^n}$ \\ \hline $\langle \sigma^{n} \rangle$ & $n-1$ & $\frac{n-1}{2}$ & $\frac{n-1}{2}$ & 0& 1\\ \hline $\langle \sigma \rangle$ & $n^2-n$ & $\frac{n-1}{2}$ & $\frac{n-1}{2}$& $0$&$1$ \\ \hline \end{tabular} \end{table} By the Cauchy-Frobenius Theorem, we obtain the following result. \begin{theorem}\label{Thm_all_n=r_q^n+1} {\rm The number of orbits in $\mathbb{O}$ under the action of $G$ is\\ $$\left\{ \begin{array}{r l} \frac{1}{n^2}\left[\frac{q^{n(n-1)}-1}{q^{2n}-1}+n^2-1\right],& \quad \mbox{if}~n=p \\ \frac{1}{n^2}\left[\frac{q^{n(n-1)}-1}{q^{2n}-1}+\frac{(n^2-1)(n-1)}{2}\right],& \quad \mbox{if}~n\neq p~\mbox{and}~ n\mid q^n-1 \\ \frac{1}{n^2}\left[\frac{q^{n(n-1)}-1}{q^{2n}-1}+\frac{(n+1)(n-1)^2}{2}\right],& \quad \mbox{if}~n\neq p, n\nmid q^n-1~\mbox{and}~n\mid q^n+1\\ \frac{1}{n^2}\left[\frac{q^{n(n-1)}-1}{q^{2n}-1}\right],& \quad \mbox{if}~n\neq p, n\nmid q^n-1 ~\mbox{and}~ n\nmid q^n+1 \end{array} \right. $$ } \end{theorem} \begin{remark} The number of orbits in $\mathbb O$ under the action of $G$ gives us an upper bound on the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$ where $n=r$ is prime. \end{remark} \begin{example}\label{Ex_2} If we take $q=2$, $n=r=5$, then there are at most $$\frac{2^{5(5-3)}+2^{5(5-5)}}{25}=\frac{1025}{25}=41$$ extended irreducible binary Goppa codes of degree $5$ and length $33$. \end{example} The result in Example \ref{Ex_2} was also found in \cite{Magamba_wakale}. \section{Extended Irreducible Goppa codes of degree $r$ and length $q^n+1$, where $n\neq r$} Our aim in this section is to obtain the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$, where $n\neq r$. Before we do that, let us first obtain $\mathbb S(n,r)$. \subsection{$\mathbb S(n,r)$ where $n$ and $r$ are both prime and $n\neq r$} From Figure \ref{243}, elements of $\mathbb S(n,r)$ lie in $\mathbb F_{q^{nr}}$ and $\mathbb F_{q^{r}}$.
Hence the number of elements of degree $r$ over $\mathbb F_{q^n}$ is $|\mathbb S(n,r)|=q^{nr}-q^{n}.$ \begin{figure}[htb] \centering \begin{tikzpicture}[node distance=1.5cm] \node (F1) {$\mathbb F_q$}; \node (Fn) [above left of=F1]{$\mathbb F_{q^{n}}$}; \node (Fr) [above right of=F1]{$\mathbb F_{q^{r}}$}; \node (Fnr) [above right of=Fn]{$\mathbb F_{q^{nr}}$}; \draw (F1) -- (Fn); \draw (F1) -- (Fr); \draw (Fr) -- (Fnr); \draw (Fn) -- (Fnr); \end{tikzpicture} \caption{Lattice of subfields of $\mathbb F_{q^{nr}}$} \label{243} \end{figure} \subsection{Action of $G$ on $\mathbb A$} In this section we find the number of affine sets in $\mathbb A$ which are fixed by subgroups of $G$. Thus, we will consider the action of $\langle\sigma\rangle$, $\langle\sigma^{r}\rangle$, $\langle\sigma^{n}\rangle$ and $\langle\sigma^{nr}\rangle$ on $\mathbb A$. Clearly, the trivial subgroup $\langle\sigma^{nr}\rangle$ containing the identity fixes every affine set in $\mathbb A$. By an argument similar to the one in Section III of \cite{Ryan4} corresponding to the subgroups $\langle\sigma^{n}\rangle$, $\langle\sigma^{r}\rangle$ and $\langle\sigma\rangle$ we obtain the following results. \begin{theorem}\label{Theorem_Action_n_(n,r)=1} {\rm The number of affine sets fixed under $\langle\sigma^n\rangle$ when $n\neq r$ is $$\left\{ \begin{array}{r l} 1,& \quad \mbox{if}~ r=p \\ 0,& \quad \mbox{if}~ r\neq p~\mbox{and}~r \nmid (q^n-1) \\ r-1,& \quad \mbox{if}~ r\neq p~\mbox{and}~r\mid (q^n-1) \end{array} \right. .$$ } \end{theorem} \begin{theorem}\label{Action_r_(n,r)=1} {\rm The number of affine sets fixed by $\langle\sigma^{r}\rangle$ when $n\neq r$ is $$\frac{|\mathbb S(1,r)|}{q(q-1)}=\frac{q^{r}-q}{q(q-1)}=\frac{q^{r-1}-1}{q-1}.$$ } \end{theorem} \begin{theorem}\label{Theorem_Action_1_(n,r)=1} {\rm The number of affine sets fixed under $\langle\sigma\rangle$ when $n\neq r$ is $$\left\{ \begin{array}{r l} 1,& \quad \mbox{if}~ r=p \\ 0,& \quad \mbox{if}~ r\neq p~\mbox{and}~r \nmid (q-1) \\ r-1,& \quad \mbox{if}~ r\neq p~\mbox{and}~r\mid (q-1) \end{array} \right. .$$ } \end{theorem} \subsection{Action of G on $\mathbb O$} In this section we obtain an upper bound on the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$ where $n$ and $r$ are both prime and $n\neq r$. Clearly, the trivial subgroup $\langle\sigma^{nr}\rangle$ containing the identity fixes every projective linear set in $\mathbb O$. We now consider the actions of $\langle\sigma^{n}\rangle$, $\langle\sigma^{r}\rangle$ and $\langle\sigma\rangle$. \subsubsection{Action of $\langle\sigma^{n}\rangle$, a subgroup of order $r$}\label{ka_sigma_n} Suppose $O(\alpha)\in \mathbb O$ is fixed under $\langle\sigma^{n}\rangle$. Then $\langle\sigma^{n}\rangle$ acts on $O(\alpha)=A(\alpha)\cup A(\frac{1}{\alpha})\cup A(\frac{1}{\alpha + 1})\cup A(\frac{1}{\alpha + \xi _{1}}) \cup A(\frac{1}{\alpha + \xi _{2}})\cup \dots \cup A(\frac{1}{\alpha + \xi _{q^{n}-2}})$ which can be seen as a set of $q^{n}+1$ affine sets. $\langle\sigma^{n}\rangle$ partitions this set of $q^{n}+1$ affine sets. The possible length of orbits are $1$ and $r$. We will consider two possibilities; $r=p$ and $r\neq p$. First suppose that $r=p$. Then $q^n+1\equiv 1 \pmod{r}$. So orbits of length $r$ only are not possible. So if a projective linear set is fixed under $\langle\sigma^{n}\rangle$ where $r=p$ it has to contain a fixed affine set. 
Now, by Theorem \ref{Theorem_Action_n_(n,r)=1}, there is one affine set fixed under $\langle\sigma^{n}\rangle$ so it follows that there is one projective linear set fixed. Second suppose that $r \neq p$. We consider the following three possibilities 1) $r\mid (q-1)$, 2) $r\mid (q^n-1)$ and $r\nmid (q-1)$ and 3) $r\nmid (q^n-1)$. If $r\mid (q-1)$ and since $q-1\mid(q^n-1)$ for any positive integer $n$ then the fact that $q^n+1=q^n-1+2\equiv 2 \pmod{r}$ implies that orbits of length $r$ only are not possible. It is easy to see that if a projective linear set is fixed under $\langle\sigma^{n}\rangle$ then it contains two fixed affine sets. Now, by Theorem \ref{Theorem_Action_n_(n,r)=1}, there are $r-1$ affine sets fixed under $\langle\sigma^{n}\rangle$. Hence the number of projective linear sets fixed under $\langle\sigma^n\rangle$ given that $r\mid (q-1)$ is $\frac{r-1}{2}$.\\ Next, suppose that $r\mid (q^n-1)$ but $r\nmid (q-1)$. Then $q^n+1=q^n-1+2\equiv 2 \pmod{r}$. So orbits of length $r$ only are not possible since $r>2$. Now, by Theorem \ref{Theorem_Action_n_(n,r)=1}, there are $r-1$ affine sets fixed under $\langle\sigma^{n}\rangle$. Hence the number of projective linear sets fixed under $\langle\sigma^n\rangle$ given that $r\mid (q^n-1)$ but $r\nmid (q-1)$ is $\frac{r-1}{2}$. \\ Now suppose that $r\nmid (q^n-1)$. Clearly, there is no projective linear set fixed when $r\nmid (q^n-1)$ and $r\nmid (q^n+1)$ since orbits of length $r$ only are not possible and there is no affine set fixed when $r\nmid (q^n-1)$ and $r \neq p$. However, if $r\nmid (q^n-1)$ but $r\mid (q^n+1)$ then $Q_r(x)$ factors into $\rho=\frac{\phi(r)}{2}$ irreducible quadratic polynomials. By an argument similar to the one in Section \ref{Action_O_n_n=r}, the number of projective linear sets fixed under $\langle\sigma^{n}\rangle$, where there are no affine sets fixed in the decomposition of $O(\alpha)$ and $r\nmid (q^n-1)$ but $r\mid (q^n+1)$ is $\rho=\frac{\phi(r)}{2}=\frac{r-1}{2}$. \subsubsection{Action of $\langle\sigma^{r}\rangle$, a subgroup of order $n$}\label{Action_of_sigma_r} Suppose the orbit in $\mathbb{O}$ under the action of $G$ containing $O(\alpha)$ contains $r$ affine sets. Then $O(\alpha)$ is fixed under $\langle\sigma^{r}\rangle$. We claim that each $O(\alpha) \in {\mathbb O}$ fixed under $\langle\sigma^{r}\rangle$ contains precisely $q+1$ affine sets which are fixed under $\langle\sigma^{r}\rangle$. Without loss of generality, suppose $A(\alpha)$ is fixed under $\langle\sigma^{r}\rangle$. Then $A(\alpha)$ contains elements which satisfy the equation $x^{q^{r}}=x$. Assume that $\alpha$ satisfies $x^{q^{r}}=x$. Then it is clear that $\alpha+\nu$ where $\nu \in {\mathbb F}_{q}$ also satisfies the equation $x^{q^r}=x$. So it follows that, for $\frac{\zeta}{\alpha+\nu}+\xi \in A(\frac{1}{\alpha+\nu})$, we have $(\frac{\zeta}{\alpha+\nu}+\xi)^{q^{r}}=\frac{\zeta^{q^{r}}}{\alpha+\nu}+\xi^{q^r} \in A(\frac{1}{\alpha+\nu})$ which implies that $A(\frac{1}{\alpha+\nu})$ is fixed under $\langle\sigma^{r}\rangle$. Since there are $q$ elements in $\mathbb{F}_{q}$ then there are $q$ such affine sets. We now show that no affine set of the form $A(\frac{1}{\alpha+\mu})$, where $\mu\in \mathbb{F}_{q^{n}}\setminus\mathbb{F}_{q}$, in the decomposition of $O(\alpha)$ is fixed. First note that $\mu^{q^{r}}\neq\mu$ for $\mu \in \mathbb{F}_{q^{n}}\setminus\mathbb{F}_{q}$. 
Since, for $\frac{\zeta}{\alpha+\mu}+\xi \in A(\frac{1}{\alpha+\mu})$ where $\mu \in \mathbb{F}_{q^{n}}\setminus\mathbb{F}_{q}$, we have $(\frac{\zeta}{\alpha+\mu}+\xi)^{q^{r}}=\frac{\zeta^{q^{r}}}{\alpha+\mu^{q^{r}}}+\xi^{q^{r}}\notin A(\frac{1}{\alpha+\mu})$ then $A(\frac{1}{\alpha+\mu})$ is not fixed under $\langle\sigma^{r}\rangle$. So we conclude that there are $q+1$ affine sets in $O(\alpha)$ which are fixed under $\langle\sigma^{r}\rangle$. By Section \ref{Action_r_(n,r)=1}, there are $\frac{q^{r-1}-1}{q-1}$ affine sets fixed under $\langle\sigma^{r}\rangle$. Hence the number of projective linear sets in ${\mathbb O}$ which are fixed under $\langle\sigma^{r}\rangle$ is $$\frac{q^{r-1}-1}{(q-1)(q+1)}=\frac{q^{r-1}-1}{q^2-1}.$$ \subsubsection{Action of $\langle\sigma\rangle$, a subgroup of order $nr$} Suppose $O(\alpha)\in \mathbb O$ is fixed under $\langle\sigma\rangle$. Then $\langle\sigma\rangle$ acts on $O(\alpha)=A(\alpha)\cup A(\frac{1}{\alpha})\cup A(\frac{1}{\alpha + 1})\cup A(\frac{1}{\alpha + \xi _{1}}) \cup A(\frac{1}{\alpha + \xi _{2}})\cup \dots \cup A(\frac{1}{\alpha + \xi _{q^{n}-2}})$ which can be seen as a set of $q^{n}+1$ affine sets. $\langle\sigma\rangle$ partitions this set of $q^{n}+1$ affine sets. The possible length of orbits are $1$, $n$, $r$ and $nr$. We will consider two possibilities: $r=p$ and $r\neq p$. Suppose that $r=p$. Then $q^n+1\equiv 1 \pmod{r}$. So orbits of length $r$ only are not possible. So if a projective linear set is fixed under $\langle\sigma\rangle$ it has to contain a fixed affine set. Now, by Theorem \ref{Theorem_Action_1_(n,r)=1}, there is only one affine set fixed under $\langle\sigma\rangle$ so it follows that there is only one projective linear set fixed under $\langle\sigma\rangle$. Now if $O(\alpha)$ fixed under $\langle\sigma\rangle$ contains an orbit of length $n$ then $O(\alpha)$ is also fixed under $\langle\sigma^n\rangle$. But we know that if $r=p$ a projective linear set fixed under $\langle\sigma^n\rangle$ contains one fixed affine set so there is no projective linear set fixed in this case. Next we consider the possibility of a fixed projective linear set where there are orbits of length $n$, $r$ and $nr$. That is, we can find non-negative integers $x$ and $y$ and $z$ such that $nx+ry+nrz=q^n+1$. Since such a projective linear set contains an orbit of length $n$, it is also fixed under $\langle\sigma^n\rangle$. By Section \ref{ka_sigma_n}, a projective linear set fixed under $\langle\sigma^n\rangle$ where $r=p$ contains one fixed affine set. We conclude that if a projective linear set is fixed under $\langle\sigma\rangle$ where $r=p$ then it contains one affine set fixed. Hence there is one projective linear set fixed under $\langle\sigma\rangle$. Now suppose that $r \neq p$. We consider the following three possibilities 1) $r\mid (q-1)$, 2) $r\mid (q^n-1)$ but $r\nmid (q-1)$ and 3) $r\nmid (q^n-1)$. We begin by looking at the case $r\mid (q-1)$. For any positive integer $n$, $q-1\mid (q^n-1)$ so $q^n+1=q^n-1+2\equiv 2 \pmod{r}$. As such, orbits of length $r$ only are not possible since $r>2$. The fact that $q^n+1\equiv 2 \pmod{r}$ implies that a projective linear set fixed under $\langle\sigma\rangle$ contains $jr+2$ affine sets that are fixed under $\langle\sigma\rangle$ where $j$ is a positive integer. Now, by Theorem \ref{Theorem_Action_1_(n,r)=1}, there are only $r-1$ affine sets fixed under $\langle\sigma^{n}\rangle$. This implies that $j=0$. So there is no projective linear set fixed. 
However, if a projective linear set is fixed under $\langle\sigma\rangle$ and contains an orbit of length $n$, then $\langle\sigma^n\rangle$ fixes such a projective linear set. By Section \ref{ka_sigma_n}, a projective linear set fixed under $\langle\sigma^n\rangle$ contains two fixed affine sets. Hence the number of projective linear sets fixed under $\langle\sigma\rangle$ given that $r\mid (q-1)$ is $\frac{r-1}{2}$. Now, suppose that $r\mid (q^n-1)$ where $r\nmid (q-1)$. By Theorem \ref{Theorem_Action_1_(n,r)=1}, there are no affine sets fixed, so the possible lengths of an orbit are $n$, $r$ and $nr$. We consider two cases 1) $r\nmid (q+1)$ and 2) $r\mid (q+1)$. Suppose $r\nmid (q+1)$, then orbits of length $r$ only are not possible. So there is no projective linear set fixed. Next we consider orbits of length $n$. We know that $q^n+1\equiv q+1\pmod{n}$, by Fermat's Little Theorem. So if $n\mid (q+1)$ then orbits of length $n$ are possible. Now, a projective linear set fixed under $\langle\sigma\rangle$ containing an orbit of length $n$ is also fixed under $\langle\sigma^n\rangle$. From Section \ref{ka_sigma_n}, we know that a projective linear set fixed under $\langle\sigma^n\rangle$ contains 2 fixed affine sets and $q^n-1$ affine sets that are permuted in orbits of length $r$. Now, since $n\neq r$ there is no projective linear set fixed in this case. However, if $r\mid (q^n-1)$ where $r\nmid (q-1)$ and $r\mid (q+1)$ then matrices $A\in GL(2,q)$ of order $r$ exist. By an argument similar to the one in Section \ref{Action_O_n_n=r}, we need to find how many roots of $F_1(x)=cx^{q+1}+dx^{q}-ax-b=0$ where $a,b,c,d \in \mathbb F_{q}$ and $ad-bc \neq 0$ lie in $\mathbb S$. We conclude that the number of projective linear sets fixed under $\langle\sigma\rangle$ is $\frac{r-1}{2}$. Next, suppose that $r\nmid (q^n-1)$. By Theorem \ref{Theorem_Action_1_(n,r)=1} there is no affine set fixed under $\langle\sigma\rangle$. As such, the possible lengths of an orbit are $n$, $r$ and $nr$. If $r\nmid (q^n-1)$ and $r\nmid (q^n+1)$ then matrices $A\in GL(2,q^n)$ of order $r$ do not exist. Similarly, if $r\nmid (q^n-1)$ and $r\nmid (q+1)$ then matrices $A\in GL(2,q)$ of order $r$ do not exist. In either case, there are no projective linear sets fixed under $\langle\sigma\rangle$. However, if $r\nmid (q^n-1)$ and $r\mid (q+1)$ then, as above, we need to find how many roots of $F_1(x)=cx^{q+1}+dx^{q}-ax-b=0$, where $a,b,c,d \in \mathbb F_{q}$ and $ad-bc \neq 0$, lie in $\mathbb S$. By an argument similar to the one in Section \ref{Action_O_n_n=r}, the number of projective linear sets fixed under $\langle\sigma\rangle$ is $\frac{r-1}{2}$. Lastly, suppose that there is a possibility of having a projective linear set $O(\alpha)$ fixed under $\langle\sigma\rangle$ with a combination of different orbit lengths. That is, we can find non-negative integers $x,y$ and $z$ all not equal to zero such that $nx+ry+nrz=q^n+1$. Observe that if $y\neq 0$ then such a projective linear set is also fixed under $\langle\sigma^r\rangle$. Now, a projective linear set fixed under $\langle\sigma^r\rangle$ contains $q+1$ fixed affine sets, see Section \ref{Action_of_sigma_r}. This implies $r=q+1$. We see that we have already dealt with this case. Moreover, if $x\neq 0$ then $O(\alpha)$ is also fixed under $\langle\sigma^n\rangle$. By Section \ref{ka_sigma_n}, if there is no fixed affine set in the decomposition of $O(\alpha)$ then $r\mid (q^n+1)$ and $\langle\sigma^n\rangle$ fixes $\frac{r-1}{2}$ projective linear sets. 
This result is consistent with the results above. \subsection{Applying the Cauchy-Frobenius Theorem} Tables \ref{Length_q^n+1_(n,r)=1_C}, \ref{Length_q^n+1_(n,r)=1_D} and \ref{Length_q^n+1_(n,r)=1_D_1} show the number of projective linear sets which are fixed under the action of various subgroups of $G$. \begin{table}[htbp] \caption{Number of Projective Linear (PL) Sets fixed} \label{Length_q^n+1_(n,r)=1_C} \centering \begin{tabular}{ccccccc} \hline Subgroup& No. of elements & No. of fixed & No. of fixed & No. of fixed & No. of fixed & No. of fixed\\ of $G$ & not in previous & PL sets & PL sets & PL sets & PL sets& PL sets\\ & subgroup & if $r=p$ & if $r\neq p$ & if $r\neq p$,& if $r\neq p$,& if $r\neq p$,\\ & & & and $r\mid (q-1)$ &$r\mid (q^n-1)$ & $r\nmid (q^n-1)$,& $r\nmid (q^n-1)$\\ & & & & and $r\nmid (q-1)$ & and $r\mid (q^n+1)$ & and $r\nmid (q^n+1)$ \\ \hline \hline $\langle \sigma^{nr} \rangle$ & 1 & $\frac{q^{nr}-q^n}{q^{3n}-q^n}$& $\frac{q^{nr}-q^n}{q^{3n}-q^n}$&$\frac{q^{nr}-q^n}{q^{3n}-q^n}$&$\frac{q^{nr}-q^n}{q^{3n}-q^n}$&$\frac{q^{nr}-q^n}{q^{3n}-q^n}$ \\ \hline $\langle \sigma^{n} \rangle$ & $r-1$ & 1&$\frac{r-1}{2}$&$\frac{r-1}{2}$&$\frac{r-1}{2}$&0 \\ \hline $\langle \sigma^{r} \rangle$ & $n-1$ & $\frac{q^{r-1}-1}{q^2-1}$& $\frac{q^{r-1}-1}{q^2-1}$& $\frac{q^{r-1}-1}{q^2-1}$&$\frac{q^{r-1}-1}{q^2-1}$ &$\frac{q^{r-1}-1}{q^2-1}$\\ \hline \end{tabular} \end{table} \begin{table}[htbp] \caption{Number of Projective Linear (PL) Sets fixed} \label{Length_q^n+1_(n,r)=1_D} \centering \begin{tabular}{ccccc} \hline Subgroup& No. of elements & No. of fixed & No. of fixed & No. of fixed \\ of $G$ & not in previous & PL sets & PL sets & PL sets \\ & subgroup & if $r=p$ & if $r\neq p$ & if $r\neq p$,\\ & & & and $r\mid (q-1)$ &$r\mid (q^n-1)$,\\ & & & & $r\nmid (q-1)$,\\ & & & & and $r\mid (q+1)$ \\ \hline \hline $\langle \sigma \rangle$ & $(n-1)(r-1)$ & 1&$\frac{r-1}{2}$&$\frac{r-1}{2}$\\ \hline \end{tabular} \end{table} \begin{table}[htbp] \caption{Number of Projective Linear (PL) Sets fixed} \label{Length_q^n+1_(n,r)=1_D_1} \centering \begin{tabular}{ccccc} \hline Subgroup& No. of elements &No. of fixed & No. of fixed & No. of fixed\\ of $G$ & not in previous &PL sets & PL sets & PL sets\\ & subgroup &if $r\neq p$, & if $r\neq p$, & if $r\neq p$,\\ & &$r\nmid (q^n-1)$, & $r\nmid (q^n-1)$, & $r\nmid (q^n-1)$,\\ & &$r\mid (q^n+1)$, & and $r\mid (q+1)$ & and $r\nmid (q^n+1)$\\ & & and $r\nmid (q+1)$ & & \\ \hline \hline $\langle \sigma \rangle$ & $(n-1)(r-1)$ &0&$\frac{r-1}{2}$&0\\ \hline \end{tabular} \end{table} \begin{theorem}\label{Thm_all_(n,r)=1_q^n+1} The number of orbits in $\mathbb{O}$ under the action of $G$ is \begin{enumerate} \item $\begin{array}{rl} \frac{1}{nr}\left[\frac{q^{n(r-1)}-1}{q^{2n}-1}+n(r-1)+\frac{(n-1)(q^{r-1}-1)}{q^2-1}\right],&\mbox{if}~ r=p. \end{array}$ \item $\begin{array}{rl} \frac{1}{nr}\left[\frac{q^{n(r-1)}-1}{q^{2n}-1}+\frac{n(r-1)^2}{2}+\frac{(n-1)(q^{r-1}-1)}{q^2-1}\right],&\mbox{if}~ r\neq p~\mbox{and}~r\mid (q-1)~\mbox{or}~r\neq p, ~r\mid (q^n-1), ~r\nmid (q-1) ~\mbox{and}~r\mid (q+1)\\ &\mbox{or}~ r\neq p,~ r\nmid (q^n-1),~ r\mid (q^n+1)~\mbox{and}~ r\mid (q+1).\end{array}$ \item $\begin{array}{rl} \frac{1}{nr}\left[\frac{q^{n(r-1)}-1}{q^{2n}-1}+\frac{(r-1)^2}{2}+\frac{(n-1)(q^{r-1}-1)}{q^2-1}\right],&\mbox{if}~ r\neq p,~ r\nmid (q^n-1),r\mid (q^n+1)~\mbox{and}~r\nmid (q+1). 
\end{array} $ \item $\begin{array}{rl} \frac{1}{nr}\left[\frac{q^{n(r-1)}-1}{q^{2n}-1}+\frac{(n-1)(q^{r-1}-1)}{q^2-1}\right],&\mbox{if}~ r\neq p,~ r\nmid (q^n-1)~\mbox{and}~r\nmid (q^n+1). \end{array}$ \end{enumerate} \end{theorem} \begin{remark} The number of orbits in $\mathbb O$ under the action of $G$ gives us an upper bound on the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$ where $q=p^t$, $n$ and $r$ are prime numbers and $n\neq r$. \end{remark} \begin{example}\label{Ex_4} If we take $q=2$, $n=11$ and $r=5$, then there are at most $76,261$ extended irreducible binary Goppa codes of degree $5$ and length $2,049$. \end{example} \section{Conclusion} In this paper we have produced an upper bound on the number of extended irreducible Goppa codes of degree $r$ and length $q^n+1$, where $q=p^t$ and $n$ and $r$ are distinct prime numbers.
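As a quick numerical check of Example~\ref{Ex_4} (this sketch is not part of the derivation above), the following short Python snippet evaluates case~4 of Theorem~\ref{Thm_all_(n,r)=1_q^n+1}, which applies here since $5\nmid 2^{11}-1=2047$ and $5\nmid 2^{11}+1=2049$, and reproduces the bound of $76{,}261$.
\begin{verbatim}
from fractions import Fraction

def orbit_bound_case4(q, n, r):
    # Case 4 of the theorem: r != p, and r divides neither q^n - 1 nor q^n + 1.
    total = Fraction(q**(n*(r - 1)) - 1, q**(2*n) - 1) \
          + Fraction((n - 1)*(q**(r - 1) - 1), q**2 - 1)
    return total / (n*r)

print(orbit_bound_case4(2, 11, 5))  # prints 76261
\end{verbatim}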
\section{Introduction} Two of the major open questions in star formation research are: what is the dominant mechanism regulating the efficiency and rate of star formation, and on what scale does this mechanism operate? Increases in the average efficiency and rate of star formation are observed over large systems, e.g. starburst galaxies (e.g. \citealt{Scoville00}; \citealt{Dopita02}; \citealt{Kennicutt12}), and on the smaller scale of individual molecular clouds \citep[e.g.][]{Moore07, Polychroni12}. Recent studies have attempted to determine the effect that the spiral arms, and other features of large-scale structure, have had on the efficiency of star formation in the Milky Way (\citealt{Eden12,Eden13,Eden15,Moore12,Ragan16,Urquhart17}). On average, the efficiencies were found to be roughly constant over kiloparsec scales, regardless of environment, with some minor enhancements associated with some, but not all, spiral arms. Closer inspection showed that individual, extreme star-forming regions, namely the W49A and W51 complexes, were responsible for localised peaks in the ratio of infrared luminosity to molecular gas mass, even averaged over large sections of a spiral arm \citep{Moore12}. The study of \citet{Moore12} found that the star formation rate density ($\Sigma_{\rmn{SFR}}$ in units of M$_{\sun}$\,yr$^{-1}$\,kpc$^{-2}$) had significant increases at Galactocentric radii associated with spiral arms, but the vast majority of these increases, $\sim$ 70 per cent, were due to source crowding. The remaining 30 per cent of this increase was found to be due to the inclusion of these individual high-SFR star-forming regions. In the Sagittarius arm, thought to include W51, the increase was thought to be due to an increase in the number of young stellar objects (YSOs) per unit mass, whilst the increase seen towards the Perseus arm is thought to be due to the presence of W49A, which has a larger luminosity per YSO, i.e. the luminosity distribution in this region is flatter. A change in the luminosity distribution of the stars in the W49A star-formation region would indicate a possible change in the stellar initial mass function (IMF). This would be very significant, as a review of the IMF in environments from local clusters to nearby galaxies to starburst galaxies has found that strong variations from the Salpeter-like form can be ruled out \citep{Bastian10}. As inferred, the IMF was found to be fairly constant within the Milky Way \citep{McKee07}, but hints of variations have been detected in the extreme star-forming conditions within the Galactic Centre. These clusters have been shown to have significant variations in the IMF \citep{Espinoza09}, but more recent observations indicate it is Salpeter-like \citep{Lockmann10,Habibi13}. However, a change in the W49A mass function compared to other significant star-forming regions (W43 and W51) would indicate a real deviation from the global IMF of the Galaxy. W49A is at a distance estimated to be 11.11$^{+ 0.79}_{- 0.69}$\,kpc \citep{Zhang13} and is one of the most extreme star-forming regions in the Galaxy \citep[e.g.][]{Galvan-Madrid13}. This region is considered extreme as it has many quantities consistent with those found in LIRGS and ULIRGS, (ultra)luminous infrared galaxies, with localised dust temperatures of $>$\,100\,K and densities $>$\,10$^{5}$\,cm$^{-3}$ \citep{Nagy12} and a luminosity per unit mass of $\sim$ 10\,L$_{\sun}$/M$_{\sun}$, compared to $\sim$ 100\,L$_{\sun}$/M$_{\sun}$ in ULIRGS \citep[e.g.][]{Solomon97}.
The absolute luminosity of W49A ($\sim$ 10$^{7}$\,L$_{\sun}$; \citealt*{Harvey77,Ward-Thompson90}) does not compare to those of LIRGS and ULIRGS ($\sim$\,10$^{11}$\,--\,10$^{12}$\,L$_{\sun}$), but a mass of $\sim$ 10$^{6}$\,M$_{\sun}$ \citep{Sievers91} gives a $L/M$ that is within an order of magnitude. The region also has an overabundance of ultra-compact \ion{H}{ii} regions, with a factor of $\sim$ 3 more found coincident with this region compared to any other in the first quadrant of the Galaxy \citep{Urquhart13}. The star-forming region W51 has a comparable $L/M$ to W49A (2.3\,$\times$\,10$^{5}$\,M$_{\sun}$, 3\,$\times$\,10$^{6}$\,L$_{\sun}$; \citealt{Harvey86,Kang10}) and has starburst-like star formation, with the majority occurring recently \citep[e.g.][]{Clark09} and very efficiently \citep{Kumar04}. W51 is at an estimated distance of 5.41$^{+ 0.31}_{- 0.28}$\,kpc \citep{Sato10}. Distances to both W49A and W51 are from maser parallax measurements. The aim of this paper is to determine the star-forming properties of the two regions, building on the work of \citet{Moore12}, who found that the presence of these two regions was producing significant increases in the mean $L/M$ on kiloparsec scales. Assuming that the IMF is fully sampled, invariant, and that the infrared-bright evolutionary stages have lifetimes short compared to those of molecular clouds, then we would expect $L/M$ to be correlated with SFE. Alternatively, changes in $L/M$ may be due to variations in the luminosity distribution of the embedded massive YSOs, suggesting variations in the IMF. We use data from the James Clerk Maxwell Telescope (JCMT) Plane Survey (JPS; \citealt{Moore15,Eden17}), additional 850-$\upmu$m continuum data from the JCMT, and the \emph{Herschel} infrared Galactic Plane Survey \citep{Molinari10a, Molinari10b} to determine the distribution of clump masses and embedded YSO luminosities for both regions, and examine the relationship between luminosity and mass. The paper is structured as follows: Section 2 introduces the data, Section 3 describes how the sources are selected for the study as well as the methods used to calculate source mass and luminosity. Section 4 presents the results, with Section 5 discussing those results. In Section 6 we provide a summary of our results and give conclusions. \section{Data} \subsection{\emph{Herschel} infrared Galactic Plane Survey} The \emph{Herschel}\footnote{{\em Herschel} was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.} infrared Galactic Plane Survey (Hi-GAL; \citealt{Molinari10b,Molinari10a}) was an Open-time Key Project of the $\emph{Herschel Space Observatory}$, and has mapped the entire Galactic Plane, with the inner Galaxy portion and initial compact-source catalogues outlined by \citet{Molinari16a,Molinari16}. This section, spanning Galactic longitudes of $-$70$\degr$\,$\leq$\,$\emph{l}$\,$\leq$\,68$\degr$, contains the W49A and W51 star-forming regions, imaged with the PACS \citep{Poglitsch10} and SPIRE \citep{Griffin10} cameras at 70, 160, 250, 350 and 500\,$\upmu$m with diffraction-limited beams of 6--35 arcseconds \citep{Molinari16a}. The catalogue was produced using the source extraction algorithm, CuTEx (Curvature Threshold Extractor; \citealt{Molinari11}), with a band-merged catalogue produced by \citet{Elia17}. The Hi-GAL data have saturated pixels present in all five wavebands within both the W49A and W51 regions \citep{Molinari16a}. 
These saturated pixels occur in the most central areas of the two regions. However, only W51 is significantly affected, with over 300 pixels in the 250-$\upmu$m data. The saturated regions in W49A are not associated with any significant dust clumps identified by ATLASGAL \citep{Urquhart14b}. Accounting for the saturation in the W51 region will be discussed in Section 5.2. The Hi-GAL sources used in this study are compact objects, tracing the peaks of the luminosity found embedded within the larger, star-forming clump structures. The fixed-aperture-based photometry, which is described below and in full in \citet{Elia17}, may therefore produce fluxes and luminosities that are somewhat dependent on this method. \subsection{JCMT continuum data} The two regions were imaged in the 850-$\upmu$m continuum by the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) instrument \citep{Holland13} on the JCMT at an angular resolution of 14.4\,arcseconds. The W51 data are taken from the JCMT Plane Survey (JPS: \citealp{Moore15})\footnote{The JPS is part of the JCMT Legacy Surveys Project \citep{Chrysostomou10}.}, where the compact sources are catalogued in \citet{Eden17}. The W49A data were obtained in standard time allocations under Project IDs m13bu27 and m14au23, and were observed in the same manner as the JPS, as outlined in \citet{Eden17}, between September 2013 and September 2014 in JCMT band-2 weather, with 220-GHz sky opacities of $\uptau_{220}$\,$\simeq$\,0.08\,--\,0.16. The observations consisted of 23 individual $\emph{pong3600}$ observations \citep{Bintley14}, each taking 40--45 minutes and covering a one-degree circular field. The data, reduced with 3-arcsecond pixels using the same procedure described in \citet{Eden17}, have a pixel-to-pixel rms of 17.39\,mJy\,beam$^{-1}$, or 4.99\,mJy\,beam$^{-1}$ when smoothed over the beam. The resulting map is displayed in Fig.~\ref{W49map}. When utilising the full dynamic range, the data display negative bowling around the bright W49A region, a common feature of the observation and reduction process; for a full explanation, see \citet{Mairs15} and \citet{Eden17}. However, while potentially influencing photometry results in the affected area, this effect does not appear to be a significant factor in the results. The depth of the negative bowling is $\sim$\,10\,$\upsigma$, compared to the $\sim$\,2500\,$\upsigma$ at the brightest point of the data. This means that few, if any, significant compact sources will have been missed due to this effect. Additionally, no ATLASGAL compact sources \citep{Urquhart14b} or Hi-GAL band-merged sources \citep{Elia17} are found in the negative regions. The corresponding W51 map from the JPS is displayed in Appendix~\ref{W51app}. \begin{figure*} \includegraphics[width=0.99\linewidth]{W49map.eps} \caption{The JCMT map of the W49A region. The intensity scale is in units of mJy\,beam$^{-1}$. The white circle indicates a 20-arcmin radius, corresponding to a physical radius of $\sim$\,60\,pc, within which all extracted sources are analysed. The dynamic range is scaled in such a way that only pixels above 3$\upsigma$ are displayed. The pink ellipses represent the JCMT sources assigned to W49A, with the small white circles showing the positions of the Hi-GAL sources.} \label{W49map} \end{figure*} \subsection{Molecular-line data} Molecular-line data are available for both regions in the $J = 1-0$ (110.150\,GHz) and $J = 3-2$ (330.450\,GHz) rotational transitions of $^{13}$CO.
The $^{13}$CO $J = 1-0$ data form part of the Galactic Ring Survey (GRS; \citealt{Jackson06}), which mapped the inner Galaxy at Galactic longitudes of $\emph{l}$\,=\,18$\degr$ to 55$\fdg$7 and latitudes of $\emph{b}$\,$\leq$\,1$\degr$, at an angular resolution of 46 arcsecs. The higher-energy transition of $J = 3-2$ was mapped at an angular resolution of $\sim$14 arcsecs as part of two different projects with the Heterodyne Array Receiver Program (HARP; \citealt{Buckle09}) instrument on the JCMT. The W49A data are part of the $^{13}$CO/C$^{18}$O Heterodyne Inner Milky Way Plane Survey (CHIMPS; \citealt{Rigby16}), while the W51 data are from the targeted survey of the region by \citet{Parsons12}. \section{Hi-GAL source selection \& source properties} \subsection{Source selection} \subsubsection{Hi-GAL sources} A maximum projected radius of 60\,pc from the main star-forming centre was imposed as the first source-selection criterion in the two regions. The radius of 60\,pc corresponds to approximately double the size of the largest molecular clouds in the GRS catalogue \citep{Roman-Duval10} and to the size of the largest giant molecular clouds in the Galaxy (e.g. W3: \citealt{Polychroni12}), ensuring all material associated with the region is included in this study. Previous studies of W49A were also confined to a radius of 60\,pc \citep{Galvan-Madrid13}. This corresponds to a radius of 20\,arcmin centred on $\emph{l} = 43\fdg$170, $\emph{b}$ = $-$0$\fdg$004 for W49A and a radius of 40\,arcmin from $\emph{l} = 49\fdg$486, $\emph{b}$ = $-$0$\fdg$381 for W51. Next, a source must have a detection in at least 3 of the 4 sub-millimetre wavelengths of the Hi-GAL band-merged catalogue \citep{Elia17}, i.e., 160, 250, 350 and 500\,$\micron$. 148 and 712 candidate Hi-GAL sources were found meeting these criteria for W49A and W51, respectively. In order to define association with the target regions by velocity, CO spectra were extracted from the GRS and HARP data cubes at the positions of the above 860 candidate sources. The HARP spectra were inspected first, as the $J = 3-2$ transition traces denser ($\gtrsim$ 10$^{4}$\,cm$^{-3}$) and warmer ($\sim$30\,K) gas than does the $J = 1-0$ transition (10$^{2}$--10$^{3}$\,cm$^{-3}$ and $\sim$10\,K). As the lines of sight towards W49A and W51 contain multiple emission components at different velocities due to foreground and background spiral arms, the $J = 3-2$ transitions are less ambiguous than $J = 1-0$ at identifying the molecular emission associated with a dense, star-forming clump. 176 candidate Hi-GAL sources had emission in the HARP spectra, of which 50 were in W49A and 126 in W51. For those candidate sources with multiple emission peaks at different velocities in the spectra, the strongest emission peak was chosen on the assumption that it corresponds to the highest column density (e.g. \citealt{Urquhart07,Eden12,Eden13}). In total, using both HARP and GRS data, 762 sources (121 in W49A and 641 in W51) were assigned velocities. The velocities obtained from the HARP or GRS spectra were cross-referenced with the \citet{Rathborne09} GRS cloud catalogue, containing the derived distances from \citet{Roman-Duval09}. A full description of the matching method can be found in \citet{Eden12}. These cloud distances were adopted as the distances to the Hi-GAL sources, resulting in assigned distances to 109 and 582 sources in the W49A and W51 target areas, respectively. 
Of these, 57 and 406 were coincident with the accepted distances of W49A (11.11$^{+ 0.79}_{- 0.69}$\,kpc) and W51 (5.41$^{+ 0.31}_{- 0.28}$\,kpc). The tolerance was taken to be equal to the quoted errors on the cloud distances. The 71 candidate sources with a $^{13}$CO velocity but without a GRS cloud association were assigned two kinematic distances using the Galactic rotation curve of \citet{Brand93}, due to the distance ambiguity that exists in the Inner Galaxy. Since W51 has a velocity consistent with the rotation tangent point along that particular line of sight ($\sim$ 60\,km\,s$^{-1}$), sources in the W51 field did not require a determination between the two kinematic distances, with all sources at that velocity placed at the tangent distance. 28 out of 59 candidate W51 sources could thus be assigned to the W51 complex on velocity alone. Of the remaining 12 candidate sources with velocities within the W49A field, only two had one kinematic distance consistent with W49A. To determine between the two kinematic distances for these two sources, the HISA method is used \citep[e.g.][]{Anderson09, Roman-Duval09}, making use of \ion{H}{i} data from the VLA Galactic Plane Survey \citep{Stil06}. Neither of the two was assigned the far distance, i.e. neither was determined to be in W49A. The final source numbers, including all Hi-GAL sources within the selection radii and at the distances of the two regions, are 57 and 434 for W49A and W51, respectively. \subsubsection{JCMT sources} The source-extraction process for the new JCMT data makes use of the {\sc FellWalker} (FW; \citealt{Berry15}) algorithm, with the same configuration parameters used to produce the JPS compact-source catalogue \citep{Eden17}. 173 sources were found in the W49A map, after excluding sources with aspect ratios greater than 5 and SNR less than 5$\upsigma$. A sample of the source catalogue is displayed in Table~\ref{W49sources} (the full list of 173 W49A sources is available as Supporting Information to the online article). Since the observing and source-extraction methods are identical to those used for the JPS, we can estimate the sample completeness limit by scaling the JPS results, with 95 per cent completeness obtained for sources over 5$\upsigma$ \citep{Eden17}, or 86.95\,mJy\,beam$^{-1}$. The W51 JCMT data, as part of the $\ell$\,=\,50$\degr$ JPS field, have a somewhat higher pixel-to-pixel rms of 25.66\,mJy\,beam$^{-1}$, or 5.98\,mJy\,beam$^{-1}$ when smoothed over the beam \citep{Eden17}. 822 compact sources were found within this JPS field, 384 of them within 40\,arcmin of the W51 region. Within the corresponding 20-arcmin angular radius, 117 compact 850-$\upmu$m sources were found in the W49A map. The CO spectra at the positions of the JCMT sources were extracted in the same manner as above, with 472 of the 501 candidate JCMT sources assigned velocities, 109 for W49A and 363 for W51. These velocities produced GRS cloud matches, and thus distances, to 61 and 287 sources within the two regions, respectively. Of the sources without cloud distances, using the methods above, a further 6 of 8 were assigned to W51 and zero of 19 had a far kinematic distance consistent with W49A. These selection criteria gave 61 and 293 JCMT sources within the W49A and W51 regions, respectively. A summary of the source numbers can be found in Table~\ref{sourcenumbers}. The large difference in the source numbers, in both the Hi-GAL and JCMT samples, is probably due to source blending at the greater distance of W49A; this issue is addressed below (Section 4.3).
The source IDs of the Hi-GAL and JPS sources used are listed in Appendix~\ref{sourceIDs}. \begin{table} \begin{center} \caption{Summary of the source numbers found in each survey for W49A and W51.} \label{sourcenumbers} \begin{tabular}{lcc}\hline Region & Hi-GAL & JCMT \\ & Sources & Sources \\ \hline W49A & 57 & 61 \\ W51 & 434 & 293 \\ \hline \end{tabular} \end{center} \end{table} \begin{table*} \begin{center} \caption{The W49A JCMT catalogue. The columns are as follows: (1) W49A catalogue source name; (2) and (3) Galactic coordinates of the position at which the peak flux is found within the W49A source; (4) and (5) the central point in Galactic coordinates; (6--8) semi-major, semi-minor and position angle, measured anticlockwise from the Galactic north, of the elliptical fit to the shape of the source; (9) effective radius of source, calculated by $\sqrt{(A/\pi)}$, where A is the area of the source above the FW detection threshold; (10--11) peak flux density, in units of Jy\,beam$^{-1}$, and measurement error; (12--13) integrated flux, in units of Jy, and measurement error; (14) signal-to-noise ratio (SR) of the source and (15) whether the source is associated with the W49A star-forming region, determined by heliocentric distance.} \label{W49sources} \begin{tabular}{lccccccccccccccc}\hline Source & $\ell_{\rmn{peak}}$ & $\emph{b}_{\rmn{peak}}$ & $\ell_{\rmn{cen}}$ & $\emph{b}_{\rmn{cen}}$ & $\upsigma_{\rmn{maj}}$ & $\upsigma_{\rmn{min}}$ & PA & $R_{\rmn{eff}}$ & $S_{\rmn{peak}}$ & $\Delta$$S_{\rmn{peak}}$ & $S_{\rmn{int}}$ & $\Delta$$S_{\rmn{int}}$ & SNR & W49A\\ ID & ($^{\circ}$) & ($^{\circ}$) & ($^{\circ}$) & ($^{\circ}$) & ($\prime\prime$) & ($\prime\prime$) & ($^{\circ}$) & ($\prime\prime$) & (Jy\,beam$^{-1}$) & (Jy\,beam$^{-1}$) & (Jy) & (Jy) & & Source\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) \\ \hline W49\_021 & 42.871 & -0.182 & 42.866 & -0.179 & 18 & 9 & 190 & 26 & 0.145 & 0.027 & 0.577 & 0.029 & 8.31 & n \\ W49\_022 & 42.884 & -0.030 & 42.882 & -0.029 & 10 & 7 & 206 & 17 & 0.090 & 0.017 & 0.175 & 0.009 & 5.17 & n \\ W49\_023 & 42.888 & -0.082 & 42.887 & -0.083 & 10 & 4 & 137 & 13 & 0.088 & 0.016 & 0.089 & 0.005 & 5.07 & n \\ W49\_024 & 42.888 & -0.193 & 42.886 & -0.190 & 8 & 7 & 212 & 15 & 0.103 & 0.019 & 0.157 & 0.008 & 5.95 & n \\ W49\_025 & 42.889 & -0.197 & 42.888 & -0.197 & 15 & 6 & 137 & 17 & 0.108 & 0.020 & 0.195 & 0.010 & 6.18 & n \\ W49\_026 & 42.904 & -0.060 & 42.902 & -0.061 & 15 & 6 & 160 & 19 & 0.098 & 0.018 & 0.241 & 0.012 & 5.64 & y \\ W49\_027 & 42.906 & -0.006 & 42.908 & -0.005 & 14 & 9 & 177 & 25 & 0.125 & 0.024 & 0.453 & 0.023 & 7.19 & y \\ W49\_028 & 42.908 & -0.025 & 42.907 & -0.024 & 14 & 5 & 225 & 17 & 0.097 & 0.018 & 0.192 & 0.010 & 5.59 & y \\ W49\_029 & 42.915 & -0.134 & 42.915 & -0.135 & 13 & 7 & 241 & 22 & 0.200 & 0.037 & 0.498 & 0.025 & 11.52 & n \\ W49\_030 & 42.922 & -0.142 & 42.920 & -0.143 & 6 & 6 & 230 & 12 & 0.093 & 0.018 & 0.098 & 0.005 & 5.32 & n \\ W49\_031 & 42.927 & -0.067 & 42.932 & -0.068 & 20 & 6 & 179 & 21 & 0.090 & 0.017 & 0.246 & 0.012 & 5.18 & y \\ W49\_032 & 42.928 & -0.052 & 42.924 & -0.053 & 12 & 8 & 169 & 20 & 0.126 & 0.024 & 0.329 & 0.016 & 7.27 & y \\ W49\_033 & 42.929 & -0.042 & 42.930 & -0.042 & 12 & 9 & 158 & 24 & 0.196 & 0.037 & 0.601 & 0.030 & 11.28 & y \\ W49\_034 & 42.932 & -0.013 & 42.930 & -0.012 & 14 & 5 & 247 & 18 & 0.092 & 0.017 & 0.178 & 0.009 & 5.29 & y \\ W49\_035 & 42.939 & -0.034 & 42.940 & -0.034 & 7 & 5 & 151 & 13 & 0.113 & 0.021 & 0.181 & 0.009 & 
6.48 & y \\ W49\_036 & 42.945 & -0.314 & 42.943 & -0.313 & 8 & 4 & 100 & 12 & 0.088 & 0.017 & 0.092 & 0.005 & 5.08 & n \\ W49\_037 & 42.946 & 0.149 & 42.946 & 0.151 & 9 & 6 & 188 & 16 & 0.096 & 0.019 & 0.187 & 0.009 & 5.50 & n \\ W49\_038 & 42.947 & -0.032 & 42.946 & -0.034 & 8 & 6 & 101 & 15 & 0.109 & 0.021 & 0.203 & 0.010 & 6.26 & y \\ W49\_039 & 42.950 & -0.032 & 42.956 & -0.030 & 16 & 6 & 181 & 20 & 0.098 & 0.019 & 0.249 & 0.012 & 5.61 & y \\ W49\_040 & 42.951 & 0.147 & 42.951 & 0.147 & 10 & 5 & 126 & 15 & 0.092 & 0.018 & 0.136 & 0.007 & 5.28 & n \\ \hline \multicolumn{15}{l}{$\emph{Note:}$ Only a small portion of the data is provided here, with the full list of 173 W49A sources available as Supporting Information to the online article. }\\ \end{tabular} \end{center} \end{table*} \subsection{Luminosity determination} The luminosities of the Hi-GAL sources are given in the Hi-GAL compact-source catalogue \citep{Elia17} and were calculated by fitting a modified blackbody to the spectral energy distribution (SED) of each source above $21\,\umu$m, using the fitting strategy as described in \citet{Giannini12}. The SED fitting and consequent luminosity calculations are fully explained in \citet{Elia17}, with a brief description below. The justification for the use of a modified blackbody as opposed to an SED template is described in \citet{Elia16}. The modified blackbody expressions, and adopted constants, are as described by \citet{Elia13}. Note that, in order to account for the different angular resolutions in each band, the fluxes at 350 and 500\,$\micron$ are scaled by the ratio of the beam-deconvolved source sizes in each band to that at 250\,$\micron$ (e.g., \citealp{Giannini12}; cf \citealp{NguyenLuong11}) The value of the dust opacity exponent, $\beta$, is kept constant at 2.0 in the fit, as recommended by \citet{Sadavoy13} in the $\emph{Herschel}$ Gould Belt Survey and as adopted in the HOBYS survey \citep{Giannini12}. The integrated flux is then converted to luminosity, $L$, and temperature, $T_{\rmn{d}}$, which are free parameters. The luminosities of the sources also include shorter, and longer, wavelength components detected with various other surveys, allowing for these values to approximate bolometric luminosities. Shorter-wavelength surveys used include MIPSGAL \citep{Gutermuth15}, MSX \citep*{Egan03}, and WISE \citep{Wright10}, whilst longer wavelengths made use of the GaussClumps ATLASGAL catalogue \citep{Csengeri14} and the version 2 catalogue of the BGPS \citep{Ginsburg13}. The use of the \citet{Csengeri14} ATLASGAL catalogue emphasises the compact nature of the Hi-GAL sources as these ATLASGAL sources are more compact than those of \citet{Contreras13} and \citet{Urquhart14b}. The completeness limits of the luminosities correspond to 200\,L$_{\sun}$ and 100\,L$_{\sun}$ for W49A and W51, respectively. The cumulative distribution of the fitted temperatures in the two regions is shown in Fig.~\ref{temperatures}. The mean temperatures are 16.8\,$\pm$\,0.8\,K and 15.4\,$\pm$\,0.2\,K with median temperatures of 15.4\,$\pm$\,3.5\,K and 14.3\,$\pm$\,2.6\,K for W49A and W51, respectively. A Kolomogorov--Smirnov (K--S) test was applied to the $T_{\rmn{d}}$ distributions of the two sub-samples giving a 22 per cent probability that the differences arise from random sampling fluctuations, so it can be assumed that these subsets are similarly evolved. 
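This two-sample comparison can be reproduced in a few lines; the sketch below assumes the SED-fitted temperatures have been read into two NumPy arrays (the values shown are placeholders, not catalogue values) and uses the standard two-sided Kolmogorov--Smirnov test from SciPy.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

# Placeholder dust temperatures (K) for the Hi-GAL sources in each region;
# in practice these come from the band-merged catalogue SED fits.
t_w49a = np.array([16.8, 15.4, 14.9, 18.2, 13.7])
t_w51 = np.array([15.4, 14.3, 13.9, 16.1, 14.8])

stat, p_value = ks_2samp(t_w49a, t_w51)
print(stat, p_value)
# For the full samples a p-value of ~0.22 was found, i.e. the two temperature
# distributions are consistent with random sampling fluctuations.
\end{verbatim}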
\begin{figure} \includegraphics[scale=0.50]{W49W51_temperatures.eps} \caption{The cumulative distributions of the SED-derived temperatures for Hi-GAL sources in the W49A and W51 regions, represented by the blue dashed and red dotted lines, respectively.} \label{temperatures} \end{figure} \subsubsection{Method Dependency of Luminosities} The luminosities quoted in this study are those given in \citet{Elia17} and are obtained using the method outlined in that paper. The total luminosities contained within all compact clumps are found to be 1.03\,$\times$\,10$^{6}$\,L$_{\sun}$ and 4.67\,$\times$\,10$^{5}$\,L$_{\sun}$ for W49A and W51, respectively. These values are an order of magnitude smaller than those found in other studies. For example, \citet{Urquhart17} find integrated compact-source luminosities of 1.52\,$\times$\,10$^{7}$\,L$_{\sun}$ and 1.11\,$\times$\,10$^{7}$\,L$_{\sun}$, respectively. As described above, the Hi-GAL luminosities of \citet{Elia17} use fluxes scaled to the source size at 250\,$\upmu$m, which will remove flux at longer wavelengths and for larger sources. The corresponding fluxes in the \citet{Urquhart17} study, extracted from the public Hi-GAL image data, use a 3-$\upsigma$ aperture radius, which corresponds to a minimum aperture size of 55.1 arcsec \citep{Konig17}. Total integrated luminosities obtained from the image data, rather than by adding the compact sources, including all emission in all wavebands within 60\,pc radii, but otherwise calculated as in \citet{Elia17}, are 8.82\,$\times$\,10$^{6}$\,L$_{\sun}$ and 1.45\,$\times$\,10$^{6}$\,L$_{\sun}$ for W49A and W51, respectively. These are consistent with the literature values quoted in the introduction \citep{Harvey77,Kang10}. This consistency implies that the low values of $L$ are the result of the aperture-photometry methodology of \cite{Elia17}. However, this emphasises that the derived luminosity distributions are strictly relevant to compact sources, at the position where the YSO is most likely to form within a clump, and tend to exclude extended emission. \subsection{Mass determination} The masses of the JCMT detected sources were calculated using the following: \begin{equation} M = \frac{S_{\nu}D^{2}}{\kappa_{\nu}B_{\nu}(T_{\rmn{d}})} \end{equation} \noindent where $S_{\nu}$ is the integrated flux density, $D$ is the distance to the source, $\kappa_{\nu}$ is the mass absorption coefficient, taken to be 0.001\,m$^{2}$\,kg$^{-1}$ at a wavelength of 850\,$\upmu$m \citep{Mitchell01} and $B_{\nu}(T_{\rmn{d}})$ is the Planck function evaluated at a dust temperature, $T_{\rmn{d}}$. Taking the distances to W49A and W51 as 11.11\,kpc and 5.41\,kpc, respectively \citep{Sato10,Zhang13}, and the dust temperatures as the median values from above (15.37\,K and 14.28\,K, respectively), the equation becomes $M/M_{\sun} = 2066\,S_{\nu}/\rmn{Jy}$ and $M/M_{\sun} = 490\,S_{\nu}/\rmn{Jy}$ for the two regions, respectively. The masses are calculated from the JCMT data to maintain some independence between the determination of $M$ and $L$. The median temperatures are used in the instances where there are not positional matches, within the $\emph{Herschel}$ beam, with a Hi-GAL source. Where there is a match, the SED-derived temperature is used. The SED-derived temperatures are used in 37 and 148 cases for W49A and W51, respectively. We can compare the masses derived from the JCMT single fluxes to those of the ATLASGAL survey \citep{Urquhart17}, which were derived from SED fits. 
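As an illustration of the conversion in Equation 1, a minimal sketch is given below, assuming SI values for the physical constants (the exact constants adopted in the text are not quoted); with the W51 distance and median temperature it reproduces the quoted factor of $\sim$\,490\,M$_{\sun}$ per Jy.
\begin{verbatim}
import numpy as np

# Assumed SI constants (not quoted in the text)
h, k, c = 6.626e-34, 1.381e-23, 2.998e8
M_SUN, PC = 1.989e30, 3.086e16

def clump_mass(S_jy, D_kpc, T_d, kappa=1.0e-3, wavelength=850e-6):
    """Equation 1: M = S_nu D^2 / (kappa_nu B_nu(T_d)), returned in M_sun.
    S_jy in Jy, D_kpc in kpc, T_d in K, kappa in m^2 kg^-1, wavelength in m."""
    nu = c / wavelength
    B_nu = 2.0*h*nu**3/c**2 / (np.exp(h*nu/(k*T_d)) - 1.0)
    D = D_kpc * 1.0e3 * PC                       # distance in metres
    return (S_jy * 1.0e-26) * D**2 / (kappa * B_nu) / M_SUN

# W51: 5.41 kpc, median T_d = 14.28 K -> ~490 M_sun per Jy, as quoted above
print(clump_mass(1.0, 5.41, 14.28))
\end{verbatim}
The conversion factor scales as $D^{2}$ and, more weakly, with the adopted dust temperature.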
For W49A, we find 2.54\,$\times$\,10$^{5}$\,M$_{\odot}$ and 2.26\,$\times$\,10$^{5}$\,M$_{\odot}$ for the SCUBA-2 and ATLASGAL masses, respectively, and for W51, 2.49\,$\times$\,10$^{5}$\,M$_{\odot}$ and 2.12\,$\times$\,10$^{5}$\,M$_{\odot}$, respectively. This allows us to confidently say our masses are a good estimate of the sub-mm mass in the two regions, whilst maintaining the independence of $M$ and $L$. \section{Results} \subsection{Clump mass and luminosity distributions} Using the luminosities and masses derived in Sections 3.2 and 3.3, clump mass distributions (CMDs) and luminosity distributions (LDs) are plotted, which are presented in Fig.~\ref{massfunctions} and Fig.~\ref{lumfunctions}, respectively. \begin{figure} \includegraphics[scale=0.50]{mass_median_JCMT.eps} \caption{The clump mass distributions of W49A and W51 plotted with blue crosses and red squares, respectively, with the masses derived from the JCMT sub-millimetre fluxes and Equation 1. The least-squares fit for each CMD is overlaid with a dashed and dotted line, respectively. The vertical lines in blue (dashed) and red (dotted) are the sample completeness limits for W49A and W51, respectively.} \label{massfunctions} \end{figure} \begin{figure} \includegraphics[scale=0.50]{luminosity_median.eps} \caption{The luminosity distributions of W49A and W51 plotted with blue crosses and red squares, respectively, with the luminosities derived from the Hi-GAL SED fits. The least-squares fit for each LD is overlaid with a dashed and dotted line, respectively. The vertical lines in blue (dashed) and red (dotted) are the sample completeness limits for W49A and W51, respectively.} \label{lumfunctions} \end{figure} The plotted quantities, $\Delta\emph{N/}\Delta\emph{M}$ and $\Delta\emph{N/}\Delta\emph{L}$, are the number of sources per mass or luminosity bin width, with the mass and luminosity coordinate represented by the median value in each bin. This method was used to plot LDs in \citet{Eden15}. A fixed number of sources per bin was used, as opposed to fixed bin widths, in order to equalise weights determined from Poisson errors \citep{MaizApellaniz05}. By assuming power-law slopes of the form $\Delta\emph{N/}\Delta\emph{M}$ $\propto$ $M^{\alpha}$ and $\Delta\emph{N/}\Delta\emph{L}$ $\propto$ $L^{\gamma}$, least-squares fits to both the CMDs and LDs can be calculated. Indices of $\alpha = -1.55 \pm 0.11$ and $\alpha = -1.51 \pm 0.06$ are calculated for the CMDs for W49A and W51, respectively, and $\gamma = -1.26 \pm 0.05$ and $\gamma = -1.51 \pm 0.03$ for the LDs for W49A and W51, respectively. The fits are performed on all bins above the completeness limit calculated from the 5-$\upsigma$ rms noise in the JPS data (CMDs) and the 95 per cent detection limit in the Hi-GAL data (LDs; \citealt{Molinari16a}). These limits are taken to be 360\,M$_{\sun}$ and 200\,L$_{\sun}$ for the W49A mass and luminosity distributions, respectively, and 180\,M$_{\sun}$ and 100\,L$_{\sun}$ for the W51 data. The fitted index values for the two CMDs are consistent with each other but those of the LDs are statistically different at the 5-$\upsigma$ level, with the W49A luminosity distribution being more top-heavy (flatter).
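To make the binning-and-fitting procedure concrete, the sketch below implements the same idea in Python under simplifying assumptions: a fixed number of sources per bin, the median as the bin coordinate, an unweighted least-squares fit in log--log space, and a completeness cut applied to the sources rather than to the bins. The array names are placeholders.
\begin{verbatim}
import numpy as np

def powerlaw_index(values, n_per_bin=10, completeness=0.0):
    # Estimate alpha (or gamma) in dN/dX ~ X**alpha from equal-count bins.
    x = np.sort(values[values >= completeness])
    med, dndx = [], []
    for i in range(0, len(x) - n_per_bin + 1, n_per_bin):
        bin_vals = x[i:i + n_per_bin]
        width = bin_vals[-1] - bin_vals[0]
        if width > 0:
            med.append(np.median(bin_vals))     # bin coordinate
            dndx.append(n_per_bin / width)      # Delta N / Delta X
    slope, intercept = np.polyfit(np.log10(med), np.log10(dndx), 1)
    return slope

# e.g. alpha_cmd = powerlaw_index(clump_masses, completeness=360.0)
\end{verbatim}
The same helper, applied to luminosities resampled within their quoted uncertainties, would implement the Monte Carlo slope-error estimate described below.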
The LD of W51 is consistent with those found for YSOs in Galactic-wide samples ($\alpha = -1.50 \pm 0.02$; \citealt{Mottram11,Urquhart14a}, $\alpha = -1.57 \pm 0.07$; \citealt{Eden15}), that in nearby clouds ($\alpha = -1.41 \pm 0.03$; \citealt{Kryukova12}), and the Cygnus-X and W43 star-forming regions ($\alpha = -1.63 \pm 0.03$; \citealt{Kryukova14}, $\alpha = -1.55 \pm 0.05$; \citealt{Eden15}). It is, however, worth noting that the final point of the W51 LD is constraining the fit. When a fit is performed without that point, it is significantly shallower and consistent with W49A and so both are flatter, in this case, than the Galactic average. The CMDs found for each region are consistent with the Galactic mean \citep{Beuret16,Elia17}. Monte Carlo simulations of the slopes of the LDs provide an estimate of how the observational errors of the individual luminosities propagate. We combined the uncertainties on the individual luminosities, taken to be 30 per cent (D.\ Elia, private communication), as well as any uncertainties on the association with the two regions. The luminosity of each source in the LD was then sampled from within these error bars, and a new LD produced, with a calculated fit. This was repeated 1000 times. We find that this analysis gives the errors on the LDs as 0.049\,$\pm$\,0.003 and 0.032$\pm$\,0.001 for W49A and W51, respectively. Therefore we conclude that these observational errors are not altering the derived slopes significantly. \subsection{Mass-luminosity relationship} \begin{figure} \includegraphics[scale=0.50]{mass_luminosity_W49W51.eps} \caption{Mass-luminosity relationship for W49A and W51, represented by the blue crosses and red squares, respectively, with fits to those data indicated by the dashed and dotted lines, respectively. The lower, middle and upper black dot-dash lines represent the $L/M_{\rmn{clump}}$ = 0.1, 1 and 10\,L$_{\sun}$/M$_{\sun}$, respectively.} \label{M-L} \end{figure} The JCMT clumps were positionally matched to Hi-GAL YSOs with a tolerance of 40$\arcsec$, approximately the $\emph{Herschel}$ beam FWHM at 500\,$\upmu$m. This resulted in 37 JPS clumps (matched with 44 Hi-GAL sources) in W49A and 148 JPS clumps (matched with 267 Hi-GAL sources) in W51. The relationship between the mass of a clump and luminosity of the associated Hi-GAL infrared source is shown in Fig.~\ref{M-L}, for both regions. There is very little correlation found in both samples, with Spearman-rank tests giving correlation coefficients of 0.58 and 0.34 for W49A and W51, respectively, with associated $p$-values of 0.24 and 0.21. The lack of correlation is possibly due to the narrow range of $L$ and $M$ found within individual regions, compared to the well constrained correlations found across many orders of magnitude of $L$ and $M$ in much larger samples \citep[e.g][]{Urquhart14}. \subsection{Distance effects} One potential source of bias affecting the LDs of the two regions is that W49A is at approximately double the distance of W51 (11.11\,kpc compared to 5.41\,kpc). The CMDs are not subject to these biases as studies have found that the slopes of CMDs do not change across different distance ranges, both heliocentric and Galactocentric \citep[e.g][]{Eden12,Elia17}. One potential effect is seen in observations and simulations \citep{Moore07,Reid10} in which the clustering scale of the sources and the angular resolution of the survey combine to bump lower-mass clumps into the higher bins. 
The CMDs and LDs show evidence of this but, as seen in \citet{Reid10}, the slope before and after these ``bumps'' in the distributions is the same, since the high-mass clumps are rare and do not generally get merged with each other; the high-end slope would therefore not be affected, except in extreme cases. To mitigate the effects of distance, we use the method outlined in \citet{Baldeschi17} to simulate placing the W51 region at the same distance as W49A. This method rescales and rebins the map according to the ratio of the distances. The rescaled W51 map is then convolved with the point-spread function of the instrument, again scaled by the relative distance, after which noise is added to the map to replicate the noise that was reduced in the smoothing process. The ``moved'' map was then subject to the same CuTEx source extraction and SED fitting as the original Hi-GAL maps. In the rescaled map, 134 sources were extracted. However, as the real velocities are no longer relevant, all sources within the angular radius of 20\,arcmin were assigned to the W51 star-forming region. This number is similar to the number of sources found in W49A, indicating the degree of source blending likely to be at work in W49A. The luminosities are shifted by an order of magnitude compared to the original W51 map, with the highest-luminosity sources consistent with those in W49A. \begin{figure} \includegraphics[scale=0.50]{moved_luminosity_function.eps} \caption{The luminosity distribution of W51 after resampling and smoothing the data and re-extracting the sources, using the technique of \citet{Baldeschi17}, to simulate it being at the same distance as W49A. The binned data are represented by the green crosses, with the dot-dash line showing the linear best fit. The gradients of the luminosity distributions in W49A and in the original W51 data are displayed with the blue dashed and red dotted lines, respectively. The vertical, green dot-dash line represents the completeness limit.} \label{movedLD} \end{figure} These sources were then used to calculate the LD of the moved W51, and the index was found to be $\gamma = -1.45 \pm 0.07$ above 600\,L$_{\sun}$, with the LD shown in Fig.~\ref{movedLD}. This value is consistent with that of the original W51 LD and is still significantly steeper than that of the W49A molecular cloud. \section{Discussion} \subsection{A comparison of W49A and W51} A number of the quantities that are commonly used to compare the star-forming content of different regions have been calculated for W49A and W51 and are displayed in Table~\ref{quantities}. These parameters are the indices of power-law fits to the CMDs and LDs, as derived in Section 4.1; the ratio of infrared luminosity to clump mass; the clump formation efficiency (CFE), the percentage of molecular gas that was converted to dense, star-forming material; the number of infrared sources per unit cloud mass; and the star-forming fraction (SFF), the fraction of Hi-GAL sources with an associated 70-$\upmu$m source \citep{Ragan16}. The molecular gas masses are taken from \citet{Galvan-Madrid13} and \citet{Roman-Duval10} for W49A and W51, respectively. Included in the cloud mass for W51 are those clouds associated with the sources as well as clouds at the distance of W51 that fall within the on-sky selection radii. The additional clouds at the distance of W49A were accounted for by \citet{Galvan-Madrid13}, who derived the molecular mass within a radius of 60\,pc.
The CFE, as defined in \citet{Eden12,Eden13}, takes a snapshot of the current star formation and any variation in this quantity implies either an altered timescale for clump formation, or a change in the clump-formation rate. As the clump formation stage is short \citep[e.g.][]{Mottram11}, we assume that any change is due to an altered rate. These quantities cover the scale of the whole cloud (CFE, infrared sources per unit cloud mass) to the scale of individual clumps (SFF, ratio of infrared luminosity to clump mass). By covering these scales, we can identify if changes in quantities are associated with a specific stage of star formation. \begin{table*} \begin{center} \caption{Summary of the quantities calculated for the W49A and W51 star-forming regions, with the W51 moved results included as well as Galactic averages, alongside the relevant reference.} \label{quantities} \begin{tabular}{lccccc} \hline Parameter & W49A & W51 & Moved W51 & Galactic Avg. & Reference \\ \hline Index of CMD & -1.56\,$\pm$\,0.11 & -1.51\,$\pm$\,0.06 & --- & -1.57\,$\pm$\,0.07 & \citealt{Beuret16} \\ Index of LD & -1.26\,$\pm$\,0.05 & -1.51\,$\pm$\,0.03 & -1.45\,$\pm$\,0.07 & -1.50\,$\pm$\,0.02 & \citealt{Eden15} \\ $L_{\rmn{IR}}/M_{\rmn{clump}}$ (L$_{\sun}$\,M$_{\sun}^{-1}$) & 3.12\,$\pm$\,0.59 & 3.52\,$\pm$\,0.34 & --- & 1.39\,$\pm$\,0.09 & \citealt{Eden15} \\ Mean $L_{\rmn{IR}}/M_{\rmn{clump}}$ (L$_{\sun}$\,M$_{\sun}^{-1}$) & 3.80\,$\pm$\,1.22 & 4.05\,$\pm$\,0.62 & --- & 5.24\,$\pm$\,0.70 & \citealt{Eden15} \\ Median $L_{\rmn{IR}}/M_{\rmn{clump}}$ (L$_{\sun}$\,M$_{\sun}^{-1}$) & 0.91\,$\pm$\,0.71 & 1.21\,$\pm$\,0.93 & --- & 1.72\,$\pm$\,1.14 & \citealt{Eden15} \\ $M_{\rmn{clump}}/M_{\rmn{cloud}}$ (per cent) & 62.3\,$\pm$\,13.7 & 39.9\,$\pm$\,6.0 & --- & 11.0\,$\pm$\,6.0 & \citealt{Battisti14} \\ YSOs per Cloud Mass ($\times$10$^{-4}$\,M$_{\sun}^{-1}$) & 0.90\,$\pm$\,0.17 & 6.94\,$\pm$\,1.03 & 2.14\,$\pm$\,0.35 & 0.05\,$\pm$\,0.01 & \citealt{Moore12} \\ SFF & 0.29\,$\pm$\,0.05 & 0.30\,$\pm$\,0.02 & --- & 0.25 & \citealt{Ragan16} \\ \hline \end{tabular} \end{center} \end{table*} Some quantities associated with the actively star-forming evolutionary stage show a variation between the two regions. The index of the power-law fitted to the luminosity distribution was found to be $\gamma = -1.26 \pm 0.05$ in W49A, compared to $\gamma = -1.51 \pm 0.03$ for W51. The LD of W49A is significantly flatter than those found in other star-forming regions and across the Galaxy by \citet{Eden15} and in the RMS survey \citep{Lumsden13,Urquhart14a}, while that of W51 is consistent with those large-scale samples. \citet{Moore12} postulated that a flatter LD in W49A might contribute to measured increases in large-scale $L/M$ in the Perseus spiral arm. They also found lesser but similar increases in $L/M$ associated with the Sagittarius spiral arm due to the inclusion of the W51 region. However, it was suggested that the latter was more likely to be due to an increase in the number of YSOs per unit gas mass. In the present data, we find values of the latter parameter to be $(0.90 \pm 0.17) \times 10^{-4}$ and $(6.94 \pm 1.03) \times 10^{-4}$\,M$_{\sun}^{-1}$ for W49A and W51, respectively. Distance and resolution do affect the latter, as the value for the moved W51 map was found to be $(2.14 \pm 0.35) \times 10^{-4}$\,M$_{\sun}^{-1}$, almost enough to account for the difference between the two regions. 
The corresponding SFF values are consistent with each other, as well as with the global mean of the inner Galactic Plane \citep{Ragan16}. The consistency of the W51 LD with the average found in Galaxy-wide samples of high-mass star-forming regions (e.g., \citealt{Mottram11}) suggests that W51 is normal in this regard, and its invariance with simulated distance indicates that the flatter slope seen in W49A is not the result of distance-related resolution effects. W49A therefore appears to be unusual, and may contain a shallow cluster mass function or a top-heavy underlying stellar IMF. The high-mass stellar IMF has been found to be invariant within the measurement uncertainties across multiple environments, from the Milky Way to the extremity of starburst galaxies \citep{Bastian10}, so any evidence of variations is significant for star-formation theories. Quantities associated with the clump formation stage are consistent between the two star-forming complexes. The CMDs have power-law indices that are statistically indistinguishable from each other, which is consistent with the result of \citet{Eden12}, who found no variation in CMDs across different Galactic environments, including the W43 star-forming region, and that of \citet{Beuret16}, who measured consistent CMDs between clustered and non-clustered clumps. The CFE does not vary between the two regions, again consistent with \citet{Eden12} and \citet{Eden13}. They found this ratio to be constant on average across kiloparsec scales but that large local variations occur, with the distribution of the CFE of individual molecular clouds being consistent with a log-normal form. The implication of this is that the most extreme regions are not necessarily abnormal but simply lie in the wings of a distribution resulting from multiple, multiplicative random processes. The CFEs found for the two regions, $\sim$ 62 and 40 per cent, respectively, are at the high end of these distributions, but comparable with the peak value found in W43 (58\,$\pm$\,13 per cent; \citealt{Eden12}). The mean values of the star-formation-efficiency analogue, $L/M$, using the clump mass, are also consistent between the two regions. Values for $L_{\rmn{IR}}/M_{\rmn{clump}}$ are found to be $3.12 \pm 0.59$\,L$_{\sun}$\,M$_{\sun}^{-1}$ and $3.52 \pm 0.34$\,L$_{\sun}$\,M$_{\sun}^{-1}$ for W49A and W51, respectively. These values of $L_{\rmn{IR}}/M_{\rmn{clump}}$ compare to the ratio of $1.65 \pm 0.07$\,L$_{\sun}$\,M$_{\sun}^{-1}$ found for W43 \citep{Eden15}. The distributions of $L/M$ values in the two regions are not statistically distinguishable from a log-normal distribution, with the Anderson--Darling test giving probabilities of 0.15 and 0.15 for the W49A and W51 regions, respectively, and the probabilities of the Shapiro--Wilk test found to be 0.11 and 0.10, respectively. These distributions are consistent with those found in a wider sample by \citet{Eden15}, with log-normal fits giving means of 0.57 and 1.19 and standard deviations of 0.88\,dex and 0.66\,dex for W49A and W51, respectively. However, there is marginal evidence that the inner regions of W49A are different to those on the outer edge in this parameter. Splitting the sample by the median radius from the centre, the distributions of $L/M$ differ at the 2.5-$\upsigma$ level. There is also a hint of bimodality in the W49A sample (Fig.~\ref{L/M}), although the significance is low, with Hartigan's dip test giving a probability of 0.04 that the observed distribution arises at random.
The $L/M$ parameter is both a metric of evolutionary state and an SFE analogue. If the IMF is fully sampled, and the timescale of the selected evolutionary stage (i.e. IR-bright) is short enough to be a snapshot of current star formation, then the $L/M$ of a sample should be proportional to the SFE. However, for a single source, it may be useful to trace the evolution. The $L-M$ relationship can also be used as an evolutionary indicator of the YSO, and the stage it is in, as it evolves towards the main sequence (\citealt{Molinari08}; \citealt{Giannetti13}). A full description of the evolutionary tracks that a YSO can take can be found in \citet{Molinari08}. It is clear, however, that the two star-forming regions are indistinguishable using this measure, and it is known that radio-faint massive YSOs and \ion{H}{ii} regions occupy the same position in the $M-L$ plane \citep{Urquhart14}. There is evidence though that the star formation in W49 is at a younger stage compared to W51, as well as W43 \citep{Saral15,Saral17}. This is in contrast to the wider Galactic environments in which the two regions are located. \citet{Eden15} found that star formation has distinct time gradients across different Galactic spiral arms, with the star formation in the Perseus arm found to be at a more evolved stage than the other star-forming regions. However, as the clump-formation stage is short, with the onset of star formation almost instantaneous, any differences found at the clump level should indicate a difference in the star formation. The distributions of the value of $L/M$ in individual clumps (Fig.~\ref{L/M}) are statistically indistinguishable. The median $L/M$ values are $0.91 \pm 0.71$\,L$_{\sun}$\,M$_{\sun}^{-1}$ and $1.21 \pm 0.93$\,L$_{\sun}$\,M$_{\sun}^{-1}$ for W49A and W51, respectively. The mean values also do not differ significantly, being $3.80 \pm 1.22$\,L$_{\sun}$\,M$_{\sun}^{-1}$ for W49A and $4.05 \pm 0.62$\,L$_{\sun}$\,M$_{\sun}^{-1}$ for W51 (Table \ref{quantities}). A K--S test of the two samples gives a probability of 86 per cent that they are drawn from the same population. These values are consistent with a much wider Galactic sample \citep{Eden15}, which were calculated in a similar way to this study. If $L/M$ is the same but the LD is flatter, as is the case in W49A, one would predict that the underlying SFE, i.e., the ratio of stellar mass to either clump or cloud mass, is lower. The probability distribution of the ``true'' SFE of the two regions can be estimated by simulating the populating of an IMF using the Monte-Carlo model of \citet{Urquhart13}. By assuming a standard IMF \citep{Kroupa01}, and halting the random sampling once either the mass of the clump is exceeded, or the observed $L/M$ is, a value for the SFE consistent with these two constraints is recorded. This is repeated 1000 times for each clump considered in Fig.~\ref{L/M} with a mass of above 500\,M$_{\sun}$, leaving 34 and 86 sources in W49A and W51, respectively. The results of these Monte Carlo simulations are probability distributions for the SFE within each clump which, when added together, provide a probability distribution for the clump SFE in the whole region. These distributions are presented as histograms in Fig.~\ref{truecomp}. Gaussian fits to these distributions find that the peak probability lies at SFEs of 2.7 per cent and 3.4 per cent for W49A and W51, respectively. 
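An illustrative sketch of this sampling scheme is given below. It is not the \citet{Urquhart13} model itself: the IMF is approximated by the two upper \citet{Kroupa01} segments over 0.08--120\,M$_{\sun}$, and the stellar luminosity is taken from a simple $L \propto M^{3.5}$ main-sequence scaling, which is only a placeholder for the mass--luminosity relation used in that model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Tabulated inverse CDF of a Kroupa-like IMF (slope 1.3 below 0.5 Msun,
# 2.3 above), sampled numerically on a logarithmic mass grid.
m_grid = np.logspace(np.log10(0.08), np.log10(120.0), 2000)
imf = np.where(m_grid < 0.5, (m_grid / 0.5)**-1.3, (m_grid / 0.5)**-2.3)
cdf = np.cumsum(imf * np.gradient(m_grid))
cdf /= cdf[-1]

def simulate_sfe(m_clump, l_over_m, n_trials=1000):
    """Draw stars until the clump mass or the observed L/M would be exceeded;
    record the resulting SFE = M_stars / M_clump for each trial."""
    l_obs = l_over_m * m_clump
    sfes = np.empty(n_trials)
    for t in range(n_trials):
        m_stars = l_stars = 0.0
        while True:
            m = np.interp(rng.random(), cdf, m_grid)
            l = m**3.5          # placeholder main-sequence L(M), in L_sun
            if m_stars + m > m_clump or l_stars + l > l_obs:
                break
            m_stars += m
            l_stars += l
        sfes[t] = m_stars / m_clump
    return sfes

# e.g. probability distribution of the SFE for a 1000 Msun clump with L/M = 1:
# hist, edges = np.histogram(simulate_sfe(1000.0, 1.0), bins=30)
\end{verbatim}
Summing the per-clump distributions over all clumps above the 500\,M$_{\sun}$ cut then gives region-wide probability distributions analogous to those in Fig.~\ref{truecomp}.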
In logarithmic terms, however, these peaks correspond to $\log({\rm SFE}) = -1.56 \pm 0.10$ and $-1.47 \pm 0.04$ for W49A and W51, respectively, and are indistinguishable. \begin{figure} \includegraphics[scale=0.5]{W49W51_SFEhistogram.eps} \caption{Histogram of the ratio of luminosity to mass for individual clumps in the star-forming regions W49A and W51, represented by red and blue bars, respectively.} \label{L/M} \end{figure} \begin{figure} \includegraphics[scale=0.5]{W49W51_trueSFEs.eps} \caption{Simulated SFEs for matched sources in W49A and W51 with masses above 500\,M$_{\sun}$, represented by red and blue bars, respectively. Each source is run 1000 times.} \label{truecomp} \end{figure} \subsection{$L/M$ and SFE as a function of radius} Radii equivalent to physical sizes of 7.5, 15, 30 and 60\,pc were placed around the central points of W49A and W51, with the total luminosity and clump mass contained within sources within each of these rings summed, giving the $L/M$ ratio as a function of distance from the centre of each region. The results of this analysis are presented in Fig.~\ref{rings}. \begin{figure} \includegraphics[scale=0.50]{LM_radius.eps} \caption{Total $L/M$ ratios as a function of radius from the central point of the W49A and W51 regions, represented by the blue crosses and red squares, respectively. The grey vertical lines indicate the boundaries of the radius bins used to calculate the ratios, with the $x$-axis positions representing the centres of each bin.} \label{rings} \end{figure} \begin{figure} \includegraphics[scale=0.50]{W49_radial_trueSFEs.eps} \caption{Simulated SFEs for matched sources in W49A from Fig.~\ref{truecomp}, split by radius from the centre of W49A. Green bars represent sources within 7.5\,pc, with the purple bars representing sources between 7.5 and 60\,pc.} \label{trueW49} \end{figure} The two regions have indistinguishable $L/M$ ratios in the inner three annuli, but W51 has significantly elevated $L/M$ at the outermost radii. The reason for this latter difference could be twofold. W49A is a relatively compact star-forming region, $\sim$\,20\,pc in the most extended direction, whereas W51 is much larger, extending to $\sim$\,40\,pc in one direction from the most compact part of the source (Figure \ref{W51map}). The location of W51 in the tangent of the Sagittarius arm \citep{Sato10} may also contribute, since the outermost radii may include unassociated emission in the line of sight. The dip in the central aperture of Fig.~\ref{rings} may be caused by depletion of the mass in the centre of the regions, by conditions that are not conducive to clump formation (with potential sources broken up, and therefore no further star formation), or by the offset of star formation from the central regions of gas shells \citep{Thompson12,Palmeirim17}. However, as mentioned above, the central area of W51 contains saturated pixels, which may have prevented Hi-GAL source detections. To account for this in the present analysis, we consulted the ATLASGAL compact-source catalogue \citep{Urquhart14b}, since Hi-GAL and ATLASGAL clump positions are in good agreement in the W51 region, with $\sim$ 92 per cent of ATLASGAL sources corresponding to Hi-GAL sources. Four ATLASGAL clumps were found in the saturated regions and used as markers for possible Hi-GAL sources. We then produced SEDs from the Hi-GAL image data using the method of \citet{Konig17}, with photometry within apertures of radii of 25\,arcsec, 1.5 times the median size of Hi-GAL sources associated with W51.
The addition of this luminosity did not significantly alter the $L/M$ value in the central 7.5\,kpc of W51. There is also evidence for radial structure in the probability distributions for the underlying SFE within clumps in W49A. If we split the population of clumps to examine sources in the first radial bin of Fig.~\ref{rings} with respect to the other three bins, the Monte-Carlo SFE simulation finds two very different distributions (Fig.~\ref{trueW49}). For clumps in the outer radial bins, the simulation finds a probability distribution that is very similar in form to that of W51 but with a peak at $\log({\rm SFE}) = -1.645\,\pm\,0.05$, corresponding to 2.2 per cent, which is significantly lower. For the 10 clumps in the innermost radial region, it predicts a double-peaked distribution. The lower SFE peak is associated with two high-mass, high-luminosity clumps in which the highest-mass stars can form, dominating the luminosity budget and limiting the fraction of clump mass converted into stars. The remainder of the central subsample clumps tend to form mostly lower-mass stars, filling up the mass budget with relatively smaller contributions to the luminosity and producing the higher-SFE probability peak. Such lowered SFEs in the highest-mass clumps is consistent with the prediction of \citet{Urquhart14b}. This suggestion of bimodality, hence mass segregation, echoes the hint of structure in the $L/M$ distribution in Fig.\ref{L/M} and is worthy of further investigation at higher resolution. \subsection{Is the central region of W49A a mini-starburst?} A number of regions in the Milky Way have been identified as potential mini-starburst regions, analogous to starburst galaxies in miniature. Examples are RCW 106, Cygnus X, W43, W49, and W51 \citep{Rodgers60,Schneider06,NguyenLuong11,Galvan-Madrid13,Ginsburg15} with their inferred star-formation efficiency, the amount of star-forming material and current, ongoing star formation cited as reasons for this classification. However, the star-formation rate densities of W43, W49 and W51 are an order of magnitude greater than the other regions on this list, with W49 and W51 having an order of magnitude greater SFR than all other regions \citep{NguyenLuong16}. This, together with other results implying that the presence of W49 and W51 significantly affect the mean star-formation efficiency on kiloparsec scales \citep{Moore12}, makes it clear that these two regions are exceptional within the Galaxy. They also form part of the $\sim$ 30 complexes that contribute most of the star-formation rate and associated luminosity to the Milky Way \citep{Urquhart17}. As the observable analogues of the star-formation efficiency are consistent with those in W51 and with other Galactic environments \citep[e.g.][]{Eden12} the cause of the starburst-like behaviour must be on larger scales than those confined to clumps. This points towards the ISM conditions within the whole of W49A as the source of the starburst-like conditions within this region. Chemical analysis of the ISM in W49A has revealed starburst-like conditions within it (\citealt{Roberts11}; \citealt{Nagy12}). The high density and high temperature of the gas is comparable with the conditions found in ULIRGs \citep{Nagy15}. The highest-temperatures ($\sim$\,200\,K) are preferentially tracing shocked regions, with these tracers found over a large area of W49A \citep{Nagy15}. 
However, we find the temperatures of the potentially star-forming clumps to be consistent with those found in other, more regular regions in the Galaxy, such as the W43 complex \citep{Eden12} and interarm sources \citep{Eden13}. The cool dust may be washing out the high temperature gas, due to a larger filling factor, as is potentially seen in external galaxies \citep{Watanabe17}. An example of this is the formaldehyde (H$_{2}$CO) emission associated with a $3.3 \times 3.3$\,pc region in W49A and detected on kiloparsec scales in external starburst systems \citep{Mangum13}. In W51, the formaldehyde emission and other assorted dense-gas tracers are associated with UC\ion{H}{ii} regions on scales of 0.1\,pc \citep{Zhang97}, whereas the large-scale H$_{2}$CO is observed in absorption \citep{Martin-Pintado85}. Any future advancement in analysing the Galactic analogues of starburst conditions requires studying the chemical composition of the region \citep[e.g.][]{Nagy15}. W49A is also rather unique in being a source of very high energy ($>100$\,GeV) $\gamma$-ray emission, as detected by the High Energy Stereoscopic System (HESS; \citealt{Brun11}), a phenomenon rare in Galactic star-forming regions and more commonly associated with starburst galaxies such as M82 and NGC253 \citep{Ohm12}. Galactic sources are usually supernova remnants \citep{FermiColl17} and the mechanism is possibly fast proton collisions with dense gas producing $\pi^0$ decays \citep{Brun11}, such as in the W49B supernova remnant \citep{Keohane07}. However, W49A has two giant gas shells, with the shocks produced by the strong winds causing the $\gamma$-ray emission \citep{Peng10}. \citet{Papadopoulos10} and \citet{Papadopoulos11} postulated that cosmic rays may be regulating the star formation in starburst systems, globally causing high molecular gas temperatures. ULIRGS are dominated by warm, dense gas \citep{Papadopoulos12}, conditions which could lead to a relatively top-heavy IMF \citep{Klessen07} by raising the effective Jeans mass. \section{Summary \& Conclusions} We have compared the star-forming properties of W49A and W51, two major star-forming regions in the Milky Way whose presence affects the average properties of Galaxy-scale samples of young stellar objects and that are often referred to as Galactic starburst analogues. We also present a new 850-$\upmu$m continuum map of a 1-degree diameter area around W49A, made using SCUBA-2 at JCMT, at a pixel-to-pixel rms of 17.39\,mJy\,beam$^{-1}$. 173 compact sources were extracted from this map using the {\sc FellWalker} \citep{Berry15} algorithm. By comparison with spectral line surveys, 61 of these were placed at the distance of W49A. 293 objects were found in the JCMT Plane Survey (JPS) compact-source catalogue \citep{Eden17} within a 60\,pc radius at the distance of W51. The clump-mass distributions of the two regions are consistent with each other, having fitted power-law indices of $\alpha = -1.55 \pm 0.11$ and $\alpha = -1.51 \pm 0.06$. However, the luminosity distributions differ significantly, with W49A having a shallower fitted power-law index of $\alpha = -1.26 \pm 0.05$, compared to $\alpha = -1.51 \pm 0.03$ for W51. As the CMDs are consistent, but the LDs are not, this could be indicative of an underlying difference in the star-formation rate and efficiency in W49A. 
The flatter luminosity distribution, combined with elevated temperatures, high gas densities and the fact that W49A is a source of very high-energy $\gamma$-ray emission \citep{Brun11} suggest that it is the most promising candidate for a Galactic starburst analogue or mini-starburst. The clump-formation efficiencies and $L/M$ ratios of the two regions are consistent with each other, as well as with other extreme star-forming regions in the Galaxy. The $L/M$ ratios and simulated SFEs found for the individual clumps within the two regions are also consistent with each other, except in the central regions of W49A, where the SFE probability distribution favours either low or high efficiencies within clumps. \section*{Acknowledgements} DJE is supported by a STFC postdoctoral grant (ST/M000966/1). This publication makes use of molecular line data from the Boston University-FCRAO Galactic Ring Survey (GRS). The GRS is a joint project of Boston University and Five College Radio Astronomy Observatory, funded by the National Science Foundation under grants AST-9800334, AST-0098562, \& AST-0100793. This work is part of the VIALACTEA Project, a Collaborative Project under Framework Programme 7 of the European Union, funded under Contract \#607380 that is hereby acknowledged. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA). This research has made use of NASA's Astrophysics Data System. The JCMT has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada and the Netherlands Organization for Scientific Research. Additional funds for the construction of SCUBA-2 were provided by the Canada Foundation for Innovation. This research has made use of NASA's Astrophysics Data System. The Starlink software \citep{Currie14} is currently supported by the East Asian Observatory. DJE would like to dedicate this work to his uncle, Joseph Eden. \bibliographystyle{mnras}
proofpile-arXiv_067-1410
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} For reasons of convenience, most models of dust evolution and planetesimal formation in protoplanetary disks are computed against the backdrop of a simple stationary-state model for the gaseous disk \citep[see~e.g.,][]{1980Icar...44..172W, 1996A&A...309..301S, 1997A&A...319.1007S, 2008A&A...480..859B, 2012ApJ...752..106O, 2016A&A...594A.105D, 2017ApJ...839...16C,2017MNRAS.472.4117E}. At the start of the simulation the dust is assumed to be all in the form of micron-size monomers, and as time goes by, the monomers stick to each other and form ever larger dust aggregates. Whether it is justified to start the simulation with an already fully fledged class II protoplanetary disk, or if it is important to include the disk buildup stage, is a question that remains to be explored. The same question comes up when studying the meteoritic record: the ``time zero'' is defined as the time when calcium-aluminium-rich inclusions (CAIs) were formed, and all time is measured from that point onward. But is that definition of $t=0$ the end of the buildup phase or the beginning? It seems to be an important question, because during the buildup phase the disk is highly dynamic, possibly gravitationally unstable, and is continuously fed with fresh gas and dust. This area has not been entirely ignored. Whether or not the later dust growth phase can be affected by events taking place during the buildup phase was investigated \citep{2008A&A...491..663D}. However, as was shown by \citet{2005A&A...434..971D} and \citet{2010A&A...513A..79B}, dust growth quickly loses its ``memory'' of the initial conditions. If drift is not included, the coagulation and fragmentation equilibrium is quickly reached, and which initial condition we start from is irrelevant. If we include radial drift, the situation changes slightly, to the extent that the dust may be depleted by radial drift slightly earlier or later, depending on the initial conditions. But overall, the initial conditions of the dust do not appear to be very important. Things may change, however, if we include the formation of planetesimals. Studies of mass reservoirs of planet-forming disks infer that planet(esimal) formation should start early \citep{2010MNRAS.407.1981G, 2011MNRAS.412L..88G, 2014MNRAS.445.3315N, 2017ApJ...838..151T}. If we assume that the main mechanism responsible for planetesimal formation is the streaming instability \citep{2007Natur.448.1022J}, then the initial conditions may be extremely important as planetesimals can only form under particular conditions \citep{2009ApJ...704L..75J, 2015A&A...579A..43C}. These special conditions necessary for the streaming instability may be fleeting: a missed chance (by an unfavorable initial condition, for instance) could mean that a sufficiently dense population of sufficiently large dust aggregates does not form. As a consequence all the solids remain below the ``meter size barrier'', and no planetesimals are formed. It is therefore important to not simply start the model with an already formed disk filled with micron-sized monomers, but instead follow the formation of the disk and let dust grow and drift in the disk buildup stage. The purpose of this paper is to investigate the influence of the class 0/I stage of the disk buildup on the first population of planetesimals. 
We follow the dust evolution and planetesimal formation model presented by \citet{2017A&A...608A..92D}, in which the growth of dust is computed, and, based on a simplified criterion for the triggering of the streaming instability, part of this dust is converted into planetesimals. In that paper it was found that processes taking place around the water snow line enable planetesimal formation. Most importantly, due to the fact that icy dust is more sticky, sufficiently large pebbles can grow outside of the snow line. Because the dry aggregates remain small and well-coupled to the gas, there is a ``traffic jam'' arising inside the snow line that slows down the removal of solids by radial drift. What is more, the outward diffusion of water vapor from inside the snow line followed by its recondensation (so called ``cold finger effect'') increases the abundance of icy pebbles that trigger planetesimal formation just outside of the snow line. In this paper, we include early phases of disk evolution in this picture. This paper is organized as follows. We outline our numerical approach in Sect.~\ref{sub:methods}. We describe our results in Sect.~\ref{sub:results} and discuss their major limitations in Sect~\ref{sub:discussion}. Finally, we summarize our findings in Sect.~\ref{sub:conclusions}. \section{Methods}\label{sub:methods} \subsection{Gas disk}\label{sub:methodgas} We choose the \citet{2005A&A...442..703H} approach, which solves the viscous disk equations supplemented with a mass source function that describes the disk buildup from a rotating collapsing molecular cloud core. The infalling cloud model is the inside-out \citet{1977ApJ...214..488S} model coupled to the \citet{1976ApJ...210..377U} rotating infall model. Our implementation was described in \citet{2006ApJ...640L..67D} and \citet{2006ApJ...645L..69D}. The disk model includes the angular momentum transport by gravitational instability, albeit in a local viscous disk approximation, which means that when the gravitational instability is detected with the Toomre criterion, the local gas viscosity is increased \citep[for details see][]{2001MNRAS.324..705A}. We found that neglecting the gravitational instability does not change our results significantly, as it only happens for a relatively short period of time and well outside of the planetesimal formation region. The infalling cloud has an initial mass of $M_{\rm cloud}=1$~M$_{\odot}$, temperature of $T_0=10$~K, and rotates at a rate of $\Omega_{\rm 0} = 5\cdot10^{-15}$~s$^{-1}$. On a timescale of $\sim7\cdot10^{5}$~yr, this cloud forms a single star surrounded by a disk with a peak mass depending on the viscosity parameter $\alpha_{\rm v}$, but staying within the range of 0.1-0.2~M$_{\odot}$. We take into account heating due to viscosity and irradiation by the central star when calculating the midplane temperature in the disk. Our current model does not include photoevaporation, thus the disk lifetime is determined by its viscous evolution described by the standard $\alpha$-formalism \citep{1973A&A....24..337S}, where the gas viscosity is defined as \begin{equation}\label{eq:gasvis} \nu = \alpha_{\rm v} c_{\rm s} H_{\rm g}, \end{equation} where $c_{\rm s}$ is the sound speed and $H_{\rm g}$ is the gas scale height, which is equal to the turbulent gas diffusivity $D_{\rm gas}$. Since we use a vertically averaged model, the diffusivity is essentially a density-weighted average and we cannot directly model the vertical distance at which the gas flow takes place. 
We vary the disk viscosity parameter $\alpha_{\rm v}$ between $3\cdot10^{-4}$ and $10^{-2}$ and purposely distinguish it from $\alpha_{\rm t}$, which describes the strength of the midplane turbulence that affects vertical settling and fragmentation of dust aggregates. We consider models with $\alpha_{\rm t} \le \alpha_{\rm v}$, where the standard case of $\alpha_{\rm t} = \alpha_{\rm v}$ is our benchmark model. The cases with $\alpha_{\rm t} < \alpha_{\rm v}$ are motivated by the recent protoplanetary disk models where a quiescent midplane layer is often found \citep{2014prpl.conf..411T}. We provide more in-depth discussion of these values in Section~\ref{sub:discussion}. \subsection{Dust evolution and planetesimal formation} The infall of gas onto the disk is accompanied by the delivery of dust, with the usual dust-to-gas ratio of 1\%. We neglect dust coagulation inside the envelope, where the growth timescale is very long \citep{2009A&A...502..845O}. The infalling dust is assumed to be monomer size, and the coagulation in the disk at a given orbital distance $R$ only starts after the surface density of dust exceeds $\Sigma_{\rm{d, min}} = 10^{-6}$~g~cm$^{-2}$. The dust coagulation is modeled with the two-population algorithm proposed by \citet{2012A&A...539A.148B}. However the initial growth stage is modified from the original algorithm to take into account the disk buildup stage, when the mass of the central star and thus the rotation frequency $\Omega_{\rm K}(t,R) = \sqrt{GM_{\star}(t)/R^3}$ increase with time. We estimate the size in the initial growth regime as \begin{equation} a_{\rm{ini,i}} = a_{\rm{ini,i-1}}\cdot\exp\left({Z\cdot\Omega_{\rm K}\cdot\Delta t}\right), \end{equation} where $a_{\rm{ini,i-1}}$ is the maximum size obtained in the previous time step, $\Delta t$ is the time step duration, and $Z$ is the vertically integrated dust-to-gas ratio. The maximum aggregate size at each orbital distance is determined as a minimum of $a_{\rm{ini}}$, fragmentation limit $a_{\rm{frag}}$ and maximum size that can be retained taking into account the radial drift $a_{\rm{drift}}$. We assume that the infalling dust consists of 50\% ice and 50\% rock. Inside the snow line the ice component sublimates and is added to the water vapor reservoir. We track the water vapor evolution and account for the possibility of its recondensation outside of the snow line. Water sublimation and recondensation is included following the algorithm suggested by \citet{2006Icar..181..178C}. Since the water ice is more sticky than silicate dust \citep{2009ApJ...702.1490W, 2011ApJ...737...36W, 2014MNRAS.437..690A}, we assume that aggregates outside of the snow line fragment at impact speeds above $v_{\rm f, out}=10$~m~s$^{-1}$ while inside the snow line the dry aggregates fragment as soon as the impact speed exceeds $v_{\rm f, in}=1$~m~s$^{-1}$. This means that the fragmentation-limited size $a_{\rm{frag}}\propto v_{\rm f}^2$ is two orders of magnitude larger outside of the snow line than inside it. When calculating the radial drift velocity, we take into account the so-called collective drift effect, which means that the drift speed decreases as the solids-to-gas ratio increases. Planetesimal formation is included in a simple way. We assume that planetesimals may be formed by streaming instability if the midplane dust-to-gas ratio calculated for the dust particles when size corresponding to the Stokes number ${\rm St}\ge10^{-2}$ exceeds unity. 
In every time step and at every orbital distance we verify whether this condition is fulfilled and if it is, we transfer part of the surface density of pebbles to planetesimals. Currently we do not include planetesimal evolution. We refer interested readers to \citet{2017A&A...608A..92D} for more detailed discussion of our dust evolution and planetesimal formation treatment and its limitations. \section{Results}\label{sub:results} \begin{figure} \centering \includegraphics[width=0.9\hsize]{massevo_alpha-3.pdf} \caption{Time evolution of star mass, gas disk, dust and water, and planetesimal reservoir in the benchmark model with $\alpha_{\rm v}=\alpha_{\rm t}=10^{-3}$.} \label{fig:massevo1} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\hsize]{2panels_alpha-3.pdf} \caption{{\it Upper panel:} Surface density of planetesimals at the end of the disk lifetime (at $10^7$~years, red solid line) obtained in the model with $\alpha_{\rm v}=\alpha_{\rm t}=10^{-3}$. The black dotted line corresponds to the minimum mass solar nebula. {\it Lower panel:} Radial and time distribution of planetesimal formation in the same model. The light blue solid line shows the location of the snow line.} \label{fig:plts1} \end{figure} In a standard case, when the outward transport of angular momentum and thus disk accretion is driven by global, isotropic turbulence, $\alpha_{\rm v}$ and $\alpha_{\rm t}$ are equal. Following \citet{2017A&A...608A..92D}, we choose $\alpha_{\rm v}=\alpha_{\rm t}=10^{-3}$ for our benchmark run. Results we obtain in this run are presented in Figs.~\ref{fig:massevo1} and \ref{fig:plts1}. Figure \ref{fig:massevo1} presents evolution of the total mass of the star, gas, dust disk, and planetesimals. Since the infall proceeds inside-out, matter only falls onto the star at first. After $\sim2\cdot10^{5}$~yr, the disk buildup starts and it lasts for another $\sim5\cdot10^{5}$~yr. After that, the class II disk stage starts and we model it for another 10~Myr. Planetesimal formation only starts after 10$^6$~yr of evolution, of which about $3\cdot10^5$~yr is within the class II protoplanetary disk stage. Planetesimal formation requires the existence of a dense midplane layer of sufficiently large pebbles. We find that the sufficiently large pebbles can only grow outside of the snow line, where the icy dust is more sticky. The highest dust-to-gas ratio is always obtained directly outside of the snow line and this is the region where planetesimals form. The mechanism of triggering this snow line pile-up is as follows: the large icy pebbles are efficiently delivered to the snow line because of their fast radial drift. The smaller aggregates inside the snow line drift at much lower speed and thus a pile-up arises, which spreads outside of the snow line by turbulent diffusion. The pile-up of icy pebbles outside of the snow line is supported by the cold finger effect and the collective drift, which is a key component for obtaining a sufficiently high pebble-to-gas ratio. The bottom panel of Fig.~\ref{fig:plts1} shows that the planetesimal formation region is relatively narrow at each point of time and shifts inwards as the disk evolves. This is because the disk cools down with time and thus the snow line moves inward. 
The final surface density of planetesimals, displayed in the upper panel of Fig.~\ref{fig:plts1}, is significantly higher than the conventional minimum mass solar nebula profile of \citet{1977Ap&SS..51..153W}, which means that the amount of planetesimals formed is sufficient to form the solar system planets. These results are generally consistent with the ones presented by \citet{2017A&A...608A..92D}. As explained above, there are two processes promoting the pileup of icy pebbles outside of the snow line in the protoplanetary disk: the outward diffusion and recondensation of water vapor (the cold finger effect) and the traffic-jam arising because the dry aggregates inside the snow line remain small and thus drift at a lower speed. The latter process operates on timescales of approximately 200,000 yr, during which pebbles grow in the outer disk and are transported inside the snow line by radial drift. This process can only start after the disk is fully formed. In the disk buildup stage, the effect of radial drift is limited, like in the case of an outburst \citep{2017A&A...605L...2S}. Thus, there is only the outward diffusion and recondensation of water vapor helping to form dust overdensity, so the dust-to-gas ratio enhancement outside of the snow line in this early stage is always weaker than the one arising during the following disk evolution, and in our benchmark run it is not sufficient to produce dense enough midplane layer of pebbles. \begin{figure} \centering \includegraphics[width=0.9\hsize]{alpha_alpha_v3.pdf} \caption{Summary of models showing the influence of the viscosity parameter $\alpha_{\rm v}$ and the midplane turbulence strength $\alpha_{\rm t}$. The triangles denote models where planetesimals are formed both during the disk buildup and in the protoplanetary disk stage. The circles mark models where planetesimals only form after the disk is fully formed, and squares mean no planetesimal formation at all.} \label{fig:alpha_alpha} \end{figure} Exploring the possibility of planetesimal formation during the disk buildup stage we found that it is mostly regulated by the viscosity parameter $\alpha_{\rm v}$ and the turbulence strength $\alpha_{\rm t}$. Figure~\ref{fig:alpha_alpha} gives the overview of results obtained in models with different $\alpha_{\rm v}$ and $\alpha_{\rm t}$. Planetesimals form during the disk buildup when $\alpha_{\rm v} \ge 10^{-3}$ and $\alpha_{\rm t} \le 10^{-2}\cdot\alpha_{\rm v}$. In general, we find that planetesimal formation is easier to trigger after the disk is fully formed than during the disk buildup. Each model that forms planetesimals in the disk buildup stage also forms them in the class II disk stage. Furthermore, planetesimals are formed in the protoplanetary disk stage in all the models with $\alpha_{\rm t} \le 10^{-3}$, even if there are no planetesimals formed during the disk buildup stage. This is because the density enhancement produced thanks to the traffic jam effect in the disk stage is significantly higher than the one given by the cold finger effect alone \citep[see][Sect. 3.2.2]{2017A&A...608A..92D}. Nevertheless, we find that even the cold finger effect alone may be enough to form conditions for planetesimal formation. In a disk with a low internal turbulence level, a vertically integrated dust-to-gas ratio of 0.03 may be enough to trigger the streaming instability \citep{2010ApJ...722.1437B, 2014A&A...572A..78D, 2015A&A...579A..43C}. 
This kind of enhancement is easily produced by the outward diffusion and recondensation of water vapor \citep{1988Icar...75..146S, 2004ApJ...614..490C, 2017A&A...602A..21S}. The outward redistribution of water promoted by high $\alpha_{\rm v}$ values thus supports planetesimal formation particularly during the disk buildup stage, when it fully relies on the cold finger effect. At the same time however, the low midplane turbulence is necessary to allow for formation of large enough pebbles and a dense midplane layer, and consequently for planetesimal formation. \begin{figure} \centering \includegraphics[width=0.9\hsize]{massevo.pdf} \caption{Time evolution of star mass, gas disk, dust and water, and planetesimal reservoir for model with $\alpha_{\rm v}=10^{-3}$ and $\alpha_{\rm t}=10^{-5}$.} \label{fig:massevo} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\hsize]{2panels.pdf} \caption{{\it Upper panel:} Surface density of planetesimals at the end of the disk buildup stage (at $7\cdot10^5$~years, gray dashed line) and at the end of the disk lifetime (at $10^7$~years, red solid line) obtained in the model with $\alpha_{\rm v}=10^{-3}$ and $\alpha_{\rm t}=10^{-5}$. The black dotted line corresponds to the minimum mass solar nebula. {\it Lower panel:} Radial and time distribution of planetesimal formation in the same model. The light blue solid line shows the location of the snow line.} \label{fig:plts} \end{figure} Figure \ref{fig:massevo} presents evolution of the total mass of the star, gas and dust disk, and planetesimals in a model with $\alpha_{\rm v}=10^{-3}$ and $\alpha_{\rm t}=10^{-5}$. In contrast to the results presented in Fig.~\ref{fig:massevo1}, two periods of planetesimal formation are visible: first during the disk buildup about $1.4$~M$_{\oplus}$ of planetesimals is formed, and then in the more extended protoplanetary disk phase another $900$~M$_{\oplus}$ of planetesimals arises. If at all possible, planetesimal formation in the disk buildup stage begins shortly after the region outside of the snow line is populated. Dust growth timescale can be estimated as \begin{equation}\label{eq:tgrowth} \tau_{\rm growth} \approx \frac{1}{Z\cdot\Omega_{\rm K}} = 10^2\cdot\left(\frac{0.01}{Z}\right)\cdot\left(\frac{R}{1\ {\rm AU}}\right)^{1.5}\ {\rm yr} ,\end{equation} \citep[see][]{2012A&A...539A.148B}, which, for a typical location of the water snow line, is much shorter than the buildup and disk evolution timescales. Derivation of Eq.~(\ref{eq:tgrowth}) assumes that collisions are driven by turbulence. In cases of very low $\alpha_{\rm t}$, collisions could also be driven by differential drift. In that case, analogical derivation shows that the growth is slowed down by a factor of $c_{\rm s} / v_\eta$, where $v_\eta$ is the maximum drift velocity. This factor is on the order of 20 in a typical protoplanetary disk. Thus, pebbles can easily grow to their maximum size even before the disk is fully formed. The timescale for outward diffusion and recondensation of water vapor is also short. The condensation is assumed to be instantaneous since abundance of small grains is replenished by fragmentation \citep[for more discussion see][]{2017A&A...608A..92D}. 
The timescale for diffusion can be estimated as \begin{equation}\label{eq:tdiff} \tau_{\rm{diff}} \approx \frac{\Delta R^2}{D_{\rm{gas}}} = 10^3\cdot\left(\frac{10^{-3}}{\alpha_{\rm v}}\right)\cdot\left(\frac{R}{1\ {\rm AU}}\right)^{1.5}\cdot\left(\frac{\Delta R}{H_{\rm g}}\right)^2\ {\rm yr}, \end{equation} where $\Delta R$ is the radial width of the recondensation region and gas diffusivity $D_{\rm{gas}}$ is equal to gas viscosity given in Eq.~(\ref{eq:gasvis}). Such a short timescale allows for a moderate enhancement of the dust-to-gas ratio outside of the snow line to appear even at the very early stages of disk buildup, which leads to planetesimal formation in some of the models. Figure~\ref{fig:plts} presents the distribution of planetesimals formed during the buildup and the fully formed disk stage in a model with $\alpha_{\rm v}=10^{-3}$ and $\alpha_{\rm t}=10^{-5}$. As in the benchmark run, planetesimal formation follows the water snow line. This is initially close-in, and the planetesimal formation in the disk buildup stage starts already at 1~AU. Then the snow line gradually moves outwards as the disk becomes more massive and heats up. Since both $\tau_{\rm growth}$ and $\tau_{\rm{diff}}$ that determine the efficiency of the cold finger effect depend strongly on the distance from the star (Eqs~(\ref{eq:tgrowth})-(\ref{eq:tdiff})), no planetesimals are formed anymore when the snow line recedes beyond $\sim$8~AU. In the class II disk phase, the snow line moves back inwards as the disk cools down and the more extended planetesimal formation phase, supported by the radial drift, takes place. In the disk buildup stage, planetesimals form only directly outside of the snow line, and the planetesimal formation region at a given time is relatively narrow, corresponding to the width of the recondensation zone. After the disk is fully formed, the planetesimal formation region is wider thanks to the collective drift effect, which spreads the peak of the dust-to-gas ratio enhancement. The surface density of planetesimals displayed in the upper panel of Fig.~\ref{fig:plts} is similar to that of the benchmark run with $\alpha_{\rm v}=\alpha_{\rm t}=10^{-3}$. The mass of planetesimals formed during the buildup stage (the gray dashed line) is only a small fraction of the final planetesimal mass. Taking into account all the runs presented in Fig.~\ref{fig:alpha_alpha}, it ranges from 0 to $10^{-2}$. However, from the perspective of planet formation these first planetesimals may be significant and trigger early formation of planetary embryos. \section{Discussion}\label{sub:discussion} Section 2.3 of \citet{2017A&A...608A..92D} presented a comprehensive list of the limitations of our dust evolution algorithm. In this Section, we only address the aspects that are particularly important in the context of models presented in this paper. \subsection{$\alpha_{\rm v}$ and $\alpha_{\rm t}$ values} As summarized in Fig.~\ref{fig:alpha_alpha}, the values for the $\alpha_{\rm v}$, describing the global efficiency of angular momentum transport via turbulent viscosity, and the midplane turbulence strength parameter $\alpha_{\rm t}$ are crucial for the possibility of planetesimal formation in the buildup stage of protoplanetary disk. Thus, we discuss a realistic range of these parameters below. As mentioned in Sect.~\ref{sub:methodgas}, we use a vertically averaged method and thus we are not able to model the gas and dust flow at different layers directly. 
Thus, the $\alpha_{\rm v}$ describes density-averaged flow of gas (and water vapor) through the disk, which can be directly translated into disk lifetime. At the same time, $\alpha_{\rm t}$ concerns only the turbulence (i.e.,~the velocity dispersion) in the midplane, where the pebbles reside, and is used to constrain the scale height of the pebble layers as well as their collision speeds. Observational constraints on protoplanetary disk lifetimes are in the range of 1 to 10~Myr (\citealt{2007ApJ...662.1067H}, however \citealt{2014ApJ...793L..34P} pointed out that these short lifetimes may be a selection bias). This suggests values of $\alpha_{\rm v}$ close to 10$^{-2}$ if we assume that the dispersal is driven solely by gas accretion onto the central star. However, if the disk dispersal is also driven by other processes, such as magnetic winds and photoevaporation, the $\alpha_{\rm v}$ value can be lower. Thus, we considered $3\cdot10^{-4}\le\alpha_{\rm v}\le10^{-2}$. Attempts to observationally constrain the disk turbulence brought various results, from $\alpha_{\rm t}<10^{-3}$ for the outer parts of the disk around HD~163296 \citep{2015ApJ...813...99F, 2017ApJ...843..150F} to $\alpha_{\rm t}\approx10^{-2}$ in the outer parts of the TW~Hya disk \citep{2016A&A...592A..49T}, both derived from molecular line emission. Interestingly, \citet{2016ApJ...816...25P} found the best fit between observations and models of the disk around HL~Tau assuming $\alpha_{\rm t} = 3\cdot10^{-4}$ as their dust settling parameter. In our paper, we considered $10^{-5}\le\alpha_{\rm t}\le3\cdot10^{-3}$. Although in principle we treat $\alpha_{\rm v}$ and $\alpha_{\rm t}$ as independent parameters, we excluded the cases with $\alpha_{\rm t} > \alpha_{\rm v}$. In a standard understanding of the layered accretion, which was based on including the Ohmic resistivity into the picture of magnetorotational instability, the flow of gas takes place in the active, upper layers of the disk while the midplane is ``dead'', with little or no turbulence \citep{1996ApJ...457..355G}. However, more recent models show that while turbulence measured by the local velocity dispersion, may be indeed many orders of magnitude lower in the midplane than in the surface layers, the gas density of the active surface layer is low and thus the density-weighted average of the turbulent diffusivity is not much higher than the midplane diffusivity \citep{2011ApJ...742...65O, 2017arXiv171104770S}. This would suggest that the values of $\alpha_{\rm v}$ and $\alpha_{\rm t}$ should not be independent, and based on the results of \citet{2011ApJ...742...65O}, $\alpha_{\rm t}$ should be about ten times lower than $\alpha_{\rm v}$. We would like to note that including non-ideal magnetohydrodynamic effects other than the Ohmic resistivity has recently changed the picture of protoplanetary disk evolution \citep{2016ApJ...821...80B, 2017arXiv171104770S}. In general, it was found that large regions of the protoplanetary disk can be free of turbulence and the angular momentum can be removed vertically by magnetic wind \citep{2014ApJ...791..137B, 2017ApJ...845...75B} or radially through laminar torques \citep{2014A&A...566A..56L}. The consequences of this new protoplanetary disk picture for planetesimal formation are yet to be studied. 
For reference, in planetesimal formation models similar to ours, \citet{2017ApJ...839...16C} used $\alpha_{\rm v}=10^{-2}$ and $\alpha_{\rm t}=10^{-4}$, while \citet{2017MNRAS.472.4117E} used $\alpha_{\rm v}=7\cdot10^{-4}$ and $\alpha_{\rm t}=7\cdot10^{-6}$. \subsection{Vertical mixing} Our model assumes that the water vapor is always well mixed with the gas and has the same scale height. Thus the advection and diffusion of water vapor is governed by the same diffusion coefficient as the gas, which is the $\alpha_{\rm v}$. However, if the water vapor is released by pebbles that reside in a thin midplane layer, which is regulated by $\alpha_{\rm t}$, the vertical diffusion of vapor should take place over timescales on the order of $\alpha_{\rm v}/\alpha_{\rm t}$ longer than radial diffusion (see Eq.~\ref{eq:tdiff}). In the runs with $\alpha_{\rm t}\ll\alpha_{\rm v}$, which actually produce planetesimals during the disk buildup stage, this could potentially limit the efficiency of the cold finger effect. However, in the class II protoplanetary disk stage, the cold finger effect impact on creating the pebble pileup outside of the snow line is secondary in comparison to the traffic jam effect caused by different sticking properties of icy and dry aggregates \citep[see][Sect 3.2.2]{2017A&A...608A..92D}. Thus we argue that the fact of neglecting the timescale of vertical mixing of water vapor does not impact our results in the disk phase. A similar conclusion was made by \citet[][(their Sect. 4.2.3)]{2017A&A...602A..21S}, who compared models with the scale height of water vapor equal to both gas and pebbles. On the other hand, during the disk buildup stage the water vapor is primarily not delivered by the drifting pebbles but by the small icy dust grains falling onto the disk surface from the cold molecular cloud. Since evaporation of these monomer-sized grains is virtually instantaneous, the vapor is delivered directly to the active surface layer and the diffusion of vapor is indeed governed by the viscosity parameter $\alpha_{\rm v}$. What may be problematic is the vertical mixing of vapor outside the snow line, such that it increases the midplane pebble-to-gas ratio. This process might be potentially sped up by the sedimentation-driven coagulation \citep[see e.g.,][]{2005A&A...434..971D} if the water vapor freezes on small grains present in the upper layers that stick together and settle to the midplane. Detailed quantification of this process requires performing two-dimensional models, which is beyond the scope of this paper. \section{Conclusions}\label{sub:conclusions} We present the first study on planetesimal formation during the protoplanetary disk formation stage. Our key findings may be summarized as follows: \begin{itemize} \item{The water snow line is the preferable place for planetesimal formation, both when the disk is already fully formed and during its buildup.} \item{Planetesimal formation is less likely to take place during the disk buildup phase than during the class II protoplanetary disk stage; it is only possible if the gas and water vapor redistribution is efficient ($\alpha_{\rm v} \ge 10^{-3}$) and the dust resides in a quiescent midplane ($\alpha_{\rm t} \le 10^{-4}$)}. At the same time, the water vapor needs to be efficiently mixed to the gas scale height, which might be a condition that is incompatible with the required low $\alpha_{\rm t}$. The plausibility of such a setup in a realistic protoplanetary disk is unclear. 
\item{If planetesimals are already formed in the disk buildup stage, their mass is always much lower than the mass of planetesimals formed during the subsequent disk evolution, but due to their early occurrence, these bodies may be important for planet formation. } \end{itemize} Our findings may have implications for the formation history of meteorite parent bodies in the solar system, since early formation would lead to strong planetesimal melting by $^{26}$Al decay, while the later-formed planetesimals may be spared such complete melting \citep{1993Sci...259..653G, 2006M&PS...41...95H, 2016Icar..274..350L}. In fact, there is evidence that the differentiated parent bodies of iron meteorites formed earlier than the chondrite parent bodies \citep{2003Natur.422..502T, 2005ApJ...632L..41B}. Planetesimals formed in our models are icy, as they always form outside of the snow line \citep{2017A&A...608A..92D}. However, as the snow line location evolves, some of the planetesimals that are formed during the disk buildup stage are placed inside of it in the protoplanetary disk stage. Although there are still numerous intermediate stages involved in turning planetesimals into planets, which are beyond the scope of this work, our findings may certainly have implications for the final planetary architectures and compositions, because the question of whether planetesimals form early or late will have repercussions on which kind of planet may form, its position, and when. \begin{acknowledgements} We want to thank the referee, Satoshi Okuzumi, for his thoughtful report that helped us to clarify this paper. This work has been carried out within the framework of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation. JD acknowledges the financial support of the SNSF. CPD thanks the NCCR PlanetS for the hospitality during the month of July 2017, when this project was initiated. \end{acknowledgements} \bibliographystyle{aa}
proofpile-arXiv_067-1493
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} In the \emph{nucleated instability\/} (also called core instability) hypothesis of giant planet formation, a critical mass for static core envelope protoplanets has been found. Mizuno (\cite{mizuno}) determined the critical mass of the core to be about $12 \,M_\oplus$ ($M_\oplus=5.975 \times 10^{27}\,\mathrm{g}$ is the Earth mass), which is independent of the outer boundary conditions and therefore independent of the location in the solar nebula. This critical value for the core mass corresponds closely to the cores of today's giant planets. Although no hydrodynamical study has been available many workers conjectured that a collapse or rapid contraction will ensue after accumulating the critical mass. The main motivation for this article is to investigate the stability of the static envelope at the critical mass. With this aim the local, linear stability of static radiative gas spheres is investigated on the basis of Baker's (\cite{baker}) standard one-zone model. Phenomena similar to the ones described above for giant planet formation have been found in hydrodynamical models concerning star formation where protostellar cores explode (Tscharnuter \cite{tscharnuter}, Balluch \cite{balluch}), whereas earlier studies found quasi-steady collapse flows. The similarities in the (micro)physics, i.e., constitutive relations of protostellar cores and protogiant planets serve as a further motivation for this study. \section{Baker's standard one-zone model} \begin{figure*} \centering \caption{Adiabatic exponent $\Gamma_1$. $\Gamma_1$ is plotted as a function of $\lg$ internal energy $\mathrm{[erg\,g^{-1}]}$ and $\lg$ density $\mathrm{[g\,cm^{-3}]}$.} \label{FigGam}% \end{figure*} In this section the one-zone model of Baker (\cite{baker}), originally used to study the Cephe{\"{\i}}d pulsation mechanism, will be briefly reviewed. The resulting stability criteria will be rewritten in terms of local state variables, local timescales and constitutive relations. Baker (\cite{baker}) investigates the stability of thin layers in self-gravitating, spherical gas clouds with the following properties: \begin{itemize} \item hydrostatic equilibrium, \item thermal equilibrium, \item energy transport by grey radiation diffusion. \end{itemize} For the one-zone-model Baker obtains necessary conditions for dynamical, secular and vibrational (or pulsational) stability (Eqs.\ (34a,\,b,\,c) in Baker \cite{baker}). 
Using Baker's notation: \[ \begin{array}{lp{0.8\linewidth}} M_{r} & mass internal to the radius $r$ \\ m & mass of the zone \\ r_0 & unperturbed zone radius \\ \rho_0 & unperturbed density in the zone \\ T_0 & unperturbed temperature in the zone \\ L_{r0} & unperturbed luminosity \\ E_{\mathrm{th}} & thermal energy of the zone \end{array} \] \noindent and with the definitions of the \emph{local cooling time\/} (see Fig.~\ref{FigGam}) \begin{equation} \tau_{\mathrm{co}} = \frac{E_{\mathrm{th}}}{L_{r0}} \,, \end{equation} and the \emph{local free-fall time} \begin{equation} \tau_{\mathrm{ff}} = \sqrt{ \frac{3 \pi}{32 G} \frac{4\pi r_0^3}{3 M_{\mathrm{r}}} }\,, \end{equation} Baker's $K$ and $\sigma_0$ have the following form: \begin{eqnarray} \sigma_0 & = & \frac{\pi}{\sqrt{8}} \frac{1}{ \tau_{\mathrm{ff}}} \\ K & = & \frac{\sqrt{32}}{\pi} \frac{1}{\delta} \frac{ \tau_{\mathrm{ff}} } { \tau_{\mathrm{co}} }\,; \end{eqnarray} where $ E_{\mathrm{th}} \approx m (P_0/{\rho_0})$ has been used and \begin{equation} \begin{array}{l} \delta = - \left( \frac{ \partial \ln \rho }{ \partial \ln T } \right)_P \\ e=mc^2 \end{array} \end{equation} is a thermodynamical quantity which is of order $1$ and equal to $1$ for nonreacting mixtures of classical perfect gases. The physical meaning of $ \sigma_0 $ and $K$ is clearly visible in the equations above. $\sigma_0$ represents a frequency of the order one per free-fall time. $K$ is proportional to the ratio of the free-fall time and the cooling time. Substituting into Baker's criteria, using thermodynamic identities and definitions of thermodynamic quantities, \begin{displaymath} \Gamma_1 = \left( \frac{ \partial \ln P}{ \partial\ln \rho} \right)_{S} \, , \; \chi^{}_\rho = \left( \frac{ \partial \ln P}{ \partial\ln \rho} \right)_{T} \, , \; \kappa^{}_{P} = \left( \frac{ \partial \ln \kappa}{ \partial\ln P} \right)_{T} \end{displaymath} \begin{displaymath} \nabla_{\mathrm{ad}} = \left( \frac{ \partial \ln T} { \partial\ln P} \right)_{S} \, , \; \chi^{}_T = \left( \frac{ \partial \ln P} { \partial\ln T} \right)_{\rho} \, , \; \kappa^{}_{T} = \left( \frac{ \partial \ln \kappa} { \partial\ln T} \right)_{T} \end{displaymath} one obtains, after some pages of algebra, the conditions for \emph{stability\/} given below: \begin{eqnarray} \frac{\pi^2}{8} \frac{1}{\tau_{\mathrm{ff}}^2} ( 3 \Gamma_1 - 4 ) & > & 0 \label{ZSDynSta} \\ \frac{\pi^2}{\tau_{\mathrm{co}} \tau_{\mathrm{ff}}^2} \Gamma_1 \nabla_{\mathrm{ad}} \left[ \frac{ 1- 3/4 \chi^{}_\rho }{ \chi^{}_T } ( \kappa^{}_T - 4 ) + \kappa^{}_P + 1 \right] & > & 0 \label{ZSSecSta} \\ \frac{\pi^2}{4} \frac{3}{\tau_{ \mathrm{co} } \tau_{ \mathrm{ff} }^2 } \Gamma_1^2 \, \nabla_{\mathrm{ad}} \left[ 4 \nabla_{\mathrm{ad}} - ( \nabla_{\mathrm{ad}} \kappa^{}_T + \kappa^{}_P ) - \frac{4}{3 \Gamma_1} \right] & > & 0 \label{ZSVibSta} \end{eqnarray} For a physical discussion of the stability criteria see Baker (\cite{baker}) or Cox (\cite{cox}). We observe that these criteria for dynamical, secular and vibrational stability, respectively, can be factorized into \begin{enumerate} \item a factor containing local timescales only, \item a factor containing only constitutive relations and their derivatives. \end{enumerate} The first factors, depending on only timescales, are positive by definition. The signs of the left hand sides of the inequalities~(\ref{ZSDynSta}), (\ref{ZSSecSta}) and (\ref{ZSVibSta}) therefore depend exclusively on the second factors containing the constitutive relations. 
Since they depend only on state variables, the stability criteria themselves are \emph{ functions of the thermodynamic state in the local zone}. The one-zone stability can therefore be determined from a simple equation of state, given for example, as a function of density and temperature. Once the microphysics, i.e.\ the thermodynamics and opacities (see Table~\ref{KapSou}), are specified (in practice by specifying a chemical composition) the one-zone stability can be inferred if the thermodynamic state is specified. The zone -- or in other words the layer -- will be stable or unstable in whatever object it is imbedded as long as it satisfies the one-zone-model assumptions. Only the specific growth rates (depending upon the time scales) will be different for layers in different objects. \begin{table} \caption[]{Opacity sources.} \label{KapSou} $$ \begin{array}{p{0.5\linewidth}l} \hline \noalign{\smallskip} Source & T / {[\mathrm{K}]} \\ \noalign{\smallskip} \hline \noalign{\smallskip} Yorke 1979, Yorke 1980a & \leq 1700^{\mathrm{a}} \\ Kr\"ugel 1971 & 1700 \leq T \leq 5000 \\ Cox \& Stewart 1969 & 5000 \leq \\ \noalign{\smallskip} \hline \end{array} $$ \end{table} We will now write down the sign (and therefore stability) determining parts of the left-hand sides of the inequalities (\ref{ZSDynSta}), (\ref{ZSSecSta}) and (\ref{ZSVibSta}) and thereby obtain \emph{stability equations of state}. The sign determining part of inequality~(\ref{ZSDynSta}) is $3\Gamma_1 - 4$ and it reduces to the criterion for dynamical stability \begin{equation} \Gamma_1 > \frac{4}{3}\,\cdot \end{equation} Stability of the thermodynamical equilibrium demands \begin{equation} \chi^{}_\rho > 0, \;\; c_v > 0\, , \end{equation} and \begin{equation} \chi^{}_T > 0 \end{equation} holds for a wide range of physical situations. With \begin{eqnarray} \Gamma_3 - 1 = \frac{P}{\rho T} \frac{\chi^{}_T}{c_v}&>&0\\ \Gamma_1 = \chi_\rho^{} + \chi_T^{} (\Gamma_3 -1)&>&0\\ \nabla_{\mathrm{ad}} = \frac{\Gamma_3 - 1}{\Gamma_1} &>&0 \end{eqnarray} we find the sign determining terms in inequalities~(\ref{ZSSecSta}) and (\ref{ZSVibSta}) respectively and obtain the following form of the criteria for dynamical, secular and vibrational \emph{stability}, respectively: \begin{eqnarray} 3 \Gamma_1 - 4 =: S_{\mathrm{dyn}} > & 0 & \label{DynSta} \\ \frac{ 1- 3/4 \chi^{}_\rho }{ \chi^{}_T } ( \kappa^{}_T - 4 ) + \kappa^{}_P + 1 =: S_{\mathrm{sec}} > & 0 & \label{SecSta} \\ 4 \nabla_{\mathrm{ad}} - (\nabla_{\mathrm{ad}} \kappa^{}_T + \kappa^{}_P) - \frac{4}{3 \Gamma_1} =: S_{\mathrm{vib}} > & 0\,.& \label{VibSta} \end{eqnarray} The constitutive relations are to be evaluated for the unperturbed thermodynamic state (say $(\rho_0, T_0)$) of the zone. We see that the one-zone stability of the layer depends only on the constitutive relations $\Gamma_1$, $\nabla_{\mathrm{ad}}$, $\chi_T^{},\,\chi_\rho^{}$, $\kappa_P^{},\,\kappa_T^{}$. These depend only on the unperturbed thermodynamical state of the layer. Therefore the above relations define the one-zone-stability equations of state $S_{\mathrm{dyn}},\,S_{\mathrm{sec}}$ and $S_{\mathrm{vib}}$. See Fig.~\ref{FigVibStab} for a picture of $S_{\mathrm{vib}}$. Regions of secular instability are listed in Table~1. \begin{figure} \centering \caption{Vibrational stability equation of state $S_{\mathrm{vib}}(\lg e, \lg \rho)$. $>0$ means vibrational stability. 
} \label{FigVibStab} \end{figure} \section{Conclusions} \begin{enumerate} \item The conditions for the stability of static, radiative layers in gas spheres, as described by Baker's (\cite{baker}) standard one-zone model, can be expressed as stability equations of state. These stability equations of state depend only on the local thermodynamic state of the layer. \item If the constitutive relations -- equations of state and Rosseland mean opacities -- are specified, the stability equations of state can be evaluated without specifying properties of the layer. \item For solar composition gas the $\kappa$-mechanism is working in the regions of the ice and dust features in the opacities, the $\mathrm{H}_2$ dissociation and the combined H, first He ionization zone, as indicated by vibrational instability. These regions of instability are much larger in extent and degree of instability than the second He ionization zone that drives the Cephe{\"\i}d pulsations. \end{enumerate} \begin{acknowledgements} Part of this work was supported by the German \emph{Deut\-sche For\-schungs\-ge\-mein\-schaft, DFG\/} project number Ts~17/2--1. \end{acknowledgements} \section{Introduction} Radio emission in galaxies is commonly attributed to activity of the supermassive black hole as well as to processes related to the formation of stars \citep[for recent reviews see][]{Hardcastle15,Tadhunter16,Norris17,Krause18a}. The LOFAR radio telescope is surveying the northern sky at low-frequency with unprecedented resolution and sensitivity \citep{Combea19}, giving us a much more comprehensive view of the radio emission of the local galaxy population. Of the 326,000 sources in the first data release of the LOFAR Two-metre Sky Survey \rev{\citep[LoTSS][]{Shimwea19}}, 70\% have optical counterparts \citep{WilliamsWea19} including redshifts and absolute magnitudes \citep{DuncanKea19}. \citet{Sabea19} cross-matched the LoTSS database with the Sloan Digital Sky Survey and classified the sources as galaxies with or without active galactic nuclei based on diagnostic emission line and two-colour plots, and the strength of the 4000~\AA~break and the luminosity in H$\alpha$ compared to the 150~MHz radio luminosity. This resulted in a local sample of 2121~galaxies with active galactic nuclei (AGN) and 8494 star-forming galaxies. They find that galaxy mass is the main driver of radio-AGN formation and that above a stellar mass of $10^{11} M_\odot$, all galaxies are switched on, whereas below $3 \times 10^{10} M_\odot$ less \rev{than} 10\% of the galaxies show even the faintest sign of a radio AGN. This is a significant improvement of previous results, and a direct consequence of the more sensitive LoTSS observations compared to previous surveys. \rev{Separating contributions from AGN activity and star formation to the total radio luminosity is, however, a} difficult issue \citep[e.g.,][]{Guerkea18}. It is even more complicated by the fact that even luminous radio AGN frequently do not show any optical activity. The radio luminosity of an AGN depends on both, jet power and environment \citep[e.g.,][]{KDA97,MK02,HK13,HK14,MacAlex14,TS15,EHK2016,Turnea18b}. The lack of radio AGN at lower galaxy masses might therefore either indicate that such galaxies have the capacity to produce radio jets to a much lesser extent, or that the gaseous halos of the smaller galaxies differ significantly from the ones in bigger galaxies. Empirical studies provide hints that AGN may work differently in galaxies of different masses. 
For example, \citet{Bestea05} found that the expression of optical activity is essentially independent of activity in the radio band, with lower mass galaxies showing more optical activity and higher mass galaxies showing more radio-loud AGN. Focussing on radio-loud AGN only, \citet{KHB08} showed that the probability for them to have emission lines decreases with galaxy mass. This could, however, be due to the fact that at low galaxy mass, radio AGN with low Eddington ratio, which tend to have little line emission \citep{BH12}, are harder to detect. On the other hand, the existence of scaling relations between black-hole mass and stellar velocity dispersion, bulge mass and total galaxy mass \citep[e.g.,][]{Magea98,HR04,Guea09,ReinVol15,BentzMN18} suggests most galaxies have supermassive black holes, which grow in a similar way during phases of nuclear activity \citep[compare, e.g.,][]{Soltan82,MerlHeinz08,TucVol17}. Radio AGN that also have strong emission lines tend to be associated with more strongly star-forming hosts \citep{Hardcastlea13}, higher Eddington ratios and lower stellar mass \citep{BH12}. Hence, the observational evidence suggests that lower mass galaxies have, on average, higher Eddington-scaled accretion rates on their central, supermassive black holes. In emission line AGN, the luminosity in optical emission lines is broadly correlated to the radio luminosity \citep{MC93,KHB08}. Therefore, one should expect that the typical, Eddington-scaled jet power in low-mass galaxies is, if anything, rather higher than in high-mass galaxies. Galaxy halos, however, might be expected to be qualitatively different in galaxies with different masses: a hot halo can form via an accretion shock in galaxies with a mass of their dark matter halo $>10^{11} M_\odot$, whereas at lower masses, one expects that the accreting gas cools so fast that no shock forms \citep{BirnDek03}. This difference has lead to the concept that high mass galaxies accrete their gas mainly in the \emph{hot mode}, i.e. by first shock-heating it to the virial temperature of the dark matter halo and subsequent gas cooling, whereas the lower mass galaxies accrete in the \emph{cold mode} with cold gas being channeled into the galaxies along cosmic filaments \citep{DekBir06}. In a cosmological, hydrodynamic simulation, \citet{Kerea05} find a critical dark matter mass, at a similar level of $3\times 10^{11} M_\odot$. We can use the halo occupation model of \citet{Mostea13} to derive a critical stellar mass of 1-3$\times 10^9 M_\odot$ where the properties of the gas halos would be expected to change. However, this disagrees with the mass scale of $10^{11} M_\odot$ (stellar mass) now identified in the sensitive LOFAR measurements. One might suspect that stellar feedback affects the properties of galaxy halos. Two types of interstellar medium-halo interaction are known: \rev{{\em fountains}}, where the gas mainly circulates in the lower \rev{(i.e. closer to the galactic disc)} part of the halo and \rev{{\em galactic winds}} where gas appears at above escape speed and the outflow likely proceeds beyond the virial radius of the galaxy \citep[e.g.,][]{CC85,dAB04,dAB05,VCB05,DT08a,vGlea13,Gattoea15,HT17,KimOst18}. In the fountain case, high entropy material accumulates at high altitudes and there is a smooth transition to a hydrostatic halo \citep[e.g.,][]{dAB04,dAB05}. 
This is confirmed by studies of, e.g., the motion of HI clouds \citep[e.g.,][]{Mirabel81,KalbDed08,Marascea12} and observations of the hot halo gas of the Milky Way in X-ray absorption and emission \citep{GuptaAea12,GuptaAea17,Bregmea18}. Simulations of galactic winds found that at the onset of the wind, any infalling or hydrostatic halo is swept up by a shock wave, leaving the halo in a state of low density outflow with regions of turbulence and denser gas in filaments or on the walls of the outflow cone \citep[e.g.,][]{StSt00,Coopea08,DT08a,vGlea13,Ruszea17}. Multi-wavelength observations of the different gas phases are generally consistent with this structure \citep[e.g.,][]{VCB05,SH07,HT17}. The transition between wind and fountain solution occurs at different galaxy masses in different types of simulations, depending probably mainly on details of the feedback implementation. For example, \citet{DT08a}, using an effective equation of state for unresolved interstellar medium and energy input from clustered supernovae, find a \rev{galactic} wind for their galaxies with circular velocity of 35~km~s$^{-1}$~ (stellar mass $\approx 10^7M_\odot$), but a fountain for 75~km~s$^{-1}$~ galaxies (stellar mass $\approx 10^9M_\odot$). \citet{JacobSea18}, who also use an effective equation of state for unresolved interstellar medium and focus on cosmic rays as driver of feedback, find \rev{galactic winds below} 160~km~s$^{-1}$~(stellar mass $\approx 3\times 10^{10} M_\odot$), and a fountain solution at higher masses. Galactic winds consistently form in the simulations, when a characteristic threshold of star formation rate per unit area is exceeded \citep[e.g.,][]{vGlea13}. Hence, any given galaxy might switch repeatedly between wind and fountain, depending on the current availability of fuel for star formation. Direct observations of hot gaseous halos of galaxies in X-rays are rare \citep[for reviews see][]{Putmea12,TPW17}. For a few massive spirals and elliptical galaxies with stellar mass $\gtrsim10^{11}M_\odot$, \citet{Bregmea18} report X-ray detections, and in some cases also density profiles. The results are consistent with expectations for a gas halo at the virial temperature close to hydrostatic equilibrium. \citet{Strickea04b} show for a sample of disc galaxies with circular velocities between 100 and 244~km~s$^{-1}$~(stellar masses: $\approx 10^{10}-10^{11} M_\odot$) that most of the luminosity of extraplanar X-ray emission is likely related to superbubble blowout \citep[see also][]{Krausea14a}. The total vertical extent of the X-ray halos, however, correlates with circular velocity and might therefore also indicate a hydrostatic halo. Galaxies of all masses show multi-phase gas in their halos \citep[e.g.][]{TPW17,Bordolea18,LanMo18}. Winds at or above escape velocity have been reported up to circular velocities of 300~km~s$^{-1}$~\citep{HT17}. Summarising, while trends of the state of gaseous halos (inflow, \rev{outflow or hydrostatic}) with galaxy mass are expected, neither simulations nor observations currently provide a clear picture of these trends, or any particular galaxy masses where transitions would occur. However, the new LOFAR LoTSS survey has clearly identified such a critical galaxy mass at $\approx10^{11} M_\odot$. Here we investigate, with simple models for the maximum luminosity expected from the radio-AGN of a galaxy of a given mass, if this mass scale can be understood as a critical stellar mass at which the properties of gaseous halos of galaxies change significantly. 
We first construct fiducial gaseous halo models for the different situations described above (Sect.~\ref{s:halomodels}). We then describe our models for the radio emission of AGN-jets in the given halos (Sect.~\ref{s:radioem}), compare to the LoTSS radio emitting galaxies in Sect.~\ref{s:comp} and summarise our conclusions in Sect.~\ref{s:conc}. \section{Models for gaseous halos of galaxies}\label{s:halomodels} \rev{In this section we discuss three simple models for gaseous halos of galaxies. Hydrostatic halos are characterised by an overall equilibrium between gravity and pressure gradient. Gas cooling, stellar and AGN feedback are unable to cause large perturbations to the overall equilibrium, but produce convection (a galactic fountain) in the lower part of the halo close to the stellar component, or buoyant bubbles. Wind halos occur where feedback in the galaxy is strong enough to lead to a global gaseous outflow from the galaxy beyond its virial radius. An inflow halo will occur, where pressure forces in the halo and feedback from the galaxy are insufficient to balance the ram pressure of a global gas inflow into the galaxy. In reality, combinations of different halo types can occur in the same galaxy. For example, a disc galaxy could have a global inflow in its equatorial region, when at the same time a starburst in its core drives an outflow in the polar directions. Since jets are a directed phenomenon, we consider here only galaxies with one type of gaseous halo and imply that this is the type of halo relevant for the direction into which the jets are emitted.} \subsection{Hydrostatic halos}\label{ss:hydrostatic} We construct a fiducial hydrostatic halo model from thermodynamic and cosmological constraints. Isothermal hydrostatic gas halos in \citet{NFW97} dark matter halos were derived by \citet{Makinea98}. As they note, the resulting profile is similar to the conventional $\beta$-profile, which we adopt in the following: $\rho(r)=\rho_0 (1+(r/r_\mathrm{c})^2)^{-3\beta/2}$. Here $\rho(r)$ is the gas density profile approximated as spherically symmetric and $r_\mathrm{c}$ is the core radius. The \citet{Makinea98} gas profile is characterised by a core with a radius given by the scale radius $R_\mathrm{s}$ of the dark matter halo, which is related to the virial radius $R_\mathrm{vir}$ by the concentration $C=R_\mathrm{vir}/R_\mathrm{s}$. We use the fitting formula from \citet{Klypea16} for redshift zero, $C = 7.4 (M_\mathrm{vir}/(10^{12} M_\odot/h))^{-0.12} (1+(M_\mathrm{vir}/M_0)^{0.4})$, with $M_\mathrm{vir} =M_{200}= 200 \rho_\mathrm{crit} 4\pi R_\mathrm{vir}^3/3$ and $M_0=5.5\times10^{17} M_\odot/h$. We use the \citet{Planck16a} cosmology; for redshift zero, $h =0.68$ and $\rho_\mathrm{crit}=8.62\times10^{-30}$g~cm$^{-3}$. Massive spiral as well as elliptical galaxies are well fit by the $\beta$-model with $\beta\approx 0.5$ \citep{Bregmea18} and we adopt this value. We note that this leads to a radio luminosity that declines with source size for radio sources larger than the core radius. Since in the following we are only interested in the maximum luminosity a radio source can produce, the exact value of $\beta$ is not important as long as $\beta>0.37$, which is required for a radio luminosity that declines with source size \citep{HK13,Yatea18}. 
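For concreteness, the structural relations used above can be evaluated with a minimal numerical sketch (Python). The unit handling, the function names and the choice to leave the core density as a free parameter are ours and are not part of the model definition; the core density is fixed by the cooling-time argument described next.
\begin{verbatim}
import numpy as np

h        = 0.68          # Planck cosmology, redshift zero
rho_crit = 8.62e-30      # critical density [g cm^-3]
MSUN     = 1.989e33      # solar mass [g]
KPC      = 3.086e21      # kiloparsec [cm]
M0       = 5.5e17 / h    # turn-over mass of the concentration fit [Msun]

def halo_structure(Mvir):
    """Virial radius, concentration and scale radius (= beta-model core
    radius) for a halo of virial mass Mvir [Msun], using
    M_vir = M_200 = 200 rho_crit 4 pi R_vir^3 / 3 and the z = 0
    concentration fit quoted in the text."""
    Rvir = (3.0 * Mvir * MSUN / (800.0 * np.pi * rho_crit))**(1.0/3.0) / KPC
    C    = 7.4 * (Mvir * h / 1e12)**(-0.12) * (1.0 + (Mvir / M0)**0.4)
    return Rvir, C, Rvir / C          # [kpc], [-], [kpc]

def beta_profile(r, rho0, r_c, beta=0.5):
    """Spherically symmetric beta-model gas density."""
    return rho0 * (1.0 + (r / r_c)**2)**(-1.5 * beta)

# Example: a 10^12 Msun halo has R_vir of roughly 210 kpc and C of
# about 8, i.e. a gas core (scale) radius of a few tens of kpc.
Rvir, C, Rs = halo_structure(1e12)
\end{verbatim}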
Again, since we are interested in the maximum radio luminosity of radio AGN for a given galaxy mass, we take the core density as the maximum density allowed by the requirement that the radiative cooling time $t_\mathrm{c}$ exceeds the dynamical time. For a given dark matter halo, we define the dynamical time as $t_\mathrm{dyn}=R_\mathrm{s}/v_\mathrm{vir}$ with the virial velocity $v_\mathrm{vir}=(GM_\mathrm{vir}/R_\mathrm{vir})^{1/2}$. We link virial masses to stellar masses $M_*$ by the halo occupation model of \citet{Mostea13}, which we approximate as $\log_{10} M_* = 2.2\log_{10}M_\mathrm{vir} -15.4$ \rev{for $\log_{10} M_\mathrm{vir}<11.8$ and} $\log_{10} M_* = 0.4\log_{10}M_\mathrm{vir} +5.8$, otherwise. This yields dynamical times of the order of 100~Myr. Using the $\sbr{\rm Fe/H}=-0.5$ and $\sbr{\rm Fe/H}=-1.0$ \citep{Bogdea17,Bregmea18} collisional ionisation equilibrium cooling functions $\Lambda(T)$ from \citet{SD93}, the maximum particle density in a hydrostatic halo at the virial temperature is then given by $n_\mathrm{max} = k_\mathrm{B}T_\mathrm{vir} / (\Lambda(T)\,t_\mathrm{c})$. We plot the maximum particle density in Fig.~\ref{f:maxden} and compare it to measurements for the Milky Way and a more massive spiral galaxy. The strong increase between $M_*=10^{10}M_\odot$ and $M_*=10^{11}M_\odot$ mirrors the behaviour of the cooling function for the relevant virial temperature. \rev{Towards} $10^{12} M_\odot$ the scale radii become very large, and the model would lead to a strong overestimate of the total gas mass. Therefore, we restrict the valid range of the model \rev{to $\log_{10} M_*/M_\odot \le 11.5$.} We adopt $\sbr{ \rm Fe/H}=-0.5$ \rev{as appropriate for the case of the Milky Way, most galaxies where it has been measured \citep{Bregmea18} and the central galaxies of groups and clusters \citep{BW10}. Assuming a metallicity of $\sbr{\rm Fe/H}=-1.0$ would increase the gas density in the modelled halos by a factor of two or less. This would increase the predicted radio luminosities by less than 50~\% \citep[compare, e.g.,][their Fig. 10]{MacAlex14}, which is not important for the arguments presented here.} \begin{figure} \centering \includegraphics[width=\hsize]{maxden-mgal_v3.png} \caption{Maximum particle density of the hydrostatic halo as a function of stellar mass according to the model presented in Sect.~\ref{ss:hydrostatic} for two metallicities. For comparison, we show measurements for the Milky Way \citep[using the stellar mass estimate of $6\times10^{10} M_\odot$ from \citet{BR13} and \citet{TaylorCea16}]{GuptaAea17} and the massive spiral galaxy \object{NGC 6753} \citep{Bogdea17}. } \label{f:maxden} \end{figure} \subsection{Galactic-wind halo\rev{s}}\label{ss:gwh} Following the work of \citet{CC85}, we assume a core at almost constant density and pressure, where the stellar mass and energy input take place, and a free wind zone at constant outflow velocity with density declining as $r^{-2}$. The solution is fully determined by the energy and mass input rates, $\dot M$ and $\dot E$, both linear in the star formation rate. The core temperature is proportional to $\dot E / \dot M$ and therefore constant for all systems. The initial temperature could be as high as $\approx10$~keV \citep{CC85,SH07}. However, similar to the case of superbubbles \citep{Krausea14a,RP14}, mixing of material from swept-up shells and remnant cloud cores likely reduces the temperature of the hot phase quickly. 
We therefore adopt a temperature of 3~keV, which is approximately the lower limit allowed by models for the nearby, probably best studied, wind galaxy \object{M82} \citep{SH09}. The density in the model increases with star formation rate, but for realistic conditions should be dominated by mixing as well. Further following \citet{SH09}, we adopt a core density of $0.6\,(M_*/M_{82})$~cm$^{-3}$, where $M_{82}$ is the stellar mass of \object{M82}. The latter is between $10^9M_\odot$ \citep{FSea03} and $10^{10}M_\odot$ \citep[dynamical mass estimate]{Grecea12}. We adopt $M_{82}= 10^{10}M_\odot$, \rev{because the lower value takes into account only the central parts of the galaxy. Using the lower value would increase our core densities by a factor of ten, and the radio luminosities for this model would consequently be higher by a factor of a few \citep[compare, e.g.,][their Fig. 10]{MacAlex14}. This would not change the conclusions of the present analysis (compare below, e.g., Fig.~\ref{f:Sbtr-plot}).} We use \rev{a} constant core radius of 500~pc, as for M82 \citep{Grecea12}, for all galaxies. We note that the thermal pressure quickly declines outside the core, and hence the isothermality assumed by our $\beta$-model ansatz is violated. We ignore this effect, because we expect the maximum radio luminosity on the scale of the core radius. \subsection{Infall halo\rev{s}}\label{ss:infh} In the infalling halo picture, galaxies get their fuel for star formation from accretion of intergalactic gas. The accretion rate is therefore given by the star formation rate. For this, we use the average star formation rate of the main sequence enhanced by one standard deviation from \citet{Belfea18}: $\log (\dot M/M_\odot\,\mathrm{yr}^{-1}) = \log (\dot M_*/M_\odot\,\mathrm{yr}^{-1}) = 0.73 \log(M_*/M_\odot) -6.94$. While some of this gas might be clumpy, we get an upper limit on the gas density by assuming spherically symmetric accretion. \rev{To model this, we consider either a free fall or Bondi accretion into a \citet{NFW97} dark-matter halo. The velocity close to the galaxy is in both cases given by $v=-\sqrt{-2\Phi_\mathrm{NFW}}$, where the Navarro-Frenk-White potential is given by \citep{HNS07}: \begin{equation} \Phi_\mathrm{NFW} = -\frac{G M_\mathrm{vir}}{r} \frac{\log\left(1+r/R_\mathrm{s}\right)}{ \log(1+C) - C/(1+C) }\, . \end{equation} We then} get the halo density from mass conservation, $\dot M = 4 \pi r^2 \rho v$. Since the infalling medium is assumed to be cold, the relevant halo pressure is now the ram pressure of the infalling halo, $\rho v^2$. This leads to an almost constant velocity in the relevant inner part of the halo, and hence a density and ram-pressure distribution that scale as $r^{-2}$. This can be modelled by an isothermal beta profile with $\beta =2/3$. We assume the infall continues down to 0.5~kpc, where we assume normal processes of star formation to keep the density constant and the pressure balanced with the ram pressure of the infalling halo.
\rev{For an estimate of the radio luminosity, it is therefore appropriate to use a standard model for the evolution of radio lobes, such as the one in \citet{Hardcastle18}. The model requires that radio lobes have formed already, which first happens when the radio source has reached a certain size given, e.g., by \citet{Krausea12b}. After confirming that the radio lobes form earlier than our scale of interest for all cases,} we therefore used the standard radio lobe models of \citet{Hardcastle18}. Briefly, the model is based on 3D simulation results \citep{HK13,HK14,EHK2016}, distributes a constant fraction of the steadily supplied power $Q_0$, respectively, to radio lobes and shocked ambient gas and advances the prolate spheroidal outer shock surface according to the Rankine-Hugoniot shock jump conditions. It assumes that the lobes consist of an electron-positron pair plasma (compare below) and that the ratio between the energy in the magnetic field and the one in particles is~0.1. We assumed an injection power law index for relativistic electrons of $q=-2.2$. Adiabatic, synchrotron and inverse Compton losses at the cosmic microwave background (redshift $z=0$) are taken into account. \citet{Sabea19} probe the jet power distribution for massive galaxies, $10^{11}M_\odot<M_*<10^{12}M_\odot$, as a fraction of the Eddington luminosity $L_\mathrm{Ed}$ of the supermassive black hole. They find a strong decline towards higher jet powers. Only 0.3\% of all galaxies have a radio AGN with $Q_0>10^{-2}L_\mathrm{Ed}$. We therefore adopt this as upper limit for radio sources in all galaxies. To estimate the black hole masses, we adopt a constant mass fraction, $M_\mathrm{SMBH} =0.0005 M_*$ \citep{BentzMN18}. We show a plot of the resulting radio luminosities at 150~MHz over lobe length in Fig.~\ref{f:explPD} for an intermediate mass galaxy. Due to our assumption that wind and infall halos have a small core region of only 0.5~kpc, the radio luminosity also peaks on this scale. The relevant length scale for the hydrostatic halo is the scale radius of the dark-matter halo. Consequently, the peak of the radio luminosity is reached at a scale of tens of kpc. The peak luminosities for the wind and infall models are much higher than for the hydrostatic-halo case. This is because, at the galaxy mass of the chosen example, $M_*=3\times 10^{10}M_\odot$, cooling still severely restricts the density in a hydrostatic halo. The luminosities in the wind and infall models would increase even further, if the core radius was assumed to be greater than 0.5~kpc. \rev{The radio luminosity in our models scales directly with the kinetic jet power, which is linearly coupled to the black hole mass. Varying the black hole mass to galaxy mass ratio would therefore move all models along the vertical axis in Figs.~\ref{f:explPD}~and~\ref{f:Sbtr-plot}. } \begin{figure} \centering \includegraphics[width=\hsize]{L150-R_models.png} \caption{Example radio luminosity at 150~MHz versus lobe length plots for hydrostatic, galactic wind and infall halos for a galaxy of stellar mass $M_*=3 \times10^{10}M_\odot$ and the maximum plausible jet power in our model of $2\times10^{36}$~W. See Sect.~\ref{s:halomodels} for details of the gaseous halo models and Sect.~\ref{s:radioem} for details of the radio source model. 
} \label{f:explPD} \end{figure} \section{Comparison to the LoTSS sample}\label{s:comp} We compare the maximum of the predicted radio luminosities against galaxy mass for all halo models to the LoTSS measurements in Fig.~\ref{f:Sbtr-plot}. The radio sources in the hydrostatic halos approximate \rev{well} the upper envelope of the data points. While it is well known that the models reproduce the radio luminosities of radio sources in large dark-matter halos well \citep{HK13,Hardcastle18}, it is an interesting feature of the present model that the radio luminosities of smaller galaxies are also not overpredicted by a large factor. This means that, if all galaxies have hydrostatic gas halos, radio AGN in low-mass galaxies could have the same jet-power distribution (in terms of the Eddington luminosity of the supermassive black hole) as the ones in high-mass galaxies, but the observed luminosities would be much lower. The low luminosities are due to the lower halo gas densities, which are a direct consequence of the higher cooling rates at lower halo temperature, which, via the virial temperature, is a function of the galaxy mass (compare Fig.~\ref{f:maxden}). Of course the radio luminosity in any given low-mass galaxy might still be entirely due to star formation. The models only tell us that, if the galaxy developed a radio AGN, its radio luminosity would not exceed a certain value. The models therefore need to be compared to the upper envelope of the observed distribution. Radio sources in wind galaxies or in halos dominated by spherical infall would have a luminosity far greater than observed. This could be interpreted as meaning that galaxies do not have strong radio AGN in such phases, or that such conditions are rare. This would agree with findings in the literature that starburst and AGN phases are sequential rather than simultaneous \citep[e.g.,][]{Krause2005b,Schawea07,Schartea10,Shabea12a}. The predicted luminosities become very similar at high galaxy masses. \rev{At those masses,} all halo models are consistent with the radio observations. An interesting caveat here is the type of the radio source. It is possible that jets in more strongly star-forming systems suffer much more entrainment of proton-rich interstellar medium, and therefore develop more \citet{FR74} class~I-like radio sources \citep{CIH18}. In this case, the same jet power would be distributed also to additional protons, and thus the expected radio luminosity would be lower than in our model, possibly to a degree that could reconcile the prediction for wind and infall halos with the observations. The spatial resolution of the LoTSS survey corresponds to 28~kpc at the redshift limit of $z=0.3$ \citep[6~arcsec resolution,][]{Sabea19}. Radio AGN in infall and wind halos would therefore mostly be unresolved. The maximum radio luminosity in hydrostatic halos occurs around 20 (40, 100)~kpc for $10^9 (10^{10}, 10^{11}) M_\odot$ galaxies. Radio AGN would therefore also frequently be unresolved in less massive galaxies in our model if they had a hydrostatic halo. This agrees with the findings of \citet{Shabala18} that the fraction of unresolved radio AGN decreases with increasing galaxy mass. This prediction could be tested with LOFAR observations that include international baselines \citep[$<1$~arcsec resolution,][]{Ramirea18}, which are, however, not yet available for large samples of galaxies. The maximum radio luminosity we predict in our models should not be regarded as a strict upper limit on the measured LOFAR 150~MHz luminosities.
We estimate the measurement uncertainties for the luminosities due to uncertainties of the flux calibration scale to about 0.3~dex or more \citep[compare][]{Hardcastlea19}. Also, jet powers are known to sometimes exceed our choice of 1\% of the Eddington luminosity \citep{Sabea19}. For example \citet{Turnea18b} estimate jet powers around $10^{47}$~erg/s for several 3C radio sources in galaxies with $11.5<\log(M_*/M_\odot)<12$. The jet power in these sources therefore likely exceeds 10\% of the Eddington luminosity. Further, the black-hole-mass scaling relation has a scatter of about a factor of three. Taken together, individual sources could still be one or two dex more luminous than predicted by our model. The high jet power phases might, however, be rare, possibly linked to galaxy merging \citep{Ramea12,Shabea12a,Tadhea14,Krausea19a}. The \citet{Sabea19} sample is local (redshift $z\le0.3$) and therefore contains relatively few recent mergers, especially for the lower galaxy masses \citep{Hopkea10}. For the general distribution of galaxies, however, our assumptions reasonably cover the top end of the relevant distributions: in \citet{Sabea19}, much less than 1\% of the radio sources exceed our jet power limit and the halo properties we assume take account of the scatter. For example, Fig.~\ref{f:maxden} demonstrates that observations fall comfortably below our limiting halo gas density. \begin{figure} \centering \includegraphics[width=\hsize]{Sabater-plot-models_v02.png} \caption{Maximum 150~MHz luminosity of radio AGN for a given galaxy mass for three different models of the gaseous halo of the galaxies (hydrostatic halo, infall halo, wind halo). Data points are from the LoTSS sample \citep{Sabea19}, plotted separately for star forming galaxies (SFG) and radio AGN. The radio luminosity expected from star formation for galaxies on the main sequence of star formation at redshift zero is indicated as solid thin green line \citep[Sect.~\ref{ss:infh}, eq.~(3) in][]{Guerkea18}. The dotted black line denotes the completeness limit, i.e., the median limiting luminosity at the maximum redshift of the sample. For the hydrostatic halo model, radio AGN with the same Eddington-scaled jet power distribution could be hosted by galaxies of all masses. The lower densities in the halos of lower mass galaxies would, however, limit their radio luminosities. Radio AGN need to be intrinsically less powerful or otherwise different in low mass galaxies that are in galactic wind or (any) quasi-spherical infall phases in order not to exceed the LOFAR radio luminosity constraints. } \label{f:Sbtr-plot} \end{figure} \section{Conclusions}\label{s:conc} We use an established model for the radio luminosity of radio AGN to estimate the maximum AGN-related LOFAR 150~MHz luminosities of galaxies. As input to our model, we assume a jet power of 1\% of the Eddington luminosity with supermassive black hole masses following the observed scaling relation and models for the gaseous halos of galaxies based on theoretical constraints and observations. We consider hydrostatic, galactic wind and infall halos. Our aim is to explain the marked rise of radio luminosities at stellar masses around $10^{11} M_\odot$. Our main findings are: \begin{enumerate} \item The shape of the cooling function translates into higher hydrostatic halo masses, and hence a marked increase in radio luminosity around $10^{11} M_\odot$. 
\item Assuming hydrostatic gas halos in all galaxies, and that the jet-power distribution (as a fraction of the Eddington luminosity) seen in massive galaxies holds for all galaxies, we find an upper envelope for the radio luminosities that matches the data well. \item Disproportionately lower radio luminosities in low-mass galaxies can therefore not be used as an argument that radio-AGN are absent in such galaxies. \item A model where low-mass galaxies are dominated by winds or infall and higher mass galaxies by hydrostatic gas halos is not supported by the data. \rev{Models predict the presence of hydrostatic halos in galaxies with masses $M_*\gtrsim10^9M_\odot$ \citep{BirnDek03,Kerea05}. This corresponds to the mass range covered by the LoTSS sample. Our findings are hence consistent with these predictions.} \item We would predict higher radio luminosities than observed for low-mass galaxies if their jets were frequently launched in phases of smooth, quasi-spherical inflow or galactic winds, unless jets in such environments suffer a lot of entrainment of proton-rich gas. This is consistent with the idea that galactic winds and AGN-jet outbursts do not happen at the same time. \item Our results are also consistent with scenarios in which radio AGN occur much more rarely in lower mass galaxies, such that their radio luminosities are always completely dominated by star formation. However, we regard the alternative, namely that AGN have similar jet-power distributions at all galaxy masses, which is equally consistent with the data, as the simpler explanation. \end{enumerate} \begin{acknowledgements} We gratefully acknowledge the provision of the LOFAR radio data in electronic form by Jose Sabater Montes. We thank the anonymous referee for a very useful report. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Among the exceptional properties exhibited by 2D materials, one can cite their strong interaction with light in comparison with their bulk counterparts \cite{Maier2013,Xia2014,Mak2016,Yakobson2018}. For example, graphene presents a constant and universal absorbance of $2.3\%$ across the whole infrared to visible spectral range \cite{Nair08,Mak2008}. On the other hand, monolayer transition-metal dichalcogenides, such as MoS$_2$ and WSe$_{2}$, are able to absorb over 10\% of incident light at the bandgap resonances \cite{Mak2010,Heinz2014}. Consequently, the incorporation of 2D materials in optical systems modifies their electrodynamical response. These modifications are observable in fundamental optical phenomena such as the Brewster effect \cite{Lin2016,Lambin2018,Pickwell2018}, the total internal reflection \cite{Tian2013,Pickwell2016,Pickwell2017} or the Goos--H\"{a}nchen shift \cite{Fan2016,You2018,Zambale2019}. Another fascinating property of 2D materials is their high stretchability. They are capable of withstanding much larger elastic deformations than conventional electronic materials \cite{Akinwande2017}. For instance, graphene endures reversible stretching beyond $10\%$ \cite{Lee08,Perez2014}. This wide range of elastic response results in substantial changes of the electronic, thermal, chemical and optical properties \cite{Roldan2015,Naumis2017,Zhang2019}. Therefore, mechanical strain has been widely proposed as a tool to tune the physical properties of 2D materials and to ultimately achieve high-performance 2D-material-based devices. In particular, strain engineering of the optical response of graphene has been experimentally achieved in various works \cite{Pereira14,Tian2014,Chhikara2017}. When graphene is under uniform strain, its electronic structure is modified, e.g. the Dirac cones change shape, from a circular to an elliptical cross-section. As a consequence, the Fermi velocity becomes anisotropic \cite{Oliva2013,Oliva2017}. This strain-induced effect on the Fermi velocity translates into an anisotropic optical conductivity \cite{Pereira10,Pellegrino11,Oliva2014}, which yields a modulation of the transmittance for normal incidence of linearly polarized light as a function of the polarization direction \cite{Pereira10,Oliva2015}. However, to the best of our knowledge, the problem of light scattering for oblique incidence in the case of strained graphene (or an anisotropic 2D material) has not been analyzed in detail. For instance, the contribution of the non-diagonal components of the strained-graphene conductivity tensor to the light scattering remains unexplored. Thus, the main objective of this paper is to give a general characterization of this problem, which would lead to a more complete understanding of the Brewster effect or the total internal reflection as a function of strain. This paper is organized as follows. In Sec.~\ref{II}, we derive the generalized Fresnel coefficients when two dielectric media are separated by an anisotropic 2D material. Section~\ref{III} is devoted to analyzing the general features of the modifications of the Brewster effect or the total internal reflection due to the optical anisotropy of the 2D material. In Sec.~\ref{IV}, our findings are particularized to the case of graphene under a uniaxial strain deformation and we discuss previous experiments on total internal reflection \cite{Tian2014}. Finally, in Sec.~\ref{V}, our conclusions are given.
\section{Generalized Fresnel coefficients}\label{II} We consider the propagation of linearly polarized light through two semi-infinite non-absorbing dielectric media, with refractive indices $n_{1}$ and $n_{2}$, separated by a 2D material, as shown schematically in Fig.~\ref{fig1}. In general, the 2D material is assumed to be an anisotropic conducting sheet of negligible thickness \cite{Merano2016,Heinz2018}, whose optical conductivity is characterized by the second-rank symmetric tensor $\overline{\bm{\sigma}}$. According to this model, the effects of the out-of-plane anisotropy, recently investigated by Maj\'erus \emph{et al.}~\cite{Majerus2018}, are disregarded. It is worth mentioning that the optical in-plane anisotropy of the 2D material can be an intrinsic property, as occurs, for example, in phosphorene \cite{Wang2016} and borophene \cite{Lherbier2016}, or induced by strain \cite{Nguye2017}. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig1} \caption{Front view of the scattering geometry for oblique incidence between two media with a 2D conducting material separating them.} \label{fig1} \end{figure} For the considered scattering problem, the components of the electric ($\bm{E}$) and magnetic ($\bm{H}$) fields in the incident, reflected and transmitted waves can be written as~\cite{Born}: \emph{incident wave} \begin{align} E_{x}^{(i)}&=-a_{p}\cos\theta_{i} e^{i\tau_{i}}, & H_{x}^{(i)}z_{0}&=-a_{s}\cos\theta_{i}n_{1}e^{i\tau_{i}}, \nonumber \\ E_{y}^{(i)}&=a_{s} e^{i\tau_{i}}, & H_{y}^{(i)}z_{0}&=-a_{p}n_{1} e^{i\tau_{i}}, \nonumber\\ E_{z}^{(i)}&=a_{p}\sin\theta_{i} e^{i\tau_{i}}, & H_{z}^{(i)}z_{0}&=a_{s}\sin\theta_{i}n_{1} e^{i\tau_{i}},\label{FI} \end{align} \emph{reflected wave} \begin{align} E_{x}^{(r)}&=-r_{p}\cos\theta_{r} e^{i\tau_{r}}, & H_{x}^{(r)}z_{0}&=-r_{s}\cos\theta_{r}n_{1} e^{i\tau_{r}}, \nonumber \\ E_{y}^{(r)}&=r_{s} e^{i\tau_{r}}, & H_{y}^{(r)}z_{0}&=-r_{p}n_{1} e^{i\tau_{r}}, \nonumber\\ E_{z}^{(r)}&=r_{p}\sin\theta_{r} e^{i\tau_{r}}, & H_{z}^{(r)}z_{0}&=r_{s}\sin\theta_{r}n_{1} e^{i\tau_{r}},\label{FR} \end{align} and \emph{transmitted wave} \begin{align} E_{x}^{(t)}&=-t_{p}\cos\theta_{t} e^{i\tau_{t}}, & H_{x}^{(t)}z_{0}&=-t_{s}\cos\theta_{t}n_{2} e^{i\tau_{t}}, \nonumber \\ E_{y}^{(t)}&=t_{s} e^{i\tau_{t}}, & H_{y}^{(t)}z_{0}&=-t_{p}n_{2} e^{i\tau_{t}}, \nonumber\\ E_{z}^{(t)}&=t_{p}\sin\theta_{t} e^{i\tau_{t}}, & H_{z}^{(t)}z_{0}&=t_{s}\sin\theta_{t}n_{2} e^{i\tau_{t}},\label{FT} \end{align} where $z_{0}$ is the vacuum impedance and $\tau_{j}=\bm{k}_{j}\cdot\bm{r}-\omega t$ ($j=i,r,t$), with $\omega$ being the angular frequency of light and $\bm{k}_{j}$ the respective wave vector. The relation between $\bm{E}$ and $\bm{H}$ for each wave is $\bm{H}z_{0}=n(\bm{k}/\vert\bm{k}\vert)\times\bm{E}.$ At the interface $z=0$, the electric and magnetic fields are related by the boundary conditions \cite{Jackson}, \begin{align} E_{x}^{(t)}-E_{x}^{(i)}-E_{x}^{(r)}&=0, & E_{y}^{(t)}-E_{y}^{(i)}-E_{y}^{(r)}&=0,\label{BCE}\\ H_{x}^{(t)}-H_{x}^{(i)}-H_{x}^{(r)}&=J_{y}, & H_{y}^{(t)}-H_{y}^{(i)}-H_{y}^{(r)}&=-J_{x},\label{BCH} \end{align} where $\bm{J}$ is the surface current density due to the 2D conducting material.
According to Ohm's law, $\bm{J}=\overline{\bm{\sigma}}\cdot\bm{E}^{(t)}$, namely \begin{align} J_{x}&=\sigma_{xx}E_{x}^{(t)}+\sigma_{xy}E_{y}^{(t)}, \nonumber\\ J_{y}&=\sigma_{yy}E_{y}^{(t)}+\sigma_{xy}E_{x}^{(t)},\label{J} \end{align} with $\sigma_{xx}$, $\sigma_{yy}$ and $\sigma_{xy}$ being the components of the optical conductivity tensor $\overline{\bm{\sigma}}$. Note that $\overline{\bm{\sigma}}$ is symmetric, i.e. $\sigma_{xy}=\sigma_{yx}$. Now, substituting Eqs.~(\ref{FI}-\ref{FT}) into Eqs.~(\ref{BCE}-\ref{J}), and using the fact that $\cos\theta_{r}=-\cos\theta_{i}$, we find that the components of the reflected and transmitted waves are given, in terms of those of the incident wave, by the following generalized Fresnel coefficients: \begin{widetext} \begin{align} r_{s}&=\Bigl(-1 + \frac{2 n_{1}f_{1}\cos\theta_{i}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr) a_{s} + \Bigl(\frac{2 n_{1}z_{0}\sigma_{xy}\cos\theta_{i}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr) a_{p}, \label{Rs}\\ t_{s}&=\Bigl(\frac{2 n_{1}f_{1}\cos\theta_{i}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr) a_{s} + \Bigl(\frac{2 n_{1}z_{0}\sigma_{xy}\cos\theta_{i}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr) a_{p}, \\ r_{p}&=\Bigl(1 - \frac{2 n_{1}f_{2}\cos\theta_{t}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr)a_{p} + \Bigl(\frac{2 n_{1}z_{0}\sigma_{xy}\cos\theta_{t}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr)a_{s}, \\ t_{p}&=\Bigl(\frac{2 n_{1}f_{2}\cos\theta_{i}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr) a_{p} - \Bigl(\frac{2 n_{1}z_{0}\sigma_{xy}\cos\theta_{i}}{f_{1}f_{2}+z_{0}^{2}\sigma_{xy}^{2}\cos\theta_{i}\cos\theta_{t}}\Bigr) a_{s},\label{Tp} \end{align} with $f_{1}=(n_{1}\cos\theta_{t}+n_{2}\cos\theta_{i}+z_{0}\sigma_{xx}\cos\theta_{i}\cos\theta_{t})$ and $f_{2}=(n_{1}\cos\theta_{i}+n_{2}\cos\theta_{t}+z_{0}\sigma_{yy})$. \end{widetext} \begin{figure*}[t] \includegraphics[width=\linewidth]{Fig2} \caption{(a) Reflectance and transmittance of $s$- and $p$-polarized waves as functions of the incident angle $\theta_{i}$ for a dielectric-graphene-dielectric system with $n_{1}=1$ and $n_{2}=1.5$. Graphene is assumed to be unstrained and with optical conductivity equal to $\pi\alpha/z_{0}$. (b) Reflectance of $p$-polarized radiation around the Brewster angle of a system as in (a), but now graphene is uniaxially stretched along the $x$-axis under different strain magnitudes. (c) Analogous to panel (b), but for different directions of a uniaxial strain of magnitude $\epsilon=0.04$.} \label{fig2} \end{figure*} \section{Results}\label{III} As a first result from these equations, one can note that, in general, the considered scattering of light cannot be decoupled into transverse electric (TE) and transverse magnetic (TM) waves. When the normal to the incidence plane is not along one of the principal axes of the conductivity tensor $\overline{\bm{\sigma}}$, both incident TE ($s$-polarized) and TM ($p$-polarized) waves are scattered into reflected and transmitted waves with components that are parallel and perpendicular to the incidence plane. Namely, an incident $s$- or $p$-polarization is only preserved if $\sigma_{xy}=0$.
In this last case, the Fresnel coefficients (\ref{Rs}-\ref{Tp}) simplify considerably and, in particular, the reflectance and transmittance for both $s$- and $p$-polarization can be calculated as \begin{align} R_{s}&=\Bigl\vert\frac{n_{1}\cos\theta_{i}-n_{2}\cos\theta_{t}-z_{0}\sigma_{yy}}{n_{1}\cos\theta_{i}+n_{2}\cos\theta_{t}+z_{0}\sigma_{yy}}\Bigr\vert^{2}, \label{R-s}\\ R_{p}&=\Bigl\vert\frac{n_{2}/\cos\theta_{t}-n_{1}/\cos\theta_{i}+z_{0}\sigma_{xx}}{n_{2}/\cos\theta_{t}+n_{1}/\cos\theta_{i}+z_{0}\sigma_{xx}}\Bigr\vert^{2},\label{R-p} \\ T_{s}&=\frac{4n_{1}n_{2}\cos\theta_{i}\cos\theta_{t}}{\big\vert n_{1}\cos\theta_{i}+n_{2}\cos\theta_{t}+z_{0}\sigma_{yy}\big\vert^{2}} , \\ T_{p}&=\frac{4n_{1}n_{2}/(\cos\theta_{i}\cos\theta_{t})}{\big\vert n_{2}/\cos\theta_{t}+n_{1}/\cos\theta_{i}+z_{0}\sigma_{xx}\big\vert^{2}} ,\label{T-p} \end{align} and the absorbance as $A=1-R-T$. These expressions reproduce those for an isotropic 2D material like unstrained graphene \cite{Zhan2013,Lambin2015} if both $\sigma_{xx}$ and $\sigma_{yy}$ are replaced by the same conductivity value. In the absence of anisotropy and for normal incidence, the reflectance, transmittance and absorbance are independent of the incident light polarization, as depicted in Fig.~\ref{fig2}(a), Fig.~\ref{fig3}(a) and Fig.~\ref{fig4}(a), respectively. However, for normal incidence but in the presence of anisotropy, from Eqs.~(\ref{R-s}--\ref{T-p}) it is clear that they depend on the light polarization, which has been studied in detail for strained graphene \cite{Oliva2015}. \subsection{Modified Brewster angle} For oblique incidence, such quantities show a strong dependence on the light polarization. For instance, while the $s$-polarized radiation is partially reflected for all $\theta_{i}$, the $p$-polarization reflection is totally inhibited at the Brewster angle (see Fig.~\ref{fig2}(a) and Fig.~\ref{fig3}(a)). For the simplest dielectric-dielectric system, the Brewster angle $\theta_{\text{B}}$ is given by the well-known formula $\tan\theta_{\text{B}}=n_{2}/n_{1}$ \cite{Born}. Once a 2D conducting material is present at the interface between both dielectric media, this angle changes \cite{Lambin2018}. Starting from the cancellation condition of $R_{p}$ given by Eq.~(\ref{R-p}) and using Snell's law, the modified Brewster angle $\theta_{\text{B}}^{\prime}$ can be expressed as \begin{equation} \theta_{\text{B}}^{\prime}\approx\theta_{\text{B}}+\frac{z_{0}\sigma_{xx}n_{1}n_{2}^{3}}{(n_{2}^{2} - n_{1}^{2})(n_{2}^{2}+n_{1}^{2})^{3/2}},\label{BA} \end{equation} which is fulfilled in the regime of purely real and low conductivity, i.e. $\Im(\sigma_{xx})=0$ and $z_{0}\sigma_{xx}\ll1$. Therefore, the sign of the Brewster angle shift $\Delta_{\text{B}}=\theta_{\text{B}}^{\prime}-\theta_{\text{B}}$ is determined by the relation between the refractive indices of the incident medium $n_{1}$ and of the substrate $n_{2}$: \emph{if} $n_{2}>n_{1}$ ($n_{1}>n_{2}$) \emph{then} $\Delta_{\text{B}}>0$ ($\Delta_{\text{B}}<0$). Equation~(\ref{BA}) also leads to useful approximate expressions in the following limiting cases: \begin{align} &\text{if}\ \ n_{2} \gg n_{1}, & \Delta_{\text{B}}&\approx z_{0}\sigma_{xx}n_{1}/n_{2}^{2}, \label{AAB}\\ &\text{if}\ \ n_{2} \ll n_{1}, & \Delta_{\text{B}}&\approx-z_{0}\sigma_{xx}n_{2}^{3}/n_{1}^{4}, \end{align} with the approximation (\ref{AAB}) coinciding with the one reported in Ref.~[\onlinecite{Lambin2018}].
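As an illustration, the simplified expressions above can be evaluated with a minimal numerical sketch (Python); the variable names, the unit convention of passing the dimensionless products $z_{0}\sigma$, and the example values are ours and are not part of the derivation. The sketch implements Eqs.~(\ref{R-s})--(\ref{T-p}) and the low-conductivity shift of Eq.~(\ref{BA}):
\begin{verbatim}
import numpy as np

PI_ALPHA = np.pi / 137.036        # z0*sigma_0 for unstrained graphene

def reflect_transmit(theta_i, n1, n2, zs_xx, zs_yy):
    """R_s, R_p, T_s, T_p for sigma_xy = 0.  Conductivities enter only
    through the dimensionless products zs = z0*sigma."""
    cos_i = np.cos(theta_i)
    sin_t = n1 * np.sin(theta_i) / n2          # Snell's law
    cos_t = np.sqrt(1.0 - sin_t**2 + 0j)       # imaginary above the critical angle
    Rs = abs((n1*cos_i - n2*cos_t - zs_yy) / (n1*cos_i + n2*cos_t + zs_yy))**2
    Rp = abs((n2/cos_t - n1/cos_i + zs_xx) / (n2/cos_t + n1/cos_i + zs_xx))**2
    Ts = (4*n1*n2*cos_i*cos_t / abs(n1*cos_i + n2*cos_t + zs_yy)**2).real
    Tp = (4*n1*n2/(cos_i*cos_t) / abs(n2/cos_t + n1/cos_i + zs_xx)**2).real
    return Rs, Rp, Ts, Tp

def brewster_shift(n1, n2, zs_xx):
    """Low-conductivity shift of the Brewster angle (radians)."""
    return zs_xx * n1 * n2**3 / ((n2**2 - n1**2) * (n2**2 + n1**2)**1.5)

# Unstrained graphene between vacuum (n1 = 1) and glass (n2 = 1.5):
# the Brewster angle moves up by about 0.6 degrees.
print(np.degrees(brewster_shift(1.0, 1.5, PI_ALPHA)))
\end{verbatim}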
\begin{figure*}[t] \includegraphics[width=0.75\linewidth]{Fig3} \caption{(a) Reflectance and transmittance of $s$- and $p$-polarized waves as functions of the incident angle $\theta_{i}$ for a dielectric-graphene-dielectric system with $n_{1}=1.5$ and $n_{2}=1$. Graphene is assumed to be unstrained and with optical conductivity equal to $\pi\alpha/z_{0}$. Panels (b) and (c) show, respectively, the reflectance of $p$- and $s$-polarized waves around the critical angle of a system as in (a), but now graphene is uniaxially stretched along the $x$-axis. (d) Analogous to panel (c), but for different directions of a uniaxial strain of magnitude $\epsilon=0.04$.} \label{fig3} \end{figure*} \subsection{``Total'' internal reflection} Another fundamental optical effect occurs when the incident medium is optically denser than the substrate. For such a system (with $n_{1}>n_{2}$), if the incident angle exceeds the critical value $\theta_{c}$, given by $\sin\theta_{c}=n_{2}/n_{1}$, then all the incident light is reflected, i.e. $R_{s,p}=1$ and $T_{s,p}=0$ for $\theta_{i}\geq\theta_{c}$, which is known as total internal reflection \cite{Born}. The presence of a 2D conducting material at the interface modifies this phenomenon \cite{Tian2013,Tian2014,Zhan2013}. Unlike the Brewster angle, the critical angle $\theta_{c}$ does not change. However, while the transmittance remains nullified for $\theta_{i}\geq\theta_{c}$, the reflectance is no longer total, as can be seen in Fig.~\ref{fig3}(a). Above the critical angle, the $p$-polarization reflectance $R_{p}$ presents a shallow U-shaped behaviour as a function of the incident angle $\theta_{i}$, being equal to $1$ for $\theta_{i}=\theta_{c}$ and $\theta_{i}=90\text{\textdegree}$. Such U-shaped behaviour of $R_{p}$ is more pronounced for a 2D material with higher optical conductivity $\sigma_{xx}$, whereas $R_{p}$ recovers its value $1$ for $\sigma_{xx}$ tending to $0$. This fact simply means that under total internal reflection the $p$-polarization absorbance $A_{p}=1-R_{p}$ increases as $\sigma_{xx}$ increases. On the other hand, for $\theta_{i}\geq\theta_{c}$ the $s$-polarization reflectance $R_{s}$ shows an approximately linear (increasing) dependence on $\theta_{i}$, reaching its maximum value $1$ at $\theta_{i}=90\text{\textdegree}$. Using Eq.~(\ref{R-s}), one can derive that $R_{s}$ at the critical angle $\theta_{c}$ takes the exact value: \begin{equation}\label{RsAc} \overline{R}_{s}\equiv {R}_{s}(\theta_{c})=\Bigl\vert\frac{\sqrt{n_{1}^{2}-n_{2}^{2}}-z_{0}\sigma_{yy}}{\sqrt{n_{1}^{2}-n_{2}^{2}}+z_{0}\sigma_{yy}}\Bigr\vert^{2}, \end{equation} which could be used to determine the optical conductivity of 2D materials from reflectance measurements of $s$-polarized radiation under the total internal reflection configuration. It is important to note that the $s$-polarization absorbance $A_{s}$ has its maximum at $\theta_{c}$, as illustrated in Fig.~\ref{fig4}(a), being equal to $1-\overline{R}_{s}$. Then, from Eq.~(\ref{RsAc}) and neglecting the higher-order term in $\sigma_{yy}$, the maximum of $A_{s}$ is given by \begin{equation} \overline{A}_{s}\equiv {A}_{s}(\theta_{c})\approx\frac{4z_{0}\Re(\sigma_{yy})}{\sqrt{n_{1}^{2} - n_{2}^{2}}}. \end{equation} Thus, an increase of the optical conductivity $\sigma_{yy}$ leads to an increase of the $s$-polarization absorbance peak (see Fig.~\ref{fig4}(c)).
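Along the same lines, Eq.~(\ref{RsAc}) and the absorbance-peak estimate can be evaluated directly; the short sketch below (Python, continuing the conventions of the previous snippet, with the dimensionless product $z_{0}\sigma_{yy}$ as input) is only illustrative:
\begin{verbatim}
import numpy as np

def Rs_at_critical(n1, n2, zs_yy):
    """Exact s-polarization reflectance at the critical angle."""
    root = np.sqrt(n1**2 - n2**2)
    return abs((root - zs_yy) / (root + zs_yy))**2

def As_peak(n1, n2, zs_yy):
    """First-order estimate of the s-polarization absorbance maximum."""
    return 4.0 * np.real(zs_yy) / np.sqrt(n1**2 - n2**2)

# Unstrained graphene between glass (n1 = 1.5) and vacuum (n2 = 1):
# R_s(theta_c) is about 0.92, i.e. an absorbance peak of roughly 8%.
print(Rs_at_critical(1.5, 1.0, np.pi / 137.036))
\end{verbatim}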
\section{Strained graphene}\label{IV} In order to apply our previous results to a concrete anisotropic 2D material, let us consider graphene under uniaxial strain. In general, when graphene is subjected to an arbitrary uniform strain (e.g. uniaxial, biaxial, and so forth), its optical conductivity tensor $\overline{\bm{\sigma}}$, up to first order in the strain tensor $\overline{\bm{\epsilon}}$, can be written as \cite{Pereira10,Pellegrino11,Oliva2014} \begin{align} \sigma_{xx}&=\sigma_{0}(1+\beta(\epsilon_{yy}-\epsilon_{xx})), \nonumber\\ \sigma_{yy}&=\sigma_{0}(1+\beta(\epsilon_{xx}-\epsilon_{yy})), \nonumber\\ \sigma_{xy}&=\sigma_{yx}=-2\beta\epsilon_{xy}\sigma_{0},\label{Cond} \end{align} where $\sigma_{0}$ is the conductivity of unstrained graphene and $\beta\simeq2.37$ is a parameter related to the hopping changes due to strain \cite{Ribeiro2009,Botello2018}. For chemical potential equal to zero and at near-infrared and visible frequencies, $\sigma_{0}$ takes the universal value $\pi\alpha/z_{0}$, where $\alpha\approx1/137$ is the fine structure constant. In the case of a uniaxial strain whose stretching direction is rotated by an angle $\phi$ with respect to the $x$-axis of the laboratory frame, the strain tensor components read \begin{align} \epsilon_{xx}&=\epsilon(\cos^{2}\phi - \nu\sin^{2}\phi), \nonumber\\ \epsilon_{yy}&=\epsilon(\sin^{2}\phi - \nu\cos^{2}\phi), \nonumber\\ \epsilon_{xy}&=\epsilon_{yx}=\epsilon(1+\nu)\sin\phi\cos\phi, \end{align} where $\epsilon$ is the strain magnitude and $\nu\approx0.16$ is the Poisson ratio. In consequence, from Eqs.~(\ref{Cond}) it follows that $\sigma_{xx,yy}=\sigma_{0}(1\mp\beta\epsilon(1+\nu)\cos2\phi)$, while $\sigma_{xy}=-\sigma_{0}\beta\epsilon(1+\nu)\sin2\phi$. So, for a stretching along the $x$-axis ($\phi=0\text{\textdegree}$) or the $y$-axis ($\phi=90\text{\textdegree}$), the conductivity component $\sigma_{xy}$ vanishes and, therefore, the reflectance and transmittance can be obtained from Eqs.~(\ref{R-s}-\ref{T-p}). Moreover, it is important to keep in mind that, according to Eqs.~(\ref{Cond}), the optical conductivity parallel (perpendicular) to the stretching direction decreases (increases) linearly with increasing strain magnitude $\epsilon$, which has been experimentally observed \cite{Pereira14,Chhikara2017}. \begin{figure*}[t] \includegraphics[width=0.75\linewidth]{Fig4} \caption{(a) Dependence of the $s$- and $p$-polarization absorbance on the incident angle $\theta_{i}$ for $n_{1}=1.5$, $n_{2}=1$ and the graphene conductivity given by $\pi\alpha/z_{0}$. Panels (b) and (c) display, respectively, the $p$- and $s$-polarization absorbance around the critical angle for graphene uniaxially stretched along the $x$-axis. (d) Similar to panel (c) but for different directions of a uniaxial strain of magnitude $\epsilon=0.04$.} \label{fig4} \end{figure*} Figure~\ref{fig2}(b) depicts the $p$-polarization reflectance $R_{p}$ around the modified Brewster angle for graphene uniaxially strained along the $x$-axis under different strain magnitudes. The observed shift of $\theta_{\text{B}}^{\prime}$ for stronger stretching can be understood from Eq.~(\ref{AAB}). Replacing $\sigma_{xx}$ by its corresponding value, one gets \begin{equation} \Delta_{\text{B}}(\epsilon)\approx z_{0}\sigma_{0}(1-\beta\epsilon(1+\nu)) n_{1}/n_{2}^{2}.\label{AABs} \end{equation} Expression~(\ref{AABs}) allows one to easily evaluate the modified Brewster angle as a function of the strain magnitude $\epsilon$.
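Equations~(\ref{Cond}) and (\ref{AABs}) can be combined into a short numerical sketch (Python; the function names are ours, and the conductivities are again expressed through the dimensionless products $z_{0}\sigma$):
\begin{verbatim}
import numpy as np

PI_ALPHA = np.pi / 137.036     # z0*sigma_0 of unstrained graphene
BETA, NU = 2.37, 0.16          # hopping parameter and Poisson ratio

def strained_conductivity(eps, phi, zs0=PI_ALPHA):
    """Dimensionless conductivities z0*sigma_ij of graphene under uniaxial
    strain of magnitude eps along a direction phi (radians) from the x-axis."""
    a = BETA * eps * (1.0 + NU)
    zs_xx = zs0 * (1.0 - a * np.cos(2.0 * phi))
    zs_yy = zs0 * (1.0 + a * np.cos(2.0 * phi))
    zs_xy = -zs0 * a * np.sin(2.0 * phi)
    return zs_xx, zs_yy, zs_xy

def brewster_shift_strained(eps, n1, n2, zs0=PI_ALPHA):
    """Approximate Brewster-angle shift for stretching along the x-axis
    (phi = 0) in the limit n2 >> n1."""
    return zs0 * (1.0 - BETA * eps * (1.0 + NU)) * n1 / n2**2

# For n1 = 1, n2 = 1.5 and eps = 0.04 this gives a modified Brewster
# angle close to the approximate values quoted in the table of the text.
theta_B = np.degrees(np.arctan(1.5))
print(theta_B + np.degrees(brewster_shift_strained(0.04, 1.0, 1.5)))
\end{verbatim}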
For instance, for $n_{1}=1$, $n_{2}=1.5$ and $z_{0}\sigma_{0}=\pi\alpha$, Eq.~(\ref{AABs}) predicts a decrease of about $0.03\text{\textdegree}$ for each $1\%$ increase of $\epsilon$, with a relative error not greater than $2\%$ (see TABLE~\ref{table} and also Ref.~[\onlinecite{Lambin2018}]). Conversely, it is important to note that Eq.~(\ref{AABs}) could be used to determine the strain magnitude from measurements of the Brewster angle shift. \begin{table}[h] \begin{tabular}{ |c|c|c| } \hline $\epsilon$ & Exact $\theta_{\text{B}}^{\prime}$ & Approx. $\theta_{\text{B}}^{\prime}$ \\ \hline 0.00 & 56.90\textdegree & 56.89\textdegree \\ \hline 0.02 & 56.86\textdegree & 56.85\textdegree \\ \hline 0.04 & 56.82\textdegree & 56.81\textdegree \\ \hline 0.06 & 56.78\textdegree & 56.77\textdegree \\ \hline \end{tabular} \caption{Modified Brewster angle for $n_{1}=1$, $n_{2}=1.5$, $z_{0}\sigma_{0}=\pi\alpha$ and different strain magnitudes (as in Fig.~\ref{fig2}(b)). The approximate values were obtained using Eq.~(\ref{AABs}).} \label{table} \end{table} On the other hand, Fig.~\ref{fig2}(c) illustrates $R_{p}$ for uniaxial strains with the same magnitude ($\epsilon=0.04$), but along different directions. The most striking feature is that $R_{p}$ only vanishes for elongations along the $x$- and $y$-axes. This is because the non-diagonal component of the graphene conductivity is nonzero for uniaxial strain along any other direction ($\sigma_{xy}=-\sigma_{0}\beta\epsilon(1+\nu)\sin2\phi$). As mentioned above, under an incident $p$-polarized wave, if $\sigma_{xy}\neq0$ then the reflected wave is not strictly $p$-polarized since its electric field also presents a small component transverse to the incidence plane (see Eq.~(\ref{Rs})), which inhibits the perfect suppression of $R_{p}$. In short, the Brewster effect does not occur in strained graphene whenever none of the principal axes of the strain tensor is normal to the incidence plane. Finally, we extend our previous general discussion of the total internal reflection phenomenon to the case in which the interstitial 2D material is strained graphene. Figure~\ref{fig3}(b) shows $R_{p}$ around the critical angle $\theta_{c}$ for $n_{1}=1.5$, $n_{2}=1$ and graphene uniaxially strained along the $x$-axis. For $\theta_{i}\geq\theta_{c}$, it can be noted that $R_{p}$ increases for stronger elongation. As mentioned above, this behavior is expected since increasing strain reduces the conductivity $\sigma_{xx}$ and, therefore, $R_{p}$ tends to $1$, which is its value in the absence of graphene. As a consequence, under these basic considerations the $p$-polarization absorbance $A_{p}$ should decrease with increasing $\epsilon$ (see Fig.~\ref{fig4}(b)). However, Dong and coauthors \cite{Tian2014} experimentally observed an opposite dependence of $A_{p}$ on the strain magnitude $\epsilon$. Hence, according to the discussion presented here, those results cannot be understood in terms of strain-induced effects on the graphene conductivity. In turn, Fig.~\ref{fig3}(c) displays the step shape of $R_{s}$ around the critical angle $\theta_{c}$, which exhibits a decrease (an approximately downward-shifted curve) for stronger uniaxial strains along the $x$-axis.
From Eq.~(\ref{RsAc}) and considering that $\sigma_{yy}=\sigma_{0}(1+\beta\epsilon(1+\nu))$, the value of $R_{s}$ at the critical angle $\theta_{c}$ as a function of $\epsilon$ is \begin{equation}\label{Rs-SG} \overline{R}_{s}(\epsilon)\approx 1-4 z_{0}\sigma_{0}(1+\beta\epsilon(1+\nu))/\sqrt{n_{1}^{2}-n_{2}^{2}}, \end{equation} which predicts that $\overline{R}_{s}$ decreases as $\epsilon$ increases. In particular, for $n_{1}=1.5$, $n_{2}=1$ and $z_{0}\sigma_{0}=\pi\alpha$, from this relation it follows that $\overline{R}_{s}$ decreases by approximately $0.003$ for each $1\%$ increase of $\epsilon$. This variation is also exhibited by $R_{s}$ for incident angles slightly above the critical angle, as noted in Fig.~\ref{fig3}(c). Therefore, the experimental monitoring of $R_{s}$ under total internal reflection conditions, as done in Ref.~[\onlinecite{Tian2014}], would allow one to investigate the strain-induced effects on the graphene conductivity using Eq.~(\ref{Rs-SG}) or the more general expression~(\ref{RsAc}). It is worth mentioning that this study can be complemented by considering the degree of freedom associated with the parameter $\phi$. Figure~\ref{fig3}(d) shows $R_{s}$ for uniaxial strains with $\epsilon=0.04$, but different directions. It can be noted that $\overline{R}_{s}$ has a minimum when the elongation is parallel to the incidence plane, e.g. $\overline{R}_{s}\approx0.911$ for $\phi=0\text{\textdegree}$, while $\overline{R}_{s}\approx0.932$ for $\phi=90\text{\textdegree}$. This means an absorbance variation greater than $2\%$ due exclusively to the change of orientation of the incidence plane with respect to the strain direction, as seen in Fig.~\ref{fig4}(d). \section{Conclusion}\label{V} In summary, we derived the Fresnel coefficients for oblique incidence of linearly polarized light through two dielectric media with an anisotropic 2D conducting material at the interface. Based on these generalized coefficients, it was demonstrated that, if the incidence plane is not parallel to the principal axes of the optical conductivity tensor of the 2D material, then the light scattering problem cannot be decoupled into pure $s$- and $p$-polarized waves. In other words, whenever the non-diagonal conductivity component $\sigma_{xy}$ is not zero, the incident polarization is not preserved because the polarization plane changes. We analyzed the modifications of the Brewster effect and the total internal reflection due to the anisotropic 2D material. For the former, an analytical expression for the modified Brewster angle has been obtained for low conductivity, which predicts an up-shift (down-shift) of the Brewster angle if the refractive index of the substrate is greater (smaller) than that of the incident medium. This expression also reproduces a previous result found by Maj\'erus \emph{et al.}~\cite{Lambin2018} in a certain limiting case. Moreover, it is demonstrated that for $\sigma_{xy}\neq0$ the perfect suppression of the $p$-polarization reflectance is inhibited and, thus, the Brewster effect does not occur strictly. To exemplify our findings, we considered uniaxially strained graphene as the anisotropic 2D material. The uniaxial-strain effects on the Brewster angle and on the reflectance and absorbance under total internal reflection were estimated as a function of both the strain magnitude and the strain direction.
The presented results also suggest that measurements of the modified Brewster angle and of the $s$-polarization reflectance around the critical angle could be used to investigate the strain-induced effects on the optical conductivity of graphene and, in general, the optical anisotropy of 2D materials. \begin{acknowledgments} This work is partially supported by Conacyt-Mexico under Grant No. 254414. \end{acknowledgments}
\section*{Acknowledgements} We would like to thank C.G.~Boyd, D.E.~Brahm, A.~Brignole, W. Buchm\"uller, J.M.~Cornwall, K.~Farakos, V.~Jain, C.~Kounnas and M.E.~Shaposhnikov for useful discussions and suggestions. \newpage \section*{References} \begin{enumerate} \item \label{kl1} D.A.~Kirzhnits and A.D.~Linde, Phys. Lett. 72B (1972) 471. \item \label{old} S. Weinberg, Phys. Rev. D9 (1974) 3357; L. Dolan and R. Jackiw, Phys. Rev. D9 (1974) 3320; D.A.~Kirzhnits and A.D.~Linde, Sov. Phys. JETP 40 (1975) 628 and Ann. Phys. 101 (1976) 195. \item \label{sphal} F.R.~Klinkhamer and N.S.~Manton, Phys. Rev. D30 (1984) 2212; V.A.~Kuzmin, V.A.~Rubakov and M.E.~Shaposhnikov, Phys. Lett. B155 (1985) 36. \item \label{reviews} A.D.~Dolgov, Phys. Rep. 222 (1992) 309; M.E.~Shaposhnikov, in {\em `Proceedings of the 1991 Summer School in High Energy Physics and Cosmology'}, Trieste, 17 June--9 August 1991, E.~Gava et al., eds. (World Scientific, Singapore, 1992), Vol.~1, p.~338; A.G.~Cohen, D.B.~Kaplan and A.E.~Nelson, San Diego preprint UCSD-PTH-93-02, BUHEP-93-4. \item \label{books} D.J.~Gross, R.D.~Pisarski and L.G.~Yaffe, Rev. Mod. Phys. 53 (1981) 43; J.I.~Kapusta, {\em Finite-temperature Field Theory} (Cambridge University Press, 1989); A.D. Linde, {\em Particle Physics and Inflationary Cosmology} (Harwood, New York, 1990). \item \label{fendley} P.~Fendley, Phys. Lett. B196 (1987) 175. \item \label{ah} G.W.~Anderson and L.J.~Hall, Phys. Rev. D45 (1992) 2685. \item \label{carrington} M.E. Carrington, Phys. Rev. D45 (1992) 2933. \item \label{shapo} M.E.~Shaposhnikov, Phys. Lett. B277 (1992) 324 and (E) B282 (1992) 483. \item \label{brahm} D.E.~Brahm and S.D.H.~Hsu, Caltech preprints CALT-68-1762, HUTP-91-A064 and CALT-68-1705, HUTP-91-A063. \item \label{dine} M.~Dine, R.G.~Leigh, P.~Huet, A.~Linde and D.~Linde, Phys. Lett. B283 (1992) 319 and Phys. Rev. D46 (1992) 550. \item \label{arnold} P.~Arnold, Phys. Rev. D46 (1992) 2628. \item \label{eqz} J.R.~Espinosa, M.~Quir\'os and F.~Zwirner, Phys. Lett. B291 (1992) 115. \item \label{bbh} C.G.~Boyd, D.E.~Brahm and S.D.~Hsu, Caltech preprint CALT-68-1795, HUTP-92-A027, EFI-92-22. \item \label{buch} W.~Buchm\"uller and T.~Helbig, Int. J. Mod. Phys. C3 (1992) 799; W.~Buchm\"uller, T.~Helbig and D.~Walliser, preprint DESY 92-151. \item \label{klimov} O.K.~Kalashnikov and V.V.~Klimov, Phys. Lett. 95B (1980) 423. \item \label{kal} O.K.~Kalashnikov, Phys. Lett. B279 (1992) 367. \item \label{buchbis} W.~Buchm\"uller, Z.~Fodor, T.~Helbig and D.~Walliser, preprint DESY 93-021. \item \label{latticebis} A.~Billoire, G.~Lazarides and Q.~Shafi, Phys. Lett. B103 (1981) 450; T.~De~Grand and D.~Toussaint, Phys. Rev. D25 (1982) 526; G.~Lazarides and S.~Sarantakos, Phys. Rev. D31 (1985) 389; G.~Koutsoumbas, K.~Farakos and S.~Sarantakos, Phys. Lett. B189 (1986) 173; A.~Irb\"ack and C.~Peterson, Phys. Lett. B174 (1986) 99; J.~Ambjorn, K.~Farakos and M.~Shaposhnikov, preprint CERN-TH.6508/92; J.E.~Mandula and M.~Ogilvie, Phys. Lett. B201 (1988) 117; M.~Teper, Phys. Lett. B289 (1992) 115; C.~Bernard, Phys. Lett. B108 (1982) 431; J.E.~Mandula and M.~Ogilvie, Phys. Lett. B185 (1987) 127. \item \label{corn} J.M.~Cornwall, W.-S. Hou and J.E.~King, Phys. Lett. B153 (1985) 173. \item \label{cornbis} J.M.~Cornwall, preprint UCLA/92/TEP/51. \item \label{magnetic} T.S.~Bir\'o and B.~M\"uller, Duke University preprint DUKE-TH-92-42. \item \label{boydbis} C.G.~Boyd, D.E.~Brahm and S.D.~Hsu, Caltech preprint CALT-68-1858, HUTP-93-A011, EFI-93-22. 
\item \label{quiros} M.~Quir\'os, Madrid preprint IEM-FT-71/93.
\item \label{pi} G.~Amelino-Camelia and S.-Y.~Pi, Phys. Rev. D47 (1993) 2356.
\item \label{ae} P.~Arnold and O.~Espinosa, Phys. Rev. D47 (1993) 3546.
\item \label{amelino} G.~Amelino-Camelia, Boston University preprint BUHEP-93-12.
\item \label{bd} J.E.~Bagnasco and M.~Dine, Santa Cruz preprint SCIPP-92-43.
\item \label{lattice} B.~Bunk, E.M.~Ilgenfritz, J.~Kripfganz and A.~Schiller, Phys. Lett. B284 (1992) 371 and Bielefeld preprint BI-TP-92-46; K.~Kajantie, K.~Rummukainen and M.~Shaposhnikov, preprint CERN-TH.6901/93.
\item \label{oneovern} V.~Jain, Nucl. Phys. B394 (1993) 707 and Max-Planck-Institut preprint MPI-Ph/92-72; V.~Jain and A.~Papadopoulos, Phys. Lett. B303 (1993) 315 and Berkeley preprint LBL-33833 (1992); M.~Carena and C.E.M.~Wagner, Max-Planck-Institut preprint MPI-Ph/92-67.
\item \label{epsilon} P.~Ginsparg, Nucl. Phys. B170 (1980) 388; J.~March-Russell, Princeton preprint PUPT-92-1328, LBL-32540.
\item \label{threed} J.~Lauer, Heidelberg preprint HD-THEP-92-59; A.~Jakov\'ac and A.~Patk\'os, Bielefeld preprint BI-TP 93/18.
\item \label{wetterich} N.~Tetradis and C.~Wetterich, preprint DESY 92-093; M.~Reuter, N.~Tetradis and C.~Wetterich, preprint DESY 93-004.
\end{enumerate}
\newpage
\section*{Figure captions}
\begin{itemize}
\item[Fig.1:] The temperature-dependent effective potential of the Standard Model, at the critical temperature and for $m_h = 60 \; {\rm GeV}$, $m_t = 150 \; {\rm GeV}$. The long-dashed line corresponds to the na\"{\i}ve one-loop approximation, neglecting the scalar loops; the short-dashed line corresponds to the approximation of eq.~(\ref{carsm}), as in ref.~[\ref{carrington}]; the solid lines correspond to the approach discussed in the text, for $\gamma = 1,1.5,2,3$.
\item[Fig.2:] The ratio $v(T_c)/T_c$, as a function of the Higgs mass $m_h$, for $m_t = 150 \; {\rm GeV}$. The long-dashed line corresponds to the one-loop approximation, the short-dashed line to the approximation of eq.~(\ref{carsm}), as in ref.~[\ref{carrington}], and the solid lines to the approach discussed in the text, for $\gamma = 1,1.5,2,2.5,3$.
\end{itemize}
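As a rough guide to both figures, it may help to recall the generic high-temperature parametrization of the one-loop potential (an illustrative form only; the coefficients $D$, $E$, $T_0$ and $\lambda_T$ are the standard high-temperature ones and are not those of eq.~(\ref{carsm}) or of the approach discussed in the text):
\begin{displaymath}
% illustrative high-temperature form; D, E, T_0, \lambda_T are generic coefficients, not the expressions used in the text
V(\phi,T) \simeq D\,(T^2-T_0^2)\,\phi^2 \;-\; E\,T\,\phi^3 \;+\; \frac{\lambda_T}{4}\,\phi^4 \, ,
\end{displaymath}
whose degenerate minima at $T=T_c$ give $v(T_c)/T_c \simeq 2E/\lambda_{T_c}$. Since $\lambda_{T_c}$ grows with $m_h^2$ while $E$ does not, the ratio shown in Fig.~2 decreases with increasing Higgs mass in any of the approximations displayed.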
\end{document}
ST 3431 7140 M 3431 7064 D ST 3578 7140 M 3578 7064 D ST 3725 7140 M 3725 7064 D ST 3872 7140 M 3872 7064 D ST 4019 7140 M 4019 7026 D 4019 7026 D ST 4166 7140 M 4166 7064 D ST 4313 7140 M 4313 7064 D ST 4459 7140 M 4459 7064 D ST 4606 7140 M 4606 7064 D ST 4753 7140 M 4753 7064 D ST 4900 7140 M 4900 7064 D ST 5047 7140 M 5047 7064 D ST 5194 7140 M 5194 7064 D ST 5341 7140 M 5341 7064 D ST 5488 7140 M 5488 7026 D 5488 7026 D ST 5634 7140 M 5634 7064 D ST 5781 7140 M 5781 7064 D ST 5928 7140 M 5928 7064 D ST 6075 7140 M 6075 7064 D ST 6222 7140 M 6222 7064 D ST 6369 7140 M 6369 7064 D ST 6516 7140 M 6516 7064 D ST 6663 7140 M 6663 7064 D ST 6810 7140 M 6810 7064 D ST 6956 7140 M 6956 7026 D 6956 7026 D ST 7103 7140 M 7103 7064 D ST 7250 7140 M 7250 7064 D ST 7397 7140 M 7397 7064 D ST 7544 7140 M 7544 7064 D ST 7691 7140 M 7691 7064 D ST 7838 7140 M 7838 7064 D ST 7985 7140 M 7985 7064 D ST 8131 7140 M 8131 7064 D ST 8278 7140 M 8278 7064 D ST 8425 7140 M 8425 7026 D 8425 7026 D ST 8572 7140 M 8572 7064 D ST 8719 7140 M 8719 7064 D ST 8866 7140 M 8866 7064 D ST 9013 7140 M 9013 7064 D ST 9160 7140 M 9160 7064 D ST 9306 7140 M 9306 7064 D ST 9453 7140 M 9453 7064 D ST 9600 7140 M 9600 7064 D ST 9747 7140 M 9747 7064 D ST 9894 7140 M 9894 7026 D 9894 7026 D ST 9894 1020 M 9894 7140 D ST 9894 1020 M 9756 1020 D ST 9894 1173 M 9802 1173 D ST 9894 1326 M 9802 1326 D ST 9894 1479 M 9802 1479 D ST 9894 1632 M 9802 1632 D ST 9894 1785 M 9802 1785 D ST 9894 1938 M 9802 1938 D ST 9894 2091 M 9802 2091 D ST 9894 2244 M 9802 2244 D ST 9894 2397 M 9802 2397 D ST 9894 2550 M 9756 2550 D ST 9894 2703 M 9802 2703 D ST 9894 2856 M 9802 2856 D ST 9894 3009 M 9802 3009 D ST 9894 3162 M 9802 3162 D ST 9894 3315 M 9802 3315 D ST 9894 3468 M 9802 3468 D ST 9894 3621 M 9802 3621 D ST 9894 3774 M 9802 3774 D ST 9894 3927 M 9802 3927 D ST 9894 4080 M 9756 4080 D ST 9894 4233 M 9802 4233 D ST 9894 4386 M 9802 4386 D ST 9894 4539 M 9802 4539 D ST 9894 4692 M 9802 4692 D ST 9894 4845 M 9802 4845 D ST 9894 4998 M 9802 4998 D ST 9894 5151 M 9802 5151 D ST 9894 5304 M 9802 5304 D ST 9894 5457 M 9802 5457 D ST 9894 5610 M 9756 5610 D ST 9894 5763 M 9802 5763 D ST 9894 5916 M 9802 5916 D ST 9894 6069 M 9802 6069 D ST 9894 6222 M 9802 6222 D ST 9894 6375 M 9802 6375 D ST 9894 6528 M 9802 6528 D ST 9894 6681 M 9802 6681 D ST 9894 6834 M 9802 6834 D ST 9894 6987 M 9802 6987 D ST 9894 7140 M 9756 7140 D ST 2550 7131 M 2587 7093 D 2623 7055 D 2660 7018 D 2666 7013 D ST 2783 6894 M 2807 6870 D 2844 6834 D 2880 6797 D 2901 6777 D ST 3020 6661 M 3027 6654 D 3064 6619 D 3101 6584 D 3138 6549 D 3141 6545 D ST 3264 6431 M 3284 6412 D 3321 6378 D 3358 6344 D 3386 6318 D ST 3511 6206 M 3541 6178 D 3578 6147 D 3615 6114 D 3637 6095 D ST 3764 5984 M 3798 5956 D 3835 5924 D 3872 5894 D 3894 5875 D ST 4024 5768 M 4056 5742 D 4092 5712 D 4129 5682 D 4156 5661 D ST 4289 5556 M 4313 5539 D 4349 5510 D 4386 5481 D 4423 5454 D 4425 5453 D ST 4561 5351 M 4570 5345 D 4606 5317 D 4643 5291 D 4680 5264 D 4699 5250 D ST 4840 5151 M 4863 5135 D 4900 5109 D 4937 5084 D 4974 5058 D 4981 5053 D ST 5124 4957 M 5157 4936 D 5194 4912 D 5231 4889 D 5267 4865 D 5270 4863 D ST 5415 4772 M 5451 4750 D 5488 4728 D 5524 4705 D 5561 4683 D 5565 4682 D ST 5714 4593 M 5745 4576 D 5781 4554 D 5818 4534 D 5855 4512 D 5866 4506 D ST 6019 4423 M 6038 4412 D 6075 4392 D 6112 4373 D 6149 4353 D 6173 4340 D ST 6330 4260 M 6332 4259 D 6369 4240 D 6406 4222 D 6442 4203 D 6479 4186 D 6488 4182 D ST 6647 4106 M 6663 4098 D 6699 4081 D 6736 4064 D 6773 4047 D 6807 4031 D ST 
6970 3960 M 6993 3949 D 7030 3933 D 7067 3918 D 7103 3902 D 7133 3889 D ST 7298 3821 M 7324 3811 D 7360 3796 D 7397 3781 D 7434 3767 D 7463 3756 D ST 7630 3691 M 7654 3682 D 7691 3668 D 7728 3655 D 7764 3640 D 7797 3628 D ST 7966 3568 M 7985 3561 D 8021 3548 D 8058 3535 D 8095 3522 D 8131 3510 D 8135 3509 D ST 8305 3451 M 8315 3447 D 8352 3434 D 8388 3422 D 8425 3410 D 8462 3399 D 8475 3394 D ST 8647 3338 M 8682 3326 D 8719 3315 D 8756 3303 D 8792 3292 D 8818 3283 D ST 8989 3229 M 9013 3222 D 9049 3211 D 9086 3199 D 9123 3188 D 9160 3176 D 9162 3175 D ST 9333 3122 M 9343 3119 D 9380 3108 D 9417 3097 D 9453 3086 D 9490 3074 D 9506 3069 D ST 9677 3016 M 9710 3006 D 9747 2994 D 9784 2982 D 9821 2971 D 9850 2962 D ST 2550 5571 M 2587 5544 D ST 2622 5516 M 2623 5515 D 2659 5489 D ST 2696 5462 M 2697 5460 D 2731 5435 D ST 2768 5407 M 2770 5406 D 2805 5381 D ST 2843 5354 M 2844 5352 D 2879 5326 D ST 2916 5300 M 2917 5300 D 2953 5273 D ST 2991 5247 M 3027 5221 D 3027 5221 D ST 3065 5195 M 3101 5170 D 3103 5168 D ST 3141 5143 M 3174 5119 D 3178 5116 D ST 3216 5091 M 3248 5069 D 3254 5065 D ST 3292 5039 M 3321 5019 D 3331 5013 D ST 3369 4988 M 3395 4971 D 3407 4963 D ST 3446 4938 M 3468 4924 D 3484 4912 D ST 3523 4888 M 3541 4876 D 3562 4862 D ST 3600 4838 M 3615 4829 D 3639 4813 D ST 3679 4789 M 3688 4783 D 3718 4764 D ST 3757 4740 M 3762 4737 D 3796 4715 D ST 3836 4691 M 3872 4670 D 3876 4668 D ST 3915 4643 M 3945 4625 D 3955 4620 D ST 3994 4596 M 4019 4582 D 4035 4573 D ST 4075 4549 M 4092 4538 D 4114 4526 D ST 4155 4502 M 4166 4496 D 4195 4479 D ST 4235 4456 M 4239 4454 D 4276 4433 D 4276 4433 D ST 4317 4410 M 4349 4392 D 4358 4388 D ST 4398 4365 M 4423 4351 D 4440 4342 D ST 4480 4321 M 4496 4312 D 4522 4298 D ST 4562 4276 M 4570 4272 D 4604 4254 D ST 4645 4232 M 4680 4214 D 4687 4211 D ST 4727 4189 M 4753 4176 D 4769 4168 D ST 4812 4146 M 4827 4138 D 4854 4125 D ST 4895 4103 M 4900 4101 D 4937 4083 D 4937 4083 D ST 4980 4062 M 5010 4046 D 5021 4041 D ST 5064 4021 M 5084 4011 D 5106 3999 D ST 5149 3980 M 5157 3975 D 5191 3960 D ST 5233 3939 M 5267 3923 D 5276 3919 D ST 5319 3899 M 5341 3889 D 5362 3880 D ST 5404 3860 M 5414 3856 D 5447 3840 D ST 5491 3821 M 5524 3807 D 5534 3802 D ST 5577 3783 M 5598 3774 D 5621 3764 D ST 5664 3745 M 5671 3742 D 5708 3726 D ST 5751 3708 M 5781 3695 D 5795 3689 D ST 5839 3671 M 5855 3665 D 5883 3653 D ST 5927 3635 M 5928 3634 D 5965 3619 D 5970 3617 D ST 6014 3600 M 6038 3589 D 6059 3582 D ST 6103 3564 M 6112 3561 D 6147 3547 D ST 6191 3530 M 6222 3518 D 6237 3513 D ST 6281 3496 M 6295 3490 D 6325 3479 D ST 6370 3463 M 6406 3450 D 6415 3446 D ST 6459 3429 M 6479 3423 D 6505 3414 D ST 6550 3398 M 6552 3397 D 6589 3383 D 6595 3381 D ST 6639 3366 M 6663 3358 D 6685 3351 D ST 6730 3334 M 6736 3332 D 6773 3320 D 6776 3319 D ST 6822 3304 M 6846 3296 D 6867 3290 D ST 6912 3274 M 6920 3272 D 6956 3260 D 6958 3260 D ST 7004 3246 M 7030 3237 D 7049 3230 D ST 7096 3216 M 7103 3214 D 7140 3203 D 7141 3203 D ST 7188 3189 M 7213 3180 D 7233 3175 D ST 7280 3161 M 7287 3159 D 7324 3149 D 7326 3148 D ST 7373 3134 M 7397 3127 D 7419 3121 D ST 7466 3108 M 7470 3107 D 7507 3097 D 7512 3096 D ST 7559 3082 M 7581 3077 D 7605 3070 D ST 7652 3058 M 7654 3057 D 7691 3048 D 7698 3046 D ST 7745 3035 M 7764 3029 D 7792 3022 D ST 7839 3011 M 7874 3002 D 7885 2999 D ST 7933 2988 M 7948 2985 D 7980 2976 D ST 8027 2966 M 8058 2959 D 8074 2955 D ST 8122 2945 M 8131 2943 D 8168 2935 D 8169 2935 D ST 8216 2924 M 8242 2918 D 8264 2914 D ST 8311 2904 M 8315 2903 D 8352 2896 D 8359 2895 D ST 8407 
2885 M 8425 2882 D 8455 2875 D ST 8502 2866 M 8535 2860 D 8550 2858 D ST 8598 2849 M 8609 2847 D 8646 2841 D ST 8693 2833 M 8719 2827 D 8741 2824 D ST 8790 2816 M 8792 2816 D 8829 2810 D 8838 2808 D ST 8885 2801 M 8903 2799 D 8934 2794 D ST 8982 2787 M 9013 2783 D 9030 2780 D ST 9079 2773 M 9086 2772 D 9123 2767 D 9127 2766 D ST 9176 2760 M 9196 2758 D 9223 2754 D ST 9272 2749 M 9306 2745 D 9321 2743 D ST 9369 2738 M 9380 2737 D 9417 2733 D 9418 2733 D ST 9467 2727 M 9490 2725 D 9515 2722 D ST 9564 2718 M 9600 2715 D 9612 2714 D ST 9661 2710 M 9674 2709 D 9710 2706 D ST 9758 2703 M 9784 2701 D 9807 2699 D ST 9856 2696 M 9857 2696 D 9894 2694 D ST 2550 5265 M 2587 5232 D 2623 5199 D 2660 5166 D 2697 5135 D 2734 5103 D 2770 5071 D 2807 5040 D 2844 5009 D 2880 4978 D 2917 4947 D 2954 4917 D 2991 4887 D 3027 4857 D 3064 4828 D 3101 4799 D 3138 4770 D 3174 4741 D 3211 4712 D 3248 4684 D 3284 4655 D 3321 4628 D 3358 4600 D 3395 4573 D 3431 4545 D 3468 4519 D 3505 4492 D 3541 4466 D 3578 4439 D 3615 4413 D 3652 4387 D ST 3652 4387 M 3688 4361 D 3725 4335 D 3762 4311 D 3798 4285 D 3835 4260 D 3872 4235 D 3909 4211 D 3945 4186 D 3982 4162 D 4019 4138 D 4056 4114 D 4092 4090 D 4129 4067 D 4166 4043 D 4202 4020 D 4239 3997 D 4276 3974 D 4313 3951 D 4349 3929 D 4386 3907 D 4423 3884 D 4459 3862 D 4496 3840 D 4533 3819 D 4570 3796 D 4606 3775 D 4643 3754 D 4680 3733 D 4716 3712 D 4753 3690 D ST 4753 3690 M 4790 3670 D 4827 3650 D 4863 3629 D 4900 3609 D 4937 3588 D 4974 3568 D 5010 3548 D 5047 3528 D 5084 3508 D 5120 3488 D 5157 3469 D 5194 3450 D 5231 3430 D 5267 3411 D 5304 3393 D 5341 3373 D 5377 3355 D 5414 3335 D 5451 3317 D 5488 3299 D 5524 3280 D 5561 3262 D 5598 3245 D 5634 3226 D 5671 3208 D 5708 3191 D 5745 3172 D 5781 3155 D 5818 3138 D 5855 3120 D ST 5855 3120 M 5892 3103 D 5928 3086 D 5965 3068 D 6002 3052 D 6038 3035 D 6075 3018 D 6112 3001 D 6149 2985 D 6185 2968 D 6222 2952 D 6259 2936 D 6295 2919 D 6332 2903 D 6369 2887 D 6406 2871 D 6442 2855 D 6479 2840 D 6516 2823 D 6552 2808 D 6589 2793 D 6626 2777 D 6663 2762 D 6699 2747 D 6736 2732 D 6773 2716 D 6810 2701 D 6846 2687 D 6883 2671 D 6920 2657 D 6956 2643 D ST 6956 2643 M 6993 2629 D 7030 2614 D 7067 2600 D 7103 2586 D 7140 2571 D 7177 2557 D 7213 2543 D 7250 2530 D 7287 2515 D 7324 2502 D 7360 2489 D 7397 2476 D 7434 2461 D 7470 2448 D 7507 2436 D 7544 2423 D 7581 2409 D 7617 2396 D 7654 2384 D 7691 2372 D 7728 2358 D 7764 2346 D 7801 2334 D 7838 2322 D 7874 2309 D 7911 2297 D 7948 2286 D 7985 2274 D 8021 2261 D 8058 2250 D ST 8058 2250 M 8095 2239 D 8131 2228 D 8168 2216 D 8205 2205 D 8242 2194 D 8278 2183 D 8315 2172 D 8352 2161 D 8388 2151 D 8425 2140 D 8462 2130 D 8499 2120 D 8535 2109 D 8572 2100 D 8609 2090 D 8646 2080 D 8682 2071 D 8719 2061 D 8756 2052 D 8792 2043 D 8829 2034 D 8866 2025 D 8903 2016 D 8939 2007 D 8976 1999 D 9013 1991 D 9049 1982 D 9086 1975 D 9123 1967 D 9160 1958 D ST 9160 1958 M 9196 1951 D 9233 1944 D 9270 1936 D 9306 1930 D 9343 1923 D 9380 1916 D 9417 1909 D 9453 1902 D 9490 1896 D 9527 1890 D 9564 1884 D 9600 1879 D 9637 1873 D 9674 1868 D 9710 1863 D 9747 1857 D 9784 1852 D 9821 1848 D 9857 1843 D 9894 1839 D ST 2550 4899 M 2572 4878 D 2594 4857 D 2616 4836 D 2638 4815 D 2660 4795 D 2682 4775 D 2704 4754 D 2727 4733 D 2750 4712 D 2772 4692 D 2795 4672 D 2817 4651 D 2839 4632 D 2862 4611 D 2884 4591 D 2907 4571 D 2931 4550 D 2953 4530 D 2976 4510 D 2999 4490 D 3021 4470 D 3044 4450 D 3068 4430 D 3091 4409 D 3114 4390 D 3136 4370 D 3160 4350 D 3183 4330 D 3206 4311 D 3229 4290 D ST 3229 4290 M 3253 
4271 D 3276 4250 D 3299 4231 D 3322 4212 D 3346 4191 D 3369 4172 D 3393 4152 D 3417 4133 D 3440 4113 D 3463 4093 D 3486 4074 D 3511 4055 D 3534 4035 D 3557 4016 D 3582 3996 D 3605 3977 D 3628 3958 D 3653 3938 D 3676 3919 D 3701 3899 D 3724 3880 D 3748 3861 D 3773 3842 D 3796 3823 D 3821 3804 D 3845 3785 D 3868 3766 D 3893 3747 D 3917 3729 D 3942 3710 D ST 3942 3710 M 3966 3691 D 3991 3673 D 4015 3655 D 4040 3635 D 4065 3617 D 4090 3600 D 4114 3581 D 4139 3563 D 4164 3545 D 4189 3526 D 4215 3509 D 4239 3490 D 4265 3473 D 4291 3455 D 4315 3437 D 4341 3419 D 4366 3402 D 4391 3384 D 4417 3366 D 4442 3349 D 4468 3331 D 4494 3314 D 4519 3296 D 4545 3278 D 4571 3261 D 4597 3244 D 4622 3226 D 4648 3209 D 4674 3192 D 4699 3174 D ST 4699 3174 M 4725 3157 D 4751 3140 D 4776 3122 D 4802 3105 D 4828 3088 D 4855 3070 D 4880 3053 D 4906 3036 D 4932 3019 D 4959 3002 D 4985 2985 D 5010 2967 D 5036 2950 D 5063 2933 D 5089 2916 D 5116 2899 D 5141 2882 D 5167 2864 D 5194 2847 D 5220 2831 D 5246 2813 D 5272 2796 D 5299 2778 D 5325 2762 D 5352 2745 D 5379 2727 D 5404 2710 D 5431 2694 D 5457 2676 D 5484 2659 D ST 5484 2659 M 5511 2642 D 5538 2625 D 5563 2608 D 5590 2591 D 5617 2574 D 5644 2557 D 5671 2540 D 5698 2522 D 5724 2506 D 5751 2489 D 5778 2471 D 5805 2454 D 5832 2437 D 5858 2420 D 5884 2403 D 5911 2386 D 5938 2368 D 5965 2351 D 5991 2334 D 6018 2316 D 6045 2299 D 6070 2282 D 6097 2263 D 6123 2246 D 6149 2229 D 6175 2211 D 6201 2193 D 6227 2176 D 6253 2157 D 6278 2140 D ST 6278 2140 M 6304 2122 D 6330 2103 D 6355 2085 D 6380 2068 D 6406 2049 D 6430 2030 D 6456 2011 D 6480 1993 D 6505 1975 D 6528 1955 D 6552 1936 D 6577 1918 D 6600 1898 D 6623 1879 D 6647 1858 D 6670 1839 D 6692 1819 D 6714 1799 D 6736 1779 D 6758 1757 D 6780 1737 D 6801 1716 D 6822 1695 D 6841 1674 D 6862 1651 D 6882 1630 D 6900 1608 D 6918 1585 D 6937 1562 D 6954 1538 D ST 6954 1538 M 6971 1515 D 6988 1491 D 7004 1467 D 7019 1442 D 7034 1417 D 7047 1391 D 7062 1367 D 7074 1340 D 7086 1315 D 7097 1288 D 7108 1263 D 7119 1236 D 7128 1210 D 7138 1183 D 7145 1156 D 7153 1129 D 7160 1102 D 7166 1075 D 7172 1048 D 7177 1020 D ST 2550 4347 M 2566 4332 D 2581 4317 D 2597 4301 D 2612 4286 D 2627 4270 D 2643 4254 D 2659 4239 D 2674 4224 D 2690 4208 D 2705 4192 D 2720 4177 D 2736 4162 D 2751 4145 D 2767 4130 D 2781 4115 D 2797 4098 D 2812 4083 D 2828 4068 D 2843 4051 D 2858 4036 D 2873 4021 D 2889 4005 D 2904 3989 D 2920 3974 D 2934 3958 D 2950 3942 D 2965 3926 D 2981 3911 D 2996 3895 D 3011 3879 D ST 3011 3879 M 3026 3864 D 3042 3848 D 3057 3832 D 3073 3817 D 3087 3801 D 3103 3785 D 3118 3770 D 3133 3754 D 3149 3738 D 3163 3722 D 3179 3707 D 3194 3691 D 3210 3675 D 3224 3660 D 3240 3643 D 3255 3628 D 3270 3613 D 3286 3597 D 3300 3581 D 3316 3565 D 3331 3550 D 3347 3534 D 3362 3518 D 3377 3503 D 3392 3486 D 3407 3471 D 3423 3456 D 3437 3439 D 3453 3424 D 3468 3408 D ST 3468 3408 M 3484 3393 D 3499 3377 D 3515 3361 D 3529 3346 D 3545 3329 D 3560 3314 D 3576 3299 D 3590 3282 D 3605 3267 D 3621 3251 D 3636 3235 D 3652 3220 D 3666 3204 D 3682 3189 D 3698 3173 D 3713 3157 D 3729 3142 D 3743 3126 D 3759 3110 D 3774 3095 D 3790 3079 D 3805 3063 D 3821 3048 D 3836 3032 D 3851 3016 D 3867 3001 D 3882 2986 D 3898 2969 D 3914 2954 D 3928 2939 D ST 3928 2939 M 3944 2923 D 3960 2907 D 3975 2892 D 3991 2876 D 4007 2861 D 4022 2845 D 4037 2829 D 4053 2814 D 4069 2799 D 4085 2784 D 4101 2767 D 4115 2752 D 4131 2737 D 4147 2721 D 4163 2706 D 4179 2691 D 4195 2674 D 4211 2659 D 4226 2644 D 4242 2629 D 4257 2613 D 4273 2598 D 4289 2582 D 4304 2566 D 4320 2551 D 4336 
2536 D 4351 2519 D 4366 2504 D 4382 2489 D 4397 2472 D ST 4397 2472 M 4413 2457 D 4428 2442 D 4444 2426 D 4458 2410 D 4473 2394 D 4489 2379 D 4504 2362 D 4518 2346 D 4533 2331 D 4548 2314 D 4562 2298 D 4577 2282 D 4592 2266 D 4606 2250 D 4620 2234 D 4634 2217 D 4648 2200 D 4663 2184 D 4676 2168 D 4690 2151 D 4703 2134 D 4716 2118 D 4729 2100 D 4742 2083 D 4754 2066 D 4767 2048 D 4779 2031 D 4791 2013 D 4803 1996 D 4816 1979 D ST 4816 1979 M 4827 1960 D 4838 1943 D 4850 1925 D 4861 1906 D 4871 1889 D 4882 1871 D 4893 1852 D 4903 1834 D 4914 1816 D 4923 1797 D 4933 1779 D 4943 1761 D 4953 1741 D 4963 1723 D 4971 1704 D 4981 1685 D 4989 1667 D 4998 1647 D 5007 1629 D 5015 1610 D 5024 1590 D 5031 1571 D 5040 1552 D 5047 1533 D 5054 1514 D 5062 1494 D 5069 1475 D 5075 1456 D 5082 1436 D 5089 1417 D ST 5089 1417 M 5095 1397 D 5101 1378 D 5107 1358 D 5113 1338 D 5118 1319 D 5124 1299 D 5129 1279 D 5134 1260 D 5139 1239 D 5144 1220 D 5147 1200 D 5152 1180 D 5156 1160 D 5160 1140 D 5163 1120 D 5167 1101 D 5171 1080 D 5173 1060 D 5177 1040 D 5179 1020 D ST 2550 3536 M 2560 3525 D 2570 3514 D 2579 3503 D 2588 3491 D 2598 3480 D 2608 3469 D 2616 3458 D 2626 3447 D 2636 3434 D 2644 3423 D 2654 3412 D 2663 3401 D 2672 3388 D 2681 3377 D 2691 3366 D 2699 3354 D 2708 3343 D 2718 3330 D 2726 3319 D 2735 3308 D 2745 3296 D 2753 3284 D 2762 3272 D 2770 3261 D 2780 3249 D 2789 3237 D 2797 3226 D 2806 3214 D 2814 3203 D 2824 3191 D ST 2824 3191 M 2833 3178 D 2841 3167 D 2850 3155 D 2858 3144 D 2867 3131 D 2876 3120 D 2884 3108 D 2894 3097 D 2903 3084 D 2911 3072 D 2920 3061 D 2928 3049 D 2937 3038 D 2945 3025 D 2954 3013 D 2962 3002 D 2971 2990 D 2980 2978 D 2988 2966 D 2997 2954 D 3005 2943 D 3014 2930 D 3022 2918 D 3030 2907 D 3038 2895 D 3047 2883 D 3056 2871 D 3064 2859 D 3073 2847 D 3081 2836 D ST 3081 2836 M 3090 2823 D 3098 2811 D 3106 2799 D 3114 2788 D 3123 2775 D 3131 2763 D 3140 2751 D 3147 2740 D 3156 2727 D 3164 2715 D 3173 2703 D 3180 2692 D 3189 2680 D 3197 2667 D 3205 2655 D 3213 2643 D 3221 2631 D 3229 2619 D 3238 2607 D 3245 2595 D 3254 2583 D 3261 2570 D 3270 2558 D 3277 2546 D 3284 2534 D 3293 2521 D 3300 2509 D 3308 2497 D 3316 2485 D 3324 2472 D ST 3324 2472 M 3331 2460 D 3338 2448 D 3347 2436 D 3354 2424 D 3362 2411 D 3369 2399 D 3376 2386 D 3384 2374 D 3391 2361 D 3398 2349 D 3406 2337 D 3413 2324 D 3420 2311 D 3428 2299 D 3434 2287 D 3441 2274 D 3448 2261 D 3456 2249 D 3462 2236 D 3469 2224 D 3477 2211 D 3483 2198 D 3490 2186 D 3496 2174 D 3503 2160 D 3510 2148 D 3516 2135 D 3523 2123 D 3529 2109 D 3535 2097 D ST 3535 2097 M 3541 2084 D 3548 2072 D 3554 2058 D 3560 2046 D 3566 2033 D 3572 2020 D 3578 2007 D 3584 1994 D 3589 1981 D 3595 1968 D 3600 1955 D 3606 1942 D 3611 1929 D 3616 1916 D 3622 1902 D 3627 1889 D 3632 1877 D 3637 1864 D 3642 1850 D 3647 1837 D 3652 1824 D 3656 1811 D 3661 1797 D 3665 1784 D 3670 1771 D 3675 1756 D 3679 1743 D 3683 1730 D 3687 1717 D 3692 1703 D ST 3692 1703 M 3696 1690 D 3701 1677 D 3704 1664 D 3708 1649 D 3712 1636 D 3715 1623 D 3719 1610 D 3723 1596 D 3726 1582 D 3730 1569 D 3734 1556 D 3737 1541 D 3740 1528 D 3743 1515 D 3747 1500 D 3750 1487 D 3753 1474 D 3756 1460 D 3758 1446 D 3762 1433 D 3764 1419 D 3767 1406 D 3769 1392 D 3772 1378 D 3774 1365 D 3776 1350 D 3779 1337 D 3781 1323 D 3783 1310 D 3785 1295 D ST 3785 1295 M 3787 1282 D 3789 1269 D 3791 1255 D 3792 1241 D 3794 1227 D 3795 1214 D 3797 1200 D 3798 1186 D 3800 1172 D 3801 1158 D 3801 1144 D 3802 1130 D 3803 1117 D 3803 1103 D 3805 1089 D 3805 1075 D 3805 1062 D 3806 1048 D 3806 1034 D 
3806 1020 D ST 2550 2091 M 2552 2086 D 2555 2081 D 2557 2076 D 2559 2071 D 2561 2066 D 2563 2060 D 2566 2055 D 2568 2050 D 2570 2045 D 2572 2040 D 2574 2035 D 2577 2030 D 2578 2025 D 2581 2020 D 2583 2015 D 2584 2008 D 2587 2003 D 2589 1998 D 2590 1993 D 2593 1988 D 2595 1983 D 2597 1978 D 2599 1973 D 2601 1968 D 2603 1962 D 2605 1957 D 2606 1951 D 2609 1946 D 2610 1941 D 2612 1936 D ST 2612 1936 M 2614 1931 D 2616 1926 D 2617 1921 D 2620 1916 D 2621 1909 D 2623 1904 D 2625 1899 D 2627 1894 D 2628 1889 D 2631 1884 D 2632 1878 D 2633 1873 D 2636 1868 D 2637 1863 D 2639 1857 D 2641 1852 D 2642 1846 D 2644 1841 D 2645 1836 D 2647 1831 D 2649 1826 D 2650 1821 D 2652 1815 D 2654 1809 D 2655 1804 D 2656 1799 D 2659 1794 D 2660 1788 D 2661 1783 D 2663 1778 D ST 2663 1778 M 2665 1773 D 2666 1768 D 2668 1762 D 2669 1756 D 2670 1751 D 2672 1746 D 2674 1741 D 2675 1735 D 2676 1730 D 2677 1725 D 2679 1720 D 2680 1714 D 2681 1709 D 2682 1703 D 2683 1698 D 2686 1693 D 2687 1687 D 2688 1682 D 2690 1677 D 2690 1672 D 2691 1666 D 2692 1661 D 2693 1655 D 2694 1650 D 2696 1644 D 2697 1639 D 2698 1634 D 2699 1628 D 2699 1623 D 2701 1618 D ST 2701 1618 M 2702 1613 D 2703 1607 D 2703 1601 D 2704 1596 D 2705 1591 D 2707 1585 D 2707 1580 D 2708 1575 D 2709 1569 D 2709 1564 D 2710 1559 D 2710 1552 D 2712 1547 D 2713 1542 D 2713 1537 D 2714 1531 D 2714 1526 D 2715 1521 D 2715 1515 D 2716 1510 D 2716 1505 D 2718 1498 D 2718 1493 D 2719 1488 D 2719 1482 D 2720 1477 D 2720 1472 D 2721 1466 D 2721 1461 D 2721 1456 D ST 2721 1456 M 2723 1449 D 2723 1444 D 2724 1439 D 2724 1434 D 2724 1428 D 2725 1423 D 2725 1418 D 2726 1412 D 2726 1407 D 2726 1401 D 2727 1395 D 2727 1390 D 2727 1385 D 2729 1379 D 2729 1374 D 2729 1369 D 2729 1363 D 2730 1358 D 2730 1353 D 2730 1346 D 2731 1341 D 2731 1336 D 2731 1330 D 2731 1325 D 2732 1319 D 2732 1314 D 2732 1309 D 2732 1303 D 2734 1297 D 2734 1292 D ST 2734 1292 M 2734 1286 D 2734 1281 D 2734 1276 D 2735 1270 D 2735 1265 D 2735 1260 D 2735 1254 D 2735 1248 D 2735 1243 D 2736 1237 D 2736 1232 D 2736 1227 D 2736 1221 D 2736 1216 D 2736 1211 D 2736 1205 D 2736 1200 D 2736 1194 D 2737 1188 D 2737 1183 D 2737 1178 D 2737 1172 D 2737 1167 D 2737 1162 D 2737 1156 D 2737 1151 D 2737 1145 D 2737 1139 D 2737 1134 D 2737 1129 D ST 2737 1129 M 2737 1123 D 2737 1118 D 2737 1113 D 2737 1107 D 2737 1102 D 2737 1097 D 2737 1090 D 2737 1085 D 2737 1080 D 2737 1074 D 2737 1069 D 2737 1064 D 2737 1058 D 2737 1053 D 2737 1048 D 2737 1041 D 2737 1036 D 2737 1031 D 2737 1025 D 2736 1020 D ST 8568 332 M 8568 115 D ST 8580 321 M 8580 126 D ST 8592 332 M 8592 115 D ST 8592 229 M 8667 229 D ST 8531 115 M 8630 115 D ST 8543 332 M 8568 321 D ST 8555 332 M 8568 311 D ST 8605 332 M 8592 311 D ST 8617 332 M 8592 321 D ST 8667 332 M 8728 321 D ST 8691 332 M 8728 311 D ST 8704 332 M 8728 301 D ST 8716 332 M 8728 270 D 8728 332 D 8531 332 D ST 8667 187 M 8654 229 D 8667 270 D 8667 187 D ST 8667 249 M 8642 229 D 8667 208 D ST 8667 239 M 8617 229 D 8667 218 D ST 8568 126 M 8543 115 D ST 8568 136 M 8555 115 D ST 8592 136 M 8605 115 D ST 8592 126 M 8617 115 D ST 8815 332 M 8815 311 D 8840 311 D 8840 332 D 8815 332 D ST 8827 332 M 8827 311 D ST 8815 321 M 8840 321 D ST 8815 259 M 8815 115 D ST 8827 249 M 8827 126 D ST 8778 259 M 8840 259 D 8840 115 D ST 8778 115 M 8877 115 D ST 8790 259 M 8815 249 D ST 8803 259 M 8815 239 D ST 8815 126 M 8790 115 D ST 8815 136 M 8803 115 D ST 8840 136 M 8852 115 D ST 8840 126 M 8864 115 D ST 8976 239 M 8963 218 D 8963 198 D 8976 177 D ST 9050 177 M 9062 198 D 9062 218 D 9050 239 D 9062 249 
D 9087 259 D 9099 259 D 9111 249 D 9099 239 D 9087 249 D ST 9000 156 M 8988 167 D 8976 187 D 8976 229 D 8988 249 D 9000 259 D 9025 259 D 9050 249 D 9062 239 D 9074 218 D 9074 198 D 9062 177 D 9050 167 D 9025 156 D 9000 156 D 8976 167 D 8963 177 D 8951 198 D 8951 218 D 8963 239 D 8976 249 D 9000 259 D ST 9025 156 M 9037 167 D 9050 187 D 9050 229 D 9037 249 D 9025 259 D ST 8963 177 M 8951 167 D 8938 146 D 8938 136 D 8951 115 D 8963 105 D 9000 95 D 9050 95 D 9087 84 D 9099 74 D ST 8963 115 M 9000 105 D 9050 105 D 9087 95 D ST 8976 43 M 8951 53 D 8938 74 D 8938 84 D 8951 105 D 8976 115 D 8938 105 D 8926 84 D 8926 74 D 8938 53 D 8976 43 D 9050 43 D 9087 53 D 9099 74 D 9099 84 D 9087 105 D 9050 115 D 8988 115 D 8951 126 D 8938 136 D ST 9198 146 M 9186 136 D 9186 126 D 9198 115 D 9210 115 D 9223 126 D 9223 136 D 9210 146 D 9198 146 D ST 9198 136 M 9198 126 D 9210 126 D 9210 136 D 9198 136 D ST 9581 290 M 9581 280 D 9593 280 D 9593 290 D 9581 290 D ST 9581 301 M 9593 301 D 9606 290 D 9606 280 D 9593 270 D 9581 270 D 9569 280 D 9569 290 D 9581 311 D 9593 321 D 9630 332 D 9680 332 D 9717 321 D 9729 311 D 9742 290 D 9742 270 D 9729 249 D 9692 229 D 9630 208 D 9606 198 D 9581 177 D 9569 146 D 9569 115 D ST 9717 311 M 9729 290 D 9729 270 D 9717 249 D ST 9680 332 M 9705 321 D 9717 290 D 9717 270 D 9705 249 D 9680 229 D 9630 208 D ST 9569 136 M 9581 146 D 9606 146 D 9668 136 D 9717 136 D 9742 146 D ST 9742 167 M 9742 146 D 9729 126 D 9717 115 D 9668 115 D 9606 146 D 9668 126 D 9717 126 D 9729 136 D ST 5366 389 M 5366 245 D ST 5378 379 M 5378 255 D ST 5329 389 M 5390 389 D 5390 245 D ST 5390 348 M 5403 368 D 5415 379 D 5440 389 D 5477 389 D 5502 379 D 5514 368 D 5526 337 D 5526 245 D ST 5502 368 M 5514 337 D 5514 255 D ST 5477 389 M 5489 379 D 5502 348 D 5502 245 D ST 5526 348 M 5539 368 D 5551 379 D 5576 389 D 5613 389 D 5638 379 D 5650 368 D 5662 337 D 5662 245 D ST 5638 368 M 5650 337 D 5650 255 D ST 5613 389 M 5625 379 D 5638 348 D 5638 245 D ST 5329 245 M 5428 245 D ST 5465 245 M 5563 245 D ST 5601 245 M 5699 245 D ST 5341 389 M 5366 379 D ST 5353 389 M 5366 368 D ST 5366 255 M 5341 245 D ST 5366 265 M 5353 245 D ST 5390 265 M 5403 245 D ST 5390 255 M 5415 245 D ST 5502 255 M 5477 245 D ST 5502 265 M 5489 245 D ST 5526 265 M 5539 245 D ST 5526 255 M 5551 245 D ST 5638 255 M 5613 245 D ST 5638 265 M 5625 245 D ST 5662 265 M 5675 245 D ST 5662 255 M 5687 245 D ST 5761 323 M 5761 194 D ST 5768 317 M 5768 200 D ST 5739 323 M 5776 323 D 5776 194 D ST 5776 255 M 5783 268 D 5790 274 D 5805 280 D 5827 280 D 5842 274 D 5850 268 D 5857 249 D 5857 194 D ST 5842 268 M 5850 249 D 5850 200 D ST 5827 280 M 5835 274 D 5842 255 D 5842 194 D ST 5739 194 M 5798 194 D ST 5820 194 M 5879 194 D ST 5746 323 M 5761 317 D ST 5753 323 M 5761 311 D ST 5761 200 M 5746 194 D ST 5761 206 M 5753 194 D ST 5776 206 M 5783 194 D ST 5776 200 M 5790 194 D ST 5842 200 M 5827 194 D ST 5842 206 M 5835 194 D ST 5857 206 M 5864 194 D ST 5857 200 M 5872 194 D ST 6278 502 M 6254 482 D 6229 451 D 6204 410 D 6192 358 D 6192 317 D 6204 265 D 6229 224 D 6254 193 D 6278 173 D ST 6229 440 M 6216 410 D 6204 368 D 6204 307 D 6216 265 D 6229 235 D ST 6254 482 M 6241 461 D 6229 430 D 6216 368 D 6216 307 D 6229 245 D 6241 214 D 6254 193 D ST 6538 430 M 6550 461 D 6550 399 D 6538 430 D 6513 451 D 6488 461 D 6451 461 D 6414 451 D 6389 430 D 6377 410 D 6365 379 D 6365 327 D 6377 296 D 6389 276 D 6414 255 D 6451 245 D 6488 245 D 6513 255 D 6538 255 D 6550 245 D 6550 327 D ST 6402 430 M 6389 410 D 6377 379 D 6377 327 D 6389 296 D 6402 276 D ST 6451 461 M 
6427 451 D 6402 420 D 6389 379 D 6389 327 D 6402 286 D 6427 255 D 6451 245 D ST 6538 317 M 6538 265 D ST 6525 327 M 6525 265 D 6513 255 D ST 6488 327 M 6587 327 D ST 6501 327 M 6525 317 D ST 6513 327 M 6525 307 D ST 6562 327 M 6550 307 D ST 6575 327 M 6550 317 D ST 6674 327 M 6810 327 D 6810 348 D 6797 368 D 6785 379 D 6748 389 D 6723 389 D 6686 379 D 6661 358 D 6649 327 D 6649 307 D 6661 276 D 6686 255 D 6723 245 D 6748 245 D 6785 255 D 6810 276 D ST 6797 337 M 6797 348 D 6785 368 D ST 6674 358 M 6661 337 D 6661 296 D 6674 276 D ST 6785 327 M 6785 358 D 6773 379 D 6748 389 D ST 6723 389 M 6698 379 D 6686 368 D 6674 337 D 6674 296 D 6686 265 D 6698 255 D 6723 245 D ST 6896 461 M 6970 276 D 6970 245 D 6884 461 D ST 6908 461 M 6983 276 D ST 6859 461 M 6946 461 D ST 7007 461 M 7081 461 D ST 6871 461 M 6896 440 D ST 6921 461 M 6908 440 D ST 6933 461 M 6908 451 D ST 7032 461 M 7057 451 D 6970 245 D ST 7069 461 M 7057 451 D ST 7143 502 M 7168 482 D 7193 451 D 7217 410 D 7230 358 D 7230 317 D 7217 265 D 7193 224 D 7168 193 D 7143 173 D ST 7193 440 M 7205 410 D 7217 368 D 7217 307 D 7205 265 D 7193 235 D ST 7168 482 M 7180 461 D 7193 430 D 7205 368 D 7205 307 D 7193 245 D 7180 214 D 7168 193 D ST 1461 3356 M 1608 3407 D ST 1461 3366 M 1608 3417 D ST 1461 3325 M 1461 3397 D ST 1461 3427 M 1461 3488 D ST 1461 3335 M 1485 3366 D ST 1461 3386 M 1473 3366 D ST 1461 3448 M 1473 3468 D 1608 3417 D 1632 3407 D 1461 3346 D ST 1461 3478 M 1473 3468 D ST 1326 3601 M 1350 3580 D 1387 3560 D 1436 3539 D 1497 3529 D 1546 3529 D 1608 3539 D 1656 3560 D 1693 3580 D 1718 3601 D ST 1399 3560 M 1436 3550 D 1485 3539 D 1559 3539 D 1608 3550 D 1644 3560 D ST 1350 3580 M 1375 3570 D 1412 3560 D 1485 3550 D 1559 3550 D 1632 3560 D 1669 3570 D 1693 3580 D ST 1375 3733 M 1632 3733 D ST 1387 3743 M 1620 3743 D ST 1375 3754 M 1632 3754 D ST 1632 3703 M 1632 3784 D ST 1375 3682 M 1412 3662 D ST 1375 3692 M 1399 3662 D ST 1375 3713 M 1387 3662 D ST 1375 3774 M 1387 3825 D ST 1375 3794 M 1399 3825 D ST 1375 3805 M 1412 3825 D ST 1375 3815 M 1448 3825 D 1375 3825 D 1375 3662 D 1448 3662 D 1375 3672 D ST 1620 3733 M 1632 3713 D ST 1608 3733 M 1632 3723 D ST 1608 3754 M 1632 3764 D ST 1620 3754 M 1632 3774 D ST 1620 3937 M 1612 3937 D 1612 3931 D 1627 3931 D 1627 3943 D 1612 3943 D 1598 3931 D 1590 3919 D 1590 3900 D 1598 3882 D 1612 3870 D 1634 3864 D 1649 3864 D 1671 3870 D 1686 3882 D 1693 3900 D 1693 3913 D 1686 3931 D 1671 3943 D ST 1612 3876 M 1627 3870 D 1656 3870 D 1671 3876 D ST 1590 3900 M 1598 3888 D 1605 3882 D 1627 3876 D 1656 3876 D 1679 3882 D 1686 3888 D 1693 3900 D ST 1326 4002 M 1350 4023 D 1387 4043 D 1436 4064 D 1497 4074 D 1546 4074 D 1608 4064 D 1656 4043 D 1693 4023 D 1718 4002 D ST 1399 4043 M 1436 4053 D 1485 4064 D 1559 4064 D 1608 4053 D 1644 4043 D ST 1350 4023 M 1375 4033 D 1412 4043 D 1485 4053 D 1559 4053 D 1632 4043 D 1669 4033 D 1693 4023 D ST 1718 4135 M 1326 4319 D 1326 4308 D 1718 4125 D 1718 4135 D ST 1375 4431 M 1632 4431 D ST 1387 4441 M 1620 4441 D ST 1375 4451 M 1632 4451 D ST 1632 4400 M 1632 4482 D ST 1375 4380 M 1412 4359 D ST 1375 4390 M 1399 4359 D ST 1375 4410 M 1387 4359 D ST 1375 4472 M 1387 4523 D ST 1375 4492 M 1399 4523 D ST 1375 4502 M 1412 4523 D ST 1375 4512 M 1448 4523 D 1375 4523 D 1375 4359 D 1448 4359 D 1375 4370 D ST 1620 4431 M 1632 4410 D ST 1608 4431 M 1632 4421 D ST 1608 4451 M 1632 4461 D ST 1620 4451 M 1632 4472 D ST 1620 4635 M 1612 4635 D 1612 4629 D 1627 4629 D 1627 4641 D 1612 4641 D 1598 4629 D 1590 4617 D 1590 4598 D 1598 4580 D 1612 4568 D 1634 4561 D 1649 
4561 D 1671 4568 D 1686 4580 D 1693 4598 D 1693 4610 D 1686 4629 D 1671 4641 D ST 1612 4574 M 1627 4568 D 1656 4568 D 1671 4574 D ST 1590 4598 M 1598 4586 D 1605 4580 D 1627 4574 D 1656 4574 D 1679 4580 D 1686 4586 D 1693 4598 D ST 6838 6661 M 6838 6532 D ST 6849 6651 M 6849 6541 D ST 6805 6661 M 6860 6661 D 6860 6532 D ST 6860 6624 M 6871 6642 D 6882 6651 D 6904 6661 D 6937 6661 D 6959 6651 D 6970 6642 D 6981 6615 D 6981 6532 D ST 6959 6642 M 6970 6615 D 6970 6541 D ST 6937 6661 M 6948 6651 D 6959 6624 D 6959 6532 D ST 6981 6624 M 6992 6642 D 7003 6651 D 7025 6661 D 7058 6661 D 7080 6651 D 7091 6642 D 7102 6615 D 7102 6532 D ST 7080 6642 M 7091 6615 D 7091 6541 D ST 7058 6661 M 7069 6651 D 7080 6624 D 7080 6532 D ST 6805 6532 M 6893 6532 D ST 6926 6532 M 7014 6532 D ST 7047 6532 M 7135 6532 D ST 6816 6661 M 6838 6651 D ST 6827 6661 M 6838 6642 D ST 6838 6541 M 6816 6532 D ST 6838 6550 M 6827 6532 D ST 6860 6550 M 6871 6532 D ST 6860 6541 M 6882 6532 D ST 6959 6541 M 6937 6532 D ST 6959 6550 M 6948 6532 D ST 6981 6550 M 6992 6532 D ST 6981 6541 M 7003 6532 D ST 7080 6541 M 7058 6532 D ST 7080 6550 M 7069 6532 D ST 7102 6550 M 7113 6532 D ST 7102 6541 M 7124 6532 D ST 7197 6590 M 7197 6508 D 7203 6497 D ST 7216 6486 M 7210 6492 D 7203 6508 D 7203 6601 D 7190 6590 D 7190 6514 D 7197 6497 D 7203 6492 D 7216 6486 D 7230 6486 D 7243 6492 D 7249 6503 D ST 7170 6563 M 7230 6563 D ST 7488 6651 M 7300 6651 D 7300 6661 D 7488 6661 D 7488 6651 D ST 7488 6578 M 7300 6578 D 7300 6587 D 7488 6587 D 7488 6578 D ST 7862 6707 M 7862 6532 D ST 7873 6707 M 7873 6541 D ST 7829 6688 M 7851 6697 D 7884 6725 D 7884 6532 D ST 7818 6532 M 7928 6532 D ST 7862 6541 M 7840 6532 D ST 7862 6550 M 7851 6532 D ST 7884 6550 M 7895 6532 D ST 7884 6541 M 7906 6532 D ST 8149 6633 M 8160 6615 D 8160 6578 D 8149 6560 D ST 8105 6661 M 8127 6651 D 8138 6642 D 8149 6615 D 8149 6578 D 8138 6550 D 8127 6541 D 8105 6532 D ST 8027 6578 M 8027 6569 D 8038 6569 D 8038 6578 D 8027 6578 D ST 8038 6716 M 8127 6716 D ST 8038 6707 M 8082 6707 D 8127 6716 D 8149 6725 D 8038 6725 D 8016 6633 D 8038 6651 D 8071 6661 D 8105 6661 D 8138 6651 D 8160 6633 D 8171 6606 D 8171 6587 D 8160 6560 D 8138 6541 D 8105 6532 D 8071 6532 D 8038 6541 D 8027 6550 D 8016 6569 D 8016 6578 D 8027 6587 D 8038 6587 D 8049 6578 D 8049 6569 D 8038 6560 D 8027 6560 D ST 8270 6707 M 8259 6688 D 8248 6651 D 8248 6606 D 8259 6569 D 8270 6550 D ST 8358 6550 M 8369 6569 D 8380 6606 D 8380 6651 D 8369 6688 D 8358 6707 D ST 8303 6532 M 8281 6541 D 8270 6560 D 8259 6606 D 8259 6651 D 8270 6697 D 8281 6716 D 8303 6725 D 8325 6725 D 8358 6716 D 8380 6688 D 8391 6642 D 8391 6615 D 8380 6569 D 8358 6541 D 8325 6532 D 8303 6532 D 8270 6541 D 8248 6569 D 8237 6615 D 8237 6642 D 8248 6688 D 8270 6716 D 8303 6725 D ST 8325 6532 M 8347 6541 D 8358 6560 D 8369 6606 D 8369 6651 D 8358 6697 D 8347 6716 D 8325 6725 D ST 8843 6697 M 8854 6725 D 8854 6670 D 8843 6697 D 8821 6716 D 8799 6725 D 8765 6725 D 8732 6716 D 8710 6697 D 8699 6679 D 8688 6651 D 8688 6606 D 8699 6578 D 8710 6560 D 8732 6541 D 8765 6532 D 8799 6532 D 8821 6541 D 8843 6541 D 8854 6532 D 8854 6606 D ST 8721 6697 M 8710 6679 D 8699 6651 D 8699 6606 D 8710 6578 D 8721 6560 D ST 8765 6725 M 8743 6716 D 8721 6688 D 8710 6651 D 8710 6606 D 8721 6569 D 8743 6541 D 8765 6532 D ST 8843 6596 M 8843 6550 D ST 8832 6606 M 8832 6550 D 8821 6541 D ST 8799 6606 M 8887 6606 D ST 8810 6606 M 8832 6596 D ST 8821 6606 M 8832 6587 D ST 8865 6606 M 8854 6587 D ST 8876 6606 M 8854 6596 D ST 8964 6606 M 9085 6606 D 9085 6624 D 9074 6642 D 9063 
6651 D 9030 6661 D 9008 6661 D 8975 6651 D 8953 6633 D 8942 6606 D 8942 6587 D 8953 6560 D 8975 6541 D 9008 6532 D 9030 6532 D 9063 6541 D 9085 6560 D ST 9074 6615 M 9074 6624 D 9063 6642 D ST 8964 6633 M 8953 6615 D 8953 6578 D 8964 6560 D ST 9063 6606 M 9063 6633 D 9052 6651 D 9030 6661 D ST 9008 6661 M 8986 6651 D 8975 6642 D 8964 6615 D 8964 6578 D 8975 6550 D 8986 6541 D 9008 6532 D ST 9162 6725 M 9228 6560 D 9228 6532 D 9151 6725 D ST 9173 6725 M 9239 6560 D ST 9129 6725 M 9206 6725 D ST 9261 6725 M 9327 6725 D ST 9140 6725 M 9162 6707 D ST 9184 6725 M 9173 6707 D ST 9195 6725 M 9173 6716 D ST 9283 6725 M 9305 6716 D 9228 6532 D ST 9316 6725 M 9305 6716 D ST 2792 1432 M 2812 1449 D 2832 1457 D 2852 1457 D 2872 1449 D 2882 1440 D 2892 1416 D 2892 1383 D 2882 1350 D 2852 1284 D ST 2802 1440 M 2822 1449 D 2862 1449 D 2882 1440 D ST 2961 1457 M 2951 1432 D 2941 1416 D 2892 1358 D 2862 1317 D 2842 1284 D ST 2951 1457 M 2941 1432 D 2931 1416 D 2892 1358 D ST 3189 1449 M 3020 1449 D 3020 1457 D 3189 1457 D 3189 1449 D ST 3189 1383 M 3020 1383 D 3020 1391 D 3189 1391 D 3189 1383 D ST 3268 1482 M 3268 1473 D 3278 1473 D 3278 1482 D 3268 1482 D ST 3268 1490 M 3278 1490 D 3288 1482 D 3288 1473 D 3278 1465 D 3268 1465 D 3258 1473 D 3258 1482 D 3268 1498 D 3278 1506 D 3308 1515 D 3347 1515 D 3377 1506 D 3387 1490 D 3387 1465 D 3377 1449 D 3347 1440 D ST 3367 1506 M 3377 1490 D 3377 1465 D 3367 1449 D ST 3318 1440 M 3347 1440 D 3367 1432 D 3387 1416 D 3397 1399 D 3397 1374 D 3387 1358 D 3377 1350 D 3347 1341 D 3308 1341 D 3278 1350 D 3268 1358 D 3258 1374 D 3258 1383 D 3268 1391 D 3278 1391 D 3288 1383 D 3288 1374 D 3278 1366 D 3268 1366 D ST 3377 1416 M 3387 1399 D 3387 1374 D 3377 1358 D ST 3347 1341 M 3367 1350 D 3377 1374 D 3377 1399 D 3367 1424 D 3357 1432 D 3338 1440 D 3357 1449 D 3367 1465 D 3367 1490 D 3357 1506 D 3338 1515 D ST 3268 1383 M 3268 1374 D 3278 1374 D 3278 1383 D 3268 1383 D ST 8631 2332 M 8651 2348 D 8671 2357 D 8690 2357 D 8710 2348 D 8720 2340 D 8730 2315 D 8730 2282 D 8720 2249 D 8690 2183 D ST 8641 2340 M 8661 2348 D 8700 2348 D 8720 2340 D ST 8799 2357 M 8789 2332 D 8780 2315 D 8730 2257 D 8700 2216 D 8680 2183 D ST 8789 2357 M 8780 2332 D 8770 2315 D 8730 2257 D ST 9027 2348 M 8859 2348 D 8859 2357 D 9027 2357 D 9027 2348 D ST 9027 2282 M 8859 2282 D 8859 2290 D 9027 2290 D 9027 2282 D ST 9156 2398 M 9156 2241 D ST 9166 2398 M 9166 2249 D ST 9126 2381 M 9146 2390 D 9176 2414 D 9176 2241 D ST 9117 2241 M 9216 2241 D ST 9156 2249 M 9136 2241 D ST 9156 2257 M 9146 2241 D ST 9176 2257 M 9186 2241 D ST 9176 2249 M 9196 2241 D ST 7023 1719 M 7042 1735 D 7062 1744 D 7082 1744 D 7102 1735 D 7112 1727 D 7122 1702 D 7122 1669 D 7112 1636 D 7082 1570 D ST 7033 1727 M 7052 1735 D 7092 1735 D 7112 1727 D ST 7191 1744 M 7181 1719 D 7171 1702 D 7122 1644 D 7092 1603 D 7072 1570 D ST 7181 1744 M 7171 1719 D 7161 1702 D 7122 1644 D ST 7419 1735 M 7251 1735 D 7251 1744 D 7419 1744 D 7419 1735 D ST 7419 1669 M 7251 1669 D 7251 1677 D 7419 1677 D 7419 1669 D ST 7548 1785 M 7548 1628 D ST 7558 1785 M 7558 1636 D ST 7518 1768 M 7538 1777 D 7568 1801 D 7568 1628 D ST 7508 1628 M 7607 1628 D ST 7548 1636 M 7528 1628 D ST 7548 1644 M 7538 1628 D ST 7568 1644 M 7578 1628 D ST 7568 1636 M 7588 1628 D ST 7706 1653 M 7697 1644 D 7697 1636 D 7706 1628 D 7716 1628 D 7726 1636 D 7726 1644 D 7716 1653 D 7706 1653 D ST 7706 1644 M 7706 1636 D 7716 1636 D 7716 1644 D 7706 1644 D ST 7915 1719 M 7924 1702 D 7924 1669 D 7915 1653 D ST 7875 1744 M 7895 1735 D 7905 1727 D 7915 1702 D 7915 1669 D 7905 1644 D 
7895 1636 D 7875 1628 D ST 7806 1669 M 7806 1661 D 7815 1661 D 7815 1669 D 7806 1669 D ST 7815 1793 M 7895 1793 D ST 7815 1785 M 7855 1785 D 7895 1793 D 7915 1801 D 7815 1801 D 7796 1719 D 7815 1735 D 7845 1744 D 7875 1744 D 7905 1735 D 7924 1719 D 7934 1694 D 7934 1677 D 7924 1653 D 7905 1636 D 7875 1628 D 7845 1628 D 7815 1636 D 7806 1644 D 7796 1661 D 7796 1669 D 7806 1677 D 7815 1677 D 7825 1669 D 7825 1661 D 7815 1653 D 7806 1653 D ST 5229 1527 M 5249 1544 D 5269 1552 D 5289 1552 D 5309 1544 D 5319 1535 D 5329 1510 D 5329 1477 D 5319 1444 D 5289 1378 D ST 5239 1535 M 5259 1544 D 5299 1544 D 5319 1535 D ST 5398 1552 M 5388 1527 D 5378 1510 D 5329 1453 D 5299 1411 D 5279 1378 D ST 5388 1552 M 5378 1527 D 5368 1510 D 5329 1453 D ST 5626 1544 M 5457 1544 D 5457 1552 D 5626 1552 D 5626 1544 D ST 5626 1477 M 5457 1477 D 5457 1486 D 5626 1486 D 5626 1477 D ST 5705 1577 M 5705 1568 D 5715 1568 D 5715 1577 D 5705 1577 D ST 5705 1585 M 5715 1585 D 5725 1577 D 5725 1568 D 5715 1560 D 5705 1560 D 5695 1568 D 5695 1577 D 5705 1593 D 5715 1601 D 5745 1610 D 5784 1610 D 5814 1601 D 5824 1593 D 5834 1577 D 5834 1560 D 5824 1544 D 5794 1527 D 5745 1510 D 5725 1502 D 5705 1486 D 5695 1461 D 5695 1436 D ST 5814 1593 M 5824 1577 D 5824 1560 D 5814 1544 D ST 5784 1610 M 5804 1601 D 5814 1577 D 5814 1560 D 5804 1544 D 5784 1527 D 5745 1510 D ST 5695 1453 M 5705 1461 D 5725 1461 D 5775 1453 D 5814 1453 D 5834 1461 D ST 5834 1477 M 5834 1461 D 5824 1444 D 5814 1436 D 5775 1436 D 5725 1461 D 5775 1444 D 5814 1444 D 5824 1453 D ST 3897 1432 M 3916 1449 D 3936 1457 D 3956 1457 D 3976 1449 D 3986 1440 D 3996 1416 D 3996 1383 D 3986 1350 D 3956 1284 D ST 3906 1440 M 3926 1449 D 3966 1449 D 3986 1440 D ST 4065 1457 M 4055 1432 D 4045 1416 D 3996 1358 D 3966 1317 D 3946 1284 D ST 4055 1457 M 4045 1432 D 4035 1416 D 3996 1358 D ST 4293 1449 M 4125 1449 D 4125 1457 D 4293 1457 D 4293 1449 D ST 4293 1383 M 4125 1383 D 4125 1391 D 4293 1391 D 4293 1383 D ST 4372 1482 M 4372 1473 D 4382 1473 D 4382 1482 D 4372 1482 D ST 4372 1490 M 4382 1490 D 4392 1482 D 4392 1473 D 4382 1465 D 4372 1465 D 4362 1473 D 4362 1482 D 4372 1498 D 4382 1506 D 4412 1515 D 4452 1515 D 4481 1506 D 4491 1498 D 4501 1482 D 4501 1465 D 4491 1449 D 4461 1432 D 4412 1416 D 4392 1407 D 4372 1391 D 4362 1366 D 4362 1341 D ST 4481 1498 M 4491 1482 D 4491 1465 D 4481 1449 D ST 4452 1515 M 4471 1506 D 4481 1482 D 4481 1465 D 4471 1449 D 4452 1432 D 4412 1416 D ST 4362 1358 M 4372 1366 D 4392 1366 D 4442 1358 D 4481 1358 D 4501 1366 D ST 4501 1383 M 4501 1366 D 4491 1350 D 4481 1341 D 4442 1341 D 4392 1366 D 4442 1350 D 4481 1350 D 4491 1358 D ST 4580 1366 M 4570 1358 D 4570 1350 D 4580 1341 D 4590 1341 D 4600 1350 D 4600 1358 D 4590 1366 D 4580 1366 D ST 4580 1358 M 4580 1350 D 4590 1350 D 4590 1358 D 4580 1358 D ST 4788 1432 M 4798 1416 D 4798 1383 D 4788 1366 D ST 4749 1457 M 4769 1449 D 4778 1440 D 4788 1416 D 4788 1383 D 4778 1358 D 4769 1350 D 4749 1341 D ST 4679 1383 M 4679 1374 D 4689 1374 D 4689 1383 D 4679 1383 D ST 4689 1506 M 4769 1506 D ST 4689 1498 M 4729 1498 D 4769 1506 D 4788 1515 D 4689 1515 D 4670 1432 D 4689 1449 D 4719 1457 D 4749 1457 D 4778 1449 D 4798 1432 D 4808 1407 D 4808 1391 D 4798 1366 D 4778 1350 D 4749 1341 D 4719 1341 D 4689 1350 D 4679 1358 D 4670 1374 D 4670 1383 D 4679 1391 D 4689 1391 D 4699 1383 D 4699 1374 D 4689 1366 D 4679 1366 D ST grestore showpage
\section{Introduction}\label{sec:introduction}
In the (oblivious) online vector discrepancy problem an adversary fixes vectors $\{v_i\}_{i\in [t]}$ in advance and the objective is to assign signs $\epsilon_i\in \{-1,1\}$ based only on vectors $v_1,\ldots,v_i$ to maintain that $\snorm{\sum_{i\le t'} \epsilon_iv_i}_\infty$ is small at all times $t' \in [t]$. Vector balancing encompasses a number of different problems in discrepancy theory, including Spencer's \cite{Spe85} work on set discrepancy. Spencer's ``six standard deviations suffice'' result states that given vectors $v_1,\ldots,v_n\in \{0,1\}^n$ there exists a $\pm 1$-signing such that $\snorm{\sum_{i\le n} \epsilon_iv_i}_{\infty}\le 6\sqrt{n}$. Conjecturally, however, the restriction to $\{0,1\}^n$ vectors can be relaxed to a norm condition. In particular, the Koml\'{o}s conjecture states that given $v_1,\ldots,v_t$, each of at most unit length, there exists a sequence of signs $\epsilon_1,\ldots,\epsilon_t$ such that $\snorm{\sum_{i\le t} \epsilon_iv_i}_{\infty} = O(1)$. Despite substantial effort, the Koml\'{o}s conjecture is still open and the best known bounds, due to Banaszczyk \cite{Ban98}, give the existence of a sequence of signs $\epsilon_1,\ldots,\epsilon_t$ such that $\snorm{\sum_{i\le t} \epsilon_iv_i}_{\infty} = O(\sqrt{\min(\log n,\log t)})$.

However, these original proofs were by their nature non-algorithmic. More recent research in theoretical computer science has focused on developing algorithmic versions of these results, starting with the Bansal \cite{Ban10} and Lovett-Meka \cite{LM15} polynomial-time algorithms for Spencer's \cite{Spe85} ``six standard deviations suffice''. Since then, there have been several other constructive discrepancy minimization algorithms \cite{Rot17,ES18,BDG16,BG17,BDGL18,DNTT18}. Notably for our purposes, Bansal, Dadush, and Garg \cite{BDG16} and Bansal, Dadush, Garg, and Lovett \cite{BDGL18} have made the work of Banaszczyk \cite{Ban98} algorithmic. However, in all cases these algorithms require all vectors to be known at the start and hence do not extend to the online setting.

In the online setting, significant work has been devoted to the case where the $v_i$ are drawn from a fixed (and known) distribution $\mathfrak{p}$ supported on $[-1,1]^n$. In the setting where $\mathfrak{p}$ is uniform on $[-1,1]^n$, Bansal and Spencer \cite{BS19} showed one can maintain $\max_{t'\le t}\snorm{\sum_{i\le t'} \epsilon_iv_i}_\infty\le O(\sqrt{n} \log t)$. In the more general setting where $\mathfrak{p}$ is an arbitrary distribution supported on $[-1,1]^n$, Aru, Narayanan, Scott, and Venkatesan \cite{ANSV18} achieved a bound of $O_n(\sqrt{\log t})$ (where the implicit dependence on $n$ is super-exponential) and Bansal, Jiang, Meka, Singla, and Sinha \cite{BJMSS20} (building on work of Bansal, Jiang, Singla, and Sinha \cite{BJSS20}) achieved an $\ell_{\infty}$ guarantee of $O(\sqrt{n}\log(nt)^4)$.

In this work we focus on the online setting where the only guarantee is $\snorm{v_i}_2\le 1$. The only previous work in this oblivious online setting is the following result of Alweiss, the first author, and the third author \cite{ALS20}.
\begin{theorem}[{\cite[Theorem~1.1]{ALS20}}] \label{thm:balance}
For any vectors $v_1, v_2, \cdots, v_t \in \mathbb{R}^n$ with $\|v_i\|_2 \le 1$ for all $i \in [t]$, there exists an online algorithm $\textsc{Balance}(v_1, \cdots, v_t, \delta)$ which maintains $\snorm{\sum_{i\le t'} \epsilon_iv_i}_\infty = O\left(\log(nt/\delta)\right)$ for all $t' \in [t]$ with probability at least $1-\delta$.
\end{theorem}

The proof in \cite{ALS20} relies on a coupling procedure which compares the distribution of $\sum_{i\le t}\epsilon_i v_i$ to a Gaussian at each stage via a stochastic domination argument and then deduces the necessary tail bounds. In this work, we recover \cref{thm:balance} (in fact with a slightly improved dependence) as well as the following corollary.

\begin{corollary}\label{cor:balance-new}
For any vectors $v_1, v_2, \cdots, v_t \in \mathbb{R}^n$ with $\|v_i\|_2 \le 1$ for all $i \in [t]$, there exists an online algorithm which assigns $\epsilon_i\in \{\pm 1, 2\}$ and maintains $\snorm{\sum_{i\le t'} \epsilon_iv_i}_\infty = O(\sqrt{\log(nt/\delta)})$ for all $t' \in [t]$ with probability at least $1-\delta$.
\end{corollary}

This result essentially recovers the best known bound on the Koml\'os conjecture due to Banaszczyk \cite{Ban98} in an online algorithmic fashion, with the slight defect of requiring a $+2$-signing option. Furthermore, due to the online nature of the algorithm, it runs in essentially input-sparsity time, which is substantially faster than the Gram-Schmidt walk \cite{BDGL18} giving an algorithmic proof of the result of \cite{Ban98} (without the defect of requiring a $+2$-signing option).

Our results are based on the observation that there exist Markov chains on $\mathbb{R}$ with transition steps of $0,\pm 1$ or $\pm 1, 2$ such that $\mathcal{N}(0,1)$ is a stationary distribution (as well as $\mathcal{N}(0,\sigma^2)$ for appropriate values of $\sigma$). Note that no such walk exists for $\pm 1$ steps: a $\pm 1$ walk obeys the natural ``parity constraint'' that the total mass on even integers is mapped to the odd integers and vice versa under one step, while $\sum_{n\in \mathbb{Z}} (-1)^ne^{-n^2/2}\neq 0$, so the discrete Gaussian cannot be stationary for such a walk.

The remainder of the paper is organized as follows. In \cref{sec:0-walk} we construct the required Markov chain on $\mathbb{R}$ with transition steps of $0,\pm 1$ such that $\mathcal{N}(0,\sigma^2)$ is a stationary distribution. In \cref{sec:2-walk} we extend this to a walk with transition steps of $\pm 1, 2$ as long as $\sigma\ge 1$. Finally, in \cref{sec:algorithms} we deduce the various algorithmic consequences.

\subsection{Notation}
Throughout this paper let $\mathcal{N}(\mu,\sigma^2)$ denote a Gaussian random variable with mean $\mu$ and variance $\sigma^2$. Furthermore, let $\operatorname{nnz}(\{v_i\}_{i\in S})$ denote the total number of non-zero entries of the vectors $\{v_i\}_{i\in S}$.

\section{\texorpdfstring{$0, \pm 1$}{0, +-1} walk}\label{sec:0-walk}
\begin{definition}\label{def:jacobi-walk}
Given $\sigma > 0$ and $f\in[-1/2,1/2]$, consider the following random walk on $f+\mathbb{Z}$. For $n\ge 1$ the state $n+f$ moves to $n+1+f$ with probability $p_\sigma(n+f)$ and to $n-1+f$ otherwise, and the state $-n+f$ moves to $-n-1+f$ with probability $p_\sigma(n-f)$ and to $-n+1+f$ otherwise. Finally, the state $f$ moves to $1+f$ with probability $p_\sigma(f)$, to state $-1+f$ with probability $p_\sigma(-f)$, and stays at $f$ with probability $r_\sigma(f)$.
Here
\begin{align*}
p_\sigma(x) &= \sum_{j\ge 1}(-1)^{j-1}\exp\bigg(-\frac{j^2+2xj}{2\sigma^2}\bigg)\\
r_\sigma(f) &= \sum_{j=-\infty}^\infty (-1)^j\exp\bigg(-\frac{j^2+2fj}{2\sigma^2}\bigg)
\end{align*}
for all $x\in\mathbb{R}$.
\end{definition}
These series clearly converge absolutely. We prove that these indeed correspond to consistent probabilities giving a walk, and additionally show that this walk preserves the discrete Gaussian distribution on $f+\mathbb{Z}$ (i.e., $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$).
\begin{lemma}\label{lem:validity}
For $\sigma > 0$ and $f\in[-1/2,1/2]$, we have that $p_\sigma(n\pm f)\in(0,1)$ for all $n\ge 0$, that $p_\sigma(f)+r_\sigma(f)+p_\sigma(-f) = 1$, that $r_\sigma(f)\in[0,1]$, and that furthermore
\[r_\sigma(f)\le e^{-\sigma^2}\]
if $\sigma\ge 1/2$. Additionally, $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$ is stationary under a step of the random walk defined in \cref{def:jacobi-walk} with parameters $\sigma,f$.
\end{lemma}
\begin{proof}
First, note that $\exp(-(j^2+2xj)/(2\sigma^2))$ is strictly decreasing on integers $j\ge 1$ as long as $x\ge -1/2$. Therefore $p_\sigma(x)$ is given by an alternating series with strictly decreasing terms, and we immediately deduce
\[0 < p_\sigma(x) < \exp\bigg(-\frac{1+2x}{2\sigma^2}\bigg) \le 1.\]
Since $n+f,n-f\ge -1/2$ for $n\ge 0$, we see that $p_\sigma(n\pm f)\in(0,1)$, as desired.

Second, note that
\[p_\sigma(-f)+r_\sigma(f)+p_\sigma(f) = 1\]
holds since everything except the $j = 0$ term of the sum for $r_\sigma(f)$ trivially cancels.

Third, we have for $u = \exp(-1/(2\sigma^2))$ and $v = \sqrt{-1}\exp(-f/(2\sigma^2))$ that $|u| < 1$ and $v\neq 0$, hence the Jacobi triple product identity (see \cite{And65} for a short but slick proof) yields
\begin{align}
r_\sigma(f) = \sum_{j=-\infty}^\infty u^{j^2}v^{2j} &= \prod_{j=1}^\infty(1-u^{2j})(1+u^{2j-1}v^2)(1+u^{2j-1}v^{-2})\notag\\
&= \prod_{j=1}^\infty(1-e^{-j/\sigma^2})(1-e^{-(2j+2f-1)/(2\sigma^2)})(1-e^{-(2j-2f-1)/(2\sigma^2)}).\label{eq:jacobi-triple-product}
\end{align}
Since $f\in[-1/2,1/2]$ we see each factor is nonnegative and clearly less than $1$, so $r_\sigma(f)\in[0,1]$ is immediate. Therefore we indeed have a well-defined walk. In fact, we see that
\[r_\sigma(f)\le r_\sigma(0)\le \prod_{j=1}^\infty (1-e^{-j/\sigma^2})^3 \le \prod_{j=1}^{\lfloor\sigma^2\rfloor}(1-e^{-j/\sigma^2})^{3}\le (1-e^{-1})^{3\lfloor\sigma^2\rfloor}.\]
This is at most $\exp(-\sigma^2)$ for $\sigma\ge 2$, and we can further numerically check that $r_\sigma(0)\le\exp(-\sigma^2)$ for $\sigma\in[1/2,2]$.

Now we show that this walk preserves $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$. Note that
\[1-p_\sigma(x) = \sum_{j\ge 0}(-1)^j\exp\bigg(-\frac{j^2+2xj}{2\sigma^2}\bigg).\]
Therefore
\begin{align*}
p_\sigma(x-1)&\exp\bigg(-\frac{(x-1)^2}{2\sigma^2}\bigg) + (1-p_\sigma(x+1))\exp\bigg(-\frac{(x+1)^2}{2\sigma^2}\bigg)\\
&= \sum_{j\ge 1}(-1)^{j-1}\exp\bigg(-\frac{(j+x-1)^2}{2\sigma^2}\bigg) + \sum_{j\ge 0}(-1)^j\exp\bigg(-\frac{(j+x+1)^2}{2\sigma^2}\bigg)\\
&= \exp\bigg(-\frac{x^2}{2\sigma^2}\bigg).
\end{align*}
Since the probability mass of $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$ at $n+f$ is proportional to $\exp(-(n+f)^2/(2\sigma^2))$, we find that the random walk preserves this distribution at $n+f$ for all $n\neq 0$ (applying the above equation at values $x = n\pm f$). Furthermore, the final distribution is clearly still supported on $f+\mathbb{Z}$, therefore the probability at $n = 0$ is also preserved as the total sum is $1$.
\end{proof}

We immediately derive a walk which preserves $\mathcal{N}(0,\sigma^2)$ by piecing together all $f\in[-1/2,1/2)$. Let $J_x^\sigma$ be the random variable defined by writing $x = n+f$, where $f\in[-1/2,1/2)$, and then performing a step according to \cref{def:jacobi-walk}.

\begin{lemma}\label{lem:gaussian-jacobi}
If $Z = \mathcal{N}(0,\sigma^2)$ then $Z + J_Z^\sigma$ is distributed as $\mathcal{N}(0,\sigma^2)$.
\end{lemma}

\section{\texorpdfstring{$\pm 1, 2$}{+-1, 2} walk}\label{sec:2-walk}
We now consider a variant of the above random walk with discrete $\pm 1$ and $2$ steps. Recall the definition of $p_\sigma(x)$ and $r_\sigma(f)$ from earlier. We will require the following numerical estimate, whose proof is deferred to \cref{sec:appendix}.

\begin{lemma}\label{lem:inequality}
If $\sigma\ge 1$ and $f\in[-1/2,1/2]$ then
\[p_\sigma(1+f)\ge r_\sigma(f)\exp\bigg(\frac{2f+1}{2\sigma^2}\bigg).\]
\end{lemma}

\begin{remark}
This inequality is immediate for large $\sigma$ as the left-hand side uniformly tends to $1/2$ and the right-hand side uniformly decays to zero.
\end{remark}

\begin{definition}\label{def:ramanujan-walk}
Given $\sigma\ge 1$ and $f\in[-1/2,1/2]$, consider the following random walk on $f+\mathbb{Z}$. For $n\ge 2$ the state $n+f$ moves to $n+1+f$ with probability $p_\sigma(n+f)$ and to $n-1+f$ otherwise. For $n\ge 1$ the state $-n+f$ moves to $-n-1+f$ with probability $p_\sigma(n-f)$ and to $-n+1+f$ otherwise. The state $f$ moves to $1+f$ with probability $p_\sigma(f)$, to state $-1+f$ with probability $p_\sigma(-f)$, and moves to $2+f$ with probability $r_\sigma(f)$. Finally, for $n = 1$ the state $1+f$ moves to $2+f$ with probability $p_\sigma(1+f) - r_\sigma(f)\exp((2f+1)/(2\sigma^2))$ and to $f$ otherwise.
\end{definition}

\begin{lemma}\label{lem:validity-II}
For $\sigma\ge 1$ and $f\in[-1/2,1/2]$, we have that the walk in \cref{def:ramanujan-walk} is well-defined, and that $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$ is stationary under a step of the walk with parameters $\sigma,f$.
\end{lemma}

\begin{proof}
That all probabilities are valid follows from \cref{lem:validity}, except that we need to additionally verify
\[p_\sigma(1+f)\ge r_\sigma(f)\exp\bigg(\frac{2f+1}{2\sigma^2}\bigg).\]
This is precisely \cref{lem:inequality}.

To verify that $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$ is preserved under the walk defined in \cref{def:ramanujan-walk}, recall that $\mathcal{N}(0,\sigma^2)|_{f+\mathbb{Z}}$ is preserved under the walk defined in \cref{def:jacobi-walk} by \cref{lem:validity}. The new walk differs only in the probabilities that $f$ goes to $f,2+f$ and that $1+f$ goes to $f,2+f$. Therefore the probabilities at $n+f$ for $n\in\mathbb{Z}\setminus\{0,2\}$ are correct. Since the probabilities sum to $1$, it is enough to check the probability at $2+f$ is correct. It therefore suffices to show that
\begin{align*}
r_\sigma(f)\exp\bigg(-&\frac{f^2}{2\sigma^2}\bigg) + \bigg(p_\sigma(1+f)-r_\sigma(f)\exp\bigg(\frac{2f+1}{2\sigma^2}\bigg)\bigg)\exp\bigg(-\frac{(1+f)^2}{2\sigma^2}\bigg)\\
&+ (1-p_\sigma(3+f))\exp\bigg(-\frac{(3+f)^2}{2\sigma^2}\bigg)= \exp\bigg(-\frac{(2+f)^2}{2\sigma^2}\bigg).
\end{align*}
We already verified in the proof of \cref{lem:validity} that
\[p_\sigma(x-1)\exp\bigg(-\frac{(x-1)^2}{2\sigma^2}\bigg) + (1-p_\sigma(x+1))\exp\bigg(-\frac{(x+1)^2}{2\sigma^2}\bigg) = \exp\bigg(-\frac{x^2}{2\sigma^2}\bigg).\]
Plugging in $x = 2+f$ gives the desired identity, upon canceling the terms containing $r_\sigma(f)$.
\end{proof} Again, we immediately derive a walk which preserves $\mathcal{N}(0,\sigma^2)$ by piecing together all $f\in[-1/2,1/2)$. Let $R_x^\sigma$ be the random variable defined by writing $x = n+f$, where $f\in[-1/2,1/2)$, and then performing a step according to \cref{def:ramanujan-walk}. \begin{lemma}\label{lem:gaussian-ramanujan} If $\sigma\ge 1$ and $Z = \mathcal{N}(0,\sigma^2)$ then $Z + R_Z^\sigma$ is distributed as $\mathcal{N}(0,\sigma^2)$. \end{lemma} \section{Algorithmic Applications}\label{sec:algorithms} We now derive a number of algorithmic consequences. \begin{algorithm}[ht] \caption{$\textsc{PartialColoring}_\sigma(v_1,\cdots,v_t)$ \label{alg:partial-coloring}} $w_0 \leftarrow \mathcal{N}(0,\sigma^2I_n)$ \\ \For{$1\le i\le t$}{ $\sigma'\leftarrow\sigma/\snorm{v_i}_2$\\ $x'\leftarrow\sang{w_{i-1},v_i}/\snorm{v_i}_2$\\ $w_i \leftarrow w_{i-1} + J_{x'}^{\sigma'}v_i.$ \label{line:move-partial} } $w\leftarrow w_t - w_0$ \end{algorithm} \begin{algorithm}[ht]\label{alg:balance} \caption{$\textsc{Balancing}_\sigma(v_1,\cdots,v_t)$} $w_0 \leftarrow \mathcal{N}(0,\sigma^2I_n)$ \\ \For{$1\le i\le t$}{ $\sigma'\leftarrow\sigma/\snorm{v_i}_2$\\ $x'\leftarrow\sang{w_{i-1},v_i}/\snorm{v_i}_2$\\ $w_i \leftarrow w_{i-1} + R_{x'}^{\sigma'}v_i.$ \label{line:move-balance} } $w\leftarrow w_t - w_0$ \end{algorithm} In both $\textsc{Balancing}_\sigma$ and $\textsc{PartialColoring}_\sigma$, $J$ and $R$ are sampled independently every time. Additionally, note that $\textsc{Balancing}_\sigma$ is only well-defined when $\sigma\ge 1$. Finally, we clearly see that $\textsc{PartialColoring}_\sigma$ assigns a sign of $\pm 1$ to each given vector online, or chooses to omit it (a sign of $0$), while $\textsc{Balancing}_\sigma$ does the same except that the sign $2$ is the additional alternative. Our first algorithmic application is a (weak) version of the partial coloring lemma. \begin{theorem}\label{thm:partial-coloring} Let $\snorm{v_1}_2,\ldots,\snorm{v_t}_2\le 1$ and $\delta\in(0,1/2)$. With probability at least $1-\delta$ we have that $w_\ell-w_0$ in $\textsc{PartialColoring}_1(v_1,\ldots,v_t)$ is $2\sqrt{2\log(2nt/\delta)}$-bounded for all times $\ell\in[t]$. Furthermore, with probability at least $1-\delta$ we have that $w_t-w_0$ is $2\sqrt{2\log(2n/\delta)}$-bounded. Finally, at least $96.3\%$ of vectors are used with probability $1-\exp(-\Omega(t))$. \end{theorem} \begin{proof} By \cref{lem:gaussian-jacobi} we immediately see that $w_i\sim\mathcal{N}(0,\sigma^2I_n)$ for all $i\in[t]$. The discrepancy results follow by trivial Gaussian estimates. For example, we see that the $j$th coordinate of $w_\ell$ fails to be $\sqrt{2\log(2nt/\delta)}$-bounded with probability at most $\delta/(2nt)$. Taking a union bound over $0\le\ell\le t$ and $j\in[n]$ yields that $w_0,\ldots,w_t$ are bounded with probability at least $1-\delta$. Therefore each difference is also bounded. The fraction of vectors used being large follows from Chernoff's inequality and the fact that at every step, conditional on all previous choices, a vector is used with probability at least \[\min_{f\in[-1/2,1/2]}(1-r_1(f))\ge 0.9639.\qedhere\] \end{proof} Our second algorithmic application recovers the online vector balancing results of Alweiss, the first author, and the third author \cite[Theorems~1.1,~1.2]{ALS20}. \begin{theorem}\label{thm:full-coloring} Let $\snorm{v_1}_2,\ldots,\snorm{v_t}_2\le 1$, $\delta\in(0,1/2)$, and set $\sigma = \sqrt{\log(t/\delta)}$.
With probability at least $1-\delta$ we have that $w_\ell-w_0$ in $\textsc{PartialColoring}_\sigma(v_1,\cdots,v_t)$ is $2\sqrt{2\log(t/\delta)\log(2nt/\delta)}$-bounded for all times $\ell\in [t]$. Furthermore, with probability at least $1-\delta$ we have that $w_t-w_0$ is $2\sqrt{2\log(t/\delta)\log(2n/\delta)}$-bounded. Finally, all vectors are used with probability at least $1-\delta$. \end{theorem} \begin{proof} The proof is essentially identical to that of \cref{thm:partial-coloring}. The only difference is that we see that at each step, a vector is not used with probability at most \[\max_{f\in[-1/2,1/2]}r_\sigma(f)\le e^{-\sigma^2} = \frac{\delta}{t}\] due to our choice of $\sigma$, by the inequality in \cref{lem:validity}. A union bound shows that all vectors are used with probability at least $1-\delta$. \end{proof} In fact, we can design an algorithm achieving the same bounds by using Algorithm \ref{alg:partial-coloring} for any value of $\sigma \ge 1$ as follows. To do this, first run Algorithm \ref{alg:partial-coloring}, and then rerun Algorithm \ref{alg:partial-coloring} on the vectors which were given a $0$ sign until no vectors remain (note that this can still be done in an online manner). By \cref{lem:validity}, specifically $r_\sigma(f) \le e^{-\sigma^2},$ this process will terminate with probability $1-\delta$ in $O(\sigma^{-2} \log(t/\delta))$ rounds. Each run produces a random vector with variance $O(\sigma^2)$ in every coordinate, hence the total variance is $O(\log (t/\delta))$ per coordinate as desired. Finally we recover an online version of Banaszczyk \cite{Ban98}, except using $\pm 1, 2$-signings. The proof is identical to that of \cref{thm:partial-coloring} so we omit it. \begin{theorem}\label{thm:balancing} Let $\snorm{v_1}_2,\ldots,\snorm{v_t}_2\le 1$ and $\delta\in(0,1/2)$. With probability at least $1-\delta$ we have that $w_\ell-w_0$ in $\textsc{Balancing}_1(v_1,\ldots,v_t)$ is $2\sqrt{2\log(2nt/\delta)}$-bounded for all times $\ell\in[t]$. Furthermore, with probability at least $1-\delta$ we have that $w_t-w_0$ is $2\sqrt{2\log(2n/\delta)}$-bounded. \end{theorem} All three algorithmic procedures are online. \subsection{Computational details}\label{sub:computational-details} The idealized algorithms above ignored the cost of computing $r_\sigma(f)$ and $p_\sigma(n\pm f)$ to sufficient precision in order to be used for algorithmic purposes. The key claim is that one can approximate the above sums within $\delta$ in $\operatorname{poly}(\log(\sigma/\delta))$-time. In order to do so, first note that we can truncate the sums $p_\sigma(n\pm f)$ and $r_\sigma(f)$ to values of $j\ge 1$ where $(j^2+2(n\pm f)j)/(2\sigma^2) = O(\log(\sigma/\delta))$. We now note that \[\bigg|e^x-\sum_{j=0}^m\frac{x^j}{j!}\bigg|\le\frac{|x|^{m+1}}{(m+1)!}e^{\max(0,x)},\] so taking $m = \Theta(\log(\sigma/\delta))$ gives a very good approximation to $\exp(-(j^2+2(n\pm f)j)/(2\sigma^2))$ in the range of terms considered. Now we can compute the desired sums by interpreting them as sums of low degree (i.e. $O(\log(\sigma/\delta))$) polynomials on a sequence of integers, which can be evaluated quickly. In the implementation of the algorithms above, at time $t$ if we are given a vector shorter than $1/(2t^2)$, we deterministically add it but ignore it for the purposes of maintaining a Gaussian distribution. These vectors have total length at most $1$, so contribute only $O(1)$ discrepancy in each coordinate. For the remaining vectors, we have $\sigma\le 2t^2$.
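To make the truncation concrete, the following is a minimal Python sketch (ours, purely illustrative; the truncation tolerance \texttt{tol} is an assumption standing in for the desired precision) of evaluating $p_\sigma$ and $r_\sigma$ by summing the series until the terms fall below the tolerance.
\begin{verbatim}
import math

def p_sigma(x, sigma, tol=1e-15):
    # Truncated alternating series p_sigma(x) = sum_{j>=1} (-1)^{j-1} exp(-(j^2+2xj)/(2 sigma^2)).
    # Intended for x >= -1/2, where the terms are strictly decreasing.
    total, j = 0.0, 1
    while True:
        term = math.exp(-(j * j + 2.0 * x * j) / (2.0 * sigma * sigma))
        if term < tol:
            break
        total += term if j % 2 == 1 else -term
        j += 1
    return total

def r_sigma(f, sigma, tol=1e-15):
    # Truncated two-sided series r_sigma(f) = sum_{j in Z} (-1)^j exp(-(j^2+2fj)/(2 sigma^2));
    # the j = 0 term contributes 1, and the terms for j and -j are grouped together.
    total, j = 1.0, 1
    while True:
        plus = math.exp(-(j * j + 2.0 * f * j) / (2.0 * sigma * sigma))
        minus = math.exp(-(j * j - 2.0 * f * j) / (2.0 * sigma * sigma))
        if plus + minus < tol:
            break
        total += (plus + minus) if j % 2 == 0 else -(plus + minus)
        j += 1
    return total

# Sanity check of the identity p_sigma(f) + r_sigma(f) + p_sigma(-f) = 1, e.g. with f = 0.3, sigma = 1:
# abs(p_sigma(0.3, 1.0) + r_sigma(0.3, 1.0) + p_sigma(-0.3, 1.0) - 1.0) should be tiny.
\end{verbatim}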
We thus can approximate the relevant probabilities to within $\delta/(2t^2)$ efficiently, and then sample appropriately. This will preserve the Gaussians in question up to total variation distance of at most \[\sum_{t\ge 1}\frac{\delta}{2t^2}\le\delta.\] Therefore, the running time of all probability computations is $\operatorname{poly}(\log (t/\delta))$ at time $t$. Thus the modified versions of the algorithms in \cref{thm:partial-coloring,thm:balancing,thm:full-coloring} run in $O\left(t\operatorname{poly}(\log(t/\delta))+n+\operatorname{nnz}(\{v_i\}_{i\in [t]})\right)$ time with discrepancy guarantees that are an absolute multiplicative factor worse. (The second term arises due to sampling the initial Gaussian point.) This running time essentially matches (up to logarithmic factors) the results of \cite{ALS20} and makes progress towards input-sparsity time algorithms for discrepancy, a direction suggested by \cite{DadWeb}. A variant of our algorithms which runs in $O\left(t\operatorname{poly}(\log(t/\delta))+n\log t+\operatorname{nnz}(\{v_i\}_{i\in [t]})\right)$ time is achieved by ``disregarding vectors'' at time $t$ which are shorter than $1/(2t^2)$ (as above) and otherwise grouping vectors by length into dyadic scales and running the algorithms separately with independent randomness on each of the scales. Note that when vector lengths are forced to live in a dyadic scale then sampling an appropriate Gaussian leads us to compute the above probabilities only when $\sigma\in[1,2]$ and hence direct evaluation of the series is efficient. \section*{Acknowledgements} The authors thank Ryan Alweiss, Arun Jambulapati, Allen Liu, and Mark Sellke for earlier discussions on this topic. Furthermore we thank Ghaith Hiary for discussions regarding computing theta series. \bibliographystyle{amsplain0}
\subsection{Code} We coded the MHAR using python and the Pytorch library. This allowed us to use a high-level language and avoid any manual thread control. During all of our experiments we never experienced any problems due to numerical instability; the maximum error we found for the matrix inversion was of the order of $10^{-16}$, which we consider robust enough for most applications. Furthermore, almost all operations are carried out on the GPU: matrix inversion, random number generation, matrix-to-matrix multiplication, and point-wise operations. The only operations carried out on the CPU are reading the constraints and saving the samples to disk; all arithmetic and random number generation is done directly on the GPU.\\ Due to the structure of our code, the overall performance depends on the GPU available and on the host--device connection speed. Note that, given the flexibility of the Pytorch framework, the code also works on a CPU (no GPU is needed), but given the nature of the problem and the high dimensions we aim to sample, we focus our analysis on experiments that make use of the GPU.\\ The decision to use Pytorch as the backend responds to its flexibility, power, and popularity \cite{paszke2019pytorch}.
\section{Introduction} \label{S:1} Random sampling of convex bodies is employed in disciplines such as operations research, statistics, probability, and physics. Among random-sampling approaches, Markov Chain Monte Carlo (MCMC) is the fastest, most accurate, and easiest to use \cite{walks}. MCMC is often implemented using polytope sampling algorithms, which are used in volume estimation \cite{polytope_volume} \cite{RHMC} \cite{practical_polytope}, convex optimization \cite{convex_program} \cite{optimization}, contingency tables \cite{tables}, mixed integer programming \cite{mix_int}, linear programming \cite{lin_pog}, hard-disk modeling \cite{hard_disk}, and decision analysis \cite{har_tommi} \cite{montiel_jd} \cite{montiel_b}. Sampling methods start by defining a Markov chain whose stationary distribution converges to a desired target distribution. Then they draw a predetermined number of samples. These methods have two sources of computational complexity: \textit{mixing-time}, which is the number of samples needed to lose the ``dependency'' between each draw; and \textit{cost per iteration}, which is the number of operations required to obtain a single sample. Sampling algorithms aim for efficient mixing-times, so that they can produce independent samples without dropping (also called ``burning'') too many of them, and a low cost per iteration in order to draw samples fast \cite{mcmc_goals}. \subsection{History and relevance of MCMC} The use of Monte Carlo methods has surged in the last 50 years, due to the availability of modern computers.
However, there are records of experiments leading to a Monte Carlo simulation method as early as 1901, when Mario Lazzarini approximated $\pi$ by manually repeating Buffon's needle experiment 3,408 times. During the first half of the 20th century the use of Monte Carlo had a frequentist approach, since the Bayesian approach was viewed as unfavorable due to philosophical and computational considerations. With the advent of MCMC together with more powerful computers, Bayesian Monte Carlo methods saw an increase in use, with the first application, the ``bootstrap filter,'' published in 1993 \cite{first_bayesian}. Recently, numerous applications in operations research have used MCMC to complement diverse optimization models. For example, the characterization of a joint probability distribution under partial information is generally not unique \cite{montiel_b}. Hence, if we need the joint probabilities to value a real option \cite{montiel_a}, or to optimize the net gain of an oil field \cite{montiel_jd}, we have to understand the space to which the joint distribution belongs. Another example is the incomplete specification of a multi-attribute utility function in decision analysis. Here, the problem is to understand the range of preferences of the decision maker to provide recommendations \cite{har_tommi}, \cite{Montiel:2013vn}. In cooperative game theory \cite{montiel_c}, MCMC can be used to create an approximate objective function to optimize the negotiation strategy for a coalition of players. \subsection{The blueprint} This work presents an algorithm we call Matrix Hit and Run (MHAR) for sampling full and non-full dimensional polytopes. MHAR enhances the Hit-and-Run (HAR) algorithm proposed in \cite{montiel_jd}. We use the standard definition of a generic polytope $\Delta:=\{x \in \mathbb{R}^n | Ax \leq b\}$, where $(A,b) \in \mathbb{R}^{m \times n} \times \mathbb{R}^{m \times 1}$, $n$ is the number of elements of $x$, $m = m_E + m_I$ is the number of restrictions, $m_E$ is the number of equality constraints, and $m_I$ is the number of inequality constraints. The contribution of this work is six-fold: \begin{itemize} \item First, we introduce Matrix Hit-and-Run (\textbf{MHAR}). \item Second, we show that the cost per sample of the MHAR depends entirely on $m, n, z$, and $\omega$, where $m, n$ are as described in the definition of $\Delta$, $\omega$ represents a matrix multiplication coefficient as described in Table \ref{table:1}, and $z$ is a padding hyper-parameter specified by the user. After proper pre-processing and a warm start, the algorithm has a cost per sample of $\mathcal{O}^*(m_In^{\omega +1})$ if $n \leq m_I$ and $\mathcal{O}^*(m_I^{\omega -2}n^4)$ if $n > m_I$ for the full dimensional scenario, and of $\mathcal{O}^*(m_In^{\omega +1})$ if $n \leq m_I$ and $\mathcal{O}^*(n^{\omega +2})$ if $n > m_I$ for the non-full dimensional one. \item Third, we demonstrate that MHAR has lower cost per sample than HAR if the hyper-parameter $z$ is bigger than $\max(n,m)$. This is achieved by combining possibly isolated \textit{walks} into a padded matrix that allows us to share operations between walks. \item Fourth, we show that after proper pre-processing and a warm start, MHAR has a lower asymptotic cost per sample for the regime $n^{1+\frac{1}{3}} \ll m$ than does any of the published sampling algorithms \cite{walks}. \item Fifth, we provide code for MHAR as a python library based on the Pytorch framework. It is ready for use in CPU or CPU-GPU architectures (as found in Colab, AWS, Azure, and Google Cloud).
All MHAR experiments were conducted using Colab notebooks with an Nvidia P100 GPU. The code is available at \url{https://github.com/uumami/mhar_pytorch}. The python package can be installed with \textbf{pip install mhar}; the official site of the package is \url{https://github.com/uumami/mhar}. \item Sixth, we present the results of experiments to assess the performance of MHAR against the \textit{hitandrun} package used in \cite{har_tommi}. MHAR was found to be substantially faster in almost all scenarios, especially in high dimensions. Furthermore, we ran simulations to empirically test the convergence in distribution of our implementation, with favorable results. Finally, we present insights into the padding hyper-parameter $z$ obtained via computational tests. \end{itemize} The remainder of this paper is organized as follows. \S \ref{S:2} reviews definitions and some basic matrix-to-matrix operations. \S \ref{S:3} revisits the cost per iteration and cost per sample of HAR. \S \ref{S:4} provides a computational complexity analysis of MHAR. \S \ref{S:5} compares MHAR against other algorithms developed for full dimensional scenarios. \S \ref{S:6} contains a back-to-back comparison of our implementation against the \textit{``hitandrun''} library used in \cite{har_tommi}, and a numerical analysis of the padding parameter $z$. \S \ref{S:7} presents our conclusions and identifies future work. For clarity and simplicity, HAR will refer to the algorithm presented in \cite{montiel_jd}, which extends \cite{firs_har_ever} for non-full dimensional polytopes. For ease of comparison, we use ``soft-O'' notation $\mathcal{O}^*$, which suppresses $\log(n)$ factors and other parameters like error bounds \cite{walks}, \cite{RHMC}, \cite{original-har}. In order to allow comparison with other algorithms, we assume that the polytope sampled by HAR and MHAR has received proper pre-processing, which means the polytope is in near isotropic position as defined in \cite{walks}, \cite{RHMC}, \cite{har_tommi}. Additionally, all algorithms are compared from a \textit{warm start}. We use $f \ll g$ notation to define a relation where $f\in \mathcal{O}(g)$. Finally, we assume the existence of a random stream of bits that allows us to generate a random number in $\mathcal{O}(1)$. \section{Preliminaries}\label{S:2} This section formalizes the notation and provides a brief overview of computational complexity in matrix-to-matrix operations. \subsection{Polytopes} We start by defining a polytope, which is the $n$-dimensional generalization of a polyhedron, as the intersection of half-spaces. Formally, a polytope is characterized by a set of $m_E$ linear equality constraints and $m_I$ linear inequality constraints in a Euclidean space ($\mathbb{R}^n$): \begin{align} \Delta^I \ &= \{ x\in \mathbb{R}^n \;|\; A^Ix \leq b^I, \; A^I \in \mathbb{R}^{m_I \times n}, \; b^I \in \mathbb{R}^{m_I} \}, \label{TS1}\\ \Delta^E &= \{ x\in \mathbb{R}^n \;|\; A^Ex = b^E, \; A^E \in \mathbb{R}^{m_E \times n}, \; b^E \in \mathbb{R}^{m_E} \},\label{TS2}\\ \Delta \ \ &= \Delta^I \cap \Delta^E, \label{TS3} \end{align} where Equations (\ref{TS1}) and (\ref{TS2}) are defined by the inequalities and equalities, respectively. The third equation defines the polytope of interest, and it is the intersection of the two previous sets. Since $\Delta$ is the intersection of convex sets, then by construction it is also convex. For simplicity we assume all polytopes to be bounded, non-empty, and characterized with no redundant constraints.
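As a concrete illustration (ours, not taken from the released package), the $3$-dimensional simplex $\{x \;|\; \sum_i x_i = 1,\ x \geq 0\}$ fits this description with $m_E = 1$ and $m_I = 3$; a minimal Python sketch of the constraint data is:
\begin{verbatim}
import numpy as np

# Equality part Delta^E: sum_i x_i = 1   (A^E x = b^E)
A_E = np.array([[1.0, 1.0, 1.0]])   # shape (m_E, n) = (1, 3)
b_E = np.array([1.0])

# Inequality part Delta^I: -x_i <= 0, i.e. x >= 0   (A^I x <= b^I)
A_I = -np.eye(3)                    # shape (m_I, n) = (3, 3)
b_I = np.zeros(3)
\end{verbatim}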
\subsection{Matrix multiplication} We adopt common notation used in matrix multiplication. $\omega$ represents the matrix multiplication coefficient, which characterizes the number of operations required to multiply two $n \times n$ matrices. The complexity for such multiplication is of the order $\mathcal{O}(n^\omega)$. The complexity of matrix multiplication is conjectured to have a lower bound of $\Omega(n^2)$ \cite{lowest_bound}. Table \ref{table:1} shows the theoretical bounds for many well-known multiplication algorithms. \begin{table}[h!] \caption{Asymptotic complexity of matrix multiplication algorithms} \label{table:1} \vspace{.3cm} \centering \begin{tabular}{ |p{4.5cm}||p{2.5cm}| } \hline \multicolumn{2}{|c|}{Matrix Multiplication Algorithms} \\ \hline \textbf{Algorithm} & \textbf{Complexity}\\ \hline Naive & $\mathcal{O}(n^{3})$ \\ \hline Strassen & $\mathcal{O}(n^{2.807})$ \\ \hline Coppersmith-Winograd & $\mathcal{O}(n^{2.376})$ \\ \hline Le Gall & $\mathcal{O}(n^{2.373})$ \\ \hline \end{tabular} \end{table} In general, \cite{rec_matrix} showed that the number of operations needed to multiply two matrices with dimensions $m\times n$ and $n \times p$ is of $\mathcal{O}(d_1d_2d_3^{\omega-2})$, where $d_3=\min\{m,n,p\}$ and $\{d_1,d_2\} = \{m,n,p\}-\{d_3\}$. The special case of matrix-vector multiplication ($d_3=1$) yields a bound of $\mathcal{O}(mn)$. The smallest published $\omega$ is 2.373 \citep{legall_min}. It is possible to define a function $\mu$ that represents the matrix multiplication order of complexity for matrices $A \in \mathbb{R}^{n_1 \times n_2}$ and $B \in \mathbb{R}^{n_2 \times n_3}$ as \begin{align} \mu_{A,B}=& \begin{cases} n_1^{\omega-2} n_2 n_3 & \quad \text{if } \min\{n_1,n_2,n_3\}=n_1,\\ n_1 n_2^{\omega-2} n_3 & \quad \text{if } \min\{n_1,n_2,n_3\}=n_2,\\ n_1 n_2 n_3^{\omega-2} & \quad \text{if } \min\{n_1,n_2,n_3\}=n_3. \end{cases} \end{align} Thus we can express the complexity of the operation $AB$ as $\mathcal{O}(\mu_{A,B})$. In practice, only the naive and Strassen algorithms are used, because the constants hidden in the Big-O notation of the asymptotically faster algorithms are so large that only impractically large matrices would benefit from them. Moreover, many multiplication algorithms are impractical due to numerical instabilities \cite{walks}. Fortunately, there have recently been fast and numerically stable implementations of the Strassen algorithm using GPUs (\cite{strassen_gpu}, \cite{recipes}, \cite{reliable_gpu}). \section{HAR} \label{S:3} This section explains the HAR algorithm and calculates its cost per iteration and mixing time for non-full dimensional polytopes, as defined in \cite{montiel_jd}. \subsection{Overview} HAR can be described as follows. A \textit{walk} is initialized in a strict inner point of the polytope. At any iteration, a random direction is generated via independent normal variates. The random direction, along with the current point, generates a line set $L$, and its intersection with the polytope generates a line segment. The sampler selects a random point in $L$ and repeats the process. After a warm start, HAR for full-dimensional convex bodies has a cost per iteration $\mathcal{O}(m_In)$ and a cost per sample of $\mathcal{O}^*(m_I n^4)$ \cite{walks}.
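For intuition, the following is a minimal NumPy sketch (ours, purely illustrative; the variable names are assumptions) of one full-dimensional HAR iteration over $\Delta^I = \{x \;|\; A^Ix \leq b^I\}$, matching the description above. It assumes a bounded polytope and a strictly interior current point.
\begin{verbatim}
import numpy as np

def har_step(x, A_I, b_I, rng):
    # Random direction: n i.i.d. standard normals, normalized onto the unit sphere
    # (normalization is not strictly required for the walk).
    d = rng.standard_normal(x.shape[0])
    d /= np.linalg.norm(d)
    # Intersect the line {x + theta * d} with every half-space a_i^T y <= b_i.
    Ad = A_I @ d
    lam = (b_I - A_I @ x) / Ad
    theta_max = lam[Ad > 0].min()   # tightest forward constraint
    theta_min = lam[Ad < 0].max()   # tightest backward constraint
    # Uniform point on the chord becomes the next state of the walk.
    theta = rng.uniform(theta_min, theta_max)
    return x + theta * d

# Usage: rng = np.random.default_rng(); x_next = har_step(x, A_I, b_I, rng)
\end{verbatim}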
In general, the non-deterministic mixing time of HAR is of order $\mathcal{O}^*(n^2 \gamma_{\kappa})$, where $ \gamma_{\kappa}$ is defined as \[ \gamma_{\kappa} = \inf_{R_{in}, R_{out}>0}\Bigg\{ \frac{R_{out}}{R_{in}} \;\Bigg|\; \mathcal{B}(x, R_{in}) \subseteq \Delta \subseteq \mathcal{B}(y, R_{out}) \text{ for some } x,y \in \Delta\Bigg\}, \] where $R_{in}$ and $R_{out}$ are the radii of an inscribed and circumscribed ball of the polytope $\Delta$, respectively, and $\mathcal{B}(q,R)$ is the ball of radius $R$ centered at the point $q$. In essence, $ \gamma_{\kappa}$ is the best achievable ratio between the radius of a circumscribed ball and the radius of an inscribed ball of the polytope. That the mixing time depends on this parameter means that elongated polytopes are harder to sample. Implementations of HAR for convex bodies are typically analyzed after pre-processing and invoking a warm start, meaning that the body in question is brought to a near isotropic position in $\mathcal{O}^*(\sqrt{n})$, allowing the mixing time to be expressed as $\mathcal{O}^*(n^3)$ \cite{walks}, \cite{geo-walk}, \cite{RHMC}, \cite{ballwalk}, \cite{original-har} and \cite{har_tommi}. For ease of comparison with the literature, the remainder of the paper assumes that the polytope has received proper pre-processing. A HAR sampler must compute the starting point and find the line segment $L$ at each iteration. Additionally, a thinning factor (also called ``burning rate'') $\varphi(n)$ must be included to achieve an approximately uniform distribution over the studied space \cite{har_tommi}. This means that after a warm start, the algorithm needs to drop $\varphi(n)$ sampled points for each desired i.i.d. observation. This thinning factor is known as the mixing-time, which is $\mathcal{O}^*(n^3)$ in the case of polytopes (see \cite{walks}, \cite{har_tommi}, \cite{original-har}). The HAR pseudocode proposed in \cite{montiel_jd} for full and non-full dimensional polytopes is presented in Algorithm \ref{alg:har}. It samples a collection $\mathcal{X}$ of $T$ uncorrelated points inside $\Delta$. We know the complexity of HAR for full dimensional polytopes; finding the cost per iteration and cost per sample of HAR for non-full dimensional polytopes requires analyzing the complexity of calculating the projection matrix. \subsection{Projection matrix} The projection matrix $P_{\Delta^E}$ is computed from the equality matrix $ A^E $. Then, $P_{\Delta^E}$ allows any vector to be projected to the null space of $A^E$. In our case, the random direction vector $h$ lives in a full dimensional space, which means that if $m_E>0$, $h$ needs to be projected so that the line set $L$ lives in the same space as $\Delta$. The projection operation $P_{\Delta^E}h = d$ yields $A^Ed=0$. Then, $A^E(x+d)=b^E$ \cite{montiel_jd}. (This step is omitted if $m_E=0$.) The projection matrix is defined as \begin{equation} P_{\Delta^E} = I - A'^E(A^EA'^E)^{-1}A^E .
\end{equation} \begin{algorithm}[t] \label{alg:har} \caption{HAR pseudocode} \SetAlgoLined \KwResult{$\mathcal{X}$} Initialization\; $t\gets 0$ (Sample point counter)\; $j\gets 0$ (Iteration counter)\; $\mathcal{X} =\emptyset$\; \textbf{Set} the total sample size $T$\; \textbf{Set} a thinning factor $\varphi(n)$\; \textbf{Find} a strictly inner point of the polytope $\Delta$ and label it $x_{t=0,j=0}$\; \If{$(m_E>0)$}{Compute the projection matrix $P_{\Delta^E}$} \While{$t < T$}{ \textbf{Generate} the direction vector $h \in \mathbb{R}^n$\; \eIf{$(m_E=0)$}{$d=h$}{$d=P_{\Delta^E}h$} \textbf{Find} the line set $L:=\{x|x=x_{t,j}+\theta d, x\in\Delta \; \& \; \theta \in \mathbb{R} \}$\; $ j\gets j+ 1 $\; \textbf{Generate} a point uniformly distributed in $L \cap \Delta$ and label it $x_{t,j+1}$\; \If{$j==\varphi(n)$}{ $ \mathcal{X} = \mathcal{X} \cup x_{t,j} $\; $ t\gets t + 1 $\; $ j\gets 0 $\; } } \end{algorithm} \begin{lemma} \label{lemma projection complexity} If $m_E < n$, then the complexity of calculating $P_{\Delta^E}$ is $\mathcal{O}(m_E^{\omega-2}n^2)$. \end{lemma} \begin{proof} Computing $P_{\Delta^E}$ is done in three matrix multiplications, one matrix-to-matrix subtraction, and one matrix inversion operation over $(A^EA'^E)$. The number of operations needed to calculate the inverse matrix depends on the algorithm used for matrix multiplication \cite{inversion}. The order of number of operations for computing $P_{\Delta^E}$ is the sum of the following: \begin{enumerate} \item Obtain $(A^EA'^E)$ in $\mathcal{O}(\mu_{A^E, A'^E})=\mathcal{O}(\mu(m_E,n,m_E)) =\mathcal{O}(m_E^{\omega-1}n)$ operations. \item Find the inverse $(A^EA'^E)^{-1}$ in $\mathcal{O}(m_E^{\omega})$, since $(A^EA'^E)^{-1}$ has dimension $m_E \times m_E$. \item Multiply $ A'^E(A^EA'^E)^{-1}$ in $\mathcal{O}(\mu_{ A'^E,(A^EA'^E)^{-1}})=\mathcal{O}(\mu(n,m_E,m_E)) =\mathcal{O}(m_E^{\omega-1}n)$. \item Calculate $A'^E(A^EA'^E)^{-1}A^E$ in $\mathcal{O}(\mu_{ A'^E(A^EA'^E)^{-1},A^E})=\mathcal{O}(\mu(n,m_E,n)) =\mathcal{O}(m_E^{\omega-2}n^2)$. \item Subtract $ I - A'^E(A^EA'^E)^{-1}A^E$ in $\mathcal{O}(n^2)$. \end{enumerate} These sum to $2 \times \mathcal{O}(m_E^{\omega-1}n) + \mathcal{O}(m_E^{\omega}) + \mathcal{O}(m_E^{\omega-2}n^2)+\mathcal{O}(n^2)$. Hence the complexity of calculating $P_{\Delta^E}$ is $ \mathcal{O}(\mu_{A'^E(A^EA'^E)^{-1},A^E})=\mathcal{O}(m_E^{\omega-2}n^2)$. \end{proof} For simplicity, we will denote the complexity of computing $P_{\Delta^E}$ as $\mathcal{O}(\mu_{P_{\Delta^E}})$. \subsection{Non-full dimensional HAR} We proceed to calculate the cost per sample of HAR for $m_E > 0$. We start by computing the cost per iteration in Lemma \ref{lemma cost per iteration har nf}. \begin{lemma}\label{lemma cost per iteration har nf} The \textit{cost per iteration} of HAR for $0 \leq m_E$ is $\mathcal{O}(\max\{ m_In,m_E^{\omega-2}n^2\})$. \end{lemma} \begin{proof} As seen in Algorithm \ref{alg:har}, the only difference between the full and non-full dimensional cases is the projection step $P_{\Delta^E}h=d$. Then, the cost per iteration is defined by the larger of the original cost per iteration $\mathcal{O}(m_In)$ of HAR for $m_E=0$, and the extra cost induced by the projection when $m_E>0$. Because $P_{\Delta^E}$ has dimension $n \times n $ and $h$ is an $n \times 1$ vector, $\mu_{P_{\Delta^E},h}=n^2$ and the complexity is $\mathcal{O}(n^2)$. By Lemma \ref{lemma projection complexity}, finding $P_{\Delta^E}$ has an asymptotic complexity of $\mathcal{O}(m_E^{\omega-2}n^2)$. 
Therefore, the cost of projecting $h$ at each iteration is $ \mathcal{O}(n^2) + \mathcal{O}(m_E^{\omega-2}n^2) = \mathcal{O}(m_E^{\omega-2}n^2)$, since $m_E>0$. Therefore, the cost per iteration for $m_E >0$ is $\mathcal{O}(\max\{ m_In,m_E^{\omega-2}n^2\})$. If $m_E=0$, then the coefficient $\max\{ m_In,m_E^{\omega-2}n^2\}$ equals $\max\{m_In,0\}=m_In$ and the cost per iteration is $\mathcal{O}(\max\{m_In,0\})=\mathcal{O}(m_In)$. \end{proof} Having calculated the cost per iteration of HAR, we can proceed to Theorem \ref{th_cost_per_sample_har_nf}. \begin{theorem}\label{th_cost_per_sample_har_nf} The \textit{cost per sample} of HAR for $\ 0 \leq m_E$ is $\mathcal{O}^*(n^3\max\{ m_In,m_E^{\omega-2}n^2\})$ after proper pre-processing and a warm start. \begin{proof} According to \cite{walks}, the cost per sample of a sampling algorithm is its mixing time complexity multiplied by its cost per iteration. By Lemma \ref{lemma cost per iteration har nf}, the cost per iteration is $\mathcal{O}(\max\{ m_In,m_E^{\omega-2}n^2\})$. Moreover, \cite{har_tommi} states that the mixing time, after a warm start, of HAR is $\mathcal{O}^*(n^3)$. Therefore, the cost per sample is $\mathcal{O}^*(n^3\max\{ m_In,m_E^{\omega-2}n^2\})$. Recall that if $m_E=0$ the \textit{cost per sample} is $\mathcal{O}^*(n^3\max\{m_In,0\})=\mathcal{O}^*(m_In^4)$, which is the special case of HAR for full dimensional polytopes. \end{proof} \end{theorem} \section{Matrix Hit-and-Run (MHAR)}\label{S:4} This section details our new algorithm, Matrix Hit-And-Run (\textbf{MHAR}). MHAR has a lower cost per sample than does HAR. Furthermore, making $z$ simultaneous walks with MHAR requires fewer operations than does running $z$ HAR walks in parallel. The ``padding'' hyper-parameter $z$ allows the concatenation of multiple directions $d$ and samples $x$ to form matrices $D$ and $\mathcal{X}$, respectively. Each column of these matrices represents a walk over the polytope. This modification permits the use of efficient matrix-to-matrix operations to simultaneously project many directions $d$ and find their respective line segments. \subsection{MHAR preliminaries} MHAR explores the polytope using simultaneous walks by drawing multiple directions $d$ from the $n$-dimensional hypersphere. Each independent walk has the same mixing-time as with HAR, but a lower cost per iteration. Instead of running separate threads, we ``batch'' the walks by ``padding'' vector $x$ and $d$ with $z$ columns, creating the matrices $X=(x^1|\dots|x^k|\dots|x^z)$ and $D=(d^1|\dots|d^k|\dots|d^z)$. The superscript $k$ denotes the $k$th walk, represented by the $k$th column in the padded matrix. The algorithm then adapts the steps in HAR to keep track of each independent walk and recasts the operations as matrix-to-matrix. The algorithm is tailored to exploit cutting-edge matrix routines that take advantage of the architectures of machines like GPUs, cache memories, and multiple cores. The main difference with HAR when running $z$ instances on multiple independent cores ($z$-HAR) is the estimation of $D=(d^1|\dots|d^k|\dots|d^z)$ and the line segments $L^k$ in a simultaneous fashion for all $z$ walks. In both $z$-HAR and MHAR, each walk is oblivious of the others after a warm start, which guarantees the same mixing-time for all $z$ walks \cite{montiel_jd} \cite{starting_point}. Algorithm \ref{alg:mhar} presents the pseudocode for MHAR.
\begin{algorithm}[h] \label{alg:mhar} \caption{MHAR pseudocode} \KwResult{$\mathcal{X}$} Initialization\; $t\gets 0$ (Sample point counter)\; $j\gets 0$ (Iteration counter)\; $z \gets \max\{m_I,n\} + 1$\; $\mathcal{X} = \emptyset$\; \textbf{Set} the total sample size $T$\; \textbf{Set} a thinning factor $\varphi(n)$\; \textbf{Find} a strictly inner point of the polytope $\Delta$ and label it $x_{t,j}$\; \textbf{Set} $x_{t,j}^k=x_{t,j}, \ \forall k \in \{1,...,z\}$\; \textbf{Initialize} $X_{t,j}=(x_{t,j}^1|...|x_{t,j}^k|...|x_{t,j}^z) \in \mathbb{R}^{n \times z}$\; \If{$(m_E>0)$}{Compute the projection matrix $P_{\Delta^E}$} \While{$t < T$}{ \textbf{Generate} $H=(h^1|...|h^k|...|h^z) \in \mathbb{R}^{n \times z}$, the direction matrix\; \eIf{$(m_E=0)$}{$D=H$;}{$D=P_{\Delta^E}H=(d^1|...|d^k|...|d^z)$;} \textbf{Find} the line sets $\Big\{L^k:=\{x|x=x_{t,j}^k+\theta^k d^k, \ x\in\Delta \; \& \; \theta^k \in \mathbb{R} \}\Big\}_{k=1}^z$\; $ j\gets j + 1 $\; \textbf{Generate} a point uniformly distributed in each $L^k$ and label it $x_{t,j}^k$ in $X_{t,j}$\; \If{$j==\varphi(n)$}{ $ \mathcal{X} = \mathcal{X} \cup \{x^1_{t,j}, ..., x^z_{t,j}\} $\; $ t\gets t + z $\; $ j\gets 0 $\; } } \end{algorithm} \subsection{Starting point} In general, the cost of finding the starting point is excluded from the complexity analysis because it is independent of the mixing-time. However, we present it here for completeness even though the literature assumes a warm start in determining cost per sample (\cite{walks}, \cite{har_tommi}, \cite{RHMC}). MHAR needs to be initialized by a point in the relative interior of the polytope. We suggest Chebyshev's center of the polytope, which is the center of the largest inscribed ball. For polytopes, Chebyshev's center can be formulated as a linear optimization problem and solved using standard methods. Chebyshev's center is presented in Model (\ref{CCHC}). \begin{equation}\label{CCHC} \begin{aligned} \max \limits_{{x\in \mathbb{R}^n, r\in\mathbb{R}}} \ &r,\\ \text{s.t.} \quad A^Ex&=b^E,\\ (a^I_i)^Tx + r||a^I_i||_2 &\leq b_i^I, \ \forall i = 1,...,m_I, \end{aligned} \end{equation} where $a^I_i$ and $b^I_i$ represent the $i$th row of matrix $A^I$ and $i$th entry from vector $b^I$, respectively. Model (\ref{CCHC}) has the original $m$ restrictions plus one additional variable $r$. Hence, the problem has $m$ constraints and $n+1$ variables. Then, calculating the $||\cdot||_2$ coefficients takes $\mathcal{O}(mn)$. Thus, it can be formulated and solved in $\mathcal{O}(n^{\omega})$ using Vaidya's algorithm \cite{fast_linear} for linear optimization. After solving Model (\ref{CCHC}), we use $x$ as the starting point $x_{t=0, j=0}$ for all walks and draw independent walking directions. The matrix $X_{t,j} \in \mathbb{R}^{n \times z}$ introduced in Algorithm \ref{alg:mhar} is the algorithmic version of $X$, and it summarizes the state of all walks, where each $k$th column represents the current point of walk $k$ at iteration $\{t,j\}$. Formally, we write $X_{t,j} = (x_{t,j}^1|...|x_{t,j}^k|...|x_{t,j}^z)$ where $x_{t,j}^k \in \mathbb{R}^{n \times 1} \ \forall k \in \{1,...,z\}$. \subsection{Generating $D$} Because the target distribution of HAR and MHAR is uniform, we follow the procedure established in \cite{montiel_jd} and \cite{original-har} that uses the Marsaglia method \cite{mar} to generate a random vector $h$ from the hypersphere by generating $n$ i.i.d. samples from a standard normal distribution $\mathcal{N}(0, \ 1)$.
However, instead of generating a single direction vector $d \in \mathbb{R}^ n$, we create matrices $H,\ D \in \mathbb{R}^{n \times z}$, where each element of $H$ is generated by an independent execution of the Box-Muller method \cite{box-muller}, for a total cost bounded by $\mathcal{O}(nz)$. If the polytope is full dimensional, $H=D$ and no projection operation is needed. Otherwise, the projection matrix $P_{\Delta^E}$ is calculated as in \S \ref{S:3}, and Lemma \ref{lemma projection complexity} bounds the number of operations as $\mathcal{O}(m_E^{\omega-2}n^2)$. Matrices $H$ and $D$ can be visualized as \begin{align} H = (h^1 | ... | h^k | ... | h^z),\ h^k \in \mathbb{R}^n,\ \forall k \in \{1,...,z\}, \\ D = P_{\Delta^E}H = (d^1 | ... | d^k | ... | d^z),\ d^k \in \mathbb{R}^n,\ \forall k \in \{1,...,z\}. \end{align} All columns $h^k$ are projected simultaneously by the operation $D=P_{\Delta^E}H$. Hence, each column of $D$ satisfies the restrictions in $\Delta^E$ and serves as a direction $d$ for an arbitrary walk $k$. In principle, $z$ can be any number in $\mathbb{N}$, where $z=1$ is the special case that recovers the original HAR. \begin{lemma}\label{lemma complexity generating D} The complexity of generating matrix $D$ in MHAR given $P_{\Delta^E}$ and $\max\{m_I, n\} \leq z$ is $\mathcal{O}(nz)$ if $m_E=0$, and $\mathcal{O}(n^{\omega-1}z)$ if $m_E>0$. \begin{proof} Generating $H$ has complexity $\mathcal{O}(nz)$ using the Box-Muller method. If $m_E=0$, then $D=H$, implying a total asymptotic cost $\mathcal{O}(nz)$. If $m_E>0$, then $D=P_{\Delta^E}H$, whose cost $\mathcal{O}(\mu_{P_{\Delta^E},H})=\mathcal{O}(n^{\omega-1}z)$ (using $\max\{m_I, n\} \leq z$) needs to be included; this term dominates $\mathcal{O}(nz)$. Therefore, the total cost of computing $D$ for $m_E>0$ is bounded by $\mathcal{O}(n^{\omega-1}z)$. \end{proof} \end{lemma} Lemma \ref{lemma complexity generating D} shows that if $m_E>0$, the cost of generating new directions $d$ does not scale as if we had used $z$ parallel HARs. In the HAR case, the operations required would have been carried out in $\mathcal{O}(z \mu_{P_{\Delta^E},h})=\mathcal{O}(zn^2)$, averaging $\mathcal{O}(\frac{zn^2}{z})=\mathcal{O}(n^2)$ per direction. In contrast, MHAR is $\mathcal{O}(n^{\omega-1}z)$, averaging $\mathcal{O}(\frac{n^{\omega-1}z}{z})=\mathcal{O}(n^{\omega-1})$ per direction. When $m_E=0$, the number of operations for both cases is the same. \subsection{Finding the line sets} Given matrices $X$ and $D$, we now obtain the line sets $\{L^k\}_{k=1}^{z}$: \begin{equation} \Big\{L^k:=\{x|x=x^k+\theta^k d^k, \ x\in\Delta, \; \mbox{and} \; \theta^k \in \mathbb{R} \}\Big\}_{k=1}^z. \end{equation} Each $\theta^k$ characterizes the line set for column $x^k$. The ``padded'' column-wise representation of restrictions $\Delta^I$ is \begin{equation} A^{I}X=\begin{pmatrix} a_1^Ix^1 & \dots & a_1^Ix^z \\ \vdots & \ddots & \vdots\\ a_{m_I}^Ix^1 & \dots & a_{m_I}^Ix^z \end{pmatrix} \leq \begin{pmatrix}b_1^I \\ \vdots \\ b_{m_I}^I \end{pmatrix}=b^{I}, \end{equation} where each element from the left matrix must be less than or equal to the corresponding element (row-wise) in vector $b^I$. The restrictions for an arbitrary $x^k$ can be rewritten row-wise so that the left side and right side are scalars: \begin{align} a_i^Ix^k \leq b^I_i, \ \forall i \in \{1, \dots, m_I \}. \end{align} Then, each $\theta^k$ must satisfy \begin{align} (a_i^Ix^k + \theta^k a_i^Id^k) < b^I_i, \ \forall i \in \{1, \dots, m_I \}.
\end{align} Rearranging the terms obtains restrictions for each walk $k$, where each $\theta^k$ must be bounded by its respective set of lambdas $\{\lambda^k_i\}_{i=1}^{m_I}$, as follows: \begin{align} \theta^k &< \lambda_i^k = \frac{b_i^I - a_i^Ix^k}{a_i^I d^k}, \quad if \ a_i^I d^k>0, \\ \theta^k &> \lambda_i^k = \frac{b_i^I - a_i^Ix^k}{a_i^I d^k}, \quad if \ a_i^I d^k<0. \end{align} Hence, a walk's boundaries are represented by \begin{align} \lambda_{min}^k=\max\ \{\lambda_i^k \ | \ a_i^I d^k < 0\}, \\ \lambda_{max}^k=\min\ \{\lambda_i^k \ | \ a_i^I d^k > 0\}. \end{align} These lambdas can be used to construct the intervals $\Lambda^k=(\lambda^k_{\min},$ $\lambda^k_{\max}), \ k\in \{1,...,z\}$. By construction, if $\theta^k \in \Lambda^k$ and $x^k \in \Delta$, then $x^k + \theta^kd^k \in L^k$, since $A^I(x^k + \theta^k d^k) \leq b^I $ and $A^E(x^k + \theta^k d^k)=b^E$. The line segment can be found simply by evaluating $\{\Lambda^k\}_{k=1}^{z}$, because $x^k$ and $D$ were computed previously. We can now state Lemma \ref{lemma complexity generating L}. \begin{lemma}\label{lemma complexity generating L} The complexity of generating all line sets $\{L^k\}_{k=1}^{z}$ in MHAR given $D$, $X$, and $\max\{m_I, n\} \leq z$ is bounded by $\mathcal{O}(m_In^{\omega -2}z) \ if \ n \leq m_I$, and by $\mathcal{O}(m_I^{\omega -2}nz)$ otherwise. \begin{proof} All $\Lambda^k$s can be obtained as follows: \begin{enumerate} \item Obtain matrix $A^IX$ in $\mathcal{O}(\mu_{A^I,X})$. This is done in $\mathcal{O}(m_In^{\omega -2}z) \ if \ n \leq m_I$, and in $\mathcal{O}(m_I^{\omega -2}nz)$ otherwise. \item Compute $B^I - A^IX$, where $B_I=(b^I|...|b^I) \in \mathbb{R}^{m^I \times z}$, which takes $\mathcal{O}(m_Iz)$ operations. \item Calculate $A^ID$, which is bounded by $\mathcal{O}(\mu_{A^I,D})$, which is done in $\mathcal{O}(m_In^{\omega -2}z) \ if \ n \leq m_I$, and in $\mathcal{O}(m_I^{\omega -2}nz)$ otherwise. \item Divide $\frac{B^I - A^IX}{A^ID}$ (entry-wise) to obtain all $\lambda^k_i$. All the necessary point-wise operations for this calculation have a combined order of $\mathcal{O}(m_Iz)$. \item For each $k \in \{1,...,z\}$, find which coefficients $a_i^I d^k$ are positive or negative, which takes $\mathcal{O}(m_Iz)$. \item For each $k \in \{1,...,z\}$, find the intervals $\lambda_{min}^k=\max\ \{\lambda_i^k \ | \ a_i^I d^k < 0\}$ and $ \lambda_{max}^k=\min\ \{\lambda_i^k \ | \ a_i^I d^k > 0\}$, which can be done in $\mathcal{O}(m_Iz)$. \end{enumerate} This procedure constructs all the intervals $\Lambda^k=(\lambda_{\min}^k, \lambda_{\max}^k)$. The complexity of this operation is bounded by $\mathcal{O}(\mu_{A^I,X}) = \mathcal{O}(\mu_{A^I,D})$. Hence, the complexity of finding all line sets is bounded by $\mathcal{O}(m_In^{\omega -2}z) \ if \ n \leq m_I$, and by $\mathcal{O}(m_I^{\omega -2}nz)$ otherwise. \end{proof} \end{lemma} Lemma \ref{lemma complexity generating L} bounds the complexity of finding the line sets at any iteration of MHAR. This leaves only analyzing the cost of choosing a new sample. \subsection{Choosing samples} The following lemma bounds the complexity of choosing a new $X_{t,j+1}$ or $X_{t+z,0}$ given $\Lambda^k \ \forall k\in \{1,...,z\}$. The new samples will be padded to create the matrix $X_{t,j+1}=(x_{t,j+1}^1 | \dots | x_{t,j+1}^k)$ to be used in the next iteration. \begin{lemma}\label{lemma complexity sample pick} Sampling $z$ new points given $\{\Lambda^k\}_{k=1}^z$ has complexity $\mathcal{O}(zn)$. 
\begin{proof} Selecting a random $\theta^k \in \Lambda^k$ takes $\mathcal{O}(1)$. Sampling a new point $x^k_{t,j+1} = x^k_{t,j} + \theta d^k_{t,j}$ has complexity $\mathcal{O}(n)$ because it requires $n$ scalar multiplications and $n$ sums. Then, sampling all new $x_{t,j+1}^k$ points is bounded by $\mathcal{O}(zn)$. \end{proof} \end{lemma} Having concluded the complexity analysis for each step of the loop, we next calculate the cost per iteration and proceed to measure the cost per sample. \subsection{Iteration and sampling costs of MHAR} The asymptotic behavior of each operation that comprises the main loop of MHAR when $\max\{n,m_I\} \leq z$ is presented in Table \ref{complexity_mhar}. The cost of finding the starting point is excluded (\cite{walks}, \cite{har_tommi}). \vspace{-.2cm} \begin{table}[h!] \setstretch{1.5} \caption{Asymptotic cost per sample of MHAR at each step} \label{complexity_mhar} \vspace{.2cm} \centering \begin{tabular}{ |p{3.7cm}||p{2.2cm}||p{2.2cm}||p{2.2cm}||p{2.2cm}| } \hline \multicolumn{5}{|c|}{MHAR complexity at each step, $(n,m) < z$} \\ \hline Operation & $m_E=0, \quad \ $ $\quad n \leq m_I.$ & $m_E=0, \quad \ $ $ \ n>m_I.$ & $m_E>0,\quad \ $ $ \ n \leq m_I.$ & $m_E>0,\quad \ $ $ \ n>m_I.$ \\ \hline 1.Projection matrix & $\mathcal{O}(1)$ & $\mathcal{O}(1)$ & $\mathcal{O}(m_E^{\omega-2}n^2)$ & $\mathcal{O}(m_E^{\omega-2}n^2)$ \\ \hline 2.Generating $D$ & $\mathcal{O}(nz)$ & $\mathcal{O}(nz)$ & $\mathcal{O}(n^{\omega-1}z)$ & $\mathcal{O}(n^{\omega-1}z)$\\ \hline 3.Finding $\{L^k\}_{k=1}^z$ & $\mathcal{O}(m_In^{\omega -2}z)$ & $\mathcal{O}(m_I^{\omega -2}nz)$ & $\mathcal{O}(m_In^{\omega -2}z)$ & $\mathcal{O}(m_I^{\omega -2}nz)$\\ \hline 4.Sampling all $x_{t,j+1}^k$ & $\mathcal{O}(nz)$ & $\mathcal{O}(nz)$ & $\mathcal{O}(nz)$ & $\mathcal{O}(nz)$ \\ \hline \end{tabular} \end{table} The following lemmas will help bound the cost per iteration of MHAR. Lemmas \ref{lemma me=0 n<m} and \ref{lemma me=0 m<n} establish the full dimensional case for ($n \leq m_I$) and ($n > m_I$), respectively. Lemmas \ref{lemma 0<me n<m} and \ref{lemma 0<me m<n} do likewise in the non-full dimensional case for ($n \leq m_I$) and ($n > m_I$), respectively. Figure \ref{tree1} summarizes these results as follows. \begin{figure}[h!] \centering \resizebox{.7\textwidth}{!}{ \begin{tikzpicture}[grow=right, sloped] \node[bag] {Value of $m_E$} child { node[bag] {$m_I$ \textit{vs} $n$} child { node[end, label=right: {$\mathcal{O}(m_I n^{\omega -2}z)$}] {} edge from parent node[above] {} node[below] {$n \leq m_I$} } child { node[end, label=right: {$\mathcal{O}(n^{\omega -1}z)$}] {} edge from parent node[above] {$n > m_I $} node[below] {} } edge from parent node[above] {} node[below] {$0 < m_E$} } child { node[bag] {$m_I$ \textit{vs}. $n$} child { node[end, label=right: {$\mathcal{O}(m_In^{\omega -2}z)$}] {} edge from parent node[above] {} node[below] {$n \leq m_I$} } child { node[end, label=right: {$\mathcal{O}(m_I^{\omega -2}nz)$}] {} edge from parent node[above] {$n > m_I $} node[below] {} } edge from parent node[above] {$m_E=0$} node[below] {} }; \end{tikzpicture} } \caption{Asymptotic behavior of the cost per iteration of MHAR.} \label{tree1} \end{figure} \begin{lemma}\label{lemma me=0 n<m} Assume $m_E = 0$, $\max\{n,m\} < z$, and $n \leq m_I$. Then, the \textit{cost per iteration} of MHAR is $\mathcal{O}(m_In^{\omega -2}z)$, which is the number of operations needed for finding all line sets $\{L^k\}_{k=1}^z$. 
\end{lemma} \begin{proof} First we enumerate the cost of each step of the iteration for $m_E=0$ and $n \leq m_I$ if $\max\{n,m\} < z$: \begin{enumerate} \item By Lemma \ref{lemma projection complexity}, generating $P_{\Delta^E}$ is bounded by $\mathcal{O}(1)$. \item By Lemma \ref{lemma complexity generating D}, generating $D$ is bounded by $\mathcal{O}(nz)$. \item By Lemma \ref{lemma complexity generating L}, generating $\{L^k\}_{k=1}^z$ for $n \leq m_I$ is bounded by $\mathcal{O}(m_In^{\omega -2}z)$. \item By Lemma \ref{lemma complexity sample pick}, generating all new $x_{t,j+1}^k$ is bounded by $\mathcal{O}(zn)$. \end{enumerate} By hypothesis, $0<n\leq m_I$. Then, $nz \leq m_Iz < m_In^{\omega -2}z$, because $\omega \in (2,3]$. Therefore, $\mathcal{O}(1) \subseteq \mathcal{O}(nz) \subseteq \mathcal{O}(m_In^{\omega -2}z)$, where the first term is the complexity of finding the projection matrix (omitted for $m_E=0$), the second one bounds generating $D$ and sampling new points, and the third one is the asymptotic cost of finding all line sets $\{L^k\}_{k=1}^z$. \end{proof} \begin{lemma}\label{lemma me=0 m<n} Assume $m_E = 0$, $\max\{n,m\} < z$, and $n > m_I$. Then, the \textit{cost per iteration} of MHAR is $\mathcal{O}(nm_I^{\omega -2}z)$, which is the number of operations needed for finding all line sets $\{L^k\}_{k=1}^z$. \end{lemma} \begin{proof} As in the proof of Lemma \ref{lemma me=0 n<m}, the complexity of the projection matrix, generating $D$, and sampling all new $x_{t,j+1}^k$ points is the same, given by $m_E=0$ and $n > m_I$. Hence, the only change is provided by Lemma \ref{lemma complexity generating L}, in which the cost of finding all line sets $\{L^k\}_{k=1}^z$ for $n > m_I$ is $\mathcal{O}(nm_I^{\omega -2}z)$. By hypothesis, $0<m_I$ and $\max\{n,m\} < z$, thus $nz < nm_I^{\omega -2}z$. Therefore, $\mathcal{O}(1) \subseteq \mathcal{O}(nz) \subseteq \mathcal{O}(nm_I^{\omega -2}z)$, where the third term is the cost of finding all line sets $\{L^k\}_{k=1}^z$. \end{proof} \begin{corollary} \label{coro me=0} Assume $m_E = 0$ and $\max\{n,m\} < z$. Then, the \textit{cost per iteration} of MHAR is bounded by the cost of finding all line sets $\{L^k\}_{k=1}^z$. \end{corollary} \begin{proof} The proof follows from Lemmas \ref{lemma me=0 n<m} and \ref{lemma me=0 m<n}. \end{proof} \noindent We proceed to finding the cost per iteration for the non-full dimensional case $m_E>0$. \begin{lemma}\label{lma1} Assume $m_E < n$ and $(m,n)<z$. Then, the cost of calculating the projection matrix $P_{\Delta^E}$ is bounded by the cost of generating $D$. \end{lemma} \begin{proof} By hypothesis $m_E < n $, implying that $m_E^{\omega-2}n^2 < n^{\omega-2}n^2 = n^{\omega}$. Because $n<z$, $n^{\omega} = n^{\omega-1}n <n^{\omega-1}z $. Combining both inequalities yields $m_E^{\omega-2}n^2 < n^{\omega}< n^{\omega-1}z$. Therefore, $\mathcal{O}(m_E^{\omega-2}n^2) \subseteq \mathcal{O}(n^{\omega-1}z)$, where the first term is the complexity of computing $P_{\Delta^E}$ (by Lemma \ref{lemma projection complexity}), and the second term is the complexity of projecting $H$ in order to obtain $D$ (by Lemma \ref{lemma complexity generating D}). \end{proof} \begin{lemma}\label{lemma 0<me n<m} Assume $m_E>0$, $\max\{n,m\} < z$, and $n \leq m_I$. Then, the \textit{cost per iteration} of MHAR is $\mathcal{O}(m_In^{\omega -2}z)$, which is the number of operations needed for finding all line sets $\{L^k\}_{k=1}^z$. 
\end{lemma} \begin{proof} First, we enumerate the cost of each step of the iteration for $m_E>0$, $n \leq m_I$, and $\max\{n,m\} < z$: \begin{enumerate} \item By Lemma \ref{lemma projection complexity}, generating $P_{\Delta^E}$ is bounded by $\mathcal{O}(m_E^{\omega-2}n^2)$. \item By Lemma \ref{lemma complexity generating D}, generating $D$ is bounded by $\mathcal{O}(n^{\omega -1}z)$. \item By Lemma \ref{lemma complexity generating L}, generating $\{L^k\}_{k=1}^z$ for $n \leq m_I$ is bounded by $\mathcal{O}(m_In^{\omega -2}z)$. \item By Lemma \ref{lemma complexity sample pick}, generating all new $x_{t,j+1}^k$ is bounded by $\mathcal{O}(zn)$. \end{enumerate} Using Lemma \ref{lma1}, the Big-O term for finding $P_{\Delta^E}$ (step 1) is bounded by the term of generating $D$ (step 2). Because $n\leq m_I$, $n^{\omega-1}z=n^{\omega-2}nz\leq n^{\omega-2}m_Iz$. Therefore, $\mathcal{O}(m_E^{\omega-2}n^2)\subseteq \mathcal{O}(n^{\omega -1}z) \subseteq \mathcal{O}(m_In^{\omega -2}z)$, which are the respective costs of steps 1, 2, and 3. Furthermore, $nz \leq n^{\omega -2}m_Iz$, implying that step 4 is also bounded by step 3 in terms of complexity. This implies that all the operations above are bounded by the term $\mathcal{O}(m_In^{\omega -2}z)$, which is the asymptotic complexity of finding all line sets $\{L^k\}_{k=1}^z$. \end{proof} \begin{lemma}\label{lemma 0<me m<n} Assume $m_E>0$, $\max\{n,m\} < z$, and $n>m_I$. Then, the \textit{cost per iteration} of MHAR is $\mathcal{O}(n^{\omega -1}z)$, which is the number of operations needed for generating $D$. \end{lemma} \begin{proof} As in the proof of Lemma \ref{lemma 0<me n<m}, the costs of the projection matrix, of generating $D$, and of sampling all new $x_{t,j+1}^k$ points are unchanged, given $m_E>0$ and $n > m_I$. Hence, the only change is provided by Lemma \ref{lemma complexity generating L}, in which the cost of finding all line sets $\{L^k\}_{k=1}^z$ for $n > m_I$ is $\mathcal{O}(nm_I^{\omega -2}z)$. By Lemma \ref{lma1}, the Big-O term for finding $P_{\Delta^E}$ is bounded by the term of generating $D$. Because $n>m_I$, $m_I^{\omega-2}nz < n^{\omega-2}nz=n^{\omega-1}z$. Therefore, $\mathcal{O}(m_E^{\omega-2}n^2) \subseteq \mathcal{O}(nm_I^{\omega -2}z) \subseteq \mathcal{O}(n^{\omega -1}z)$, which are the respective costs of the projection matrix, finding all line sets, and generating $D$. Furthermore, $nz \leq n^{\omega -2}nz=n^{\omega -1}z$, implying that the cost of sampling all new $x_{t,j+1}^k$ is also bounded by the cost of generating $D$. This implies that all the operations above are bounded by $\mathcal{O}(n^{\omega -1}z)$, the cost of generating $D$. \end{proof} We can now proceed to the main results of the paper, given in Theorem \ref{thmmhar1}. \begin{theorem}\label{thmmhar1} If $\max\{n,m\} < z$, then after proper pre-processing and a \textit{warm start}, the \textit{cost per sample} of MHAR is \begin{equation} \begin{cases} \ \ \mathcal{O}^*(m_In^{\omega+1}), & \text{if } m_E = 0 \text{ and } n \leq m_I\\ \ \ \mathcal{O}^*(m_I^{\omega-2}n^4), & \text{if } m_E = 0 \text{ and } n > m_I\\ \ \ \mathcal{O}^*(m_In^{\omega+1}), & \text{if } m_E > 0 \text{ and } n \leq m_I\\ \ \ \mathcal{O}^*(n^{\omega+2}), & \text{if } m_E > 0 \text{ and } n > m_I.
\end{cases} \end{equation} \end{theorem} \begin{proof} Lemmas \ref{lemma me=0 n<m}, \ref{lemma me=0 m<n}, \ref{lemma 0<me n<m}, and \ref{lemma 0<me m<n} gave the cost per iteration of MHAR for all four cases: \begin{equation}\label{Thproof} \begin{cases} \ \ \mathcal{O}(m_In^{\omega-2}z), & \text{if } m_E = 0 \text{ and } n \leq m_I\\ \ \ \mathcal{O}(m_I^{\omega-2}nz), & \text{if } m_E = 0 \text{ and } n > m_I\\ \ \ \mathcal{O}(m_In^{\omega-2}z), & \text{if } m_E > 0 \text{ and } n \leq m_I\\ \ \ \mathcal{O}(n^{\omega-1}z), & \text{if } m_E > 0 \text{ and } n > m_I. \end{cases} \end{equation} As stated above, each walk in the ``padding'' is independent of the others after a warm start. Hence, each individual walk has a mixing time of $\mathcal{O}^*(n^3)$. It then suffices to multiply the cost per iteration by the mixing time and divide by the padding parameter $z$, which is the number of points obtained at each iteration. Hence, multiplying each case in Equation (\ref{Thproof}) by $\frac{n^3}{z}$ obtains the desired result. \end{proof} Figure \ref{tree2} graphically depicts the results of the theorem. \begin{figure}[h!] \centering \resizebox{.7\textwidth}{!}{ \begin{tikzpicture}[grow=right, sloped] \node[bag] {Value of $m_E$} child { node[bag] {$m_I$ \textit{vs}. $n$} child { node[end, label=right: {$\mathcal{O}^*(m_In^{\omega + 1})$}] {} edge from parent node[above] {} node[below] {$n \leq m_I$} } child { node[end, label=right: {$\mathcal{O}^*(n^{\omega +2})$}] {} edge from parent node[above] {$n > m_I $} node[below] {} } edge from parent node[above] {} node[below] {$m_E>0$} } child { node[bag] {$m_I$ \textit{vs}. $n$} child { node[end, label=right: {$\mathcal{O}^*(m_In^{\omega +1})$}] {} edge from parent node[above] {} node[below] {$n \leq m_I$} } child { node[end, label=right: {$\mathcal{O}^*(m_I^{\omega -2}n^4)$}] {} edge from parent node[above] {$n > m_I $} node[below] {} } edge from parent node[above] {$m_E=0$} node[below] {} }; \end{tikzpicture} } \caption{Asymptotic behavior of the cost per sample of MHAR after a warm start.} \label{tree2} \end{figure} Theorem \ref{thmmhar1} characterizes the cost per sample of MHAR for all parameter values. The theorem shows that MHAR is always at least as efficient as HAR, and more efficient for $\omega \in (2,3)$. Intuitively this is caused by ``padding,'' which permits matrix-to-matrix multiplications instead of isolated matrix-to-vector operations when finding the line sets $L$ or the directions $D$. Furthermore, this approach allows efficient cache usage and state-of-the-art GPU matrix multiplication algorithms. \section{MHAR Complexity Benchmarks}\label{S:5} This section benchmarks the asymptotic behavior of MHAR against that of seven state-of-the-art algorithms. Some of these algorithms cover other convex bodies, such as spheres or cones. However, we restrict our focus to polytopes because they are the target of MHAR. For in-depth analysis of each algorithm, see \cite{walks}. We prioritize the full-dimensional case ($m=m_I, m_E=0$) because few algorithms are designed for the non-full dimensional scenario and their analysis is outside our scope. Table \ref{table_complex} is adapted from \cite{walks} and includes the notation established in \cite{har_tommi} and \cite{montiel_jd}. The authors of RHCM \cite{RHMC}, John's walk \cite{john_walk}, Vaidya walk, and John walk omitted the case $m < n$, which is also outside of our scope. Note that John's walk and John walk are different algorithms.
In \S \ref{S:4} we showed that MHAR has a lower cost per sample than HAR when efficient matrix multiplication algorithms are used. Furthermore, because the Ball walk \cite{ballwalk} has the same cost per sample as HAR, we can derive the next corollary. \begin{corollary} \label{cor eff ball} The \textit{cost per sample} of MHAR is as low as the \textit{cost per sample} of the Ball walk, after a \textit{warm start}, if $\max\{n,m\} < z$, and strictly lower if efficient matrix-to-matrix multiplication algorithms are used $\big(\omega \in (2,3)\big)$. \end{corollary} \begin{proof} This follows from comparing Theorem \ref{thmmhar1} against the complexity of the Ball walk. \end{proof} The following lemma shows that MHAR has a lower cost per sample than does John's walk. \begin{lemma} \label{lemma johns walk} For $\max\{n,m\} < z$ and $n<m$, MHAR has a lower cost per sample than does John's walk after proper pre-processing, a \textit{warm start}, and ignoring the logarithmic and error terms. \end{lemma} \begin{proof} Given proper pre-processing, $n < m$, and $\max\{n,m\} < z$, MHAR's cost per sample is $\mathcal{O}^*(mn^{\omega + 1})$, and that for John's walk is $\mathcal{O}(mn^{11} + n^{15})$. Note that $mn^{\omega + 1}\in \mathcal{O}(mn^{11} + n^{15})$. Therefore, when ignoring the logarithmic and error terms, MHAR has a lower cost per sample. \end{proof} \begin{table*}[h!] \setstretch{1.5} \caption{Asymptotic behavior of random walks} \label{table_complex} \vspace{.2cm} \centering \begin{tabular}{ |p{4.9cm}||p{2.7cm}||p{2.7cm}||p{2cm}|} \hline \multicolumn{4}{|c|}{Random walk behavior} \\ \hline \textbf{Walk} &\textbf{Mixing time} & \textbf{Cost per iteration} & \textbf{Cost per sample}\\ \hline MHAR with $n > m$ & $n^{3}$ &$m^{\omega -2}nz$ & $m^{\omega -2}n^4$\\ \hline MHAR with $ n \leq m$ & $n^{3}$ &$mn^{\omega -2}z$ & $mn^{\omega +1}$\\ \hline Ball walk & $n^3$ &$mn$ & $mn^4$\\ \hline HAR & $n^3$ &$mn$ & $mn^4$\\ \hline Dikin walk with $ n \leq m$ & $mn$ &$mn^{\omega -1}$ & $m^2n^{\omega}$\\ \hline RHMC with $ n \leq m$ & $mn^\frac{2}{3}$ &$mn^{\omega -1}$ & $m^2n^{\omega - \frac{1}{3}}$\\ \hline John's walk with $ n \leq m$ & $n^{7}$ &$mn^4 + n^8$ & $mn^{11} + n^{15}$\\ \hline Vaidya walk with $ n \leq m$ & $m^\frac{1}{2}n^{\frac{3}{2}}$ & $mn^{\omega -1}$ & $m^{1.5}n^{\omega + \frac{1}{2}}$\\ \hline John walk with $ n \leq m$ & $n^\frac{5}{2}\log^4(\frac{2m}{n})$ &$mn^{\omega -1}\log^2(m)$ & $mn^{\omega + \frac{3}{2}}$\\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item The table contains the upper bounds on the cost per sample (after a warm start) for various random walk algorithms applied to polytopes. In the case of MHAR, $\max\{n,m\} < z$ is assumed. For simplicity, we ignore the logarithmic terms in the cost per sample. We also avoid giving bounds in terms of the condition number of the set for MHAR, the Ball walk, and HAR, because this condition number is bounded by $n$ after proper pre-processing. \end{tablenotes} \end{table*} In the regime $n \ll m$, the upper bounds on the cost per sample are ordered as John walk $\ll$ Vaidya walk $\ll$ Dikin walk \cite{walks}. We now show that for $n \ll m$, MHAR has a lower cost per sample than does the John walk. \begin{lemma}\label{lemma mhar john} For $\max\{n,m\} < z$ and the regime $n \ll m$, MHAR has a lower cost per sample than does the John walk after proper pre-processing, a warm start, and ignoring logarithmic and error terms.
\end{lemma} \begin{proof} Given proper pre-processing, $n \ll m$, and $\max\{n,m\} < z$, MHAR's cost per sample is $\mathcal{O}^*(mn^{\omega + 1})$ and that for the John walk is $\mathcal{O}(mn^{\omega + \frac{3}{2}})$. Note that $mn^{\omega + 1} \in \mathcal{O}(mn^{\omega + \frac{3}{2}})$. Therefore, when ignoring the logarithmic and error terms, MHAR has a lower cost per sample. \end{proof} \begin{corollary}\label{cor mhar others} For $\max\{n,m\} < z$ and the regime $n \ll m$, MHAR $\ll$ John walk $\ll$ Vaidya walk $\ll$ Dikin walk after proper pre-processing, warm start, and ignoring logarithmic and error terms. \end{corollary} \begin{proof} This follows from Lemma \ref{lemma mhar john}. \end{proof} We proceed to compare MHAR and RHMC in the regime $n^{1+\frac{1}{3}} \ll m$. \begin{lemma}\label{lemma mhar rhmc} For $\max\{n,m\} < z$ and $n^{1+\frac{1}{3}} \ll m$, MHAR $\ll$ RHMC after proper pre-processing, warm start, and ignoring logarithmic and error terms.\end{lemma} \begin{proof} Given proper pre-processing, $n \ll m$, and $\max\{n,m\} < z$, MHAR's cost per sample is $\mathcal{O}^*(mn^{\omega + 1})$, and RHMC's is $\mathcal{O}(m^2n^{\omega - \frac{1}{3}})$. Note that $mn^{\omega + 1} \in \mathcal{O}(m^2n^{\omega - \frac{1}{3}})$, because $n^{1+\frac{1}{3}} \ll m$. Therefore, when ignoring the logarithmic and error terms, MHAR has a lower cost per sample. \end{proof} From Corollary \ref{cor eff ball} and Lemma \ref{lemma johns walk}, MHAR $\ll$ Ball walk and MHAR $\ll$ HAR, regardless of the regime between $m$ and $n$, and MHAR $\ll$ John's walk in the regime $n\leq m$. From Corollary \ref{cor mhar others}, MHAR $\ll$ John walk $\ll$ Vaidya walk $\ll$ Dikin walk if $n \ll m$. Finally, by Lemma \ref{lemma mhar rhmc}, if $n^{1+\frac{1}{3}} \ll m$, then MHAR $\ll$ RHMC. Hence, if $n^{1+\frac{1}{3}} \ll m$ we have an analytic guarantee that MHAR has a lower cost per sample than all of the other algorithms in Table \ref{table_complex}. Moreover, empirical tests show that MHAR is faster than all of the other algorithms in Table \ref{table_complex} in regimes other than $n^{1+\frac{1}{3}} \ll m$. \section{MHAR Empirical Test}\label{S:6} This section details a series of experiments comparing MHAR against the \textit{hitandrun} library used by \cite{har_tommi}. We compare the running times on simplexes and hypercubes of different dimensions and for various values of the padding hyper-parameter $z$. We also test the robustness of MHAR by conducting empirical analyses similar to those in \cite{har_tommi}. The MHAR experiments were run in a Colab notebook equipped with an Nvidia P100 GPU, an Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} CPU running at 2.00 GHz, and 14 GB of RAM. Due to its apparent incompatibility with the Colab notebook, the \textit{hitandrun} experiments were run on a $<$device$>$ equipped with an Intel\textsuperscript{\textregistered} Core\texttrademark\ i7-7700HQ CPU running at 2.80 GHz and 32 GB of RAM. All experiments used 64-bit precision. We formally define the $n\mbox{-simplex}$ and the $n\mbox{-hypercube}$ as \begin{align} n\mbox{-simplex} &= \{ x \in \mathbb{R}^n \mid \textstyle\sum_i x_i = 1,\ x \geq 0 \},\\ n\mbox{-hypercube} &= \{ x \in \mathbb{R}^n \mid x \in [-1,1]^n\}. \end{align} \subsection{The Code} The MHAR code was developed in Python, and the PyTorch library was chosen because of its flexibility, power, and popularity \cite{paszke2019pytorch}.
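To illustrate how the padding turns the per-walker operations into the matrix-to-matrix products discussed in \S \ref{S:4}, the following sketch shows one MHAR iteration for the full-dimensional case ($m_E=0$). The function \texttt{mhar\_step} and its interface are a simplified illustration for exposition only and do not reproduce the API of the released library.
\begin{verbatim}
import torch

def mhar_step(A, b, X):
    # One padded hit-and-run iteration for {x : A x <= b} (full-dimensional case).
    # A: (m, n) inequality matrix, b: (m, 1) right-hand side,
    # X: (n, z) matrix whose columns are the z current (interior) walk positions.
    n, z = X.shape
    # Draw all z directions at once and normalize each column.
    D = torch.randn(n, z, dtype=X.dtype, device=X.device)
    D = D / D.norm(dim=0, keepdim=True)
    # Matrix-to-matrix products: constraint slacks and direction components.
    AD = A @ D                       # (m, z)
    lam = (b - A @ X) / AD           # candidate step sizes per constraint and walker
    # Positive denominators bound the step from above, negative ones from below.
    big = torch.full_like(lam, float('inf'))
    lam_max = torch.where(AD > 0, lam, big).min(dim=0).values
    lam_min = torch.where(AD < 0, lam, -big).max(dim=0).values
    # Pick a uniform point on each chord and move all walkers simultaneously.
    theta = lam_min + (lam_max - lam_min) * torch.rand(z, dtype=X.dtype, device=X.device)
    return X + D * theta
\end{verbatim}
For the hypercube, for instance, one could take $A$ to stack the rows of $\pm$ the identity, $b$ a vector of ones, and $X$ the all-zeros matrix, and then iterate \texttt{mhar\_step}, keeping batches of columns at the desired thinning rate.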
PyTorch also runs on a CPU without the need for a GPU, although the latter is more suitable for large samples in high dimensions. The MHAR experiments were performed without observing any numerical instabilities, and the maximum error found for the matrix inversion was of order $10^{-16}$, which is accurate enough for most applications. Operations such as matrix inversion, random number generation, matrix-to-matrix multiplication, and point-wise operations were carried out on the GPU. The only operations that needed to be carried out on the CPU were reading the constraints and saving the samples to disk. For the rest of this section, the acronyms MHAR and HAR refer to the actual implementations and not the abstract algorithms. The code is available at \url{https://github.com/uumami/mhar_pytorch}. \subsection{The Padding} The padding hyper-parameter $z$ determines the number of simultaneous walks the algorithm performs. We generated 10 MHAR runs for each dimension (5, 25, 50, 100, 500, 1000) and each padding value ($z$) on simplexes and hypercubes. For each run we calculated the average samples per second as follows: \[ \mbox{Avg. samples per second} = \frac{\mbox{Total samples}}{\mbox{Time}} = \frac{z \times \varphi \times T}{\mbox{Time}}~. \] For example, $z$ might equal 100, the thinning parameter $\varphi$ might equal 30,000, and the number of iterations $T$ might equal 1, which would yield $3,000,000$ samples. If the experiment took 1,000 seconds, the average samples per second would be $3,000$. Figures \ref{padding_times_S} and \ref{padding_times_H} show box-plots for the experiments in dimensions 5 and 1000 for the simplex and the hypercube, respectively. The box-plots for the simplex and the hypercube in dimensions 25, 50, 100, and 500 can be found in Figures \ref{padding_times_SA} and \ref{padding_times_HA} in \ref{BP1}. \begin{figure}[h!] \centering \subfigure[Unit simplex in dimension 5.]{ \label{subfig:pt_S_5} \includegraphics[scale=.33,viewport=20 0 650 500,clip]{figures/pad_times_simplex_5.png} } \subfigure[Unit simplex in dimension 1000.]{ \label{subfig:pt_S_1000} \includegraphics[scale=.33,viewport=20 0 650 500,clip]{figures/pad_times_simplex_1000.png} } \caption{Box-plots for simplexes comparing padding behavior. The y-axis shows the average samples per second (in thousands) for different values of the padding parameter $z$.} \label{padding_times_S} \end{figure} The boxes in the box-plots show the 25th, 50th, and 75th percentiles. The diamonds mark outliers, and the upper and lower limits mark the maximum and minimum values without considering outliers. For small values of $z$, larger padding yielded more average samples per second. However, for some dimensions of the simplex and the hypercube, there was a value of $z$ beyond which efficiency dropped. We conjecture that sufficiently large values of $z$ could cause memory contention in the GPU. \begin{figure}[h!] \centering \subfigure[Hypercube in dimension 5.]{ \label{subfig:pt_H_5} \includegraphics[scale=.33,viewport=20 0 650 500,clip]{figures/pad_times_hypercube_5.png} } \subfigure[Hypercube in dimension 1000.]{ \label{subfig:pt_H_1000} \includegraphics[scale=.33,viewport=20 0 650 500,clip]{figures/pad_times_hypercube_1000.png} } \caption{Box-plots for hypercubes comparing padding behavior.
The y-axis shows the average samples per second (in thousands) for different values of the padding parameter $z$.} \label{padding_times_H} \end{figure} \subsection{Performance Test MHAR vs HAR} To compare MHAR and HAR we generated 10 simulations for different dimensions and two types of polytopes (simplexes and hypercubes). For the simplex we tested dimensions 5, 25, 50, 100, and 250, and for the hypercube we tested dimensions 5, 25, 50, 100, 500, and 1000. The \textit{hitandrun} routines for sampling the simplex exhibited an extreme drop in performance at dimensions higher than $100$ and memory contention at dimensions higher than $300$. For \textit{hitandrun}, the total number of samples equals the number of iterations times the thinning parameter. Because \textit{hitandrun} does not make use of the GPU, the times depend on the CPU. Before running a given combination of convex body and dimension in MHAR, we selected the padding hyper-parameter $z^*$ with the highest average sampled points per second according to our padding experiments, so $z^*$ can differ by dimension. We used $\varphi=30,000$ and $T=1$. Table \ref{mhar_speedup} summarizes the results. \begin{table*}[h!] \caption{Performance of MHAR versus HAR for the optimal value of $z^*$} \label{mhar_speedup} \vspace{.2cm} \centering \resizebox{.9\hsize}{!}{ \begin{tabular}{lrrrrrrr} \hline & & & & \multicolumn{2}{c}{Avg. Samples Per Second} & \multicolumn{2}{c}{Std. Dev.} \\ \cline{5-6} \cline{7-8} Figure & n & $z$ & Performance ratio & MHAR mean & HAR mean & MHAR Std. Dev. & HAR Std. Dev. \\ & & & (MHAR mean / HAR mean) & & & & \\ \hline Hypercube & 5 & 10,000 & 14.18 & 13,206,089.93 & 931,368.92 & 376,068.96 & 57,727.69 \\ Hypercube & 25 & 5,000 & 29.05 & 10,839,474.35 & 373,127.77 & 1,236,619.81 & 77,786.96 \\ Hypercube & 50 & 2,500 & 21.85 & 5,151,516.81 & 235,742.22 & 612,241.73 & 20,636.30 \\ Hypercube & 100 & 4,000 & 116.77 & 4,363,525.70 & 37,367.93 & 10,619.65 & 1,486.54 \\ Hypercube & 500 & 4,000 & 95.21 & 621,554.24 & 6,528.56 & 782.70 & 157.76 \\ Hypercube & 1,000 & 4,000 & 248.32 & 248,513.69 & 1,000.79 & 182.97 & 18.15 \\ Simplex & 5 & 10,000 & 23.14 & 22,878,783.33 & 988,580.92 & 1,258,481.83 & 126,254.73 \\ Simplex & 25 & 10,000 & 1,343.58 & 24,338,761.06 & 18,114.90 & 168,300.75 & 409.27 \\ Simplex & 50 & 10,000 & 12,630.89 & 13,425,900.57 & 1,062.94 & 16,403.51 & 17.33 \\ Simplex & 100 & 3,000 & 128,348.67 & 7,255,837.08 & 56.53 & 135,616.62 & 0.88 \\ Simplex & 250 & 4,000 & 2,551,224.17 & 2,656,449.22 & 1.04 & 4,440.59 & 0.00 \\ \hline \end{tabular} } \end{table*} Table \ref{mhar_speedup} shows substantial performance gains for MHAR. For the simplex, the gains were greater at higher dimensions. The performance ratio (average samples per second for MHAR divided by that for HAR) was $23$ for $n=5$ and $2.5$ million for $n=250$. For the hypercube, the performance gain for MHAR was also greater at higher dimensions, though more modest: the performance ratio was $14$ for $n=5$ and $248$ for $n=1,000$. In order to test the limits of our implementation, we conducted an additional set of experiments for lower and higher dimensions and different padding parameters. We present these results in \ref{App1}. \subsection{Independence Test} To assess the convergence of MHAR to the uniform distribution, we conducted the Friedman-Rafsky two-sample minimum spanning tree (MST) test \cite{mst}, as was done in \cite{har_tommi}. The test compares an obtained sample $S$ (MHAR) with a sample $U$ from the target distribution.
The test builds an MST over the pooled sample $S\cup U$ and counts the number of within- and across-sample edges to assess whether both samples come from the same distribution. The test statistic yields a \textit{z-value} for the null hypothesis: ``Both samples are drawn from the same distribution.'' The authors of \cite{har_tommi} establish the threshold \textit{z-value} $\geq -1.64$ for accepting the null hypothesis. A uniform sample $U$ can quickly be drawn from the hypercube or the simplex \cite{simplex_sample} using known statistical methods. We generated 10 simulations on simplexes and hypercubes in dimensions $5$, $15$, $25$, and $50$, for a total of 80 simulations. We used a single padding parameter ($z$) of $1000$, and a ``burning rate'' ($\varphi$) of $(n-1)^3$ for the simplex and $n^3$ for the hypercube. Each simulation drew a total of $5000$ samples, which were compared to an independently generated sample $U$ each time. Figure \ref{z_box} shows the results of the experiments. The red dashed line represents the threshold \textit{z-value} $= -1.64$. All simulations were above the threshold, with the exception of a single experiment for the simplex in dimension 25. These experiments suggest that MHAR mixes quickly from any starting point, supporting the uniform-sampling hypothesis. \begin{figure}[h] \centering \subfigure[Simplex.]{ \label{subfig:pt_Sim} \includegraphics[scale=.33,viewport=20 0 650 500,clip]{figures/z_scores_boxplot_simplex.png} } \subfigure[Hypercube.]{ \label{subfig:pt_Hc} \includegraphics[scale=.33,viewport=20 0 650 500,clip]{figures/z_scores_boxplot_hypercube.png} } \caption{Friedman-Rafsky two-sample MST tests.} \label{z_box} \end{figure} \section{Conclusions}\label{S:7} MHAR showed substantial performance improvements over HAR while retaining robust uniform sampling. We hope that these technical advances move the scientific community towards simulation approaches that complement already established analytical solutions. Our contribution consists of the MHAR algorithm, an analysis of its asymptotic complexity and convergence behavior, and a robust, easy-to-use implementation ready for deployment, including in the cloud. Our implementation is substantially faster than existing libraries, especially in higher dimensions. Additionally, we showed the versatility that deep learning frameworks such as PyTorch can bring to research. We would like to emphasize the relevance of this work as a cornerstone for exploratory-optimization algorithms. The speedups we present in high dimensions make many new practical applications feasible, expanding the range of solutions that engineering can provide. In particular, our previous work in Decision Analysis, Optimization, Game Theory, and Ambiguity Optimization will benefit significantly from this tool, and we think that many practitioners and researchers will benefit as well. Our implementation could be extended to multiple, possibly distributed, GPUs. This would allow us to sample even larger polytopes using cloud architectures. Given the speed-up results, a bounding approach for more general convex bodies combined with accept-and-reject methods is worth exploring, especially for volume calculations. \section*{Acknowledgments} This work was supported by the National Council of Science and Technology of Mexico (CONACYT) and the National System of Researchers (SNI) under Luis V. Montiel, Grant No. 259968. In addition, we also acknowledge Dr. Fernando Esponda, Dr.
Jose Octavio Gutierrez, and Dr. Rodolfo Conde for their support and insight in the development of this work. \setstretch{.9} \bibliographystyle{elsarticle-num-names}
\section{Introduction} \label{sec1} A general string or M-theory background may carry non-vanishing vacuum expectation values for some of its NS--NS or RR field strengths, commonly known as background fluxes which are widely used in all modern attempts to relate string theory to low-energy phenomenology \cite{Grana:2005jc}. Apart from the standard geometric settings, where one may also include the possibility of non-vanishing torsion, the T- and U-dualities of closed string and M-brane theories reveal the existence of exotic fluxes that cannot be described in the context of standard geometry. These are commonly known as non-geometric fluxes; see \cite{Plauschinn:2018wbo} for a recent review in the context of string theory with a complete list of references, and also~\cite{Szabo:2018hhh} for a review of some of the mathematical features in the setting of the present paper. In the context of M-theory, the full set of fluxes for its seven-dimensional compactifications{\footnote{For clarity, the seven dimensions here refer to the external spacetime.}} was determined in \cite{Blair:2014zba} using $SL(5)$ exceptional field theory \cite{Berman:2010is}, and studied further also for dimensions up to seven in \cite{Bosque:2016fpi,Gunaydin:2016axc,Kupriyanov:2017oob,Lust:2017bgx,Lust:2017bwq}. The latter is the M-theory analogue of double field theory \cite{dft1,dft2,dft3}, in that both theories are proposals for a duality-invariant formulation, be it T-duality in the string theory case or U-duality in the M-theory case. This exceptional field theory is related by construction to a generalized geometry on the tangent bundle extended by 2-forms \cite{HullEGG,Pacheco}. This bundle can be equipped with a bracket \cite{Hagiwara,BZ,Zambon,Bi,Bouwknegt}, the higher analogue of the Courant bracket defined in \cite{courant} whose properties are collected in the structure of a Courant algebroid \cite{liu}. In string theory, the Courant bracket was used to systematically determine the general expressions for the full set of geometric and non-geometric fluxes together with their Bianchi identities \cite{Halmagyi,Blumenhagen:2012pc}; indeed, one can subsequently show that these expressions coincide with the local form of the axioms of a Courant algebroid. On the other hand, the axioms of a Courant algebroid also coincide with the conditions for gauge (or BRST) invariance and on-shell closure of the algebra of gauge transformations for a first-order action functional for Wess-Zumino terms in three dimensions, called the Courant sigma-model \cite{Ikeda:2002wh,Park:2000au,Hofman:2002jz,dee1,dee2}. This may be neatly phrased in the language of the BV field-antifield formalism \cite{Henneaux:1989jq,Gomis:1994he}, where the structure of a general gauge theory is encoded in the master equation: The axioms of a Courant algebroid are equivalent to the classical master equation of the Courant sigma-model. The Courant sigma-model falls in the general class of topological sigma-models constructed geometrically in \cite{Alexandrov:1995kv}, called AKSZ sigma-models. The utility of membrane sigma-models as a fundamental microscopic description of closed strings in non-geometric flux backgrounds was originally suggested by~\cite{Halmagyi}, and further elucidated in~\cite{Mylonas:2012pg,Chatzistavrakidis:2015vka,Bessho:2015tkk,Chatzistavrakidis:2018ztm}. 
The upshot is that the general expressions for the fluxes and Bianchi identities of a generic string compactification are in direct correspondence to the axioms of a Courant algebroid and to the generalized Wess-Zumino terms of Courant sigma-models. One may then wonder whether this ``triple point'' also exists for M-theory compactifications. The main purpose of this paper is to investigate this problem for the $SL(5)$ case of M-theory flux compactifications to seven dimensions. Our approach comprises two steps. In a first step, we consider the higher Courant bracket on the extended bundle $E_2=TM\oplus \mbox{\footnotesize$\bigwedge$}^2\,T^{\ast}M$ and identify its possible twists, which in the physical case where $\dim M=4$ turn out to be \begin{equation} \label{eq:introfluxes} G_{ijkl}\ ,\quad F_{ij}{}^{k} \ , \quad Q_{i}{}^{jkl} \qquad \mbox{and} \qquad {\cal R}^{i,jklm}\ . \end{equation} These are the higher counterparts of the more familiar set of NS--NS fluxes $H_{ijk}$, $F_{ij}{}^{k}$, $Q_{i}{}^{jk}$ and $R^{ijk}$ encountered in string compactifications. The first two are associated to geometric compactifications of M-theory, while the last two are non-geometric fluxes~\cite{Hull:2006tp} sourced by exotic branes~\cite{Bakhmatov:2017les}. In particular, the last entry is the higher analogue of the locally non-geometric 3-vector $R$-flux, and presently it necessarily has the structure of a mixed-symmetry 5-vector of type $(1,4)$, in accord with \cite{Blair:2014zba}. Using the higher Courant bracket on $E_2$ we determine the general expressions for the fluxes \eqref{eq:introfluxes} and their corresponding Bianchi identities in terms of a vielbein, a 3-form and a 3-vector. In a second step, we employ the construction of topological field theories of AKSZ-type in four worldvolume dimensions (an open threebrane), studied in \cite{Ikeda:2010vz,Ikeda:2012pv}. The classical master equation for the corresponding threebrane sigma-model yields a set of conditions, which are then used to define the higher structure of a Lie algebroid up to homotopy \cite{Ikeda:2010vz}---see also \cite{Gru,Gru2,Carow-Watamura:2016lob} for a somewhat more general construction. Such threebrane sigma-models were already used in~\cite{Kokenyesi:2018ynq} to relate AKSZ theories of M2-branes in topological M-theory to exceptional generalized geometry and fluxes. Here we show that by choosing the extended bundle $E_2$ in which the fields of the sigma-model take values, accompanied by a projection to $SL(5)$ tensors, the local coordinate expressions for the axioms of a specific Lie algebroid up to homotopy on $E_2$ reproduce the expressions for the M-theory fluxes and their Bianchi identities for compactifications to seven dimensions. Therefore we conclude that the same set of equations underlies the $SL(5)$ M-theory fluxes and their Bianchi identities, the gauge structure of a topological sigma-model in four worldvolume dimensions, and the axioms of a specific Lie algebroid up to homotopy. This paper is organized as follows. In Section~\ref{sec2} we briefly recall the necessary background material on the higher Courant bracket for extended bundles of the type $E_p=TM\oplus \mbox{\footnotesize$\bigwedge$}^{p}\,T^{\ast}M$, and the construction of a generalized metric for the physically relevant case $p=2$ which accounts for $SL(5)$ exceptional generalized geometry. 
In Section~\ref{sec3} we use the higher Courant bracket to determine the general expressions for the fluxes and their Bianchi identities, and explain how in the case $\dim M=4$ they indeed yield the correct $SL(5)$ tensor structures. In Section~\ref{sec4}, topological sigma-models on an open threebrane are discussed, together with their relation to the structure of a Lie algebroid up to homotopy; we further present some simple examples and their relation to M-theory backgrounds. In Section~\ref{sec5} the relation between the general M-theory fluxes and generalized Wess-Zumino terms of the topological sigma-models is established. Section~\ref{sec6} contains a first step towards the generalization of our results to the extended geometry of exceptional field theory; here the dimensionality of the target space is extended from four to ten and the general expressions for the $SL(5)$ exceptional field theory fluxes up to the section condition are determined, though we have not yet found the appropriate extension of our threebrane sigma-model whose Wess-Zumino terms support the exceptional field theory fluxes. Finally, Section~\ref{sec7} contains our conclusions and an outlook toward some open problems related to our work. \section{Higher Courant Algebroids} \label{sec2} \subsection{Higher Courant Brackets} \label{sec21} We begin with a brief description of higher analogues of Courant algebroids (see for example~\cite{Hagiwara,BZ,Zambon,Bi,Bouwknegt}), and their uses in defining a generalized metric associated to exceptional group structures on their underlying vector bundle \cite{HullEGG} which extends the corresponding construction in generalized complex geometry \cite{Hitchin,Gualtieri}. The starting point is a $d$-dimensional manifold $M$ and a vector bundle $E_p\to M$, which is the extension of its tangent bundle by $p$-forms, \begin{equation} E_p=TM\oplus \mbox{\footnotesize${\bigwedge}$}^p\,T^{\ast}M~, \end{equation} where $p$ is a non-negative integer. For $p=1$ this is simply the vector bundle corresponding to a splitting of the exact sequence \begin{equation} 0\longrightarrow T^*M \xrightarrow{ \ \rho^\top \ } E_1 \xrightarrow{ \ \rho \ } TM \longrightarrow 0 \end{equation} which gives rise to an exact Courant algebroid, where $\rho^\top:T^*M\to E_1$ denotes the transpose of the anchor map $\rho$. For any $p\ge 1$, sections of the vector bundle $E_p$ correspond to a formal sum of vectors and $p$-forms, \begin{equation} \Gamma(E_p) \ni A=X+\eta \qquad \mbox{with} \quad X\in \Gamma(TM) \ , \ \eta\in \Gamma(\mbox{\footnotesize$\bigwedge$}^p\,T^{\ast}M)~. \end{equation} The vector bundle $E_p$ can be endowed with a non-degenerate symmetric fiber pairing \begin{equation} \langle\,\cdot\,,\,\cdot\,\rangle\,: E_p\times E_p\longrightarrow \mbox{\footnotesize$\bigwedge$}^{p-1}\,T^{\ast}M~, \end{equation} which is given in terms of a symmetrization of contractions between vectors and $p$-forms, \begin{equation} \label{higherbilinear} \langle X+\eta, Y+\xi\rangle=\sfrac 12 \, (\iota_X\xi +\iota_Y\eta)~, \end{equation} resulting in a $(p-1)$-form. In the special case $p=1$, this is a map to $C^{\infty}(M)$ and it defines a metric with split signature $(d,d)$, which is invariant under the continuous T-duality group $O(d,d)$. 
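For instance, for $p=2$, writing $X=X^i\,\partial_i$ and $\eta=\sfrac 12\,\eta_{ij}\,\mathrm{d} x^i\wedge\mathrm{d} x^j$ (and similarly for $Y+\xi$), the pairing \eqref{higherbilinear} is the 1-form
\begin{equation}
\langle X+\eta,Y+\xi\rangle=\sfrac 12\,\big(X^i\,\xi_{ij}+Y^i\,\eta_{ij}\big)\,\mathrm{d} x^j~,
\end{equation}
which follows directly from the definition by contracting each vector into the other section's 2-form.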
A binary operation on sections of $E_p$ can also be defined, either in terms of a higher Dorfman bracket, which we denote here as a circle product \begin{equation} \label{dorfman} (X+\eta)\circ (Y+\xi)=[X,Y]+{\cal L}_{X}\xi-\iota_{Y} \mathrm{d}\eta~, \end{equation} or in terms of its antisymmetrization, a higher Courant bracket \begin{equation} \label{courant} [X+\eta,Y+\xi]=[X,Y]+{\cal L}_{X}\xi-{\cal L}_Y\eta -\sfrac 12 \, \mathrm{d} (\iota_X\xi-\iota_Y\eta)~. \end{equation} Here ${\cal L}_X$ denotes the Lie derivative along the vector field $X$. For $p=1$ these are precisely the standard Dorfman and Courant brackets, and they are formally given in terms of the same expressions for higher $p$. We further define a smooth bundle map $\rho: E_p\to TM$, which could be for instance the projection to the tangent bundle, such that the quadruple $(E_p,\langle\,\cdot\,,\,\cdot\,\rangle,[\,\cdot\,,\,\cdot\,],\rho)$ satisfies the following properties: \begin{itemize} \item \ Modified Jacobi identity: \begin{equation} [[A,B],C]+\text{cyclic}(A,B,C)= \mathrm{d}\,{\cal N}(A,B,C)~;\end{equation} \item \ Homomorphism property:\ \begin{equation} \rho [A,B]=[\rho(A),\rho(B)]~;\end{equation} \item \ Modified Leibniz rule: \begin{equation} [A,f\,B]=f\,[A,B]+\big(\rho(A)f\big)\,B- \mathrm{d} f \wedge \langle A,B\rangle~;\end{equation} \item \ Compatibility condition: \begin{equation} {\cal L}_{\rho(C)}\langle A,B\rangle = \langle [C,A]+\mathrm{d}\langle C,A\rangle,B\rangle + \langle A,[C,B]+\mathrm{d}\langle C,B\rangle\rangle ~,\end{equation} \end{itemize} where the Nijenhuis operator is defined by \begin{equation} {\cal N}(A,B,C)=\sfrac 13\, \langle [A,B],C\rangle +\text{cyclic}(A,B,C)~, \end{equation} for any $A,B,C\in\G(E_p)$ and $f\in C^{\infty}(M)$. The last condition, expressing the compatibility between the pairing and the bracket, is somewhat modified as compared to the $p=1$ case, in the sense that a Lie derivative appears on the left-hand side. Evidently, this condition reduces to the standard one for a Courant algebroid in the $p=1$ case. When the anchor map $\rho$ is the projection to $TM$, the higher Courant bracket may be twisted by a closed $(p+2)$-form $H$ as \begin{equation} [X+\eta,Y+\xi]_H=[X+\eta,Y+\xi]+\iota_X\iota_YH~. \end{equation} The twist yields an additional $p$-form term in the bracket. In string theory, where the relevant structure is $p=1$, the twist is a closed 3-form; it is customarily identified with the NS--NS flux whose de~Rham cohomology class is the \v{S}evera class which classifies the exact Courant algebroids over $M$. In that case, one can also consider more general twists, usually denoted as $(H,f,Q,R)$ corresponding to a 3-form $H$, a vector-valued 2-form $f$, a bivector-valued 1-form $Q$ and a trivector $R$. The relevant bracket is then the Courant-Roytenberg bracket and the underlying structure corresponds to a general Courant algebroid where the anchor is not simply the projection to the tangent bundle, or equivalently to a protobialgebroid \cite{dee3,K-S,Chatzistavrakidis:2015vka}. However, for $p>1$, the twisting is more restricted. For instance, the analogue of a twisted Poisson structure (which would be a twisted Nambu-Poisson structure) does not exist \cite{Bouwknegt}. 
Nevertheless, one can directly infer from the entries of the bracket what are the five possible additional types of twists that could in principle be considered apart from a $(p+2)$-form: a vector-valued 2-form, a $(p+1)$-vector-valued 1-form, a $p$-vector-valued $(p+1)$-form, a $2p$-vector-valued $p$-form, and a $(2p+1)$-vector. The precise structure of such twists will be explained in the ensuing sections. For our purpose of investigating the flux content of M-theory for four-dimensional compactification manifolds, giving rise to dimensional reductions to seven dimensions, the relevant higher structure is simply the one with $p=2$ which corresponds to extending string charges for $p=1$ to M2-brane charges~\cite{HullEGG}. We shall also find it necessary to formulate these higher Courant algebroids more generally later on by replacing the tangent bundle $TM$ and the cotangent bundle $T^*M$ by a Lie algebroid $E$ over $M$ and its dual $E^*$. In order to investigate cases with $d>4$ the above setting is not sufficient and additional ingredients, in the form of extended bundles, are necessary to account for the charges of higher-dimensional objects such as M5-branes and MKK6-monopoles~\cite{HullEGG}. Therefore, we mostly focus on the special case of $p=2$ and $d=4$, although some of our results will be valid for any compactification dimension $d$ as we indicate. \subsection{$SL(5)$ Exceptional Generalized Geometry} Let us recall that in the more familiar $p=1$ case, a generalized metric on $E_1$ that combines a Riemannian metric $g$ and a 2-form $B$ on $TM$ may be defined \cite{Gualtieri}. One way to do this is to start from the $O(d,d)$-structure that preserves the fiber pairing of the Courant algebroid. The $O(d,d)$ T-duality transformations split into three types. The first corresponds to smooth bundle automorphisms $f:TM\to TM$ and their inverse transpose $f^{-\top}:T^{\ast}M\to T^{\ast}M$, acting on a section of $E_1=TM\oplus T^*M$ as $X+\eta\mapsto f(X)+f^{-\top}(\eta)$, or in $GL(d)\subset O(d,d)$ matrix form \begin{equation} F\begin{pmatrix} X \\ \eta \end{pmatrix}=\begin{pmatrix} f & 0 \\ 0 & f^{-\top} \end{pmatrix}\begin{pmatrix} X \\ \eta \end{pmatrix} \ . \end{equation} The second set of transformations are the $B$-transforms acting as $X+\eta \mapsto X+\eta+\iota_{X}B$, or \begin{equation} \label{eq:Btransform} e^{B}\begin{pmatrix} X \\ \eta \end{pmatrix}=\begin{pmatrix} 1 & 0 \\ B & 1 \end{pmatrix}\begin{pmatrix} X \\ \eta \end{pmatrix}~. \end{equation} The third type are the $\beta$-transforms which act via a bivector $\beta$ on $T^*M$ as $X+\eta\mapsto X+\eta+\iota_{\eta}\beta$, or \begin{equation} e^{\b}\begin{pmatrix} X \\ \eta \end{pmatrix}=\begin{pmatrix} 1 & \b \\ 0 & 1 \end{pmatrix}\begin{pmatrix} X \\ \eta \end{pmatrix}~. \end{equation} The geometric subgroup of these $O(d,d)$-transformations preserving the Courant bracket is $GL(d)\ltimes \Omega^2_{\text{cl}}$, the semi-direct product of diffeomorphisms of $M$ with closed 2-form transformations which act as bundle automorphisms \eqref{eq:Btransform} preserving the pairing \eqref{higherbilinear}. Then an $O(d,d)$-covariant generalized metric ${\cal H}_1$ may be parametrized in terms of $g$ and $B$ as \begin{equation} {\cal H}_1=\begin{pmatrix} g-B\,g^{-1}\,B & -B\,g^{-1} \\ g^{-1}\,B & g^{-1} \end{pmatrix}~, \end{equation} which is a $B$-transform of the induced Riemannian metric $g\oplus g^{-1}$ on $TM\oplus T^*M$. 
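Explicitly, in terms of the matrix $e^B$ of \eqref{eq:Btransform},
\begin{equation}
{\cal H}_1=\big(e^{B}\big)^{\top}\begin{pmatrix} g & 0 \\ 0 & g^{-1}\end{pmatrix} e^{B}~,
\end{equation}
as may be verified by direct multiplication using the antisymmetry of $B$.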
This is not the most general parametrization of ${\cal H}_1$, since $O(d,d)$-transformations generate fractional linear transformations of $g+B$, while field redefinitions allow for instance an expression of the generalized metric in terms of a metric $\tilde g$ and a bivector $\beta$ on $T^*M$~\cite{Andriot:2011uh}. The generalized metric for $p=1$ yields a reduction of the structure group $O(d,d)$ of $E_1$ to its maximal compact subgroup $O(d)\times O(d)$, and thus the moduli space of such reductions is the $d^2$-dimensional coset $O(d,d)/O(d)\times O(d)$. A similar prescription was followed in \cite{HullEGG} to define a generalized metric for $p=2$ and $d=4$. In that case, the group of transformations acting on $E_2=TM\oplus\mbox{\footnotesize$\bigwedge$}^2\,T^*M$ is the U-duality group $SL(5)$, and the geometric subgroup preserving the higher Courant bracket is the semi-direct product of diffeomorphisms of $M$ with closed 3-form transformations $\Omega_{\rm cl}^3$. The action can be decomposed into four types, in a way similar to before. The 15-dimensional $SL(4)$ subgroup acts separately on vectors, in the representation $\mathbf{4}$, and on 2-forms, in the representation $\bf 6$, which combine into the antisymmetric representation ${\bf 10}$ of $SL(5)$. A 3-form $C \in \Gamma(\mbox{\footnotesize$\bigwedge$}^3\,T^{\ast}M)$ acts as $X+\eta\mapsto X+\eta+\iota_XC$ ($C$-transform) and a 3-vector ${\mit\Omega}\in \Gamma(\mbox{\footnotesize$\bigwedge$}^3\,TM)$ acts as $X+\eta\mapsto X+\eta+\iota_{\eta}{\mit\Omega}$ (${\mit\Omega}$-transform). Both the 3-form and the 3-vector have four independent components in four dimensions. The matrix versions of these transformations are very similar to those of the $p=1$ case above, thus we do not write them explicitly. Finally, a scaling transformation $X+\eta\mapsto \alpha^3\,X+\alpha^2\,\eta$ with $\alpha\in \mathbb{R}^{\times}$ guarantees closure of the group action. As before, a generalized metric ${\cal H}_2$ can then be parametrized in terms of a metric $g$ and a 3-form $C$ on $TM$ in the form \cite{HullEGG} \begin{equation} \label{exgm} {\cal H}_2=\begin{pmatrix} g+\sfrac 12\, C\,g^{-1}{\wedge} g^{-1}\,C & -\sfrac 12\, C\,g^{-1}{\wedge} g^{-1} \\[4pt] -\sfrac 12\, g^{-1}{\wedge} g^{-1}\,C & \sfrac 12\, g^{-1}{\wedge} g^{-1} \end{pmatrix}~. \end{equation} In the present case the structure group of $E_2$ is $SL(5)$, its maximal compact subgroup is $SO(5)$, and the moduli space of reductions by ${\cal H}_2$ is the corresponding 14-dimensional coset $SL(5)/SO(5)$. \section{$SL(5)$ M-Theory Fluxes} \label{sec3} \subsection{Fluxes from the Higher Courant Bracket} \label{sec31} To determine the general form of the fluxes in M-theory for $7+4$ dimensions, where the four internal dimensions are compactified, we follow a strategy similar to the one suggested in \cite{Halmagyi} for non-geometric fluxes in generalized geometry. Recall that in generalized geometry the expressions for all the types of fluxes may be found upon acting with the twist operator $e^{B}\,e^{\beta}$ (which is an element of $O(d,d)$) on the local holonomic basis spanned by $\partial_i=\frac\partial{\partial x^i}$ and $\mathrm{d} x^i$ \cite{Blumenhagen:2012pc,Chatzistavrakidis:2013wra}.
This gives \begin{subequations} \begin{align} \label{dxe1} \partial_i \, &\xrightarrow{e^B\,e^{\beta}} \, e_i:=\partial_i+B_{ij}\,\mathrm{d} x^j~,\\[4pt] \mathrm{d} x^i \, &\xrightarrow{e^B\,e^{\beta}} \, e^i:=\mathrm{d} x^i+\beta^{ij}\,\partial_j+\beta^{ij}\,B_{jk}\,\mathrm{d} x^k=\mathrm{d} x^i+\beta^{ij}\,e_j ~.\label{dxe2} \end{align} \end{subequations} Then, computing the \emph{untwisted} Courant brackets of the new basis, one obtains \bse \begin{align} \label{courbasis1} [e_i,e_j]&=H_{ijk}\,e^k+F_{ij}{}^k\,e_k~,\\[4pt] \label{courbasis2} {[}e_i,e^j]&=F_{ik}{}^j\,e^k+Q_i{}^{jk}\,e_k~,\\[4pt] \label{courbasis3} {[}e^i,e^j]&=Q_{k}{}^{ij}\,e^k+R^{ijk}\,e_k~, \end{align} \end{subequations} and the fluxes are identified with the generalized structure constants appearing on the right-hand side. Their explicit expressions are \bse \begin{align} \label{R} H_{ijk}&=3\,\partial_{[i}B_{jk]}, \\[4pt] F_{ij}{}^k&=\beta^{kl}\,H_{lij}~,\\[4pt] Q_{k}{}^{ij}&=\partial_{k}\beta^{ij} +\beta^{il}\,\beta^{jm}\,H_{lmk}~,\\[4pt] R^{ijk}&= 3\,\beta^{[i\underline{l}}\,\partial_l\beta^{jk]} +\beta^{il}\,\beta^{jm}\,\beta^{kn}\,H_{lmn}~, \end{align} \end{subequations} where underlined indices do not participate in the antisymmetrization. The corresponding Bianchi identities may be obtained by using the Jacobi identity for this bracket \cite{Blumenhagen:2012pc}. For later reference, let us recall that in a general non-holonomic basis the above expressions take the form \bse \begin{align} H_{abc}&=3\,\nabla_{[a}B_{bc]}, \label{flux3da}\\[4pt] F_{ab}{}^c&=f_{ab}{}^{c}+\beta^{cd}\,H_{abd}~,\label{flux3db}\\[4pt] Q_{a}{}^{bc}&=\partial_{a}\beta^{bc}+\beta^{bd}\, f_{ad}{}^c -\beta^{cd}\, f_{ad}{}^b +\beta^{bd}\,\beta^{ce}\,H_{ade}~,\label{flux3dc}\\[4pt] R^{abc}&= 3\,\beta^{[a\underline{d}}\,\nabla_d\beta^{bc]} +\beta^{ad}\,\beta^{be}\,\beta^{cf}\,H_{def}~, \label{flux3dd}\end{align}\end{subequations} where $\nabla_a$ is the covariant derivative with respect to a vielbein $e_a=e_{a}{}^{i}\,\partial_i$, and its dual $e^a=e^a{}_i\,\mathrm{d} x^i$, acting as \begin{equation} \nabla_{a}B_{bc}=\partial_{a}B_{bc}-{\mit\G}_{ab}{}^{d}\,B_{dc}-{\mit\Gamma}_{ac}{}^{d}\,B_{bd}~, \end{equation} and \begin{equation} f_{ab}{}^c=2\,e^c{}_{j}\,e_{[a}{}^i\,\partial_ie_{b]}{}^j=:2\,{\mit\Gamma}_{[ab]}{}^{c} \end{equation} is the purely geometric torsion flux, appearing for example in string compactifications on twisted tori \cite{Hull:2005hk}. Following the same prescription in the present case,{\footnote{Everything that follows holds for an arbitrary dimensionality $d$ of $M$, as indicated; however, the physically relevant case is only the one with $d=4$, specified in parentheses. For higher $d$, the physical case requires a different extension of the tangent bundle, as already mentioned.}} we consider a local basis $\{e_I\}, I=1,\dots,\frac{d\,(d+1)}{2}$ $(I=1,\dots,10)$ of sections of the vector bundle $E_2$, which may be split in a $d+\frac{d\,(d-1)}{2} $ ($4+6$) fashion as \begin{equation} \{e_I\}=\{e_i,e^{ij}\}~, \quad i,j=1,\dots,d \ (i,j=1,2,3,4)~, \end{equation} with $e^{ij}=-e^{ji}$. Then we compute the untwisted higher Courant bracket \begin{equation} [e_I,e_J]=T_{IJ}{}^K\,e_K\ , \end{equation} and identify the corresponding M-theory fluxes with the local structure constants $T_{IJ}{}^{K}$. 
Explicitly, the brackets are generally given as \bse\begin{align} \label{agg1} [e_i,e_j]&=G_{ijkl}\,e^{kl}+F_{ij}{}^m\,e_m~,\\[4pt] \label{agg2} {[}e_i,e^{jk}]&=\widetilde F_{ilm}{}^{jk}\,e^{lm}+Q_i{}^{jkm}\,e_m~,\\[4pt] \label{agg3} {[}e^{ij},e^{kl}]&=\widetilde Q_{mn}{}^{ij,kl}\,e^{mn}+R^{ij,kl,n}\,e_n ~. \end{align}\end{subequations} Following the terminology of the string theory case, we refer to $G$, $F$ and $\widetilde{F}$ as geometric fluxes and to $Q$, $\widetilde Q$ and $R$ as non-geometric fluxes. Compared to the string theory case there is a proliferation in that $F$ and $Q$ do not repeat in two different brackets, but rather two apparently new fluxes appear, $\widetilde F$ and $\widetilde Q$. We shall determine their relation to the fluxes $F$ and $Q$ below, where we will see that they are not independent. There is further a proliferation in the index structure; for instance, the previously trivector flux $R$ now becomes a (mixed-symmetry) tensor with five indices (a 5-vector, but not fully antisymmetric---a completely antisymmetric tensor would anyway vanish in four dimensions) and index structure $(2,2,1)$. Recalling that previous studies of M-theory fluxes have revealed a mixed-symmetry 5-vector of type $(1,4)$ \cite{Blair:2014zba,Bosque:2016fpi,Gunaydin:2016axc,Kupriyanov:2017oob,Lust:2017bgx,Lust:2017bwq}, it is natural to ask what is the relation between the two. We shall return to this point after computing the explicit expressions for these fluxes. As in the string theory case, we begin with the coordinate basis of sections of $E_2=TM\oplus \mbox{\footnotesize$\bigwedge$}^2\,T^\ast M$, spanned by $\partial_i$ and $\sfrac 12\, \mathrm{d} x^i\wedge\mathrm{d} x^j$, and twist them by $SL(5)$ transformations corresponding to the action of a 3-form $C=\sfrac 16\, C_{ijk}\,\mathrm{d} x^i\wedge\mathrm{d} x^j\wedge\mathrm{d} x^k$ and a 3-vector ${\mit\Omega}= \sfrac 16\, {\mit\Omega}^{ijk}\,\partial_i\wedge\partial_j\wedge\partial_k$ to get \bse\begin{align} \label{xe1} \partial_i \, &\xrightarrow{e^C\,e^{{\mit\Omega}}} \, e_i:=\partial_i+\sfrac 12 \, C_{ijk}\,\mathrm{d} x^j\wedge\mathrm{d} x^k~,\\[4pt] \label{xe2} \sfrac 12 \, \mathrm{d} x^i\wedge\mathrm{d} x^j \, &\xrightarrow{e^C\,e^{{\mit\Omega}}} \, e^{ij}:=\sfrac 12 \, \mathrm{d} x^i\wedge\mathrm{d} x^j +\sfrac 12\, {\mit\Omega}^{ijk}\, e_k ~. 
\end{align}\end{subequations} We now compute the \emph{untwisted} higher Courant brackets of the new basis $\{e_i, e ^{ij}\}$ and comparing with \eqref{agg1}--\eqref{agg3} we formally obtain \bse \begin{align} \label{exflux1} G_{ijkl}&=4\,\partial_{[i}C_{jkl]}~,\\[4pt] \label{exflux2} F_{ij}{}^m&=-\sfrac 12\,G_{ijkl}\,{\mit\Omega}^{klm}~,\\[4pt] \label{exflux3} \widetilde F_{ilm}{}^{jk}&=\sfrac 12\,G_{ilmn}\,{\mit\Omega}^{njk}~,\\[4pt] \label{exflux4} Q_i{}^{jkm}&=\sfrac 12\,\big(\partial_i{\mit\Omega}^{jkm}-\sfrac 12\,{\mit\Omega}^{jkn}\,G_{inps}\,{\mit\Omega}^{psm}\big)~,\\[4pt] \label{exflux5} \widetilde Q_{mn}{}^{ij,kl}&=-\sfrac 14\,\big(\delta^l_{[m}\,\partial_{n]}{\mit\Omega}^{ijk}- \delta^k_{[m}\,\partial_{n]}{\mit\Omega}^{ijl}- \delta^j_{[m}\,\partial_{n]}{\mit\Omega}^{ikl}+ \delta^i_{[m}\, \partial_{n]}{\mit\Omega}^{jkl} \nonumber\\ &\hspace{2cm} +{\mit\Omega}^{ijp}\,G_{pp'nm}\,{\mit\Omega}^{p'kl}\big)~,\\[4pt] \label{exflux6} R^{ij,kl,n}&=\sfrac12\,\hat\partial^{i[j}{\mit\Omega}^{kln]}-\sfrac12\,\hat\partial^{j[i}{\mit\Omega}^{kln]}-\sfrac12\,\hat\partial^{k[l}{\mit\Omega}^{ijn]}+\sfrac12\,\hat\partial^{l[k}{\mit\Omega}^{ijn]}\nonumber \\ &\hspace{2cm} -\sfrac 18\,{\mit\Omega}^{ijm}\,{\mit\Omega}^{klp}\,{\mit\Omega}^{rsn}\,G_{mprs}~, \end{align}\end{subequations} where we defined $ \hat\partial^{ij}:={\mit\Omega}^{ijk}\,\partial_k$. These expressions were derived without using the restriction $d=4$. One may directly observe that the fluxes $\widetilde F$ and $\widetilde Q$ are not independent from $F$ and $Q$, as there are the trace relations \begin{align} \label{ftildef} \widetilde{F}_{ijl}{}^{lk}&=F_{ij}{}^{k}~, \\[4pt] \label{qtildeq} \widetilde{Q}_{im}{}^{jk,lm}&=\sfrac d4\,Q_i{}^{jkl}+\sfrac{d-4}{16}\,{\mit{\Omega}}^{jkp}\,G_{ipqr}\,{\mit{\Omega}}^{qrl}-\sfrac 14 \, \d_{i}^{[j}\,\partial_{n}{\mit{\Omega}^{k]ln}}+\sfrac 18\,\d_i^l\,\partial_{n}{\mit{\Omega}}^{jkn}~. \end{align} For general $d$, the flux $Q_i{}^{jkl}$ is not completely antisymmetric in its three vector indices. Let us now specialize to the four-dimensional case. First, it is then inevitable that only the contracted parts of $\widetilde F$ and $\widetilde Q$ play a role, since they both have more than four indices in total. Specifically, the expression \eqref{qtildeq} for $\widetilde{Q}$ results in{\footnote{Note that the trace of the second term in $\widetilde{Q}$, namely ${\mit{\Omega}}^{jkp}\,G_{pqrs}\,{\mit{\Omega}}^{qrs}$, vanishes in four dimensions.}} \begin{equation} \widetilde{Q}_{im}{}^{jk,lm}=Q_i{}^{jkl}-\sfrac 12 \, \d_{i}^{[j}\,Q_{n}{}^{k]ln}+\sfrac 14\,\d_i^l\,Q_{n}{}^{jkn}~. \end{equation} Second, in four dimensions, the flux $Q_{i}{}^{jkl}$ is necessarily completely antisymmetric in its three vector indices, in agreement with expectations. Another important observation is that the last term in \eqref{exflux6} is identically zero in four dimensions for any antisymmetric 3-vector ${\mit\Omega}$. Thus it is useful to define the $(1,4)$ mixed-symmetry combination \begin{equation} {\cal R}^{i,jklm}=\sfrac12\,\hat{\partial}^{i[j}{\mit\Omega}^{klm]}~, \end{equation} which allows us to write the $R$-flux obtained by the higher Courant bracket in terms of the ${\cal R}$-flux as \begin{equation} R^{ij,kl,n}={\cal R}^{i,jkln}-{\cal R}^{j,ikln}-{\cal R}^{k,lijn}+{\cal R}^{l,kijn}~. 
\end{equation} Thus unlike the string theory case where all indices of the trivector $R$-flux participate in the antisymmetrization, here the right-hand side contains terms with derivatives of the trivector ${\mit\Omega}$ where one index is outside the antisymmetrization. Studying all possibilities for index assignments, it turns out that in all cases we necessarily end up with only one term{\footnote{The argument works as follows. Fix $i=i_0\in\{1,2,3,4\}$. If $i_0=j$ then obviously the $R$-flux vanishes; if $i_0=k$, then $R^{ij,kl,n}=\,\hat{\partial}^{i_0[j}{\mit\Omega}^{i_0ln]}$, and similarly if $i_0=l$; if $i_0=n$ then $R^{ij,kl,n}=\sfrac 12\,\hat{\partial}^{i_0[j}{\mit\Omega}^{kli_0]}$. If $i_0\ne j,k,l,n$, then fix $j=j_0$ and repeat the argument.}} and the $R$-flux is a $(1,4)$ mixed-symmetry tensor, in agreement with expectations. We discuss this point further in the extended case of Section~\ref{sec6}, where we shall compare with previous results in the literature. For the time being, we can already conclude that by studying the higher Courant bracket, the types of geometric and non-geometric fluxes which arise in the four-dimensional case are \begin{equation} G_{ijkl}~, \quad F_{ij}{}^k~, \quad Q_{i}{}^{jkl} \qquad \mbox{and} \qquad {\cal R}^{i,jklm}~. \end{equation} Finally, the expressions above are written in the holonomic frame. The corresponding expressions in a non-holonomic frame may be obtained similarly, by considering a vielbein $e_a$ and $e^{ab}=\frac12\,e^a\wedge e^b$. We present them only for the relevant fluxes in the four-dimensional case, which are \bse \begin{align} G_{abcd}&=4\,\nabla_{[a}C_{bcd]}~, \label{flux4da}\\[4pt] F_{ab}{}^{c}&=f_{ab}{}^{c}-\sfrac 12\, G_{abde}\,{\mit\Omega}^{dec}~, \label{flux4db}\\[4pt] Q_{a}{}^{bcd}&=\sfrac 12\, \big(\partial_{a}{\mit\Omega}^{bcd}+3\,{\mit\Omega}^{e[bc}\,f_{ae}{}^{d]} -\sfrac 12\, {\mit\Omega}^{def}\,\d_{a}^{[b}\,f_{ef}{}^{c]}-\sfrac 12\, {\mit\Omega}^{e[bc}\,G_{aefg}\,{\mit\Omega}^{d]fg}\big)~, \label{flux4dc}\\[4pt] R^{ab,cd,e}&=\sfrac12\,\widehat\nabla{}^{a[b}{\mit\Omega}^{cde]}-\sfrac12\,\widehat\nabla{}^{b[a}{\mit\Omega}^{cde]}-\sfrac12\,\widehat\nabla{}^{c[d}{\mit\Omega}^{abe]}+\sfrac12\,\widehat\nabla{}^{d[c}{\mit\Omega}^{abe]}~, \label{flux4dd}\end{align}\end{subequations} where $\widehat\nabla{}^{ab}={\mit\Omega}^{abc}\,\nabla_c$, and we used the definitions and facts explained before. Once more, one should keep in mind that the $R$-flux effectively contains only one term.\footnote{Upon dimensional reduction over the M-theory circle, the four-dimensional fluxes \eqref{flux4da}--\eqref{flux4dd} reduce to the NS--NS fluxes \eqref{flux3da}--\eqref{flux3dd} in three dimensions. 
In higher dimensions this is not generally true, as the $G$-flux can reduce to either the NS--NS $H$-flux or the 4-form RR flux depending on whether or not the 4-form $G$ has a leg along the M-theory circle.} \subsection{Bianchi Identities} \label{sec32} Bianchi identities for the above fluxes can be obtained upon calculating the Jacobiator for the higher Courant bracket using the basic property \begin{equation}\label{aaa} [[A,B],C]+{\rm cyclic}(A,B,C)=\sfrac 13\,{\mathrm{d}}\big(\langle[A,B],C\rangle+{\rm cyclic}(A,B,C)\big)~.\end{equation} To calculate the Jacobiator it is useful to write down the pairing $\langle A,B\rangle$ for the basis elements \eqref{xe1} and \eqref{xe2}, which reads as \bse\begin{align} \langle e_i,e_j\rangle&=0~,\\[4pt] \label{bilinear2}\langle e_i,e^{jm}\rangle&=\sfrac 12\,\delta^{[j}_i\,\mathrm{d} x^{m]}~,\\[4pt] \langle e^{ij},e^{mn}\rangle &=\sfrac 12\,{\mit\Omega}^{[ijm}\,\delta^{n]}_l\,\mathrm{d} x^{l}~. \end{align}\end{subequations} Using these expressions, the modified Jacobi identity containing only the basis elements $e_i$, \begin{equation} [[e_i,e_j],e_m]+{\rm cyclic}(i,j,m)-\sfrac 13\,{\mathrm{d}}\big(\langle[e_i,e_j],e_m\rangle+{\rm cyclic}(i,j,m)\big)=0~,\end{equation} gives directly the Bianchi identities \bse \begin{align} \label{exbi1} \partial_{[m}G_{ijkl]}&=-\sfrac 35\, G_{np[ij}\,\widetilde F_{m]kl}{}^{np}-\sfrac 35\, F_{[ij}{}^n\,G_{m]nkl}~,\\[4pt] \label{exbi2} \partial_{[m}F_{ij]}{}^l-\sfrac 1{3}\,\hat\partial^{lk}G_{ijmk}&=-G_{nk[ij}\,Q_{m]}{}^{nkl}-F_{[ij}{}^k\,F_{m]k}{}^{l}~. \end{align}\end{subequations} As in Section~\ref{sec31}, these formulas hold in any dimension $d$. However, for the physically interesting case $d=4$, the first identity becomes algebraic, since the left-hand side is identically zero (five antisymmetrized indices in four dimensions), and taking into account the trace relation \eqref{ftildef} between $\widetilde F$ and $F$, the result is \begin{equation} \label{eq:GF0} G_{n[lij}\,F_{mk]}{}^{n}=0~. 
\end{equation} The rest of the Jacobi identities, involving different combinations of the basis elements $e_i$ and $e^{ij}$, give six additional Bianchi identities for all fluxes which read as \bse \begin{align} \label{exbi3}&3\,\partial_{[i}\widetilde F_{jp]r}{}^{mn} -\delta^{[n}_{[r}\,\partial_{p]} F_{ij}{}^{m]}+\sfrac 12\,\hat\partial^{mn}G_{ijpr}+{\mit\Omega}^{ks[m}\,\delta^{n]}_{[p}\,\partial_{r]}G_{ijks}\nonumber\\ &\hspace{1cm} = G_{ijkl}\,\widetilde Q_{pr}{}^{kl,mn}+ F_{ij}{}^k\,\widetilde F_{kpr}{}^{mn}+2\,\widetilde F_{kl[i}{}^{mn}\,\widetilde F_{j]pr}{}^{kl}+2\,Q_{[i}{}^{mnk}\,G_{j]kpr}~,\\[4pt] &2\,\partial_{[i}Q_{j]}{}^{mnp}+\sfrac 12\,\hat\partial^{mn}F_{ij}{}^p-\sfrac 12\,\hat\partial^{p[n} F_{ij}{}^{m]}-\sfrac 12\,\hat\partial^{lp}\widetilde F_{ijl}{}^{mn}+\sfrac 12\,{\mit\Omega}^{kl[m}\,\hat\partial^{n]p}G_{ijkl}\nonumber\\ &\hspace{1cm} = G_{ijkl}\,R^{kl,mn,p}+3\, F_{[ij}{}^k\,Q_{k]}{}^{mnp}+2\,\widetilde F_{kl[i}{}^{mn}\,Q_{j]}{}^{klp}~,\\[4pt] &3\,\partial_{[i}\widetilde Q_{st]}{}^{jk,mn}-\hat\partial^{jk}\widetilde F_{ist}{}^{mn}-(\{jk\}\to\{mn\})\nonumber\\ &\hspace{1cm}=2\,\widetilde F_{ipr}{}^{jk}\,\widetilde Q_{st}{}^{pr,mn}-\widetilde F_{ist}{}^{pr}\,\widetilde Q_{pr}{}^{jk,mn}+2\,Q_i{}^{jkp}\,\widetilde F_{pst}{}^{mn}+ R^{jk,mn,p}\,G_{pist} \nonumber\\ &\hspace{1cm}\quad-(\{jk\}\to\{mn\})~,\\[4pt] &\partial_iR^{jk,mn,s}-\hat\partial^{jk}Q_{i}{}^{mns}-\hat \partial^{ps}\widetilde Q_{ip}{}^{jk,mn} -(\{jk\}\to\{mn\})\nonumber\\ &\hspace{1cm}=2\,\widetilde F_{ipr}{}^{jk}\,R^{pr,mn,s}+ F_{pi}{}^{s}\,R^{jk,mn,p}+2\,Q_i{}^{jkp}\,Q_{p}{}^{mns}- \widetilde Q_{pr}{}^{jk,mn}\,Q_{i}{}^{prs}\nonumber\\ &\hspace{1cm}\quad-(\{jk\}\to\{mn\})~,\\[4pt] &\partial_{[t}R^{ij,kl,[m}\,\delta^{n]}_{s]}+\sfrac 3{4}\,\hat\partial^{mn}\widetilde Q_{st}{}^{ij,kl}+{\mit\Omega}^{[prm}\,\delta^{n]}_{[s}\,\partial_{t]}\widetilde Q_{pr}{}^{ij,kl}+{\rm cyclic} (ij,kl,mn) \nonumber\\ &\hspace{1cm}=R^{ij,kl,p}\,\widetilde F_{pst}{}^{mn}+\widetilde Q_{pr}{}^{ij,kl}\,\widetilde Q_{st}{}^{pr,mn}+\widetilde Q_{pr}{}^{ij,kl}\,\delta_{[s}^{[n}\,\big(Q_{t]}{}^{prm]}+\sfrac 14\,{\mit\Omega}^{pr\underline{a}}\,G_{t]abc}\,{\mit\Omega}^{m]bc}\big) \nonumber \\ &\hspace{1cm}\quad+{\rm cyclic} (ij,kl,mn) ~,\\[4pt] \label{exbi8}&\hat\partial^{mn}R^{ij,kl,q}+\sfrac 23\,\hat\partial^{pq}R^{ij,kl,[m}\,\delta^{n]}_{p}+\sfrac 23\,{\mit\Omega}^{[prm}\,\hat\partial^{n]q}\widetilde Q_{pr}{}^{ij,kl}+{\rm cyclic} (ij,kl,mn) \nonumber\\ &\hspace{1cm}=2\,R^{ij,kl,p}\,Q_{p}{}^{mnq}+2\,\widetilde Q_{pr}{}^{ij,kl}\,R^{pr,mn,q}+\sfrac 23\,\widetilde Q_{{ pr}}{}^{ij,kl}\,{\mit\Omega}^{dq[n}\,\big(Q_d{}^{prm]}+\sfrac 14\, {\mit\Omega}^{pr\underline{a}}\,G_{dabc}\,{\mit\Omega}^{m]bc}\big)\nonumber\\ &\hspace{1cm}\quad+{\rm cyclic} (ij,kl,mn) ~. \end{align}\end{subequations} Again these expressions are given in the holonomic frame. The Bianchi identities in a non-holonomic frame may be found in the same way. \section{Threebrane Sigma-Models and Homotopy Algebroids} \label{sec4} \subsection{Overview} \label{sec41} Let us begin with an explanation of which precise question we aim at answering in the following. To this end let us go back to the string theory case, and the expressions for fluxes and Bianchi identities there. As already mentioned, they may be determined with the same approach as the one we used in Section~\ref{sec3} for M-theory. The essential point we would like to recall is as follows. 
First we note that the fluxes and Bianchi identities in the holonomic frame for generalized geometry may be written in the compact form \begin{align} \label{ca2} \rho^i{}_{I}\,\partial_i\rho^j{}_J-\rho^i{}_J\,\partial_i\rho^j{}_I-\eta^{KL}\,\rho^j{}_K\,T_{LIJ}&=0~,\\[4pt] \label{ca3} 4\,\rho^i{}_{[L}\,\partial_iT_{IJK]}+3\,\eta^{MN}\, T_{M[IJ}\,T_{KL]N}&=0~, \end{align} where indices $i,j,\dots$ run over $1,\dots,d$ while $I,J,\dots$ run through $1,\dots,2d$. Here $\rho^i{}_I=(\d^i{}_j,\beta^{ij})$, $\eta_{IJ}$ is the $O(d,d)$-invariant metric, and the 3-form $T_{IJK}$ corresponds to the four fluxes $H_{ijk}$, $F_{ij}{}^{k}$, $Q_i{}^{jk}$ and $R^{ijk}$ depending on the index position. We supplement these equations with a third one, \begin{equation} \label{ca1} \eta^{IJ}\,\rho^i{}_{I}\,\rho^j{}_J=0~, \end{equation} which is identically satisfied as long as $\beta^{(ij)}=0$, namely $\beta^{ij}$ are components of an antisymmetric 2-vector (not necessarily a Poisson bivector). These three equations may be interpreted in three different but related ways: \begin{itemize} \item As the fluxes and Bianchi identities in the NS--NS sector of general string theory compactifications. \item As the local form of the axioms of a Courant algebroid on $E_1$ \cite{Ikeda:2012pv}. \item As the conditions arising from the classical master equation in the BV--BRST quantization of generalized Wess-Zumino terms, or equivalently as the conditions for gauge invariance and on-shell closure of gauge transformations for the Courant sigma-model \cite{Ikeda:2002wh}. \end{itemize} Read differently, the above statements say the following: given a Courant algebroid, one can uniquely write down a membrane sigma-model which gives the BV--BRST action for Wess-Zumino terms that appear as fluxes in string theory compactifications. This statement is based on \cite{Ikeda:2002wh,Park:2000au,Hofman:2002jz,dee1,dee2}. In particular, in~\cite{dee2} the precise relation with topological sigma-models of AKSZ-type \cite{Alexandrov:1995kv} is demonstrated. Moving one worldvolume dimension higher, from the closed string to the closed M2-brane, we have already found the expressions for the fluxes and their Bianchi identities in Section~\ref{sec3}. This allows us to pose the following question: given a higher analogue of an exact Courant algebroid, can one write down uniquely a threebrane sigma-model which gives the BV--BRST action for generalized Wess-Zumino terms that appear as fluxes in M-theory compactifications? In other words, we would like to have a set of expressions much like \eqref{ca2}--\eqref{ca1}, which can be interpreted again in three different ways: as M-theory fluxes and Bianchi identities, as the local form of the properties of the higher Courant bracket, and as conditions that guarantee the classical master equation for some topological threebrane sigma-model (or, equivalently, its gauge invariance and on-shell closure of gauge transformations). We address this question in Section~\ref{sec5} below. In the present section we first discuss the construction of threebrane sigma-models as the higher analogue of Courant sigma-models. \subsection{Topological Threebrane Sigma-Models} \label{sec42} The threebrane sigma-model that corresponds to the topological AKSZ theory for open threebranes, or in other words the BV--BRST action for a 4-form Wess-Zumino term, was constructed explicitly in \cite{Ikeda:2010vz}.
We shall not review the full construction here,{\footnote{Along the usual lines of the BV--BRST formalism this consists in considering the BRST symmetry by replacing gauge parameters by ghosts, higher gauge parameters by ghosts-for-ghosts, and introducing the corresponding antifields required by the BV formalism. In the full space of fields, ghosts and antifields, an antibracket is defined and a BV--BRST action is constructed in terms of superfields on a graded manifold, which is required to satisfy the classical master equation imposing BRST-invariance.}} but instead we consider just the zero-ghost topological action for the AKSZ topological sigma-model in four dimensions, which will be enough to make our main point. Its general form reads as
\begin{align}
S[X,\alpha,A,F]&=\int_{\S_4}\, \big(F_i\wedge\mathrm{d} X^i-\alpha_I\wedge\mathrm{d} A^I+\rho^i{}_{I}(X)\,F_i\wedge A^I\,+\sfrac 12\, S^{IJ}(X)\,\alpha_I\wedge \alpha_J \label{qp3}\\
&\hspace{1cm}\qquad+ \,\sfrac 12\, T^{I}{}_{JK}(X)\,\alpha_I\wedge A^J\wedge A^K +\sfrac 1{4!}\,G_{IJKL}(X)\,A^I\wedge A^J\wedge A^K\wedge A^L\big)~. \nonumber
\end{align}
Let us spell out the ingredients in this sigma-model action. We have a theory of maps from a threebrane worldvolume $\S_4$ to a $d$-dimensional target space $M$ (we shall mostly consider $d=4$ later, but the discussion here holds for any $d$),
\bea
X=(X^i):\S_4\longrightarrow M \qquad \mbox{with} \quad i=1,\dots , d~,
\end{eqnarray}
and $F\in {\Omega}^{3}(\S_4,X^{\ast}T^{\ast}M)$ is an auxiliary worldvolume 3-form taking values in the pullback of the cotangent bundle of $M$ by the map $X$. In addition, there is a worldvolume 1-form $A\in{\Omega}^1(\S_4,X^{\ast}E)$ and a worldvolume 2-form $\alpha\in{\Omega}^2(\S_4,X^{\ast}E^{\ast})$. They take values in the pullback of a vector bundle $E\to M$ and its dual $E^*\to M$ respectively. For example this could be the tangent bundle, but not necessarily so (see below). Here $I$ is a bundle index when a basis of local sections $\{e_I\}$ of $E$ is chosen, with corresponding dual basis $\{e^I\}$ for $E^*$, while $\rho^i{}_I$ are the components of an anchor map $\rho: E \to TM$, and $S^{IJ}$ is symmetric in its two bundle indices and defines a symmetric bilinear pairing on sections of $E^*$ (possibly degenerate). Finally, $T^{I}{}_{JK}$ are structure constants of an antisymmetric bracket on sections of $E$ and $G_{IJKL}$ is a generalized 4-form on $E$. The quantities $S$, $T$ and $G$ are all functions of $X(\sigma)$, where $\sigma^{\alpha}$ are the local coordinates of the threebrane worldvolume, and they will be understood as generalized Wess-Zumino couplings in the present setting.
There is a hierarchy of structures and fields exhibited in the following table:
\small
\begin{center}\boxed{
\begin{tabular}{cccccc}
\underline{$\dim\S$} & \underline{AKSZ $\sigma$-model} & \underline{0-forms} & \underline{1-forms} & \underline{2-forms} & \underline{3-forms} \\[4pt]
2 & Poisson & $X^i$ & $F_i\in\G(X^{\ast}T^{\ast}M)$ & --- & --- \\[4pt]
3 & Courant & $X^i$ & $A^I\in \G(X^{\ast}E)$ & $F_i\in\G(X^{\ast}T^{\ast}M)$ & --- \\[4pt]
4 & Threebrane & $X^i$ & $A^I \in\G(X^{\ast}E)$ & $\alpha_I\in \G(X^{\ast}E^{\ast})$ & $F_i\in \G(X^{\ast}T^{\ast}M)$
\end{tabular}}
\end{center}
\normalsize
Specifically, the AKSZ theory in two dimensions corresponds to the BV--BRST action for the Poisson sigma-model (the first order formulation of the topological bosonic string $B$-field amplitude), in three dimensions to the Courant sigma-model and in four dimensions to the threebrane sigma-model that we discuss here. This is of course part of a semi-infinite staircase of (higher) geometric structures and topological sigma-models. More details may be found for example in the review \cite{Ikeda:2012pv}. The action \eqref{qp3} comes with a host of conditions stemming from the classical master equation---or, equivalently, from gauge invariance. Recall that in the two-dimensional case these conditions are equivalent to the vanishing of the Schouten-Nijenhuis bracket for a bivector field, which is the condition for a Poisson structure on $M$ or the Lie algebroid axioms for the cotangent bundle $T^*M$ equipped with the Koszul-Schouten bracket, while in the three-dimensional case they are equivalent to the axioms of a Courant algebroid. In the four-dimensional case the conditions found in \cite{Ikeda:2010vz,Gru} define a higher algebroid structure, called a Lie algebroid up to homotopy in \cite{Ikeda:2010vz} or more generally an $H$-twisted Lie algebroid in \cite{Gru} (see also Refs. \cite{Gru2,Carow-Watamura:2016lob}). Instead of providing the geometric axioms defining such a structure, we find it more illuminating at this stage to focus on the equivalent local coordinate conditions imposed by the classical master equation; the two approaches in any case reflect the same structure.
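Before turning to these conditions, it is useful to record for orientation the zero-ghost actions of the first two members of the hierarchy in the table above; up to sign and normalisation conventions, which vary in the literature, they take the familiar forms
\begin{align}
S_{\rm Poisson}[X,F]&=\int_{\S_2}\,\big(F_i\wedge\mathrm{d} X^i+\sfrac 12\,{\mit\Pi}^{ij}(X)\,F_i\wedge F_j\big)~,\nonumber\\[4pt]
S_{\rm Courant}[X,A,F]&=\int_{\S_3}\,\big(F_i\wedge\mathrm{d} X^i+\sfrac 12\,\eta_{IJ}\,A^I\wedge\mathrm{d} A^J+\rho^i{}_I(X)\,F_i\wedge A^I+\sfrac 1{3!}\,T_{IJK}(X)\,A^I\wedge A^J\wedge A^K\big)~,\nonumber
\end{align}
whose classical master equations impose, respectively, the Poisson condition on the bivector ${\mit\Pi}$ and the local Courant algebroid relations \eqref{ca2}--\eqref{ca1}; the action \eqref{qp3} continues this pattern one worldvolume dimension higher.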
The action \eqref{qp3} is invariant under the gauge transformations parametrized by scalar, 1-form and 2-form gauge parameters $\epsilon$, $\zeta$ and $t$: \bse\begin{align} \label{gt1} \delta X^i&=-\rho^i{}_I\, \epsilon^I~,\\[4pt] \label{gt2} \delta A^I&=\mathrm{d}\epsilon^I+S^{IJ}\,\zeta_J-T^I{}_{JK}\,A^J\,\epsilon^K~,\\[4pt] \label{gt3} \delta \alpha_I&=\mathrm{d} \zeta_I+\rho^{i}{}_{I}\,t_i+T^J{}_{IK}\,\zeta_J\wedge A^K+T^J{}_{IK}\,\alpha_J\,\epsilon^K+\sfrac 12\, G_{IJKL}\,\epsilon^J\,A^K\wedge A^L~,\\[4pt] \delta F_i&=-\mathrm{d} t_i+\partial_i\rho^j{}_I\,\big(\epsilon^I\,F_j+t_{j}\wedge A^{I}\big)-\partial_iT^J{}_{LI}\,\epsilon^I\,\alpha_J\wedge A^L \nonumber\\ \label{gt4} &\quad\,-\sfrac 16\, \partial_i G_{IJKL}\,\epsilon^I\,A^J\wedge A^K\wedge A^L +\sfrac 12\,\partial_iT^I{}_{JK}\,\zeta_I\wedge A^J\wedge A^K+\partial_i S^{IJ}\,\zeta_I\wedge\alpha_J~, \end{align}\end{subequations} provided the following conditions are met: \bse\begin{align} \label{c1} \rho^i{}_I\,S^{IJ}=0~,\\[4pt] \label{c2} \rho^{i}{}_I\,\partial_iS^{JK}+S^{LJ\,}T^K{}_{ IL}+S^{LK}\,T^J_{IL}=0~,\\[4pt] \label{c3} \rho^i{}_I\,\partial_i\rho^j{}_J-\rho^i{}_J\,\partial_i\rho^j{}_I-\rho^j{}_K\,T^K{}_{ IJ}=0~,\\[4pt] \label{c4} 3\,\rho^i{}_{[I}\,\partial_iT^J{}_{ KL]}+S^{JM}\,G_{KLIM}-3\,T^J{}_{ M[K}\,T^M{}_{ LI]}=0~,\\[4pt] \label{c5} \rho^i{}_{[I}\,\partial_iG_{JKLM]}+T^N{}_{ [IJ}\,G_{KLM]N}=0~. \end{align}\end{subequations} Therefore, given a set of structure functions that solve these conditions, the corresponding topological threebrane sigma-model is uniquely determined. One can then reconstruct the anchor, bracket and 4-form on the vector bundle $E$, as well as the pairing on its dual $E^*$, as derived brackets \cite{Ikeda:2010vz}. These relations define a homotopy deformation of a Lie algebroid on $E$: Setting $S=G=0$ reduces the conditions \eqref{c1}--\eqref{c5} to the usual axioms of a Lie algebroid, with the remaining non-trivial identities \eqref{c3} and \eqref{c4} corresponding to the homomorphism property, Leibniz rule and Jacobi identity for the anchor map and bracket on $E$. Generally, the bracket $[\,\cdot\,,\,\cdot\,]$ on $E$ extends to give a higher analog of the (twisted) Courant bracket on sections of $E\oplus\mbox{\footnotesize$\bigwedge$}^2E^*$, which can again be computed as a derived bracket and for $S=0$ is given by~\cite{Ikeda:2010vz} \begin{equation}\label{eq:derivedbracket} [s_1+\gamma_1,s_2+\gamma_2]_G = [s_1,s_2] +{\cal L}_{s_1}\gamma_2-{\cal L}_{s_2}\gamma_1-\sfrac12\,\mathrm{d}_E(\iota_{s_1}\gamma_2-\iota_{s_2}\gamma_1) + \iota_{s_1}\iota_{s_2}G \ , \end{equation} where $s_1,s_2\in\Gamma(E)$ and $\gamma_1,\gamma_2\in\Gamma(\mbox{\footnotesize$\bigwedge$}^2E^*)$. Here ${\cal L}_{s_i}=\mathrm{d}_E\,\iota_{s_i}+\iota_{s_i}\,\mathrm{d}_E$ and $\mathrm{d}_E:\Gamma(\mbox{\footnotesize$\bigwedge$}^pE^*)\to\Gamma(\mbox{\footnotesize$\bigwedge$}^{p+1}E^*)$ is the usual Lie algebroid differential defined by \begin{equation} (\mathrm{d}_E\,\omega)_{I_0I_1\cdots I_p}=\rho^i{}_{[I_0}\,\partial_i\,\omega_{I_1\cdots I_p]}+T^J{}_{[I_0I_1}\,\omega_{I_2\cdots I_{p-1}]J} \end{equation} for $\omega=\frac1{p!}\, \omega_{I_1\cdots I_p}\,e^{I_1}\wedge\cdots \wedge e^{I_p}$. Then \eqref{c4} (with $S=0$) is the nilpotency condition $\mathrm{d}_E^2=0$, while \eqref{c5} is just the closure condition $\mathrm{d}_E G=0$ which states that the twisting 4-form $G$ represents a class in the degree~4 Lie algebroid cohomology of $M$; this classifies the Lie algebroids up to homotopy over $M$ with $S=0$~\cite{Ikeda:2010vz}. 
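To make the differential more concrete, note that with a convenient choice of normalisation its action in the two lowest degrees is
\begin{equation}
(\mathrm{d}_E f)_I=\rho^i{}_I\,\partial_if~,\qquad (\mathrm{d}_E\omega)_{IJ}=\rho^i{}_I\,\partial_i\omega_J-\rho^i{}_J\,\partial_i\omega_I-T^K{}_{IJ}\,\omega_K~,
\end{equation}
for a function $f$ and a section $\omega=\omega_I\,e^I$ of $E^*$. A short calculation then shows that $(\mathrm{d}_E^{\,2}f)_{IJ}$ equals the left-hand side of \eqref{c3} contracted with $\partial_jf$, so that nilpotency in lowest degree is precisely the anchor condition \eqref{c3}, complementing the statement above that \eqref{c4} (with $S=0$) encodes $\mathrm{d}_E^2=0$ on higher-degree forms.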
The relation of the threebrane sigma-model to the higher Courant bracket may also be established as follows. First recall that the generalized Wess-Zumino term for Courant sigma-models is
\begin{equation}
\label{wztermCA} \int_{\S_3}\,\sfrac 1{3!}\,X^{\ast}(T_{IJK})\,A^{I}\wedge A^{J}\wedge A^{K}=\int_{\S_3}\,\sfrac 1{3}\,X^{\ast}\big(\langle e_{I},[e_J,e_K]\rangle\big)\,A^{I}\wedge A^{J}\wedge A^{K}~,
\end{equation}
where the components $T_{IJK}$ are directly related to the twisted Courant-Roytenberg bracket in a local basis $\{e_I\}$ of $E_1=TM\oplus T^{\ast}M$ as $T_{IJK}=2\,\langle e_{I},[e_J,e_K]\rangle$~\cite{dee2}.{\footnote{We indicated explicitly the pullback by $X$ here, in order to avoid confusion, in contrast to the customary short-hand notation $\sfrac 13\, \langle A,[A,A]\rangle$ for the integrand in \eqref{wztermCA} used e.g. in~\cite{Chatzistavrakidis:2018ztm}; the latter notation should be treated with caution, since the fields $A$ live in the pullback bundle $X^{\ast}E_1$, which unlike $E_1$ itself is not endowed naturally with a Courant algebroid structure. Thus the notation $[A,A]$ can be misleading, since no such bracket is defined on $X^{\ast}E_1$, and we refrain from using it here.}} In a similar fashion, the last two terms in the action \eqref{qp3} may be written as
\bse
\begin{align}
\sfrac 12 \,T^{I}{}_{JK}(X)\,\alpha_{I}\wedge A^{J}\wedge A^{K}& =X^{\ast}\big(\langle e^{I},[e_{J},e_{K}]\rangle\big)\,\alpha_{I}\wedge A^{J}\wedge A^{K}~, \\[4pt]
\label{GWZ} \sfrac 14 \,G_{IJKL}(X)\, A^{I}\wedge A^{J}\wedge A^{K}\wedge A^{L}&=X^{\ast}\big(\langle e_{I},\langle e_J,[e_K,e_L]\rangle\rangle\big)\, A^{I}\wedge A^{J}\wedge A^{K}\wedge A^{L}~,
\end{align}
\end{subequations}
where as before $\{e_I\}$ and $\{e^{I}\}$ are local bases of sections of $E$ and $E^{\ast}$ respectively, the bracket is the higher Courant bracket on $E\oplus\mbox{\footnotesize$\bigwedge$}^2E^*$ and the bilinear form corresponds to symmetric contraction as in \eqref{higherbilinear}:{\footnote{In \eqref{GWZ} the two bilinear forms are in principle different: the first (``inner'') bilinear form is defined on $E\oplus \mbox{\scriptsize$\bigwedge$}^2 E^*$, while the second (``outer'') bilinear form is the canonical pairing between the vector bundle $E$ and its dual $E^*$. Since in both cases the corresponding contractions are understood from the context, we refrain from establishing a separate notation for these two operations.}}
\begin{equation}
\langle s_1+\gamma_1, s_2+\gamma_2\rangle=\sfrac 12\, (\iota_{s_1}\gamma_2 +\iota_{s_2}\gamma_1) \ .
\end{equation}
Thus the higher geometric operations introduced above and in Section~\ref{sec21} directly dictate the generalized Wess-Zumino terms in the threebrane sigma-model. This will be further exemplified in a number of examples below. Therefore, the question we posed in Section \ref{sec41} may be rephrased as follows: What is the relation of the conditions \eqref{c1}--\eqref{c5} to the fluxes and Bianchi identities that we found in Section \ref{sec3}? Before we delve into the answer, we first discuss some (known and new) characteristic examples for this structure.
\subsection{Examples} \label{sec43}
\paragraph{Homotopy tangent algebroids.} The simplest possibility is to choose $E=TM$ with the usual Lie bracket of vector fields.
Then the bundle index $I$ is identified with the coordinate index $i$, in a local basis $\{e_i\}$ of the tangent bundle. The worldvolume 1-form $A^i=A^{i}_{\alpha}\,\mathrm{d}\sigma^{\alpha}$ is valued in the pullback of the tangent bundle $X^{\ast}TM$ over $\S_4$ and the worldvolume 2-form $\alpha_i=\sfrac 12\, \alpha_{i\,\alpha\beta}\,\mathrm{d}\sigma^{\alpha}\wedge\mathrm{d}\sigma^{\beta}$ is valued in the pullback of the cotangent bundle $X^{\ast}T^{\ast}M$. We begin with an analysis of the conditions \eqref{c1}--\eqref{c5} that define a Lie algebroid up to homotopy. The condition \eqref{c1} reads as
\begin{equation}
\rho^i{}_k\,S^{kj}=0~.
\end{equation}
If the pairing $S^{ij}$ is non-degenerate, this implies $\rho^i{}_j=0$. This is a legitimate option, especially in the case that the base manifold $M$ is a point and the algebroid structure is reduced to an algebra. Such cases were examined in \cite{Ikeda:2010vz}. For our purposes, it is more interesting to consider instead the case that the anchor is non-degenerate, in which case one concludes that
\begin{equation}
S^{ij}=0~.
\end{equation}
Then \eqref{c1} and also \eqref{c2} are satisfied automatically. Since $\rho$ is non-degenerate, with inverse $\rho_i{}^j$, the condition \eqref{c3} requires that
\begin{equation}
\label{geometricflux} T^{i}{}_{jk}=2\,\rho_l{}^{i}\,\rho^{m}{}_{[j}\, \partial_{\underline{m}}\,\rho^{l}{}_{k]}~.
\end{equation}
Finally, the relations \eqref{c4} and \eqref{c5} resemble Bianchi identities and they explicitly read as
\bse
\begin{align}
\rho^{l}{}_{[i}\,\partial_{\underline{l}}T^{j}{}_{mn]}-T^{j}{}_{l[m}\,T^{l}{}_{ni]}&=0~,\\[4pt]
\rho^{n}{}_{[i}\,\partial_{\underline{n}}G_{jklm]}+T^{n}{}_{[ij}\,G_{klm]n}&=0~.
\end{align}
\end{subequations}
We distinguish two particularly interesting cases below.
\paragraph{$\boldsymbol G$-flux.} Choose the anchor to be the projection to the tangent bundle, namely $\rho=\text{id}$, or more explicitly $\rho^{i}{}_{j}=\d^{i}{}_{j}$. Then \eqref{c3} implies immediately that $T^{i}{}_{jk}=0$, which automatically satisfies \eqref{c4}; in this case the derived bracket \eqref{eq:derivedbracket} reproduces (for $G=0$) the higher Courant bracket on $E_2=TM\oplus\mbox{\footnotesize$\bigwedge$}^2\,T^*M$ from \eqref{courant}. Finally \eqref{c5} is the Bianchi identity which simply states that the 4-form $G$ is closed,
\begin{equation}
\partial_{[i}G_{jklm]}=0 \ ,
\end{equation}
or $\mathrm{d} G=0$, implying that $G$ defines a class in the degree~$4$ de~Rham cohomology of $M$. The corresponding threebrane sigma-model becomes
\begin{align}
S[X,\alpha,A,F]&=\int_{\S_4}\,\big(F_i\wedge(\mathrm{d} X^i+A^{i})-\alpha_i\wedge\mathrm{d} A^i \nonumber\\ &\hspace{1cm}\qquad+\sfrac 1{4!}\,G_{ijkl}(X)\,A^i\wedge A^j\wedge A^k\wedge A^l\big)~.
\end{align}
In accord with the discussion at the end of Section \ref{sec42}, the Wess-Zumino term is associated to the higher Courant bracket on $E_2=TM\oplus \mbox{\footnotesize$\bigwedge$}^2\, T^{\ast}M$ by means of the relation
\begin{equation}
\sfrac 14 \,G_{ijkl}(X)=X^{\ast}\big(\langle e_i,\langle e_j,[e_k,e_l]\rangle\rangle\big)~,
\end{equation}
where the bracket and the bilinear form are given by \eqref{agg1} and \eqref{bilinear2} respectively. It is useful to add a boundary term
\begin{equation}
S_{\partial}[X,A]=\oint_{\partial\S_4}\, \sfrac 12\, g_{ij}(X)\,A^i\wedge\ast \,A^j~,
\end{equation}
where the Hodge duality operation $\ast$ is in three dimensions, sending 1-forms to 2-forms. A topological boundary term of the form
\begin{equation}
S_{\partial,\text{top}}[X,A,\alpha]=\oint_{\partial\S_4}\, h^i_j(X)\, \alpha_i\wedge A^{j}
\end{equation}
is also possible, as well as boundary terms of type $\alpha\wedge\ast\,\alpha$ and $A\wedge A\wedge A$, however we do not include them in this example. The field equation for the auxiliary 3-form $F_i$ gives
\begin{equation}
A^i=-\mathrm{d} X^i~,
\end{equation}
which with $G=\mathrm{d} C$ leads locally to the M2-brane action
\begin{equation}
S_{\partial}[X]=\oint_{\partial\S_4} \, \big(\sfrac 12\, g_{ij}\,\mathrm{d} X^i\wedge\ast\,\mathrm{d} X^j+\sfrac 1{3!}\,C_{ijk}\,\mathrm{d} X^i\wedge\mathrm{d} X^j\wedge\mathrm{d} X^k\big)~.
\end{equation}
This is recognized as the action for a closed M2-brane coupled to a 3-form $C$-field, whose field strength is the 4-form $G$-flux. This example, without the metric term, was considered in \cite{Ikeda:2010vz}, and recently in more detail in \cite{Kokenyesi:2018ynq} where the double dimensional reduction of the Wess-Zumino term for wrapped threebranes along the M-theory circle is shown to reproduce the standard Wess-Zumino membrane coupling to an NS--NS $H$-flux.
\paragraph{M-theory on twisted tori.} Motivated by Scherk-Schwarz reductions of M-theory on twisted tori, studied in detail in \cite{Hull:2006tp}, we can choose $\rho^i{}_j$ to be equal to the components of a globally defined coframe ${\sf E}^{i}{}_{j}(X)$ for a twisted torus.{\footnote{As explained in e.g.~\cite{Chatzistavrakidis:2018ztm}, one has to choose an isomorphism ${\sf E}:TN\to TM$ between the tangent bundles of $M$ and the twisted torus $N$ with corresponding components ${\sf E}^{m}{}_{i}$, whose inverse ${\sf E}^{-1}:TM\to TN$ has components ${\sf E}_m{}^i$, and identify the anchor with those. We present a short-cut version of this construction here, hoping that no confusion is caused.}} The simplest example, but by no means the only one, is to take a twisted 4-torus which is a trivial circle bundle over the three-dimensional Heisenberg nilmanifold, see for example \cite{Blair:2014zba,Bosque:2016fpi,Gunaydin:2016axc,Kupriyanov:2017oob,Lust:2017bgx,Lust:2017bwq}. Then $T^{i}{}_{jk}$ is simply given by \eqref{geometricflux} and it is constant, corresponding to the structure constants of the associated nilpotent Lie algebra. Due to the Jacobi identity, the condition \eqref{c4} is identically satisfied. One may further choose $G=0$, in which case \eqref{c5} is also an identity and all conditions are solved.
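For concreteness, in this Heisenberg case a standard choice of globally defined coframe is (up to conventions for the integer flux parameter $N$)
\begin{equation}
{\sf E}^1=\mathrm{d} x^1~,\qquad {\sf E}^2=\mathrm{d} x^2~,\qquad {\sf E}^3=\mathrm{d} x^3-N\,x^1\,\mathrm{d} x^2~,\qquad {\sf E}^4=\mathrm{d} x^4~,
\end{equation}
with ${\sf E}^4$ spanning the trivial circle direction. Then $\mathrm{d}{\sf E}^3=-N\,{\sf E}^1\wedge{\sf E}^2$ while the other coframe 1-forms are closed, so comparison with the Maurer-Cartan equations used below identifies the single independent non-vanishing structure constant as $T^3{}_{12}=N$, up to the sign conventions implicit in \eqref{geometricflux}.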
The sigma-model is
\begin{align}
S[X,\alpha,A,F]=\int_{\S_4}\,\big(F_i\wedge(\mathrm{d} X^i+{\sf E}^{i}{}_{j}\,A^{j})-\alpha_i\wedge\mathrm{d} A^i+ \sfrac 1{2}\,T^{i}{}_{jk}\,\alpha_i\wedge A^j\wedge A^k\big)~,
\end{align}
where the last term corresponds to
\begin{equation}
\sfrac 12 \, T^{i}{}_{jk}=X^{\ast}\big(\langle e^{i},[e_j,e_k]\rangle\big)~,
\end{equation}
and we add the same boundary term as before. The equation of motion for $F_i$ yields
\begin{equation}
A^{i}=-{\sf E}^{i}=-{\sf E}_j{}^i\,\mathrm{d} X^j \qquad \text{with}\quad \mathrm{d} {\sf E}^{i}=-\sfrac 12\, T^{i}{}_{jk}\, {\sf E}^{j}\wedge {\sf E}^{k}~,
\end{equation}
where we used the Maurer-Cartan equations. Inserting this in the action, the bulk terms cancel completely and the boundary term becomes
\begin{equation}
S_{\partial}[X]=\oint_{\partial\S_4}\, \sfrac 12\, g_{ij}\, {\sf E}^{i}\wedge\ast\, {\sf E}^{j}~,
\end{equation}
which is the correct M2-brane action for an M-theory background with purely geometric torsion flux $T^{i}{}_{jk}$. One may already consider a more general situation, by allowing a non-vanishing constant 4-form $G$. Then one would simply obtain the same model decorated with an additional $G$-flux, subject to the algebraic identity
\begin{equation}
T^{n}{}_{[ij}\,G_{klm]n}=0~,
\end{equation}
due to \eqref{c5}. This is a special case of the algebraic Bianchi identity \eqref{eq:GF0}, and it is reassuring to see that the very same condition was considered in the context of Scherk-Schwarz reductions with both fluxes turned on, see \cite[Eq. (2.5)]{Hull:2006tp}, where it ensures that the flux ${\sf G}=\frac1{4!}\, G_{ijkl}\, {\sf E}^i\wedge{\sf E}^j\wedge{\sf E}^k\wedge{\sf E}^l$ is closed. In the present context this identity is imposed by the gauge invariance of the threebrane sigma-model via the classical master equation, and it ensures closure of the Wess-Zumino term $X^*({\sf G})$.
\paragraph{Homotopy cotangent algebroids.} An option that has not been explored so far is the choice of vector bundle $E=T^{\ast}M$. As we shall see, this is the analogue of the $R$-flux with Poisson structure in the case of (contravariant) Courant algebroids \cite{Asakawa:2014kua}, which leads to a \emph{geometric} $R$-flux. The difference is that in the Courant algebroid, the analogues of the fields $\alpha$ and $A$ are treated symmetrically because they are both degree~$1$ fields. Here this ``symmetry'' breaks down. We choose a local coframe $\{e^{i}\}$ for the cotangent bundle, with dual local frame $\{e_i\}$ of the tangent bundle.
According to the explanations of Section~\ref{sec42}, the worldvolume 1-form $A=A_i\,e^i$ with $A_i=A_{i\,\alpha}\,\mathrm{d}\sigma^{\alpha}$ is now valued in $X^{\ast}T^{\ast}M$, while the worldvolume 2-form $\alpha=\alpha^i\,e_i$ with $\alpha^{i}=\sfrac 12\, \alpha^{i}_{\alpha\beta}\,\mathrm{d}\sigma^{\alpha}\wedge\mathrm{d}\sigma^{\beta}$ is valued in $X^{\ast}TM$. In the case of exact Courant algebroids, where $E=E_1=TM\oplus T^{\ast}M$, taking instead the dual $E_1^*=T^{\ast}M\oplus TM$ does not lead to any difference but a renaming of the fields; however, this is not the case here, since the fields have different degrees. Let us proceed with the analysis of the conditions for a Lie algebroid up to homotopy. First, the anchor components $\rho^{i}{}_{I}$ now become $\rho^{ij}$ and they correspond to a bivector. The condition \eqref{c1} becomes
\begin{equation}
\rho^{ij}\,S_{jk}=0~,
\end{equation}
which implies $S_{ij}=0$ if as before we assume that the anchor is a non-degenerate map. Once more, the condition \eqref{c2} is then automatically satisfied. On the other hand, the relation \eqref{c3} becomes
\begin{equation}
\rho^{li}\,\partial_l\rho^{jk}-\rho^{lk}\,\partial_l\rho^{ji}-\rho^{jl}\,T_l{}^{ik}=0~.
\end{equation}
This is solved by identifying $\rho^{ij}={\mit\Pi}^{ij}$ with the components of a Poisson bivector ${\mit\Pi}$, satisfying $[{\mit\Pi},{\mit\Pi}]_{\rm SN}=0$ with respect to the Schouten-Nijenhuis bracket on multivector fields, and
\begin{equation}
T_i{}^{jk}=-\,Q_i{}^{jk}:=-\,\partial_i{\mit\Pi}^{jk}~.
\end{equation}
This $Q$-flux satisfies the Bianchi identity \eqref{c4},
\begin{equation}
{\mit\Pi}^{l[i}\,\partial_lQ_j{}^{km]}=Q_j{}^{l[k}\,Q_l{}^{mi]}~.
\end{equation}
If we allow a non-vanishing generalized 4-form $G$, which in the present case is a tetravector, it has to satisfy the Bianchi identity \eqref{c5} which reads as
\begin{equation}
{\mit\Pi}^{l[i}\,\partial_lG^{jkmn]}+Q_l{}^{[ij}\,G^{kmn]l}=0~,
\end{equation}
which may be written in the suggestive form
\begin{equation}
\mathrm{d}_{{\mit\Pi}}G:=[{\mit\Pi},G]_{\rm SN}=0~.
\end{equation}
This is reminiscent of the Bianchi identity $[{\mit\Pi},R]_{\rm SN}=0$ in the case of the Poisson Courant algebroid. In this case the higher Courant bracket \eqref{eq:derivedbracket} on $E_2^*=T^*M\oplus\mbox{\footnotesize$\bigwedge$}^2\,TM$ is written using the Lichnerowicz differential $\mathrm{d}_E=\mathrm{d}_{\mit\Pi}=[{\mit\Pi},\,\cdot\,]_{\rm SN}$ defined on multivector fields, together with the Koszul-Schouten bracket on $E=T^*M$ which for 1-forms $\tau_1,\tau_2\in\Gamma(T^*M)$ reads as
\begin{equation}
[\tau_1,\tau_2]_{\mit\Pi} = {\cal L}_{\iota_{\tau_1}{\mit\Pi}}\,\tau_2 - {\cal L}_{\iota_{\tau_2}{\mit\Pi}}\,\tau_1-\mathrm{d}\,\iota_{\tau_1}\iota_{\tau_2}{\mit\Pi} \ .
\end{equation}
Then the twisting 4-vector $G$ defines a class in the degree~4 Poisson cohomology of $M$.
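A simple illustration of this bracket, with the contraction conventions understood above, is its evaluation on exact 1-forms: for functions $f,g$ on $M$ one finds
\begin{equation}
[\mathrm{d} f,\mathrm{d} g]_{\mit\Pi}=\mathrm{d}\{f,g\}~,\qquad \{f,g\}:={\mit\Pi}^{ij}\,\partial_if\,\partial_jg~,
\end{equation}
so that the exterior derivative intertwines the Poisson bracket of functions with the Koszul-Schouten bracket, as expected for the cotangent Lie algebroid of a Poisson manifold.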
Now that we have satisfied all the conditions for a Lie algebroid up to homotopy, we are all set to write down the threebrane sigma-model for the above data. It is given by the action
\begin{align}
S[X,\alpha,A,F]&=\int_{\S_4}\,\big(F_i\wedge\mathrm{d} X^i-\alpha^{i}\wedge\mathrm{d} A_i+{\mit\Pi}^{ij}\,F_i\wedge A_j \nonumber\\ &\hspace{1cm}\qquad -\sfrac 12\, Q_{i}^{\ jk}\,\alpha^{i}\wedge A_j\wedge A_k +\sfrac 1{4!}\,G^{ijkl}\,A_i\wedge A_j\wedge A_k\wedge A_l\big)~.
\end{align}
The equation of motion for $F_i$ gives
\begin{equation}
\mathrm{d} X^i=-{\mit\Pi}^{ij}\, A_j~,
\end{equation}
which can be inverted due to the non-degeneracy of the Poisson bivector to get
\begin{equation}
A_i=-{\mit\Pi}^{-1}_{ij}\,\mathrm{d} X^j~.
\end{equation}
Choosing local Darboux coordinates in which both $G^{ijkl}$ and ${\mit\Pi}^{ij}$ are constant, and adding a suitable boundary term, one obtains the M2-brane sigma model
\begin{align}
S_{\partial}[X]&=\oint_{\partial\S_4}\, \sfrac 12\, \big(g_{ij}-{\mit\Pi}^{-1}_{ik}\,g^{kl}\,{\mit\Pi}^{-1}_{lj}\big)\,\mathrm{d} X^{i}\wedge\ast\,\mathrm{d} X^{j}\nonumber\\ &\hspace{1cm}\qquad +\oint_{\partial\S_4}\, \sfrac 1{4!}\, G^{pqrs}\,{\mit\Pi}^{-1}_{ip}\,{\mit\Pi}^{-1}_{jq}\,{\mit\Pi}^{-1}_{kr}\,{\mit\Pi}^{-1}_{ls}\,X^{i}\, \mathrm{d} X^{j}\wedge \mathrm{d} X^{k}\wedge \mathrm{d} X^{l}~.
\end{align}
This is a non-trivial example of a Lie algebroid up to homotopy that generalizes to open threebranes the Poisson $R$-flux model for the open membrane. In the present case, the 4-form flux is controlled by a 4-vector $G$ that satisfies $[{\mit\Pi}, G]_{\rm SN}=0$. We stress that this is not the analogue of the \emph{non-geometric} $R$-flux in M-theory, as it does not correspond to the lift of the nonassociative closed string $R$-flux deformation along the M-theory circle. It is simply the analogue of the Poisson Courant algebroid \cite{Chatzistavrakidis:2015vka,Asakawa:2014kua,Chatzistavrakidis:2018ztm} at one level higher in the geometric staircase, and it is a geometric model. The analogue of the non-geometric $R$-flux will be discussed below.
\section{Threebrane Sigma-Models and M-Theory Fluxes} \label{sec5}
We would now like to address the question posed in Section \ref{sec41}, and relate the general M-theory fluxes and their Bianchi identities computed in Section \ref{sec3} to the threebrane sigma-models presented in Section \ref{sec42}. This question can now be stated in more precise terms as follows: do the conditions \eqref{c1}--\eqref{c5} generate all the fluxes and Bianchi identities? We will answer this question in the affirmative and thus enable ourselves to write down the corresponding threebrane sigma-model. The answer is that in the context of the threebrane sigma-model we should consider the vector bundle $E=E_2=TM\oplus \mbox{\footnotesize$\bigwedge$}^2\,T^{\ast}M$. Although this choice seems very reasonable in view of our previous discussion, one should appreciate that it is not the most natural choice, which is perhaps the reason that it has not been considered before. This may be explained by invoking the analogy with the Courant sigma-model. In that case, one has two different worldvolume 1-forms, say $q^{i}$ and $p_i$, taking values in dual bundles, say $L$ and $L^{\ast}$ respectively.
However, since they are of the same degree, they can be combined in a single 1-form $A^{I}$ taking values in $E_1=L\oplus L^{\ast}$, so that $L$ is a maximally isotropic subbundle of $E_1$ with respect to the symmetric contraction pairing. Then the natural choice would be $L=TM$, which gives rise to the generalized tangent bundle. (The second choice $L=T^{\ast}M$ leads to the same bundle, as we already mentioned before.) On the contrary, in the present case, the two fields taking values in dual bundles $E$ and $E^{\ast}$ are of different degree and they cannot be combined. The most natural choice for $E$, the direct analogue of $L=TM$ above, would be either the tangent or cotangent bundle, the two choices being now inequivalent. But this is exactly what we have already done in Section \ref{sec43}. There we saw that this can account for the $G$-flux and for the geometric torsion flux $f^{i}{}_{jk}$, but not for the rest of the $SL(5)$ fluxes. We also saw that there is a consistent case with a 4-vector flux, which however is not one of the $SL(5)$ fluxes. Here our sole purpose is to find under which conditions the general theory yields instead the full set of $SL(5)$ fluxes and nothing more. In contrast to the case of Courant algebroids, where the $O(d,d)$-structure is intrinsic, the structure group for a general Lie algebroid up to homotopy is not naturally tailored for the U-duality group $SL(5)$ in four dimensions. Thus additional projections to $SL(5)$ tensors are needed to make contact with the $SL(5)$ fluxes. This is actually a strength of the present formalism, as there is some room for the same expressions below to also give a subset of the fluxes in higher dimensions for other exceptional U-duality groups. As in the examples of Section \ref{sec43}, based on $E=TM$ and $E=T^{\ast}M$ respectively, we directly set $S^{IJ}=0$, since this pairing does not play any further role in the identifications. Once more, this assumption takes care of the conditions \eqref{c1} and \eqref{c2}. As we now show, this means that the relevant $SL(5)$ fluxes are determined by the structure constants of a suitable Lie algebroid on $E=E_2$, whose bracket is specified by these structure constants. Considering the bundle $E_2$ over a four-dimensional target space $M$ means that its local basis index $I$ takes $10$ values, split into sets of $4$ and $6$ as before, namely a lower (upper) $I$ becomes either a lower (upper) $i$ or a set of upper (lower) $[ij]$ indices, with $i,j=1,2,3,4$. Based on this, we write the components of the anchor $\rho$ as \begin{equation} (\rho^i{}_I)=(\rho^i{}_j,\rho^{ijk})~, \end{equation} and for our purposes here we further identify \begin{equation} \rho^i{}_j=\d^i{}_j \qquad \mbox{and} \qquad \rho^{ijk}=\sfrac 12\, {\mit\Omega}^{ijk}~, \end{equation} where we assume that $\rho^{ijk}$ is a completely antisymmetric 3-vector, but not necessarily a Nambu-Poisson tensor. Then the condition \eqref{c3} yields three different equations. For both $I,J$ being $i,j$ the equation is \begin{equation} T^{m}{}_{ij}+\sfrac 12\, {\mit\Omega}^{mkl}\,T_{klij}=0~, \end{equation} which is precisely the expression \eqref{exflux2}, provided we make the identifications $T^{k}{}_{ij}=F^k{}_{ij}$ and $T_{ijkl}=G_{ijkl}$. The latter identification is non-trivial, since $T_{ijkl}$ is not completely antisymmetric \emph{a priori}, but instead it is a \emph{reducible} mixed-symmetry tensor of type $(2,2)$. 
This means that in order to make contact with the $G$-flux, we consider only the irreducible fully antisymmetric component to be non-vanishing. This is a legitimate assumption, as long as the consistency conditions are satisfied, which is the case here. In a certain sense, it corresponds to a projection to $SL(5)$ representations. Second, for $I=i$ and $J$ being (upper) $[jk]$ (or vice-versa), the equation we obtain is
\begin{equation}
\label{exflux3b} T_{i}{}^{jkl}=\sfrac 12\, \partial_i {\mit\Omega}^{jkl}-\sfrac 12\, {\mit\Omega}^{lmp}\,T_{mpi}{}^{jk}~.
\end{equation}
This instructs us to identify $T_i{}^{jkl}=Q_i{}^{jkl}$ and $T_{mpi}{}^{jk}=\widetilde F_{mpi}{}^{jk}$. This looks like \eqref{exflux4}, provided that we manage to fix $\widetilde F$ properly with the remaining equations. Third, taking $I$ to be (upper) $[ij]$ and $J$ to be (upper) $[kl]$, we obtain
\begin{equation}
\label{exflux6b} T^{mijkl}+\sfrac 12\, {\mit\Omega}^{mpq}\,T_{pq}{}^{ijkl}=\sfrac 14\, {\mit\Omega}^{nkl}\,\partial_n{\mit\Omega}^{mij}-\sfrac 14\, {\mit\Omega}^{nij}\,\partial_n{\mit\Omega}^{mkl}~.
\end{equation}
This expression indicates the identifications $T^{ijklm}=R^{jk,lm,i}$ and $T_{pq}{}^{ijkl}=\widetilde Q_{pq}{}^{ij,kl}$. Whether or not it is identical to \eqref{exflux6} remains to be shown, provided we are able to fix $\widetilde Q$ from what follows. We now move on to the condition \eqref{c4}, whose middle term is zero by assumption. Taking $I,K,L$ as single indices and $J$ as a doubled index, we obtain directly the Bianchi identity \eqref{exbi1} with the same identifications as above. This is then solved by taking $\widetilde F$ to be as in \eqref{exflux3}, in which case indeed \eqref{exflux3b} becomes identical to \eqref{exflux4}. Similarly, from \eqref{c4} we also identify $\widetilde Q$ as in \eqref{exflux5}, and thus \eqref{exflux6b} is identical to \eqref{exflux6}. Finally, in the present context \eqref{c5} is redundant and we can take a vanishing twist $G_{IJKL}=0$. By \eqref{GWZ}, the vanishing locus of $G$ defines an isotropic subbundle of $E=TM\oplus \mbox{\footnotesize$\bigwedge$}^2\,T^{\ast}M$. It is precisely on this isotropic subbundle that the Bianchi identities coming from the higher Courant bracket in Section~\ref{sec32} give the second condition \eqref{c4} coming from gauge invariance of the threebrane action with our projection. Thus we conclude that indeed the sought-for equations for the fluxes and Bianchi identities are the conditions \eqref{c1}--\eqref{c5} under the above identifications. This provides a correspondence between the higher Courant bracket and this special case of the general AKSZ threebrane sigma-model. Let us therefore use this correspondence to write down the sigma-model explicitly. First, the 1-form $A$ and the 2-form $\alpha$ have components
\bse
\begin{align}
A^I&=(A^i,A_{ij})=:(q^i,p_{ij})~, \\[4pt]
\alpha_I&=(\alpha_i,\alpha^{ij})=:(p_i,q^{ij})~,
\end{align}\end{subequations}
involving 1-forms taking values in (the pullback bundles of) $TM$ and $\mbox{\footnotesize$\bigwedge$}^{2}\,T^{\ast}M$, and 2-forms taking values in $T^{\ast}M$ and $\mbox{\footnotesize$\bigwedge$}^2\,TM$ respectively.
It would therefore appear that we have introduced an overabundance of worldvolume fields. However, our assumption that only the fully antisymmetric component of the reducible tensor $T_{ijkl}$ survives and is identical to the $G$-flux indicates that the 2-form $q^{ij}$ is decomposable, namely $q^{ij}=c\, q^{i}\wedge q^{j}$, where we choose a convenient scaling factor $c\in{\mathbb R}^\times$. What is more, the fact that we restrict the target space dimension to be four, and that the fluxes $\widetilde F$ and $\widetilde Q$ obey the relations \eqref{ftildef} and \eqref{qtildeq}, dictates that $q^i\wedge p_{ij}=\frac 1{\tilde{c}} \, p_j$, where $\tilde c\in{\mathbb R}^\times$. Thus we are not dealing with an arbitrary threebrane sigma-model of the type discussed in Section~\ref{sec4}, but rather with a special case of it dictated by the relation to the M-theory fluxes. In the quantum theory, this would mean projecting the domain of the path integral to worldvolume field configurations constrained in this way. Therefore, putting the above ingredients together, the sought-for threebrane sigma-model reads explicitly as
\begin{align}
S&=\int_{\S_4}\,\Big(F_i\wedge\mathrm{d} X^i-\tilde{c}\,q^{i}\wedge p_{ij}\wedge\mathrm{d} q^j-c\, q^{i}\wedge q^{j}\wedge\mathrm{d} p_{ij}+F_i\wedge q^{i}+\sfrac 12\, {\mit\Omega}^{ijk}\,F_i\wedge p_{jk} \nonumber\\
& \hspace{1cm} \qquad + \big(3\,c+\mbox{$\frac {\tilde c}{2}$}\big)\, F^{m}{}_{jk}\, q^i\wedge q^j\wedge q^k\wedge p_{im}+\sfrac c{2}\, G_{ijkl}\,q^{i}\wedge q^{j}\wedge q^{k}\wedge q^{l} \nonumber \\
& \hspace{1cm} \qquad + \big(\tilde c+2\, c\big)\, Q_l{}^{ijk}\,q^{l}\wedge q^{m}\wedge p_{mi}\wedge p_{jk}\,+\sfrac c{2}\, R^{jk,lm,i}\,q^{n}\wedge p_{ni}\wedge p_{jk}\wedge p_{lm}\Big)~, \label{qp3special}
\end{align}
up to boundary terms. This is indeed a sigma-model that contains all types of fluxes $G$, $F$, $Q$ and $R$. Due to the conditions we have imposed, it depends only on two first order fields $q^{i}$ and $p_{ij}$, which are both 1-forms on $\S_4$. One could think of them as the first order variables for spacetime coordinates $X^{i}$ and their dual wrapping coordinates $\widetilde{X}_{ij}$, although a truly $\widetilde{X}$-inclusive model requires extension of the base manifold $M$ as in exceptional field theory, which is not the case here. With our specific projections in the threebrane sigma-model, it is straightforward if a bit lengthy to check that the projected threebrane action \eqref{qp3special} is invariant under the corresponding restriction of the gauge transformations \eqref{gt1}--\eqref{gt4}. We now observe that the $G$-flux model may be obtained in an alternative way using \eqref{qp3special}. One simply takes ${\mit\Omega}=0$, i.e. the anchor to be the projection to the tangent bundle of $M$, and the only non-vanishing flux to be $G$.
The field equation for $F_i$ leads to $q^{i}=-\mathrm{d} X^{i}$ and the threebrane sigma-model reduces on-shell to \begin{equation} S_{G}[X]=\oint_{\partial\S_4}\, \sfrac 12\, g_{ij}\,\mathrm{d} X^{i}\wedge \ast\,\mathrm{d} X^{j}+\int_{\S_{4}}\, \sfrac 1{4!}\, G_{ijkl}\,\mathrm{d} X^{i}\wedge \mathrm{d} X^{j}\wedge \mathrm{d} X^{k}\wedge \mathrm{d} X^{l}~, \label{qp3specialG} \end{equation} where we consider the case that the threebrane has a non-empty closed M2-brane boundary and, using $\tilde c \, q^{i}\wedge p_{ij}=p_j$ and $c=\frac 1{12}$, we imposed the boundary condition \begin{equation} p_i= -\,{6\,\tilde c}\, g_{ij}\,\ast\mathrm{d} X^{j} \qquad \text{on}\quad \partial\S_4~. \end{equation} This is the same as the sigma-model with $G$-flux that we obtained in a different way in Section~\ref{sec43}. Here the symmetric term corresponds to the upper-left entry of the generalized metric \eqref{exgm} for~$C=0$. Alternatively, one may consider the completely dual situation, where the anchor maps to the tangent bundle only through the trivector $\mit\Omega$, i.e. taking $\rho^{i}{}_{j}=0$. For simplicity, let us suppose that ${\mit\Omega}$ is a constant non-degenerate trivector, and that the only non-vanishing (constant) flux is $R$. Then the field equation for $F_i$ is \begin{equation} \mathrm{d} X^{i}=-\sfrac 12\, {\mit\Omega}^{ijk}\,p_{jk}~. \end{equation} This implies ${\mit\Omega}^{ijk}\,\mathrm{d} p_{jk}=0$ for all $i$, and hence, by our assumption that ${\mit\Omega}^{ijk}$ is non-degenerate, that $\mathrm{d} p_{jk}=0$. This means that locally we can define functions $\widetilde{X}_{ij}$ on $\S_4$ through \begin{equation} p_{ij}=\mathrm{d} \widetilde{X}_{ij}~. \end{equation} This is not meant to be a wrapping coordinate, however the notation is indicative of what one would expect in the case of an extended base manifold. Then on-shell the threebrane sigma-model reduces to \bea S_{R}[\widetilde{X}]=\oint_{\partial\S_4}\, \sfrac 12\, g^{ijkl}\,\mathrm{d} \widetilde{X}_{ij}\wedge\ast\,\mathrm{d} \widetilde{X}_{kl}+\int _{\S_4}\, \sfrac 1{4!}\,R^{jk,lm,i}\,q^{n}\,\wedge\mathrm{d}\widetilde{X}_{ni}\wedge\mathrm{d}\widetilde{X}_{jk}\wedge\mathrm{d}\widetilde{X}_{lm}~, \end{eqnarray} where, taking into account $q^{ij}=\frac 1{12}\,q^{i}\wedge q^{j}$, we imposed the boundary condition \begin{equation} q^{ij}=\sfrac{1}{12\,\tilde{c}}\, g^{ijkl}\,\ast\mathrm{d}\widetilde{X}_{kl} \qquad \mbox{on} \quad \partial\Sigma_4~, \end{equation} with \begin{equation} g^{ijkl}=g^{ik}\,g^{jl}-g^{il}\,g^{jk}~. \end{equation} This is indeed the metric naturally appearing in M2-brane duality rotations \cite{Duff:1990hn}, and it also corresponds to the lower-right entry of the generalized metric \eqref{exgm}. An integrability problem in realizing the full $SL(5)$ U-duality group in the worldvolume theory for a closed M2-brane was identified in~\cite{Duff:2015jka}, based on previous work of~\cite{Duff:1990hn}. It would be interesting to see whether the simple constructions we presented here can be generalized to include the M2-brane wrapping modes and shed some light on the problem of defining a manifestly U-duality invariant sigma-model. \section{${SL(5)}$ Exceptional Field Theory Fluxes} \label{sec6} Going one step further, we would now like to determine in a systematic way the fluxes in $SL(5)$ exceptional field theory, as first discussed in \cite{Blair:2014zba}. In that case, the base manifold $M$ is extended to include coordinates conjugate to both the momentum and wrapping modes of closed M2-branes. 
In the present case the extended space ${\cal M}$ is $10$-dimensional with local coordinates \begin{equation} x^{I}=(x^{i}, \tilde{x}_{ij})~. \end{equation} Explicitly, $x^I=x^{\bar a\bar b}=-x^{\bar b\bar a}$ are coordinates in the antisymmetric representation $\boldsymbol{10}$ of $SL(5)$ with $\bar a,\bar b=1,2,3,4,5$. The spacetime coordinates are given by $x^{i5}=-x^{5i}=x^i$ with $i=1,2,3,4$ and the wrapping coordinates are given by dualization $\tilde x_{ij}=\frac12\,\epsilon_{ijkl}\, x^{kl}$ in four dimensions. A field in the antisymmetric representation of $SL(5)$ is a section of the generalized tangent bundle $E_2=TM\oplus\mbox{\footnotesize$\bigwedge$}^2\,T^*M$, so a local model for the extended space may be taken to be the total space ${\cal M}=\mbox{\footnotesize$\bigwedge$}^2\,T^*M$ of the bundle of 2-forms on $M$, with $x^i$ local coordinates on the base $M$ and $\tilde x_{ij}=-\tilde x_{ji}$ local fibre coordinates. Correspondingly, dual derivatives are defined in addition to the standard ones, \begin{equation} \partial_I=(\partial_i,\tilde{\partial}^{ij}) \ . \end{equation} It is instructive at this point to recall that the local symmetries of exceptional field theory are generated by a generalized Lie derivative \begin{equation} {\mathscr L}_{\xi}A^{I}=\xi^{J}\,\partial_{J}A^{I}-A^{J}\,\partial_{J}\xi^{I}+Y^{IJ}_{KL}\,A^{K}\,\partial_{J}\xi^{L}~, \end{equation} where $\xi^{I}$ is a gauge generator for generalized diffeomorphisms and $Y^{IJ}_{KL}$ is an invariant tensor of the U-duality group, here $SL(5)$. It reads as \begin{equation} Y^{IJ}_{KL}=\epsilon^{\bar aIJ}\,\epsilon_{\bar aKL}~. \end{equation} The role of this tensor becomes evident when closure of the algebra of gauge transformations is imposed. This happens when a section condition is satisfied, which reads as \begin{equation} \label{eq:sectioncondition} Y^{IJ}_{KL}\,\partial_I\otimes\partial_J=0~. \end{equation} This operator equation expresses the fact that when it acts on any field or product of fields the result should vanish. Restricting to fields satisfying this section condition, one finds \begin{equation} [{\mathscr L}_{\xi_1},{\mathscr L}_{\xi_2}]={\mathscr L}_{[\![ \xi_1,\xi_2]\!]}~, \end{equation} where the bracket appearing on the right-hand side is defined as \begin{equation} \label{higherC} [\![\xi_1,\xi_2]\!]=\sfrac 12\, ({\mathscr L}_{\xi_1}\xi_2-{\mathscr L}_{\xi_2}\xi_1)~. \end{equation} This bracket is the $SL(5)$ covariantization of the higher Courant bracket \eqref{courant}, in the same way that the $O(d,d)$ covariantization of the standard Courant bracket is the C-bracket of double field theory \cite{Hull:2009zb}. Indeed, when solving the section condition \eqref{eq:sectioncondition} by setting the dual derivatives $\tilde\partial^{ij}$ to zero, the bracket of \eqref{higherC} becomes precisely the higher Courant bracket \eqref{courant}, and moreover ${\mathscr L}_{\xi}\,\cdot\,=\xi\,\circ\,\cdot\,$ is given by the higher Dorfman bracket \eqref{dorfman}. This is exactly how this bracket was originally constructed in \cite{Berman:2011cg} and used in \cite{Bosque:2016fpi} to relate the geometric formulation of $SL(5)$ exceptional field theory to the embedding tensor formalism of seven-dimensional gauged supergravity, where the embedding tensor is related to the fluxes and in turn to the structure constants of the algebra of generalized diffeomorphisms. For our purposes, we employ the procedure given in \cite{Blumenhagen:2013hva} for double field theory fluxes. 
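Since the flux computation below relies on the section condition, it is useful to record its form in this coordinate split: schematically, up to normalisation, \eqref{eq:sectioncondition} decomposes as
\begin{equation}
\partial_j\otimes\tilde\partial^{ji}+\tilde\partial^{ji}\otimes\partial_j=0~,\qquad \epsilon_{ijkl}\,\tilde\partial^{ij}\otimes\tilde\partial^{kl}=0~,
\end{equation}
both of which are satisfied identically on fields that do not depend on the wrapping coordinates, i.e. on which $\tilde\partial^{ij}=0$.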
A representation of the algebra \eqref{agg1}--\eqref{agg3} can be given by the Lie bracket of certain vector fields which are sections of the tangent bundle $T{\cal M}$ of the extended manifold of exceptional field theory. We introduce two such fields in the holonomic basis, given by \bse\begin{align}\label{newa} D_i &= \partial_i+\sfrac 12\, C_{ijk}\,\tilde\partial^{jk}~,\\[4pt] \tilde D^{jk} &= \sfrac 12\,\tilde\partial^{jk}+\sfrac 12\,{\mit\Omega}^{jkl}\,D_l~, \end{align}\end{subequations} and calculate their Lie brackets. Here the components of the 3-form $C=\frac16\, C_{ijk}\, \mathrm{d} x^i\wedge\mathrm{d} x^j\wedge\mathrm{d} x^k$ and the 3-vector ${\mit\Omega}=\frac16\, {\mit\Omega}^{ijk}\,\partial_i\wedge\partial_j\wedge\partial_k$ can depend on both physical and wrapping coordinates. The result is \bse\begin{align}\label{newaa} [D_i,D_j]&=G_{ijkl}\,\tilde D^{kl}+F_{ij}{}^m\,D_m~,\\[4pt] {[}D_i,\tilde D^{jk}]&=\widetilde F_{ilm}{}^{jk}\,\tilde D^{lm}+Q_i{}^{jkm}\,D_m~,\\[4pt] {[} \tilde D^{ij},\tilde D^{kl}]&=R^{ij,kl,n}\,D_n+\widetilde Q_{mn}{}^{ij,kl}\,\tilde D^{mn}~, \end{align}\end{subequations} where we defined the exceptional field theory fluxes \bse\begin{align} G_{ijkl}&=4\,\partial_{[i}C_{jkl]}+2\,C_{mn[i}\,\tilde\partial^{mn}C_{jkl]}~,\\[4pt] F_{ij}{}^m&=-\sfrac 12\,G_{ijkl}\,{\mit\Omega}^{klm}+ \tilde{\partial}^{mk}C_{ijk}~,\\[4pt] \widetilde F_{ilm}{}^{jk}&=\sfrac 12\,G_{ilmn}\,{\mit\Omega}^{njk}-\sfrac 12\,\tilde\partial^{jk}C_{ilm}~,\\[4pt] Q_i{}^{jkm}&=\sfrac 12\,\big(\partial_i{\mit\Omega}^{jkm}+\sfrac12\,C_{iln}\,\tilde\partial^{ln}{\mit\Omega}^{jkm}+\sfrac 12\, {\mit\Omega}^{lnm}\,\tilde\partial^{jk}C_{iln}+{\mit\Omega}^{ljk}\,\tilde\partial^{mn}C_{iln}\nonumber\\ &\quad -\sfrac 12\,{\mit\Omega}^{jkn}\,G_{inps}\,{\mit\Omega}^{psm}\big)~,\\[4pt] \widetilde Q_{mn}{}^{ij,kl}&=\sfrac 14\,\big({\mit\Omega}^{ijp}\,G_{pp'mn}\,{\mit\Omega}^{p'kl}+{\mit\Omega}^{klr}\,\tilde\partial^{ij}C_{rmn}-{\mit\Omega}^{ijr}\,\tilde\partial^{kl}C_{rmn}\big)\nonumber\\ &\quad -\sfrac 14\,\big(\delta^l_{[m}\,\partial_{n]}{\mit\Omega}^{ijk}- \delta^k_{[m}\,\partial_{n]}{\mit\Omega}^{ijl}- \delta^j_{[m}\,\partial_{n]}{\mit\Omega}^{ikl}+ \delta^i_{[m}\,\partial_{n]}{\mit\Omega}^{jkl}\big)\\ &\quad -\sfrac 18\,\big(\delta^l_{[m}\,C_{n]sp}\,\tilde\partial^{sp}{\mit\Omega}^{ijk}- \delta^k_{[m}\,C_{n]sp}\,\tilde\partial^{sp}{\mit\Omega}^{ijl}- \delta^j_{[m}\,C_{n]sp}\,\tilde\partial^{sp}{\mit\Omega}^{ikl}+ \delta^i_{[m}\,C_{n]sp}\,\tilde\partial^{sp}{\mit\Omega}^{jkl}\big)~, \nonumber \\[4pt] R^{ij,kl,n}&=\sfrac 12\, \hat\partial^{i[j}{\mit\Omega}^{kln]}-\sfrac 12\, \hat\partial^{j[i}{\mit\Omega}^{kln]}-\sfrac 12\,\hat\partial^{k[l}{\mit\Omega}^{ijn]}+\sfrac 12\,\hat\partial^{l[k}{\mit\Omega}^{ijn]}-\sfrac 18\,{\mit\Omega}^{ijm}\,{\mit\Omega}^{klp}\,{\mit\Omega}^{rsn}\,G_{mprs}\nonumber\\ &\quad +\sfrac 14\, C_{mpr}\,\big({\mit\Omega}^{mi[j}\,\tilde\partial^{\underline{p}\underline{r}}{\mit\Omega}^{kln]}- {\mit\Omega}^{mj[i}\,\tilde\partial^{\underline{p}\underline{r}}{\mit\Omega}^{kln]}- {\mit\Omega}^{mk[l}\,\tilde\partial^{\underline{p}\underline{r}}{\mit\Omega}^{ijn]}+ {\mit\Omega}^{ml[k}\,\tilde\partial^{\underline{p}\underline{r}}{\mit\Omega}^{ijn]}\big)\nonumber\\ &\quad +\sfrac 18\,\big({\mit\Omega}^{stn}\,{\mit\Omega}^{{i}j{r}}\,\tilde\partial^{kl}-{\mit\Omega}^{stn}\,{\mit\Omega}^{{k}lr}\,\tilde\partial^{ij}+2{\mit\Omega}^{ijt}\,{\mit\Omega}^{klr}\,\tilde\partial^{ns}\big)C_{rst}~, \end{align}\end{subequations} and $ 
\hat\partial^{ij}=\tilde\partial^{ij}+{\mit\Omega}^{ijk}\,\partial_k$. These expressions rely on the section condition, as also happens in the case of double field theory. As in the case of exceptional generalized geometry, these expressions are valid for any $d$, in which case the dimension of the extended space is $d+\frac{d\,(d-1)}{2}$, however they simplify for the physically relevant case of $d=4$, where $\widetilde F$ and $\widetilde Q$ can be related to $F$ and $Q$ respectively. To write these expressions in a non-holonomic frame, we introduce a vielbein $e_{a}=e_{a}{}^{i}\,\partial_i$ whose components $e_a{}^i$ can depend on both physical and wrapping coordinates, together with the dual vector fields $\tilde e^{ab}=e^{[a}{}_i\,e^{b]}{}_j\,\tilde\partial^{ij}$. Then the fluxes acquire additional terms and in the four-dimensional case they read as \bse \begin{align} G_{abcd}&=4\,\nabla_{[a}C_{bcd]}+2\,C_{ef[a}\,\widetilde{\nabla}^{ef}C_{bcd]}~, \\[4pt] F_{ab}{}^{c}&= f_{ab}{}^{c}+C_{de[a}\,{\mit{\tilde\G}}^{de}{}_{b]}{}^{c}-\sfrac 12\, {\mit\Omega}^{dec}\,G_{abde}+ \widetilde{\nabla}^{cd}C_{dab}\ ,\\[4pt] Q_{a}{}^{bcd}&=\sfrac 12\, \partial_a{\mit\Omega}^{bcd}-\sfrac 32\, {\mit{\tilde\Gamma}}^{[bc}{}_{a}{}^{d]}+\sfrac 32\, {\mit\Omega}^{e[bc}\,f_{ae}{}^{d]} +\sfrac 14\, C_{aef}\,\widetilde{\nabla}^{ef}{\mit\Omega}^{bcd}-\sfrac 34\,C_{efg}\,{\mit{\tilde\Gamma}}^{ef}{}_{a}{}^{[d}\,{\mit\Omega}^{bc]g}\nonumber\\ &\quad + \sfrac 12\, {\mit{\tilde\Gamma}}^{de}{}_{e}{}^{[c}\,\delta^{b]}_a+\sfrac 14\,{\mit\Omega}^{def}\,C_{fgh}\,{\mit{\tilde\Gamma}}^{gh}{}_{e}{}^{[c}\,\delta^{b]}_a -\sfrac 14\, {\mit\Omega}^{def}\,f_{ef}{}^{[c}\,\delta^{b]}_a\nonumber\\ &\quad +\sfrac 14\, {\mit\Omega}^{efd}\,\widetilde{\nabla}^{bc}C_{aef}\,+\sfrac 12\, {\mit\Omega}^{bce}\,\widetilde{\nabla}^{df}C_{aef} -\sfrac 14\, {\mit\Omega}^{e[bc}\,{\mit\Omega}^{d]fg}\,G_{aefg}~, \\[4pt] R^{ab,cd,e}&=\sfrac 12\,\widehat{{\nabla}}{}^{a[b}\,{\mit\Omega}^{cde]}-\sfrac 12\,\widehat{{\nabla}}{}^{b[a}\,{\mit\Omega}^{cde]}-\sfrac 12\,\widehat{\nabla}{}^{c[d}\,{\mit\Omega}^{abe]}+\sfrac 12\,\widehat{\nabla}{}^{d[c}\,{\mit\Omega}^{abe]} \nonumber \\ &\quad+\sfrac 14\, C_{fgh}\,\big({\mit\Omega}^{fa[b}\,\widetilde{\nabla}^{\underline{g}\underline{h}}{\mit\Omega}^{cde]}-{\mit\Omega}^{fb[a}\,\widetilde{\nabla}^{\underline{g}\underline{h}}{\mit\Omega}^{cde]}- {\mit\Omega}^{fc[d}\,\widetilde{\nabla}^{\underline{g}\underline{h}}{\mit\Omega}^{abe]}+{\mit\Omega}^{fd[c}\,\widetilde{\nabla}^{\underline{g}\underline{h}}{\mit\Omega}^{abe]}\big)\nonumber\\ &\quad +\sfrac 18\,\big({\mit\Omega}^{fge}\,{\mit\Omega}^{{a}bh}\,\widetilde{\nabla}^{cd} -{\mit\Omega}^{fge}\,{\mit\Omega}^{{c}d{h}}\,\widetilde{\nabla}^{ab}+2{\mit\Omega}^{abg}\,{\mit\Omega}^{cdh}\,\widetilde{\nabla}^{ef}\big)C_{hfg}~, \end{align}\end{subequations} where we defined the dual connection \begin{equation} {\mit{\tilde\Gamma}}^{ab}{}_{c}{}^{d}=e^{d}{}_{k}\,e^{[a}{}_i\,e^{b]}{}_j\,\tilde\partial^{ij}e^{k}{}_{c}~, \end{equation} and \bse\begin{align} \widetilde{\nabla}^{ab}C_{cde}&=\tilde{\partial}^{ab}C_{cde}-{\mit{\tilde\Gamma}}^{ab}{}_{c}{}^{f}\,C_{fde} -{\mit{\tilde\Gamma}}^{ab}{}_{d}{}^{f}\,C_{cfe}-{\mit{\tilde\Gamma}}^{ab}{}_{e}{}^{f}\,C_{cdf}~, \\[4pt] \widehat{\nabla}^{ab}&=\widetilde{\nabla}^{ab}+{\mit\Omega}^{abc}\,\nabla_c~, \end{align}\end{subequations} are the dual covariant derivatives. 
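As a simple consistency check of these expressions, consider a purely geometric background with ${\mit\Omega}=0$, a holonomic vielbein $e_a{}^i=\delta_a^i$ (so that $f_{ab}{}^c=0$ and ${\mit{\tilde\Gamma}}^{ab}{}_c{}^d=0$) and fields independent of the wrapping coordinates, so that all dual derivatives drop out. Then every flux above vanishes except the 4-form flux,
\begin{equation}
G_{ijkl}=4\,\partial_{[i}C_{jkl]}~,
\end{equation}
i.e. one recovers just the field strength $G=\mathrm{d} C$ of the 3-form, as expected for a background carrying only a $C$-field.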
Finally, let us compare our results with the $SL(5)$ fluxes described in \cite{Blair:2014zba,Lust:2017bwq}, where a group theoretical derivation in terms of $SL(5)$ representation theory was used. Based on the embedding tensor formalism for gaugings of seven-dimensional maximal supergravity~\cite{Samtleben:2005bp}, the relevant representations of $SL(5)$ are $\mathbf{\overline{15}}\oplus\mathbf{\overline{40}}\oplus\mathbf{\overline{10}}$, and therefore the fluxes should exhaust these representations. This may be confirmed by using their branching decompositions under the embedding $SL(5)\supset SL(4)\times{\mathbb R}^+$, which read as{\footnote{We refrain from presenting the additional ${\mathbb R}^{+}$ charges here.}}
\bse
\begin{align}
\mathbf{\overline{10}}\,\big|_{SL(4)}&=\mathbf{\overline{4}}\oplus \mathbf{6}~,\\[4pt]
\mathbf{\overline{15}}\,\big|_{SL(4)}&=\mathbf{\overline{10}}\oplus \mathbf{\overline{4}}\oplus \mathbf{1}~, \\[4pt]
\mathbf{\overline{40}}\,\big|_{SL(4)}&=\mathbf{\overline{20}}\oplus\mathbf{10}\oplus \mathbf{6}\oplus \mathbf{\overline{4}}~.
\end{align}\end{subequations}
The available fluxes that should be matched with these representations are the 4-form $G$-flux $G_{abcd}$, the geometric torsion flux $f_{ab}{}^{c}$, the $Q$-flux $Q_{a}{}^{bcd}$ and the $\cal R$-flux ${\cal R}^{a,bcde}$. Clearly, the only singlet in these decompositions corresponds to the 4-form $G$-flux. Moreover, the $\cal R$-flux, as a mixed-symmetry $(1,4)$ tensor, lives in one of the three $\mathbf{\overline{4}}$ representations of $SL(4)$---the one in the decomposition of $\mathbf{\overline{40}}$. The torsion flux $f_{ab}{}^{c}$ has $24$ components and lives in the representations $\mathbf{\overline{20}}\oplus\mathbf{\overline{4}}$, corresponding to its trace and traceless parts. The flux $Q_{a}{}^{bcd}$, antisymmetric in its upper three indices, has $16$ components, corresponding to the representations $\mathbf{\overline{10}}\oplus\mathbf{6}$, again containing a trace and a traceless part. What remains is the trace part of the dual torsion flux ${\mit{\tilde\G}}^{ab}{}_{b}{}^{c}$, which contains a symmetric and an antisymmetric part with a total of $16$ components living in the representations $\mathbf{10}\oplus\mathbf{6}$. The last $\mathbf{\overline{4}}$ representation corresponds to the determinant of the seven-dimensional metric, and this exhausts all representations of $SL(4)$ appearing above. Therefore, the geometric approach based on the higher Courant bracket reproduces all the fluxes obtained using the group theoretical approach.
\section{Conclusions and Outlook} \label{sec7}
Motivated by the relation between T-duality and non-geometric fluxes in closed string theory, the properties of the Courant bracket in generalized geometry, and the gauge structure of AKSZ-type topological membrane sigma-models, in this paper we have investigated whether such relations extend to the case of U-duality and fluxes in M-theory, the higher Courant bracket in exceptional generalized geometry, and AKSZ-type topological threebrane sigma-models.
We established that upon a certain projection to $SL(5)$ tensors, the local coordinate form of the axioms for a specific Lie algebroid up to homotopy based on the extended bundle $TM\oplus \mbox{\footnotesize$\bigwedge$}^{2}\,T^{\ast}M$ coincides with the general expressions for geometric and non-geometric fluxes in $(7+4)$-dimensional compactifications of M-theory, together with their Bianchi identities. The same expressions are also interpreted as the conditions for gauge invariance and closure of gauge transformations for a topological threebrane sigma-model, where the fluxes appear as generalized Wess-Zumino terms. It would be interesting to understand better the geometric features of the algebroid structure defined by our $SL(5)$ projection of the Lie algebroid up to homotopy on $TM\oplus \mbox{\footnotesize$\bigwedge$}^{2}\,T^{\ast}M$. Given that the higher Courant algebroid structure on this extended bundle has a well-known realization as an $L_\infty$-algebra, see for example~\cite{Zambon,Bi,Baraglia:2011dg}, it is an interesting problem to see how our specific Lie algebroid up to homotopy fits into recent discussions of the $L_\infty$-algebra structure of gauge symmetries underlying the tensor hierarchy in exceptional field theory, see for instance~\cite{Cederwall:2018aab,Arvanitakis,Cagnacci:2018buk}. This observation can serve as a first step toward a new geometric understanding of the worldvolume approach to M-theory. This requires extension of the analysis presented here in several directions. In this paper we focused on the case of the exceptional group $SL(5)$, which is the (continuous) U-duality group for M-theory compactified on a four-dimensional torus. Obviously, a more complete treatment would require studying the fluxes for a lower (higher) number of external (internal) dimensions, where the U-duality group is larger. A standard complication in that case is that M5-brane charges emerge, and the extended bundles are larger and not of the type $TM\oplus \mbox{\footnotesize$\bigwedge$}^p\,T^{\ast}M$, while the corresponding topological sigma-model that encodes the fluxes as Wess-Zumino terms is expected to be more complicated as U-duality now exchanges charged objects of different dimensionalities. Some discussions of these issues can be found in~\cite{Kokenyesi:2018ynq,Arvanitakis,Berman:2018okd}. Another open problem is to construct a threebrane sigma-model with an extended base space, similarly to double field theory where the coordinates are doubled, whose generalized Wess-Zumino terms would accommodate the exceptional field theory fluxes, derived for the U-duality group $SL(5)$ in Section~\ref{sec6}. In the $SL(5)$ case this would require a ten-dimensional extended space, and even higher-dimensional for larger U-duality groups. Target space exceptional field theories with manifest U-duality invariance were constructed in recent years in~\cite{eft0,eft1,eft2,eft3,eft4}; however, the corresponding worldvolume problem remains open \cite{Duff:2015jka}. Applying the strategy developed in \cite{Chatzistavrakidis:2018ztm} for the construction of T-duality invariant membrane sigma-models could be helpful in that case, although its precise implementation is not straightforward. In addition, one should then deal with the section condition and understand its geometric origin in the context of weaker algebroid structures, similar to the (pre-)DFT algebroids defined in~\cite{Chatzistavrakidis:2018ztm} and formulated in~\cite{KokenyesiDFT} in terms of an AKSZ-type construction.
Finally, another issue that we did not discuss in the present paper is the nonassociativity of the M2-brane phase space, which is deformed by the presence of the locally non-geometric M-theory $R$-flux~\cite{Gunaydin:2016axc,Kupriyanov:2017oob}. This problem requires an extension of the base manifold $M$ and falls along the same lines as the discussion above. In other words, one would have to include the wrapping coordinates of closed M2-branes in order to see how nonassociativity manifests itself in the worldvolume approach. Similarly, it would be interesting to find a geometric interpretation for the fact that the phase space of M-theory, when $R$-flux is turned on, is dimensionally reduced from eight to seven dimensions due to the absence of momentum modes along the M-theory circle~\cite{Gunaydin:2016axc,Kupriyanov:2017oob,Lust:2017bgx,Lust:2017bwq}. \paragraph{Acknowledgements.} We thank Ralph Blumenhagen, Mark Bugden, Fech Scen Khoo, Zolt\'an K\"ok\'enyesi, Emanuel Malek, Erik Plauschinn, Felix Rudolph and Marc Syv\"ari for helpful discussions. We acknowledge support by COST (European Cooperation in Science and Technology) in the framework of the Action MP1405 QSPACE. The work of A.Ch. is supported by the Croatian Science Foundation Project ``New Geometries for Gravity and Spacetime'' (IP-2018-01-7615). A.Ch. and L.J. are partially supported by the European Union through the European Regional Development Fund -- The Competitiveness and Cohesion Operational Programme (KK.01.1.1.06) and by the H2020 CSA Twinning Project No. 692194 ``RBI-T-WINNING''. The work of D.L. was partially supported by the ERC Advanced Grant ``Strings and Gravity'' (Grant No.~320045). The work of R.J.S. was supported by the Consolidated Grant ST/P000363/1 from the UK Science and Technology Facilities Council.
\section{Introduction} Higher Whitehead products are important invariants of unstable homotopy type. They have been studied since the 1960s in the works of homotopy theorists such as Hardie~\cite{hard61}, Porter~\cite{port65} and Williams~\cite{will72}. The appearance of moment-angle complexes and, more generally, polyhedral products in toric topology at the end of the 1990s brought a completely new perspective on higher homotopy invariants such as higher Whitehead products. The homotopy fibration of polyhedral products \begin{equation}\label{zkfib} (D^2,S^1)^\mathcal K\to(\mathbb C P^\infty)^\mathcal K\to (\mathbb C P^\infty)^m \end{equation} was used as the universal model for studying iterated higher Whitehead products in~\cite{pa-ra08}. Here $(D^2,S^1)^\mathcal K=\zk$ is the moment-angle complex, and $(\mathbb C P^\infty)^\mathcal K$ is homotopy equivalent to the Davis--Januszkiewicz space~\cite{bu-pa00,bu-pa15}. The form of nested brackets in an iterated higher Whitehead product is reflected in the combinatorics of the simplicial complex~$\mathcal K$. There are two classes of simplicial complexes $\mathcal K$ for which the moment-angle complex is particularly nice. From the geometric point of view, it is interesting to consider complexes $\mathcal K$ for which $\zk$ is a manifold. This happens, for example, when $\mathcal K$ is a simplicial subdivision of a sphere or the boundary of a polytope. The resulting moment-angle manifolds~$\zk$ often have remarkable geometric properties~\cite{pano13}. On the other hand, from the homotopy-theoretic point of view, it is important to identify the class of simplicial complexes $\mathcal K$ for which the moment-angle complex $\zk$ is homotopy equivalent to a wedge of spheres. We denote this class by~$B_\Delta$. The spheres in the wedge are usually expressed in terms of iterated higher Whitehead products of the canonical $2$-spheres in the polyhedral product~$(\mathbb C P^\infty)^\mathcal K$. We denote by $W_\Delta$ the subclass in $B_\Delta$ consisting of those $\mathcal K$ for which $\zk$ is a wedge of iterated higher Whitehead products. The question of describing the class $W_\Delta$ was studied in~\cite{pa-ra08} and formulated explicitly in~\cite[Problem~8.4.5]{bu-pa15}. It follows from the results of~\cite{pa-ra08} and~\cite{g-p-t-w16} that $W_\Delta=B_\Delta$ if we restrict attention to \emph{flag} simplicial complexes only, and a flag complex $\mathcal K$ belongs to~$W_\Delta$ if and only if its one-skeleton is a chordal graph. Furthermore, it is known that $W_\Delta$ contains directed $MF$-complexes~\cite{gr-th16}, shifted and totally fillable complexes~\cite{ir-ki1,ir-ki2}. On the other hand, it has recently been shown in~\cite{abra} that the class $W_\Delta$ is \emph{strictly} contained in~$B_\Delta$. There is also a related question of \emph{realisability} of an iterated higher Whitehead product $w$ with a given form of nested brackets: we say that a simplicial complex $\mathcal K$ \textit{realises} an iterated higher Whitehead product $w$ if $w$ is a nontrivial element of $\pi_*(\zk)$ (see \hr[Definition]{wdelta}). For example, the boundary of a simplex $\mathcal K=\partial\Delta(1,\ldots,m)$ realises a single (non-iterated) higher Whitehead product $[\mu_1,\ldots,\mu_m]$, which maps $\zk=S^{2m-1}$ into the fat wedge $(\mathbb{C} P^\infty)^\mathcal K$. We suggest two approaches to the questions above.
The first approach is combinatorial: using the operation of substitution of simplicial complexes (\hr[Section]{subst}), for any iterated higher Whitehead product~$w$ we describe a simplicial complex $\partial\Delta_w$ that realises~$w$ (\hr[Theorem]{realisation}). Furthermore, for a particular form of brackets inside~$w$, we prove in \hr[Theorem]{smallest_realisation}~(a) that $\partial\Delta_w$ is the smallest complex that realises~$w$. We also give a combinatorial criterion for the nontriviality of the product~$w$ (\hr[Theorem]{smallest_realisation}~(b)). In the proof of nontriviality we use the Hurewicz image of~$w$ in the cellular chains of~$\zk$ and the description of the cohomology product of~$\zk$ from~\cite{bu-pa00}. \hr[Theorems]{realisation},\hr{smallest_realisation} and further examples not included in this paper lead us to conjecture that $\partial\Delta_w$ is the smallest complex realising~$w$, for any iterated higher Whitehead product (see \hr[Problem]{proble}). The second approach is algebraic: we use the coalgebraic versions of the Koszul complex and the Taylor resolution of the face coalgebra of~$\mathcal K$ to describe the canonical cycles corresponding to iterated higher Whitehead products~$w$. This gives another criterion for realisability of $w$ in \hr[Theorem]{tarep}. \section{Preliminaries} \label{prelimi} A \textit{simplicial complex} $\mathcal K$ on the set $[m] = \{1, 2, \ldots, m\}$ is a collection of subsets $I \subset [m]$ closed under taking any subsets. We refer to $I\in\mathcal K$ as a \textit{simplex} or a \textit{face} of~$\mathcal K$, and always assume that $\mathcal K$ contains $\varnothing$ and all singletons $\{i\}$, $i=1,\ldots,m$. We do not distinguish between $\mathcal K$ and its geometric realisation when referring to the homotopy or topological type of~$\mathcal K$. We denote by $\Delta^{m-1}$ or $\Delta(1, \ldots, m)$ the full simplex on the set $[m]$. Similarly, denote by $\Delta(I)$ a simplex with the vertex set $I\subset[m]$ and denote its boundary by $\partial\Delta(I)$. A \emph{missing face}, or a \emph{minimal non-face} of~$\mathcal K$ is a subset $I\subset[m]$ such that $I\notin\mathcal K$, but $\partial\Delta (I)\subset \mathcal K$. Assume we are given a set of $m$ pairs of based cell complexes \[ (\underline X, \underline A) = \{(X_1, A_1),\dots, (X_m, A_m)\} \] where $A_i\subset X_i$. For each simplex $I\in\mathcal K$ we set \[ (\underline X, \underline A)^I = \{(x_1, \ldots, x_m)\in X_1\times\cdots\times X_m\;|\; x_j \in A_j \text{ for $j\notin I$}\}. \] The \textit{polyhedral product} of $(\underline X, \underline A)$ corresponding to $\mathcal K$ is the following subset of $X_1\times\dots\times X_m$: \[ (\underline X, \underline A)^{\mathcal K} = \bigcup\limits_{I\in \mathcal K} (\underline X, \underline A)^I\qquad (\subset X_1\times\cdots\times X_m). \] In the case when $(X_i, A_i)=(D^2, S^1)$ for each $i$, we use the notation $\zk$ for $(D^2,S^1)^{\mathcal K}$, and refer to $\zk = (D^2, S^1)^{\mathcal K}$ as the \textit{moment-angle complex}. Also, if $(X_i, A_i)=(X,\mathit{pt})$ for each $i$, where $\mathit{pt}$ denotes the basepoint, we use the abbreviated notation $X^{\mathcal K}$ for $(X,\mathit{pt})^{\mathcal K}$. \begin{theorem}[{\cite[Theorem~4.3.2]{bu-pa15}}]\label{cofib} The moment-angle complex $\zk$ is the homotopy fibre of the canonical inclusion $(\C P^{\infty})^{\sk} \hookrightarrow (\mathbb{C} P^{\infty})^m$. 
\end{theorem} There is also the following more explicit description of the fibre inclusion $\mathcal Z_{\mathcal K} \to (\C P^{\infty})^{\sk}$ in~\eqref{zkfib}. Consider the map of pairs $(D^2, S^1) \to (\mathbb{C} P^{\infty}, pt)$ sending the interior of the disc homeomorphically onto the complement of the basepoint in $\mathbb{C} P^1$. By the functoriality, we have the induced map of the polyhedral products $\mathcal{Z_K} = (D^2, S^1)^{\mathcal K} \to (\C P^{\infty})^{\sk}$. The general definition of higher Whitehead products can be found in~\cite{hard61}. We only describe Whitehead products in the space $(\C P^{\infty})^{\sk}$ and their lifts to $\zk$. In this case the indeterminacy of higher Whitehead products can be controlled effectively because extension maps can be chosen canonically. Consider the \emph{$i$th coordinate map} \[ \mu_i\colon(D^2, S^1) \to S^2 \cong \mathbb{C} P^1 \hookrightarrow (\mathbb{C} P^{\infty})^{\vee m} {}\hookrightarrow (\C P^{\infty})^{\sk}. \] Here the second map is the canonical inclusion of $\mathbb{C} P^1$ into the $i$-th summand of the wedge. The third map is induced by the embedding of $m$ disjoint points into $\mathcal K$. The \emph{Whitehead product} (or \emph{Whitehead bracket}) $[\mu_i, \mu_j]$ of $\mu_i$ and $\mu_j$ is the homotopy class of the map \[ S^3 \cong \partial D^4 \cong \partial (D^2\times D^2) \cong (D^2\times S^1)\cup (S^1\times D^2) \xrightarrow{[\mu_i,\mu_j]} (\C P^{\infty})^{\sk} \] where \[ [\mu_i, \mu_j](x, y) = \begin{cases} \mu_i(x) \quad \text{for $(x,y)\in D^2\times S^1$};\\ \mu_j(y) \quad \text{for $(x,y)\in S^1\times D^2$}. \end{cases} \] Every Whitehead product $[\mu_i,\mu_j]$ becomes trivial after composing with the embedding $(\C P^{\infty})^{\sk} \hookrightarrow (\mathbb{C} P^{\infty})^m \simeq K(\mathbb{Z}^m, 2)$. This implies that $[\mu_i, \mu_j]\colon S^3 \to (\mathbb{C} P^{\infty})^{\mathcal K}$ lifts to the fibre $\zk$, as shown next: \begin{center} \begin{tikzcd} \zk\arrow{r} & (\C P^{\infty})^{\sk} \arrow{r} & (\mathbb{C} P^{\infty})^m\\ &S^3\arrow[u,"{[\mu_i,\mu_j]}"']\arrow[dashed]{ul} \end{tikzcd} \end{center} We use the same notation $[\mu_i,\mu_j]$ for a lifted map $S^3\to\zk$. Such a lift can be chosen canonically as the inclusion of a subcomplex \[ [\mu_i, \mu_j]\colon S^3 \cong(D^2\times S^1)\cup(S^1\times D^2) \hookrightarrow \zk. \] The Whitehead product $[\mu_i, \mu_j]$ is trivial if and only if the map $[\mu_i, \mu_j]\colon S^3 \to \zk$ can be extended to a map $D^4 \cong D^2_i\times D^2_j \hookrightarrow\zk$. This is equivalent to the condition that $\Delta(i,j)=\{i,j\}$ is a $1$-simplex of~$\mathcal K$. \emph{Higher Whitehead products} are defined inductively as follows. Let $\mu_{i_1},\dots,\mu_{i_n}$ be a collection of maps such that the $(n-1)$-fold product \[ [\mu_{i_1},\dots,\widehat{\mu_{i_k}},\dots, \mu_{i_n}]\colon S^{2(n-1)-1}\to(\mathbb{C} P^{\infty})^{\mathcal K} \] is trivial for any $k$. Then there exists a \emph{canonical} extension $\overline{[\mu_{i_1},\dots,\widehat{\mu_{i_k}},\dots, \mu_{i_n}]}$ to a map from $D^{2(n-1)}$ given by the composite \[ \overline{[\mu_{i_1},\ldots, \widehat{\mu_{i_k}},\ldots, \mu_{i_n}]}\colon D^2_{i_1}\times\cdots\times D^2_{i_{k-1}}\times D^2_{i_{k+1}}\times\cdots\times D^2_{i_n}\hookrightarrow \zk \to (\mathbb{C} P^{\infty})^{\mathcal K}. \] Furthermore, all these extensions are compatible on the subproducts corresponding to the vanishing brackets of shorter length. 
The \emph{$n$-fold product} $[\mu_{i_1},\ldots, \mu_{i_n}]$ is defined as the homotopy class of the map \[ S^{2n-1} \cong \partial(D^2_{i_1}\times\cdots\times D^2_{i_n})\cong \bigcup\limits_{k=1}^n(D^2_{i_1}\times\cdots\times S^1_{i_k}\times\cdots \times D^2_{i_n})\xrightarrow{[\mu_{i_1},\ldots, \mu_{i_n}]} (\C P^{\infty})^{\sk} \] which is given by \[ [\mu_{i_1},\ldots, \mu_{i_n}](x_1,\ldots, x_n) = \overline{[\mu_{i_1},\ldots,\widehat\mu_{i_k},\ldots, \mu_{i_n}]} (x_1,\ldots,\widehat x_k,\ldots, x_n)\quad\text{if }\;x_k\in S^1_{i_k}. \] In~\hr[Proposition]{singlent} below we show that $[\mu_{i_1},\ldots,\mu_{i_p}]$ is defined in $\pi_{2p-1}((\C P^{\infty})^{\sk})$ if and only if $\partial \Delta(i_1,\ldots,i_p)$ is a subcomplex of~$\mathcal K$, and $[\mu_{i_1},\ldots,\mu_{i_p}]$ is trivial if and only if $\Delta(i_1,\ldots,i_p)$ is a simplex of~$\mathcal K$. Alongside with higher Whitehead products of canonical coordinate maps $\mu_i$ we consider \emph{general iterated} higher Whitehead products, i.\,e. higher Whitehead products in which arguments can be higher Whitehead products. For example, \[ \Bigl[\mu_1,\mu_2,[\mu_3,\mu_4,\mu_5], \bigl[\mu_6,\mu_{13},[\mu_7,\mu_8,\mu_9],\mu_{10}\bigr],[\mu_{11},\mu_{12}]\Bigr]. \] Among general iterated higher Whitehead products we distinguish \emph{nested} products, which have the form \[ w = \Big[\big[\dots\big[[\mu_{i_{11}},\ldots, \mu_{i_{1p_1}}], \mu_{i_{21}},\dots, \mu_{i_{2p_2}}\big], \dots\big], \mu_{i_{n1}},\dots, \mu_{i_{np_n}}\Big] \colon S^{d(w)}\to(\C P^{\infty})^{\sk}. \] Here $d(w)$ denotes the dimension of $w$. Sometimes we refer to $[\mu_{i_1}, \dots, \mu_{i_p}]$ as a \emph{single} (noniterated) higher Whitehead product. As in the case of ordinary Whitehead products any iterated higher Whitehead product lifts to a map $S^{d(w)} \to\zk$ for dimensional reasons. \begin{definition}\label{wdelta} We say that a simplicial complex $\mathcal K$ \textit{realises} a higher iterated Whitehead product $w$ if $w$ is a nontrivial element of $\pi_*(\zk)$. \end{definition} \begin{example}\label{singlereal} The complex $\partial \Delta(i_1,\ldots,i_p)$ realises the single higher Whitehead product $[\mu_{i_1},\ldots,\mu_{i_p}]$. \end{example} \begin{construction}[cell decomposition of~$\zk$]\label{celld} Following~\cite[\S4.4]{bu-pa15}, we decompose the disc $D^2$ into 3 cells: the point $1\in D^2$ is the 0-cell; the complement to $1$ in the boundary circle is the 1-cell, which we denote by~$S$; and the interior of $D^2$ is the 2-cell, which we denote by~$D$. These cells are canonically oriented as subsets of $\mathbb{R}^2$. By taking products we obtain a cellular decomposition of~$(D^2)^m$, in which cells are encoded by pairs of subsets $J,I\subset [m]$ with $J\cap I=\varnothing$: the set $J$ encodes the $S$-cells in the product and $I$ encodes the $D$-cells. We denote the cell of $(D^2)^m$ corresponding to a pair $J,I$ by $\varkappa(J,I)$: \begin{align*} \varkappa(J, I)& =\prod_{i\in I}D_i\times\prod_{j\in J}S_j\\ & = \big\{(x_1, \ldots, x_m) \in (D^2)^m\;\big|\\ &\qquad\qquad\qquad\text{$x_i \in D$ for $i\in I$, $x_j \in S$ for $j \in J$ and $x_l = 1$ for $l\notin J\cup I$} \big\}. \end{align*} Then $\zk$ is a cellular subcomplex in~$(D^2)^m$; we have $\varkappa(J,I)\subset\zk$ whenever $I\in\mathcal K$. Given a subset $J\subset [m]$, we denote by $\mathcal K_J$ the \emph{full subcomplex} of~$\mathcal K$ on~$J$, that is, \[ \mathcal K_J = \{I\in \mathcal K|\, I\subset J\}. 
\] Let $C_{p-1}(\mathcal K_J)$ denote the group of $(p-1)$-dimensional simplicial chains of~$\mathcal K_J$; its basis consists of simplices $L\in\mathcal K_J$, $|L|=p$. We also denote by $\mathcal C_q(\zk)$ the group of $q$-dimensional cellular chains of~$\zk$ with respect to the cell decomposition described above. \end{construction} \begin{theorem}[see~{\cite[Theorems~4.5.7, 4.5.8]{bu-pa15}}]\label{hochster} The homomorphisms \[ C_{p-1}(\mathcal K_J) \longrightarrow \mathcal C_{p+|J|}(\mathcal{Z_K}),\quad {L\mapsto \sign(L,J)\varkappa(J\backslash L, L)} \] induce injective homomorphisms \begin{align*} \widetilde H_{p-1}(\mathcal K_J) \hookrightarrow H_{p+|J|}(\mathcal{Z_K}), \end{align*} which are functorial with respect to simplicial inclusions. Here $L\in \mathcal K_J$ is a simplex, and $\sign(L,J)$ is the sign of the shuffle $(L,J)$. The inclusions above induce an isomorphism of abelian groups \[ \bigoplus\limits_{J\subset [m]}\widetilde H_*(\mathcal K_J) \stackrel\cong\longrightarrow H_*(\mathcal{Z_K}). \] The cohomology versions of these isomorphisms combine to form a ring isomorphism \[ \bigoplus\limits_{J\subset [m]}\widetilde H^*(\mathcal K_J) \stackrel\cong\longrightarrow H^*(\mathcal{Z_K}), \] where the ring structure on the left hand side is given by the maps \[ H^{k-|I|-1}(\mathcal K_{I})\otimes H^{\ell-|J|-1}(\mathcal K_{J})\to H^{k+\ell-|I|-|J|-1}(\mathcal K_{I\cup J}) \] which are induced by the canonical simplicial inclusions $\mathcal K_{I\cup J}\to\mathcal K_I\mathbin{*}\mathcal K_J$ for $I\cap J=\varnothing$ and are zero for $I\cap J\ne\varnothing$. \end{theorem} \section{The Hurewicz image of a higher Whitehead product}\label{hirewicz_image_section} Here we consider the Hurewicz homomorphism $h\colon\pi_*(\zk)\to H_*(\zk)$. The canonical cellular chain representing the Hurewicz image $h(w)\in H_*(\zk)$ of a \emph{nested} higher Whitehead product~$w$ was described in~\cite{abra}. \begin{lemma}[{\cite[Lemma~4.1]{abra}}]\label{hurewicz_image_old} The Hurewicz image \[ h\left(\Big[\big[\dots\big[[\mu_{i_{11}},\ldots, \mu_{i_{1p_1}}], \mu_{i_{21}},\dots, \mu_{i_{2p_2}}\big], \dots\big], \mu_{i_{n1}},\dots, \mu_{i_{np_n}}\Big]\right) \in H_{2(p_1 + \cdots + p_n) - n}(\mathcal {Z_K}) \] is represented by the cellular chain \[ h_c(w)=\prod\limits_{k = 1}^n \left(\sum\limits_{j = 1}^{p_k} D_{i_{k1}}\cdots D_{i_{k(j-1)}}S_{i_{kj}}D_{i_{k(j+1)}}\cdots D_{i_{kp_k}} \right). \] \end{lemma} A more general version of this lemma is presented next. It gives a simple recursive formula describing the canonical cellular chain $h_c(w)$ which represents the Hurewicz image of a \emph{general} iterated higher Whitehead product $w \in \pi_*(\zk)$, therefore providing an effective method of identifying nontrivial Whitehead products in the homotopy groups of a moment-angle complex~$\zk$. Some applications are also given below. \begin{lemma}\label{hurewicz_image} Let $w$ be a general iterated higher Whitehead product $$ w = [w_1, \dots, w_q, \mu_{i_1}, \dots, \mu_{i_p}] \in \pi_*(\zk). $$ Here $w_k$ is a \textup(general iterated\textup) higher Whitehead product for $k = 1, \dots, q$. Then the Hurewicz image $h(w)\in H_*(\zk)$ is represented by the following canonical cellular chain: \[ h_c(w) = h_c(w_1)\cdots h_c(w_q)\Big(\sum\limits_{k=1}^p D_{i_1}\cdots D_{i_{k-1}}S_{i_k} D_{i_{k+1}}\cdots D_{i_p}\Big). \] \end{lemma} We shall refer to $h_c(w)$ as the \emph{canonical} cellular chain for an iterated higher Whitehead product~$w$.
In the case of nested products, \hyperref[hurewicz_image]{Lemma~\ref{hurewicz_image}} reduces to \hyperref[hurewicz_image_old]{Lemma~\ref{hurewicz_image_old}}. \begin{proof}[Proof of \textup{\hr[Lemma]{hurewicz_image}}] Let $d, d_1, \dots, d_q$ be the dimensions of $w, w_1, \dots, w_q$, respectively. The Whitehead product $w$ is represented by the composite map {\small \begin{multline}\label{31} S^d \cong \partial(D^{d_1}\times\dots\times D^{d_q}\times D^2_{i_1}\times\cdots\times D^2_{i_p})\\ \cong \bigg(D^{d_1}\times\dots\times D^{d_q}\times \Big(\bigcup\limits_{k=1}^p D^2_{i_1}\times\dots\times S^1_{i_k}\times\dots\times D^2_{i_p}\Big)\bigg) \\\cup \bigg( \Big(\bigcup_{l=1}^q D^{d_1}\times\dots\times S^{d_l-1}\times\dots\times D^{d_q}\Big) \times D^2_{i_1}\times\dots\times D^2_{i_p}\bigg) \\ \stackrel\gamma\longrightarrow \bigg(S^{d_1}\times\dots\times S^{d_q}\times \Big(\bigcup\limits_{k=1}^p D^2_{i_1}\times\dots\times S^1_{i_k}\times\dots\times D^2_{i_p}\Big)\bigg) \\\cup \bigg(\Big(\bigcup_{l=1}^q S^{d_1}\times\dots\times\mathit{pt}\times\dots\times S^{d_q}\Big) \times D^2_{i_1}\times\dots\times D^2_{i_p}\bigg) \to \zk. \end{multline} }% The map $\gamma$ above contracts the boundary of each $D^{d_l}$, $l = 1, \ldots, q$. Note that the whole cartesian product in the last row above has dimension less than $d$, so its Hurewicz image is trivial. Using the same argument for the spheres $S^{d_1}, \dots, S^{d_q}$, we obtain that $w$ factors through a map from $S^d$ to a union of products of discs and circles, which embeds as a subcomplex in~$\zk$. By the induction hypothesis each sphere $S^{d_k}, k = 1, \dots, q$, maps to the subcomplex of~$\zk$ corresponding to the cellular chain $h_c(w_k)$. Therefore, by~\eqref{31}, the Hurewicz image of $w$ is represented by the subcomplex corresponding to the product of $h_c(w_1), \ldots, h_c(w_q)$ and $\sum\limits_{k=1}^p D_{i_1}\cdots D_{i_{k-1}} S_{i_k} D_{i_{k+1}}\cdots D_{i_p}$. \end{proof} As a first corollary we obtain a combinatorial criterion for the nontriviality of a single higher Whitehead product. \begin{proposition}\label{singlent} A single higher Whitehead product $[\mu_{i_1},\ldots,\mu_{i_p}]$ is \begin{itemize}[leftmargin=0.06\textwidth] \item[\textup(a\textup)] defined in $\pi_{2p-1}((\C P^{\infty})^{\sk})$ \textup(and lifts to $\pi_{2p-1}(\zk)$\textup) if and only if $\partial \Delta(i_1,\ldots,i_p)$ is a subcomplex of~$\mathcal K$; \item[\textup(b\textup)] trivial if and only if $\Delta(i_1,\ldots,i_p)$ is a simplex of~$\mathcal K$. \end{itemize} \end{proposition} \begin{proof} If the Whitehead product $[\mu_{i_1},\ldots,\mu_{i_p}]$ is defined, then each $(p-1)$-fold product $[\mu_{i_1},\dots, \widehat\mu_{i_k}\dots,\mu_{i_p}]$ is trivial. By the induction hypothesis, this implies that $\partial \Delta(i_1,\ldots,i_p)$ is a subcomplex of~$\mathcal K$. Suppose that $\Delta(i_1,\ldots,i_p)$ is not a simplex of $\mathcal K$. Then, by \hr[Lemma]{hurewicz_image}, the Hurewicz image $h([\mu_{i_1},\ldots,\mu_{i_p}])$ gives a nontrivial homology class in $H_*(\zk)$ corresponding to $[\partial \Delta(i_1,\ldots,i_p)]\in\widetilde H_*(\mathcal K_{i_1,\dots, i_p})$ via the isomorphism of \hr[Theorem]{hochster}. Thus, $[\mu_{i_1},\ldots,\mu_{i_p}]$ is itself nontrivial. \end{proof} This proposition will be generalised to iterated higher Whitehead products in \hr[Section]{realisation_section}. 
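For illustration, here is a direct application of \hr[Lemma]{hurewicz_image} to a product that is not nested, so that \hr[Lemma]{hurewicz_image_old} does not apply; the particular bracket is chosen only as an example. Take $w=\bigl[[\mu_1,\mu_2],[\mu_3,\mu_4,\mu_5],\mu_6\bigr]$ and assume that it is defined in $\pi_*(\zk)$. The recursion gives
\[
h_c(w)=(S_1D_2+D_1S_2)\,(S_3D_4D_5+D_3S_4D_5+D_3D_4S_5)\,S_6,
\]
a cellular chain of dimension $3+5+1=9=d(w)$. Via the isomorphism of \hr[Theorem]{hochster} applied to $J=\{1,\ldots,6\}$, it corresponds to the simplicial cycle $\partial\Delta(1,2)*\partial\Delta(3,4,5)$ in $\mathcal K_J$, so $w$ is nontrivial whenever this cycle is not a boundary in $\mathcal K_J$.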
\medskip \hr[Lemmata]{hurewicz_image_old}, \ref{hurewicz_image} and \hyperref[hochster]{Theorem~\ref{hochster}} can be used to detect simplicial complexes $\mathcal K$ for which $\zk$ is a wedge of iterated higher Whitehead products. We recall the following definition. \begin{definition} A simplicial complex $\mathcal K$ belongs to the class $W_{\Delta}$ if $\zk$ is a wedge of spheres, and each sphere in the wedge is a lift of a linear combination of iterated higher Whitehead products. \end{definition} As a first example of application of our method we deduce the results of Iriye and Kishimoto that shifted and totally fillable complexes belong to the class $W_{\Delta}$. \begin{example}\label{example_ik1} A simplicial complex $\mathcal K$ is called \emph{shifted} if its vertices can be ordered in such way that the following condition is satisfied: whenever $I\in \mathcal K$, $i\in I$ and $j>i$, we have $(I-i)\cup j\in \mathcal K$. Let $\mathop\mathrm{MF}\nolimits_m(\mathcal K)$ be the set of missing faces of $\mathcal K$ containing the maximal vertex $m$, i.\,e. \[ \mathop\mathrm{MF}\nolimits_m(\mathcal K) = \{I\subset [m]\,|\, I\notin \mathcal K,\; \partial\Delta (I)\subset \mathcal K \text{ and } m\in I\}. \] As observed in~\cite{ir-ki1}, for a shifted complex $\mathcal K$ there is a homotopy equivalence \begin{equation}\label{shifted} \mathcal K \simeq \bigvee\limits_{I \in \mathop\mathrm{MF}\nolimits_m(\mathcal K)}\partial \Delta(I) \end{equation} (the reason is that the quotient $\mathcal K/\mathop{\mathrm{star}}_m\mathcal K$ is homeomorphic to the wedge on the right hand side of~\eqref{shifted}, by definition of a shifted complex). Note that a full subcomplex of a shifted complex is again shifted. Then \hyperref[hochster]{Theorem~\ref{hochster}} together with \eqref{shifted} implies that $H_*(\zk)$ is a free abelian group generated by the homology classes of cellular chains of the form \begin{equation}\label{ik1_chain} \Big(\sum\limits_{l=1}^p D_{i_1}\cdots D_{i_{l-1}} S_{i_l} D_{i_{l+1}}\cdots D_{i_p}\Big)S_{j_1} \cdots S_{j_q} \end{equation} where $I = \{i_1, \dots, i_p\}\in \mathop\mathrm{MF}\nolimits_m(\mathcal K_{i_1,\ldots,i_p,j_1,\dots, j_q})$. \hyperref[hurewicz_image]{Lemma~\ref{hurewicz_image_old}} implies that~\eqref{ik1_chain} is the canonical cellular chain for the nested Whitehead product \[ w = \Big[\Big[\Big[\dots\big[[\mu_{i_1}, \dots, \mu_{i_p}], \mu_{j_1}\big],\dots \Big], \mu_{j_{q-1}}\Big], \mu_{j_q}\Big]. \] Hence, the following wedge of the Whitehead products \[ \bigvee\limits_{J\subset[m]}\bigvee\limits_{\substack{I = \{i_1, \dots, i_p\} \in \mathop\mathrm{MF}_m(\mathcal K_J)\\ J\setminus I=\{j_1, \dots, j_q\}}} \Big[\Big[\Big[\dots\big[[\mu_{i_1}, \dots, \mu_{i_p}], \mu_{j_1}\big],\dots \Big], \mu_{j_{q-1}}\Big], \mu_{j_q}\Big]\colon \!\!\!\!\!\bigvee\limits_{\substack{J\subset [m]\\ I\in\mathop\mathrm{MF}_m(\mathcal K_J)}} S^{d(w)}_{J, I}\to \zk \] induces an isomorphism in homology, so it is a homotopy equivalence. Thus, we obtain the following. \begin{theorem}[{\cite{ir-ki1}}] Every shifted complex $\mathcal K$ belongs to $W_{\Delta}$. \end{theorem} \end{example} Here is another result which can be proved using \hyperref[hurewicz_image]{Lemma~\ref{hurewicz_image}}. \begin{example}\label{example_ik2} A simplicial complex $\mathcal K$ is called \emph{fillable} if there is a collection $\mathop\mathrm{MF}_{\rm fill}(\mathcal K)$ of missing faces $I_1, \dots, I_k$ such that $\mathcal K\cup I_1 \cup\dots\cup I_k$ is contractible. 
If any full subcomplex of $\mathcal K$ is fillable, then $\mathcal K$ is called \emph{totally fillable}. Note that homology of any full subcomplex $\mathcal K_J$ in a totally fillable complex $\mathcal K$ is generated by the cycles $\partial\Delta(I)$ for $I\in \mathop\mathrm{MF}_{\mathrm{fill}}(\mathcal K_J)$. As in \hyperref[example_ik1]{Example~\ref{example_ik1}}, $H_*(\zk)$ is a free abelian group generated by the homology classes of cellular chains \[ \Big(\sum\limits_{l=1}^p D_{i_1}\dots D_{i_{l-1}} S_{i_l} D_{i_{l+1}}\dots D_{i_p}\Big) S_{j_1}\dots S_{j_q}, \] where $\Delta(i_1, \dots, i_p) \in \mathop\mathrm{MF}_{\rm fill}(\mathcal K_{j_1, \dots, j_q, i_1, \dots, i_p})$. Again, the map \[ \bigvee\limits_{J\subset[m]}\bigvee\limits_{\substack{I = \{i_1, \dots, i_p\} \in \mathop\mathrm{MF}_{\rm fill}(\mathcal K_J)\\ J\setminus I=\{j_1, \dots, j_q\}}} \Big[\Big[\Big[\dots\big[[\mu_{i_1}, \dots, \mu_{i_p}], \mu_{j_1}\big],\dots \Big], \mu_{j_{q-1}}\Big], \mu_{j_q}\Big]\colon \bigvee\limits_{\substack{J\subset [m]\\ I\in\mathop\mathrm{MF}_{\mathrm{fill}}(\mathcal K_J)}} S^{d(w)}_{J, I}\to \zk \] is a homotopy equivalence, for the same reasons. We obtain the following. \begin{theorem}[{\cite{ir-ki2}}] Every totally fillable complex $\mathcal K$ belongs to $W_{\Delta}$. \end{theorem} \end{example} \section{Substitution of simplicial complexes}\label{subst} The combinatorial construction presented here is similar to the~one described in \cite{ayze13} and \cite{b-b-c-g}, although the resulting complexes are different. An analogous construction for building sets was suggested by N.~Erokhovets (see~\cite[Construction~1.5.19]{bu-pa15}). \begin{definition}\label{substitution_definition} Let $\mathcal K$ be a simplicial complex on the set $[m]$, and let $\mathcal K_1, \dots, \mathcal K_m$ be a set of $m$ simplicial complexes. We refer to the simplicial complex \begin{equation}\label{substitution_equation} \mathcal K(\mathcal K_1, \dots, \mathcal K_m) = \{I_{j_1}\sqcup\dots\sqcup I_{j_k}\,|\; I_{j_l}\in \mathcal K_{j_l},\; l = 1,\dots, k \quad\text{and}\quad \{j_1, \dots, j_k\}\in \mathcal K\} \end{equation} as the \emph{substitution} of $\mathcal K_1, \dots, \mathcal K_m$ into $\mathcal K$. The set of missing faces $\mathop\mathrm{MF}(\mathcal K(\mathcal K_1, \dots, \mathcal K_m))$ of a substitution complex can be described as follows. First, every missing face of each $\mathcal K_i$ is a missing face of $\mathcal K(\mathcal K_1, \dots, \mathcal K_m)$. Second, for every missing face $\Delta(i_1, \dots, i_k)$ of $\mathcal K$ we have the following set of missing faces of the substitution complex: \[ \mathop\mathrm{MF}\nolimits_{i_1,\dots, i_k}\bigl(\mathcal K(\mathcal K_1, \dots, \mathcal K_m)\bigr) = \bigl\{\Delta(j_1, \dots, j_k)\,|\; j_l \in \mathcal K_{i_l},\; l =1,\dots, k\bigr\}. \] It is easy to see that there are no other missing faces in $\mathcal K(\mathcal K_1, \dots, \mathcal K_m)$, so we have \[ \mathop\mathrm{MF}\bigl(\mathcal K(\mathcal K_1, \dots, \mathcal K_m)\bigr) = \mathop\mathrm{MF} (\mathcal K_1)\sqcup\dots\sqcup\mathop\mathrm{MF} (\mathcal K_m)\sqcup\hspace{-8mm}\bigsqcup\limits_{\Delta(i_1, \dots, i_k)\in\mathop\mathrm{MF} (\mathcal K)}\hspace{-8mm}\mathop\mathrm{MF}\nolimits_{i_1,\dots, i_k} \bigl(\mathcal K(\mathcal K_1, \dots, \mathcal K_m)\bigr).
\] \end{definition} \begin{figure}[ht] \begin{tikzpicture} \coordinate (A1) at ( 0cm, 3cm); \coordinate (A2) at ( 0cm,-3cm); \coordinate (A3) at (-2cm, 0cm); \coordinate (A4) at ( .75cm,-.75cm); \coordinate (A5) at ( 2cm, 0cm); \fill[gray!20, opacity = 0.3] (A1) -- (A4) -- (A3) -- cycle; \fill[gray!20, opacity = 0.3] (A1) -- (A4) -- (A5) -- cycle; \fill[gray!20, opacity = 0.7] (A2) -- (A4) -- (A5) -- cycle; \fill[gray!20, opacity = 0.5] (A2) -- (A4) -- (A3) -- cycle; \foreach \j in {3, 4, 5} { \draw[thick] (A1) -- (A\j) ; \draw[thick] (A2) -- (A\j) ; } \draw[thick] (A3) -- (A4) -- (A5); \draw[thick, dashed] (A1) -- (A2); \draw[thick, dashed] (A3) -- (A5); \foreach \i in {1, 2, ..., 5} { \draw[fill=black] (A\i) circle (0.2em); } \draw[fill = black] (0cm, -0.3cm) circle (0.1em); \draw (A1) node[above right] {\textbf 4}; \draw (A2) node[right] {\textbf 5}; \draw (A3) node[above left] {\textbf 1}; \draw (A4) node[below right] {\textbf 2}; \draw (A5) node[above right] {\textbf 3}; \end{tikzpicture} \caption{Substitution complex $\partial\Delta(\partial\Delta(1,2,3), 4, 5)$} \label{12345} \end{figure} \begin{example} If each $\mathcal K_i$ is a point $\{i\}$, then $\mathcal K(\mathcal K_1, \dots, \mathcal K_m) = \mathcal K$. In particular, $\partial\Delta^{m-1}(1, \dots, m)= \partial\Delta^{m-1}$. In the case of substitution into a simplex $\Delta^{m-1}$ or its boundary $\partial\Delta^{m-1}$ we shall omit the dimension, so we have $\partial\Delta(1, \dots, m) = \partial\Delta^{m-1}$, which is compatible with the previous notation. \end{example} The next example is our starting point for further generalisations. \begin{example} Let $\mathcal K = \partial\Delta^{m-1}$ and let each $\mathcal K_i$ be a point, except for $\mathcal K_1$. We have $\partial\Delta(\mathcal K_1, i_2,\dots, i_m) = \mathcal J_{m-2}(\mathcal K_1)$, where $\mathcal J_{n}(\mathcal L)$ is the operation defined in \cite[Theorem~5.2]{abra}. By \cite[Theorem~6.1]{abra}, the iterated substitution \[ \partial\Delta\big(\partial\Delta(j_1, \dots, j_q), i_1, \dots, i_p\big) \] is the smallest simplicial complex that realises the Whitehead product \[ \big[[\mu_{j_{1}},\dots, \mu_{j_q}],\mu_{i_1},\dots, \mu_{i_p}\big]. \] The case $q = 3$, $p = 2$ is shown in \hr[Figure]{12345}. \end{example} The next example will be used in \hr[Theorem]{smallest_realisation}. \begin{construction}\label{main_substitution} Here we inductively describe the canonical simplicial complex $\partial\Delta_w$ associated with a general iterated higher Whitehead product~$w$. We start with the boundary of the simplex $\partial\Delta(i_1, \dots, i_m)$ corresponding to a single higher Whitehead product $[\mu_{i_1}, \dots, \mu_{i_m}]$. Now we write a general iterated higher Whitehead product recursively as \[ w = [w_1, \dots, w_q, \mu_{i_1}, \dots, \mu_{i_p}] \in \pi_*(\zk), \] where $w_1,\ldots,w_q$ are nontrivial general iterated higher Whitehead products, $q\geq0$. We assign to $w$ the substitution complex \[ \partial\Delta_w \overset{\rm def}{=} \partial\Delta(\partial\Delta_{w_1}, \dots, \partial\Delta_{w_q}, i_1,\dots, i_p). \] We also define recursively the following subcomplex of $\partial\Delta_w$: \[ \partial\Delta^{\rm sph}_w = \partial\Delta^{\rm sph}_{w_1}*\dots*\partial\Delta^{\rm sph}_{w_q}*\partial\Delta(i_1, \dots, i_p). \] By definition, $\partial\Delta_{w}^{\rm sph}$ is a join of boundaries of simplices, so it is homeomorphic to a sphere. Furthermore, $\dim\partial\Delta^{\rm sph}_w=\dim\partial\Delta_w$.
We refer to the subcomplex $\partial\Delta^{\rm sph}_w$ as the \emph{top sphere of $\partial\Delta_w$}. For example, the top sphere of $\partial\Delta(\partial\Delta(1,2,3), 4, 5)$ is obtained by deleting the edge $\Delta(4, 5)$, see \hr[Figure]{12345}. \end{construction} \begin{proposition}\label{wedge} The complex $\partial\Delta_w$ is homotopy equivalent to a wedge of spheres, and the top sphere $\partial\Delta^{\rm sph}_w$ represents the sum of top-dimensional spheres in the wedge. \end{proposition} \begin{proof} By construction, $\partial\Delta_w$ is obtained from a sphere $\partial\Delta^{\rm sph}_w$ by attaching simplices of dimension at most $\dim\partial\Delta^{\rm sph}_w$. It follows that the attaching maps are null-homotopic, which implies both statements. \end{proof} \section{Realisation of higher Whitehead products}\label{realisation_section} Given an iterated higher Whitehead product~$w$, we show that the substitution complex $\partial\Delta_w$ realises~$w$. Furthermore, for a particular form of brackets inside~$w$, we prove that $\partial\Delta_w$ is the smallest complex that realises~$w$. We also give a combinatorial criterion for the nontriviality of the product~$w$. Recall from \hr[Proposition]{singlent} that a single higher Whitehead product $[\mu_{i_1},\ldots,\mu_{i_p}]$ is realised by the complex $\partial \Delta(i_1,\ldots,i_p)$. \begin{theorem}\label{realisation} Let $w_1,\ldots,w_q$ be nontrivial iterated higher Whitehead products. The complex $\partial\Delta_w$ described in \textup{\hr[Construction]{main_substitution}} realises the iterated higher Whitehead product \begin{equation}\label{product_realisation} w = [w_1, \dots, w_q, \mu_{i_1}, \dots, \mu_{i_p}]. \end{equation} \end{theorem} \begin{proof} To see that product~\eqref{product_realisation} is defined in $\mathcal Z_{\partial\Delta_w}$ we need to construct the corresponding map $S^{d(w)}\to\mathcal Z_{\partial\Delta_w}$. This is done precisely as described in the proof of \hr[Lemma]{hurewicz_image}. Furthermore, \hr[Lemma]{hurewicz_image} gives the cellular chain $h_c(w)\in\mathcal C_*(\mathcal Z_{\partial\Delta_w})$ representing the Hurewicz image~$h(w)\in H_*(\mathcal Z_{\partial\Delta_w})$. The cellular chain $h_c(w)\in\mathcal C_*(\mathcal Z_{\partial\Delta_w})$ corresponds to the simplicial chain $\partial\Delta_w^{\rm sph}\in C_*(\partial\Delta_w)$ via the isomorphism of \hr[Theorem]{hochster}. Now \hr[Proposition]{wedge} implies that the simplicial homology class $[\partial\Delta_w^{\rm sph}]\in H_*(\partial\Delta_w)$ is nonzero. Thus, $h(w)\ne0$ and the Whitehead product $w$ is nontrivial. \end{proof} \smallskip For a particular configuration of nested brackets, a more precise statement holds. \begin{theorem}\label{smallest_realisation} Let $w_j = [\mu_{j_1}, \dots, \mu_{j_{p_j}}]$, $j = 1,\dots, q$, be nontrivial single higher Whitehead products. Consider an iterated higher Whitehead product \[ w = [w_1, \dots, w_q, \mu_{i_1}, \dots, \mu_{i_p}]. 
\] Then the product $w$ is \begin{itemize}[leftmargin=0.06\textwidth] \item[\textup(a\textup)]\label{statement_a} defined in $\pi_*(\zk)$ if and only if $\mathcal K$ contains $\partial\Delta_w=\partial\Delta\big(\partial\Delta_{w_1}, \dots, \partial\Delta_{w_q}, i_1, \dots, i_p\big)$ as a subcomplex, where $\partial\Delta_{w_j} = \partial\Delta(j_1, \dots, j_{p_j}), j = 1,\dots, q$; \item[\textup(b\textup)]\label{statement_b} trivial in $\pi_*(\zk)$ if and only if $\mathcal K$ contains $$\Delta\big(\partial\Delta_{w_1}, \dots, \partial\Delta_{w_q}, i_1, \dots, i_p\big) = \partial\Delta_{w_1}*\dots*\partial\Delta_{w_q}*\Delta(i_1,\dots, i_p)$$ as a subcomplex. \end{itemize} \end{theorem} Note that assertion~(a) implies that $\partial\Delta_w$ is the smallest simplicial complex realising the Whitehead product~$w$. \begin{proof} We may assume that $q>0$; otherwise the theorem reduces to \hr[Proposition]{singlent}. We consider three cases: $p = 0$; $p = 1$; $p>1$. \smallskip \underline{The case $p =0$.} We have $w = [w_1, \dots, w_q]$. We first prove assertion~(b). Let $d_1, \dots, d_q$ and $d = d_1+\dots+d_q -1$ be the dimensions of the Whitehead products $w_1, \dots, w_q$ and $[w_1, \dots, w_q]$, respectively. The condition that~$w$ vanishes implies the existence of the dashed arrow in the diagram \begin{center} \begin{tikzcd} S^d \arrow{r}\arrow[hookrightarrow]{d} & \mathop{\mathrm{FW}}(S^{d_1}, \dots, S^{d_q}) \arrow{r}\arrow[hookrightarrow]{d} & \zk \\ D^{d+1} \arrow{r} & S^{d_1}\times\dots\times S^{d_q} \arrow[dashed]{ur} \end{tikzcd} \end{center} Here $\mathop{\mathrm{FW}}(S^{d_1}, \dots, S^{d_q})$ denotes the fat wedge of spheres $S^{d_1}, \dots, S^{d_q}$, and the top left arrow is the attaching map of the top cell. Let $\sigma_j \in H^{d_j}(\zk)$ be the cohomology class dual to the sphere $S^{d_j} \subset \mathop{\mathrm{FW}}(S^{d_1}, \dots, S^{d_q})$, $j=1,\dots, q$. By assumption, the single Whitehead product $w_j$ is nontrivial, which implies that $\sigma_j\neq 0$ (see \hr[Proposition]{singlent}). The class $\sigma_j \in H^{d_j}(\zk)$ corresponds to the simplicial cohomology class $\big[\partial\Delta_{w_j}\big]^* \in \widetilde H^*(\mathcal K_{\partial\Delta_{w_j}})$ via the cohomological version of the isomorphism of \hr[Theorem]{hochster}. Here $\mathcal K_{\partial\Delta_{w_j}}$ is the full subcomplex $\partial\Delta_{w_j}$ of~$\mathcal K$. Since the Whitehead product $[w_1, \dots, w_q]$ is trivial, the cohomology product $\sigma_1\cdots\sigma_q$ is nontrivial in $H^*(\zk)$ (see the diagram above). By the cohomology product description in \hr[Theorem]{hochster}, this implies that $\mathcal K$ contains $\partial\Delta_{w_1}*\dots*\partial\Delta_{w_q}$ as a full subcomplex, and assertion~(b) follows. To prove assertion~(a), note that the existence of the product $[w_1, \dots,w_q]$ implies that each product $[w_1, \dots, \widehat{w_j},\dots, w_q]$, $j = 1,\dots, q$, is trivial. By assertion~(b), the complex $\mathcal K$ contains the union $\bigcup\limits_{j = 1}^q \partial\Delta_{w_1}*\dots*\widehat{\partial\Delta_{w_j}}*\dots*\partial\Delta_{w_q}$, which is precisely $\partial\Delta(\partial\Delta_{w_1}, \dots, \partial\Delta_{w_q})$. This finishes the proof for the case $p = 0$. \smallskip \underline{The case $p=1$.} We have $w = [w_1, \dots, w_q, \mu_{i_1}]$. We first prove (b), that is, assume $w=0$. This implies that $[w_1, \dots, w_q]=0$. By the previous case, we know that $\mathcal K$ contains $\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q})$ as a full subcomplex.
We need to prove that $\mathcal K$ contains $\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q})*\Delta(i_1)$, which is a cone with apex~$i_1$. The Hurewicz image $h(w)\in H_*(\zk)$ is zero, because $w$ is trivial. Therefore, the canonical cellular chain $h_c(w)=h_c(w_1)\cdots h_c(w_q)S_{i_1}$ (see \hr[Lemma]{hurewicz_image}) is a boundary. By \hr[Theorem]{hochster}, this implies that the simplicial cycle $\partial\Delta_{w_1}*\dots*\partial\Delta_{w_q}$ is a boundary in $\mathcal K_{\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q})\cup\{i_1\}}$. This can only be the case when $\mathcal K_{\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q})\cup\{i_1\}}=\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q})*\Delta(i_1)$, proving~(b). Now we prove (a). By the previous cases, the existence of $w$ implies that $\mathcal K$ contains $\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q})$ and $\Delta(\partial\Delta_{w_1},\dots,\widehat{\partial\Delta_{w_j}}, \dots, \partial\Delta_{w_q}, i_1)$ for $j = 1,\dots, q$. The union of these subcomplexes is precisely $\partial\Delta(\partial\Delta_{w_1},\dots, \partial\Delta_{w_q}, i_1)$. \smallskip \underline{The case $p>1$.} We induct on $p+q$. We have $w = \big[w_1,\dots, w_q, \mu_{i_1},\dots, \mu_{i_p}\big]$. To prove (b), suppose that $w=0$ but $\mathcal K$ does not contain $\partial\Delta_{w_1}*\dots*\partial\Delta_{w_q}*\Delta(i_1,\dots, i_p)$. Then the cellular chain corresponding to $\partial\Delta_{w_1}*\dots*\partial\Delta_{w_q}*\partial\Delta(i_1,\dots, i_p)$ via \hr[Theorem]{hochster} gives a nontrivial homology class in $H_*(\zk)$. This class coincides with the Hurewicz image $h(w)$, by \hr[Lemma]{hurewicz_image}. Hence, the Whitehead product~$w$ is nontrivial. A contradiction. Assertion~(a) is proved similarly to the case $p=1$. \end{proof} \begin{remark} In our approach, the nontriviality of a higher Whitehead product $w$ is understood as the nontriviality of its canonical representative constructed in~\hr[\S]{prelimi}. Nevertheless, arguments similar to those given in the proof of the case $p=0$ show that the nontriviality assertion in \hr[Theorem]{smallest_realisation} remains valid if the nontriviality is understood in the classical sense, that is, as the absence of a trivial homotopy class in the set of all possible extensions. \end{remark} \begin{example}\label{exreal} Consider the Whitehead product $w = \big[[\mu_1, \mu_2, \mu_3], \mu_4, \mu_5\big]$ in the moment-angle complex $\zk$ corresponding to a simplicial complex~$\mathcal K$ on $5$ vertices. For the existence of $w$ it is necessary that the brackets $\big[[\mu_1,\mu_2,\mu_3],\mu_4\big]$, $\big[[\mu_1,\mu_2,\mu_3],\mu_5\big]$ and $[\mu_4,\mu_5]$ vanish. By \hr[Theorem]{smallest_realisation}~(b), this implies that $\mathcal K$ contains subcomplexes $\partial\Delta(1,2,3)*\Delta(4)$, $\partial\Delta(1,2,3)*\Delta(5)$ and $\Delta(4,5)$. In other words, $\mathcal K$ contains the complex $\partial\Delta\big(\partial\Delta(1,2,3),4,5\big)$ shown in~\hr[Figure]{12345}. Therefore, the latter is the smallest complex realising the Whitehead bracket $w = \big[[\mu_1, \mu_2, \mu_3], \mu_4, \mu_5\big]$. The moment-angle complex $\zk$ corresponding to $\mathcal K=\partial\Delta\big(\partial\Delta(1,2,3),4,5\big)$ is homotopy equivalent to the wedge of spheres $(S^5)^{\vee 4}\vee(S^6)^{\vee3}\vee S^7\vee S^8$, and each sphere is a Whitehead product, see~\cite[Example~5.4]{abra}. 
For example, $S^7$ corresponds to $w = \big[\big[[\mu_3, \mu_4, \mu_5], \mu_1\big], \mu_2\big]$, and $S^8$ corresponds to $w = \big[[\mu_1, \mu_2, \mu_3], \mu_4, \mu_5\big]$. \end{example} We expect that \hr[Theorem]{smallest_realisation} holds for all iterated higher Whitehead products: \begin{problem}\label{proble} Is it true that for any iterated higher Whitehead product~$w$ the substitution complex~$\partial\Delta_{w}$ is the smallest complex realising~$w$? \end{problem} \section{Resolutions of the face coalgebra}\label{taylor_resolution} Originally, cohomology of $\zk$ was described in~\cite{bu-pa00} as the $\mathop{\mathrm{Tor}}\nolimits$-algebra of the face algebra of~$\mathcal K$. As observed in~\cite{b-b-p04}, the Koszul complex calculating the $\mathop{\mathrm{Tor}}\nolimits$-algebra can be identified with the cellular cochain complex of $\zk$ with respect to the standard cell decomposition. On the other hand, the $\mathop{\mathrm{Tor}}\nolimits$-algebra, and therefore cohomology of~$\zk$, can be calculated via the Taylor resolution of the face algebra as a module over the polynomial ring, see~\cite{wa-zh15},~\cite[\S4]{ayze16}. We dualise both approaches by identifying homology of $\zk$ with the $\mathop{\mathrm{Cotor}}\nolimits$ of the face coalgebra of~$\mathcal K$, and use both co-Koszul and co-Taylor resolutions to describe cycles corresponding to iterated higher Whitehead products. Let $\Bbbk$ be a commutative ring with unit. The \emph{face algebra} $\Bbbk[\mathcal K]$ of a simplicial complex $\mathcal K$ is the quotient of the polynomial algebra $\Bbbk[v_1,\ldots,v_m]$ by the square-free monomial ideal generated by non-simplices of~$\mathcal K$: \[ \Bbbk[\mathcal K]=\Bbbk[v_1,\ldots,v_m]\big/\bigl(v_{j_1}\cdots v_{j_k}\ |\ \{j_1,\ldots,j_k\}\notin \mathcal K\bigr). \] The grading is given by $\deg v_j=2$. Given a subset $J\subset[m]$, we denote by $v_J$ the square-free monomial $\prod_{j\in J}v_j$. Observe that \[ \Bbbk[\mathcal K]=\Bbbk[v_1,\ldots,v_m]\big/\bigl(v_J\ |\ J\in\mathop\mathrm{MF}(\mathcal K)\bigr), \] where $\mathrm{MF}(\mathcal K)$ denotes the set of missing faces (minimal non-faces) of~$\mathcal K$. The face algebra $\mathbb{Z}[\mathcal K]$ is also known as the \emph{face ring}, or the \emph{Stanley--Reisner ring} of~$\mathcal K$. We shall use the shorter notation $\Bbbk[m]$ for the polynomial algebra $\Bbbk[v_1,\ldots,v_m]$. Let $M$ and $N$ be two $\Bbbk[m]$-modules. The $n$-th derived functor of ${}\cdot\otimes_{\Bbbk[m]}N$ is denoted by $\mathop{\mathrm{Tor}}\nolimits_n^{\Bbbk[m]}(M,N)$ or $\mathop{\mathrm{Tor}}\nolimits^{-n}_{\Bbbk[m]}(M,N)$. (The latter notation is better suited for topological application of the Eilenberg--Moore spectral sequence, where the $\mathop{\mathrm{Tor}}\nolimits$ appears naturally as cohomology of certain spaces.) Namely, given a projective resolution $R^\bullet\to M$ with the resolvents indexed by nonpositive integers, we have \[ \mathop{\mathrm{Tor}}\nolimits^{-n}_{\Bbbk[m]}(M,N)=H^{-n}(R^\bullet\otimes_{\Bbbk[m]}N). \] The standard argument using bicomplexes and commutativity of the tensor product gives a natural isomorphism \[ \mathop{\mathrm{Tor}}\nolimits^{-n}_{\Bbbk[m]}(M,N)\cong\mathop{\mathrm{Tor}}\nolimits^{-n}_{\Bbbk[m]}(N,M). \] When $M$ and $N$ are graded $\Bbbk[m]$-modules, $\mathop{\mathrm{Tor}}\nolimits^{-i}_{\Bbbk[m]}(M,N)$ inherits the intrinsic grading and we denote by $\mathop{\mathrm{Tor}}\nolimits^{-i,2j}_{\Bbbk[m]}(M,N)$ the corresponding bigraded components. 
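For a concrete illustration that will reappear in the Taylor complex computed below, take the substitution complex $\mathcal K=\partial\Delta\big(\partial\Delta(1,2,3),4,5\big)$ of \hr[Example]{exreal}; by \hr[Definition]{substitution_definition} its missing faces are $\{1,2,3\}$, $\{1,4,5\}$, $\{2,4,5\}$ and $\{3,4,5\}$, so
\[
\Bbbk[\mathcal K]=\Bbbk[v_1,\ldots,v_5]\big/\bigl(v_1v_2v_3,\;v_1v_4v_5,\;v_2v_4v_5,\;v_3v_4v_5\bigr).
\]
The theorem below then identifies $H^*(\zk;\Bbbk)$ with $\mathop{\mathrm{Tor}}\nolimits_{\Bbbk[v_1,\ldots,v_5]}\bigl(\Bbbk[\mathcal K],\Bbbk\bigr)$.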
\begin{theorem}[{\cite[Theorem~4.2.1]{bu-pa00}}]\label{zktor} There is an isomorphism of $\Bbbk$-algebras \[ H^*(\zk;\Bbbk)\cong\mathop{\mathrm{Tor}}\nolimits_{\Bbbk[v_1,\ldots,v_m]}\bigl(\Bbbk[\mathcal K],\Bbbk\bigr) \] where the $\mathop{\mathrm{Tor}}\nolimits$ is viewed as a single-graded algebra with respect to the total degree. \end{theorem} The $\mathop{\mathrm{Tor}}\nolimits$-algebra $\mathop{\mathrm{Tor}}\nolimits_{\Bbbk[m]}(\Bbbk[\mathcal K],\Bbbk)$ can be computed either by resolving the $\Bbbk[m]$-module $\Bbbk$ and tensoring with $\Bbbk[\mathcal K]$, or by resolving the $\Bbbk[m]$-module $\Bbbk[\mathcal K]$ and tensoring with~$\Bbbk$. For the first approach, there is a standard resolution of the $\Bbbk[m]$-module $\Bbbk$, the \emph{Koszul resolution}. It is defined as the acyclic differential graded algebra \[ \bigl(\Lambda[u_1,\ldots,u_m]\otimes\Bbbk[v_1,\ldots,v_m],d_{\Bbbk}\bigr),\quad d_{\Bbbk} =\sum_i\frac\partial{\partial u_i}\otimes v_i. \] Here $\Lambda[u_1,\ldots,u_m]$ denotes the exterior algebra on the generators $u_i$ of cohomological degree $1$, or bidegree $(-1,2)$. After tensoring with $\Bbbk[\mathcal K]$ we obtain the \emph{Koszul complex} $\bigl(\Lambda[u_1,\ldots,u_m]\otimes\Bbbk[\mathcal K],d_{\Bbbk}\bigr)$, whose cohomology is $\mathop{\mathrm{Tor}}\nolimits_{\Bbbk[m]}(\Bbbk[\mathcal K],\Bbbk)$. Furthermore, by~\cite[Lemma~4.2.5]{bu-pa00}, the monomials $v_i^2$ and $u_iv_i$ generate an acyclic ideal in the Koszul complex. The quotient algebra \begin{equation}\label{61} R^*(\mathcal K)=\Lambda[u_1,\ldots,u_m]\otimes\Bbbk[\mathcal K]\big/\bigl(v_i^2=u_iv_i=0,\;1\leqslant i\leqslant m\bigr) \end{equation} has a finite $\Bbbk$-basis of monomials $u_J\otimes v_I$ with $J\subset[m]$, $I\in\mathcal K$ and $J\cap I=\varnothing$. The algebra $R^*(\mathcal K)$ is nothing but the cellular cochain complex of $\zk$ (see~\hr[Construction]{celld}): \begin{theorem}[\cite{b-b-p04}]\label{zkkos} There is an isomorphism of cochain complexes \[ R^*(\mathcal K)\stackrel\cong\longrightarrow\mathcal C^*(\zk), \quad u_J\otimes v_I\mapsto\varkappa(J,I)^* \] inducing the cohomology algebra isomorphism of \textup{\hr[Theorem]{zktor}}. \end{theorem} \begin{remark} The isomorphism of cochain complexes in the theorem above is by inspection. The result of~\cite{b-b-p04} is that it induces an algebra isomorphism in cohomology. Also, the Koszul complex $\bigl(\Lambda[u_1,\ldots,u_m]\otimes\Bbbk[\mathcal K],d_{\Bbbk}\bigr)$ itself can be identified with the cellular cochains of the polyhedral product $(S^\infty,S^1)^\mathcal K$; then taking the quotient by the acyclic ideal in~\eqref{61} corresponds to the homotopy equivalence $\zk=(D^2,S^1)^\mathcal K\stackrel\simeq\longrightarrow(S^\infty,S^1)^\mathcal K$. See the details in~\cite[\S4.5]{bu-pa15}. \end{remark} In the second approach, $\mathop{\mathrm{Tor}}\nolimits_{\Bbbk[m]}(\Bbbk[\mathcal K],\Bbbk)$ is computed by resolving the $\Bbbk[m]$-module $\Bbbk[\mathcal K]$ and tensoring with~$\Bbbk$. The \emph{minimal resolution} has a disadvantage of not supporting a multiplicative structure. There is a nice non-minimal resolution, constructed in the 1966 PhD thesis of Diana Taylor. It has a natural multiplicative structure inducing the algebra isomorphism of~\hr[Theorem]{zktor}. This \emph{Taylor resolution} of $\Bbbk[\mathcal K]$ is defined in terms of the missing faces of $\mathcal K$ and is therefore convenient for calculations with higher Whitehead products. We describe the resolution and its coalgebraic version next. 
\begin{construction}[Taylor resolution]\label{tares} Given a monomial ideal $(\mathfrak m_1, \ldots, \mathfrak m_t)$ in the polynomial algebra $\Bbbk[m]$, we define a free resolution of the $\Bbbk[m]$-module $\Bbbk[m]/(\mathfrak m_1, \ldots, \mathfrak m_t)$. For each $s = 0, \ldots, t$, let $F_s$ be a free $\Bbbk[m]$-module of rank $\binom t s$ with basis $\{e_J\}$ indexed by subsets $J\subset \{1,\ldots, t\}$ of cardinality $s$. Define a morphism $d\colon F_s \to F_{s-1}$ by \[ d(e_J) = \sum\limits_{j\in J}\sign (j, J) \frac{\mathfrak m_J}{\mathfrak m_{J\setminus j}}e_{J\setminus j}, \] where $\mathfrak m_J = \lcm_{j\in J}(\mathfrak m_j)$ and $\sign (j, J)=(-1)^{n-1}$ if $j$ is the $n$-th element in the ordered set~$J$. It can be verified that $d^2 = 0$. We therefore obtain a complex \[ T(\mathfrak m_1, \dots, \mathfrak m_t) :\; 0 \to F_t \to F_{t-1} \to \dots \to F_1 \to F_0 \to 0. \] By the theorem of D.~Taylor, $T(\mathfrak m_1, \dots, \mathfrak m_t)$ is a free resolution of the $\Bbbk[m]$-module $\Bbbk[m]/(\mathfrak m_1, \dots, \mathfrak m_t)$. For the convenience of the reader, we include the proof of this result in the Appendix as \hr[Theorem]{taylor}. \end{construction} Next we describe the dualisation of the constructions above in the coalgebraic setting. The dual of $\Bbbk[v_1,\ldots,v_m]$ is the symmetric coalgebra, which we denote by $\Bbbk\langle x_1, \dots, x_m\rangle$ or~$\Bbbk\langle m\rangle$. It has a $\Bbbk$-basis consisting of monomials $\mathfrak m$, with the comultiplication defined by the formula \begin{equation}\label{comul} \Delta\mathfrak m = \sum\limits_{\mathfrak m'\cdot\mathfrak m''=\mathfrak m} \mathfrak m'\otimes \mathfrak m''. \end{equation} Given a set of monomials $\mathfrak m_1,\ldots,\mathfrak m_t$ in the variables $x_1,\ldots,x_m$, we define a subcoalgebra $C(\mathfrak m_1,\ldots,\mathfrak m_t)\subset\Bbbk\langle x_1, \dots, x_m\rangle$ with a $\Bbbk$-basis of monomials $\mathfrak m$ that are not divisible by any of the $\mathfrak m_i$, $i=1,\ldots,t$. The \emph{face coalgebra} of a simplicial complex $\mathcal K$ is defined as \[ \Bbbk\langle\mathcal K\rangle=C\bigl(x_J\ |\ J\in\mathop\mathrm{MF}(\mathcal K)\bigr). \] The coalgebra $\Bbbk\langle\mathcal K\rangle$ has a $\Bbbk$-basis of monomials $\mathfrak m$ whose support is a face of~$\mathcal K$, with the comultiplication given by~\eqref{comul}. Let $\Lambda$ be a coalgebra, let $A$ be a right $\Lambda$-comodule with the structure morphism $\nabla_A\colon A\to A\otimes\Lambda$, and let $B$ be a left $\Lambda$-comodule with the structure morphism $\nabla_B\colon B\to\Lambda\otimes B$. The \emph{cotensor product} of $A$ and $B$ is defined as the $\Bbbk$-comodule \[ A\boxtimes_{\Lambda}\!B=\ker\bigl(\nabla_A\otimes\mathds{1}_B - \mathds{1}_A\otimes\nabla_B\colon A\otimes B \to A\otimes\Lambda\otimes B\bigr). \] When $\Lambda$ is cocommutative, $A\boxtimes_{\Lambda}\!B$ is a $\Lambda$-comodule. The $n$-th derived functor of ${}\cdot\boxtimes_{\Lambda}\!B$ is denoted by $\mathop{\mathrm{Cotor}}\nolimits^n_{\Lambda}(A,B)$ or $\mathop{\mathrm{Cotor}}\nolimits_{-n}^{\Lambda}(A,B)$. Namely, given an injective resolution $A \to I^{\bullet}$ with the resolvents indexed by nonnegative integers, we have \[ \mathop{\mathrm{Cotor}}\nolimits_{-n}^{\Lambda}(A,B)=\mathop{\mathrm{Cotor}}\nolimits^n_{\Lambda}(A,B)=H^{n}(I^\bullet\boxtimes_{\Lambda}\!B).
\] If $B\to J^\bullet$ is an injective resolution of $B$, then the standard argument using a bicomplex gives isomorphisms \begin{equation}\label{IJres} \mathop{\mathrm{Cotor}}\nolimits^{n}_{\Lambda}(A, B) =H^{n}(I^{\bullet}\boxtimes_{\Lambda}\! B) \cong H^{n}(I^{\bullet}\boxtimes_{\Lambda}\! J^{\bullet}) \cong H^{n}(A\boxtimes_{\Lambda}\! J^{\bullet}). \end{equation} The isomorphism $H^{n}(I^{\bullet}\boxtimes_{\Lambda}\! B) \stackrel\cong\longrightarrow H^{n}(A\boxtimes_{\Lambda}\! J^{\bullet})$ can be described explicitly as follows. \begin{construction}\label{koszul_to_taylor} Let $\eta \in H^{n}(I^{\bullet}\boxtimes_{\Lambda}\! B)$ be a homology class represented by a cycle $\eta^{(0)}\in I^n\boxtimes_{\Lambda}\! B$. We describe how to construct a cycle $\eta^{(n+1)}\in A\boxtimes_{\Lambda}\!J^n$ representing the same homology class in $\mathop{\mathrm{Cotor}}\nolimits^{n}_{\Lambda}(A, B)$. Consider the bicomplex \begin{center} \begin{tikzcd}[column sep=3.3em] A\boxtimes_{\Lambda}\! B \arrow{rrr}\arrow{ddd} &&& I^0\boxtimes_{\Lambda}\! B \arrow{r}\arrow{ddd} & \dots \arrow{rrr} &&& I^n\boxtimes_{\Lambda}\!B \arrow{ddd}[labl]{{ \eta^{(0)}\mapsto \partial_B(\eta^{(0)})}} \\\\\\ A\boxtimes_{\Lambda}\!J^0 \arrow{rrr}\arrow{d} &&& I^0\boxtimes_{\Lambda}\! J^0 \arrow{r}\arrow{d} & \dots \arrow{rrr}{\eta^{(1)}\mapsto \partial_A(\eta^{(1)})=\partial_B(\eta^{(0)})} &&& I^n\boxtimes_{\Lambda}\! J^0 \arrow{d} \\ \vdots \arrow{ddd} &&& \vdots \arrow{ddd}[labl]{{ \eta^{(n)}\mapsto \partial_B(\eta^{(n)})}}& \ddots &&& \vdots \\\\\\ A\boxtimes_{\Lambda}\!J^n \arrow{rrr}{\eta^{(n+1)}\mapsto \partial_A(\eta^{(n+1)})=\partial_B(\eta^{(n)})}&&& I^0\boxtimes_{\Lambda}\! J^n \arrow{r} & \dots \end{tikzcd} \end{center} The rows and columns are exact by the injectivity of the comodules $I^m$ and $J^l$. We have $\partial_A\big(\partial_B\eta^{(0)}\big)=-\partial_B\big(\partial_A\eta^{(0)}\big)=0$. Hence, there exists $\eta^{(1)}\in I^{n-1}\boxtimes_{\Lambda}\! J^0$ such that $\partial_A\eta^{(1)} = \partial_B\eta^{(0)}$. Similarly, there exists $\eta^{(2)}\in I^{n-2}\boxtimes_{\Lambda}\! J^1$ such that $\partial_A\eta^{(2)} = \partial_B\eta^{(1)}$. Proceeding in this fashion, we arrive at an element $\eta^{(n+1)}\in A\boxtimes_{\Lambda}\!J^n$, which represents $\eta$ by construction. \end{construction} We apply this construction in the following setting. Here is the dual version of \hr[Theorem]{zktor}: \begin{theorem}\label{zkcotor} There is an isomorphism of $\Bbbk$-coalgebras \[ H_*(\zk;\Bbbk)\cong\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle x_1,\ldots,x_m\rangle}\bigl(\Bbbk\langle\mathcal K\rangle,\Bbbk\bigr). \] \end{theorem} The coalgebra $\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle m\rangle}(\Bbbk\langle\mathcal K\rangle,\Bbbk)$ can be computed using the dual version of the Koszul resolution. \begin{construction}[Koszul complex of the face coalgebra] The \emph{Koszul resolution} for the $\Bbbk\langle m\rangle$-comodule $\Bbbk$ is defined as the acyclic differential graded coalgebra \[ \bigl(\Bbbk\langle x_1,\ldots,x_m\rangle\otimes\Lambda\langle y_1,\ldots,y_m\rangle,\partial_{\Bbbk}\bigr), \quad \partial_{\Bbbk}=\sum_i\frac\partial{\partial x_i}\otimes y_i. \] After cotensoring with $\Bbbk\langle\mathcal K\rangle$ we obtain the \emph{Koszul complex} $\bigl(\Bbbk\langle\mathcal K\rangle\otimes\Lambda\langle y_1,\ldots,y_m\rangle,\partial_{\Bbbk}\bigr)$, whose homology is $\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle m\rangle}(\Bbbk\langle\mathcal K\rangle,\Bbbk)$. 
\end{construction}
The relationship between the cellular chain complex of $\zk$ and the Koszul complex of $\Bbbk\langle\mathcal K\rangle$ is described by the following dualisation of \hr[Theorem]{zkkos}.
\begin{theorem}\label{coKos} There is an inclusion of chain complexes
\[ \mathcal C_*(\zk)\to \bigl(\Bbbk\langle\mathcal K\rangle\otimes \Lambda\langle y_1,\ldots,y_m\rangle,\partial_{\Bbbk}\bigr),\quad \varkappa(J,I)\mapsto x_I\otimes y_J \]
inducing an isomorphism in homology:
\[ H_*(\zk;\Bbbk)\cong H\bigl(\Bbbk\langle\mathcal K\rangle\otimes \Lambda\langle y_1,\ldots,y_m\rangle,\partial_{\Bbbk}\bigr)=\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle x_1,\ldots,x_m\rangle}\bigl(\Bbbk\langle\mathcal K\rangle,\Bbbk\bigr). \]
\end{theorem}
On the other hand, $\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle m\rangle}(\Bbbk\langle\mathcal K\rangle,\Bbbk)$ can be computed using the dual version of the Taylor resolution for the $\Bbbk\langle m\rangle$-comodule $\Bbbk\langle\mathcal K\rangle$.
\begin{construction}[Taylor resolution for comodules]\label{tacor} Given a set of monomials $\mathfrak m_1, \ldots, \mathfrak m_t$, we describe a cofree resolution of the $\Bbbk\langle m\rangle$-comodule $C(\mathfrak m_1, \ldots, \mathfrak m_t)$. For each $s = 0, \ldots, t$, let $I^s$ be a cofree $\Bbbk\langle m\rangle$-comodule of rank $\binom t s$ with basis $\{e^J\}$ indexed by subsets $J\subset \{1,\ldots, t\}$ of cardinality~$s$. The differential $\partial\colon I^s \to I^{s+1}$ is defined by
\[ \partial(x_1^{\alpha_1}\cdots x_m^{\alpha_m}e^J) =\sum\limits_{j\notin J}\sign(j,J) \frac{x_1^{\alpha_1}\cdots x_{\vphantom{1} m}^{\alpha_m}\mathfrak m_J}{\mathfrak m_{J\cup\{j\}}}e^{J\cup\{j\}}. \]
Here we assume that $\frac{x_1^{\alpha_1}\cdots x_{\vphantom{1}m}^{\alpha_{\vphantom{1}m}}\mathfrak m_J}{\mathfrak m_{J\cup\{j\}}}$ is zero if it is not a monomial. The resulting complex
\[ T'(\mathfrak m_1, \ldots, \mathfrak m_t) :\;0\to I^0\to I^1 \to \cdots \to I^t\to 0 \]
is called the \emph{Taylor resolution} of the $\Bbbk\langle m\rangle$-comodule $C(\mathfrak m_1, \ldots, \mathfrak m_t)$. The proof that it is indeed a resolution is given in~\hr[Theorem]{taylor}. \end{construction}
\begin{construction}[Taylor complex of the face coalgebra] Let $\Bbbk\langle\mathcal K\rangle=C\bigl(x_J\ |\ J\in\mathop\mathrm{MF}(\mathcal K)\bigr)$ be the face coalgebra of a simplicial complex~$\mathcal K$. In this case it is convenient to view the $s$-th term $I^s$ in the Taylor resolution as the cofree $\Bbbk\langle m\rangle$-comodule with basis consisting of exterior monomials $w_{J_1}\wedge\cdots\wedge w_{J_s}$, where $J_1,\ldots,J_s$ are different missing faces of~$\mathcal K$. The differential then takes the form
\[ \partial_{\Bbbk\langle\mathcal K\rangle}(x_1^{\alpha_1}\cdots x_{\vphantom{1} m}^{\alpha_m}\cdot w_{J_1}\wedge\dots\wedge w_{J_s})= \sum\limits_{J \neq J_1, \ldots, J_s} \frac{x_1^{\alpha_1}\cdots x_{\vphantom{1} m}^{\alpha_m}} {x_{(J_1\cup\dots\cup J_s\cup J)\setminus (J_1\cup\dots\cup J_s)}} \cdot w_J\wedge w_{J_1}\wedge\dots\wedge w_{J_s} \]
(the sum is taken over missing faces $J\in\mathop\mathrm{MF}(\mathcal K)$ different from $J_1,\ldots,J_s$). After cotensoring with $\Bbbk$ over $\Bbbk\langle m\rangle$ we obtain the \emph{Taylor complex} of $\Bbbk\langle\mathcal K\rangle$ calculating $\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle x_1,\ldots,x_m\rangle}\bigl(\Bbbk\langle\mathcal K\rangle,\Bbbk\bigr)$.
Its $(-s)$th graded component is a free $\Bbbk$-module with basis of exterior monomials $w_{J_1}\wedge\cdots\wedge w_{J_s}$, where $J_1,\ldots,J_s$ are different missing faces of~$\mathcal K$. The differential is given by
\[ \partial_{\Bbbk\langle\mathcal K\rangle}(w_{J_1}\wedge\dots\wedge w_{J_s})= \sum\limits_{J\subset J_1\cup\dots\cup J_s} w_J\wedge w_{J_1}\wedge\dots\wedge w_{J_s} \]
(the sum is over missing faces $J\subset J_1\cup\dots\cup J_s$ different from any of the $J_1,\ldots,J_s$). \end{construction}
We therefore have two methods of calculating $H_*(\zk)=\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle x_1,\ldots,x_m\rangle}\bigl(\Bbbk\langle\mathcal K\rangle,\Bbbk\bigr)$: by resolving $\Bbbk$ (Koszul resolution) or by resolving $\Bbbk\langle\mathcal K\rangle$ (Taylor resolution). The two resulting complexes are related by the chain of quasi-isomorphisms~\eqref{IJres} and \hr[Construction]{koszul_to_taylor}.
\begin{example} Let $\mathcal K$ be the substitution complex $\partial\Delta\big(\partial\Delta(1,2,3), 4,5\big)$, see \hr[Figure]{12345}. After cotensoring the Taylor resolution for $\mathbb{Z}\langle\mathcal K\rangle$ with $\mathbb{Z}$ we obtain the following complex:
\begin{center} \begin{tikzpicture}[every node/.style={midway}] \matrix[column sep=1.3cm,row sep={4cm,between origins}] at (0, 0) { \node(A) {$\mathbb{Z}$}; & \node(B) {$\mathbb{Z}^4$}; & \node(C) {$\mathbb{Z}^6$}; &&&&&&& \node(D) {$\mathbb{Z}^4$}; \\ & \node(E) {$\mathbb{Z}$}; &&&&&&&& \node(G){};\\}; \draw[->] (A) -- (B) node[below] {\small $1\mapsto 0$}; \draw[->] (B) -- (C) node[below=0.1em] {\small $\begin{aligned} &w_{123}\mapsto 0\\ &w_{145}\mapsto 0\\ &w_{245}\mapsto 0\\ &w_{345}\mapsto 0\\ \end{aligned}$}; \draw[->] (C) -- (D) node[below=0.1em] {\small $\begin{aligned} &w_{123}\wedge w_{145}\mapsto \phantom{-} w_{123}\wedge w_{145}\wedge w_{245} + w_{123}\wedge w_{145}\wedge w_{345}\\ &w_{123}\wedge w_{245}\mapsto - w_{123}\wedge w_{145}\wedge w_{245}+ w_{123}\wedge w_{245}\wedge w_{345} \\ &w_{123}\wedge w_{345}\mapsto -w_{123}\wedge w_{145}\wedge w_{345} - w_{123}\wedge w_{245}\wedge w_{345}\\ &w_{145}\wedge w_{245}\mapsto 0\\ &w_{145}\wedge w_{345}\mapsto 0\\ &w_{245}\wedge w_{345}\mapsto 0 \end{aligned} $}; \draw[->,rounded corners] (D.south) -- +(0,-1) |- node[below=0.1em, pos=.8] {\small $\begin{aligned} -w_{123}\wedge w_{145}\wedge w_{245}\wedge w_{345}\mapsfrom w_{123}\wedge w_{145}\wedge w_{245}&\\ \phantom{-}w_{123}\wedge w_{145}\wedge w_{245}\wedge w_{345}\mapsfrom w_{123}\wedge w_{145}\wedge w_{345}&\\ -w_{123}\wedge w_{145}\wedge w_{245}\wedge w_{345}\mapsfrom w_{123}\wedge w_{245}\wedge w_{345}&\\ \phantom{-}w_{123}\wedge w_{145}\wedge w_{245}\wedge w_{345}\mapsfrom w_{145}\wedge w_{245}\wedge w_{345}& \end{aligned} $} (E); \end{tikzpicture} \end{center}
We see that the homology of this complex agrees with the homology of the wedge $(S^5)^{\vee 4}\vee(S^6)^{\vee3}\vee S^7\vee S^8$, in accordance with \hr[Example]{exreal}. \end{example}
\section{Higher Whitehead products and Taylor resolution}
Given an iterated higher Whitehead product $w$, \hr[Lemma]{hurewicz_image} gives a canonical cellular cycle representing the Hurewicz image of~$w$. By \hr[Theorem]{coKos}, this cellular cycle can be viewed as a cycle in the Koszul complex calculating $\mathop{\mathrm{Cotor}}\nolimits^{\Bbbk\langle m\rangle}\bigl(\Bbbk\langle\mathcal K\rangle,\Bbbk\bigr)$.
Here we use \hr[Construction]{koszul_to_taylor} to describe a canonical cycle representing an iterated higher Whitehead product $w$ in the coalgebraic Taylor resolution. This gives a new criterion for the realisability of~$w$.
\begin{theorem}\label{tarep} Let $w$ be a nested iterated higher Whitehead product
\begin{equation}\label{taylor_whitehead_product} w = \Big[\big[\dots\big[[\mu_{i_{11}},\ldots, \mu_{i_{1p_1}}], \mu_{i_{21}},\dots, \mu_{i_{2p_2}}\big], \dots\big], \mu_{i_{n1}},\dots, \mu_{i_{np_n}}\Big]. \end{equation}
Then the Hurewicz image $h(w) \in H_*(\zk) = \mathop\mathrm{Cotor}\nolimits^{\mathbb{Z}\langle m\rangle}( \mathbb{Z}\langle\mathcal K\rangle, \mathbb{Z})$ is represented by the following cycle in the Taylor complex of ${\mathbb{Z}\langle\mathcal K\rangle}$
\begin{equation}\label{taylor_cycle} \bigwedge\limits_{k=1}^n\hspace{3mm}\Biggl(\hspace{1mm}\sum\limits_{\substack{J\in \mathop\mathrm{MF}(\mathcal K) \\ J\setminus \bigl(\bigcup\limits_{j=1}^{n-k} I_j\bigr) =I_{n-k+1} }} \hspace{-5mm}w_J\hspace{1mm}\Biggr), \end{equation}
where $I_k = \{i_{k1}, \dots, i_{kp_k}\}$. \end{theorem}
\begin{proof} Recall from \hr[Construction]{celld} that for a given pair of non-intersecting index sets $I = \{i_1, \dots, i_s\}$ and $J = \{j_1, \dots, j_t\}$ we have a cell
\[ \varkappa(J,I) = D_{i_1}\cdots D_{i_s} S_{j_1}\cdots S_{j_t}. \]
It belongs to $\zk$ whenever $I\in\mathcal K$. Using this notation we can rewrite the canonical cellular chain~$h_c(w)$ from \hr[Lemma]{hurewicz_image_old} as follows:
\begin{equation}\label{chain_new_notation} h_c(w) = \prod\limits_{k = 1}^n \left(\sum\limits_{I \in \partial\Delta(I_k)} \varkappa\big(I_k\setminus I, I\big) \right). \end{equation}
Here and below the sum is over maximal simplices $I \in \partial\Delta(I_k)$ only (otherwise the right hand side above is not a homogeneous element). Now we apply \hr[Construction]{koszul_to_taylor} to \eqref{chain_new_notation}. We obtain the following zigzag of elements in the bicomplex relating the Koszul complex with differential $\partial_\mathbb{Z}$ to the Taylor complex with differential $\partial_{\mathbb{Z}\langle\mathcal K\rangle}$:
\begin{center} \begin{tikzpicture}[xscale=1, yscale=1] \node(A) at (0,4) {$\varkappa(\varnothing, I_1)\prod\limits_{k=2}^{n} \Bigl(\sum\limits_{I \in \partial\Delta(I_k)} \varkappa\big(I_k\setminus I, I\big) \Bigr)$}; \node(B) at (7.8,4) {$\prod\limits_{k = 1}^n \Bigl(\sum\limits_{I \in \partial\Delta(I_k)} \varkappa \big(I_k\setminus I, I\big) \Bigr)$}; \node(C) at (7.8,2) {$\prod\limits_{k=2}^{n} \Bigl(\sum\limits_{I \in \partial\Delta(I_k)} \varkappa\big(I_k\setminus I, I\big) \Bigr)w_{I_1}$}; \node(D) at (0,2) {$\varkappa(\varnothing, I_{2})\prod\limits_{k=3}^{n} \Bigl(\sum\limits_{I \in \partial\Delta(I_k)} \varkappa\big(I_k\setminus I, I\big) \Bigr)w_{I_1}$}; \node(E) at (7.8,0) {$\prod\limits_{k=3}^{n} \Bigl(\sum\limits_{I \in \partial\Delta(I_k)} \varkappa\big(I_k\setminus I, I\big) \Bigr) \Bigl(\sum\limits_{(J\setminus I_1)=I_2}w_J\Bigr)\wedge w_{I_1}$}; \node(F) at (0,0){\hphantom{aaaaaaaaaa}$\cdots$\hphantom{aaaaaaaaaa}}; \path[|->, font=\scriptsize] (A) edge node[above]{$\partial_{\mathbb{Z}}$} (B) (A) edge node[above]{$\partial_{\mathbb{Z}\langle\mathcal K\rangle}$} (C) (D) edge node[above]{$\partial_{\mathbb{Z}}$} (C) (D) edge node[above]{$\partial_{\mathbb{Z}\langle\mathcal K\rangle}$} (E) (F) edge node[above] {$\partial_{\mathbb{Z}}$} (E); \end{tikzpicture} \end{center}
The zigzag ends up precisely at the element~\eqref{taylor_cycle} in the Taylor complex.
\end{proof}
\begin{example} Once again consider the complex $\mathcal K = \partial\Delta(\partial\Delta(1,2,3),4,5)$ shown in \hr[Figure]{12345}. We have $\zk\simeq(S^5)^{\vee 4}\vee(S^6)^{\vee3}\vee S^7\vee S^8$ by~\cite[Example~5.4]{abra}, and each sphere is a Whitehead product. These Whitehead products, together with the representing cycles in the Koszul and Taylor complexes, are shown in \hr[Table]{t1}.
\begin{table}[!h] \renewcommand{\arraystretch}{1.7} \begin{center} \begin{tabular}{|c|c|c|} \hline
\footnotesize{Whitehead product} & \footnotesize{Koszul (cellular) cycle} & \footnotesize{Taylor cycle} \\ \hline
{\footnotesize $[\mu_1, \mu_2, \mu_3]$} & {\footnotesize $D_1D_2S_3 + D_1S_2D_3 + S_1D_2D_3$} & {\footnotesize $w_{123}$} \\ \hline
{\footnotesize $[\mu_1, \mu_4, \mu_5]$} & {\footnotesize $D_1D_4S_5 + D_1S_4D_5 + S_1D_4D_5$} & {\footnotesize $w_{145}$} \\ \hline
{\footnotesize $[\mu_2, \mu_4, \mu_5]$} & {\footnotesize $D_2D_4S_5 + D_2S_4D_5 + S_2D_4D_5$} & {\footnotesize $w_{245}$} \\ \hline
{\footnotesize $[\mu_3, \mu_4, \mu_5]$} & {\footnotesize $D_3D_4S_5 + D_3S_4D_5 + S_3D_4D_5$} & {\footnotesize $w_{345}$} \\ \hline
{\footnotesize $\big[[\mu_1, \mu_4, \mu_5],\mu_2\big]$} & {\footnotesize $(D_1D_4S_5 + D_1S_4D_5 + S_1D_4D_5)S_2$} & {\footnotesize $w_{245}\wedge w_{145}$} \\ \hline
{\footnotesize $\big[[\mu_1, \mu_4, \mu_5],\mu_3\big]$} & {\footnotesize $(D_1D_4S_5 + D_1S_4D_5 + S_1D_4D_5)S_3$} & {\footnotesize $w_{345}\wedge w_{145}$} \\ \hline
{\footnotesize $\big[[\mu_2, \mu_4, \mu_5],\mu_3\big]$} & {\footnotesize $(D_2D_4S_5 + D_2S_4D_5 + S_2D_4D_5)S_3$} & {\footnotesize $w_{345}\wedge w_{245}$} \\ \hline
{\footnotesize $\big[\big[[\mu_1,\mu_4,\mu_5],\mu_2\big],\mu_3\big]$} & {\footnotesize $(D_1D_4S_5 + D_1S_4D_5 + S_1D_4D_5)S_2S_3$} & {\footnotesize $(w_{123} + w_{345})\wedge w_{245}\wedge w_{145}$} \\ \hline
{\footnotesize $\big[[\mu_1,\mu_2,\mu_3],\mu_4,\mu_5\big]$} & {\footnotesize $(D_1D_2S_3 + D_1S_2D_3 + S_1D_2D_3)(D_4S_5+S_4D_5)$} & {\footnotesize $(w_{145}+w_{245}+w_{345}) \wedge w_{123}$} \\ \hline
\end{tabular} \caption{Koszul and Taylor cycles representing Whitehead products}\label{t1} \end{center} \end{table} \end{example}
An important feature of the Taylor cycle~\eqref{taylor_cycle} is that it has the form of a product of sums of generators $w_J$ corresponding to missing faces, and the rightmost factor is a \emph{single} generator~$w_{I_1}$. This can be seen in the right column of \hr[Table]{t1}. Below we give an example of a Taylor cycle which \emph{does not} have this form. It corresponds to a sphere which \emph{is not} a Whitehead product, although the corresponding $\zk$ is a wedge of spheres. This example was discovered in~\cite[\S7]{abra}.
\begin{example} Consider the simplicial complex
\begin{multline*} \mathcal K=\partial\Delta\bigl(\partial\Delta(1,2,3),4,5,6\bigr)\cup\Delta(1,2,3) \\=\bigl(\partial\Delta(1,2,3)*\partial\Delta(4,5,6)\bigr)\cup\Delta(1,2,3)\cup\Delta(4,5,6). \end{multline*}
We have $\zk\simeq(S^7)^{\vee 6}\vee (S^8)^{\vee 6}\vee (S^9)^{\vee 2}\vee S^{10}$, see~\cite[Proposition~7.1]{abra}.
Here is the staircase diagram of \hr[Construction]{koszul_to_taylor} relating the Koszul and Taylor cycles corresponding to~$S^{10}$: \begin{center} \begin{tikzpicture}[xscale=1, yscale=1] \node(A) at (0,4.5) {$D_1D_2D_3(D_4D_5S_6 + D_4S_5D_6 + S_4D_5D_6)$}; \node(B) at (5,6) {$(D_1D_2S_3 + D_1S_2D_3 + S_1D_2D_3)(D_4D_5S_6 + D_4S_5D_6 + S_4D_5D_6)$}; \node(C) at (5,3) {$(D_5S_6 + S_5D_6)w_{1234}+(D_4S_6 + S_4D_6)w_{1235}+(D_4S_5 + S_4D_5)w_{1236}$}; \node(D) at (0,1.5) {$D_5D_6w_{1234}+D_4D_6w_{1235}+D_4D_5w_{1236}$}; \node(E) at (5,0) {$-(w_{1234}+w_{1235}+w_{1236})\wedge(w_{1456}+w_{2456}+w_{3456})$}; \path[|->, font=\scriptsize] (A) edge node[above, near start]{$\partial_{\mathbb{Z}}$} (B) (A) edge node[above, near end]{$\partial_{\mathbb{Z}\langle\mathcal K\rangle}$} (C) (D) edge node[above, near start]{$\partial_{\mathbb{Z}}$} (C) (D) edge node[above, near end]{$\partial_{\mathbb{Z}\langle\mathcal K\rangle}$} (E); \end{tikzpicture} \end{center} We see that the Taylor cycle does not have a factor consisting of a single generator~$w_J$. This reflects the fact that the sphere $S^{10}$ in the wedge is not an iterated higher Whitehead product, see~\cite[Proposition~7.2]{abra}. \end{example} Using the same argument as in the proof of \hr[Theorem]{tarep}, we can write down the Taylor cycle representing the Hurewicz image of an \emph{arbitrary} iterated higher Whitehead product, not only a nested one. The general form of the answer is rather cumbersome though. Instead of writing a general formula, we illustrate it on an example. \begin{example} Consider the substitution complex $\mathcal K = \partial\Delta\big(\partial\Delta(1,2,3), \partial\Delta(4,5,6),7,8\big)$. By~\hr[Theorem]{realisation}, it realises the Whitehead product $w=\big[[\mu_1,\mu_2,\mu_3], [\mu_4,\mu_5,\mu_6],\mu_7,\mu_8\big]$. From the description of the missing faces in \hr[Definition]{substitution_definition} we obtain \begin{align*} \mathop\mathrm{MF}(\mathcal K) = \bigl\{&\Delta(1,2,3), \Delta(4,5,6),\Delta(1,4,7,8),\Delta(1,5,7,8), \Delta(1,6,7,8),\\ &\Delta(2,4,7,8),\Delta(2,5,7,8),\Delta(2,6,7,8),\Delta(3,4,7,8),\Delta(3,5,7,8),\Delta(3,6,7,8)\bigr\}. \end{align*} Applying \hr[Construction]{koszul_to_taylor} to the canonical cellular cycle \[ h_c(w) = (D_1D_2S_3 + D_1S_2D_3 + S_1D_2D_3)(D_4D_5S_6 + D_4S_5D_6 + S_4D_5D_6) (D_7S_8+S_7D_8) \] we obtain the corresponding cycle in the Taylor complex: \[ (w_{1478}+w_{1578}+w_{1678}+w_{2478}+w_{2578}+w_{2678} +w_{3478}+w_{3578}+w_{3678})\wedge w_{456}\wedge w_{123}. \] \end{example}
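The differential of the Taylor complex of the face coalgebra is purely combinatorial, so representing cycles such as those in \hr[Table]{t1} can be checked mechanically. The following Python fragment is an illustrative sketch only (it is not part of the constructions above): a missing face is encoded as a \texttt{frozenset} of vertices, an exterior monomial $w_{J_1}\wedge\dots\wedge w_{J_s}$ as a tuple of distinct missing faces, and the differential prepends $w_J$ and then sorts the factors into a fixed order, counting transpositions to recover the sign.
\begin{verbatim}
# Illustrative sketch: the reduced Taylor differential, for checking cycles.
# A missing face is a frozenset of vertices; an exterior monomial
# w_{J_1} ^ ... ^ w_{J_s} is a tuple of distinct missing faces; a chain is a
# dict mapping such tuples to integer coefficients.

def normalise(faces):
    """Sort the factors of a wedge monomial; return (sign, canonical tuple)."""
    faces, sign = list(faces), 1
    for i in range(1, len(faces)):           # insertion sort, counting swaps
        j = i
        while j > 0 and sorted(faces[j]) < sorted(faces[j - 1]):
            faces[j - 1], faces[j] = faces[j], faces[j - 1]
            sign, j = -sign, j - 1
    return sign, tuple(faces)

def differential(chain, missing_faces):
    """d(w_{J_1}^...^w_{J_s}) = sum of w_J ^ w_{J_1} ^ ... ^ w_{J_s} over
    missing faces J contained in J_1 u ... u J_s and distinct from them."""
    result = {}
    for monomial, coeff in chain.items():
        support = set().union(*monomial)      # J_1 u ... u J_s
        for J in missing_faces:
            if J <= support and J not in monomial:
                sign, m = normalise((J,) + monomial)   # w_J goes in front
                result[m] = result.get(m, 0) + sign * coeff
    return {m: c for m, c in result.items() if c != 0}

MF = [frozenset(f) for f in ({1, 2, 3}, {1, 4, 5}, {2, 4, 5}, {3, 4, 5})]
w123, w145, w245, w345 = MF

# d^2 = 0 on a generator of the example complex above:
assert differential(differential({(w123, w145): 1}, MF), MF) == {}
# the cycle (w123 + w345) ^ w245 ^ w145 representing [[[mu1,mu4,mu5],mu2],mu3]:
assert differential({(w123, w245, w145): 1, (w345, w245, w145): 1}, MF) == {}
\end{verbatim}
Running the fragment confirms that $\partial^2=0$ on a generator and that the cycle $(w_{123}+w_{345})\wedge w_{245}\wedge w_{145}$ from \hr[Table]{t1} is closed; the other cycles in this section can be checked in the same way, given the corresponding sets of missing faces.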
\section{INTRODUCTION}
A text is not just a sequence of words; it has a coherent structure. The meaning of each word cannot be determined until it is placed in the structure of the text. Recognizing the structure of a text is an essential task in text understanding, especially in resolving anaphora and ellipsis. One of the constituents of the text structure is a text segment. A text segment, whether or not it is explicitly marked, as are sentences and paragraphs, is defined as a sequence of clauses or sentences that display local coherence. It resembles a scene in a movie, which describes the same objects in the same situation.
This paper proposes an indicator, called the lexical cohesion profile (LCP), which locates segment boundaries in a narrative text. LCP is a record of the lexical cohesiveness of words in a sequence of text. Lexical cohesiveness is defined as word similarity (Kozima and Furugori, 1993) computed by spreading activation on a semantic network. Hills and valleys of LCP closely correlate with segment changes.
\section{SEGMENTS AND COHERENCE}
Several methods to capture segment boundaries have been proposed in studies of text structure. For example, cue phrases play an important role in signaling segment changes (Grosz and Sidner, 1986). However, such clues are not directly based on the coherence which binds clauses or sentences into a segment.
Youmans (1991) proposed VMP (vocabulary management profile) as an indicator of segment boundaries. VMP is a record of the number of new vocabulary terms introduced in an interval of text. However, VMP does not work well on a high-density text. The reason is that the coherence of a segment should be determined not only by reiteration of words but also by lexical cohesion.
Morris and Hirst (1991) used Roget's thesaurus to determine whether or not two words have lexical cohesion. Their method can capture almost all the types of lexical cohesion, e.g.\ systematic and non-systematic semantic relations. However, it does not deal with the strength of cohesiveness, which indicates the degree of contribution to the coherence of the segment.
\subsection{Computing Lexical Cohesion}
Kozima and Furugori (1993) defined lexical cohesiveness as semantic similarity between words, and proposed a method for measuring it. Similarity between words is computed by spreading activation on a semantic network which is systematically constructed from an English dictionary (LDOCE). The similarity $\sigma(w,w') \!\in\! [0,1]$ between words $w,w'$ is computed in the following way: (1) produce an activated pattern by activating the node $w$; (2) observe the activity of the node $w'$ in the activated pattern. The following examples illustrate the behavior of the similarity $\sigma$:
\begin{tabbing} \hspace{8mm}\=\hspace{22mm}\=\hspace{22mm}\=\hspace{5mm}\=\kill \> $\sigma$ ({\tt cat}, \> {\tt pet}) \> = \> 0.133722 , \\ \> $\sigma$ ({\tt cat}, \> {\tt hat}) \> = \> 0.001784 , \\ \> $\sigma$ ({\tt waiter}, \> {\tt restaurant}) \> = \> 0.175699 , \\ \> $\sigma$ ({\tt painter}, \> {\tt restaurant}) \> = \> 0.006260 . \end{tabbing}
The similarity $\sigma$ depends on the significance $s(w) \!\in\! [0,1]$, i.e.\ the normalized information of the word $w$ in West's corpus (1953). For example:
\begin{tabbing} \hspace{8mm}\=\hspace{34mm}\=\kill \> s({\tt red}) = 0.500955 , \> s({\tt and}) = 0.254294 . \end{tabbing}
The following examples show the relationship between the word significance and the similarity:
\begin{tabbing} \hspace{8mm}\=\hspace{22mm}\=\hspace{22mm}\=\hspace{5mm}\=\kill \> $\sigma$ ({\tt waiter}, \> {\tt waiter}) \> = \> 0.596803 , \\ \> $\sigma$ ({\tt red}, \> {\tt blood}) \> = \> 0.111443 , \\ \> $\sigma$ ({\tt of}, \> {\tt blood}) \> = \> 0.001041 . \end{tabbing}
\section{LEXICAL COHESION PROFILE}
LCP of the text $T \!=\! \{w_1, \!\cdots\!, w_N\}$ is a sequence $\{c(S_1), \!\cdots\!, c(S_N)\}$ of lexical cohesiveness $c(S_i)$. $S_i$ is the word list which can be seen through a fixed-width window centered on the $i$-th word of $T$:
\begin{tabbing}\hspace{5mm}\=\hspace{3mm}\=\hspace{3mm}\= \hspace{20mm}\=\kill \> $S_i$=$\{w_{l},w_{l+1},\cdots,w_{i-1},w_{i}, w_{i+1},\cdots,w_{r-1},w_{r}\}$, \\ \> \> $l$ \> = $i \!-\! \Delta$ \> (if $i \!\leq\! \Delta$, then $l \!=\! 1$),\\ \> \> $r$ \> = $i \!+\! \Delta$ \> (if $i \!>\! N \!-\! \Delta$, then $r \!=\! N$). \end{tabbing}
LCP treats the text $T$ as a word list without any punctuation or paragraph boundaries.
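Computing LCP is thus a single pass over the text with a moving window. The following Python fragment is an illustrative sketch only (it is not the implementation used in this work); the function {\tt cohesiveness} stands in for $c(S_i)$, which is defined in the next subsection, and {\tt delta} is the half-width $\Delta$ of the window.
\begin{verbatim}
def lcp(text, cohesiveness, delta):
    """Return the sequence {c(S_1), ..., c(S_N)} for the word list `text`."""
    n = len(text)
    profile = []
    for i in range(1, n + 1):             # words are numbered 1..N as above
        l = 1 if i <= delta else i - delta
        r = n if i > n - delta else i + delta
        window = text[l - 1:r]            # S_i = {w_l, ..., w_r}
        profile.append(cohesiveness(window))
    return profile
\end{verbatim}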
\begin{figure}[ht]
\begin{center}
% The original source draws this figure with a large LaTeX picture environment
% (several hundred \drawline commands), omitted here. Axes: activity (0.1--0.4)
% against steps (2--10); plotted nodes include alcohol_1, drink_1, red_2,
% drink_2, red_1, bottle_1, wine_1, poison_1, swallow_1, spirit_1.
{\bf Figure 1.} \ An activated pattern of a word list \\ (produced from \{{\tt red}, {\tt alcoholic}, {\tt drink}\}).
\end{center}
\end{figure}
\subsection{Cohesiveness of a Word List}
Lexical cohesiveness $c(S_i)$ of the word list $S_i$ is defined as follows: \vspace{-1mm}
\begin{center} $c(S_i) = \sum_{w \in S_i} s(w) \!\cdot\! a(P(S_i),w)$ , \end{center} \vspace{-1mm}
where $a(P(S_i),w)$ is the activity value of the node $w$ in the activated pattern $P(S_i)$. $P(S_i)$ is produced by activating each node $w \!\in\! S_i$ with strength $s(w)^2 / \sum s(w)$. Figure 1 shows a sample pattern of \{{\tt red}, {\tt alcoholic}, {\tt drink}\}. (Note that it has highly activated nodes like {\tt bottle} and {\tt wine}.)
The definition of $c(S_i)$ above means that $c(S_i)$ represents the semantic homogeneity of $S_i$, since $P(S_i)$ represents the average meaning of the words $w \!\in\! S_i$. For example:
\begin{tabbing} \hspace{5mm}\=\hspace{2mm}\=\hspace{2mm}\=\hspace{55mm}\=\hspace{5mm}\=\kill \>$c$\>(\> {\tt "Molly saw a cat. It was her family} \\ \> \> \> {\tt pet.
She wished to keep a lion."} \\ \> \> = 0.403239 \ \ (cohesive), \end{tabbing}
\begin{tabbing}
\hspace{5mm}\=\hspace{2mm}\=\hspace{2mm}\=\hspace{55mm}\=\hspace{5mm}\=\kill
\>$c$\>(\> {\tt "There is no one but me. Put on }\\
\> \> \> {\tt your clothes. I can not walk more."} \\
\> \> = 0.235462 \ \ (not cohesive).
\end{tabbing}
\begin{figure}[t]\begin{center}\begin{picture}(70, 43)
\thicklines
\drawline( 10.0000, 10.0000)( 70.0000, 10.0000)
\drawline( 10.0000, 10.0000)( 10.0000, 45.0000)
\thinlines
\multiputlist( 8.0000, 41.0000)(0, 0)[r] {\footnotesize LCP}
\multiputlist( 70.0000, 4.0000)(0, 0)[rt]{\small words}
\multiputlist( 13.5000, 6.0000)(4, 0) {$\circ$, $\circ$, $\circ$, $\circ$, $\circ$, $\circ$, $\circ$, $\circ$, $\circ$, $\bullet$, $\bullet$, $\bullet$, $\bullet$, $\bullet$ }
\drawline(16,8)(36,8)(36,4)(16,4)(16,8)
\dottedline{0.5}(26,8)(26,35)
\drawline(40,8)(60,8)(60,4)(40,4)(40,8)
\dottedline{0.5}(50,8)(50,20)
\drawline(10,35)(38,35)
\bezier200(38,35)(42,35)(46,20)
\bezier200(46,20)(48,10)(50,20)
\bezier200(50,20)(54,35)(58,35)
\drawline(58,35)(70,35)
\end{picture}\\
{\bf Figure 2.} \ \vspace{-2.5mm}
\begin{minipage}[t]{40mm}
Correlation between LCP\\ and text segments.
\vspace{-1mm}
\end{minipage}\end{center}\end{figure}
\begin{figure}
\begin{center}
[Plot omitted: LCP (vertical axis, 0.3--0.6) versus word position $i$ (horizontal axis, 0--500 words); the curve shows large hills and valleys with superimposed noise.]\\
{\bf Figure 3.} \ An example of LCP \\
\ \ (using rectangular window of $\Delta$=25)
\end{center}
\end{figure}
\subsection{LCP and Its Feature}

A graph of LCP, which plots $c(S_i)$ against the text position $i$, indicates where segments change:
\begin{itemizing}
\item If $S_i$ lies inside a segment, it tends to be cohesive, which makes $c(S_i)$ high.
\item If $S_i$ crosses a segment boundary, its words tend to vary semantically, which makes $c(S_i)$ low.
\end{itemizing}
As shown in Figure 2, segment boundaries can therefore be detected as the valleys (minimum points) of LCP.

The actual LCP, shown in Figure 3, has large hills and valleys, but also meaningless noise. The raw graph is so complicated that one cannot easily determine which valleys should be considered segment boundaries.
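To make the valley-picking step concrete, the following minimal sketch (in Python; the {\tt prominence} threshold, the function name, and the toy data are illustrative assumptions, not the procedure actually used in this work) selects the local minima of a precomputed LCP sequence that are markedly deeper than the surrounding noise:
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

# lcp[i] is assumed to hold the cohesiveness c(S_i) of the window
# centred at word position i.  Valleys of the LCP are local minima,
# i.e. peaks of the negated curve; the prominence threshold discards
# shallow wiggles that come from noise rather than from a topic shift.
def candidate_boundaries(lcp, prominence=0.02):
    valleys, _ = find_peaks(-np.asarray(lcp), prominence=prominence)
    return valleys

# Toy usage: a noisy curve with genuine dips near i = 100 and i = 250.
i = np.arange(400)
toy_lcp = (0.5
           - 0.1 * np.exp(-((i - 100) / 15.0) ** 2)
           - 0.1 * np.exp(-((i - 250) / 15.0) ** 2)
           + 0.005 * np.sin(0.3 * i))
print(candidate_boundaries(toy_lcp))   # indices close to 100 and 250
\end{verbatim}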
\begin{figure*}[t]
\begin{center}
[Plot omitted: LCP (left axis, 0.3--0.7) versus word position $i$ (horizontal axis, 0--700 words) for the text used in the experiment; vertical bars (right axis, 0--16, ``Segmentations'') give the histogram of segment boundaries reported by the subjects, and dotted vertical lines mark the original paragraph boundaries.]\\
\ \ \ {\bf Figure 4.} \ Correlation between LCP and segment boundaries.
\end{center}
\end{figure*}
The shape of the window, which defines the weight given to each word in it for pattern production, makes the LCP smooth. Experiments with several window shapes (e.g.\ a triangle window) show that the Hanning window is best for clarifying the macroscopic features of LCP. The width of the window also affects the macroscopic features of LCP, especially the separability of segments. Experiments with several window widths ($\Delta \!=\! 5 \sim 60$) reveal that a Hanning window of $\Delta \!=\! 25$ gives the best correlation between LCP and segments.

\section{VERIFICATION OF LCP}

This section inspects the correlation between LCP and the segment boundaries perceived by human judges. The curve in Figure 4 shows the LCP of the simplified version of O.~Henry's ``Springtime \`a la Carte'' (Thornley, 1960). The solid bars represent the histogram of segment boundaries reported by 16 subjects who read the text without its paragraph structure. It is clear that the valleys of the LCP correspond mostly to the dominant segment boundaries. For example, the clear valley at $i \!=\! 110$ exactly corresponds to the dominant segment boundary (and also to the paragraph boundary shown as a dotted line). Note that LCP can detect segment changes of a text regardless of its paragraph structure. For example, $i \!=\! 156$ is a paragraph boundary, but neither a valley of the LCP nor a segment boundary; \ $i \!=\! 236$ is both a segment boundary and approximately a valley of the LCP, but not a paragraph boundary.
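For reference, the Hanning weighting mentioned above can be written down explicitly. The snippet below (Python) is only a sketch: the identification of $\Delta$ with the number of words covered by the window, and the normalisation of the weights, are assumptions, not details taken from the experiments.
\begin{verbatim}
import numpy as np

DELTA = 25                       # window parameter quoted in the text

rect = np.ones(DELTA) / DELTA    # rectangular window: equal weight
                                 # for every word in the window
hann = np.hanning(DELTA)         # Hanning: w_k = 0.5*(1 - cos(2*pi*k/(DELTA-1)))
hann = hann / hann.sum()         # normalised to sum to one

# Words near the centre of the window dominate and words near the
# edges are damped, which is what smooths the LCP curve compared
# with the rectangular window used for Figure 3.
print(np.round(hann, 3))
\end{verbatim}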
However, some valleys of the LCP do not correspond exactly to segment boundaries. For example, the valley near $i \!=\! 450$ disagrees with the segment boundary at $i \!=\! 465$. The reason is that lexical cohesion cannot cover every aspect of the coherence of a segment; \ an incoherent piece of text can still be lexically cohesive.

\section{CONCLUSION}

This paper proposed LCP, an indicator of segment change that concentrates on the lexical cohesion of a text segment. The experiment showed that LCP correlates closely with the segment boundaries captured by human judges, and that lexical cohesion plays a main role in forming a sequence of words into segments.

The text segmentation described here provides basic information for text understanding:
\begin{itemizing}
\item Resolving anaphora and ellipsis: \\
Segment boundaries provide a valuable restriction on the determination of referents.
\item Analyzing text structure: \\
Segment boundaries can be regarded as segment switching (push and pop) in the hierarchical structure of a text.
\end{itemizing}
The segmentation can also be applied to text summarization. (Consider a list of the average meanings of the segments.)

In future research, the author needs to examine the validity of LCP for other genres --- Hearst (1993) segments expository texts. Incorporating other clues (e.g.\ cue phrases, tense and aspect) is also needed to make this segmentation method more robust.

\section{ACKNOWLEDGMENTS}

The author is very grateful to Dr.~Teiji Furugori, University of Electro-Communications, for his insightful suggestions and comments on this work.

\section{REFERENCES}
\section{HIGH ENERGY SCATTERING IN THE REGGE LIMIT}
Customary lattice methods are considered suitable for the description of the static, or low energy, nonperturbative properties of hadrons. Even though some progress has been made in studying scattering lengths, form factors and the deep inelastic structure functions, the finite ultraviolet cut-off $a^{-1}\sim 2\,\mbox{GeV}$ limits the application of classical lattice techniques mainly to static phenomena. However, there exists an intriguing mapping between the description of high energy scattering and the one dimensional Heisenberg chain, which offers the possibility of using the methods of statistical physics to study genuine high energy scattering.

Typical high energy scattering is described by the Regge amplitude
\begin{equation}
A(s,t)\sim s^{\alpha(t)},\;\;\;\;s \gg -t,
\end{equation}
where $s$ and $t$ are the Mandelstam variables and $\alpha(t)$ describes the Regge trajectory exchanged in the $t$ channel. Phenomenologically, Regge trajectories are linear,
\begin{equation}
\alpha(t)=\alpha_0+\alpha' t,
\end{equation}
with the intercept, $\alpha_0$, and the slope, $\alpha'$, depending on the quantum numbers exchanged in the $t$ channel. The total cross section
\begin{equation}
\sigma_{tot}={1\over s} Im A_{el}(s,0),
\end{equation}
is given in terms of the elastic scattering amplitude in the forward direction. The latter is described by the exchange of the Pomeranchuk trajectory with the vacuum quantum numbers. Unitarity restricts the growth of the cross sections. The Froissart bound
\begin{eqnarray}
\sigma_{tot} \leq const. \log^2{(s)},
\end{eqnarray}
limits the intercept to $\alpha_0 \leq 1$. Regge behaviour also occurs in deep inelastic scattering at small $x$,
\begin{eqnarray}
F(x,Q^2)={1\over s}Im A_{\gamma^{\star}P}(s,t=0) && \\
s={Q^2\over x},\;\;\; x={Q^2\over 2M\nu}, \;\;\; x \ll 1 . & &
\end{eqnarray}
Again unitarity limits the growth of the structure functions at small $x$,
\begin{equation}
F(x,Q^2) \leq const. \log^2{(x)}.
\end{equation}
The emergence of Regge behaviour is qualitatively well understood as the result of the exchange of ladder diagrams \cite{pol}. The quantitative derivation of the Pomeron exchange in QCD was pioneered by Lipatov and is presently actively investigated \cite{LipRev,FadKor94,Kor95}.

\section{POMERON IN QCD}
Working in the leading logarithmic approximation, $\alpha_s\ll 1,\;\;\; \alpha_s \log{(s/M^2)} \sim 1$, Lipatov and others \cite{KLFBL} have identified and summed the relevant class of QCD ladder diagrams in the Regge limit. In addition to the simple ladder diagrams, other diagrams also contribute, and the whole class is commonly referred to as the exchange of two reggeized gluons. The result \cite{KLFBL} reads
\begin{eqnarray}
A_{LLA}\sim s^{1+\Delta_{BFKL}(t)},&& \\
\Delta_{BFKL}(0)={4 \alpha_s N_c\over \pi}\log{(2)}. && \label{al}
\end{eqnarray}
Therefore QCD reveals the Regge behaviour of the elastic amplitude, and the intercept of the Pomeranchuk trajectory is known. However, the amplitude (\ref{al}) violates unitarity, hence a lot of subsequent work has been devoted to finding unitarity corrections to Eq.~(\ref{al}). In particular, the scheme termed the generalized leading logarithmic approximation (GLLA) was proposed \cite{Bar80,CheDick81}, which identifies the non-leading contributions that should restore unitarity.
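For orientation, here is an illustrative numerical estimate (our own, not taken from the references above): inserting a representative value $\alpha_s\simeq 0.2$ and $N_c=3$ into Eq.~(\ref{al}) gives
\begin{equation*}
\Delta_{BFKL}(0)={4\alpha_s N_c\over \pi}\log 2\simeq {4\times 0.2\times 3\over \pi}\times 0.69\approx 0.53,
\qquad A_{LLA}\sim s^{1.5},
\end{equation*}
an intercept well above the Froissart-compatible value $\alpha_0\leq 1$, which makes the unitarity problem mentioned above quantitative.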
In the GLLA the Mellin transform $\tilde{A}(\omega,t)$ of the scattering amplitude,
\begin{equation}
A(s,t) = is\int_{\delta-i\infty}^{\delta+i\infty} {d\omega\over 2\pi i} \left({s\over M^2}\right)^{\omega} \tilde{A}(\omega,t), \label{mel}
\end{equation}
is given as a sum of contributions $\tilde{A}_n$ from the exchange of $n$ reggeons (reggeized gluons) in the $t$ channel. For fixed $n$ they are given by the evolution operator $T_n(\omega)$,
\begin{eqnarray*}
\lefteqn{ \tilde{A}_n(\omega,t) =\int d^2k_1\dots d^2k_n d^2k_{1}'\dots d^2k_{n}' } \\
&& \Phi_A(\{k\};q) T_n(\{k\},\{k'\};\omega) \Phi_B(\{k'\};q),
\end{eqnarray*}
where $\Phi_{A(B)}$ are the wave functions of the hadron $A(B)$ in the reggeon basis. The evolution operator conserves the number of reggeons and satisfies the following Lippmann-Schwinger equation
\begin{equation}
\omega T_n(\omega)=T_n^0(\omega) + {\cal{H}}_n T_n(\omega),
\end{equation}
which can be solved symbolically as
\begin{equation}
T_n(\omega)={1\over \omega-{\cal{H}}_n } T_n^0(\omega). \label{sol}
\end{equation}
Comparing Eqs.~(\ref{sol}) and (\ref{mel}) we see that the largest eigenvalue of the $n$-reggeon Hamiltonian determines the high energy behaviour of the scattering amplitude. Moreover, in the large $N_c$ limit only planar diagrams give the leading contribution, and consequently ${\cal{H}}_n$ describes a system of $n$ degrees of freedom with only nearest-neighbour interactions.

At this point the analogy with statistical systems emerges. It becomes even more appealing after transforming to the impact parameter representation, $\vec{k_i} \rightarrow \vec{b}_i$, and after replacing the transverse variables by complex coordinates, $\vec{b}_i=(x_i,y_i)\rightarrow z_i=x_i+i y_i, \overline{z}_i=x_i-i y_i$. It was shown \cite{Lip93JETP,FadKor94} that the resulting Hamiltonian describes a linear chain of quantum spins with nearest-neighbour interactions, in complete analogy to the Heisenberg chain of ordinary spins. There are two complications, however, which make this problem nontrivial and challenging. First, the spin operators in question are infinite dimensional. In fact they generate the $s=0$ unitary representation of the $SL(2,C)$ group. Second, the nearest-neighbour interaction is nonlocal in the $z,\overline{z}$ space.

In spite of these complications, Faddeev and Korchemsky \cite{FadKor94}, using the generalized Bethe ansatz, have confirmed Lipatov's hypothesis \cite{Lip90} that the $n$-reggeon problem is holomorphically separable and, in principle, exactly solvable. The $n=2$ Hamiltonian was diagonalized explicitly and the BFKL intercept, Eq.~(\ref{al}), was reproduced.

\section{THE ODDERON CASE}
The $n=3$ case, which corresponds to the phenomenologically important odderon problem, has been solved only partially. Analytical results are available for integer values ($h=n$) of the parameter $h$ in the Casimir operator of two spins, $\hat{q}_2=(s_i+s_j)^2=-h(h-1)$. Also the asymptotic expansion of the maximal eigenvalue of ${\cal{H}}_3$ in $1/h$ was derived. All these results are based on the Bethe ansatz approach.

We have studied the solution $Q_3(z)$ of the corresponding Yang-Baxter equation by analogy with the standard theory of Fuchsian equations of the second order. Even though the Yang-Baxter equation for $Q_3(z)$ is of the third order, the elements of the general theory apply. In particular, we have found that the case considered corresponds to the degenerate situation with the characteristic exponent $r=0$.
Hence, the equation has one regular and two irregular solutions around $z=0$. This is independent of the eigenvalues of the two Casimir operators $\hat{q}_2$ and $\hat{q}_3$ which parametrize the problem. The coefficients $f_n$ of the power expansion
\begin{equation}
Q_3(z)=\sum_n f_n z^n, \label{pow}
\end{equation}
are determined by the following recursion relation
\begin{eqnarray}
(n+1)^3 f_{n+1}= && \nonumber \\
\left( n((2n+1)(n+1)+q_2)+iq_3 \right) f_n && \nonumber \\
-(n-1)(n(n+1)+q_2)f_{n-1}.&& \label{rec}
\end{eqnarray}
This recursion is equivalent to the one derived in Ref.~\cite{FadKor94}, where the expansion in terms of Legendre polynomials was studied. In particular, for integer $h$, $Q_3(z)$ is a finite polynomial in $z$ only for discrete values of $q_3$. The quantization condition for $q_3$ which follows from Eq.~(\ref{rec}) is the same as the one derived in Ref.~\cite{Kor95}. The straightforward power expansion (\ref{pow}) may, however, prove more suitable for the study of the analytic structure of the solution and in consequence may allow the analytic continuation to arbitrary~$h$. (A small numerical sketch of the recursion (\ref{rec}) is given at the end of this contribution.)

\section{LATTICE FORMULATION}
In closing we would like to comment on the feasibility of applying lattice Monte Carlo methods to solve this problem numerically. Contrary to standard applications, physically interesting questions arise already for a {\em finite} number of exchanged reggeons, i.e.\ for a finite length of the Heisenberg chain, say $n=3$. However, the volume of the transverse parameter space is infinite, hence the usual difficulties associated with the infinite volume limit would arise. In fact, the Hamiltonian ${\cal{H}}_n$ has global conformal invariance, and therefore critical slowing down is expected to occur. The challenge, then, is how to simulate a conformal theory effectively on the lattice.

The nearest-neighbour interaction is nonlocal, which also complicates practical applications. Since the system is critical, however, one is bound to employ nonlocal (cluster) algorithms, so the nonlocality of ${\cal{H}}_n$ may not be the main problem. The choice of representation may help in designing a practical approach. The ``$z$'' representation is one possibility. However, one may try to employ the $SL(2,C)$ symmetry to simplify the problem. Note that one of the analytical solutions of the Pomeron $n=2$ case uses solely the group theoretical information on the spectrum of the unitary representations of the $SL(2,C)$ group.

Finally, even though the problem is certainly far from trivial, we should be aware that the $n=2$ case was solved analytically, hence the spectrum of excitations of the elementary interaction is known. Combining this knowledge with the recent progress in simulating critical systems may lead to a realistic proposal for a numerical approach.

\vspace*{.75 cm}
J. W. thanks G. Korchemsky, L. Lipatov and M. A. Nowak for discussions.
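As a small illustrative aside (not part of the original discussion): the recursion (\ref{rec}) is straightforward to iterate numerically, which is one way to tabulate truncated approximations to $Q_3(z)$ and to explore the quantization of $q_3$ mentioned above. A minimal sketch, assuming the normalization $f_0=1$, $f_{-1}=0$, might read:
\begin{verbatim}
# Iterate the recursion (rec) for the coefficients f_n of
# Q_3(z) = sum_n f_n z^n, for given (complex) charges q2, q3.
# The normalization f_0 = 1, f_{-1} = 0 is an assumption of this
# sketch; nmax simply truncates the series.
def q3_coefficients(q2, q3, nmax=40):
    f = [1.0 + 0j]          # f_0
    f_prev = 0.0 + 0j       # f_{-1}
    for n in range(nmax):
        rhs = (n * ((2*n + 1) * (n + 1) + q2) + 1j * q3) * f[n] \
              - (n - 1) * (n * (n + 1) + q2) * f_prev
        f_prev = f[n]
        f.append(rhs / (n + 1)**3)
    return f

def q3_series(z, q2, q3, nmax=40):
    # Truncated evaluation of Q_3(z) near z = 0.
    return sum(c * z**n for n, c in enumerate(q3_coefficients(q2, q3, nmax)))
\end{verbatim}
For integer $h$, one can set $q_2=-h(h-1)$ and scan $q_3$ for the values at which the coefficients eventually vanish, i.e.\ for which $Q_3(z)$ truncates to a polynomial; this reproduces numerically the quantization condition discussed above.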
\section{Introduction.} In this paper we study algebraic super curves with a view towards applications to super Kadomtsev-Petviashvili hierarchies (SKP). We deal from the start with super curves $X$ over a nonreduced ground ring $\Lambda$, i.e., our curves carry global nilpotent constants. This has as an advantage, compared to super curves over the complex numbers $\mathbb C$, that our curves can be nonsplit, but this comes at the price of some technical complications. The main problem is that the cohomology groups of coherent sheaves on our curves should be thought of as finitely generated modules over the ground ring $\Lambda$, instead of vector spaces over $\mathbb C$. In general these modules are of course not free. Still we have in this situation Serre duality, as explained in Appendix \ref{app:dualSerredual}, the dualizing sheaf being the (relative) berezinian sheaf $\Berx$. In applications to SKP there occurs a natural class of super curves that we call generic SKP curves. For these curves the most important sheaves, the structure sheaf and the dualizing sheaf, have free cohomology. In the later part of the paper we concentrate on these curves. Super curves exhibit a remarkable duality uncovered in \cite{DoRoSc:SupModSpaces}. The projectivized cotangent bundle of any $N=1$ super curve has the structure of an $N=2$ super Riemann surface (SRS), and super curves come in dual pairs $X,\hat{X}$ whose associated $N=2$ SRSs coincide. Further, the ($\Lambda$-valued) points of a super curve can be identified with the effective divisors of degree 1 on its dual. Ordinary $N=1$ SRSs, widely studied in the context of super string theory, are self dual under this duality. By the resulting identification of points with divisors they enjoy many of the properties that distinguish Riemann surfaces from higher-dimensional varieties. By exploiting the duality we extend this good behaviour to all super curves. In particular we define for all super curves a contour integration for sections of $\Berx$, the holomorphic differentials in this situation. The endpoints of a super contour turn out to be not $\Lambda$-points of our super curve, but rather irreducible divisors, i.e., $\Lambda$-points on the dual curve! For SRSs these notions are the same and our integration is a generalization of the procedure already known for SRSs. We use this to prove Riemann bilinear relations, connecting in this situation periods of holomorphic differentials on our curve $X$ with those on its dual curve. In case the cohomology of the structure sheaf is free, e.g. if $X$ is a generic SKP curve, we can define a period matrix and use this to define the Jacobian of $X$ as the quotient of a super vector space by a lattice generated by the period matrix. In this case the Jacobian is a smooth supermanifold. A key question is whether the Jacobian of a generic SKP curve admits ample line bundles (and hence embeddings in projective super space), whose sections could serve as the super analog of theta functions. We show that the symmetry of the even part of the period matrix (together with the automatic positivity of the imaginary part of the reduced matrix) is sufficient for this, and construct the super theta functions in this case. We derive some geometric necessary and sufficient conditions for this symmetry to hold, but it is not an automatic consequence of the Riemann bilinear period relations in this super context. Neither do we know an explicit example in which the symmetry fails. 
The usual proof that symmetry of the period matrix is necessary for existence of a (principal) polarization also fails because crucial aspects of Hodge theory, particularly the Hodge decomposition of cohomology, do not hold for supertori. The motivation for writing this paper was our wish to generalize the theory of the algebro-geometric solutions to the KP hierarchy of nonlinear PDEs, as described in \cite{SeWi:LpGrpKdV} and references therein, to the closest supersymmetric analog, the ``Jacobian" super KP hierarchy of Mulase and Rabin \cite{Mu:Jac,Ra:GeomSKP}. In the super KP case the geometric data leading to a solution include a super curve $X$ and a line bundle $\mathcal L$ with vanishing cohomology groups over $X$. For such a line bundle to exist the super curve $X$ must have a structure sheaf $\mathcal O_X$ such that the associated split sheaf ${\mathcal O}_{X}\spl$, obtained by putting the global nilpotent constants in $\Lambda$ equal to zero, is a direct sum ${\mathcal O}_{X}\spl={\mathcal O}_{X}^{\text{red}}\oplus \mathcal N$, where ${\mathcal O}_{X}^{\text{red}}$ is the structure sheaf of the underlying classical curve $X^{\text{red}}$ and $\mathcal N$ is an invertible ${\mathcal O}_{X}^{\text{red}}$-sheaf of degree zero. We call such an $X$ an SKP curve, and if moreover $\mathcal N$ is not isomorphic to ${\mathcal O}_{X}^{\text{red}}$ we call $X$ a generic SKP curve. The Jacobian SKP hierarchy describes linear flows $\mathcal L(t_i)$ on the Jacobian of $X$ (with even and odd flow parameters). The other known SKP hierarchies, of Manin--Radul \cite{ManRad:SusyextKP} and Kac--van de Leur \cite{KavdL:SuperBoson} , describe flows on the universal Jacobian over the moduli space of super curves, in which $X$ as well as $\mathcal L$ vary with the $t_i$ \cite{Ra:GeomSKP}. These are outside the scope of this paper, although we hope to return to them elsewhere. As in the non-super case, the basic objects in the theory are the (even and odd) Baker functions, which are sections of $\mathcal L(t_i)$ holomorphic except for a single simple pole, and a tau function which is a section of the super determinant (Berezinian) bundle over a super Grassmannian $\mathcal S\text{gr}$. In contrast to the non-super case, we show that the Berezinian bundle has trivial Chern class, reflecting the fact that the Berezinian is a ratio of ordinary determinants. The super tau function descends, essentially, to $\operatorname{Jac}(X)$ as a section of a bundle with trivial Chern class also, and can be rationally expressed in terms of super theta functions when these exist (its reduced part is a ratio of ordinary tau functions). We also obtain a formula for the even and odd Baker functions in terms of the tau function, confirming that one must know the tau function for the more general Kac--van de Leur flows to compute the Baker functions for even the Jacobian flows in this way, cf. \cite{DoSc:SuperGrass,Takama:GrassmannSKP}. For this we need a slight extension of Cramer's rule for solving linear equations in even and odd variables, which is developed in an Appendix via the theory of quasideterminants. In another Appendix we use the Baker functions found in \cite{Ra:SupElliptic} for Jacobian flow in the case of super elliptic curves to compute the corresponding tau function. Among the problems remaining open we mention the following. First, to obtain a sharp criterion for when a super Jacobian admits ample line bundles --- perhaps always? 
Second, the fact that generic SKP curves have free cohomology is a helpful simplification which allows us to represent their period maps by matrices and results in their Jacobians being smooth supermanifolds. However, our results should generalize to arbitrary super curves with more singular Jacobians. Finally, one should study the geometry of the universal Jacobian and extend our analysis to the SKP system of Kac--van de Leur. \section{Super curves and their Jacobians.} \subsection{Super curves.} Fix a Grassmann algebra $\Lambda$ over $\mathbb C$; for instance we could take $\Lambda=\mathbb C\,[\beta_1,\beta_2,\dots,\beta_n]$, the polynomial algebra generated by $n$ odd indeterminates. Let $(\bullet,\Lambda)$ be the super scheme $\operatorname{Spec} \Lambda $, with underlying topological space a single point $\bullet$. A smooth compact connected complex super curve over $\Lambda$ of dimension $(1|N)$ is a pair $(X,\mathcal O_X)$, where $X$ is a topological space and $\mathcal O_X$ is a sheaf of super commutative $\Lambda$-algebras over $X$, equipped with a structure morphism $(X,\mathcal O_X)\to (\bullet,\Lambda)$, such that \begin{enumerate} \item $(X,{\mathcal O}_{X}^{\text{red}})$ is a smooth compact connected complex curve, algebraic or holomorphic, depending on the category one is working in. Here ${\mathcal O}_{X}^{\text{red}}$ is the reduced sheaf of $\mathbb C$-algebras on $X$ obtained by quotienting out the nilpotents in the structure sheaf $\mathcal O_X$, \item For suitable open sets $U_\alpha\subset X$ and suitable linearly independent odd elements $\theta_\alpha^i$ of $\mathcal O_X(U_\alpha)$ we have $$ \mathcal O_X(U_\alpha)={\mathcal O}_{X}^{\text{red}}\otimes \Lambda[\theta_\alpha^1,\theta_\alpha^2,\dots,\theta_\alpha^N]. $$ \end{enumerate} The $U_\alpha$\rq s above are called coordinate neighborhoods of $(X,\mathcal O_X)$ and $(z_\alpha,\theta_\alpha^1,\theta_\alpha^2,\dots,\theta_\alpha^N)$ are called local coordinates for $(X,\mathcal O_X)$, if $z_\alpha$ (mod nilpotents) is a local coordinate for $(X,{\mathcal O}_{X}^{\text{red}})$. On overlaps of coordinate neighborhoods $U_\alpha\cap U_\beta$ we have \begin{equation}\label{eq:coordchange} \begin{split} z_\beta &= F_{\beta\alpha}(z_\alpha,\theta_\alpha^j),\\ \theta_\beta^i &= \Psi_{\beta\alpha}^i(z_\alpha,\theta_\alpha^j). \end{split} \end{equation} Here the $F_{\beta\alpha}$ are even functions and $\Psi_{\beta\alpha}^i$ odd ones, holomorphic or algebraic depending on the category we are using. \begin{exmpl}\label{exmpl:split} A special case is formed by the {\it split} super curves. For $N=1$ they are given by transition functions \begin{equation}\label{eq:splitcoordchange} \begin{split} z_\beta &= f_{\beta\alpha}(z_\alpha),\\ \theta_\beta &= \theta_\alpha B_{\beta\alpha}(z_\alpha), \end{split} \end{equation} with $f_{\beta\alpha}(z_\alpha),B_{\beta\alpha}(z_\alpha)$ even holomorphic (or algebraic) functions that are independent of the nilpotent constants in $\Lambda$. So in this case the $f_{\beta\alpha}$ are the transition functions for ${\mathcal O}_{X}^{\text{red}}$ and $\mathcal O_X={\mathcal O}_{X}^{\text{red}}\otimes\Lambda \mid \mathcal N\otimes \Lambda$, where $\mathcal N$ is the ${\mathcal O}_{X}^{\text{red}}$-module with transition functions $B_{\beta\alpha}(z_\alpha)$. Here and henceforth we denote by a vertical $\mid$ a direct sum of free $\Lambda$-modules, with on the left an evenly generated summand and on the right an odd one. 
To any super curve $(X,\mathcal O_X)$ there is canonically associated a split curve $(X,{\mathcal O}_{X}\spl)$ over $\mathbb C$: just take ${\mathcal O}_{X}\spl=\mathcal O_X\otimes_{\Lambda} \Lambda/\mathfrak{m}=\mathcal O_X/\mathfrak{m}\mathcal O_X$, with $\mathfrak m=\langle \beta_1,\dots,\beta_n\rangle$ the maximal ideal of nilpotents in $\Lambda$. There is a functor from the category of $\mathcal O_X$-modules to the category of ${\mathcal O}_{X}\spl$-modules that associates to a sheaf $\mathcal F$ the {\it associated split sheaf} $\mathcal F^{\text{split}}=\mathcal F/\mathfrak {m}\mathcal F$.\qed\end{exmpl} \smallskip A $\Lambda$-point of a super curve $(X,\mathcal O_X)$ is a morphism $\phi:(\bullet,\Lambda)\to (X,\mathcal O_X)$ such that the composition with the structural morphism \linebreak $(X,\mathcal O_X)\to(\bullet,\Lambda)$ is the identity (of $(\bullet,\Lambda)$). Locally, in an open set $U_\alpha$ containing $\phi(\bullet)$, a $\Lambda$-point is given by specifying the images under the even $\Lambda$-homo\-mor\-phism $\phi^\sharp:\mathcal O_X(U_\alpha)\to \Lambda$ of the local coordinates: $p_\alpha=\phi^\sharp(z_\alpha),\pi^i_\alpha =\phi^\sharp( \theta^i_\alpha)$ . The local parameters $(p_\alpha,\pi^i_\alpha)$ of a $\Lambda$-point transform precisely as the coordinates do, see (\ref{eq:coordchange}). By quotienting out nilpotents in a $\Lambda$-point $(p_\alpha,\pi^i_\alpha)$ we obtain a complex number $p_\alpha^{\text{red}}$, the coordinate of the reduced point of $(X,{\mathcal O}_{X}^{\text{red}})$ corresponding to the $\Lambda$-point $(p_\alpha,\pi^i_\alpha)$. \subsection{Duality and $N=2$ curves.}\label{ss:DualN=2curves} Our main interest is the theory of $N=1$ super curves but as a valuable tool for the study of these curves we make use of $N=2$ curves as well in this paper. Indeed, as is well known, \cite{DoRoSc:SupModSpaces,Schwarz:SuperanalogsSYMPLCONT}, one can associate in a canonical way to an $N=1$ curve an (untwisted) super conformal $N=2$ curve, as we will now recall. The introduction of the super conformal $N=2$ curve clarifies the whole theory of $N=1$ super curves. Let from now on $(X,\mathcal O_X)$ be an $N=1$ super curve. Any invertible sheaf $\mathcal E$ for $(X,\mathcal O_X)$ and any extension of $\mathcal E$ by the structure sheaf: \begin{equation*} 0\rightarrow \mathcal O_X\rightarrow \hat{\mathcal E}\rightarrow \mathcal E\rightarrow 0,\label{eq:extension} \end{equation*} defines in the obvious way an $N=2$ super curve $(X,\hat{\mathcal E})$. It has local coordinates $(z_\alpha,\theta_\alpha,\rho_\alpha)$, where $(z_\alpha,\theta_\alpha)$ are local coordinates for $(X,\mathcal O_X)$. On overlaps we will have \begin{equation}\label{eq:coordchangen=2} \begin{split} z_\beta &= F_{\beta\alpha}(z_\alpha,\theta_\alpha),\\ \theta_\beta &= \Psi_{\beta\alpha}(z_\alpha,\theta_\alpha),\\ \rho_\beta &= H_{\beta\alpha}(z_\alpha,\theta_\alpha) \rho_\alpha + \phi_{\beta\alpha} (z_\alpha,\theta_\alpha). \end{split} \end{equation} (So $H_{\beta\alpha}(z_\alpha,\theta_\alpha)$ is the transition function for the generators of the invertible sheaf $\mathcal E$.) We want to choose the extension (\ref{eq:extension}) such that $(X,\hat{\mathcal E})$ is {\it super conformal}, in the sense that the local differential form $\omega_\alpha=dz_\alpha- d\theta_\alpha \rho_\alpha$ is globally defined up to a scale factor. 
Now $$ \omega_\beta =dz_\beta- d\theta_\beta \rho_\beta =dz_\alpha (\frac{\partial F}{\partial z_\alpha} - \frac{\partial \Psi}{\partial z_\alpha}\rho_\beta) - d\theta_\alpha (-\frac{\partial F}{\partial \theta_\alpha} + \frac{\partial \Psi}{\partial \theta_\alpha}\rho_\beta). $$ (Here we suppress the subscripts on $F$ and $\Psi$, as we will do below.) We see that for $\hat{\mathcal E}$ to be super conformal we need $$ \rho_\alpha=\frac{(-\frac{\partial F}{\partial \theta_\alpha} + \frac{\partial \Psi}{\partial \theta_\alpha}\rho_\beta)}{ (\frac{\partial F}{\partial z_\alpha} - \frac{\partial \Psi}{\partial z_\alpha}\rho_\beta)}, $$ or \begin{equation}\label{eq:transformrho} \rho_\beta=\frac{ (\frac{\partial F}{\partial \theta_\alpha} + \frac{\partial F}{\partial z_\alpha}\rho_\alpha)} {(\frac{\partial \Psi}{\partial \theta_\alpha} - \frac{\partial \Psi}{\partial z_\alpha}\rho_\alpha)}. \end{equation} Conversely one checks that if (\ref{eq:transformrho}) holds for all overlaps the cocycle condition is satisfied and that we obtain in this manner an $N=2$ super curve. To show that this super curve is an extension as in (\ref{eq:extension}), it is useful to note that (\ref{eq:transformrho}) can also be written as \begin{equation}\label{eq:transformrho2} \rho_\beta=\ber\begin{pmatrix}\partial_z F&\partial_z \Psi\\ \partial_\theta F&\partial_\theta\Psi \end{pmatrix} \rho_\alpha+\frac{\partial_\theta F}{\partial_\theta\Psi}. \end{equation} The homomorphism $\ber$ is defined in Appendix \ref{app:Lineqsupercat}, see \eqref{eq:defberber*}. Recall that the local generators $f_\alpha$ of the dualizing sheaf (see Appendix \ref{app:dualSerredual}) $\mathcal{B}\text{er}_X$ of $(X,\mathcal O_X)$ transform as \begin{equation}\label{eq:transfBer} f_\beta=\ber\begin{pmatrix}\partial_z F&\partial_z \Psi\\ \partial_\theta F&\partial_\theta\Psi \end{pmatrix}f_\alpha. \end{equation} If we denote by $\Cox$ the structure sheaf of the super conformal $N=2$ super curve just constructed, we see that we have an exact sequence \begin{equation} 0\rightarrow\mathcal O_X\rightarrow\Cox\rightarrow \mathcal{B}\text{er}_X\rightarrow0. \label{eq:extberbystruct}\end{equation} $\Cox$ is the only extension of $\Berx$ by the structure sheaf that is super conformal. This sequence is {\it trivial} if it isomorphic (\cite{HiSt:HomAlg}) to a split sequence. \begin{defn}\label{def:projected} A super curve is called {\it{projected} }if there is a cover of $X$ such that the transition functions $F_{\beta\alpha}$ in (\ref{eq:coordchange}) are independent of the odd coordinates $\theta_\alpha^j$. \end{defn} For projected curves we have a projection morphism $(X,\mathcal O_X)\to (X,{\mathcal O}_{X}^{\text{red}}\otimes \Lambda)$ corresponding to the sheaf inclusion ${\mathcal O}_{X}^{\text{red}}\otimes \Lambda\to \mathcal O_X$. This inclusion can be defined only for projected curves. A projected super curve has a $\Cox$ that is a trivial extension but the converse is not true, as we will see when we discuss super Riemann surfaces in subsection \ref{ss:SRS}. The relation between projectedness of $(X,\mathcal O_X)$ and the triviality of the extension defining $(X,\Cox)$ is discussed in detail in subsection \ref{ss:Symmperiodmatrices}. \begin{exmpl}\label{exmpl:splitber} If $(X,\mathcal O_X)$ is split, (\ref{eq:transfBer}) becomes \begin{equation}\label{eq:transfBersplit} f_\beta=\frac{\partial_z f_{\beta\alpha}}{B_{\beta\alpha}} f_\alpha. 
\end{equation} This means that in this case $\Berx=\mathcal K\mathcal N\inv\otimes \mathcal O_X=\mathcal K\mathcal N\inv\otimes\Lambda\mid \mathcal K\otimes \Lambda$, where $\mathcal K$ is the canonical sheaf for ${\mathcal O}_{X}^{\text{red}}$. Split curves are projected and the sequence (\ref{eq:extberbystruct}) becomes trivial. As an ${\mathcal O}_{X}^{\text{red}}$-module we have $\Cox=({\mathcal O}_{X}^{\text{red}} \oplus \mathcal K)\otimes\Lambda\mid (\mathcal N\oplus \mathcal K\mathcal N\inv)\otimes \Lambda$. \qed\end{exmpl} \smallskip The map $\Cox\to \Berx$ is locally described by the differential operator $\Dc^\alpha=\partial_{\rho_\alpha}$. Indeed, the operator $\Dc^\alpha$ transforms homogeneously, $\Dc^\beta=\ber\begin{pmatrix}\partial_z F&\partial_z \Psi\\ \partial_\theta F&\partial_\theta\Psi \end{pmatrix}\inv \Dc^\alpha$, so this defines a global $(0\mid 1)$ dimensional distribution $\Dc$ and the quotient of $(X, \mathcal {CO}_X)$ by this distribution is precisely $(X,\mathcal O_X)$. Now the distribution $\Dc$ annihilates the 1-form $\omega$ used to find $\Cox$. This form locally looks like $\omega_\alpha=dz_\alpha -d\theta_\alpha \rho_\alpha$ and its kernel is generated by $\Dc^\alpha$ and a second operator $\hat{D}_{\mathcal C}^\alpha=\partial_{\theta_\alpha}+ \rho_\alpha\partial_{z_\alpha}$. (The operators that we call $\Dc$ and $\hat{D}_{\mathcal C}$ are in the literature also denoted by $D^+$ and $D^-$, cf. \cite{DoRoSc:SupModSpaces}) To study the result of ``quotienting by the distribution $\hat{D}_{\mathcal C}$'' we introduce in each coordinate neighborhood $U_\alpha$ new coordinates: \begin{equation*} \begin{split} \hat z_\alpha &= z_\alpha-\theta_\alpha\rho_\alpha,\\ \hat \theta_\alpha &=\theta_\alpha ,\\ \hat \rho_\alpha &= \rho_\alpha. \end{split} \end{equation*} In the sequel we will drop the hats $\hat {}$ on $\theta$ and $\rho$, hopefully not causing too much confusion. In these new coordinates we have $$ \hat{D}_{\mathcal C}^\alpha= \partial_{\theta_\alpha},\quad \Dc^\alpha=\partial_{\rho_\alpha}+ \theta_\alpha\partial_{\hat z_\alpha}. $$ So the kernel of $\hat{D}_{\mathcal C}$ consists locally of functions of $\hat z_\alpha,\rho_\alpha$. To see that this makes global sense we observe that \begin{equation}\label{eq:coordtransfdualcurve} \begin{split} \hat z_\beta &= F(\hat z_\alpha,\rho_\alpha) + \frac{DF (\hat z_\alpha,\rho_\alpha)}{D\Psi(\hat z_\alpha,\rho_\alpha)}\Psi(\hat z_\alpha,\rho_\alpha),\\ \rho_\beta &=\frac{DF(\hat z_\alpha,\rho_\alpha)}{D\Psi(\hat z_\alpha,\rho_\alpha)}, \end{split} \end{equation} where $D=\partial_\theta +\theta\partial_z$. The details of this somewhat unpleasant calculation are left to the reader. From (\ref{eq:coordtransfdualcurve}) we see that $\Cox$ contains the structure sheaf $\hat{\mathcal O}_X$ of another $N=1$ super curve: $\hat{\mathcal O}_X$ is the sheaf of $\Lambda$-algebras locally generated by $\hat z_\alpha,\rho_\alpha$. We call $\hat X=(X,\hat{\mathcal O}_X)$ the {\em{dual curve}} of $(X,\mathcal O_X)$. We have \begin{equation}\label{eq:extberbystructdual} 0\rightarrow\hat{\mathcal O}_X\rightarrow\Cox \overset{\hat{D}_{\mathcal C}}\rightarrow\mathcal B\text{{\^e}r}_{{X}}\rightarrow0, \end{equation} where $\mathcal B\text{{\^e}r}_{{X}}$ is the dualizing sheaf of the dual curve. One can show that the dual curve of the dual curve is the original curve, thereby justifying the terminology. \begin{exmpl}\label{exmpl:dualsplit} We continue the discussion of split curves. 
In this case (\ref{eq:coordtransfdualcurve}) becomes \begin{equation}\label{eq:coordtransfdualcurvesplit} \begin{split} \hat z_\beta &= f(\hat z_\alpha),\\ \rho_\beta &=\frac{\partial_{\hat z}f(\hat z_\alpha)}{B(\hat z_\alpha)}\rho_\alpha, \end{split} \end{equation} So the dual split curve is $\hat{\mathcal O}_X^{\text{split}}={\mathcal O}_{X}^{\text{red}}\otimes\Lambda\mid \mathcal K\mathcal N\inv \otimes \Lambda$. The Berezinian sheaf for the dual split curve has generators that satisfy \begin{equation}\label{eq:transfBersplitdual} \hat f_\beta=B(z_\alpha) \hat f_\alpha. \end{equation} This means that $\mathcal B\text{{\^e}r}_{{X}}=\mathcal N\otimes \hat{\mathcal O}_X=\mathcal N\otimes \Lambda\mid \mathcal K\otimes \Lambda$. \qed \end{exmpl} \smallskip A very useful geometric interpretation of the dual curve exists, cf. \cite{DoRoSc:SupModSpaces,Schwarz:SuperanalogsSYMPLCONT}: the points (i.e., the $\Lambda$-points) of the dual curve correspond precisely to the irreducible divisors of the original curve and vice versa, as we will presently discuss. In subsection \ref{ss:IntegraOnSupCurve} we will see that irreducible divisors are the limits that occur in contour integration on a super curve. An irreducible divisor (for $\mathcal O_X$) is locally given by an even function $P_\alpha=z_\alpha-\hat z_\alpha - \theta_\alpha \rho_\alpha\in \mathcal O_X(U_\alpha)$, where $\hat z_\alpha$ and $\rho_\alpha$ are now respectively even and odd constants, i.e., elements of $\Lambda$. Two divisors $P_\alpha,P_\beta$ defined on coordinate neighborhoods $U_\alpha$ and $U_\beta$, respectively, are said to correspond to each other on the overlap if \begin{equation} P_\beta(z_\beta,\theta_\beta)=P_\alpha(z_\alpha,\theta_\alpha) g(z_\alpha,\theta_\alpha), \quad g(z_\alpha,\theta_\alpha)\in \mathcal O^\times_{X,\text{ev}}(U_\alpha\cap U_\beta).\label{eq:corresponddiv} \end{equation} (If $R$ is a ring (or sheaf of rings) ${R}^\times$ is the set of invertible elements.) \begin{lem}\label{lem:roots} Let $(U, \mathcal O(U))$ be a $(1\mid 1)$ dimensional super domain with coordinates $(z,\theta)$ and let $f(z,\theta)\in \mathcal O(U)$. Then, with $D=\partial_\theta+\theta\partial_z$, \begin{equation*} f(z,\theta)=(z-\hat z -\theta \rho)g(z,\theta)\quad\Leftrightarrow\quad f(\hat z,\rho)=0,Df(\hat z,\rho)=0, \end{equation*} for $g(z,\theta)$ in $\mathcal O(U)$. \end{lem} Applying Lemma \ref{lem:roots} to (\ref{eq:corresponddiv}) we find \begin{equation*} \begin{split} P_\beta(F(\hat z_\alpha,\rho_\alpha), \Psi(\hat z_\alpha,\rho_\alpha)) &= F(\hat z_\alpha,\rho_\alpha)-\hat z_\beta- \Psi(\hat z_\alpha,\rho_\alpha)\rho_\beta=0,\\ DP_\beta(F(\hat z_\alpha,\rho_\alpha), \Psi(\hat z_\alpha,\rho_\alpha)) &=DF(\hat z_\alpha,\rho_\alpha)- D\Psi(\hat z_\alpha,\rho_\alpha)\rho_\beta=0. \end{split} \end{equation*} {}From this one sees that the parameters $(\hat z_\alpha,\rho_\alpha)$ in the local expression for an irreducible divisor transform as in (\ref{eq:coordtransfdualcurve}), so they are $\Lambda$-points of the dual curve. The $N=2$ super conformal super curve canonically associated to a super curve has a structure sheaf $\Cox$ that comes equipped with two sheaf maps $\Dc$ and $\hat{D}_{\mathcal C}$ with kernels the structure sheaves $\mathcal O_X$ and $\hat{\mathcal O}_X$ of the original super curve and its dual. The intersection of the kernels is the constant sheaf $\Lambda$. The images of these maps are the dualizing sheaves $\Berx$ and $\mathcal B\text{{\^e}r}_{{X}}$. 
In fact we can restrict $\Dc,\hat{D}_{\mathcal C} $ to the subsheaves $\hat{\mathcal O}_X$ and $\mathcal O_X$, respectively, without changing the images. This gives us exact sequences \begin{equation}\label{eq:DandDhatseq} \begin{split} 0\rightarrow \Lambda\rightarrow &\mathcal O_X\overset{\hat D}\rightarrow\mathcal B\text{{\^e}r}_{{X}}\rightarrow0,\\ 0\rightarrow \Lambda\rightarrow &\hat{\mathcal O}_X\overset{D}\rightarrow\Berx\rightarrow0, \end{split} \end{equation} with $D=\Dc |_{\hat{\mathcal O}_X}$ and $\hat D=\hat{D}_{\mathcal C}|_{\mathcal O_X}$. Just as the sheaf maps $\Dc,\hat{D}_{\mathcal C}$ have local expressions as differential operators, also their restrictions are locally expressible in terms of differential operators: if $\{f_\alpha(z_\alpha,\theta_\alpha)\}$ is a section of $\mathcal O_X$ then the corresponding section $\{(\hat{D}_{\mathcal C} f_\alpha)(\hat z_\alpha,\rho_\alpha)\}$ of $\mathcal B\text{{\^e}r}_{{X}}$ is given by $$ \hat D f_\alpha(\hat z_\alpha,\rho_\alpha)= [(\partial_\theta+\theta\partial_z)f_\alpha]|_{z_\alpha=\hat z_\alpha,\theta_\alpha=\rho_\alpha}. $$ Similarly, if $\{\hat f_\alpha(\hat z_\alpha,\rho_\alpha)\}$ is a section of $\hat{\mathcal O}_X$ then the corresponding section of $\Berx$ is $$ {D}\hat{f}_\alpha( z_\alpha,\theta_\alpha)=[(\partial_\rho+\rho\partial_{\hat z})\hat f_\alpha]|_{\hat z_\alpha= z_\alpha,\rho_\alpha=\theta_\alpha}. $$ We summarize the relationships between the various sheaves and sheaf maps in the following commutative diagram (of sheaves of $\Lambda$-algebras): \begin{equation} \begin{CD}\label{eq:bigcd} {} @. 0 @. 0 @. {} @. {} \\ @. @VVV @VVV @. @. \\ 0 @>>> \Lambda@>>>\hat{\mathcal O}_X@>{ D}>>\Berx @>>>0 \\ @. @VVV @VVV @\vert @. \\ 0 @>>> \mathcal O_X@>>>\Cox@>{\Dc}>>\Berx @>>>0 \\ @. @V\hat{D}VV @V\hat{D}_{\mathcal C} VV @. @. \\ {} @. \mathcal B\text{{\^e}r}_{{X}} @= \mathcal B\text{{\^e}r}_{{X}} @.{} @.{}\\ @. @VVV @VVV @. @. \\ {} @. 0 @. 0 @. {} @. {} \\ \end{CD} \end{equation} \medskip We conclude this subsection with the remark that the dualizing sheaf ${\mathcal B\text{er}(\Cox)}$ of the super conformal super curve $(X,\Cox)$ associated to a super curve $(X,\mathcal O_X)$ is trivial, making $(X,\Cox)$ a super analog of an elliptic curve or a Calabi-Yau manifold, cf. \cite{DistNelson:SemiRigidSGra}. In fact, this statement is true for any $N=2$ super curve $(X,\mathcal E)$ where $\mathcal E$ is an extension of $\Berx$ by the structure sheaf: if we have \begin{equation*} 0 \to \mathcal O_X \to{\mathcal E} \to \Berx \to 0, \end{equation*} then $\mathcal E$ has local generators $(z_\alpha,\theta_\alpha,\rho_\alpha)$ on $U_\alpha$, and on overlaps we get \begin{equation} \label{eq:generalextensionberbystruct} \begin{split} z_\beta &= F_{\beta\alpha}(z_\alpha,\theta_\alpha),\\ \theta_\beta &= \Psi_{\beta\alpha}(z_\alpha,\theta_\alpha),\\ \rho_\beta &= \Phi_{\beta\alpha}(z_\alpha,\theta_\alpha,\rho_\alpha)= \ber(J(z,\theta))\rho_\alpha + \phi_{\beta\alpha} (z_\alpha,\theta_\alpha), \end{split} \end{equation} where $\ber(J(z,\theta))$ is the Berezinian of the super Jacobian matrix of the change of $(z,\theta)$ coordinates; this is precisely the transition function for $\Berx$, see \eqref{eq:transfBer}. 
Then the super Jacobian matrix \begin{multline*} J(z,\theta,\rho)=\ber \begin{pmatrix} \partial_z F &\partial_z \Psi &\partial_z \Phi\\ \partial_\theta F &\partial_\theta \Psi &\partial_\theta \Phi\\ \partial_\rho F &\partial_\rho \Psi &\partial_\rho \Phi\\ \end{pmatrix}= \ber \begin{pmatrix} \partial_z F &\partial_z \Psi &\partial_z \Phi\\ \partial_\theta F &\partial_\theta \Psi &\partial_\theta \Phi\\ 0 &0 &\partial_\rho \Phi\\ \end{pmatrix}= \\ =\ber( J(z,\theta))/\partial_\rho \Phi=1,\quad \end{multline*} for all overlaps $U_\alpha\cap U_\beta$, and therefore $(X,\mathcal E)$ has trivial dualizing sheaf. \subsection{Super Riemann surfaces.}\label{ss:SRS} In this subsection we briefly discuss a special class of $N=1$ super curves, the super Riemann surfaces (SRS). This class of curves is studied widely in the literature because of its applications in super string theory, see e.g., \cite{Fried:NoteString2DCFT,GidNelson:GeomSRS,LebrRoth:ModuliSRS,% CraneRabin:SRSuniTeichm}. (Also the term $\text{SUSY}_1$ curve is used, \cite{Manin:GaugeFieldTheoryComplexGeom,Manin:Topicsnoncomgeom}, or super conformal manifold, \cite{RoSchVor:GeomSupConf}.) {}From our point of view super Riemann surfaces are special because irreducible divisors and $\Lambda$-points can be identified and because there is a differential operator taking functions to sections of the dualizing sheaf. Both facts simplify the theory considerably. However, by systematically using the duality of the $N=2$ super conformal curve one can extend results previously obtained solely for super Riemann surfaces to arbitrary super curves. In the previous subsection we have seen that every $N=1$ super curve $(X,\mathcal O_X)$ has a dual curve $(X,\hat{\mathcal O}_X)$. Of course it can happen that the transition functions of $(X,\mathcal O_X)$ are identical to those of the dual curve $(X,\hat{\mathcal O}_X)$. This occurs if the transition functions satisfy \begin{equation}\label{eq:SRScondition} DF(z_\alpha,\theta_\alpha)= \Psi(z_\alpha,\theta_\alpha)D\Psi(z_\alpha,\theta_\alpha). \end{equation} If (\ref{eq:SRScondition}) holds then the operator $D_\alpha=\partial_{\theta_\alpha}+\theta_\alpha\partial_{z_\alpha}$ transforms as \begin{equation}\label{eq:transfDSRS} D_\beta=(D\Psi)\inv D_\alpha \end{equation} So in the situation of (\ref{eq:SRScondition}) the super curve $(X,\mathcal O_X)$ carries a $(0\mid 1)$ dimensional distribution $D$ such that $D^2$ is nowhere vanishing (in fact $D^2=\partial_z$). A super curve carrying such a distribution is called a ($N=1$) super Riemann surface. Equivalently an $N=1$ super Riemann surface is a ($N=1$) super curve that carries an odd global differential operator with nowhere vanishing square that takes values in some invertible sheaf. Recall the Berezinian that occurs in the transformation law for generators of $\Berx$, (\ref{eq:transfBer}). It can be written in general as $$ \ber\begin{pmatrix} \partial_z F &\partial_z \Psi\\ \partial_\theta F &\partial_\theta\Psi \end{pmatrix}=D(\frac{DF}{D\Psi}). $$ Therefore if (\ref{eq:SRScondition}) holds we have $\ber\begin{pmatrix}\partial_z F&\partial_z \Psi\\ \partial_\theta F&\partial_\theta\Psi \end{pmatrix}=D\Psi$ so (\ref{eq:transfDSRS}) tells us that $D$ takes values in the dualizing sheaf $\Berx$. So super Riemann surfaces are self dual, as probably first noted in \cite{DoRoSc:SupModSpaces}. 
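For completeness, here is a sketch of the chain-rule computation behind (\ref{eq:transfDSRS}) (conventions as above). For superfunctions of $(z_\beta,\theta_\beta)$ one has the standard identity
\begin{equation*}
D_\alpha=(D_\alpha\Psi)\,D_\beta+\bigl(D_\alpha F-\Psi\, D_\alpha\Psi\bigr)\,\partial_{z_\beta},
\end{equation*}
so the condition (\ref{eq:SRScondition}), $DF=\Psi\, D\Psi$, kills the second term and leaves $D_\alpha=(D\Psi)\,D_\beta$, i.e., $D_\beta=(D\Psi)\inv D_\alpha$ as in (\ref{eq:transfDSRS}).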
More generally, the question then arises what happens if the curves $(X,\mathcal O_X)$ and $(X,\hat{\mathcal O}_X)$ are isomorphic, but a priori not with identical transition functions. We claim that also in this case the curve $(X,\mathcal O_X)$ is a super Riemann surface. Indeed, the operator $\hat{D}_{\mathcal C}$ restricted to $\mathcal O_X$ takes values in the dualizing sheaf $\mathcal B\text{{\^e}r}_{{X}}$ of $\hat{\mathcal O}_X$, as we have seen above. Using the isomorphism we can think of $\hat{D}_{\mathcal C}$ as a differential operator taking values in a sheaf isomorphic to the dualizing sheaf $\Berx$ on $\mathcal O_X$. Since $\hat{D}_{\mathcal C}^2$ does not vanish we see that $(X,\mathcal O_X)$ is a super Riemann surface. Now it is known (and easy to see) that for any super Riemann surface there are coordinates such that (\ref{eq:SRScondition}) holds. In these coordinates the transition functions of $(X,\mathcal O_X)$ and $(X,\hat{\mathcal O}_X)$ are in fact equal. The $N=2$ super conformal curve $(X,\Cox)$ associated to a SRS $(X,\mathcal O_X)$ is very simple. Recall that $\Cox$ is an extension \begin{equation*} 0\to\mathcal O_X\to\Cox\overset\epsilon\to\Berx \to 0. \end{equation*} where locally $\epsilon(z)=\epsilon(\theta)=0$ and $\epsilon(\rho)=f$, with $f$ a local generator of $\Berx$. For SRS there is a splitting $e:\Berx\to\Cox$, given locally by $e(f)=\rho-\theta$. One needs to use the definition of a SRS to check that this definition makes global sense, i.e., that $\rho-\theta$ transforms as a section of $\Berx$; for this see \cite{Ra:oldnew}. In other words for a SRS the associated $N=2$ curve has a split structure sheaf: $$ \Cox=\mathcal O_X\oplus \Berx. $$ Note that not all SRS's are projected, so there are examples where $\Cox$ is a trivial extension but where $(X,\mathcal O_X)$ is not projected. \subsection{Integration on super curves.}\label{ss:IntegraOnSupCurve} Let us first recall the classical situation. On an ordinary Riemann surface $(X,{\mathcal O}_{X}^{\text{red}})$ we can integrate a holomorphic 1-form $\omega$ along a contour connecting two points $p$ and $q$ on $X$. If the contour connecting $p$ and $q$ lies in a single, simply connected, coordinate neighborhood $U_\alpha$ with local coordinate $z_\alpha$ we can write $\omega=d f_\alpha$, with $f_\alpha\in {\mathcal O}_{X}^{\text{red}}(U_\alpha)$ determined up to a constant. The points $p,q$ are described by the irreducible divisors $z_\alpha-p_\alpha$ and $z_\alpha-q_\alpha$. Then we calculate the integral of $\omega$ along the contour by $\int_p^q\omega=f_\alpha(q_\alpha)-f_\alpha(p_\alpha)$. Suppose next that $p$ and $q$ are in different coordinate neighborhoods $U_\alpha$ and $U_\beta$, with coordinates $z_\alpha,z_\beta$ related by $z_\beta=F(z_\alpha)$ on overlaps. Assume furthermore that the contour connecting them contains a point $r \in U_\alpha\cap U_\beta$. Then we can write $\omega=df_\alpha$ on $U_\alpha$, and $\omega=df_\beta$ on $U_\beta$, with $f_\alpha(z_\alpha)=f_\beta(F(z_\alpha))+ c_{\alpha\beta}$ on overlaps, where $c_{\alpha\beta}$ is locally constant on $U_\alpha\cap U_\beta$. The intermediate point $r$ can be described by two (corresponding) irreducible divisors $z_\alpha-r_\alpha$ and $z_\beta-r_\beta$. Then $\int_p^q\omega=\int_p^r\omega+\int_r^q\omega=f_\beta(q_\beta)- f_\beta(r_\beta)+f_\alpha(r_\alpha) -f_\alpha(p_\alpha)$. 
This is independent of the intermediate point because the parameter $r_\alpha$ in the irreducible divisor $z_\alpha-r_\alpha$ transforms as a $\mathbb C\,$-point of the curve: we have $r_\beta=F(r_\alpha)$, and $f_\alpha(r_\alpha)- f_\beta(r_\beta)=c_{\alpha\beta}$; therefore we can replace $r$ by any other intermediate point in the same connected component of $U_\alpha\cap U_\beta$. If $p$ and $q$ are not in adjacent coordinate neighborhoods we need to introduce more intermediate points. So there are three crucial facts in the construction of the contour integral of holomorphic 1-forms on a Riemann surface: the parameter in an irreducible divisor transforms as a point, $d$ is an operator that produces from a function on $X$ a section of the dualizing sheaf on $X$, and the kernel of the operator $d$ consists of the constants. We will find analogs of all three facts for super curves. We have seen that for an $N=1$ super curve in general the parameters in an irreducible divisor correspond to a $\Lambda$-point of the dual curve. Also the sheaf map $D$ acting on the dual curve maps sections of $\hat{\mathcal O}_X$ to sections of $\Berx$, see (\ref{eq:bigcd}). This suggests that we define a {\it (super) contour} $\Gamma=(\gamma,P,Q)$ on $(X,\mathcal O_X)$ as an ordinary contour $\gamma$ on the underlying topological space $X$, together with two irreducible divisors $P$ and $Q$ for $(X,\mathcal O_X)$ such that the reduced divisors of $P$ and $Q$ are the endpoints of $\gamma$. So if $$ P=z-\hat p-\theta\hat \pi,\quad Q=z-\hat q-\theta \hat\chi, $$ then the corresponding $\Lambda$-points on the dual curve $(X,\hat{\mathcal O}_X)$ are $(\hat p,\hat \pi)$, $(\hat q,\hat \chi)$, and $z=\hat p^{\text{red}}$ and $z=\hat q^{\text{red}}$ are the equations for the endpoints of the curve $\gamma$. Then we define the integral of a section $\{\omega_\alpha={D}\hat f_\alpha\}$ of the dualizing sheaf on $(X,\mathcal O_X)$ along $\Gamma$ by $$ \int_P^Q \omega=\int_P^Q D \hat f=\hat f(\hat q,\hat \chi)-\hat f(\hat p,\hat \pi). $$ Here we assume that the contour connecting $P$ and $Q$ lies in a single simply connected open set. If the contour traverses various open sets we need to choose intermediate divisors on the contour, as before. A super contour $\Gamma$ is called {\it closed} if it is of the form $\Gamma=(\gamma,P,P)$, with the underlying contour $\gamma $ closed in the usual sense. Observe that the integral over $\Gamma$ is independent of the choice of $P$, so we will omit reference to it. The contour integration on $N=1$ super curves introduced here seems to be new; it is a nontrivial generalization of the contour integral on super Riemann surfaces, as described for instance in \cite{Fried:NoteString2DCFT,McArthur:LineintegralsSRS,Rog:ContourSRS}. For closed contours it agrees with the integration theory described in \cite{GaiKhudShvar:IntegrationSurSuperSpce,Khud:BVformoddsympl}. We can also understand this integration procedure in terms of the contour integral on the $N=2$ super conformal super curve $(X,\mathcal{CO}_X)$, introduced by Cohn, \cite{Cohn:N=2SRS}. To this end define on $\Cox(U)\oplus\Cox(U)$ the sheaf map $(\Dc,\hat{D}_{\mathcal C} )$ by the local componentwise action of the differential operators $\Dc^\alpha$ and $\hat{D}_{\mathcal C}^\alpha$ as before. 
Then the square of the operator $(\Dc,\hat{D}_{\mathcal C} )$ vanishes and the Poincar\'e Lemma holds for $(\Dc,\hat{D}_{\mathcal C})$: \begin{lem}\label{lem:poincare} Let $U$ be a simply connected open set on $X$ and let $(f,g)\in \Cox(U)\oplus\Cox(U)$ such that $(\Dc,\hat{D}_{\mathcal C} )(f,g)=0$. Then there is an element $H\in \Cox(U)$, unique up to an additive constant, such that $$(f,g)=(\Dc H,\hat{D}_{\mathcal C} H).$$ \end{lem} Let then $\mathcal M(U)\subset \Cox(U)\oplus\Cox(U)$ be the subsheaf of $(\Dc,\hat{D}_{\mathcal C} )$-closed sections. Note that a section of $\mathcal M$ looks in $U_\alpha$ like $(f_\alpha,g_\alpha)=(f(z_\alpha,\theta_\alpha),g(\hat z_\alpha,\rho_\alpha))$ and furthermore $f$ is a section of $\Berx$ and $g$ is a section of $\mathcal B\text{{\^e}r}_{{X}}$. This means that $\mathcal M$ globalizes to the direct sum $\Berx\oplus \mathcal B\text{{\^e}r}_{{X}}$. So we get an exact sequence of sheaves: $$ 0\to \Lambda\to \mathcal{CO}_X\overset(\Dc,\hat{D}_{\mathcal C})\to \mathcal M\rightarrow0. $$ Now the sections of $\mathcal M$ are the objects on $\Cox$ that can be integrated. A contour for $\Cox$ is a triple $(\gamma, \mathcal CP,\mathcal CQ)$ where $\mathcal CP,\mathcal CQ$ are two $\Lambda$-points of $(X,\mathcal{CO}_X)$ with as reduced points the endpoints of the contour $\gamma$. Assume that the contour lies in a single simply connected open set $U$. If $\omega\in \mathcal M(U)$ then we can write $\omega=(\Dc H,\hat{D}_{\mathcal C} H)$ for some $H\in \Cox(U)$ and we put $\int_{\mathcal CP}^{\mathcal CQ}\omega=H(\mathcal CQ)-H(\mathcal CP)$. Extension to more complicated contours as before. Now start with a section $\{s_\alpha\}$ of $\Berx$ on $(X,\mathcal O_X)$. We can lift it to the section $\{(s_\alpha,0)\}$ of $\mathcal M $. In particular there is a section $\{H_\alpha\}$ of $\mathcal{CO}_X$ such that $s_\alpha=\Dc H_\alpha$, $\hat{D}_{\mathcal C} H_\alpha=0$. This means that $\{H_\alpha\}$ is in fact a section of the subsheaf $\hat{\mathcal O}_X$. So in specifying the $\Lambda$-points of $(X,\mathcal{CO}_X)$ on the ends of the contour we have the freedom to shift along the fiber of the projection $\hat \pi:(X,\Cox)\to (X,\hat{\mathcal O}_X)$. In other words we only need to specify $\Lambda$-points of the dual curve, or, equivalently, irreducible divisors on the original curve. Therefore we can define the integral of a section $s=\{s_\alpha\}$ of $\mathcal B\text{er}_X$ along a contour with $P,Q$ two irreducible divisors for $(X,\mathcal O_X)$ at the end point as follows. We choose two $\Lambda$-points $\mathcal CP$ and $\mathcal CQ$ of $(X,\Cox)$ that project to the $\Lambda$-points of $(X,\hat{\mathcal{O}}_X)$ corresponding to $P,Q$. Then $\int_P^Q s=H(\mathcal CQ)-H(\mathcal CP)$ if $s_\alpha=\Dc H$ and $\hat{D}_{\mathcal C} H=0$. Again we are assuming here that the contour lies in a simply connected region and extend for the general case using intermediate points. One checks that this procedure of integrating a section of the dualizing sheaf on $(X,\mathcal O_X)$ using integration on $\Cox$ is the same as we had defined before. \subsection{Integration on the universal cover.} We consider from now on only holomorphic (compact, connected, $N=1$) super curves $(X,\mathcal O_X)$ of genus $g>1$. We fix a point $x_0\in X$ and 1-cycles $A_i,B_i, i=1,\dots ,g$ through $x_0$ with intersection $A_i\cdot B_j=\delta_{ij}$, $A_i\cdot A_j=B_i\cdot B_j=0$ as usual. 
Then the fundamental group $\pi_1(X,x_0)$ is generated by the classes $a_i,b_i$ corresponding to the loops $A_i,B_i$, subject solely to the relation $a_1b_1a_1\inv b_1\inv a_2b_2a_2\inv b_2\inv \dots a_gb_ga_g\inv b_g\inv=e$. The universal cover of the super curve $(X,\mathcal O_X)$ is the open superdisk $\done=(D,\mathcal O_{\done})$ of dimension $(1\mid1)$, where $D=\{z\in \mathbb C\mid |z|<1\}$ and $\mathcal O_{\done}=\mathcal O_D\otimes_{\mathbb C}\Lambda[\theta]$, with $\mathcal O_D$ the usual sheaf of holomorphic functions on the unit disk. The group $G$ of covering transformations of $(D,\mathcal O_{\done})\to (X,\mathcal O_X)$ is isomorphic to $\Pi_1(X,x_0)$ and each covering transformation $g$ is determined by its action on the global coordinates $(z,\theta)$ of $\done$. Introduce super holomorphic functions by $$ F_g(z,\theta):=g\inv\cdot z , \quad\Psi_g(z,\theta):=g\inv\cdot \theta. $$ If $P_p$ is a $\Lambda$-point of $\done$, i.e., a homomorphism $\mathcal O_{\done}\to \Lambda$, determined by $z\mapsto z_P\in \Lambda_0,\theta \mapsto \theta_P\in \Lambda_1$, then the action of $g$ in the covering group is defined by $g\cdot P_p(f)=P_p(g\inv\cdot f)$. Then $z_P\mapsto F_g(z_P,\theta_P)$ and $\theta_P\mapsto \Psi_g(z_P,\theta_P)$. So $\Lambda$-points transform as the coordinates under the covering group. Next consider irreducible divisors $P_d=z-\hat z_1 -\theta\hat\theta_1$, $Q_d=z-\hat z_2 -\theta\hat\theta_2$. We say that $g\cdot P_d=Q_d$ as divisors if we have the identity $g\inv Q_d=P_d h(z,\theta)$ as holomorphic functions for some invertible $h(z,\theta)$. By the same calculation as the one following Lemma \ref{lem:roots} we find that \begin{equation*} \begin{split} \hat z_2&= F_g(\hat z_1, \hat \theta_1) + \frac {DF_g(\hat z_1, \hat \theta_1)}{D\Psi_g(\hat z_1, \hat \theta_1)}\Psi_g(\hat z_1, \hat \theta_1),\\ \hat \theta_2&= \frac {DF_g(\hat z_1, \hat \theta_1)}{D\Psi_g(\hat z_1, \hat \theta_1)}. \end{split} \end{equation*} So irreducible divisors transform with the dual action, compare with (\ref{eq:coordtransfdualcurve}). There is a parallel theory for the dual curve: we have a covering $(D,\mathcal O_{\done})\to (X,\hat{\mathcal O}_X)$, with covering group $\hat G$. The dual covering group $\hat G$ is isomorphic to $G$ by a distinguished isomorphism: $g$ and $\hat g$ are identified if they give the same transformation of the reduced disk. Their action on functions is in general different, however, unless we are dealing with a super Riemann surface. In fact, since duality interchanges irreducible divisors and $\Lambda$-points on the curve and its dual we see that the action of $\hat g$ on the coordinates is dual to the transformation of $g$: \begin{equation*} \begin{split} \hat g\inv\cdot z&= F_g(z,\theta) + \frac {DF_g(z,\theta )}{D\Psi_g( z , \theta )}\Psi_g( z , \theta ),\\ \hat g\inv\cdot\theta &= \frac {DF_g( z , \theta)}{D\Psi_g( z , \theta )}. \end{split} \end{equation*} A function $f$ on $(X,\mathcal O_X)$ lifts to a function that is invariant under the covering group $G$ and similarly $\hat f$, a function on $(X,\hat{\mathcal O}_X)$, lifts to a function that is invariant under the dual covering group $ \hat G$. An irreducible divisor or a $\Lambda$-point on $(X,\mathcal O_X)$ lifts to an infinite set of divisors or points, one for each point on the underlying disk above the corresponding reduced point of $X$. Let as before $x_0$ be a point on $X$ and $d_0$ a point on the disk lying over $x_0$. 
Let $\gamma$ be a contour for integration on $(X,\mathcal O_X)$, so $\gamma$ consists of a contour on $X$ and two irreducible divisors at the endpoints. The contour lifts to a unique contour on the disk starting at $d_0$ and the irreducible divisors lift to unique irreducible divisors for $(D,\mathcal O_{\done})$ that reduce to $d_0$ and the endpoint on the disk, respectively. Also we can pull back sections of $\Berx$ to $(D,\mathcal O_{\done})$ and calculate integrals on $(X,\mathcal O_X)$ by lifting to $(D,\mathcal O_{\done})$. Since $D$ is simply connected this is a great simplification. For instance any integral over a closed contour is zero. Similar considerations apply to the $N=2$ curve $(X,\Cox)$ and its universal covering space $D^{1|2}$ and covering group $\mathcal G$. Of course $D^{1|2}$ is the $N=2$ curve canonically associated to the $N=1$ curve $D^{1|1}$ as in subsection \ref{ss:DualN=2curves}, and the lifts of $f \in \mathcal O_X$ to $D^{1|2}$ via either $(X,\Cox)$ or $D^{1|1}$ as intermediate space coincide. \subsection{Sheaf cohomology for super curves.}\label{ss:Sheafcohomology} Our super curves are in fact families of curves over the base scheme $(\bullet,\Lambda)$, with $\Lambda$ the Grassmann algebra of nilpotent constants. This means that for any coherent sheaf the cohomology groups are finitely generated $\Lambda$-modules, but they are not necessarily free. This means in particular that standard classical theorems, like the Riemann-Roch theorem, do not hold in general in our situation. (See for instance \cite{Hodg:ProblFieldSRS}.) The basic facts about sheaf cohomology of families of super curves are completely parallel to the classical theory (explained for instance in \cite{Kempf:AbInt}). For a coherent locally free sheaf $\mathcal L$ there exist $\Lambda$-homo\-mor\-phisms $\alpha:F\to G$, with $F, G$ free finite rank $\Lambda$-modules, that calculate the cohomology. More precisely, for every $\Lambda$-module $M$ we have an exact sequence \begin{equation}\label{eq:calccohom} 0\to H^0(X,\mathcal L\otimes M)\to F\otimes M\overset{\alpha\otimes 1_{M}} \to G\otimes M\to H^1(X,\mathcal L\otimes M)\to0. \end{equation} Recall from Example \ref {exmpl:split} that for any sheaf of $\mathcal O_X$-modules $\mathcal F$ we have an associated split sheaf $\mathcal F^{\text{split}}=\mathcal F\otimes_\Lambda \Lambda/\mathfrak m$. Therefore, if we choose $M=\Lambda/\mathfrak m$, the sequence (\ref{eq:calccohom}) calculates the cohomology groups of the split sheaf $\mathcal L\spl$. (These cohomology groups are $\mathbb Z_2$-graded vector spaces over $\Lambda/\mathfrak m=\mathbb C$.) Without loss of generality one can choose the homomorphism $\alpha:F\to G$ such that $\alpha\spl=\alpha\otimes 1_{\Lambda/\mathfrak m}$ is identically zero. This means that $H^0(X,\mathcal L)$ (respectively $H^1(X,\mathcal L)$) is a submodule (resp. a quotient module) of a free $\Lambda$-module of rank $\dim H^0(X,\mathcal L\spl)$ (resp. of rank $\dim H^1(X,\mathcal L\spl)$). We are interested in the question when the $H^i(X,\mathcal L)$ are free. The idea is to check this by an inductive procedure, starting with the free cohomology of $\mathcal L\spl$. We have for every $j=1,\dots,n-1$ the split exact sequence \begin{equation}\label{eq:seqlambda} 0\to \mathfrak m^j/\mathfrak m^{j+1}\to \Lambda/\mathfrak m^{j+1}\to \Lambda/\mathfrak m^j\to 0. 
\end{equation} Since $\mathfrak m^j/\mathfrak m^{j+1}\otimes_\Lambda \mathcal L=\mathfrak m^j/\mathfrak m^{j+1}\otimes_{\mathbb C} \mathcal L\spl$, $\Lambda/\mathfrak m^i\otimes_\Lambda \mathcal L=\mathcal L/\mathfrak m^i \mathcal L$ and $\mathcal L$ is flat over $\Lambda$ we obtain by tensoring with $\mathcal L$ and taking cohomology the exact sequence ($\Lambda^j=\mathfrak m^j/\mathfrak m^{j+1}$) \begin{equation}\label{eq:longexactcohom} \begin{aligned} 0 &\to\Lambda^j\otimes_{\mathbb C}H^0(X,\mathcal L\spl)& &\to H^0(X, \mathcal L/\mathfrak m^{j+1}\mathcal L) & &\to H^0(X, \mathcal L/\mathfrak m^{j}\mathcal L) & &\overset{q^j}\to \\ &\overset{q^j}\to \Lambda^j\otimes_{\mathbb C}H^1(X,\mathcal L\spl) & &\to H^1(X, \mathcal L/\mathfrak m^{j+1}\mathcal L) & &\to H^1(X, \mathcal L/\mathfrak m^{j}\mathcal L) & &\to 0. \end{aligned} \end{equation} If $H^0(X, \mathcal L/\mathfrak m^{j}\mathcal L)$ and $H^1(X, \mathcal L/\mathfrak m^{j}\mathcal L)$ are free $\Lambda/\mathfrak m^j$-modules, then the module $H^0(X, \mathcal L/\mathfrak m^{j+1}\mathcal L)$ is free over $\Lambda/\mathfrak m^{j+1}$ iff the connecting map $q^j$ in (\ref{eq:longexactcohom}) is zero, which in turn happens iff $H^1(X, \mathcal L/\mathfrak m^{j+1}\mathcal L)$ is free as $\Lambda/\mathfrak m^{j+1}$-module (see \cite{Kempf:AbInt}, Lemma 10.4). The relation between the connecting homomorphisms $q^j$ and the homomorphism $\alpha$ that calculates cohomology is as follows: if we assume as above that $\alpha\spl$ is zero then $q^1=\alpha\otimes 1_{\Lambda/\mathfrak m^2}$. More generally, if $q^1=q^2=\dots=q^{j-1}=0$ then $q^j=\alpha\otimes 1_{\Lambda/{\mathfrak m}^{j+1}}$. More concretely, we can assume that $\alpha$ is a matrix of size $\operatorname{rank} G\times \operatorname{rank} F$; the $q^j$ are then the reductions of this matrix modulo $\mathfrak m^{j+1}$. Then the cohomology of $\mathcal L$ is the kernel and cokernel of the matrix $\alpha$, and the cohomology is free iff $\alpha$ is identically zero. If now $\mathcal L$ is an invertible sheaf, $\mathcal{L}\spl$ obeys a super Riemann-Roch relation and in case of free cohomology we get ($h^i= \text{rank } H^i$): \begin{equation} \label{superRR} h^0(X,\mathcal{L}) - h^1(X,\mathcal{L}) = (\deg \mathcal{L} + 1-g\mid \deg \mathcal{L} + \deg \mathcal{N} + 1-g), \end{equation} where $\mathcal O_X\spl={\mathcal O}_{X}^{\text{red}}\mid \mathcal N$. We can relate by Serre duality the cohomology groups of $\mathcal L$ and $\mathcal L^*\otimes \Berx$, see Appendix \ref{app:dualSerredual}. In particular, $H^0(X,\mathcal L^*\otimes \Berx)$ is free iff $H^1(X,\mathcal L)$ is. We summarize the discussion in this subsection in the following theorem. \begin{thm}\label{thm:freeness} Let $\mathcal L$ be an invertible $\mathcal O_X$-sheaf. Then $H^0(X,\mathcal L)$ (respectively $H^1(X,\mathcal L)$) is a submodule (respectively a quotient module) of a free $\Lambda$-module of rank $\dim H^0(X,\mathcal L\spl)$ (respectively of rank $\dim H^1(X,\mathcal L\spl)$). Furthermore \begin{align*} H^0(X,\mathcal L)\text{ is a free $\Lambda$-module} &\Longleftrightarrow H^1(X,\mathcal L)\text{ is free},\\ &\Longleftrightarrow H^0(X,\mathcal L^*\otimes\Berx)\text{ is free},\\ &\Longleftrightarrow H^1(X,\mathcal L^*\otimes\Berx)\text{ is free}, \end{align*} in which case the rank of $H^i(X,\mathcal L)$ is equal to $\dim H^i(X,\mathcal L\spl)$.
\end{thm} \subsection{Generic SKP curves.} \begin{defn} An {\it SKP curve} is a super curve $(X,\mathcal O_X)$ such that the split sheaf ${\mathcal O}_{X}\spl$ is of the form $$ {\mathcal O}_{X}\spl={\mathcal O}_{X}^{\text{red}}\mid \mathcal N, $$ where $\mathcal N$ is an invertible ${\mathcal O}_{X}^{\text{red}}$-module of degree zero. If $\mathcal N\ne{\mathcal O}_{X}^{\text{red}}$ then $(X,\mathcal O_X)$ is called a {\it generic SKP curve}. \qed \end{defn} We will discuss in subsection \ref{ss:Krichever} a Krichever map that associates to an invertible sheaf on a super curve $(X,\mathcal O_X)$ (and additional data) a point $W$ of an infinite super Grassmannian. If this point $W$ belongs to the {\it big cell} (to be defined below) we obtain a solution of the super KP hierarchy. For $W$ to belong to the big cell it is necessary that $(X,\mathcal O_X)$ is an SKP curve. The generic SKP curves enjoy simple cohomological properties. \begin{thm}\label{thm:cohomcurve} Let $(X,\mathcal O_X)$ be a generic SKP curve. Then the cohomology groups of the sheaves $\mathcal O_X, \Berx$ are free $\Lambda$-modules. More precisely: \begin{alignat*}{2} H^0(X,\mathcal O_X) &= \Lambda \mid 0, &\qquad H^1(X,\mathcal O_X)&= \Lambda^g\mid \Lambda ^{g-1},\\ H^0(X,\Berx) &= \Lambda^{g-1} \mid \Lambda^g, &\qquad H^1(X,\Berx)&= 0\mid\Lambda. \end{alignat*} \end{thm} \begin{proof} Since $\mathcal N$ has no global sections, $H^0(X, {\mathcal O}_{X}\spl)=\mathbb C\mid 0$ consists of the constants only. Now by definition of a curve over $(\bullet,\Lambda)$ we have an inclusion $0\to\Lambda\to \mathcal O_X$, so $H^0(X,\mathcal O_X)$ contains at least the constants $\Lambda$. By Theorem \ref{thm:freeness}, $H^0(X,\mathcal O_X)$ must then be equal to $\Lambda\mid 0$. Using Theorem \ref{thm:freeness} again, $H^1(X, \mathcal O_X)$ and the cohomology of $\Berx$ are also free, and the rest of the theorem follows from the properties of the split sheaves, see Examples \ref{exmpl:split}, \ref{exmpl:splitber}. \end{proof} \begin{rem} It is not true that all invertible $\mathcal O_X$-sheaves for a generic SKP curve have free cohomology. For instance, consider a sheaf $\mathcal L$ with $\mathcal L\spl={\mathcal O}_{X}\spl$, but $\mathcal L\neq \mathcal O_X$. Then, for a covering $\{U_\alpha\}$ of $X$, the transition functions of $\mathcal L$ will have the form $g_{\alpha\beta}=1 + f_{\alpha\beta}(z,\theta)$, with $f_{\alpha\beta}(z,\theta)=0$ modulo the maximal ideal $\mathfrak m$ of $\Lambda$. Let then $I\subset\Lambda$ be the ideal of elements that annihilate all $f_{\alpha\beta}$. Then we have $H^0(X,\mathcal L)=I$, which is in particular not free. \qed \end{rem} \subsection{Riemann bilinear relations.} Let us call sections of $\Berx$ and $\mathcal B\text{{\^e}r}_{{X}}$ holomorphic differentials (on $(X,\mathcal O_X)$ and $(X,\hat{\mathcal O}_X)$ respectively). We will in this subsection introduce analogs of the classical bilinear relations for holomorphic differentials. \begin{thm} \label{thm:bilinear} Let $(X,\mathcal O_X)$ be a super curve and let $\omega$, $\hat \omega$ be holomorphic differentials on $(X,\mathcal O_X)$ and $(X,\hat{\mathcal O}_X)$ respectively. Let $\{a_i,b_i\}$ be a standard symplectic basis for $H_1(X,\mathbb Z)$. Then $$ \sum_{i=1}^g\oint_{a_i}\omega\oint_{b_i} \hat \omega=\sum_{i=1}^g\oint_{a_i}\hat\omega\oint_{b_i} \omega.
$$ \end{thm} Note that we think here of closed contours on the underlying topological space $X$ as closed super contours on either $(X,\mathcal O_X)$ or on $(X,\hat{\mathcal O}_X)$. \begin{proof} The argument is clearest using the $N=2$ curve $(X,\Cox)$ and its universal covering superdisk $D^{1\mid2}$; this way only one universal covering group $\mathcal G$ appears instead of both $G$ and $\hat{G}$. Choose any holomorphic differentials $\omega$ on $X$ and $\hat{\omega}$ on $\hat{X}$, and lift them to sections $(\omega,0)$ and $(0,\hat{\omega})$ of $\mathcal M$ on $(X,\Cox)$. Lifting further to $D^{1\mid2}$, let $\Omega$ be an antiderivative of $(0,\hat{\omega})$, so that $(\Dc\Omega,\hat{D}_{\mathcal C}\Omega) = (0,\hat{\omega})$. The crucial point is that $(\Omega\omega,0)$ is itself a section of $\mathcal M$, because $\Dc(\Omega\omega)=0$. This could not have been achieved using only differentials from $X$. As per the standard argument, we integrate this object around the polygon obtained by cutting open $(X,\Cox)$. To form this polygon, fix arbitrarily one vertex $P$ (a $\Lambda$-point of the $N=2$ disk $D^{1\mid2}$) and let the other vertices be $a_1^{-1}P,\;b_1^{-1}a_1^{-1}P,\ldots,a_gb_g^{-1}a_g^{-1}\cdots b_1a_1b_1^{-1}a_1^{-1}P$, where $a_i,b_i$ are the generating elements of $\mathcal G$. The vertices are the endpoints of super contours whose reduced contours are the sides of the usual polygon bounding a fundamental region for $\mathcal G$. These contours project down to any of $X,\hat{X},(X,\Cox)$ as closed loops generating the homology; integrating a differential lifted from any of these spaces along a side of our polygon will yield the corresponding period. Labeling the sides of the polygon with generators of $\mathcal G$ as usual, neighborhoods of the sides labeled $a_i$ are identified with each other by $b_i$ and vice versa. Then we have \begin{multline*} 0 = \oint \Omega(\omega,0) = \sum_{i=1}^g \left[ \int_{a_i} \Omega(\omega,0) - \int_{a'_i} \Omega(\omega,0) \right] + \\ +\sum_{i=1}^g \left[ \int_{b_i} \Omega(\omega,0) - \int_{b'_i} \Omega(\omega,0) \right]. \end{multline*} In the first sum, the two integrals are related by the change of variables given by $b_i$; the differential $(\omega,0)$ is invariant under this covering transformation while $\Omega$ changes by the $b_i$-period of $\hat{\omega}$. The second sum is simplified in the same manner, with the result \begin{equation*} \sum_{i=1}^g \left[ \int_{ {a}_i}\omega \int_{b_i} \hat{\omega} - \int_{a_i} \hat{\omega} \int_{{b}_i} \omega \right] = 0. \end{equation*} \end{proof} \subsection{The period map and cohomology.} The commutative diagram \eqref{eq:bigcd} gives a commutative diagram in cohomology that partly reads: \begin{equation} \begin{CD}\label{eq:bigcdcohom} {} @. H^0(X,\mathcal B\text{{\^e}r}_{{X}}) @= H^0(X,\mathcal B\text{{\^e}r}_{{X}}) \\ @. @V\operatorname{p\hat er}VV @V{\hat q}VV \\ H^0(X,\Berx) @>{\operatorname{per}}>> H^1(X,\Lambda) @>{\operatorname{r\hat ep}}>> H^1(X,\hat{\mathcal O}_X) \\ @| @V\operatorname{rep}VV @. \\ H^0(X,\Berx) @>{q}>>H^1(X,\mathcal O_X)@. \end{CD} \end{equation} Let $\{a_i,b_i; i=1,\dots, g\}$ be a symplectic basis for $H_1(X,\mathbb Z)$ and let $\{a_i^*,b_i^*; i=1,\dots, g\}$ be a dual basis for $H^1(X,\mathbb Z)$ and also for $H^1(X,\Lambda)$. We will use Serre duality (see Appendix \ref{appss:SerredualSupermanifold}) to identify $H^1(X,\mathcal O_X)$ and $H^1(X,\hat{\mathcal O}_X)$ with the duals of $H^0(X, \Berx)$ and $H^0(X, \mathcal B\text{{\^e}r}_{{X}})$. 
\begin{lem}\label{lem:perrep} The maps $\operatorname{per}$, $\operatorname{p\hat er}$, $\operatorname{rep}$ and $\operatorname{r\hat ep}$ are explicitly given by \begin{align*} \operatorname{per}(\omega) &=\sum_{i=1}^g(\oint_{a_i} \omega) a_i^* +\sum_{i=1}^g(\oint_{b_i} \omega) b_i^*,\\ \operatorname{p \hat er}(\hat\omega) &=\sum_{i=1}^g(\oint_{a_i} \hat\omega) a_i^* +\sum_{i=1}^g(\oint_{b_i} \hat\omega) b_i^*,\\ \operatorname{rep}(\sigma)(\omega) &=\sum_{i=1}^g\alpha_i(\oint_{b_i} \omega)-\sum_{i=1}^g \beta_i(\oint_{a_i} \omega),\\ \operatorname{r\hat ep} (\sigma)(\hat\omega) &=\sum_{i=1}^g\alpha_i(\oint_{b_i} \hat\omega) -\sum_{i=1}^g \beta_i(\oint_{a_i} \hat\omega), \end{align*} where $\omega\in H^0(X,\Berx)$, $\hat\omega\in H^0(X,\mathcal B\text{{\^e}r}_{{X}})$ and $\sigma= \sum_{i=1}^g \alpha_i a_i^*+\beta_i b_i^*\in H^1(X,\Lambda)$. \end{lem} \noindent If we introduce a basis $\{\omega_\alpha,\alpha=1,\dots,g-1\mid w_j,j=1,\dots,g\}$ of $H^0(X,\Berx)$ we obtain the {\it period matrix} associated to $\operatorname{per}$: $$ \Pi=\begin{pmatrix} \oint_{a_i}\omega_\alpha&\oint_{a_i}w_j\\ \oint_{b_i}\omega_\alpha&\oint_{b_i}w_j \end{pmatrix}, $$ where $i,j$ run from $1$ to $g$ and $\alpha$ runs from $1$ to $g-1$. For the split curve we have $H^0(X,\hat{\mathcal O}_X^{\text{split}})=\mathbb C\mid\mathbb C^{g-1}$ and the map $$ D:H^0(X,\hat{\mathcal O}_X^{\text{split}})\to H^0(X,\Berxsplit) $$ has as image a $(g-1)$-dimensional even subspace of exact differentials. For these elements the periods vanish, and one finds the reduction mod $\mathfrak m$ of $\Pi$ is given by $$ \Pi\spl=\begin{pmatrix} 0&\Pi\red(a)\\ 0&\Pi\red(b) \end{pmatrix}, $$ where $\Pi\red=\begin{pmatrix}\Pi\red(a)\\ \Pi\red(b)\end{pmatrix}$ is the classical period matrix of the underlying curve $(X,{\mathcal O}_{X}^{\text{red}})$. By classical results we can choose the basis of holomorphic differentials on the reduced curve so that $\Pi\red(a)=1_g$. From this it follows that we can also choose in $H^0(X,\Berx)$ a basis such that $\oint_{a_i}w_j=\delta_{ij}$ and so that the period matrix takes the form \begin{equation}\label{eq:normperiodmatrix} \Pi=\begin{pmatrix} 0&1_g\\ Z_{o}&Z_e \end{pmatrix}. \end{equation} Note that $\Pi$ is not uniquely determined by the conditions we have imposed: we are still allowed to change $\Pi\mapsto \Pi^\prime= \begin{pmatrix} 0&1_g\\ Z_{o}G&Z_e + Z_o \Gamma \end{pmatrix} $, corresponding to a change of basis of $H^0(X,\Berx)$ by an even invertible matrix $\begin{pmatrix} G&\Gamma\\ 0&1_g \end{pmatrix}$ of size $(g-1\mid g)\times (g-1\mid g)$. Using the same basis we see that $\operatorname{rep}$ has matrix $$ \begin{pmatrix} \oint_{b_i}\omega_\alpha&-\oint_{a_i}\omega_\alpha\\ \oint_{b_i}w_j&-\oint_{a_i}w_j \end{pmatrix}=\Pi^tI=\begin{pmatrix} Z_o^t&0\\ Z_e^t&-I_g \end{pmatrix}, \quad I=\begin{pmatrix} 0&-1_g\\ 1_g&0 \end{pmatrix}. $$ Again this matrix is not entirely determined by our choices. {}From the commutativity of the diagram \eqref{eq:bigcdcohom} we see that the matrix of the map $q$ is given by \begin{equation}\label{eq:matrixconnectinghom} Q=\Pi^t I \Pi=\begin{pmatrix} 0&Z_o^t\\ -Z_o&Z_e^t-Z_e \end{pmatrix}. \end{equation} In general, the structure sheaf $\hat{\mathcal O}_X$ and dualizing sheaf $\mathcal B\text{{\^e}r}_{{X}}$ of the dual curve will not have free cohomology, so that we cannot represent the maps $\operatorname{p\hat er}$, $\operatorname{r\hat ep}$ and $\hat q$ by explicit matrices.
The nonfreeness of the cohomology of $\hat{\mathcal O}_X$ and $\mathcal B\text{{\^e}r}_{{X}}$ is determined by the odd component $Z_o$ of the period matrix, see \eqref{eq:normperiodmatrix}. Recall that $\hat{\mathcal O}_X^{\text{split}}={\mathcal O}_{X}^{\text{red}} \mid \mathcal K\mathcal N\inv$ and $\Berxhatsplit=\mathcal N\mid \mathcal K$ (see Example \ref{exmpl:dualsplit}) and hence \begin{alignat*}{2} H^0(X,\hat{\mathcal O}_X^{\text{split}})&= \mathbb C\mid \mathbb C^{g-1}, & H^1(X,\hat{\mathcal O}_X^{\text{split}})&= \mathbb C^g\mid 0,\\ H^0(X,\Berxhatsplit)&= 0\mid \mathbb C^{g}, & H^1(X,\Berxhatsplit)&= \mathbb C^{g-1}\mid \mathbb C. \end{alignat*} {}From the diagram \eqref{eq:bigcd} we extract in cohomology, using that the map $H^1(X, \Berx) \to H^2(X,\Lambda)=\Lambda\mid 0$ is an (odd!) isomorphism and $H^0(X, \Lambda)=\Lambda$, \begin{equation}\label{eq:period-seq} 0 \to \frac{H^0(X, \hat{\mathcal O}_X)}{\Lambda}\overset{D}\to H^0(X, \Berx) \overset {\operatorname{per}}\to H^1(X,\Lambda) \to H^1(X, \hat{\mathcal O}_X) \to 0 \end{equation} so that the period map has as kernel $H^0(X, \hat{\mathcal O}_X)$ mod constants and as cokernel $H^1(X, \hat{\mathcal O}_X)$. Therefore $\operatorname{per}$ is essentially one of the homomorphisms that calculate cohomology introduced in subsection \ref{ss:Sheafcohomology}. We can even be more explicit: if $\{\omega_\alpha\mid w_j\}$ is the (partially) normalized basis of holomorphic differentials as above the homomorphism $\operatorname{per}$ maps the submodule generated by the $w_j$ isomorphically to a free rank $g$ summand of $H^1(X,\Lambda)$. This is irrelevant for the calculation of cohomology, so we can replace the sequence \eqref{eq:period-seq} by \begin{equation}\label{eq:period-seqsimple} 0 \to H^0(X, \hat{\mathcal O}_X)/\Lambda \overset{D}\to \Lambda^{g-1} \overset{Z_o}\to \Lambda^g \to H^1(X, \hat{\mathcal O}_X) \to 0, \end{equation} and $H^0(X, \hat{\mathcal O}_X)$ mod constants is the kernel of $Z_o$, whereas the cokernel of $Z_o$ is $H^1(X, \hat{\mathcal O}_X)$. Similarly, the cohomology of $\mathcal B\text{{\^e}r}_{{X}}$ is calculated by the sequence \begin{multline}\label{eq:repiod-seq} 0 \to H^0(X, \mathcal B\text{{\^e}r}_{{X}}) \overset{\operatorname{p\hat er}}\to H^1(X, \Lambda) \overset{\operatorname{rep}}\to H^0(X,\Berx)^* \\ \to H^1(X, \mathcal B\text{{\^e}r}_{{X}}) \to \Lambda\to 0 \end{multline} The image of a holomorphic differential $\hat\omega$ in $H^1(X,\Lambda)$ is then a vector $\operatorname{p\hat er}(\hat\omega)=\begin{pmatrix} a(\hat\omega)\\ b(\hat\omega)\end{pmatrix}$, where $a(\hat\omega)$ and $b(\hat\omega)$ are the vectors of $a$ respectively $b$ periods of $\hat\omega$. By exactness of \eqref{eq:repiod-seq} we have $\operatorname{rep}\circ \operatorname{p\hat er}=0$, or, using bases, $$ \begin{pmatrix} Z_o^t&0\\ Z_e^t&-I_g \end{pmatrix}\begin{pmatrix} a(\hat\omega)\\ b(\hat\omega)\end{pmatrix}=0 $$ This means that the vector $b(\hat\omega)$ of $b$ periods is (uniquely) determined by the $a$ periods: $b(\hat\omega)= Z_e^t a(\hat\omega)$, and the vector of $a$ periods is constrained by the equation $Z_o^t a(\hat\omega)=0$. 
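For instance, when the odd period matrix $Z_o$ vanishes the sequence \eqref{eq:period-seqsimple} reduces to
$$
0 \to H^0(X, \hat{\mathcal O}_X)/\Lambda \overset{D}\to \Lambda^{g-1} \overset{0}\to \Lambda^g \to H^1(X, \hat{\mathcal O}_X) \to 0,
$$
so that $H^0(X, \hat{\mathcal O}_X)/\Lambda\simeq\Lambda^{g-1}$ and $H^1(X, \hat{\mathcal O}_X)\simeq\Lambda^{g}$ are free; by Theorem \ref{thm:Zsym&noZo=proj} below this happens in particular when $(X,\mathcal O_X)$ is projected.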
The submodule of $H^1(X, \Lambda)$ generated by the elements $b_i^*$ maps under $\operatorname{rep}$ isomorphically to a free rank $0\mid g$ summand of $H^0(X,\Berx)^*$, so that for the calculation of cohomology we can simplify \eqref{eq:repiod-seq} to \begin{equation}\label{eq:repiod-seqsimple} 0 \to H^0(X, \mathcal B\text{{\^e}r}_{{X}}) \to \Lambda^{g} \overset{Z_o^t}\to \Lambda^{g-1} \to H^1(X, \mathcal B\text{{\^e}r}_{{X}}) \to \Lambda\to 0. \end{equation} We summarize the results on the cohomology of the dual curve in the following Theorem. \begin{thm} Let $(X,\mathcal O_X)$ be a generic SKP curve with odd period matrix $Z_o$. Then $$ H^0(X,\hat{\mathcal O}_X)/\Lambda\simeq \operatorname{Ker}(Z_o),\quad H^1(X,\hat{\mathcal O}_X)\simeq\operatorname{Coker}(Z_o). $$ Furthermore $H^0(X,\mathcal B\text{{\^e}r}_{{X}})\simeq \operatorname{Ker}(Z_o^t)$ and $\operatorname{Coker}(Z_o^t)$ is a submodule of $H^1(X,\mathcal B\text{{\^e}r}_{{X}})$ such that $$H^1(X,\mathcal B\text{{\^e}r}_{{X}})/\operatorname{Coker}(Z_o^t)\simeq\Lambda. $$ \end{thm} \subsection{$\Cox$ as extension of $\Berx$.} We discuss in this subsection, for generic SKP curves, the structure of $\Cox$ as extension of $\Berx$ and the relation with free cohomology and the projectedness of the curve $(X,\mathcal O_X)$. {}From the sequence (\ref{eq:extberbystruct}) that defines $\Cox$ we obtain in cohomology \begin{gather}\label{eq:cohomn=2} \begin{aligned} 0 &\to H^0(X,\mathcal O_X)& &\to H^0(X, \Cox) & &\to H^0(X, \Berx) & &\overset{q}\to \\ {}&\overset{q}\to H^1(X,\mathcal O_X)&&\to H^1(X, \Cox) &&\to H^1(X, \Berx)&&\to 0 \end{aligned} \end{gather} The cohomology of the sheaves $\mathcal O_X, \Berx$ is given by Theorem \ref{thm:cohomcurve}. By Theorem \ref{thm:freeness} (or its extension to rank two sheaves) $H^0(X,\Cox)$ is a submodule of a free module $\Lambda^{g+1}\mid\Lambda^{g-1}$ and $H^1(X,\Cox)$ is a quotient of a free module $\Lambda^{g+1}\mid \Lambda^{g-1}$. We see from this that the cohomology of $\Cox$ is free if and only if $q$ is the zero map. To describe the map $q$ in more detail we need to recall some facts about principal parts and extensions (see e.g., \cite{Kempf:AbInt}). For any invertible sheaf $\mathcal L$ let $\underline{\mathcal Rat}(\mathcal L)$ and $\underline{\mathcal Prin}(\mathcal L)$ denote the sheaves of rational sections and principal parts for $\mathcal L$ and denote by $\mathcal Rat(\mathcal L)$ and $\mathcal Prin(\mathcal L)$ their $\Lambda$-modules of global sections. Then the cohomology of $\mathcal L$ is calculated by $$ 0\to H^0(X,\mathcal L)\to \mathcal Rat(\mathcal L)\to \mathcal Prin(\mathcal L)\to H^1(X,\mathcal L)\to 0. $$ In particular we can represent a class $\alpha\in H^1(X,\mathcal L)$ as a principal part $p=\sum p_{x}$, where $p_{x}\in \mathcal Rat(\mathcal L)/\mathcal L_{x}$, for $x\in X$. If $\alpha\in H^1(X,\mathcal L)$ and $\omega\in H^0(X,\mathcal M)$, for some other invertible sheaf $\mathcal M$, then we can define the {\it cup product} $\omega\cup \alpha$ by representing $\alpha$ by a principal part $p$ and calculating the principal part $\omega p= \sum \omega_{x}p_{x}$ in $\mathcal Prin(\mathcal M\otimes \mathcal L)$; the image of $\omega p$ in $H^1(X, \mathcal M\otimes \mathcal L)$ is then by definition $\omega\cup \alpha$. We want to understand the kernel of the cup product with $\omega \in H^0(X, \mathcal M)$ in case $\omega$ is odd and free (i.e., linearly independent over $\Lambda$).
In this case, for any invertible sheaf $\mathcal L$ there will be sections that are immediately annihilated by $\omega$; let therefore $\operatorname{Ann}(\mathcal L,\omega)\subset \mathcal L$ be the subsheaf of such sections. Putting $\mathcal L_\omega=\mathcal L/\operatorname{Ann}(\mathcal L,\omega)$, we get, because $\omega^2=0$, the exact sequence \begin{equation} \label{eq:cupsequence} 0\to \mathcal L_\omega\overset{\omega}\to \operatorname{Ann}(\mathcal L\otimes \mathcal M,\omega)\to Q\to 0 \end{equation} Locally, in an open set $U_\alpha\subset X$, we have $\mathcal L(U_\alpha)=\mathcal O_X(U_\alpha)l_\alpha$, $\mathcal M(U_\alpha)=\mathcal O_X(U_\alpha)m_\alpha$ and we write $\omega=\omega_\alpha(z,\theta)m_\alpha$, with $\omega_\alpha=\phi_\alpha+\theta_\alpha f_\alpha$. Then $f_{\alpha}^{\text{red}}$ is a regular function on $U_\alpha$ with some divisor of zeros $D_f=\sum n_i q_i$. Some of the $q_i$ may also be zeros of (the lowest order part of) $\phi_\alpha$ and there will be a maximal $g_\alpha(z,\theta) \in \mathcal O_X(U_\alpha)_{\bar 0}$ (here $\mathcal O_X(U_\alpha)_{\bar 0}$ is the module of even sections) such that $$ \omega_\alpha(z,\theta)=\omega_\alpha(z,\theta)^\prime g_\alpha(z,\theta) $$ with $g_\alpha^{\text{red}}$ a regular function with divisor of zeros $D_g$ (on $U_\alpha$) satisfying $0\le D_g\le D_f$. Then $\operatorname{Ann}(\mathcal L\otimes \mathcal M,\omega)(U_\alpha)$ is generated by $\omega_\alpha(z,\theta)^\prime l_\alpha\otimes m_\alpha$ and we see that $Q$ is a torsion sheaf: $Q$ is killed by the invertible sheaf generated locally by the even invertible rational function $g_\alpha(z,\theta)$. Let $D_\omega=\{(g_\alpha(z,\theta),U_\alpha)\}$ be the corresponding Cartier divisor. Then we have an isomorphism $$ \operatorname{Ann}(\mathcal L\otimes \mathcal M,\omega)\to \mathcal L(D_\omega), \quad \omega_\alpha(z,\theta)^\prime l_\alpha\otimes m_\alpha\mapsto l_\alpha\otimes 1/g_\alpha(z,\theta) $$ The sequence \eqref{eq:cupsequence} is equivalent to $$ 0\to \mathcal L\to \mathcal L(D_\omega)\to \mathcal L(D_\omega)|_{D_\omega}\to 0. $$ Now the cup product with $\omega$ gives a map $H^1(X,\mathcal L_\omega)\to H^1(X, \operatorname{Ann}(\mathcal L\otimes \mathcal M,\omega))$ with kernel the image of the natural map $\phi:H^0(X,Q)\to H^1(X,\mathcal L_\omega)$. Identifying $H^0(X,Q)$ with $H^0(X, \mathcal L(D_\omega)|_{D_\omega})$, we see that $\phi$ is the composition $$ H^0(X,\mathcal L(D_\omega)|_{D_\omega})\to \mathcal Prin (\mathcal L)\to H^1(X,\mathcal L). $$ Therefore the kernel of $\omega\cup$ consists of those $\alpha\in H^1(X,\mathcal L)$ that have a representative $p\in \mathcal Prin(\mathcal L)$ such that $\omega p$ has zero principal part, i.e., the poles in $p$ are compensated by the zeros in $\omega$. Extensions of the form (\ref{eq:extberbystruct}) are classified by $\delta\in H^1(X,\Berx^*)$: we think of $\Cox$ as a subsheaf of $\underline {\mathcal Rat}(\mathcal O_X\oplus \Berx)$ consisting, on an open set $U$, of pairs $(f, \omega)$ where $\omega\in \Berx(U)$ and $f$ is a rational function such that the principal part $\bar f$ is equal to $\omega p$, for $p\in \mathcal Prin(\Berx^*)$; then $\delta\in H^1(X,\Berx^*)$ is the class of $p$. It is then easy to see that the connecting map $q: H^0(X,\Berx)\to H^1(X,\mathcal O_X)$ is cup product by the extension class $\delta$: $q(\omega)=\omega\cup \delta$.
The class $\delta$ is represented by the {\it \v Cech} cocycle \begin{equation}\label{eq:cechcocycle} \phi_{\beta\alpha} = \partial_\theta F_{\beta\alpha} / \partial_\theta \Psi_{\beta\alpha}\in \Berx^*(U_\alpha\cap U_\beta), \end{equation} from \eqref{eq:coordchangen=2}, \eqref{eq:transformrho2}. \begin{lem}\label{lem:q=0iffexttriv} Let $(X,\mathcal O_X)$ be a generic SKP curve. Then the connecting homomorphism $q: H^0(X,\Berx)\to H^1(X,\mathcal O_X)$ in \eqref{eq:cohomn=2} is the zero map iff the extension \eqref{eq:extberbystruct} is trivial. In particular the cohomology of $\Cox$ is free iff the extension is trivial. \end{lem} \begin{proof}It is clear that if the extension is trivial the connecting map $q$ is trivial. From the explicit form, in particular the $\theta$ independence, of the cocycle we see that it is not immediately killed by multiplication by an odd free section $\omega$ of $\Berx$, i.e., the cocycle is not zero in the cohomology group $H^1(X,\Berx^*/\operatorname{Ann}(\Berx^*,\omega))$ if it is nonzero in $H^1(X, \Berx^*)$. The split sheaf $\Berxsplit$ is $\mathcal K\mathcal N\inv\mid \mathcal K$. An odd free section $\omega$ of $\Berx$ therefore has an associated divisor $D_\omega$ as constructed above with reduced support included in the divisor of a section of $\mathcal K$ on the underlying curve. Now $q(\omega)=\omega\cup \delta$ is zero if the zeros of $\omega$ cancel the poles occurring in the principal part $p$ representing $\delta$. But by classical results the complete linear system of $\mathcal K$ has no base points, i.e., there is no point on $X$ where all global sections of $\mathcal K$ vanish. This means that wherever the poles of $p$ occur, there will be a section $\omega$ of $\Berx$ that does not vanish there. So $q$ being zero on all odd generators of $H^0(X,\Berx)$ implies that the extension is trivial. A fortiori if $q$ is the zero map the extension will also be trivial. \end{proof} The extension given by the cocycle \eqref{eq:cechcocycle} is trivial if \begin{equation} \label{eq:phitriv} \phi_{\beta\alpha}(z_\alpha) = \sigma_\beta(z_\beta,\theta_\beta) - H_{\beta\alpha}(z_\alpha,\theta_\alpha) \sigma_\alpha(z_\alpha,\theta_\alpha) \end{equation} for some 1-cochain $\sigma_\alpha \in\Berx^*(U_\alpha)$. In that case, a splitting $e: \Berx \rightarrow \Cox$ is obtained by $e(f_\alpha) = \rho_\alpha - \sigma_\alpha(z_\alpha,\theta_\alpha)$. \begin{thm} \label{thm:split=proj} For a generic SKP curve $(X,\mathcal O_X)$, $\Cox$ is a trivial extension of $\Berx$ iff $(X,\mathcal O_X)$ is projected. \end{thm} \begin{proof} We have already observed (in subsection \ref{ss:DualN=2curves}) that $X$ projected implies $\phi_{\beta\alpha} = 0$ in a projected atlas, making the extension trivial. Now suppose, if possible, that the extension is trivial but that $X$ is not projected. Write the transition functions of $X$ in the form \begin{equation*} z_\beta = f_{\beta\alpha}(z_\alpha) + \theta_\alpha \eta_{\beta\alpha}(z_\alpha), \;\;\;\; \theta_\beta = \psi_{\beta\alpha}(z_\alpha) + \theta_\alpha B_{\beta\alpha}(z_\alpha) \end{equation*} and assume that the atlas has been chosen so that $\eta_{\beta\alpha}$ vanishes to the highest possible (odd) order $n$ in nilpotents. That is, $\eta_{\beta\alpha} = 0$ mod $\mathfrak m^n$, but not mod $\mathfrak m^{n+2}$. Writing also $\sigma_\alpha(z_\alpha,\theta_\alpha) = \chi_\alpha(z_\alpha) + \theta_\alpha h_\alpha(z_\alpha)$ and substituting in (\ref{eq:phitriv}) yields two conditions.
From the $\theta_\alpha$-independence of $\phi_{\beta\alpha}$ one finds that $h_\alpha$ mod $\mathfrak m^{n+1}$ is a global section of ${\mathcal K}^{-1}{\mathcal N}^2$. Since $X$ is a generic SKP curve, there are no such sections and $h_\alpha=0$ to this order. Using this, the second condition becomes, \begin{equation*} \eta_{\beta\alpha} = B_{\beta\alpha} \chi_\beta(f_{\beta\alpha}) - f'_{\beta\alpha} \chi_\alpha \;\;\; {\text{mod }} {\mathfrak m}^{n+2}. \end{equation*} This condition implies that the coordinate change $\tilde{z}_\alpha = z_\alpha - \theta_\alpha \chi_\alpha$ will make $\eta_{\beta\alpha}$ vanish to higher order than $n$, a contradiction. \end{proof} To lowest order in nilpotents, the cocycle conditions for the transition functions of $X$ imply that $\eta_{\beta\alpha}/B_{\beta\alpha}$ is a cocycle for $H^1(X,\mathcal{NK}^{-1})$, while $\psi_{\beta\alpha}$ is a cocycle for $H^1(X,\mathcal N^{-1})$. This implies that the projected $X$'s have codimension $(0 \mid 3g-3)$ in the moduli space of generic SKP curves, which has dimension $(4g-3 \mid 4g-4)$ (see \cite{Vain:DeformSupSpacShe}). The proof of Theorem \ref{thm:split=proj} generalizes to higher order in nilpotents the fact that at lowest order $\phi_{\beta\alpha}$ is a cocycle in $H^1(X,\mathcal{NK}^{-1} \mid \mathcal N^2 \mathcal K^{-1})$. \subsection{$\Cox$ as extension of $\mathcal B\text{{\^e}r}_{{X}}$ and symmetric period matrices.} \label{ss:Symmperiodmatrices} One can equally view $\Cox$ as an extension of $\mathcal B\text{{\^e}r}_{{X}}$ by $\hat{\mathcal O}_X$. Obviously, if $(X,\hat{\mathcal O}_X)$ is projected this extension is trivial, but the converse no longer holds. In the proof of Theorem \ref{thm:split=proj} there is now the possibility that $h_\alpha \neq 0$. (Recall from subsection \ref{ss:SRS} that for $X$ an SRS, a splitting of the extension was universally given by $\chi_\alpha=0,h_\alpha=-1$.) One can see that this extension is not always trivial, however, by constructing examples with $\psi_{\beta\alpha} = 0$ and $\phi_{\beta\alpha}$ a nontrivial class. (We are now referring to an atlas for $(X,\hat{\mathcal O}_X)$.) In this subsection we will exhibit a connection between the structure of $\Cox$ as extension of $\mathcal B\text{{\^e}r}_{{X}}$ and the symmetry of the component $Z_e$ of the period matrix, see \eqref{eq:normperiodmatrix}. By classical results $Z_e^{\text{red}}$ is symmetric. However, there seems to be no reason that $Z_e$ is symmetric in general. \begin{thm} \label{thm:Zsym&noZo=proj} Let $(X,\mathcal O_X)$ be a generic SKP curve and $Z_e,Z_o$ its (partially) normalized period matrices (as in \eqref{eq:normperiodmatrix}). Then we have $Z_e$ symmetric and $Z_o=0$ iff $(X,\mathcal O_X)$ is projected. \end{thm} \begin{proof} This follows immediately from Theorem \ref{thm:split=proj}, Lemma \ref{lem:q=0iffexttriv} and the explicit form \eqref{eq:matrixconnectinghom} of the connecting homomorphism $q$. \end{proof} Recall the exact sequence $$ 0\to \Lambda\to \Cox \overset{(\Dc,\hat{D}_{\mathcal C} )}\to \mathcal M \rightarrow 0, $$ where $\mathcal M=\Berx\oplus\mathcal B\text{{\^e}r}_{{X}}$ is the sheaf of objects that can be integrated on $\Cox$. The corresponding cohomology sequence is in part \begin{equation*} 0 \to \Lambda \to H^0(X,\Cox) \stackrel{(\Dc,\hat{D}_{\mathcal C})}{\longrightarrow} H^0(X,{\mathcal M}) \stackrel{\operatorname{cper}}{\longrightarrow} H^1(X,\Lambda) \end{equation*} where $\operatorname{cper}(\omega,\hat\omega)=\{\sigma\mapsto \int_\sigma [\omega+\hat\omega]\}$.
So we see that we can identify $H^0(X,\Cox)/\Lambda$ with pairs $(\omega,\hat{\omega})$ of differentials with opposite periods. Now let $(\omega,\hat{\omega})$ be such a pair. $\omega$ can be written in terms of the basis of $H^0(X,\Berx)$ in the form \begin{equation*} \omega = \sum a_i(\omega) w_i + \sum A_\alpha \omega_\alpha, \end{equation*} where $a_i(\omega)$ denote the a-periods and $A_\alpha$ are other constants uniquely determined by $\omega$. Then the vector of b-periods of $\omega$ will be \begin{equation*} b(\omega) = Z_e a(\omega) + Z_o A. \end{equation*} Since these coincide with minus the b-periods of $\hat{\omega}$, which are $b(\hat{\omega}) = Z_e^t a(\hat\omega)=-Z_e^t a(\omega)$, we obtain for each such pair of differentials a relation \begin{equation} \label{eq:basicZrelation} (Z_e - Z_e^t) a(\omega) + Z_o A = 0. \end{equation} We have a sequence analogous to \eqref{eq:cohomn=2} for $\Cox$ as extension of $\mathcal B\text{{\^e}r}_{{X}}$ and a connecting map $\hat q$ for this situation. \begin{thm} \label{thm:Zsym->dualexttriv} Let $(X,\mathcal O_X)$ be a generic SKP curve and $Z_e,Z_o$ its normalized period matrices. If $Z_e$ is symmetric, then $\hat q$ is the zero map. \end{thm} \begin{proof} Assuming that $Z_e=Z_e^t$, we determine the set of pairs $(\omega,\hat{\omega})$ with opposite periods. The a-periods of $\hat{\omega}$ can be chosen freely from the kernel of $Z_o^t$. According to (\ref{eq:basicZrelation}), any $\omega$ chosen to match these a-periods will also have matching b-periods iff $A_\alpha$ belongs to the kernel of $Z_o$. Therefore, $H^0(X,\Cox)$ mod constants can be identified with $\text{Ker} Z_o \oplus \text{Ker} Z_o^t$, which is precisely $H^0(X,\hat{\mathcal O}_X)/\Lambda \oplus H^0(X,\mathcal B\text{{\^e}r}_{{X}})$. In this case $\hat q$ is the zero map. \end{proof} In general it seems that $\hat q=0$ will not imply that the extension $\Cox$ of $\mathcal B\text{{\^e}r}_{{X}}$ is trivial, as in Lemma \ref{lem:q=0iffexttriv} for the extension of $\Berx$ by $\mathcal O_X$. Also it seems that $Z_e=Z_e^t$ cannot be deduced from (\ref{eq:basicZrelation}) as long as the a-periods are constrained to the kernel of $Z_o^t$. \subsection{Moduli of invertible sheaves.}\label{ss:modinvertible sheaves} In this subsection we will discuss some facts about invertible sheaves on super curves and their moduli spaces, see also \cite{RoSchVor:GeomSupConf,GidNelson:LinebSRS}. An invertible sheaf on $(X,\mathcal O_X)$ is determined by transition functions $g_{\alpha\beta}$ on overlaps $U_\alpha\cap U_\beta$, and so isomorphism classes of invertible sheaves are classified by the cohomology group $H^1(X,\mathcal O^\times_{X,\text{ev}})$. The degree of an invertible sheaf $\mathcal L$ is the degree of the underlying reduced sheaf $\mathcal L^{\text{red}}$, with transition functions $g_{\alpha\beta}^{\text{red}}$. Let $\Pic^0(X)$ denote the group of degree zero invertible sheaves on $(X,\mathcal O_X)$. The exponential sheaf sequence \begin{equation}\label{expsequence} 0\to \mathbb Z\to \mathcal O_{X,\text{ev} } \overset{\exp(2\pi i \times \cdot )}\to \mathcal O^\times_{X,\text{ev}}\to 0 \end{equation} reduces mod nilpotents to the usual exponential sequence for ${\mathcal O}_{X}^{\text{red}}$ and we see that $\Pic^0(X)=H^1(X,\mathcal O_{X,\text{ev} } )/H^1(X,\mathbb Z)$. 
If $(X,\mathcal O_X)$ is a generic SKP curve $H^1(X,\mathcal O_X)$ is a free rank $g\mid g-1$ $\Lambda$-module and the map $H^1(X,\mathbb Z)\to H^1(X,\mathcal O_X)$ is the restriction of the map $H^1(X,\Lambda)\to H^1(X,\mathcal O_X)$, which is dual to the map $\operatorname{per}$ of Lemma \ref{lem:perrep}. So with respect to a suitable basis $H^1(X,\mathbb Z)\to H^1(X,\mathcal O_X)$ is described by the transpose of the period matrix \eqref{eq:normperiodmatrix}. This implies that the image of $H^1(X,\mathbb Z)$ is generated by $2g$ elements that are linearly independent over the real part $\Lambda_\Re$ of $\Lambda$ (see Appendix \ref{ss:realstrconj} for the definition of $\Lambda_\Re$). The elements of the quotient $\Pic^0(X)=H^1(X,\mathcal O_{X,\text{ev} } )/H^1(X,\mathbb Z)$ are the $\Lambda$-points of a super torus of dimension $(g\mid g-1)$. Each component of $\operatorname{Pic}(X)$ is then isomorphic as a supermanifold to this supertorus. In general, however, $H^1(X,\mathcal O_X)$ is not free, nor is the image of $H^1(X,\mathbb Z)$ generated by $2g$ independent vectors. It seems an interesting question to understand $\Pic^0(X)$ in this generality. For any supercurve $(X,\mathcal O_X)$ we define the Jacobian by $$\operatorname{Jac}(X)=H^0(X,\Berx)^*_{\text{odd}}/H_1(X,\mathbb Z),$$ where elements of $H_1(X,\mathbb Z)$ act by odd linear functionals on holomorphic differentials from $H^0(X,\Berx)$ by integration over 1-cycles. We have, as discussed in Appendix \ref{app:dualSerredual}, a pairing of $\Lambda$-modules \begin{equation}\label{eq:lambdapairing} H^1(X,\mathcal O_X)\times H^0(X,\Berx)\to \Lambda. \end{equation} As we will discuss in more detail in subsection \ref{ss:effdivisorpoinc} invertible sheaves are also described by divisor classes. We use this in the following Theorem. \begin{thm} The pairing (\ref{eq:lambdapairing}) induces an isomorphism of the identity component $\Pic^0(X)$ with the Jacobian $\operatorname{Jac}(X)$ given by the usual Abel map: a bundle $\mathcal L \in \Pic^0(X)$ with divisor $P-Q$ corresponds to the class of linear functionals $\int_Q^P$, modulo the action of $H_1(X,\mathbb Z)$ by addition of cycles to the path from $Q$ to $P$. \end{thm} \begin{proof} Let $\mathcal L \in \Pic^0(X)$ have divisor $P-Q$, with the reduced points $P^{\text{red}}$ and $Q^{\text{red}}$ contained in a single chart $U_0$ of a good cover of $X$. If $P=z-p-\theta\pi$ and $Q=z-q-\theta\xi$, this bundle has a canonical section equal to unity in every other chart, and equal to $$ \frac{z-p-\theta\pi}{z-q-\theta\xi} = \frac{z-p}{z-q} - \frac{\theta\pi}{z-q} + \theta\xi \frac{z-p}{(z-q)^2} $$ in $U_0$. In the covering space $H^1(X,\mathcal O_{X,\text{ev} } )$ of $\Pic^0(X)$, with covering group $H^1(X,\mathbb Z)$, $\mathcal L$ lifts to a discrete set of cocycles given by the logarithms of the transition functions of $\mathcal L$ in the chart overlaps, namely $$ a_{0i} = \frac{1}{2\pi i}[\log (z-p) - \log (z-q) - \frac{\theta\pi}{z-p} + \frac{\theta\xi}{z-q}] $$ in $U_0 \cap U_i$, and zero in other overlaps. The covering group acts by changing the choice of branches for the logarithms. We now fix the particular choice for which the branch cut $C$ from $Q$ to $P$ lies entirely in $U_0$ and meets no other $U_i$. Under the Dolbeault isomorphism, this cocycle corresponds to a $(0,1)$ form most conveniently represented by the current $\bar{\partial}a_i$ in $U_i$, where $a_{ij} = a_i - a_j$ and $\bar{\partial} = d\bar{z} \partial_{\bar{z}} + d\bar{\theta} \partial_{\bar{\theta}}$. 
It is supported on the branch cut $C$, and we can take $a_i = 0$ for $i \ne 0$. The pairing (\ref{eq:lambdapairing}) now associates to this the linear functional on $H^0(X,\Berx)$ which sends $\omega \in H^0(X,\Berx)$, written as $f(z) + \theta \phi(z)$ in $U_0$, to \cite{HaskeWells:Serreduality} $$ \int_X i(\partial_{\bar{z}}) \bar{\partial}a_0 \, \omega \bar{\theta} \,[dz\, d\bar{z}\, d\bar{\theta} \, d\theta] = \int_X (\partial_{\bar{z}}a_0) \omega dz \,d\bar{z}\, d\theta. $$ By the definition of the derivative of a current \cite{GrHa:PrincAlgGeom} and Stokes' theorem this can be rewritten \begin{multline*} - \int_{\partial(X-C)} dz \int d\theta \, a_0 (f+\theta\phi)= \\ = - \frac{1}{2\pi i} \int_{\partial(X-C)} dz \{[\log(z-p) - \log(z-q)]\phi + [\frac{\xi}{z-q} - \frac{\pi}{z-p}]f\}, \end{multline*} where $\partial(X-C)$ denotes the limit of a small contour enclosing $C$. Using the residue theorem and the discontinuity of the logarithms across the cut, this evaluates to $$ \int_C \phi \, dz + \pi f(p) - \xi f(q) = \int_Q^P \omega .$$ By linearity of the pairing (\ref{eq:lambdapairing}), we can extend this correspondence to arbitrary bundles of degree zero by taking sums of divisors of the form $P_i-Q_i$. In particular, the divisor $(P-Q) + (P_1-P) + (P_2-P_1) + \cdots + (P_n-P_{n-1}) + (P-P_n)$ is equivalent to $P-Q$, but if the contour $PP_1P_2 \cdots P_nP$ represents a nontrivial homology class then the corresponding linear functionals $\int_Q^P$ differ by addition of this cycle to the integration contour. This shows that the action of $H_1(X,\mathbb Z)$ specified in the definition of $\operatorname{Jac}(X)$ is the correct one. \end{proof} \subsection{Effective divisors and Poincar\'e sheaf for generic SKP curves.} \label{ss:effdivisorpoinc} Another description of invertible sheaves is given by divisor classes. Recall that a divisor $D\in \Divx$ is a global section of the sheaf $\mathcal Rat_{\text{ev}}^\times(X)/\mathcal O^\times_{X,\text{ev}}$, so $D$, up to equivalence, is given by a collection $(f_\alpha,U_\alpha)$ where the $f_\alpha$ are even invertible rational functions that are on overlaps related by an element of $\mathcal O^\times_{X,\text{ev}}(U_\alpha\cap U_\beta)$. Each $f_\alpha$ reduces mod nilpotents to a nonzero rational function $f_\alpha^{\text{red}}$ on the reduced curve, so that $D$ determines a divisor $D^{\text{red}}$. Then the {\it degree} of $D$ is the usual degree of its reduction $D^{\text{red}}$. We have a mapping $\mathcal Rat_{\text{ev}}^\times(X)\to \Divx$, $f\mapsto (f)$, and elements $(f)$ of the image are called {\it principal}. Two even invertible rational functions $f_1, f_2$ give rise to the same divisor iff $f_1=kf_2$ where $k\in H^0(X,\mathcal O^\times_{X,\text{ev}})$. So if $(X,\mathcal O_X)$ is a generic SKP curve $k$ is just an even invertible element of $\Lambda$ but in general more exotic possibilities for $k$ exist. A divisor $D$ is {\it effective}, notation $D\ge 0$, if all $f_\alpha\in \mathcal O_{X,\text{ev} } (U_\alpha)$. An invertible $\mathcal O_X$-module $\mathcal L$ can be thought of as a submodule of rank $1| 0$ of the constant sheaf $\mathcal Rat(X)$. If $\mathcal L(U_\alpha)=\mathcal O_X(U_\alpha)e_\alpha$, then $e_\alpha\in \mathcal Rat_{\text{ev}}^\times(X)$ and $\mathcal L$ determines the divisor $D=\{(f_\alpha=e_\alpha\inv,U_\alpha)\}$. Conversely any divisor $D$ determines an invertible sheaf $\mathcal O_X(D)$ (in $\mathcal Rat(X)$) with local generators $e_\alpha=f_\alpha\inv$. 
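To give a minimal example in the notation of the preceding proof, a single irreducible divisor $P=z-p-\theta\pi$ with reduced point contained in a chart $U_0$ of a good cover defines the effective degree one divisor $D=\{(z-p-\theta\pi,U_0),\,(1,U_i)_{i\neq 0}\}$, and the corresponding invertible sheaf $\mathcal O_X(D)\subset\mathcal Rat(X)$ has local generators
$$
e_0=\frac{1}{z-p-\theta\pi}\quad\text{on } U_0, \qquad e_i=1\quad\text{on } U_i,\ i\neq 0;
$$
the degree zero divisor $P-Q$ used in the proof above is a difference of two divisors of this type.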
Two divisors $D_1=\{(f^{(1)}_\alpha,U_\alpha)\}$ and $D_2=\{(f^{(2)}_\alpha,U_\alpha)\}$ give rise to equivalent invertible sheaves iff they are {\it linearly equivalent}, i.e., $D_1=D_2+(f)$ for some element $f$ of $\mathcal Rat_{\text{ev}}^\times(X)$, or more explicitly iff $f^{(1)}_\alpha=ff^{(2)}_\alpha$ for all $\alpha$. If $f\in \mathcal Rat_{\text{ev}}^\times(X)$ is a global section of an invertible sheaf $\mathcal L=\mathcal O_X(D)$ then $D+(f)\ge 0$ and vice versa. The {\it complete linear system} $|D|=|\mathcal O_X(D)|$ of a divisor (or of the corresponding invertible sheaf) is the set of all effective divisors linearly equivalent to $D$. So we see that if $\mathcal L=\mathcal O_X(D)$ then $$ |D|\simeq H^0(X,\mathcal L)_{\text{ev}}^\times/H^0(X,\mathcal O_X)_{\text{ev}}^\times. $$ In case the cohomology of $\mathcal L$ is free of rank $p+1\mid q$ and $H^0(X,\mathcal O_X)$ is just the constants $\Lambda\mid 0$, the complete linear system $|D|$ is (the set of $\Lambda$-points of) a super projective space $\mathbb P^{p\mid q}_\Lambda$. In particular, if $(X,\mathcal O_X)$ is a generic SKP curve and the degree $d$ of $\mathcal L$ is $\ge 2g-1$, the first cohomology of $\mathcal L$ vanishes, the zeroth cohomology is free of rank $d +1-g\mid d+1-g$ and $|D|\simeq \mathbb P_\Lambda^{d-g\mid d+1-g}$. Let $\hat X=(X,\hat{\mathcal O}_X)$ be the dual curve and denote by $\hatxd$ the $d$-fold symmetric product of $\hat X$, see \cite{DomPerHerRuiSanchSal:Superdiv}. This smooth supermanifold of dimension $(d\mid d)$ parametrizes effective divisors of degree $d$ on $(X,\mathcal O_X)$. We have a map (called {\it Abelian sum}) $A:\hatxd\to \operatorname{Pic}^d(X)$ sending an effective divisor $D$ to the corresponding invertible sheaf $\mathcal O_X(D)$. An invertible sheaf $\mathcal L$ is in the image of $A$ iff $\mathcal L$ has an even invertible global section: if $D\in \hatxd$ and $\mathcal L=A(D)$ then the fiber of $A$ at $\mathcal L$ is the complete linear system $|D|$. If the degree $d$ of $\mathcal L$ is at least $2g-1$, $H^1(X,\mathcal L)$ is zero and hence the cohomology of $\mathcal L$ is free. So in that case $A$ is surjective and the fibers of $A$ are all projective spaces $\mathbb P^{d-g\mid d+1-g}$ and $A$ is in fact a fibration. The symmetric product $\hatxd$ is a universal parameter space for effective divisors of degree $d$. This is studied in detail by Dom\'\i nguez P\'erez et al. \cite{DomPerHerRuiSanchSal:Superdiv}; we will summarize some of their results and refer to their paper for more details. (In fact they consider curves over a field, but the theory is not significantly different for curves over $\Lambda$.) A {\it family of effective divisors} of degree $d$ on $X$ parametrized by a super scheme $S$ is a pair $(S,D_S)$, where $D_S$ is a Cartier divisor on $X\times_\Lambda S$ such that for any morphism $\phi:T\to S$ the induced map $(1\times\phi)^*\mathcal O_{X\times S}(-D_S)\to (1\times\phi)^*\mathcal O_{X\times S}$ is injective and such that for any $s\in S$ the restriction of $D_S$ to $X\times \{s\}\simeq X$ is an effective divisor of degree $d$. For example, in $X\times \hatxd$ there is a canonical divisor $\Delta^{(d)}$ such that if $p_D$ is any $\Lambda$-point of $\hatxd$ corresponding to a divisor $D$ then the restriction of $\Delta^{(d)}$ to $X\times \{p_D\}\simeq X$ is just $D$. Then $(\hatxd, \Delta^{(d)})$ is universal in the sense that for any family $(S,D_S)$ there is a unique morphism $\Psi:S\to \hatxd$ such that $D_S=\Psi^*\Delta^{(d)}$.
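For instance, taking $S$ to be the base $(\bullet,\Lambda)$ itself, so that $X\times_\Lambda S\simeq X$, a family of effective divisors of degree $d$ amounts to a single effective divisor $D$ of degree $d$ on $X$, and the universal morphism is the corresponding $\Lambda$-point $\Psi=p_D$ of $\hatxd$, for which $\Psi^*\Delta^{(d)}$ is the restriction of $\Delta^{(d)}$ to $X\times\{p_D\}\simeq X$, i.e., $D$ itself.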
A {\it family of invertible sheaves} of degree $d$ on $X$ parametrized by a super scheme $S$ is a pair $(S,\mathcal L_S)$, where $\mathcal L_S$ is an invertible sheaf on $X\times_\Lambda S$ such that for any $s\in S$ the restriction of $\mathcal L_S$ to $X\times \{s\}$ is a sheaf of degree $d$ on $X$. For example, $(\hatxd,\mathcal O_{X\times \hatxd}(\Delta^{(d)}))$ is a family of invertible sheaves of degree $d$. Two families $(S,\mathcal L_1)$, $(S,\mathcal L_2)$ are equivalent if $\mathcal L_1=\mathcal L_2\otimes \pi_S^* \mathcal N$, where $\pi_S:X\times S\to S$ is the canonical projection and $\mathcal N$ is an invertible sheaf on $S$. For example, fix a point $x$ of $X$; then $(\hatxd,\mathcal O_{X\times \hatxd}(\Delta^{(d)}))$ is equivalent to $(\hatxd,\mathcal R_{x})$, where $\mathcal R_x=\mathcal O_{X\times \hatxd}(\Delta^{(d)})\otimes \pi_{\hatxd}^*[\mathcal O_{X\times \hatxd}(\Delta^{(d)})|_{\{x\}\times\hatxd}]\inv$. The family $(\hatxd,\mathcal R_x)$ is normalized: it has the property that $\mathcal R_x$ restricted to $\{x\}\times \hatxd$ is canonically trivial. Now consider the mapping $(1\times A):X\times \hatxd\to X\times\operatorname{Pic}^d(X)$ and the direct image $\mathcal P^{(d)}_x=(1\times A)_* \mathcal R_x$. \begin{thm} Let $(X,\mathcal O_X)$ be a generic SKP curve. Let $d\ge 2g-1$. Then $\mathcal P^{(d)}_x$ is a Poincar\'e sheaf on $X\times\operatorname{Pic}^d(X)$, i.e., $(\operatorname{Pic}^d(X), \mathcal P^{(d)}_x)$ is a family of invertible sheaves of degree $d$ that is universal in the sense that for any family $(S,\mathcal L)$ of degree $d$ invertible sheaves there is a unique morphism $\phi:S\to \operatorname{Pic}^d(X)$ so that $\mathcal L=\phi^*\mathcal P^{(d)}_x$. Furthermore $\mathcal P^{(d)}_x$ is normalized so that the restriction to $\{x\}\times \operatorname{Pic}^d(X)$ is canonically trivial. \end{thm} \subsection{Berezinian bundles.}\label{subs:Berbundles} We continue with the study of a generic SKP curve $(X,\mathcal O_X)$; we fix an integer $n$ and write $\mathcal P$ for $\mathcal P^n_X$, the Poincar\'e sheaf on $X\times \operatorname{Pic}^n(X)$. Let $\mathcal L_s$ be an invertible sheaf corresponding to $s\in \operatorname{Pic}^n(X)$. The cohomology groups $H^i(X,\mathcal L_s)$ will vary as $s$ varies over $\operatorname{Pic}^n(X)$ and can in general be nonfree, as we have seen. Even if the cohomology groups are free $\Lambda$-modules their ranks will jump. Still it is possible to define an invertible sheaf $\Ber$ over $\operatorname{Pic}^n(X)$ with fiber at $s$ the line $$ \ber(H^0(X,\mathcal L_s))\otimes \ber^*(H^1(X,\mathcal L_s)), $$ in case $\mathcal L_s$ has free cohomology. Here $\ber(M)$ for a free rank $d\mid \delta$ $\Lambda$-module $M$ with basis $\{f_1,\dots,f_d,\phi_1,\dots,\phi_\delta\}$ is the rank 1 $\Lambda$-module with generator $B[f_1,\dots,f_d,\phi_1,\dots,\phi_\delta]$. If we are given another basis $\{f^\prime_1,\dots,f^\prime_d,\phi^\prime_1,\dots,\phi^\prime _\delta\}=g\cdot \{f_1,\dots,f_d,\phi_1,\dots,\phi_\delta\}$, with $g\in Gl(d\mid\delta,\Lambda)$, we have the relation $$ B[f^\prime_1,\dots,f^\prime_d,\phi^\prime_1,\dots,\phi^\prime_\delta] =\ber(g)B[f_1,\dots,f_d,\phi_1,\dots,\phi_\delta]. $$ Similarly $\berdual(M)$ is defined using the inverse homomorphism $\berdual$. Here $\ber$ and $\berdual$ are the group homomorphisms defined in (\ref{eq:defberber*}). The invertible sheaf $\mathcal L_s$ is obtained from the Poincar\'e sheaf via $i_s^*\mathcal P$.
We can reformulate this somewhat differently: $\mathcal P$ is an $\mathcal O_{\operatorname{Pic}^n(X)}$-module and for every $\Lambda$-point $s$ of $\operatorname{Pic}^n(X)$, via the homomorphism $s^\sharp:\mathcal O_{\operatorname{Pic}^n(X)}\to \Lambda$, also $\Lambda$ becomes an $\mathcal O_{\operatorname{Pic}^n(X)}$-module, denoted by $\Lambda_s$. Then $\mathcal L_s=i_s^*\mathcal P=\mathcal P\otimes_{\mathcal O_{\operatorname{Pic}^n(X)}} \Lambda_s$. It was Grothendieck's idea to study the cohomology of $\mathcal P\otimes M$ for arbitrary $\mathcal O_{\operatorname{Pic}^n(X)}$-modules $M$. We refer to Kempf (\cite{Kempf:AbInt}) for an excellent discussion and more details on these matters. The basic fact is that, given the Poincar\'e bundle $\mathcal P$ on $X\times \operatorname{Pic}^n(X)$, there is a homomorphism $\alpha:\mathcal F\to \mathcal G$ of locally free coherent sheaves on $\operatorname{Pic}^n(X)$ such that we get for any sheaf of $\mathcal O_{\operatorname{Pic}^n(X)}$-modules $M$ an exact sequence \begin{gather*} \begin{aligned} 0 \to H^0(X\times \operatorname{Pic}^n(X),\mathcal P\otimes M)\to \mathcal F\otimes M\overset {\alpha\times 1_M}\to \mathcal G\otimes M\to \\ \to H^1(X\times \operatorname{Pic}^n(X), \mathcal P\otimes M)\to 0. \end{aligned} \end{gather*} The proof of this is the same as for the analogous statement in the classical case, see \cite{Kempf:AbInt}. Now $\mathcal F$ and $\mathcal G$ are locally free, so for small enough open sets $U$ on $\operatorname{Pic}^n(X)$ one can define $\ber(\mathcal F(U))$ and $\ber^*(\mathcal G(U))$. This globalizes to invertible sheaves $\ber(\mathcal F)$ and $\ber^*(\mathcal G)$. Next we form the ``Berezinian of the cohomology of $\mathcal P$'' by defining $\Ber=\ber(\mathcal F)\otimes \ber^*(\mathcal G)$. Finally one proves, as in Soul\'e, \cite{Soul:Arakelov}, VI.2, Lemma 1, that $\Ber$ does not depend, up to isomorphism, on the choice of homomorphism $\alpha:\mathcal F\to \mathcal G$. \begin{thm}\label{thm:1ChernBertriv} The first Chern class of the $\Ber$ bundle is zero. \end{thm} We will prove this theorem in subsection \ref{ss:ChernclassBeronPic}, after the introduction of the infinite super Grassmannian and the Krichever map. The topological triviality of the $\Ber$ bundle is a fundamental difference from the situation of classical curves: there the determinant bundle on $\operatorname{Pic}$ is ample. Next we consider the special case of $n=g-1$. In this case, because of Riemann-Roch (\ref{superRR}) $\mathcal F$ and $\mathcal G$ have the same rank. Indeed, locally $\alpha(U):\mathcal F(U)\to \mathcal G(U)$ is given, after choosing bases, by a matrix over $\mathcal O_{\operatorname{Pic}^n(X)}(U)$ of size $d\mid\delta \times e\mid \epsilon$, say. If we fix a $\Lambda$-point $s$ in $U$ we get a homomorphism $\alpha(U)_s:\mathcal F(U)\otimes \Lambda_s\to \mathcal G(U)\otimes\Lambda_s$ represented by a matrix over $\Lambda$. The kernel and cokernel are the cohomology groups of $\mathcal L_s$ and these have the same rank by Riemann-Roch. On the other hand if the kernel and cokernel of a matrix over $\Lambda$ are free we have rank of kernel $-$ rank of cokernel= $d-e\mid\delta- \epsilon=0\mid 0$. So $\alpha(U)$ is a square matrix. This allows us to define a map $$ \ber(\alpha):\ber(\mathcal F)\to \ber (\mathcal G). $$ But this is a (non-holomorphic!) 
section of $\ber^*(\mathcal F)\otimes \ber(\mathcal G)$, i.e., of the dual Berezinian bundle $\Ber^*$ on $\operatorname{Pic}^{g-1}$, because of the non-polynomial (rational) character of the Berezinian. This section $\ber(\alpha)$ is essential for the definition of the $\tau$-function in subsection \ref{ss:Bakerf-fullsuperH-tau}. \subsection{Bundles on the Jacobian; theta functions.} We continue with $X$ being a generic SKP curve. Super theta functions will be defined as holomorphic sections of certain ample bundles on $J=\text{Jac}(X)$, when such bundles exist. (As usual, the existence of ample invertible sheaves is necessary and sufficient for projective embeddability.) Given one such bundle, all others with the same Chern class $c_1$ are obtained by tensor product with bundles having trivial Chern class, so we begin by determining these, that is, computing $\text{Pic}^0(J)$. As we briefly discussed in subsection \ref{ss:modinvertible sheaves}, $J$ is the quotient of the affine super space $V={\mathbb A}^{g|g-1} = \operatorname{Spec} \Lambda[z_1,\ldots,z_g,\eta_1,\ldots,\eta_{g-1}]$ by the lattice $L$ generated by the columns of the transposed period matrix: \begin{equation}\label{eq:latticegen} \begin{aligned} \lambda_i: & \quad z_j \rightarrow z_j + \delta_{ij}, &\quad &\eta_\alpha \rightarrow\eta_\alpha,\\ \lambda_{i+g}: &\quad z_j \rightarrow z_j+(Z_e)_{ij}, &\quad &\eta_\alpha \rightarrow \eta_\alpha + (Z_o)_{i \alpha}, \quad i=1,2,\ldots,g. \end{aligned} \end{equation} We will often omit the parity labels $e,o$ on $Z$, since the index structure makes clear which is meant. Any line bundle $\mathcal L$ on such a supertorus $J$ lifts to a trivial bundle on the covering space $V$. A section of $\mathcal L$ lifts to a function on which the translations $\lambda_i$ act by multiplication by certain invertible holomorphic functions, the {\it multipliers} of $\mathcal L$. We can factor the quotient map $V \rightarrow J$ through the cylinder $V/L_0$, where $L_0$ is the subgroup of $L$ generated by the first $g$ $\lambda_i$ only. Since holomorphic line bundles on a cylinder are trivial, this means that the multipliers for $L_0$ can always be taken as unity. We have $\text{Pic}^0(J) \cong H^1(J,\mathcal{O}_{\text{ev}})/H^1(J,\mathbb{Z})$. It is very convenient to compute the numerator as the group cohomology $H^1(L,\mathcal{O}_{\text{ev}})$ of $L$ acting on the even functions on the covering space $V$, in part because the cocycles for this complex are precisely (the logarithms of) the multipliers. For the basics of group cohomology, see for example \cite{Si:ArithEllipticCurves,Mum:AbelVar}. In particular, factoring out the subgroup $L_0$ reduces our problem to computing $H^1(L/L_0,\mathcal{O}^{L_0})$, the cohomology of the quotient group acting on the $L_0$-invariant functions. A 1-cochain for this complex assigns to each generator of $L/L_0$ an even function (log of the multiplier) invariant under each shift $z_j \rightarrow z_j + 1$, \begin{equation*} \lambda_{i+g} \mapsto F^i(z,\eta) = \sum_{\vec{n}} F_{\vec{n}}^i(\eta) e^{2 \pi i \vec{n} \cdot \vec{z}}.
\end{equation*} It is a cocycle if the multiplier induced for every sum $\lambda_{i+g} + \lambda_{j+g}$ is independent of the order of addition, which amounts to the symmetry of the matrix $\Delta_i F^j$ giving the change in $F^j$ under the action of $\lambda_{i+g}$: \begin{multline*} F^i(z_k + Z_{jk},\eta_\alpha + Z_{j \alpha}) - F^i(z_k,\eta_\alpha) =\\ F^j(z_k + Z_{ik},\eta_\alpha + Z_{i \alpha}) - F^j(z_k,\eta_\alpha), \end{multline*} or, in terms of Fourier coefficients, \begin{equation*} F_{\vec{n}}^i(\eta_\alpha + Z_{j \alpha}) e^{2\pi i\sum_k n_k Z_{jk}} - F_{\vec{n}}^i(\eta) = F_{\vec{n}}^j(\eta_\alpha + Z_{i \alpha}) e^{2\pi i\sum_k n_k Z_{ik}} - F_{\vec{n}}^j(\eta). \end{equation*} One does not have to allow for an integer ambiguity in the logarithms of the multipliers in these equations, precisely because we are considering bundles with vanishing Chern class. The coboundaries are of the form, \begin{equation*} \lambda_{i+g} \mapsto A(z,\eta) - A(z_k + Z_{ik}, \eta_\alpha + Z_{i \alpha}) \end{equation*} for a single function $A$, that is, those cocycles for which \begin{equation*} F_{\vec{n}}^i(\eta) = A_{\vec{n}}(\eta) - A_{\vec{n}}(\eta_\alpha + Z_{i \alpha}) e^{2\pi i\sum_k n_k Z_{ik}}. \end{equation*} This equation has the form, \begin{equation*} F_{\vec{n}}^i(\eta) = A_{\vec{n}}(\eta) ( 1 - e^{2\pi i\sum_k n_k Z_{ik}}) + O(Z_o). \end{equation*} The point now is that, by the linear independence of the columns of $Z_e^{\text{red}}$, for any $\vec{n} \ne \vec{0}$ there is some choice of $i$ for which the reduced part of the exponential in the last equation differs from unity. This ensures that, for this $i$, the equation is solvable for $A_{\vec{n}}$, first to zeroth order in $Z_o$ and then to all orders by iteration. Adding this coboundary to the cocycle produces one for which $F_{\vec{n}}^i = 0$, whereupon the cocycle conditions imply $F_{\vec{n}}^j = 0$ for all $\vec{n} \ne \vec{0}$ and all $j$ as well. Thus the only potentially nontrivial cocycles are independent of $z_i$. In the simplest case, when the odd period matrix $Z_o=0$, all such cocycles are indeed nontrivial, and we have an analog of the classical fact that bundles of trivial Chern class are specified by $g$ constant multipliers. Here a cocycle is specified by giving $g$ even elements $F^i_{\vec{0}}(\eta)$ in the exterior algebra $\Lambda[\eta_{\alpha}]$ (elements of $H^0(J,\mathcal O_J)$), leading to $\dim \text{Pic}^0(J) = g\cdot 2^{g-2} \mid g\cdot 2^{g-2}$ (the number of $\eta_{\alpha}$ is $g-1$). In general, when $Z_o \neq 0$, not all cochains specified in this way will be cocycles, and some cocycles will be trivial: $\text{Pic}^0(J)$ will be smaller, and in general not a supermanifold. As to the existence of ample line bundles, let us examine in the super case the classical arguments leading to the necessary and sufficient Riemann conditions \cite{GrHa:PrincAlgGeom,LaBirk:ComplAbVar}. The Chern class of a very ample bundle is represented in de Rham cohomology by a $(1,1)$ form obtained as the pullback of the Chern class of the hyperplane bundle via a projective embedding. We can introduce real even coordinates $x_i,i=1,\ldots,2g$ for $J$ dual to the basis $\lambda_i$ of the lattice $L$, meaning that $x_j \rightarrow x_j + \delta_{ij}$ under the action of $\lambda_i$. The associated real odd coordinates $\xi_\alpha,\alpha=1,\ldots,2g-2$ can be taken to be globally defined because every real supermanifold is split.
The relation between the real and complex coordinates can be taken to be \begin{eqnarray*} z_j & = & x_j + \sum_{i=1}^g Z_{ij} x_{i+g}, \; j=1,\ldots,g, \\ \eta_\alpha & = & \xi_\alpha + i \xi_{\alpha + g-1} + \sum_{i=1}^g Z_{i \alpha} x_{i+g}, \; \alpha = 1,\ldots,g-1. \end{eqnarray*} The de Rham cohomology is isomorphic to that of the reduced torus and can be represented by translation-invariant forms in the $dx_i$. The Chern class represented by a form $\sum_{i=1}^g \delta_i\, dx_i\, dx_{i+g}$ is called a polarization of type $\Delta = \text{diag}(\delta_1,\ldots,\delta_g)$ with elementary divisors the positive integers $\delta_i$. We consider principal polarizations $\delta_i=1$ only, because nontrivial nonprincipal polarizations generically do not exist, even on the reduced torus \cite{Lef:ThmCorrAlgCurv}. Furthermore, a nonprincipal polarization is always obtained by pullback of a principal one from another supertorus whose lattice $L'$ contains $L$ as a sublattice of finite index \cite{GrHa:PrincAlgGeom}. Reexpressing the Chern form in complex coordinates, the standard calculations lead to the usual Riemann condition $Z_e = Z_e^t$ to obtain a $(1,1)$ form. Together with the positivity of the imaginary part of the reduced matrix, the symmetry of $Z_e$ (in some basis) is necessary and sufficient for the existence of a $(1,1)$ form with constant coefficients representing the Chern class. This can be viewed as the cocycle condition, symmetry of $\Delta_iF^j$, for the usual multipliers of a theta bundle, $F^j = -2\pi i z_j$. The usual argument that the $(1,1)$ form representing the Chern class can always be taken to have constant coefficients depends on Hodge theory, particularly the Hodge decomposition of cohomology, for a K\"ahler manifold such as a torus. This does not hold in general for a supertorus with $Z_o \neq 0$. For example, $H_{\text{dR}}^1(J)$ is generated by the $2g$ 1-forms $dx_i$, whereas $H^{1,0}(J)$ contains the $g \mid g-1$ nontrivial forms $dz_i,\,d\eta_\alpha$, with certain nilpotent multiples of the latter being trivial. Indeed, since by (\ref{eq:latticegen}) $\eta_\alpha$ is defined modulo entries of column $\alpha$ of $Z_o$, $\epsilon\eta_\alpha$ is a global function and $\epsilon d \eta_\alpha$ is exact when $\epsilon \in \Lambda$ annihilates these entries. Thus, $H^{1,0}(J)$ cannot be a direct summand in $H^1_{\text{dR}}(J)$. Correspondingly, some $\eta$-dependent multipliers $F^j = -2\pi i z_j + \cdots$ may satisfy the cocycle condition and give ample line bundles. We do not know a simple necessary condition for a Jacobian to admit such polarizations. When $Z_e$ is symmetric, we can construct theta functions explicitly. Consider first the trivial case with $Z_o=0$ as well. Then the standard Riemann theta function $\Theta(z;Z_e)$ gives a super theta function on $\operatorname{Jac}(X)$, where $\Theta(z;Z_e)$ is defined by Taylor expansion in the nilpotent part of $Z_e$ as usual. It has of course the usual multipliers, \begin{equation} \label{eq:thetafactors} \Theta(z_j + \delta_{ij};Z_e) = \Theta(z_j;Z_e),\;\;\; \Theta(z_j + Z_{ij};Z_e) = e^{-\pi i (2z_i + Z_{ii})} \Theta(z_j;Z_e). \end{equation} Multiplication of $\Theta(z;Z_e)$ by any monomial in the odd coordinates $\eta_{\alpha}$ gives another, even or odd, theta function having the same multipliers, whereas translation of the argument $z$ by polynomials in the $\eta_{\alpha}$ leads to the multipliers for another bundle with the same Chern class. 
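For the reader's convenience we recall the classical series behind these statements (see e.g.\ \cite{GrHa:PrincAlgGeom}): with $Z_e$ symmetric and the imaginary part of its reduced matrix positive definite,
\begin{equation*}
\Theta(z;Z_e)\ =\ \sum_{n\in\mathbb Z^g} \exp\left(\pi i\, n^tZ_e n + 2\pi i\, n^t z\right),
\end{equation*}
understood, as above, via Taylor expansion in the nilpotent part of $Z_e$. The first relation in \eqref{eq:thetafactors} is then immediate from the integrality of the Fourier exponents, and the second follows by shifting the summation index $n \rightarrow n - \delta_i$ (the $i$th standard basis vector) and completing the square.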
In the general case with $Z_o \neq 0$, theta functions with the standard multipliers can be constructed as follows. Such functions must obey \begin{align*} H(z_j + \delta_{ij},\eta_{\alpha};Z) &= H(z_j,\eta_{\alpha};Z),\\ H(z_j + Z_{ij},\eta_{\alpha} + Z_{i \alpha};Z) &= e^{-\pi i (2z_i + Z_{ii})} H(z_j,\eta_{\alpha};Z). \end{align*} The function $\Theta(z;Z_e)$ is a trivial example independent of $\eta$; to obtain others one checks that when $H$ satisfies these relations then so does $$H_\alpha = \left( \eta_{\alpha} + \frac{1}{2\pi i} \sum_k Z_{k \alpha} \frac{\partial}{\partial z_k} \right) H.$$ Applying this operator repeatedly one constructs super theta functions $\Theta_{\alpha \cdots \gamma}$ reducing to $\eta_\alpha \cdots \eta_\gamma \Theta(z;Z_e)$ when $Z_o=0$. ``Translated" theta functions which are sections of other bundles having the same Chern class can be obtained by literally translating the arguments of these only in the simplest cases. Constant shifts in the multiplier exponents $F^j$ can be achieved by constant shifts of the arguments $z_j$. Shifts linear in the $\eta_\alpha$ are obtained by $z_j \rightarrow z_j + \eta_\alpha \Gamma_{\alpha j}$, which is a change in the chosen basis of holomorphic differentials on $X$, see the discussion after \eqref{eq:normperiodmatrix}. The resulting theta functions have the new period matrix $Z_e + Z_o\Gamma$. More generally, translated theta functions can be obtained by the usual method of determining their Fourier coefficients from the recursion relations following from the desired multipliers. We do not know an explicit expression for them in terms of conventional theta functions. It is easy to see that any meromorphic function $F$ on the Jacobian can be rationally expressed in terms of the theta functions we have defined. Expand $F(z,\eta) = \sum_{IJ} \beta_I \eta_J F_{IJ}(z)$ in the generators of $\Lambda[\eta_{\alpha}]$, with multi-indices $I,J$. Then the zeroth-order term $F_{00}$ is a meromorphic function on the reduced Jacobian, hence a rational expression in ordinary theta functions. Using $Z_e$ as the period matrix argument of these theta functions gives a meromorphic function on the Jacobian itself, whose reduction agrees with $F_{00}$. Subtract this expression from $F$ to get a meromorphic function on the Jacobian whose zeroth-order term vanishes, and continue inductively, first in $J$, then in $I$. For example, $F_{0\alpha}$ is equal, to lowest order in the $\beta$'s, to a rational expression in theta functions of which one numerator factor is a $\Theta_\alpha$. Subtracting this expression removes the corresponding term in $F$ while only modifying other terms of higher order in $\beta$'s. \section{Super Grassmannian, $\tau$-function and Baker function.} \subsection{Super Grassmannians.} In this subsection we will introduce an infinite super Grassmannian and related constructions. The infinite Grassmannian of Sato (\cite{Sa:KPinfDymGr}) or of Segal-Wilson (\cite {SeWi:LpGrpKdV}) consists (essentially) of ``half infinite dimensional'' vector subspaces $W$ of an infinite dimensional vector space $H$ such that the projection on a fixed subspace $H_-$ has finite dimensional kernel and cokernel. In the super category we replace this by the super Grassmannian of free ``half infinite rank'' $\Lambda$-modules of an infinite rank free $\Lambda$-module $H$ such that the kernel and cokernel of the projection on $H_-$ are a submodule respectively a quotient module of a free finite rank $\Lambda$-module. 
In \cite{Schwarz:FermStringModSpa} a similar construction can be found, but it seems that there $\Lambda=\mathbb C$ is taken as is also the case in \cite{Mu:Jac}. This is too restrictive for our purposes involving algebraic super curves over nonreduced base ring $\Lambda$. Let $\Linfinf$ be the free $\Lambda$-module $\Lambda[z,z\inv,\theta]$ with $z$ an even and $\theta$ an odd variable. Introduce the notation \begin{equation}\label{eq:basisLinfinf} e_i=z^i,\quad e_{i-\frac12}=z^i\theta,\quad i\in \mathbb Z. \end{equation} We will think of an element $h=\sum_{i=-N}^\infty h_i e_i$, $h_i\in \Lambda$ of $\Linfinf$ not only as a series in $z,\theta$ but also as an infinite column vector: $$ h=(\dots,0,\dots, h_{-1},h_{-\frac12},h_0,h_{\frac12},h_1,\dots,0,\dots)^t $$ Introduce on $\Linfinf$ an odd Hermitian product \begin{multline} \langle f(z,\theta),g(z,\theta)\rangle= \frac1{2\pi i}\oint \frac{dz}{z}d\theta\overline{f(z,\theta)}g(z,\theta)=\\ =\frac1{2\pi i}\oint (\overline{f_{\bar 0}}g_{\bar 1}+\overline{f_{\bar 1}}g_{\bar 0})\frac{dz}{z}, \end{multline} where $\overline{f(z,\theta)}$ is the extension of the complex conjugation of $\Lambda$ (see Appendix \ref{ss:realstrconj}) to $\Linfinf$ by $\overline{z}=z\inv$ and $\overline{\theta}=\theta$, and $f(z,\theta)=f_{\bar 0}+\theta f_{\bar 1}$, and similarly for $g$. Let $H$ be the completion of $\Linfinf$ with respect to the Hermitian inner product. We have a decomposition $H=H_-\oplus H_+$, where $H_-$ is the closure of the subspace spanned by $e_i$ for $i\le0$, and $H_+$ is the closure of the space spanned by $e_i$ with $i>0$, for $i\in \halfz$. The super Grassmannian $\mathcal S\text{gr}$ is the collection of all free closed $\Lambda$-modules $W\subset H$ such that the projection $\pi_-:W\to H_-$ is super Fredholm, i.e., the kernel and cokernel are a submodule respectively a quotient module of a free finite rank $\Lambda$-module. \begin{exmpl} Let $W$ be the closure of the subspace generated by $\delta +z, \theta$ and $z^i, z^i\theta$ for $i\le -1$, for $\delta$ a nilpotent even constant. Let $A\subset \Lambda$ be the ideal of annihilators of $\delta$. Then $W$ is free and the kernel of $\pi_-$ is $A (\delta +z) \subset \Lambda (\delta+z)$ and the cokernel is isomorphic to $\Lambda/\Lambda\delta$. \qed \end{exmpl} Let $I$ be the subset $\{i\in \halfz\mid i\le 0\}$. We consider matrices with coefficients in $\Lambda$ of size $\halfz\times I$: $$ \mathcal W=(W_{ij}) \quad \text{where } i\in \halfz,\ j\in I. $$ An even matrix of this type is called an {\it admissible frame} for $W\in \mathcal S\text{gr}$ if the closure of the subspace spanned by the columns of $\mathcal W$ is $W$ and if moreover in the decomposition $\mathcal W=\begin{pmatrix} W_-\\W_+\end{pmatrix}$ induced by $H=H_-\oplus H_+$ the operator $W_-:H_-\to H_-$ differs from the identity by an operator of super trace class and $W_+:H_-\to H_+$ is compact. Let $Gl(H_-)$ be the group of invertible maps $1+X:H_-\to H_-$ with $X$ super trace class. Then the super frame bundle $\mathcal S\text{fr}$, the collection of all pairs $(W,\mathcal W)$ with $\mathcal W$ an admissible frame for $W\in \mathcal S\text{gr}$, is a principal $Gl(H_-)$ bundle over the super Grassmannian. Elements of $Gl(H_-)$ have a well defined berezinian, see \cite{ Schwarz:FermStringModSpa} for some details. This allows us to define two associated line bundles $\Bersgr$ and $\Bersgrdual$ on $\mathcal S\text{gr}$. 
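For orientation we recall the formulas involved. For a finite even invertible matrix, written in block form with respect to the even/odd decomposition, one has
\begin{equation*}
\ber\begin{pmatrix} A&B\\ C&D\end{pmatrix}\ =\ \det\left(A-BD^{-1}C\right)\,\det(D)^{-1},
\end{equation*}
and for invertible operators of the form $1+X$ with $X$ of super trace class one may extend this by $\ber(1+X)=\exp\,\Str\log(1+X)$ (cf.\ \cite{Schwarz:FermStringModSpa}).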
More explicitly, an element of $\Bersgr$ is an equivalence class of triples $(W,\mathcal W, \lambda)$, with $\mathcal W$ a frame for $W$, $\lambda\in \Lambda$; here $(W,\mathcal Wg, \lambda)$ and $(W,\mathcal W, \ber(g)\lambda)$ are equivalent for $g\in Gl(H_-)$. For $\Bersgrdual$ replace $\ber(g)$ by $\berdual(g)$. For simplicity we shall write $(\mathcal W,\lambda)$ for $(W,\mathcal W, \lambda)$, as $\mathcal W$ determines $W$ uniquely. The two bundles $\Bersgr$ and $\Bersgrdual$ each have a canonical section. Let $\mathcal W$ be a frame for $W\in \mathcal S\text{gr}$ and write $\mathcal W=\begin{pmatrix}W_-\\W_+\end{pmatrix}$ as above. Then \begin{equation}\label{eq:defsigma*} \sigma(W)=(\mathcal W, \ber(W_-)),\quad \sigma^*(W)=(\mathcal W, \berdual(W_-)), \end{equation} are sections of $\Bersgrdual$ and $\Bersgr$, respectively. It is a regrettable fact of life that neither of these sections is holomorphic; indeed there are no global holomorphic sections of $\Bersgr$ or $\Bersgrdual$ at all, see \cite{Manin:GaugeFieldTheoryComplexGeom}. This is a major difference between classical geometry and super geometry. \subsection{The Chern class of $\Bersgr$ and the $gl_{\infty\mid\infty}$ cocycle.} First we summarize some facts about complex supermanifolds that are entirely analogous to similar facts for ordinary complex manifolds. Then we apply this to the super Grassmannian, following the treatment in \cite{PrSe:LpGrps} of the classical case. Let $M$ be a complex supermanifold. The Chern class of an invertible sheaf $\mathcal L$ on $M$ is an element $c_1(\mathcal L)\in H^2(M,\mathbb Z)$. By the sheaf inclusion $\mathbb Z\to \Lambda$ and the de Rham theorem $H^2(M,\Lambda)\simeq H^2_{\text{dR}}(M)$ we can represent $c_1(\mathcal L)$ by a closed two form on $M$. On the other hand, if $\nabla:\mathcal L\to \mathcal L\otimes \mathcal A^1$, with $\mathcal A^1$ the sheaf of smooth 1-forms, is a connection compatible with the complex structure, the curvature $F$ of $\nabla$ is also a two form. By the usual proof (see e.g., \cite{GrHa:PrincAlgGeom}) we find that $c_1(\mathcal L) $ and $F$ are equal, up to a factor of $i/2\pi$. We can locally calculate the curvature on an invertible sheaf $\mathcal L$ by introducing a Hermitian metric $\langle\,, \rangle$ on it: if $s,t\in \mathcal L(U)$ then $\langle s, t\rangle(m)$ is a smooth function in $m\in U$ taking values in $\Lambda$, linear in $t$ and satisfying $\langle s, t\rangle(m)=\overline{\langle t, s\rangle(m)}$. Choose a local generator $e$ of $\mathcal L$ and let $h=\langle e, e\rangle$. The curvature is then $F=\bar\partial \partial \log h$, with $\partial=\sum dz_i\frac{\partial}{\partial z_i}+\sum d\theta_\alpha \frac{\partial}{\partial \theta_\alpha}$ and $\bar\partial$ defined by a similar formula. Now consider the invertible sheaf $\Bersgr$ on $\mathcal S\text{gr}$. If $s=(\mathcal W, \lambda)$ is a section, the square length is defined to be $\langle s,s\rangle= \bar \lambda \lambda\ber (\mathcal W^H \mathcal W)$, where superscript ${}^H$ indicates conjugate transpose. Of course, this metric is not defined everywhere on $\mathcal S\text{gr}$ because of the rational character of $\ber$, but we are interested in a neighborhood of the point $W_0$ with standard frame $\mathcal W_0=\begin{pmatrix} 1_{H_-}\\ 0\end{pmatrix}$ where there is no problem. The tangent space at $W_0$ can be identified with the space of maps $H_-\to H_+$, or, more concretely, with matrices with the columns indexed by $I=\{i\in \halfz\mid i\le 0\}$ and with rows indexed by the complement of $I$.
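Concretely, a point $W$ for which $\pi_-:W\to H_-$ is invertible has a distinguished frame $\begin{pmatrix}1_{H_-}\\ w_+\end{pmatrix}$ with $w_+:H_-\to H_+$, obtained from any admissible frame $\mathcal W$ as $\mathcal W W_-^{-1}$; the curve of points with frames $\begin{pmatrix}1_{H_-}\\ t\,x\end{pmatrix}$ then passes through $W_0$ at $t=0$ with velocity $x$, which is the identification we use.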
Let $x,y$ be two tangent vectors at $W_0$. Then the curvature at $W_0$ is calculated to be \begin{equation}\label{eq:curvature} F(x,y)=\bar\partial\partial \log h (x,y)= \Str(x^Hy-y^Hx), \end{equation} where we take as local generator $e=\sigma$, the section defined by \eqref{eq:defsigma*}, so that $h=\langle \sigma, \sigma\rangle$. We can map the tangent space at $W_0$ to the Lie super algebra $gl_{\infty\mid \infty}(\Lambda)$ via $x\mapsto \begin{pmatrix} 0&-x^H\\x&0\end{pmatrix}$. Here $gl_{\infty\mid \infty}(\Lambda)$ is the Lie super algebra corresponding to the Lie super group $Gl_{\infty\mid \infty}(\Lambda)$ of infinite even invertible matrices $g$ (indexed by $\halfz$) with block decomposition $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with $b,c$ compact and $a,d$ super Fredholm. We see that (\ref{eq:curvature}) is the pullback under this map of the cocycle on $gl_{\infty\mid \infty}(\Lambda)$ (see also \cite{KavdL:SuperBoson}) given by \begin{equation}\label{eq:cocycle} \begin{aligned} c:gl_{\infty\mid \infty}(\Lambda)\times gl_{\infty\mid \infty}(\Lambda)&\to \quad\quad\quad\Lambda\\ (X,Y)\quad \quad\quad&\mapsto\quad\frac14 \Str(J[J,X][J,Y]), \end{aligned} \end{equation} where $J=\begin{pmatrix} 1_{H_-}&0\\0&-1_{H_+}\end{pmatrix}$. In terms of the block decomposition of $X,Y$ we have $$ c(X,Y)= \Str(c_Xb_Y-b_Xc_Y). $$ The natural action of $Gl_{\infty\mid \infty}(\Lambda)$ on $\mathcal S\text{gr}$ lifts to a projective action on $\Bersgr$; the cocycle $c$ describes infinitesimally the obstruction for this projective action to be an honest action. Indeed, if $g_1=\exp(f_1),g_2=\exp(f_2)$ and $g_3=g_1g_2$ are all in the open set of $Gl_{\infty\mid \infty}(\Lambda)$ where the ${}_{--}$ blocks $a_i$ are invertible, the action on a point of $\Bersgr$ is given by \begin{equation} \label{eq:lift} g_i\circ(\mathcal W,\lambda)=(g_i\mathcal Wa_i\inv,\lambda). \end{equation} (One checks as in \cite{SeWi:LpGrpKdV} that if $\mathcal W$ is an admissible basis then so is $g\mathcal Wg_{--}\inv$.) Then we have $$ g_1\circ g_2 \circ(\mathcal W,\lambda)=\exp[c(f_1,f_2)]g_3\circ(\mathcal W,\lambda). $$ We can also introduce the {\it projective multiplier} $C(g_1,g_2)$ for elements $g_1$ and $g_2$ that commute in $Gl_{\infty\mid\infty}(\Lambda)$: \begin{equation}\label{eq:projmult} g_1\circ g_2\circ g_1\inv\circ g_2\inv (\mathcal W,\lambda)=C(g_1,g_2)(\mathcal W,\lambda), \end{equation} where $C(g_1,g_2)=\exp[S(f_1,f_2)]$ if $g_i=\exp(f_i)$ and \begin{equation}\label{eq:logprojmult} S(f_1,f_2)=\Str([f_1,f_2]). \end{equation} We will in subsection \ref{ss:ChernclassBeronPic} use the projective multiplier to show that the Chern class of the Berezinian bundle on $\Pic^0(X)$ is trivial. \subsection{The Jacobian super Heisenberg algebra.} In the theory of the KP hierarchy an important role is played by a certain Abelian subalgebra of the infinite matrix algebra and its universal central extension, loosely referred to as the (principal) Heisenberg subalgebra. In this subsection we introduce one of the possible analogs of this algebra in the super case. Let the {\it Jacobian super Heisenberg algebra} be the $\Lambda$-algebra $\mathcal J\text{Heis}=\Lambda[z,z\inv,\theta]$. Of course, this is as a $\Lambda$-module the same as $\Linfinf$ but now we allow multiplication of elements. When convenient we will identify the two; in particular we will use the basis $\{e_i\}$ of (\ref{eq:basisLinfinf}) also for $\mathcal J\text{Heis}$.
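For concreteness, the multiplication in this basis reads (directly from $e_i=z^i$, $e_{i-\frac12}=z^i\theta$ and $\theta^2=0$)
\begin{equation*}
e_ie_j=e_{i+j},\qquad e_ie_{j-\frac12}=e_{i+j-\frac12},\qquad e_{i-\frac12}e_{j-\frac12}=0,\qquad i,j\in\mathbb Z.
\end{equation*}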
We think of elements of $\mathcal J\text{Heis}$ as infinite matrices in $gl_{\infty\mid\infty}(\Lambda)$: if $E_{ij}$ is the elementary matrix with all entries zero except for the $ij$th entry which is 1, then $$ e_i=\sum_{n\in\mathbb Z} E_{n+i,n}+E_{n+i-\frac12,n-\frac12},\quad e_{i-\frac12}= \sum_{n\in\mathbb Z} E_{n+i-\frac12,n}. $$ We have a decomposition $\mathcal J\text{Heis}=\mathcal J\text{Heis}_-\oplus\mathcal J\text{Heis}_+$ in subalgebras $\mathcal J\text{Heis}_-=z\inv\Lambda[z\inv,\theta]$ and $\mathcal J\text{Heis}_+=\Lambda[z,\theta]$. Elements of $\mathcal J\text{Heis}_+$ correspond to lower triangular matrices and elements of $\mathcal J\text{Heis}_-$ to upper triangular ones. By exponentiation we obtain from $\mathcal J\text{Heis}_-$ and $\mathcal J\text{Heis}_+$ two subgroups $G_-$ and $G_+$ of $Gl_{\infty\mid\infty}(\Lambda)$, generated by $$ g_\pm(t)=\exp(\sum_{i\in\pm I} t_i e_i), $$ where $t_i\in \Lambda$ is homogeneous of the same parity as $e_i$ (and $t_i$ is zero for almost all $i$, say). For an element $g=\begin{pmatrix} a&b\\c&d\end{pmatrix}$ of $G_+$ the block $b$ vanishes, whereas if $g\in G_-$ the block $c=0$. In either case the diagonal block $a$ is invertible and we can lift the action of either $G_-$ or $G_+$ to a (potentially projective) action on $\Bersgr$ and $\Bersgrdual$, via (\ref{eq:lift}). Since the cocycle (\ref{eq:cocycle}) is zero when restricted to both $\mathcal J\text{Heis}_-$ and $\mathcal J\text{Heis}_+$ we get an honest action of the Abelian groups $G_\pm$ on $\Bersgr$ and $\Bersgrdual$, just as in the classical case. In contrast with the classical case, however, as was pointed out in \cite{Schwarz:FermStringModSpa}, the actions of $G_-$ and $G_+$ on the line bundles $\Bersgr$ and $\Bersgrdual$ mutually commute. This follows from the following Lemma. \begin{lem}\label{lem:comactionJheis} Let $g_\pm\in G_\pm$ and write $a_\pm=\exp(f_\pm)$, with $f_\pm \in gl(H_-)$. Then $$ \Str_{H_-}([f_-,f_+])=0, $$ so that the actions of $g_-$ and $g_+$ on $\Bersgr$ and $\Bersgrdual$ commute. \end{lem} \begin{proof} The elements $f_\pm$ act on $H_-$ by multiplication by an element of $\mathcal J\text{Heis}_\pm$, followed by projection on $H_-$ if necessary. So write $f_\pm=\pi_{H_-}\circ \left(\sum_{i>0} c^\pm_iz^{\pm i} +\gamma^\pm_{i}z^{\pm i}\theta\right)$. To find the supertrace we need to calculate the projection on the rank $1\mid 0$ and $0\mid 1$ submodules of $H_-$ generated by $z^{-i}$ and $z^{-i}\theta$: \begin{align*} f_+ f_- z^{-k}|_{\Lambda z^{-k}}&= f_+(\sum_{i>0}c_i^-z^{-i-k})|_{\Lambda z^{-k}}=(\sum_{i>0}c^+_ic^-_i)z^{-k},\\ f_+ f_- z^{-k}\theta|_{\Lambda z^{-k}\theta}&= (\sum_{i>0}c^+_ic^-_i)z^{-k}\theta,\\ f_- f_+ z^{-k}|_{\Lambda z^{-k}}&= f_-(\sum_{i=1}^kc_i^+z^{i-k})|_{\Lambda z^{-k}}=(\sum_{i=1}^k c^+_ic^-_i)z^{-k},\\ f_- f_+ z^{-k}\theta|_{\Lambda z^{-k}\theta}&= f_-(\sum_{i=1}^kc_i^+z^{i-k}\theta)|_{\Lambda z^{-k}\theta}=(\sum_{i=1}^k c^+_ic^-_i)z^{-k}\theta. \end{align*} Since the super trace is the difference of the traces of the restrictions to the even and odd submodules we see that $\Str([f_+,f_-])=0$ so that, by (\ref {eq:projmult},\ref{eq:logprojmult}), the actions of $G_\pm$ on $\Bersgr$ commute.
\end{proof} \subsection{Baker functions, the full super Heisenberg algebra, and $\tau$-functions.}\label{ss:Bakerf-fullsuperH-tau} We define $W\in \mathcal S\text{gr}$ to be {\it in the big cell} if it has an admissible frame $\mathcal W^{(0)}$ of the form \[\mathcal{W}^{\,(0)}=\begin{pmatrix}\ddots &\vdots&\vdots&\vdots&\vdots\\ \dots &1&0&0&0\\ \dots &0&1&0&0\\ \dots &0&0&1&0\\ \dots &0&0&0&1\\ ***&*&*&*&*\\ \end{pmatrix}, \] i.e. $(\mathcal W^{\,(0)})_-$ is the identity matrix. Note that the canonical sections $\sigma$ and $\sigma^*$ do not vanish, nor blow up, at a point in the big cell. If $\mathcal{W}$ is any frame of a point $W$ in the big cell we can calculate the standard frame $\mathcal{W}^{\,(0)}$ through quotients of Berezinians of minors of $\mathcal{W}$. Indeed, if we put $A=\mathcal W_-$ then the maximal minor $A$ of $\mathcal W$ is invertible and we have \begin{equation}\label{eq:connw0w} \mathcal{W}^{\,(0)}A=\mathcal{W}. \end{equation} Write $\mathcal{W}^{\,(0)}=\sum w_{ij}^{(0)}E_{ij}$. Then we can solve \thetag{\ref{eq:connw0w}} by Cramer's rule, \thetag{\ref{eq:supercramer}}, to find for $i>0,j\le 0$: \begin{equation*} w_{ij}^{(0)}=\begin{cases} \ber\,(A_j(r_i))/ \ber\,(A)&\text{if $j\in \mathbb Z$},\\ \berdual \,(A_j(r_i))/ \berdual \,(A)&\text{if $j\in \mathbb Z+\frac 12$}. \end{cases} \end{equation*} Here $A_j(r_i)$ is the matrix obtained from $A$ by replacing the $j$th row by $r_i$, the $i$th row of $\mathcal W$. In particular the even and odd ``Baker vectors'' of $W$, i.e. the zeroth and $-\frac12$th column of $\mathcal W^{\,(0)}$, are given by \begin{equation}\label{eq:bakervector} \begin{split} w_{\bar 0}&=e_0+\sum_{\frac{i>0} {i\in \frac 12 \mathbb Z}}\frac{\ber\,( A_0(r_i))}{\ber\,(A)}e_i,\\ w_{\bar 1}&=e_{-\frac 12}+\sum_{\frac{i>0} { i\in \frac 12 \mathbb Z}}\frac{\berdual \,( A_{-\frac 12}(r_i))}{\berdual \,(A)}e_i \end{split} \end{equation} The corresponding ``Baker functions'' are obtained by using $e_i=z^{i}$, $e_{i-\frac12}=z^{i}\theta$. Then \thetag{ \ref{eq:bakervector}} reads \begin{equation}\label{eq:bakerfunction} \begin{split} w_{\bar 0}(z,\theta)&=1 + \sum_{i>0} z^{i}\frac { \ber\,( A_0(r_i))+ \ber\,( A_0(r_{i-\frac 12}))\theta } {\ber\,(A)},\\ w_{\bar 1}(z,\theta)&=\theta + \sum_{i>0} z^{i}\frac{ \berdual \,(A_{-\frac12}(r_i))+ \berdual \,(A_{-\frac12}(r_{i-\frac12}))\theta} {\berdual \,(A)}. \end{split} \end{equation} Here and henceforth (unless otherwise noted) the summations run over (subsets of) the integers. The full super Heisenberg algebra $\mathcal S\text{Heis}$ is the extension $\mathcal J\text{Heis}[\frac{d}{d\theta}]=\Lambda[z,z\inv,\theta][\frac{d}{d\theta}]$ . This is, just as the Jacobian super Heisenberg algebra, a possible analog of the principal Heisenberg of the infinite matrix algebra used in the standard KP hierarchy, see \cite{KavdL:SuperBoson}. $\mathcal S\text{Heis}$ is non--Abelian and the restriction of the cocycle \thetag{\ref{eq:cocycle}} to it is nontrivial, in contrast to the subalgebra $\mathcal J\text{Heis}$. $\mathcal S\text{Heis}$ acts in the obvious way on $\Linfinf$ and we can represent it by infinite matrices from $gl_{\infty\mid\infty}(\Lambda)$. 
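To see this non--Abelian nature explicitly, note that on $\Linfinf$ the odd elements $z^{-n}\theta$ and $z^{-m}\frac{d}{d\theta}$ satisfy
\begin{equation*}
\left(z^{-n}\theta\right)\left(z^{-m}\frac{d}{d\theta}\right)+\left(z^{-m}\frac{d}{d\theta}\right)\left(z^{-n}\theta\right)\ =\ z^{-(n+m)},
\end{equation*}
as one checks on $h=h_{\bar 0}+\theta h_{\bar 1}$; in particular the super bracket of two odd generators is a nonzero even element of $\mathcal J\text{Heis}$.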
Introduce a basis for $\mathcal S\text{Heis}$ by \begin{alignat*}{2} \lambda(n)&=z^{-n}(1-\theta\frac{d}{d\theta})=\sum_{k\in \mathbb Z}E_{k,k+n},&\quad f(n)&=z^{-n}\frac{d}{d\theta}=\sum_{k\in \mathbb Z}E_{k,k+n-\frac 12},\\ \mu(n)&=z^{-n}\theta\frac{d}{d\theta}=\sum_{k\in \mathbb Z}E_{k-\frac12,k-\frac12+n},&\quad e(n)&=z^{-n}\theta=\sum_{k\in \mathbb Z}E_{k-\frac12,k+n}. \end{alignat*} We can rewrite the Baker functions as quotients of Berezinians, using $\mathcal S\text{Heis}$. To this end define the following even invertible matrices (over the ring $\Lambda[u,\phi,\frac{\partial} {\partial\phi}]$): \begin{align*} Q_{\bar 0}(u,\phi)&=1+\sum_{n=1}^\infty {u^n}[\lambda(n)+f(n)\phi],\\ Q_{\bar 1}(u,\phi)&=1+\sum_{n=1}^\infty {u^n}[{\mu(n)+e(n)\frac{\partial} {\partial\phi}}], \end{align*} where $u$, resp.\ $\phi$, is an even, resp.\ odd, variable. We can let these matrices act on $H$ and obtain in this way infinite vectors over the ring $\Lambda[u,\phi,\frac{\partial}{\partial\phi}]$. Also we can let these matrices act on an admissible frame and obtain a matrix over $\Lambda[u,\phi,\frac{\partial}{\partial\phi}]$. \begin{lem} Let $w_{\bar 0}(u,\phi)$ and $w_{\bar 1}(u,\phi)$ be the even and odd Baker functions of a point $W$ in the big cell. For any frame $\mathcal W$ of $W$ we have: \begin{equation*} w_{\bar 0}(u,\phi)=\frac{\ber\,([Q_{\bar 0}(u,\phi)\mathcal W]_-)} {\ber\,(A)},\quad w_{\bar 1}(u,\phi)=\frac{\berdual \,([Q_{\bar 1}(u,\phi)\mathcal W]_-)\phi} {\berdual \,(A)}, \end{equation*} with $A=\mathcal W_-$. \end{lem} \begin{proof} Let $r_i$, $r_{i,\bar 0}$ and $r_{i,\bar 1}$ be, respectively, the $i$th rows of $\mathcal W$, $Q_{\bar 0}(u,\phi)\mathcal W$ and $Q_{\bar 1}(u,\phi)\mathcal W$. Then one calculates that for $i\in \mathbb Z$ we have $r_{i-\frac 12, \bar 0} = r_{i-\frac 12}$ and $r_{i, \bar 1}= r_{i}$, and: \begin{equation}\label{eq:tildewinw} \begin{aligned} r_{i,\bar 0} &=r_i+\sum_{k\ge 1}{u^{k}}({r_{i+k}+r_{i+k-\frac12}\phi}), & & \\ &=r_i+u( r_{i+1,\bar 0}+ r_{i+\frac 12,\bar 0}\phi),\\ r_{i-\frac12,\bar 1} &=r_{i-\frac12}+\sum_{k\ge 1} {u^{k}}(r_{i+k-\frac12} +r_{i+k}\frac{\partial}{\partial \phi}), & &\quad \\ &=r_{i-\frac12}+u ( r_{i+1-\frac 12,\bar 1}+ r_{i+1,\bar 1}\frac{\partial}{\partial \phi}). \end{aligned} \end{equation} Let $X$ be an even matrix. Because of the multiplicative property of Berezinians we can add multiples of a row to another row of $X$ without changing $\ber\,(X)$ and $\berdual(X)$. Using such row operations and \thetag{\ref{eq:tildewinw}} we see that \begin{align*} \ber\,([Q_{\bar 0}(u,\phi)\mathcal W]_-) &=\ber\,(A_0(r_{0,\bar 0})),\\ \berdual \,([Q_{\bar 1}(u,\phi)\mathcal W]_-) &=\berdual\,(A_{-\frac 12}( r_{-\frac 12,\bar 1})). \end{align*} Now $\ber$ is linear in even rows, and $\berdual $ in odd rows, so by \thetag{\ref{eq:tildewinw}} we find \begin{multline*} \ber\,([Q_{\bar 0}(u,\phi)\mathcal W]_-)=\ber\,(A)+ \\\sum_{i>0 }u^{i}[\ber\,(A_0(r_i))+\ber\,(A_0(r_{i-\frac12}))\phi], \end{multline*} and \begin{multline*} \berdual \,([Q_{\bar1}(u,\phi)\mathcal W]_-) =\berdual \,(A)+\\ +\sum_{i>0}\,u^{i}[\berdual\,(A_{-\frac12}(r_{i-\frac12}))+ \berdual\,(A_{-\frac12}(r_i))\frac{\partial} {\partial\phi}] . \end{multline*} Comparing with \thetag{\ref{eq:bakerfunction}} proves the lemma.
\end{proof} We now consider the flow on $\mathcal S\text{gr}$ generated by the negative part of the Jacobian Heisenberg algebra: define \begin{equation}\label{expfactor} \gamma(t)=\exp(\sum_{{i>0}} t_i z^{-i}+ t_{i-\frac12}z^{-i}\theta), \quad t_i \in \Lambda_{\text{ev}},t_{i-\frac12}\in\Lambda_{\text{odd}} \end{equation} and put for $W\in \mathcal S\text{gr}$: $$W(t)=\gamma(t)\inv W.$$ The $\tau$-functions associated to a point $W$ in the big cell are then functions on $\mathcal J\text{Heis}_{-,\text{ev}}$: \begin{equation}\label{eq:tau} \tau_W(t),\tau^*_W(t):\mathcal J\text{Heis}_{-,\text{ev}}\to \Lambda\cup \{\infty\} \end{equation} given by \begin{align}\label{eq:taudef} \tau_W(t) &=\frac{\sigma(\gamma(t)\inv W)}{\gamma(t)\inv \sigma (W)}= \frac{\ber( [\gamma(t)\inv\circ \mathcal W]_-)}{\ber([\mathcal W]_-)},\\ \tau_W^*(t) &=\frac{\sigma^*(\gamma(t)\inv W)}{\gamma(t)\inv \sigma^* (W)}= \frac{\berdual( [\gamma(t)\inv\circ \mathcal W]_-)}{\berdual([\mathcal W]_-)}. \end{align} Here $\sigma$ and $\sigma^*$ are the sections of $\Bersgrdual$ and $\Bersgr$ defined in \thetag{\ref{eq:defsigma*}} and $\gamma\inv\in Gl_{\infty\mid\infty}(\Lambda)$ acts via \thetag{\ref{eq:lift}} on $\Bersgr$ and $\Bersgrdual$. The Baker function of $W$ becomes now a function on $\mathcal J\text{Heis}_{-,\text{ev}}$, and we have an expression in terms of a quotient of (shifted) $\tau$-functions: \begin{equation} \label{eq:Bakertauquotient} w_{\bar 0}(t;u,\phi)=\frac{\tau_W(t;Q_{\bar 0})}{\tau_W(t)},\quad w_{\bar 1}(t;u,\phi)=\frac{\tau_W^*(t;Q_{\bar 1})}{\tau^*_W(t)}, \end{equation} where $$ \tau_W(t;Q_{\bar 0})=\frac{\ber([Q_{\bar0}\mathcal W]_-)}{\ber(\mathcal W_-)}, \quad \tau^*_W(t;Q_{\bar 1})=\frac{\berdual([Q_{\bar1}\mathcal W]_-)\phi}{\berdual(\mathcal W_-)}. $$ Note that even if we are only interested in the Jacobian Heisenberg flows the full Heisenberg flows automatically appear in the theory if we express the Baker functions in terms of the $\tau$ functions. In principle we could also consider the flows on $\mathcal S\text{gr}$ generated by the full super Heisenberg algebra $\mathcal S\text{Heis}$. However, since $\mathcal S\text{Heis}$ is non--Abelian the interpretation of these flows is less clear and therefore we leave the discussion of these matters to another occasion. \section{The Krichever map and algebro-geometric solutions} \subsection{The Krichever map.}\label{ss:Krichever} Consider now a set of geometric data $(X,P,(z,\theta),\mathcal{L},t)$, where: \begin{itemize} \item $X$ is a generic SKP curve as before. \item $P$ is an irreducible divisor on $X$, so that $P^{\text{red}}$ is a single point of the underlying Riemann surface $X^{\text{red}}$. \item $(z,\theta)$ are local coordinates on $X$ near $P$, so that $P$ is defined by the equation $z=0$. \item $\mathcal{L}$ is an invertible sheaf on $X$. \item $t$ is a trivialization of $\mathcal{L}$ in a neighborhood of $P$, say $U_P = \{| z^{\text{red}}|<1\}$. \end{itemize} We will associate to this data a point of the super Grassmannian $\mathcal S\text{gr}$. 
For studying meromorphic sections of $\mathcal L$ we have the exact sequence \begin{equation} 0 \rightarrow \mathcal{L} \overset{\text{inc}}{\rightarrow} \mathcal{L}(P) \overset{\text{res}}{\rightarrow} \mathcal{L}_{P^{\text{red}}} \cong \Lambda | \Lambda \rightarrow 0, \end{equation} which gives \begin{equation} \label{polesequence} H^0(\mathcal{L}) \hookrightarrow H^0(\mathcal{L}(P)) \rightarrow \Lambda | \Lambda \rightarrow H^1(\mathcal{L}) \rightarrow H^1(\mathcal{L}(P)) \rightarrow 0, \end{equation} where the residue is the pair of coefficients of $z^{-1}$ and $\theta z^{-1}$ in the Laurent expansion. Let $\mathcal{L}(*P) = \lim_{n \rightarrow \infty} \mathcal{L}(nP)$ be the sheaf of sections of $\mathcal{L}$ holomorphic except possibly for a pole of arbitrary order at $P$. The {\it Krichever map} associates to a set of geometric data as above the $\Lambda$-module of formal Laurent series $W = z\ t[H^0(X,\mathcal{L}(*P))]$, which will be viewed as a submodule of $H$. In \cite{MuRa:SupKrich,Ra:GeomSKP} the concern was expressed that $W$ might not be freely generated, and hence not an element of $\mathcal S\text{gr}$ as we have defined it. However, \begin{thm} $H^0(X,\mathcal{L}(*P))$ is a freely generated $\Lambda$-module, and $W \in \mathcal S\text{gr}$. Further, $W$ belongs to the big cell if the geometric data satisfy $H^0(X,\mathcal{L}) = H^1(X,\mathcal{L}) = 0$, which happens generically if $\deg \mathcal{L} = g-1$. \end{thm} \begin{proof} Assume first that $H^0(X,\mathcal{L}) = H^1(X,\mathcal{L}) = 0$. Then the sequence \thetag{\ref{polesequence}} applied to $\mathcal{L}$ gives \begin{equation} 0 \rightarrow H^0(\mathcal{L}(P)) \rightarrow \Lambda | \Lambda \rightarrow 0 \rightarrow H^1(\mathcal{L}(P)) \rightarrow 0, \end{equation} so that $H^0(X,\mathcal{L}(P))$ is freely generated by an even and odd section having principal parts $z^{-1}$ and $\theta z^{-1}$, and $H^1(X,\mathcal{L}(P))$ is still zero. Applying the same sequence inductively to $\mathcal{L}(nP)$ shows that $H^0(X,\mathcal{L}(*P))$ is freely generated by one even and one odd section of each positive pole order. So $W$ is obtainable from $H_-$ by multiplication by a lower triangular invertible matrix, and $W$ belongs to the big cell of $\mathcal S\text{gr}$. We also have $H^i(\mathcal{L}\spl) = 0, i=0,1$, from Theorem \ref{thm:freeness}. And, by the super Riemann-Roch Theorem \thetag{\ref{superRR}}, $\deg \mathcal{L} = \deg \mathcal{L}\spl = g-1$. Moreover, by semicontinuity, in $\operatorname{Pic}^{g-1}(X)$ the cohomology groups $H^i(\mathcal{L})$ can only get larger on Zariski closed subsets, so generically they are zero. Now consider the general situation in which $H^i(\mathcal{L})$ may not be zero. Still, by twisting, $H^1(\mathcal{L}(nP)) = H^1(\mathcal{L}\spl(nP)) = 0$ for $n$ sufficiently large. Then, by the previous argument, $H^0(\mathcal{L}(*P))$ has non purely nilpotent elements with poles of order $n+1$ and higher; the worry is that one may only be able to find nilpotent generators for $H^0(\mathcal{L}(nP))$. So take $f \in H^0(\mathcal{L}(nP))$ of order $k$ in nilpotents: its image in $H^0(\mathcal{L}(nP)/\mathfrak m^k)$ is zero, but its image $\hat{f}$ in $H^0(\mathcal{L}(nP)/\mathfrak m^{k+1})$ is nonzero and also lies in $\Lambda^k H^0(\mathcal{L}\spl(nP))$. Then $\hat{f}$ can be identified with a sum of elements $f_a$ of $H^0(\mathcal{L}\spl(nP))$ with coefficients from $\Lambda^k$. 
By the extension sequence \thetag{\ref{eq:longexactcohom}}, each $f_a$ can be extended order by order in nilpotents to an element of $H^0(\mathcal{L}(nP))$ which is not purely nilpotent. So we can write the order $k$ element $f$ as a $\Lambda$-linear combination of not purely nilpotent elements of $H^0(\mathcal{L}(nP))$, modulo an element of order $k+1$. Induction on $k$ shows then that any element of $H^0(\mathcal{L}(nP))$ is a $\Lambda$-linear combination of not purely nilpotent elements of $H^0(\mathcal{L}(nP))$. So there exists a set of non-nilpotent elements which span $H^0(\mathcal{L}(nP))$ over $\Lambda$. A linearly independent subset of these completes a basis for $H^0(\mathcal{L}(*P))$. \end{proof} \subsection{The Chern class of the Ber bundle on $\Pic^0(X)$.}\label{ss:ChernclassBeronPic} By the arguments of the previous subsection we have, in case $W\in \mathcal S\text{gr}$ is obtained by the Krichever map from geometric data $(X, P, \mathcal L, (z,\theta), t)$, an exact sequence of $\Lambda$-modules: \begin{equation}\label{eq:seqcohomW} 0\to H^0(X,\mathcal L)\to W\to H_-\to H^1(X,\mathcal L)\to 0. \end{equation} We can interpret the Ber bundle $\Bersgr$ in terms of this sequence as follows. Let $M$ be a free $\Lambda$-module, possibly of infinite rank, and let $B=\{\mu\}$ be a collection of bases for $M$ such that any two bases $\mu,\mu^\prime\in B$ are related by $\mu^\prime=\mu T$ where $T\in \operatorname{Aut}(M)$ has a well defined Berezinian. Then we associate to the pair $(M,B)$ a free rank $(1\mid 0)$ module $\ber(M)$ with generator $b(\mu)$ for any $\mu\in B$ with identification $b(\mu^\prime)=\ber(T)b(\mu)$. The fiber of $\Bersgr$ at $W$ can then be interpreted as $\ber(W)$, using the collection of admissible bases as $B$ in the above definition. Similarly we can construct on $\mathcal S\text{gr}$ a line bundle with fiber at $W$ the module $\ber(H_-)$. Clearly this bundle is trivial, so we can, even better, think of $\Bersgr$ as having fiber $\ber(W)\otimes \berdual(H_-)$. But by the properties of the Berezinian we get from \eqref{eq:seqcohomW} $$ \ber(W)\otimes \berdual(H_-)=\ber(H^0(X,\mathcal L))\otimes \berdual(H^1(X,\mathcal L)). $$ Now we have seen in subsection \ref{subs:Berbundles} that the Ber bundle $\Ber(\Pic^0(X))$ on $\Pic^0(X)$ has the same fiber, with the difference that there we were dealing with bundles of degree 0 and here $\mathcal L$ has degree $g-1$. For fixed $(X,P,(z,\theta))$ the collection $M$ consisting of Krichever data $(X,P,(z,\theta),\mathcal L,t)$ forms a supermanifold and we have two morphisms $$ i:M\to \mathcal S\text{gr},\quad p:M\to \Pic^0(X) $$ where $i$ is the Krichever map and $p$ is the projection from Krichever data to the line bundle $\mathcal L$. (Here we identify $\operatorname{Pic}^n(X)$ with $\Pic^0(X)$ via the invertible sheaf $\mathcal O_X(-nP)$.) Then we see that $i^*(\Bersgr)\simeq p^*(\Ber(\Pic^0(X)))$. This fact allows us to prove Theorem \ref {thm:1ChernBertriv}. Note first that we have a surjective map \begin{equation}\label{eq:surjecttoH1} \mathcal J\text{Heis}_-\to H^1(X,\mathcal O_X). \end{equation} Indeed, let $X=U_0\cup U_P$ be an open cover where $U_0=X-P^{\text{red}}$ and $U_P$ is a suitable disk around $P^{\text{red}}$. Then if $[a]\in H^1(X,\mathcal O_X)$ is represented by $a\in\mathcal O_X(U_0\cap U_P)$ we can write, using the local coordinates on $U_P$, $a=a_P+\sum_{i>0} a_i z^{-i}+\alpha_i z^{-i}\theta$, with $a_P\in \mathcal O_X(U_P)$. 
Then $a-a_P=\sum a_i z^{-i}+\alpha_i z^{-i}\theta\in \mathcal J\text{Heis}_-$ and $[a]=[a-a_P]$. Now the tangent space to any point $\mathcal L\in\Pic^0(X)$ can be identified with $H^1(X,\mathcal O_X)$ and so we have a surjective map from $\mathcal J\text{Heis}_-$ to the tangent space of $\Pic^0(X)$. Note secondly that a change of trivialization of $\mathcal L$, given by $t\mapsto t^\prime$, corresponds to multiplication of the point $W\in \mathcal S\text{gr}$ by an element $a_0+\alpha_0\theta +\sum_{i>0}a_i z^{ i}+\alpha_i z^{i}\theta$ of the group corresponding to $\mathcal J\text{Heis}_+$. From these two facts we conclude that there is a surjective map from $\mathcal J\text{Heis}$ to the tangent space to the image of the Krichever map $i:M\to \mathcal S\text{gr}$ at any point $W=W(X,P,(z,\theta),\mathcal L,t)$. Now the first Chern class of $\Bersgr$ is calculated from the cocycle \eqref{eq:cocycle} on $gl_{\infty\mid\infty}(\Lambda)$ and it follows from Lemma \ref{lem:comactionJheis} that the restriction of this cocycle to $\mathcal J\text{Heis}$ is identically zero. This implies that $$ i^*(c_1[\Bersgr])=p^*(c_1[\Ber(\Pic^0(X))])=0. $$ But the map $p:M\to \Pic^0(X)$ is surjective, so we finally find that $c_1(\Ber(\Pic^0(X)))=0$ and $\Ber(\Pic^0(X))$ is topologically trivial, proving Theorem \ref{thm:1ChernBertriv}. \subsection{Algebro-geometric tau and Baker functions.} \label{ss:AlgebrogeometrictauBaker} We consider geometric data mapping to $W$ in the big cell of $\mathcal S\text{gr}$, so that $\deg \mathcal{L} = g-1$. As discussed in Section 3, we can associate to $W$ both a tau function and a Baker function. A system of super KP flows on $\mathcal S\text{gr}$ applied to $W$ produces an orbit corresponding to a family of deformations of the original geometric data. The simplest system of super KP flows, the ``Jacobian" system of Mulase and Rabin \cite{Mu:Jac,Ra:GeomSKP}, deforms the geometric data by moving $\mathcal{L}$ in $\text{Pic}^{g-1}(X)$. Solutions to this system for $X$ a super elliptic curve were obtained in terms of super theta functions in \cite{Ra:SupElliptic}. On the basis of the ordinary KP theory, cf. \cite{SeWi:LpGrpKdV}, section 9, we might expect that in general the tau and Baker functions for this family can be given explicitly as functions of the flow parameters by means of the super theta functions (when these exist) on the Jacobian of $X$. We now discuss the extent to which this is possible. Recall from \eqref{eq:surjecttoH1} that we have a surjection from $\mathcal J\text{Heis}_{-,\text{ev}}$ to the cohomology group $H^1(X,\mathcal O_{X,\text{ev} } )$. By exponentiation we obtain a map from $\mathcal J\text{Heis}_{-,\text{ev}}$ to $\Pic^0(X)$ and these maps fit together in a diagram \begin{equation} \begin{CD}\label{eq:cdtautheta} {} @. {} @. {} @. 0 @. {} \\ @. @. @. @VVV @. \\ {} @. 0 @. {} @. H^1(X,\mathbb Z)@. \\ @. @VVV @. @VVV @. \\ 0 @>>> K_0 @>>>\mathcal J\text{Heis}_{-,\text{ev}}@>>> H^1(X,\mathcal O_{X,\text{ev} } ) @>>>0 \\ @. @VVV @\vert @VVV @. \\ 0 @>>> K @>>>\mathcal J\text{Heis}_{-,\text{ev}}@>>>\Pic^0(X) @>>>0 \\ @. @VVV @. @VVV @. \\ {} @. K/K_0 @. {} @.0 @.{}\\ @. @VVV @. @. @. \\ {} @. 0 @. {} @. {} @. {} \\ \end{CD} \end{equation} Here $K_0$ is the $\Lambda_{\text{ev}}$-submodule of elements $f$ of $\mathcal J\text{Heis}_{-,\text{ev}}$ that split as $f=f_0 + f_P$, with $f_0\in \mathcal O_X(U_0)$ and $f_P\in \mathcal O_X(U_P)$ and $K$ is the Abelian subgroup (not submodule!) 
of elements $k$ of $\mathcal J\text{Heis}_{-,\text{ev}}$ that after exponentiation factorize: $e^k=\phi_k e^{k_P}$, with $\phi_k\in \mathcal O_X(U_0)^\times$ and $k_P\in \mathcal O_X(U_P)$. {}From the Snake Lemma it then follows that $H^1(X,\mathbb Z)\simeq K/K_0$. So a function $\hat F$ on $\mathcal J\text{Heis}_{-,\text{ev}}$ descends to a function $ F$ on $H^1(X,\mathcal O_{X,\text{ev} } )$ if it is invariant under $K_0$. The automorphic behavior of such a function $ F$ with respect to the lattice $H^1(X,\mathbb Z)$ translates into the behavior of $\hat{F}$ under shifts by elements of $K$. In particular we consider the function $\tau_W$ associated to a point $W$ in the big cell of $\mathcal S\text{gr}$, see \eqref{eq:tau} and \eqref{eq:taudef}. This is a function on $\mathcal J\text{Heis}_{-,\text{ev}}$ and, because of Lemma \ref{lem:comactionJheis}, we see by an easy adaptation of the proof of Lemma 9.5 in \cite{SeWi:LpGrpKdV} that $$ \tau_W(f+k)=\tau_W(f)\tau_W(k),\quad f\in \mathcal J\text{Heis}_{-,\text{ev}}, k\in K. $$ In particular we obtain by restriction a homomorphism $$ \tau_W:K_0\to \Lambda^\times_{\text{ev}}. $$ Let $\eta:K_0\to \Lambda_{\text{ev}}$ be a homomorphism such that $\tau_W(k_0)=e^{\eta(k_0)}$, for all $k_0\in K_0$. Then we can define a new function $$ \hat \tau_1(f)=\tau_W(f)e^{-\eta(f)}. $$ Then $\hat \tau_1(k_0)=1$, but still we have \begin{equation}\label{eq:multtau} \hat \tau_1(f+k)=\hat \tau_1(f)\hat \tau_1(k), \end{equation} so that $\hat \tau_1$ descends to a function $\tau_1$ on $H^1(X,\mathcal O_{X,\text{ev} } )$. From \eqref{eq:multtau} we see that $\tau_1$ corresponds to a (meromorphic) section of a line bundle on $\Pic^0(X)$ with trivial Chern class. A suitable ratio of translated theta functions gives a section of this same bundle, so that $\tau_1$ is expressed as this ratio times a meromorphic function, the latter being rationally expressible in terms of super theta functions. Then the modified tau function $\tau_1$ is rationally expressed in terms of super theta functions. The even Baker function $w^W_{\bar 0}(z,\theta)$ associated to the point $W$ is just the even section of $\mathcal L$ holomorphic except for a pole $1/z$ at $P$. Such a section can be specified by its restrictions to the charts $U_0$ and $U_P$. The Jacobian super KP flows act by multiplying the transition function of $\mathcal L$ across the boundary of $U_P$ by a factor $\gamma(t)$ as in \eqref{expfactor}. The corresponding action on the associated point $W$ of $\mathcal S\text{gr}$ is generated by the matrices $\lambda(n)+\mu(n)$ and $e(n)$ of Section 3; the remaining matrices generate deformations of the curve $X$ and enter the Kac--van de Leur SKP flows. Then $w^{W(t)}_{\bar 0} / w^W_{\bar 0}$ is a section of the bundle with transition function \thetag{\ref{expfactor}}. Equivalently, it is a meromorphic function on $U_0$ which extends into $U_P$ except for an essential singularity of the form \thetag{\ref{expfactor}}, having zeros at the divisor of $\mathcal L(t)$ and poles at the divisor of $\mathcal L$. By analogy with the ``Russian formula" of ordinary KP theory, such a function would be expressed in the form \begin{equation} \label{Russian} \exp [ \sum_{k=1}^{\infty} \int_{(0,0)}^{(z,\theta)} (t_k \hat{\psi}_k + t_{k-\frac12} \hat{E}_k) + c(t) ] \end{equation} times a ratio of theta functions providing the zeros and poles.
Here $\hat{\psi}_k$ and $\hat{E}_k$ are differentials on $\hat{X}$, with vanishing $a$-periods and holomorphic except for the behavior near $P$, \begin{equation} \hat{\psi}_k \sim \hat{D}(z^{-k}) = -k\rho \hat{z}^{-(k+1)},\;\;\;\; \hat{E}_k \sim \hat{D}(\theta z^{-k}) = \hat{z}^{-k}. \end{equation} The constant $c(t)$ is linear in the flow parameters. In addition to the symmetry of the period matrix, we have to require the existence of these differentials. This requires that they exist in the split case, and then that these split differentials extend through the sequence \thetag{\ref{eq:longexactcohom}}. In the split case, the odd differentials $\hat{\psi}_k$ are just $\theta$ times the ordinary differentials on the reduced curve which appear in the Russian formula (and they do extend). However, the even differentials $\hat{E}_k$ are sections of $\mathcal N$, which is of degree zero and nontrivial, with $h^1 = g-1$. Consequently, when $g>1$ there will be Weierstrass gaps in the list of pole orders of these differentials. This means that the odd flow parameters corresponding to the missing differentials must be set to zero in order for the Baker function to assume the ``Russian" form. Even then, however, the function given by the Russian formula will generically behave as $1 + \alpha\theta + \mathcal O(z)$ for $z \rightarrow 0$, rather than the correct $1 + O(z)$ for $w^{W(t)}_{\bar 0}$ containing no $\theta/z$ pole. In \cite{Ra:SupElliptic} this was dealt with by including a term $\xi \hat{E}_0$ in the exponential, taking $\partial_\xi$ to construct a section with a pure $\theta/z$ pole, and subtracting off the appropriate multiple of this. In general, however, no such $\hat{E}_0$ will exist. These difficulties are understandable in view of the relations (\ref{eq:Bakertauquotient}) which require that the tau function be known for the full set of K-vdL flows in order to compute the Baker functions for even the Jacobian flows. Since the dependence of the tau function on the non-Jacobian flows is likely to be far more complicated than our super theta functions, it is unlikely that the Baker functions can be expressed in terms of them.
\section{INTRODUCTION AND SUMMARY} In the last four years there has been much activity in trying to obtain information about the structure of soft Supersymmetry (SUSY)-breaking terms in effective $N=1$ theories coming from four-dimensional (4-D) Strings \cite{review}. The basic idea is to identify some $N=1$ chiral fields whose auxiliary components could break SUSY by acquiring a vacuum expectation value (VEV). Natural candidates in 4-D Strings are 1) the complex dilaton field $S={{1}\over {g^2}} +ia$ which is present in any 4-D String and 2) the moduli fields $T^i, U^i$ which parametrize the size and shape of the compactified variety in models obtained by compactification of a ten-dimensional heterotic String. It is not totally unreasonable to think that some of these fields may play an important role in SUSY breaking. To start with, if String models are to make any sense, these fields should be strongly affected by non-perturbative phenomena. They are massless in perturbation theory and non-perturbative effects should give them a mass to avoid deviations from the equivalence principle and other phenomenological problems. Secondly, these fields are generically present in large classes of 4-D models (the dilaton in all of them). Finally, the couplings of these fields to charged matter are suppressed by powers of the Planck mass, which makes them natural candidates to constitute the SUSY-breaking ``hidden sector'' which is assumed to be present in phenomenological models of low-energy SUSY. The important point in this assumption of locating the seed of SUSY-breaking in the dilaton/moduli sectors is that it leads to some interesting relationships among different soft terms [2-7] which could perhaps be experimentally tested. This general analysis was applied in particular to the gaugino condensation scenario in ref.\cite{gaugino}, whereas in refs.[3-7] no special assumption was made about the possible origin of SUSY breaking. In ref.\cite{BIM} a systematic discussion of the structure of soft terms which may be obtained under the assumption of dilaton/moduli-dominated SUSY breaking in some classes of 4-D Strings was presented, with particular emphasis on the case of Abelian $(0,2)$ orbifold models \cite{orbifolds}. Mostly, a situation was considered in which only the dilaton $S$ and an ``overall modulus $T$'' field contribute to SUSY breaking. In fact, actual 4-D Strings like orbifolds contain several $T_i$ and $U_i$ moduli. Generic $(0,2)$ orbifold models contain three $T_i$ moduli fields (only $Z_3$ has 9 and $Z_4$, $Z_6'$ have 5) and a maximum of three (``complex structure'') $U_i$ fields. The use of an overall modulus $T$ is equivalent to the assumption that the three $T_i$ fields of generic orbifold models contribute exactly the same to SUSY breaking and the rest do not contribute. In the absence of further dynamical information it is reasonable to expect similar contributions from {\it all} the moduli although not necessarily exactly the same. Thus it is natural to ask what changes if one relaxes the overall modulus hypothesis and works with the multimoduli case \cite{BIMS}. This is one of the purposes of the present talk. The second one is to analyze in more detail the dilaton-dominated limit, where only the dilaton field contributes to SUSY breaking \cite{KL,BIM}. This is a very interesting possibility not only due to phenomenological reasons, such as the universality of the soft terms, but also due to theoretical arguments.
In this connection it has recently been realized \cite{tonto,I,jeta} that the boundary conditions $-A=M_{1/2}={\sqrt{3}}m$ of dilaton dominance coincide with some boundary conditions considered by Jones, Mezincescu and Yau in 1984 \cite{JMY} in a completely different context. They found that those same boundary conditions maintain the (two-loop) finiteness properties of certain $N=1$ SUSY theories \cite{finitas}. This coincidence is in principle quite surprising since we did not bother about the loop corrections when extracting these boundary conditions from the dilaton-dominance assumption. Also, effective $N=1$ field theories from Strings do not in general fulfil the finiteness requirements. It has also been noticed \cite{I} that this coincidence could be related to an underlying $N=4$ structure of the dilaton Lagrangian and that the dilaton-dominated boundary conditions could appear as a fixed point of renormalization group equations \cite{I,J}. This could perhaps be an indication that at least some of the possible soft terms obtained in the present scheme could have a more general relevance, not necessarily linked to a particular form of the tree-level Lagrangian. In section 2 we present an analysis of the effects of several moduli on the results obtained for soft terms. In the multimoduli case several parameters are needed to specify the Goldstino direction in the dilaton/moduli space, in contrast with the overall modulus case where the relevant information is contained in just one angular parameter $\theta$. The presence of more free parameters leads to some loss of predictivity for the soft terms. This predictivity is recovered and increased in the case of dilaton dominance where the soft terms, eq.(\ref{dilaton}), are independent of the 4-D String considered and fulfil the low-energy mass relations given by eq.(\ref{dilaton2}). Also we show that, even in the multimoduli case, in some schemes there are certain sum rules among soft terms, eq.(\ref{rulox}), which hold independently of the Goldstino direction. The presence of these sum rules implies that, {\it on average}, the {\it qualitative} results in the dilaton-dominated case still apply. Specifically, if one insists, e.g., on obtaining scalar masses heavier than gaugino masses (something not possible in the dilaton-dominated scenario), this is possible in the multimoduli case, but the sum rules often force some of the scalars to get negative squared mass. If we want to avoid this, we have to stick to gaugino masses bigger than (or of order) the scalar masses. This would lead us back to the qualitative results obtained in dilaton dominance. Let us notice however that in very specific limits, which will be discussed below, these results might be modified. In the case of standard model 4-D Strings the tachyonic behaviour may be particularly problematic, since charge and/or colour could be broken. In the case of GUTs constructed from Strings, it may just be the signal of GUT symmetry breaking. However, even in this case one expects the same order of magnitude results for observable scalar and gaugino masses and hence the most natural mass relations {\it at low-energy} are still similar to the dilaton dominance ones. Another topic of interest is the $B$ parameter, the soft mass term which is associated to a SUSY-mass term $\mu H_1H_2$ for the pair of Higgses $H_{1,2}$ in the Minimal Supersymmetric Standard Model (MSSM). Compared to the other soft terms, the result for the $B$ parameter is more model dependent.
Indeed, it depends not only on the dilaton/moduli-dominance assumption but also on the particular mechanism which could generate the associated ``$\mu$ term'' \cite{review}. An interesting possibility to generate such a term is the one suggested in refs.\cite{GM,CM} in which it was pointed out that in the presence of certain bilinear terms in the K\"ahler potential an effective $\mu$ term of order the gravitino mass, $m_{3/2}$, is naturally generated. Interestingly enough, such bilinear terms in the K\"ahler potential do appear in String models and particularly in Abelian orbifolds. In section 3 we compute the $\mu $ and $B$ parameters\footnote{The results for $B$ corresponding to the possibility of generating a small $\mu$ term from the superpotential \cite{CM} can also be found, for the multimoduli case under consideration, in ref.\cite{BIMS}. They are more model dependent.} as well as the soft scalar masses of the charged fields which could play the role of Higgs particles in such Abelian orbifold schemes. We find the interesting result that, {\it independently of the Goldstino direction} in the dilaton/moduli space, one gets the prediction $|tg\beta |=1$ at the String scale. On the other hand, if we consider the interesting Goldstino direction where only the dilaton breaks SUSY, all the soft terms and the $\mu$ parameter depend only on the gravitino mass. Imposing the phenomenological requirement of correct electroweak breaking we arrive at the remarkable result that the whole SUSY spectrum is completely determined with no free parameters. Unfortunately, this direction is not consistent with the measured value of the top-quark mass. In this connection, an interesting comment about low-energy charge and color breaking minima in the dilaton-dominated limit can be found at the end of the section. A few comments before closing up this summary are in order. First of all we are assuming here that the seed of SUSY breaking propagates through the auxiliary fields of the dilaton $S$ and the moduli $T_i$, $U_i$. However attractive this possibility might be, it is fair to say that there is no compelling reason why other fields in the theory could not also participate. Nevertheless the present scheme has a certain predictivity due to the relative universality of the couplings of the dilaton and moduli. Indeed, the dilaton has universal and model-independent couplings which are there independently of the 4-D String considered. The moduli $T_i$, $U_i$ fields are less universal: their number and structure depend on the type of compactification considered. However, there are thousands of different $(0,2)$ models with different particle content which share the same $T_i$, $U_i$ moduli structure. For example, the moduli structure of a given $Z_N$ orbifold is the same for all the thousands of $(0,2)$ models one can construct from it by doing different embeddings and adding discrete Wilson lines. So, in this sense, although not really universal, there are large classes of models with identical $T_i$, $U_i$ couplings. This is not the case of generic charged matter fields whose number and couplings are completely out of control, each individual model being in general completely different from any other.
Thus assuming dilaton/moduli dominance in the SUSY-breaking process has at least the advantage of leading to specific predictions for large classes of models, whereas if charged matter fields play an important role in SUSY breaking we will be forced to a model-by-model analysis, something which looks out of reach. Another point to remark is that we will use the tree level forms for both the gauge kinetic function and the K\"ahler potential. One-loop corrections to these functions have been computed in refs.\cite{loop} and \cite{loop2} respectively in some classes of 4-D Strings (orbifold models) and their effects on the soft terms have also been studied \cite{BIM,japoneses,BC} and could be included in the analysis without difficulty. In fact, the effects of these one-loop corrections will in general be negligible except for those corners of the Goldstino directions in which the tree-level soft terms vanish. However, as we will see below, this situation would be a sort of fine-tuning. More worrisome are the possible non-perturbative String corrections to the K\"ahler and gauge kinetic functions. We have made use in our orbifold models of the known tree-level results for those functions. If the non-perturbative String corrections turn out to be important, it would be impossible to make any prediction about soft terms unless we know all the relevant non-perturbative String dynamics, something which looks rather remote (although perhaps not so remote as it looked one year ago!). \section{SOFT TERMS} \subsection{General structure: the multimoduli case} \label{subsec:general} We are going to consider $N=1$ SUSY 4-D Strings with $m$ moduli $T_i$, $i=1,\ldots,m$. Such notation refers to both $T$-type and $U$-type (K\"ahler class and complex structure in the Calabi-Yau language) fields. In addition there will be charged matter fields $C_{\alpha }$ and the complex dilaton field $S$. In general we will be considering $(0,2)$ compactifications and thus the charged fields do not need to correspond to $27$s of $E_6$. Before further specifying the class of theories that we are going to consider, a comment about the total number of moduli is in order. We are used to thinking of large numbers of $T$ and $U$-like moduli due to the fact that in $(2,2)$ ($E_6$) compactifications there is a one to one correspondence between moduli and charged fields. However, in the case of $(0,2)$ models with arbitrary gauge group (which is the case of phenomenological interest) the number of moduli is drastically reduced. For example, in the standard $(2,2)$ $Z_3$ orbifold there are 36 moduli $T_i$, 9 associated to the untwisted sector and 27 to the fixed points of the orbifold. In the thousands of $(0,2)$ $Z_3$ orbifolds one can construct by adding different gauge backgrounds or doing different gauge embeddings, only the 9 untwisted moduli remain in the spectrum. The same applies to models with $U$-fields. This is also the case for compactifications using $(2,2)$ minimal superconformal models. Here all singlets associated to twisted sectors are projected out when proceeding to $(0,2)$ \cite{plesser}. So, as these examples show, in the case of $(0,2)$ compactifications the number of moduli is drastically reduced to a few fields. In the case of generic Abelian orbifolds one is in fact left with only three T-type moduli $T_i$ ($i=1,2,3$), the only exceptions being $Z_3$, $Z_4$ and $Z'_6$, where this number is 9, 5 and 5, respectively.
The number of $U$-type fields in these $(0,2)$ orbifolds oscillates between $0$ and $3$, depending on the specific example. Specifically, $(0,2)$ $Z_2\times Z_2$ orbifolds have 3 $U$ fields, the orbifolds of type $Z_4,Z_6$,$Z_8,Z_2\times Z_4$,$Z_2\times Z_6$ and $Z_{12}'$ have just one $U$ field and the rest have no untwisted $U$-fields. Thus, apart from the three exceptions mentioned above, this class of models has at most 6 moduli, three of $T$-type (always present) and at most three of $U$-type. In the case of models obtained from Calabi-Yau type of compactifications a similar effect is expected and only one $T$-field associated to the overall modulus is guaranteed to exist in $(0,2)$ models. We will consider effective $N=1$ supergravity (SUGRA) K\"ahler potentials of the type: \begin{eqnarray} & K(S,S^*,T_i,T_i^*,C_{\alpha},C_{\alpha}^*)\ = \ -\log(S+S^*)\ +\ {\hat K}(T_i,T_i^*)\ +\ {\tilde K}_{{\overline{\alpha }}{ \beta }}(T_i,T_i^*){C^*}^{\overline {\alpha}} C^{\beta }\ + (Z_{{\alpha }{ \beta }}(T_i,T_i^*){C}^{\alpha} C^{\beta }\ +\ h.c. \ ). & \label{kahl} \end{eqnarray} The first piece is the usual term corresponding to the complex dilaton $S$ which is present for any compactification whereas the second is the K\"ahler potential of the moduli fields, where we recall that we are denoting the $T$- and $U$-type moduli collectively by $T_i$. The greek indices label the matter fields and their kinetic term functions are given by ${\tilde K_{{\overline{\alpha }}{ \beta }}}$ and $Z_{{\alpha }{\beta }}$ to lowest order in the matter fields. The last piece is often forbidden by gauge invariance in specific models although it may be relevant in some cases as discussed in section 3. The complete $N=1$ SUGRA Lagrangian is determined by the K\"ahler potential $K({\phi }_M ,\phi^*_M)$, the superpotential $W({\phi }_M)$ and the gauge kinetic functions $f_a({\phi }_M)$, where $\phi_M$ generically denotes the chiral fields $S,T_i,C_{\alpha }$. As is well known, $K$ and $W$ appear in the Lagrangian only in the combination $G=K+\log|W|^2$. In particular, the (F-part of the) scalar potential is given by \begin{eqnarray} & V(\phi _M, \phi ^*_M)\ =\ e^{G} \left( G_M{K}^{M{\bar N}} G_{\bar N}\ -\ 3\right) \ , & \label{pot} \end{eqnarray} where $G_M \equiv \partial_M G \equiv \partial G/ \partial \phi_M$ and $K^{M{\bar N}}$ is the inverse of the K\"ahler metric $K_{{\bar N }M}\equiv{\partial}_{\bar N}{\partial }_M K$. The crucial assumption now is to locate the origin of SUSY breaking in the dilaton/moduli sector. Then, plugging eq.(\ref{kahl}) into eq.(\ref{pot}), the bosonic soft SUSY-breaking terms can be computed. 
Applying the standard SUGRA formulae \cite{Soni} to the most general case where the moduli and matter metrics are not diagonal we obtain: \begin{eqnarray} & {m}'^2_{{\overline{\alpha }}{ \beta }} = m_{3/2}^2 {\tilde K_{{\overline{\alpha }}{ \beta }}} - {\overline F}^{\overline{i}} ( \partial_{\overline{i}}\partial_j {\tilde K_{{\overline{\alpha }}{ \beta }}} -\partial_{\overline{i}} {\tilde K_{{\overline{\alpha }}{ \gamma}}} {\tilde K^{{ \gamma} {\overline{\delta}} }} \partial_j {\tilde K_{{\overline{\delta}}{ \beta}}} ) F^j \ , & \label{mmatrix} \\ & A'_{\alpha\beta\gamma} = F^S K_S h_{\alpha\beta\gamma} + F^i \left[ {\hat K}_i h_{\alpha\beta\gamma} + \partial_i h_{\alpha\beta\gamma} - \left( {\tilde K^{{ \delta} {\overline{\rho}} }} \partial_i {\tilde K_{{\overline{\rho}}{ \alpha}}} h_{\delta\beta\gamma} +(\alpha \leftrightarrow \beta)+(\alpha \leftrightarrow \gamma)\right)\right] \ , & \label{mmatrix2} \end{eqnarray} where ${m}'^2_{ {\overline{\alpha }} { \beta } }$ and $A'_{\alpha\beta\gamma}$ are the soft mass matrix and the soft trilinear parameters respectively (corresponding to un-normalized charged fields), $h_{\alpha \beta \gamma }$ is a (un-rescaled) renormalizable Yukawa coupling involving three charged chiral fields and $F^S=e^{G/2} K_{ {\bar{S}} S}^{-1} G_{\bar{S}}$, $F^i=e^{G/2} {\hat K}^{i {\overline j}} G_{\overline j}$ are the dilaton and moduli auxiliary fields. Notice that, after normalizing the fields to get canonical kinetic terms, the first piece in eq.(\ref{mmatrix}) will lead to universal diagonal soft masses but the second piece will generically induce off-diagonal contributions. Concerning the $A$-parameters, notice that we have not factored out the Yukawa couplings as usual, since proportionality is not guaranteed. Indeed, although the first term in $A'_{\alpha\beta\gamma}$ is always proportional in flavour space to the corresponding Yukawa coupling, the same thing is not necessarily true for the other terms. In this section we are going to consider the case of diagonal metric both for the moduli and the matter fields\footnote{An extensive analysis of the off-diagonal case in specific orbifold constructions, including the calculation of the soft terms and their effects on flavour changing neutral currents (FCNC), can be found in ref.\cite{BIMS}.}. Then ${\hat K}(T_i,T_i^*)$ will be a sum of contributions (one for each $T_i$), whereas ${\tilde K_{{\overline{\alpha }}{ \beta }}}$ will be taken of the diagonal form ${\tilde K_{{\overline{\alpha }}{ \beta }}} \equiv \delta _{{\overline{\alpha }}{ \beta }} {\tilde K_{\alpha }}$. Let us take the following parametrization for the VEV's of the dilaton and moduli auxiliary fields \begin{eqnarray} & G_{ {\bar{S}} S}^{1/2} F^S\ =\ \sqrt{3}m_{3/2}\sin\theta e^{-i\gamma _S}\ \ , & \nonumber \\ & G_{ {\bar{i}} i}^{1/2} F^i\ =\ \sqrt{3}m_{3/2}\cos\theta\ e^{-i\gamma _i} \Theta _i \ \ , & \label{auxi} \end{eqnarray} where $\sum _i \Theta _i^2=1$ and $e^G=m^2_{3/2}$ is the gravitino mass-squared. The angle $\theta $ and the $\Theta _i$ just parametrize the direction of the goldstino in the $S,T_i$ field space. We have also allowed for the possibility of some complex phases $\gamma _S, \gamma _i$ which could be relevant for the CP structure of the theory. This parametrization has the virtue that when we plug it in the general form of the SUGRA scalar potential eq.(\ref{pot}), its VEV (the cosmological constant) vanishes by construction. 
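As an illustrative aside (our own consistency check, using only the definitions above and the diagonal metrics assumed in this subsection), one can verify explicitly that the parametrization of eq.(\ref{auxi}) enforces a vanishing cosmological constant. Since, by assumption, only the dilaton and moduli auxiliary fields are non-zero, and since $G_{{\bar M}M}=K_{{\bar M}M}$ for these fields, eq.(\ref{pot}) gives
\begin{eqnarray}
& V_0\ =\ G_{ {\bar{S}} S}\,|F^S|^2\ +\ \sum _i G_{ {\bar{i}} i}\,|F^i|^2\ -\ 3e^{G}\ =\ 3m_{3/2}^2\sin^2\theta\ +\ 3m_{3/2}^2\cos^2\theta \sum _i \Theta _i^2\ -\ 3m_{3/2}^2\ =\ 0 \ , &
\end{eqnarray}
where the last equality uses $\sum _i \Theta _i^2=1$ and $e^G=m^2_{3/2}$.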
Notice that such a phenomenological approach allows us to `reabsorb' (or circumvent) our ignorance about the (nonperturbative) $S$- and $T_i$- dependent part of the superpotential, which is responsible for SUSY breaking. It is now a straightforward exercise to compute the bosonic soft SUSY-breaking terms in this class of theories. Plugging eq.(\ref{auxi}) into eqs.(\ref{mmatrix},\ref{mmatrix2}) one finds the following results (we recall that we are considering here a diagonal metric for the matter fields): \begin{eqnarray} & m_{\alpha }^2 = \ m_{3/2}^2 \ \left[ 1\ -\ 3\cos^2\theta \ ({\hat K}_{ {\overline i} i})^{-1/2} {\Theta }_i e^{i\gamma _i} (\log{\tilde K}_{\alpha })_{ {\overline i} j} ({\hat K}_{ {\overline j} j})^{-1/2} {\Theta }_j e^{-i\gamma _j} \ \right] \ , & \nonumber \\ & A_{\alpha \beta \gamma } = \ -\sqrt{3} m_{3/2}\ \left[ e^{-i{\gamma }_S} \sin\theta - \ e^{-i{\gamma }_i} \cos\theta \ \Theta_i ({\hat K}_{ {\overline i} i})^{-1/2} \left({\hat K}_i - \sum_{\delta=\alpha,\beta,\gamma} (\log {\tilde K}_{\delta })_i + (\log h_{\alpha \beta \gamma } )_i \ \right) \ \right] \ . & \label{soft} \end{eqnarray} The above scalar masses and trilinear scalar couplings (where we have factorized out the Yukawa coupling as usual) correspond to charged fields which have already been canonically normalized. Physical gaugino masses $M_a$ for the canonically normalized gaugino fields are given in general by $M_a=F^M[\log({\rm Re} f_a)]_M$. Since the tree-level gauge kinetic function is given for any 4-D String by $f_a=k_aS$, where $k_a$ is the Kac-Moody level of the gauge factor, the result for tree-level gaugino masses is independent of the moduli sector and is simply given by: \begin{eqnarray} & M\equiv M_a\ =\ m_{3/2}\sqrt{3} \sin\theta e^{-i\gamma _S} \ . & \label{gaugin} \end{eqnarray} As we mentioned above, the parametrization of the auxiliary field VEV's was chosen in such a way as to guarantee the automatic vanishing of the VEV of the scalar potential ($V_0=0$). If the value of $V_0$ is not assumed to be zero, the above formulae (\ref{auxi}-\ref{gaugin}) are modified in the following simple way. One just has to replace $m_{3/2}\rightarrow Cm_{3/2}$, where $|C|^2=1+V_0/3m_{3/2}^2$. In addition, the formula for $m_{\alpha }^2$ gets an additional contribution given by $2m_{3/2}^2(|C|^2-1)=2V_0/3$. The soft term formulae above (\ref{soft}, \ref{gaugin}) are in general valid for any compactification as long as we are considering diagonal metrics. In addition one is tacitly assuming that the tree-level K\"ahler potential and $f_a$-functions constitute a good approximation. The K\"ahler potentials for the moduli are in general complicated functions. Before going into specific classes of Superstring models, it is worth studying the interesting limit $\cos\theta =0$, corresponding to the case where the dilaton sector is the source of all the SUSY breaking (see eq.(\ref{auxi})). \subsection{The $\cos\theta =0$ (dilaton-dominated) limit} \label{subsec:dilaton} Since the dilaton couples in a universal manner to all particles, {\it this limit is quite model independent}.
Using eqs.(\ref{soft},\ref{gaugin}) one finds the following simple expressions for the soft terms which are independent of the 4-D String considered \begin{eqnarray} & m_{\alpha } = \ m_{3/2} \ , & \nonumber \\ & M_a = \ \pm \sqrt{3} m_{3/2} \ , & \nonumber \\ & A_{\alpha \beta \gamma } = \ - M_a , & \label{dilaton} \end{eqnarray} where, from the limits on the electric dipole moment of the neutron, we have imposed $\gamma_S$ = $0$ mod $\pi$. This dilaton-dominated scenario \cite{KL,BIM} is attractive for its simplicity and for the natural explanation that it offers to the universality of the soft terms. Actually, universality is a desirable property not only to reduce the number of independent parameters in the MSSM, but also for phenomenological reasons, particularly to avoid FCNC. Because of the simplicity of this scenario, the low-energy predictions are quite precise \cite{BLM,BIM,Vissani}. Since scalars are lighter than gauginos at the String scale, at low-energy ($\sim M_Z$), gluino, slepton and (first and second generation) squark mass relations turn out to be\footnote{The phenomenology of SUSY breaking by the dilaton in the context of a flipped $SU(5)$ model was also studied in ref.\cite{Nano}.} \begin{eqnarray} & M_g:m_Q:m_u:m_d:m_L:M_e \simeq 1:0.94:0.92:0.92:0.32:0.24 \ . & \label{dilaton2} \end{eqnarray} Although squarks and sleptons have the same soft mass, at low-energy the former are much heavier than the latter because of the gluino contribution to the renormalization of their masses. \subsection{Orbifold compactifications}\label{subsec:orbifold} To illustrate some general features of the multimoduli case we will concentrate here on the case of generic $(0,2)$ symmetric Abelian orbifolds. As we mentioned above, this class of models contains three $T$-type moduli and (at most) three $U$-type moduli. We will denote them collectively by $T_i$, where e.g. $T_i=U_{i-3}$; $i=4,5,6$. For this class of models the K\"ahler potential has the form \cite{potential} \begin{eqnarray} & K(\phi,\phi^*)\ =\ -\log(S+S^*)\ -\ \sum _i \log(T_i+T_i^*)\ + \sum _{\alpha } |C_{\alpha }|^2 \Pi_i(T_i+T_i^*)^{n_{\alpha }^i} \ . & \label{orbi} \end{eqnarray} Here $n_{\alpha }^i$ are fractional numbers usually called ``modular weights" of the matter fields $C_{\alpha }$. For each given Abelian orbifold, independently of the gauge group or particle content, the possible values of the modular weights are very restricted. For a classification of modular weights for all Abelian orbifolds see ref.\cite{IL}. As a matter of fact, the K\"ahler potentials which appear in the large-$T$ limit of Calabi-Yau compactifications \cite{calabi} and 4-D fermionic Strings \cite{fermionic} are quite close to the above one. Thus the results that we will obtain below will probably be more general than just for orbifold compactifications. Using the particular form (\ref{orbi}) of the K\"ahler potential and eqs.(\ref{soft},\ref{gaugin}) we obtain the following results\footnote{This analysis was also carried out for the particular case of the three diagonal moduli $T_i$ in ref.\cite{japoneses} and \cite{BC} in order to obtain unification of gauge coupling constants and to analyze FCNC constraints respectively. Some particular multimoduli examples were also considered in ref.\cite{FKZ}.} for the scalar masses, gaugino masses and soft trilinear couplings: \begin{eqnarray} & m_{\alpha }^2 = \ m_{3/2}^2(1\ +\ 3\cos^2\theta\ {\vec {n_{\alpha }}}. 
{\vec {\Theta ^2}}) \ , & \nonumber\\ & M = \ \sqrt{3}m_{3/2}\sin\theta e^{-i{\gamma }_S} \ , & \nonumber\\ & A_{\alpha \beta \gamma } = \ -\sqrt{3} m_{3/2}\ ( \sin\theta e^{-i{\gamma }_S} \ +\ \cos\theta \sum _{i=1}^6 e^{-i\gamma _i} {\Theta }^i {\omega }^i_{\alpha \beta \gamma } ) \ , & \label{masorbi} \end{eqnarray} where we have defined: \begin{eqnarray} & {\omega }^i_{\alpha \beta \gamma }\ =\ (1+n^i_{\alpha }+n^i_{\beta }+n^i_{\gamma }- {Y}^i_{\alpha \beta \gamma } )\ \ ;\ {Y}^i_{\alpha \beta \gamma } \ = \ {{h^i_{\alpha \beta \gamma }}\over {h_{\alpha \beta \gamma }}} 2{\rm Re}T_i \ . & \label{formu} \end{eqnarray} Notice that neither the scalar nor the gaugino masses have any explicit dependence on $S$ or $T_i$; they depend only on the gravitino mass and the goldstino angles. This is one of the advantages of a parametrization in terms of such angles. Although in the case of the $A$-parameter an explicit $T_i$-dependence may appear in the term proportional to $Y^i_{\alpha \beta \gamma }$, it disappears in several interesting cases \cite{BIMS}. With the above information we can now analyze the structure of soft terms available for Abelian orbifolds. {\it 1) Universality of soft terms} In the dilaton-dominated case ($\cos\theta =0$) all the soft terms are universal. However, in general, they show a lack of universality due to the modular weight dependence (see eqs.(\ref{masorbi},\ref{formu})). {\it 2) Soft masses} In the multimoduli case, depending on the goldstino direction, tachyons may appear. For $\cos^2\theta \geq 1/3 $, one has to be very careful with the goldstino direction if one is interested in avoiding tachyons. Nevertheless, as we will discuss below, having a tachyonic sector is not necessarily a problem; it may even be an advantage, so one should not disregard this possibility at this point. Consider now three particles $C_{\alpha }$,$C_{\beta }$,$C_{\gamma }$ coupling through a Yukawa $h_{\alpha \beta \gamma }$. They may belong either to the untwisted (${\bf U}$) sector or to a twisted (${\bf T}$) sector, i.e. couplings of the type ${\bf U}{\bf U}{\bf U}$, ${\bf U}{\bf T}{\bf T}$, ${\bf T}{\bf T}{\bf T}$. Then, using the above formulae, one finds \cite{BIMS} that in general for {\it any choice} of goldstino direction \begin{equation} m_{\alpha }^2\ +\ m_{\beta }^2\ +\ m_{\gamma }^2\ \leq \ |M|^2\ =3 m_{3/2}^2\sin^2\theta \ . \label{rulox} \end{equation} Notice that if we insist on having a vanishing gaugino mass, the sum-rule (\ref{rulox}) forces the scalars to be either all massless or at least one of them tachyonic. Nevertheless we should not forget that tachyons, as we already mentioned above, are not necessarily a problem, but may just signal an instability. {\it 3) Gaugino versus scalar masses} In the multimoduli case the scalars are on average lighter than the gauginos, but particular scalars may be heavier than the gauginos. Eq.(\ref{rulox})\ tells us that this can only be true at the cost of having at least one of the other scalars in the coupling with {\it negative} squared mass. This may have diverse phenomenological implications depending on the particle content of the model, as we now explain in some detail: {\it 3-a) Gaugino versus scalar masses in standard model 4-D Strings} Let us suppose we insist on, e.g., having tree-level gaugino masses lighter than the scalar masses. If we are dealing with a String model with gauge group $SU(3)_c\times SU(2)_L\times U(1)_Y\times G$ this is potentially a disaster.
Some observable particles, like Higgses, squarks or sleptons, would be forced to acquire large VEV's (of order the String scale). For example, the scalars associated through the Yukawa coupling $H_2Q_Lu_L^c$, which generates the mass of the $u$-quark, must fulfil the above sum-rule (\ref{rulox}). If we allow e.g. the scalars $H_2$, $Q_L$ to be heavier than gauginos, then $u_L^c$ will become tachyonic, breaking charge and color. However, tachyons may be helpful if the particular Yukawa coupling does not involve observable particles. They could break extra gauge symmetries and generate large masses for extra particles. We recall that standard-like models in Strings usually have too many extra particles and many extra U(1) interactions. Although the Fayet-Iliopoulos mechanism helps to cure the problem \cite{suplemento}, the existence of tachyons is a complementary solution. We thus see that, for standard model Strings, if we want to avoid charge and colour-breaking minima (or VEV's of order the String scale for the Higgses\footnote{For a possible way out of this problem, allowing the possibility of scalars heavier than gauginos, see ref.\cite{nuevo}.}), we should, grosso modo, come back to a situation with gauginos heavier than scalars. Thus the low-energy phenomenological predictions of the multimoduli case are similar to those of the dilaton-dominated scenario (see subsect.2.2): due to the sum-rule, the tree-level observable scalars are always lighter than gauginos \begin{eqnarray} & m_{\alpha} < M \ . & \label{masas1} \end{eqnarray} Now, at low-energy ($\sim M_Z$), gluino, slepton and (first and second generation) squark mass relations turn out to be \begin{eqnarray} & m_l < m_q \simeq M_g \ , & \label{masas2} \end{eqnarray} where gluinos are slightly heavier than scalars. This result is qualitatively similar to the dilaton dominance one, in spite of the different set of (non-universal) soft scalar masses, because the low-energy scalar masses are mainly determined by the gaugino loop contributions. The only exceptions are the slepton masses, which do not feel the important gluino contribution, and therefore can get some deviation from the result of eq.(\ref{dilaton2}). As emphasized in \cite{BIM}, there is however a way to get scalars heavier than gauginos, even in the overall modulus case, if all the observable particles have overall modular weight $n_{\alpha}=-1$ and $\sin\theta \rightarrow 0$ (i.e. in the moduli-dominated limit). Then, at tree-level, $M \rightarrow 0$ and $m_{\alpha}\rightarrow 0$ if the different moduli participate in the SUSY breaking in almost {\it exactly} the same way, i.e. the overall modulus situation. Including String loop corrections to $K$ and $f_a$ can yield scalars heavier than gauginos \cite{BIM} \begin{eqnarray} & m_{\alpha} > M_a & \label{masas3} \end{eqnarray} and the low-energy spectrum can be reversed with respect to the above one (in the case of $\sin\theta$ sufficiently small as to produce $m_{\alpha} \gg M_a$) \begin{eqnarray} & M_g < m_l \simeq m_q \ . & \label{masas4} \end{eqnarray} The physical masses of squarks and sleptons are almost degenerate because the universality of soft scalar masses at high-energy is not destroyed by the gluino contribution to the mass renormalization, which is now very small. Notice however that this possibility of obtaining scalars heavier than gauginos is a sort of fine-tuning.
In the absence of a more fundamental theory which tells us in what direction the goldstino angles point, one would naively say that the most natural possibility would be to assume that all moduli contribute to SUSY breaking by more or less (but {\it not} exactly\footnote{For an explicit example of this, using gaugino condensation, see ref.\cite{Bailin}.}) the same amount. We just saw how, in the context of standard model Strings, the results for soft terms are qualitatively similar to the dilaton dominance ones if we want to avoid the breaking of charge and colour conservation. There is however a loophole in the above analysis. Up to now we have assumed that the masses of the observable fermions arise through renormalizable Yukawa couplings. If we give up that assumption and allow the existence of non-renormalizable Yukawa couplings generating masses for the observable particles (e.g. $H_2Q_Lu_L^c<\phi...\phi>$), then new sum-rules would apply to the full set of fields in the coupling and the above three-particle sum-rules could be violated. In particular, observable scalars would be allowed to be heavier than gauginos, possibly at the price of having some tachyon among the (standard model singlet) $\phi$ fields. Then qualitative results different from the ones of the dilaton dominance case may be obtained. In this respect, it is easy to find explicit examples of orbifold sectors yielding scalar masses bigger than gaugino masses even at the tree-level. From eq.(\ref{masorbi}) we see that always $m_{\alpha}<m_{3/2}$, whereas $|M|^2=3m_{3/2}^2\sin^2\theta$; therefore scalars heavier than gauginos can only be obtained if the constraint \begin{eqnarray} & \cos^2\theta > 2/3 & \label{coseno} \end{eqnarray} is fulfilled. Let us consider e.g. the case of the $Z_8$ orbifold with an observable particle in the twisted sector ${\bf T}_{\theta^6}$. The modular weight associated to that sector is ${\vec {n_{\theta^6}}}=(-1/4,-3/4,0,0)$ and therefore (see eq.(\ref{masorbi})) \begin{eqnarray} & m_{\theta^6}^2\ =\ m_{3/2}^2\ \left[1-3\cos^2\theta \left(\frac{1}{4}\Theta^2_1+\frac{3}{4}\Theta^2_2\right) \right] \ . & \label{masa} \end{eqnarray} For the particular values $\cos^2\theta=5/6$, $\Theta_1=\Theta_2=0$ one gets $m_{\theta^6}^2=m_{3/2}^2$, $M^2=m_{3/2}^2/2$. In spite of the new possibilities offered by the multimoduli extension, one typically finds that, unless very particular choices for the goldstino angles are made, the masses of scalars and gauginos are still of the same order and therefore at low-energy eq.(\ref{masas2}) is typically still valid, the only difference being that now squarks will be slightly heavier than gluinos. To reverse the situation (i.e. eq.(\ref{masas4})) we would need $m_{\alpha}\gg M_a$. This can be obtained in the limit $\sin\theta \rightarrow 0$, i.e. $M \rightarrow 0$. However, there may be a phenomenological problem in this case. Experimental bounds on the gluino mass imply $M>50$ GeV, which can only be obtained for a large $m_{3/2}$, but this would yield a large $m_{\alpha}\sim m_{3/2}$. In general one must be careful to avoid $m_{\alpha}$ larger than $1$ TeV, which would spoil the solution to the gauge hierarchy problem. {\it 3-b) Gaugino versus scalar masses in GUT 4-D Strings} What turned out to be a potential disaster in the case of standard model Strings may be an interesting advantage in the case of String-GUTs. In this case it could well be that the negative squared mass may just induce gauge symmetry breaking by forcing a VEV for a particular scalar (GUT-Higgs field) in the model.
The latter possibility provides us with interesting phenomenological consequences. Here the breaking of SUSY would directly induce further gauge symmetry breaking. An explicit example of this situation can be found in ref.\cite{BIMS}. In summary, the situation concerning gaugino versus scalar masses is as follows. If any of the physical quark-lepton Yukawas come from renormalizable terms, the sum rules lead us to a distribution of soft terms in such a way that gaugino masses are generically bigger than those of scalars (otherwise charge and/or colour would be broken). For a possible exception see footnote 5. If the physical Yukawas all come from non-renormalizable terms, the constraints coming from the sum rules may be avoided, possibly allowing standard model singlets to become tachyonic. However, even in this case one expects the same order of magnitude results for scalar and gaugino masses and hence the most natural (slepton-squark-gluino) mass relations {\it at low-energy} will be similar to the ones of the dilaton-dominated case eq.(\ref{masas2}) as shown in point {\it 3-a}. Only in the particular limit of very small $\sin\theta$ might this situation be reversed. \section{THE $B$ PARAMETER AND THE $\mu$ PROBLEM} It was pointed out in refs.\cite{GM,CM} that terms in a K\"ahler potential like the one proportional to $Z_{\alpha \beta }$ in eq.(\ref{kahl}) can naturally induce a $\mu $-term for the $C_{\alpha }$ fields of order $m_{3/2}$ after SUSY breaking, thus providing a rationale for the size of $\mu$. From eqs.(\ref{kahl},\ref{pot}) and from the fermionic part of the SUGRA Lagrangian one can check that a SUSY mass term $\mu_{\alpha \beta} C_{\alpha} C_{\beta}$ and a scalar term $B_{\alpha \beta} (C_{\alpha} C_{\beta}) +h.c.$ are induced upon SUSY breaking in the effective low-energy theory (here the kinetic terms for $C_{\alpha,\beta}$ have not yet been canonically normalized) \begin{eqnarray} & {\mu}_{\alpha \beta} = m_{3/2} {Z}_{\alpha \beta} - {\overline F}^{\overline{i}} \partial_{\overline{i}} {Z}_{\alpha \beta} \ , & \label{bmu1} \\ & B_{\alpha \beta} = 2m_{3/2}^2 {Z}_{\alpha \beta} + m_{3/2} F^i \left[ \partial_i Z_{\alpha \beta} - \left( {\tilde K^{{ \delta} {\overline{\rho}} }} \partial_i {\tilde K_{{\overline{\rho}}{ \alpha}}} Z_{\delta \beta} +(\alpha \leftrightarrow \beta)\right)\right] - m_{3/2} {\overline F}^{\overline{i}} \partial_{\overline{i}} Z_{\alpha \beta} & \nonumber\\ & - {\overline F}^{\overline{i}} F^j \left[ \partial_j \partial_{\overline{i}} Z_{\alpha \beta} - \left( {\tilde K^{{ \delta} {\overline{\rho}} }} \partial_j {\tilde K_{{\overline{\rho}}{ \alpha}}} \partial_{\overline{i}} Z_{\delta \beta} +(\alpha \leftrightarrow \beta)\right)\right] \ . & \label{bmu2} \end{eqnarray} Notice that, as in the case of the $A$-terms and the corresponding Yukawa couplings (see subsection 2.1), $B_{\alpha \beta}$ is not necessarily proportional to $\mu_{\alpha \beta}$. Recently it has been suggested that terms of the type $Z_{\alpha \beta} C_{\alpha} C_{\beta} +h.c.$ may appear in the K\"ahler potential of some Calabi-Yau type compactifications \cite{KL}. It has also been explicitly shown \cite{LLM} that they appear in orbifold models. Let us consider the case in which e.g., due to gauge invariance, there is only one possible $\mu $-term (and correspondingly one $B$ term) associated with a pair of matter fields $C_1$,$C_2$. This is e.g. the case of the MSSM.
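For orientation (this is the standard MSSM convention, recalled here for completeness rather than derived from the formulae above), $C_1$ and $C_2$ would then be the two Higgs doublets $H_1$ and $H_2$, with the two parameters entering the low-energy theory as
\begin{eqnarray}
& W\ \supset\ \mu\, H_1 H_2 \ ,\ \ \ \ \ \ V_{\rm soft}\ \supset\ \left(\, B\mu\, H_1 H_2\ +\ h.c.\,\right) \ , &
\end{eqnarray}
which is the normalization implicitly used in eqs.(\ref{flaty}) and (\ref{sbet}) below.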
If we introduce the abbreviations \begin{equation} L^Z \equiv \log Z \;\; , \;\; L^{\alpha} \equiv \log {\tilde K}_{\alpha } \;\; , \;\; X \equiv 1 - \sqrt{3} \cos\theta \ e^{i\gamma _i}{\Theta _i} ({\hat K}_{ {\overline i} i})^{-1/2} L_{\overline i}^Z \ , \label{xxx} \end{equation} using eqs.(\ref{bmu1},\ref{bmu2}) the $\mu$ and $B$ parameters are given by \begin{eqnarray} & \mu \ =\ m_{3/2} ( {\tilde K}_1 {\tilde K}_2 )^{-1/2} Z X \ , & \label{mmu} \\ & B\ =\ m_{3/2} X^{-1} \left[ 2 + \sqrt{3} \cos\theta ({\hat K}_{ {\overline i} i})^{-1/2} {\Theta_i } \left( e^{-i\gamma _i} ( L_i^Z - L^1_i - L^2_i ) -e^{i\gamma _i} L_{\overline i}^Z \right) \ \right. & \nonumber\\ & \left. + \ 3 \cos^2\theta ({\hat K}_{ {\overline i} i})^{-1/2} {\Theta_i } e^{i\gamma _i} \ \left( L_{\overline i}^Z ( L^1_j+L^2_j) - L_{\overline i}^Z L_j^Z - L_{{\overline i} j}^Z\ \right) ({\hat K}_{ {\overline j} j})^{-1/2} {\Theta _j } e^{-i\gamma _j} \right] \ , & \label{bcy} \end{eqnarray} where we are assuming that the moduli on which ${\tilde K}_1(T_i,T_i^*)$, ${\tilde K}_2(T_i,T_i^*)$ and $Z(T_i,T_i^*)$ depend have diagonal metric, which is the relevant case we are going to discuss. The above $\mu$ and $B$ (where we have factorized out the $\mu$ term as usual) parameters correspond now to charged fields which have already been canonically normalized. If the value of $V_0$ is not assumed to be zero, one just has to replace $\cos\theta \rightarrow C\cos\theta$ in eqs.(\ref{xxx},\ref{mmu},\ref{bcy}), where $C$ is given below eq.(\ref{gaugin}). In addition, the formula for $B$ gets an additional contribution given by $m_{3/2} X^{-1} 3(C^2-1)$. As mentioned above, it has recently been shown that the untwisted sector of orbifolds with at least one complex-structure field $U$ possesses the required structure $Z(T_i,T_i^*)C_1C_2+h.c.$ in their K\"ahler potentials \cite{LLM}. Specifically, the $Z_N$ orbifolds based on $Z_4,Z_6$,$Z_8,Z_{12}'$ and the $Z_N\times Z_M$ orbifolds based on $Z_2\times Z_4$ and $Z_2\times Z_6$ do all have a $U$-type field in (say) the third complex plane. In addition the $Z_2\times Z_2$ orbifold has $U$ fields in the three complex planes. In all these models the piece of the K\"ahler potential involving the moduli and the untwisted matter fields $C_{1,2}$ in the third complex plane has the form \begin{eqnarray} & K(T_i,T_i^*,C_1,C_2)=K'(T_l,T_l^*) -\log\left((T_3+T_3^*)(U_3+U_3^*) - (C_1+C_2^*)(C_1^*+C_2)\right)& \label{kahlb} \\ & \simeq K'(T_l,T_l^*) - \log(T_3+T_3^*) - \log(U_3+U_3^*)\ + \frac{(C_1+C_2^*)(C_1^*+C_2)}{(T_3+T_3^*)(U_3+U_3^*)} \ . \label{kahlexp} \end{eqnarray} The first term $K'(T_l,T_l^*)$ determines the (not necessarily diagonal) metric of the moduli $T_l \neq T_3, U_3$ associated to the first and second complex planes. The last term describes an $SO(2,n)/SO(2)\times SO(n)$ K\"ahler manifold ($n=4$ if we focus on just one component of $C_1$ and $C_2$) parametrized by $T_3, U_3, C_1, C_2$. If the expansion shown in (\ref{kahlexp}) is performed, on one hand one recovers the well known factorization $SO(2,2)/SO(2)\times SO(2) \simeq (SU(1,1)/U(1))^2$ for the submanifold spanned by $T_3$ and $U_3$ (which have therefore diagonal metric to lowest order in the matter fields), whereas on the other hand one can easily identify the functions $Z, {\tilde K}_1, {\tilde K}_2$ associated to $C_1$ and $C_2$: \begin{equation} Z\ =\ {\tilde K}_1 \ =\ {\tilde K}_2\ =\ {1\over {(T_3+T_3^*)(U_3+U_3^*)}} \ . 
\label{zzz} \end{equation} Plugging back these expressions in eqs.(\ref{mmu},\ref{bcy},\ref{xxx}) one can compute $\mu$ and $B$ for this interesting class of models \cite{BIMS}: \begin{eqnarray} & \mu \ =\ m_{3/2}\ \left( 1\ +\ \sqrt{3}\cos\theta (e^{i \gamma_3} \Theta _3 + e^{i \gamma_6} \Theta _6)\right) \ , & \label{muu} \\ & B\mu=2m_{3/2}^2\ \left( 1\ +\sqrt{3} \cos\theta ( \cos\gamma_3 \Theta_3 + \cos\gamma_6 \Theta_6) \ + \ 3\cos^2\theta \cos(\gamma_3-\gamma_6) {\Theta _3}{\Theta _6} \right) \ . & \label{bmu} \end{eqnarray} In addition, we recall from eq.(\ref{masorbi}) that the soft masses are \begin{eqnarray} & m^2_{C_1}\ =\ m^2_{C_2}\ =\ m_{3/2}^2\ \left( 1\ -\ 3\cos^2\theta (\Theta_3^2+\Theta _6^2)\right) \ . & \label{mundos} \end{eqnarray} In general, the dimension-two scalar potential for $C_{1,2}$ after SUSY breaking has the form \begin{eqnarray} & V_2(C_1,C_2)\ =\ (m_{C_1}^2+|\mu|^2)|C_1|^2\ + (m_{C_2}^2+|\mu| ^2)|C_2|^2 +(B\mu C_1C_2+h.c.)\ . & \label{flaty} \end{eqnarray} In the specific case under consideration, from eqs.(\ref{muu},\ref{bmu},\ref{mundos}) we find the remarkable result that the three coefficients in $V_2(C_1,C_2)$ are equal, i.e. \begin{eqnarray} & m_{C_1}^2+|\mu|^2 = m_{C_2}^2+|\mu| ^2 = B\mu \ . & \label{result} \end{eqnarray} so that $V_2(C_1,C_2)$ has the simple form \begin{eqnarray} & V_2(C_1,C_2)\ =\ B\mu \ (C_1+C_2^*)(C_1^*+C_2) \ . & \label{potflat} \end{eqnarray} Although the common value of the three coefficients in eq.(\ref{result}) depends on the Goldstino direction via the parameters $\cos\theta$, $\Theta_3$, $\Theta_6$,\ldots (see expression of $B\mu$ in eq.(\ref{bmu})), we stress that the equality itself and the form of $V_2$ hold {\em independently of the Goldstino direction}. The only constraint that one may want to impose is that the coefficient $B\mu$ be non-negative, which would select a region of parameter space. For instance, if one neglects phases, such requirement can be written simply as \begin{eqnarray} & (1+\sqrt{3} \cos\theta \ \Theta_3) (1+\sqrt{3} \cos\theta \ \Theta_6) \geq 0 \ . & \end{eqnarray} We notice in passing that the fields $C_{1,2}$ appear in the SUSY-breaking scalar potential in the same combination as in the K\"ahler potential. This particular form may be understood as due to a symmetry under which $C_{1,2}\rightarrow C_{1,2}+i\delta $ in the K\"ahler potential which is transmitted to the final form of the scalar potential. It is well known that, for a potential of the generic form (\ref{flaty}) (+D-terms), the minimization conditions yield \begin{eqnarray} & \sin2\beta \ =\ { {-2 B\mu} \over {m_{C_1}^2+m_{C_2}^2+2|\mu|^2} } \ . & \label{sbet} \end{eqnarray} In particular, this relation embodies the boundedness requirement: if the absolute value of the right-hand side becomes bigger than one, this would indicate that the potential becomes unbounded from below. As we have seen, in the class of models under consideration the particular expressions of the mass parameters lead to the equality (\ref{result}), which in turns implies $\sin 2\beta= -1$. Thus one finds $\tan\beta=<C_2>/<C_1>=-1$ {\it for any value of $\cos\theta $,$\Theta _3 $,$\Theta _6 $} (and of the other $\Theta_i$'s of course), i.e. for any Goldstino direction. As an additional comment, it is worth recalling that in previous analyses of the above mechanism for generating $\mu$ and $B$ in the String context \cite{KL,BIM,BLM} the value of $\mu$ was left as a free parameter since one did not have an explicit expression for the function $Z$. 
However, if the explicit orbifold formulae for $Z$ are used, one is able to predict both $\mu$ and $B$ \cite{BIMS}, reaching the above conclusion\footnote{We should add that situations are conceivable where the above result may be evaded, for example if the physical Higgs doublets are a mixture of the above fields with some other doublets coming from other sectors (e.g. twisted) of the theory.}. Now that we have explicitly computed all the soft terms and the $\mu$ parameter, it would be interesting to analyze the dilaton-dominated scenario ($\cos\theta=0$) because of its predictivity. In particular, from eqs.(\ref{dilaton},\ref{muu},\ref{bmu}) we obtain\footnote{It is worth noticing here that although the value of $\mu$ is compactification dependent even in this dilaton-dominated scenario, $\mu=m_{3/2}({\tilde K}_1{\tilde K}_2)^{-1/2}Z$ as can be obtained from eq.(\ref{mmu}), the result $\mu=m_{3/2}$ will be obtained in any compactification scheme with the following property: $({\tilde K}_1{\tilde K}_2)^{-1/2}Z=1$. Of course, this is the case of orbifolds, where in particular ${\tilde K}_1={\tilde K}_2=Z$, as was shown in eq.(\ref{zzz}).} \begin{eqnarray} & m_{\alpha } = \ m_{3/2} \ , & \nonumber \\ & M_a = \ \pm \sqrt{3} m_{3/2} \ , & \nonumber \\ & A_{\alpha \beta \gamma } = \ - M_a \ , & \nonumber \\ & B = \ 2 m_{3/2} \ , & \nonumber \\ & \mu = \ m_{3/2} \ , & \label{dilaton3} \end{eqnarray} and therefore the whole SUSY spectrum depends only on one parameter, $m_{3/2}$. If we knew the particular mechanism which breaks SUSY, then we would be able to compute the superpotential and hence $m_{3/2}=e^{K/2}|W|$. Although this is not the case, this parameter can still be fixed from the phenomenological requirement of correct electroweak breaking, $2M_W^2/g_2^2=<H_1>^2+<H_2>^2$. Thus at the end of the day we are left with no free parameters. Of course, if in the near future the mechanism which breaks SUSY is known (i.e. $m_{3/2}$ can be explicitly calculated) and the above scenario is the correct one, the value of $m_{3/2}$ should coincide with the one obtained from the phenomenological constraint. In ref.\cite{nuevo} the consistency of the above boundary conditions with the appropriate radiative electroweak symmetry breaking is explored. Unfortunately, it is found that there is no consistency with the measured value of the top-quark mass, namely the mass obtained in this scheme turns out to be too small. A possible way out of this situation is to assume that the moduli fields also contribute to SUSY breaking, since then the soft terms are modified (see eqs.(\ref{muu},\ref{bmu})) \cite{nuevo}. Of course, this amounts to a departure from the pure dilaton-dominated scenario. Finally, let us remark that the previous dramatic conclusion in the pure dilaton-dominated limit is also obtained in a different context, namely from the requirement of avoiding low-energy charge and color breaking minima deeper than the standard vacuum \cite{Amanda}. In fact, on these grounds, the dilaton-dominated limit is excluded not only for a $\mu$ term generated through the K\"ahler potential but for any possible mechanism solving the $\mu$ problem. The results indicate that the whole free parameter space ($m_{3/2}$, $B$) is excluded after imposing the present experimental data on the top mass. The inclusion of a non-vanishing cosmological constant does not essentially improve this situation. \section*{Acknowledgments} I thank my collaborators A. Brignole, L.E. Iba\~nez and C. Scheich for enjoyable work on this project.
\section*{References}} \newcommand{\bild}[2]{{\psfig{figure=#1,#2}}\label{#1}} \newcommand{\bildh}[3]{{\psfig{figure=#1,height=#2,#3}}\label{#1}} \begin{document} \thesaurus{01(03.20.4; 08.09.2 Eta Carinae; 08.03.4; 08.09.1)} \title{Speckle-masking imaging polarimetry of $\eta$ Carinae: evidence for an equatorial disk} \author{Heino Falcke\inst{1,2}, Kris Davidson\inst{3}, Karl-Heinz Hofmann\inst{1}, Gerd Weigelt\inst{1}} \institute{Max-Planck Institut f\"ur Radioastronomie, Auf dem H\"ugel 69, D-53121 Bonn, Germany (weigelt@mpifr-bonn.mpg.de) \and Department of Astronomy, University of Maryland, College Park, MD 20742-2421, USA (hfalcke@astro.umd.edu) \and Astronomy Department, University of Minnesota, 116 Church St., Minneapolis, MN 55455, USA } \date{Accepted for publication in A\&A Letters (Dec. 19, 1995) -- in press} \maketitle \markboth{Falcke et al.: Speckle-masking imaging polarimetry of $\eta$ Carinae}{Falcke et al.: Speckle-masking imaging polarimetry of $\eta$ Carinae} \begin{abstract} With our new speckle imaging polarimeter we have obtained the first polarimetric images with sub-arcsecond resolution of the Luminous Blue Variable $\eta$ Carinae in the H$\alpha$ line. The polarization patterns at the 3'' scale match well earlier conventional imaging photometry and can be interpreted as Mie scattering. In crosscorrelation-centered images we detected in polarized light a bar in the NE part of the equatorial plane of $\eta$ Carinae. High-resolution 0.11'' polarimetric speckle reconstructions reveal a compact structure elongated in the same direction which is consistent, in degree and position angle of the polarisation, with the presence of a circumstellar, equatorial disk. The degree of polarization of the previously discovered speckle objects and the H$\alpha$ arm is relatively low ($\sim10\%$) and thus may indicate a position within the equatorial plane. We also discovered a highly polarized ($20\%-40\%$) bipolar structure along the major axis of the Homunculus nebula which can be traced down to the sub-arcsecond scale. This is probably the inner part of a bipolar outflow into the Homunculus. \end{abstract} {\bf Keywords:} {techniques: polarimetric -- stars: individual: Eta Carinae -- circumstellar matter -- stars: imaging} \section{Introduction} Because of its extraordinary luminosity of $\sim10^{6.6}L_{\odot}$ the Luminous Blue Variable $\eta$ Carinae is one of the most interesting objects for the understanding of the late evolutionary stages of massive stars (see Humphreys and Davidson 1994, and references therein). It is embedded in the Homunculus, a bi-polar nebula oriented at PA 132$^\circ$, which is reflecting light from the central object. $\eta$ Carinae was also one of the earliest complex structures that was successfully studied at sub-arcsec resolution by speckle methods. Weigelt \& Ebersberger (1986) and Hofmann \& Weigelt (1988) found 3 objects close to the central star (0.1-0.2'' separation); first HST UV observations of the speckle objects were reported by Weigelt et al. (1995). They appear very compact in far-red light; but since they are moving outward (Weigelt et al. 1995 \& 1996) and have forbidden lines in their spectra (Davidson et al. 1995), they must be ejected clouds rather than companion stars. In H$\alpha$ the sub-arcsecond structure of $\eta$ Carinae is even more complex, showing an arm-shaped feature in the north (Weigelt et al. 1996). Polarization observations have shown that $\eta$ Carinae is intrinsically polarized (Visvanathan 1967, Marraco et al. 
1993). Warren-Smith et al. (1979) showed that the total polarization of the Homunculus is always perpendicular to the direction to the central object as expected for Mie scattering by dust grains. In the outer regions the degree of polarization reaches up to 40\% while in the inner regions it is well below 10\% (see also Meaburn et al. 1993). Dust is expected to form at roughly the same distance from the star as the speckle objects. To extend those studies we have built a polarimeter for our speckle camera and are now for the first time able to obtain high-resolution polarimetric information at optical wavelengths with ground-based telescopes. Here we report on results we obtained during a first test-run of our speckle imaging polarimeter where we observed $\eta$ Carinae with an H$\alpha$ filter. \section {Observations} \subsection{The polarimeter} Our new polarimeter consists mainly of a rotatable, achromatic $\lambda/2$-retardation mica plate in front of a fixed polarization filter. If rotated by an angle $\alpha$ the $\lambda/2$ plate rotates the polarization vector of the incident light by $2\alpha$. The usable wavelength range is 450-800 nm with an error for the retardation of $2\%$ of $\lambda$/2 over the whole wavelength range and a transmission of $\sim70\%$. The polarization filter has a transmission of $\sim33\%$ and a polarization degree of $>99.99\%$ over a wavelength range of 450-750nm. The $\lambda/2$-plate is mounted on a remote-controlled rotator with a step motor of 1/500 Degree resolution. Filter, rotator and $\lambda/2$-plate were installed on a single mount that was inserted into the optical axis in front of the telescope focus of our camera. The use of a fixed polarization filter basically eliminates the effects of depolarization in the camera, as the polarization vector of the light entering the camera has always the same orientation. Circular polarization can not be measured. \subsection{Observing strategy} $\eta$ Carinae was observed with the ESO 2.2 m telescope in Chile on March 12, 1995 between ST 12:20 and ST 12:50 with an improved version of our MPIfR speckle camera (Baier \& Weigelt 1983) using a 30 nm wide H$\alpha$ filter and 20 fold magnification giving us a field of view of $\sim6$ arcsec. In total we took 4800 images of 50 ms exposure time at a frame rate of 4 images/sec including flatfield and dark images. The observation was split into 16 sections of 300 images each. After each section we rotated the $\lambda/2$-plate by $22.5^\circ$ corresponding to a rotation of $45^\circ$ of the incident polarization vector. We finally obtained four independent measurements of the polarized intensity for each of the four possible orientations of the polarization vector ($0^\circ$, $45^\circ$, $90^\circ$, and $135^\circ$). This has the benefit that we have a full rotation of the $\lambda/2$-plate and two rotations of the polarization vector with respect to the polarization filter which helps to detect and reduce the effects of any rotational asymmetries in the polarimeter. A second advantage of taking interleaved images is that the images for each $45^\circ$ rotation of the polarization vector -- if added together -- are taken quasi-simultaneously thus a slow monotonic change in the seeing conditions affects all images in a similar way. This is especially important for the reconstruction of speckle images. 
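As a schematic aside (these are the standard imaging-polarimetry relations, recalled here for completeness; they are not meant to describe the details of the actual reduction pipeline), four such orientations determine the linear Stokes parameters pixel by pixel: with an effective analyzer position angle $\phi$ one measures $I(\phi)={1\over2}\,(I+Q\cos 2\phi+U\sin 2\phi)$, so that, up to the sign and zero-point conventions of the retarder rotation,
\begin{eqnarray}
& I={1\over2}\,(I_{0}+I_{45}+I_{90}+I_{135})\ ,\ \ \ Q=I_{0}-I_{90}\ ,\ \ \ U=I_{45}-I_{135}\ , & \nonumber\\
& P={\sqrt{Q^2+U^2}\over I}\ ,\ \ \ \theta_{\rm P}={1\over2}\arctan\,(U/Q)\ . &
\end{eqnarray}
These are the quantities $P$ and $\theta_{\rm P}$ quoted in the following sections.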
Finally, we want to note that the use of a speckle camera does have another intrinsic advantage over conventional imaging polarimetry: by evaluating the total intensities of all images we can monitor the short-time variability of the atmospheric transmission. After the observation we measured a flatfield to determine the photon bias (Pehlemann et al. 1992) and a single star (SAO251486) to obtain the speckle transfer function. The same observations as for $\eta$ Carinae were performed thereafter with a nearby cluster member (HDE 303308) for comparison. \begin{figure*} \centerline{ \bildh{etapol-fig1a.drw.ps}{5.5cm}{bbllx=2.3cm,bblly=5.4cm,bburx=19.2cm,bbury=22.1cm} \bildh{etapol-fig1b.ps}{5.5cm}{bbllx=2.3cm,bblly=5.5cm,bburx=19.2cm,bbury=22.2cm} \bildh{etapol-fig1c.ps}{5.5cm}{bbllx=2.3cm,bblly=5.5cm,bburx=19.2cm,bbury=22.2cm} } \caption[]{a) Crosscorrelation-centered $6\times6$ arcsec image of $\eta$ Carinae and its polarization -- north is up and east to the left. The short lines indicate the orientation of the E-vector and the length is proportional to the degree of polarization. b) The total polarized flux of the left image. c) The per-cent polarization of the left image. The images were zoomed from our $512^2$ format down to a $64^2$ format. The distortions at the edges of the images are consequences of the centering procedure. } \end{figure*} \section{Data reduction} \subsection{Total polarization}\label{totalpol} Prior to the data reduction we combined the four small data sets for each polarization angle into a large data set containing 1200 images each. During the speckle-masking reconstruction of the images the information on the total intensity is lost and therefore one has to re-scale the intensity according to the intensities of the co-added long-exposure images. One way to achieve this would be to simply co-add all images and determine the degree of polarization from the four long-exposure images. However, using this method, we would lose the valuable information on the short-term variability of our data. Therefore, we extracted the (image-intensifier dark subtracted) integrated intensities of each image and plotted them in the order they had been obtained originally. Although the seeing was relatively stable and we had perfect weather conditions, we found small variations of the integrated short-exposure image intensities of $\eta$ Carinae of the order of a few per cent, clearly exceeding the instrumental noise in amplitude. The run of intensities can be described by a constant upper envelope plus occasional, erratic dips. We interpret this as a short-term reduction of the atmospheric transmission below its optimal value; an uneven distribution of those dips among the different polarization angles would clearly affect any polarization results based on the usual average (as in long exposures). A simple way to eliminate the influence of those dips is to take the median value of an upper constant envelope of the intensities, e.g. the brightest $N\%$ of all images. Indeed we found that by varying $N$ and calculating the deviation of the polarized intensity from the expected sinusoidal shape the error has a well-defined minimum around $N=16\%$, yielding a degree of polarization of $P=4.06\%$ and a position angle (PA) for the E-vector of $\theta_{\rm P}=73.5^\circ$ for our field of view.
Nevertheless, the results differed only by $\pm0.025\%$ in $P$ and $\pm0.5^\circ$ in $\theta_{\rm P}$ over a range of $N=10\%-60\%$ (which would include the `normal' median).\footnote{For this extended object, however, the total polarization vector of our small square aperture cannot easily be compared with the usual large round apertures.} The situation for the cluster star HDE 303308 was slightly different as it is substantially fainter than $\eta$ Carinae and the intensity variations are mainly dominated by photon and instrumental noise. Hence, we took the usual median intensities for the four polarization angles at $N=50\%$ -- where the minimum error was found -- yielding $P=2.76\%$ and $\theta_{\rm P}=102^\circ$. Here, the systematic variations by changing $N$ are $\pm0.3\%$ and $\pm5^\circ$ respectively. We noticed a 100 times fainter companion $3.4\mbox{''}$ away from HDE 303308 at PA $231^\circ$. \subsection{Image reconstruction and polarization maps} From the four combined data sets we reconstructed four images using the basic speckle-masking technique as described in Weigelt (1977), Lohmann et al. (1983), and Hofmann \& Weigelt (1986). To ensure that the reconstructions of all images were done in an identical manner, we used the automated speckle processing package (ASP) recently developed at the MPIfR (Falcke et al. 1996). The basic scheme of the automated speckle data reduction process consists of the following steps: image intensifier dark current and flatfield corrections, detection of ion contamination, seeing selection, calculation of crosscorrelation-centered images, photon-bias compensation, bispectrum calculation, and image reconstruction. Thus we obtained images at different resolutions up to the diffraction limit. Our diffraction-limited H$\alpha$ total intensity image confirms the detections by Weigelt et al. (1996) and especially shows the northern H$\alpha$ arm and its blobs and even the weak features to the SE and NE of the nucleus. To obtain our polarization map, we first determined the position of the central peak in each image by fitting a two-dimensional Gaussian and shifted each image onto a common center. We then normalized the total intensity of the reconstructed images according to the values found in Section \ref{totalpol}. From the four shifted and normalized images corresponding to the four polarization angles we determined the Stokes parameters for each pixel and the standard deviation from a sinusoidal distribution. In the polarization maps we left out all vectors having errors $>30\%$ although mostly the errors are $<10\%$. For the vector maps we usually also combined four pixels for one vector. Most affected by errors are the polarization angles in the fainter parts of the reconstructed image, and we have not made any attempt to correct for the interstellar polarization in the H$\alpha$ line towards $\eta$ Carinae. \section{Results} In Fig. 1a we show the contours of the crosscorrelation-centered $6\times6$ arcsec total intensity image of $\eta$ Carinae overlaid with the vector map of the polarization. The tangential pattern of the polarization vectors is in good agreement with the results by Warren-Smith et al. (1979); this is best seen in the NW spur where the polarization reaches values of 20-40\%. The bipolar nature of the Homunculus becomes apparent in the polarized light (Fig.
1b\&c) and there is also a bar along the minor axis (PA 42$^\circ$) towards the NE, which is already present in the total intensity map but becomes a marked feature in the polarized intensity map (Fig. 1b). There may also be a very weak SW counterpart which is, however, not well visible in the contour plots. Figure 1c shows that the degree of polarization is asymmetric with respect to the center and is lower towards the SE and higher in the NW. The degree of polarization seems to be reduced in a strip along the minor axis and the central object appears elongated. Such a pattern can be found if the central star itself is polarized (Els\"asser \& Staude 1978; Gledhill 1990). In Fig. 2a we show the contour map of the H$\alpha$ speckle-masking reconstruction of the inner arcsecond of the Homunculus and its polarization. To increase the SNR of the polarization map we have not reconstructed the image up to the diffraction limit but with a lower resolution of 0.11 arcsec. At the central peak the polarization is $P=9.1\%$ and $\theta_{\rm P}\simeq80^\circ$. It is remarkable that the largest part of the H$\alpha$ arm and the four speckle objects A-D are in a region of relatively low polarization around $10\%$. The PA of the polarization vectors of the three speckle objects B-D are similar to those of the central star and almost perpendicular to the radial axis towards the center. The polarization increases strongly towards the NW where it still is perpendicular to the radial axis and $P=20-40\%$. This feature connects well into the NW spur seen already in Fig.1a. It is also noteworthy that the total intensity of $\eta$ Carinae in the high-resolution image is sharply reduced below the minor (NE-SW) axis. In the polarized intensity map (Fig. 1b) we do see several co-linear blobs along the same axis which are symmetric around the center. We note that this linear feature also connects the NE end of the northern arm and the central star. In a larger field of view it also continues smoothly into the bar noted already in Fig.1 while there is no such strong feature in the SW on the larger scale. \section{Summary and Discussion} \begin{figure*} \centerline{ \bildh{etapol-fig2a.ps}{8cm}{bbllx=2.3cm,bblly=5.5cm,bburx=19.2cm,bbury=22.2cm} \bildh{etapol-fig2b.ps}{8cm}{bbllx=2.3cm,bblly=5.5cm,bburx=19.2cm,bbury=22.2cm} } \caption[]{a) Contour map of the H$\alpha$ speckle-masking reconstruction and E-vector map of $\eta$ Carinae for a $0.94\times0.94$ arcsec field (north is up, east to the left). The resolution is artificially degraded to $60\%$ of the diffraction limit corresponding to $0.11$ arcsec for the benefit of a higher SNR. The short lines indicate the orientation of the E-vector and the length is proportional to the degree of polarization. b) The total polarized flux of the left image. The hole at (65,65) in the NW structure corresponds to a high-error pixel which was left out.} \end{figure*} H$\alpha$ observations of the LBV $\eta$ Carinae during a first test run of our polarimeter have demonstrated the feasibility of optical speckle imaging polarimetry. Degree and PA of the polarization of the reference star HDE 303308 agree within the errors with the literature values (Visvanathan 1976; Marraco et al. 1993), the outer polarization pattern of our $\eta$ Carinae image itself match well the inner structure found by Warren-Smith et al. (1979) and the high-resolution reconstruction of $\eta$ Carinae confirms the structures detected by Weigelt et al. (1996). 
Several large-scale features continue down to the small scales. Along the minor axis of the Homunculus we have detected a bar to the NE of $\eta$ Carinae in the polarized light and we find a linear symmetric structure in the inner arcsecond which is oriented in the same direction. This, together with the sharp intensity drop from the NW to the SE in the high-resolution image and the constriction of the central contours on the larger scale may be indicative of the presence of a dusty equatorial disk around $\eta$ Carinae with its rotation axis along the major axis of the Homunculus. The NE bar and the central arcsecond bar could then be interpreted as scattered light from the surface of the disk. This may explain the PA and the high degree of the polarization in the NE and SW blobs in Fig. 2b. The absence of a SW counterpart to the NE bar on the large scale (while present at the sub-arcsecond scale) might be explained by a warped or distorted geometry. The up-turn of the NE end of the H$\alpha$ arm seen in this paper and by Weigelt et al. (1996) could be indicative of a physical connection between the arm and the putative disk. The usual explanation of the brightness contrast between the parts above and below the minor axis would be obscuration by the disk. The SE side of the homunculus polar axis points obliquely {\em towards} us, as seen in velocity data (Thackeray 1961, Meaburn et al. 1987) and in modern high-resolution images (Duschl et al. 1995, Humphreys \& Davidson 1994). Material oriented along the polar axis would therefore be more visible on the SE (nearer) side and obscured on the NW side, exactly the opposite to what is observed near the star. Hence we conclude that most of the small-scale structure discussed above is essentially equatorial rather than polar: we are seeing the most visible inner NW parts of the equatorial ejecta-disk pointing towards us. This picture is consistent with the low polarization of the speckle objects and the H-alpha arm ($\le$ 10\%), since Mie scattering produces lower polarization at relatively small scattering angles. We also point out that the polarization properties of the central star and the speckle objects B-D are basically indistinguishable even though the latter are clearly scattered light from the nucleus (Davidson et al. 1995), hence in the nucleus we may see in the polarized light a dusty envelope (or disk) rather than the naked star alone, and the speckle objects B-D as well as the H$\alpha$ arm might well be part of the disk and its radial streamers (see Duschl et al. 1995). That the central star is most likely obscured was already evident from earlier observations where it was shown that the bright nucleus is extended and much too faint with respect to the surrounding speckle objects B-D (Weigelt et al. 1995). Some of the H-alpha light observed in the speckle objects and the H-alpha arm may be emitted there, rather than scattered -- unlike the case for larger size scales in the homunculus. This would of course be consistent with low polarizations. Further observations in continuum light may show larger amounts of polarization, since most {\em continuum} light in the blobs is expected to be scattered from the central star. In addition to the speckle objects and the H$\alpha$ arm we also find at the sub-arcsecond scale something which might be the continuation of the NW spur seen in Figure 1. It appears in the polarized intensity image (Fig. 2b) but is barely visible in normal intensity (Fig. 2a). 
The polarization of this feature is very high (up to 40\%), indicating large scattering angles and suggesting that this feature is part of the low-density, highly polarized NW part of the Homunculus nebula rather than a part of the disk. We conclude that our findings support the model of an equatorial disk surrounding $\eta$ Carinae. The presence of such a disk can be very important for the angular momentum loss in the late phases of massive stars -- if it is an excretion disk, as indicated by the radial streamers (often referred to as jets). Still, we have to await further observations as from this test run we had only a limited number of images available for our reconstruction. {\it Acknowledgements.} HF was supported by a Max-Planck Stipend. We thank P.L.~Biermann for stimulating comments and R.~\"Osterreicher and C.~M\"ollenhoff for helpful discussions on imaging polarimetry.
\section{Introduction} The balance equation transport theory of Lei and Ting\cite{leiting,leihoring} was originally developed to treat high-field electrical conduction in homogeneous semiconductors, and has achieved much success in hot-electron semiconductor transport problems. This theory is based on a separation of the center of mass of the system from the relative motion of electrons in the presence of a uniform electric field. The center of mass is treated as a classical particle, whereas the relative electron system, which is composed of a large number of interacting particles, is treated fully quantum-mechanically. The theory has been successfully applied to a variety of transport problems, and the results obtained have exhibited good agreement with experiments.\cite{hirakawa} This theory was subsequently generalized to deal with weakly nonuniform, inhomogeneous systems by Lei {\em et al.}.\cite{lei} The resulting hydrodynamic balance equations obtained by them consist of the following three equations: (a) continuity equation; (b) momentum balance equation; and (c) energy balance equation. The form of these hydrodynamic balance equations appears very similar to their classical counterparts, generally called hydrodynamic models.\cite{1,2,3,4,5,6,7,8,9,10,11,12} However, in actual fact, they are quite different. The latter are derived from the Boltzmann transport equation, as the first three moments of that equation. Very recently, the fourth moment was discussed by Anile {\em et al.}\cite{anile1,anile2,anile3}, in an attempt to include the equation describing heat flux. Although, in principle, a complete solution of the Boltzmann equation is equivalent to the determination of all the moments, it is not practical to solve the infinite hierarchy of coupled equations governing the various moments. The hydrodynamic approach is based on truncation of this hierarchy after the second order moment, and simplification of the remaining equations. However, these three moment equations by themselves do not form a closed system, requiring input of information about scattering, generally supplied in the form of momentum and energy relaxation times. Nevertheless, to accurately evaluate the relaxation times requires a predetermination of the distribution function, which is precisely the task that the hydrodynamic models strive to avoid. This difficulty is circumvented in one of the following ways. One approach is to calculate the relaxation times by Monte Carlo simulations. Another employs empirical forms of the relaxation times. The third is to postulate a distribution function with unknown parameters, and use the hydrodynamic equations to solve for these parameters. One of the most popular parameterized distribution functions is the drifted Maxwell distribution, which depends on two unknown parameters, the electron drift velocity and the electron temperature. The hydrodynamic balance equation approach employs a drifted local equilibrium description similar to the latter. In this approach the unknown parameters include the local electron temperature $T_e({\bf R})$, the local electron drift velocity ${\bf v} ({\bf R})$ and the local chemical potential $\mu({\bf R})$. The distinctive features of balance equation theory rest with the ansatz of such local equilibrium parameters in an appropriately chosen initial density matrix, which is treated quantum mechanically, describing the dynamics of the many-body system of electrons, impurities and phonons. 
Of course, these unknown parameters are also to be determined from the resulting balance equations. It is now believed that the specific quasi-equilibrium form of the initial density matrix chosen in balance equation theory is particularly suited to the condition of strong electron-electron interactions, since it requires rapid thermalization about the drifted transport state.\cite{chen1,chen2} A salient feature of this new hydrodynamic approach is that it includes a microscopic description of scattering in the form of a frictional force function due to electron-impurity and electron-phonon scattering, as well as an electron energy loss rate function due to electron-phonon interaction. These functions are calculated within the model itself, as functions of carrier drift velocity and carrier temperature, along with the carrier density, which are themselves determined self-consistently within the same model. These hydrodynamic balance equations have recently been applied to device simulations by Cai {\em et al.}.\cite{cai1,cai2,cai3} A hitherto unresolved question, unanswered since the development of hydrodynamic balance equations, concerns the capability of this theory to lead to the correct form of the Onsager relations\cite{onsager,mahan} and/or how to determine Onsager relations within the framework of this theory. There is even some misunderstanding that the energy flux predicted by this theory is zero. The purpose of this paper is to clarify the role of heat flux in this theory, and also to show how to generate Onsager relations within the framework of this theory. We have closely checked the Onsager relation predicted by this theory and find that, for any temperature, when the electron density is sufficiently high, the balance equation theory satisfies the Onsager relation exactly. The condition of high density is consonant with the requirement that Lei-Ting balance equations hold only for strong electron-electron interactions. Furthermore, our results support the validity of this theory in weakly nonuniform systems. To our knowledge, this is the first set of hydrodynamic equations which obeys the Onsager relation exactly. Anile {\em et al.}\cite{anile3} showed very recently, by Monte Carlo simulation, that the Onsager relation fails in the traditional hydrodynamic models. This paper is organized as follows: In Sec.\ II we review the derivation of the hydrodynamic balance equations. This is not insignificant because we explicitly exhibit the role of the energy flux in this theory. Moreover, we also formulate the hydrodynamic force and energy balance equations in somewhat different forms from those of Lei {\em et al.},\cite{lei} which clarifies the meaning of every term. Then, in Sec.\ III we derive the Onsager relation for linear particle and heat flux currents driven by an electric field and a temperature gradient, and check it closely. We present our conclusions and discussions in Sec.\ IV. \section{Hydrodynamic balance equations} The starting point of hydrodynamic balance equation theory is the following electron Hamiltonian, decomposed into fluid-cell contributions: \begin{equation} H=\int d{\bf R}\ [H_e({\bf R})+H_I({\bf R})]\;. \end{equation} Here, \begin{equation} \label{he} H_e({\bf R})=\sum_i\left[\frac{{\bf p}_i^2}{2m}+\frac{1}{2}\sum_{i\not= j} \frac{e^2}{|{\bf r}_i-{\bf r}_j|}\right]\delta({\bf r}_i-{\bf R}) \end{equation} denotes the kinetic energy and Coulomb interaction energy of electrons within a fluid cell around ${\bf R}$. 
Macroscopically, the cell is small enough that the expectation values of physical quantities change little across it, whereas microscopically it is large enough to contain a great number of particles. ${\bf p}_i$ and ${\bf r}_i$ are the momentum and coordinate of the $i$-th electron. \begin{equation} \label{hi} H_I({\bf R})=\sum_i[e\phi ({\bf r}_i)+\Phi ({\bf r}_i)]\delta ({\bf r}_i -{\bf R}) \end{equation} is the interaction Hamiltonian in which $\phi ({\bf r})$ denotes the potential of the external electric field ${\bf E}$, hence ${\bf E}=-\nabla \phi ({\bf r})$, and $\Phi ({\bf r})=\sum_a u({\bf r}-{\bf R}_a)+ \sum_\ell {\bf u}_\ell\cdot\nabla v_\ell ({\bf r}-{\bf R}_\ell)$ represents the scattering potential due to randomly distributed (${\bf R}_a$) impurities and lattice vibrations (${\bf R}_\ell$ stands for the lattice sites). The number density of electrons in the cell around ${\bf R}$ may be written as \begin{equation} N({\bf R})=\sum_i\delta({\bf r}_i-{\bf R})\;. \end{equation} Similarly the ${\bf R}$-dependent momentum density is given by \begin{equation} \label{p} {\bf P}({\bf R})=\sum_i{\bf p}_i\delta({\bf r}_i-{\bf R})\;. \end{equation} Letting ${\bf v}({\bf R})$ be the average electron velocity in the fluid cell about ${\bf R}$, which is a parameter to be determined self-consistently from the resulting balance equations, one can write the statistical average of the momentum density as \begin{equation} \langle{\bf P}({\bf R})\rangle=mn({\bf R}){\bf v}({\bf R})\;, \end{equation} with $n({\bf R})=\langle N({\bf R})\rangle$, the statistical average of the electron number density. Introducing relative electron variables \begin{equation} \label{pprp} {\bf p}_i^\prime={\bf p}_i-m{\bf v}({\bf R})\hspace{0.5cm},\hspace{0.5cm} {\bf r}_i^\prime={\bf r}_i-{\bf R}\;, \end{equation} which represent the momentum and coordinate of the $i$-th electron relative to the center of mass of the fluid cell around ${\bf R}$, we can write the statistical average of $H_e({\bf R})$ as \begin{equation} \label{heav} \langle H_e({\bf R})\rangle=u({\bf R})+\frac{1}{2}mn({\bf R})v^2({\bf R})\;, \end{equation} with \begin{equation} u({\bf R})=\langle\sum_i\frac{{\bf p}_i^{\prime 2}}{2m} \delta({\bf r}_i^\prime)\rangle \end{equation} denoting the average kinetic energy of the relative electrons in cell ${\bf R}$. It is noted that in deriving Eq.\ (\ref{heav}) we have treated the electron-electron Coulomb interaction in the spirit of Landau Fermi-liquid theory, which is appropriate for electrons in semiconductors and metals, {\em i.e.}, it leads to a self-energy correction in the single-electron energy, and also renormalizes the bare phonon frequency, jointly with the bare electron-phonon interaction vertex, and also the electron-impurity interaction vertex.\cite{scalapino,mahan,callaway} We assume that these renormalized corrections are already included in the corresponding quantities. The use of the Hamiltonian above is well established and similar to that discussed in the book of Zubarev.\cite{zubarev} Considering the rate of change of the particle number density, $\dot N({\bf R})=-i[N({\bf R}),H]$, and performing the statistical average, the equation of continuity follows as \begin{equation} \label{eq1} \frac{\partial n}{\partial t}+\nabla\cdot(n{\bf v})=0\;, \end{equation} where we have used the relation \begin{equation} \label{rdot} \dot{\bf r}_i=-i[{\bf r}_i,H]={\bf p}_i/m\;. 
\end{equation} The particle flux density operator ${\bf J}({\bf R})$ can be derived from the momentum density operator Eq.\ (\ref{p}) as \begin{equation} {\bf J}({\bf R})=\frac{1}{m}{\bf P}({\bf R})=\sum_i\frac{{\bf p}_i}{m} \delta({\bf r}_i-{\bf R})\;, \end{equation} and the rate of change of ${\bf J}({\bf R})$ can be written as \begin{equation} \label{jdot} \dot{\bf J}({\bf R})=-i[{\bf J}({\bf R}),H]=\sum_i\frac{1}{m}(e{\bf E} +{\bf F}_i)\delta({\bf r}_i-{\bf R})-\nabla_{\bf R}\cdot\sum_i \frac{{\bf p}_i}{m}\frac{{\bf p}_i}{m}\delta({\bf r}_i-{\bf R})\;. \end{equation} Here, we have used the relation \begin{equation} \label{pdot} \dot{\bf p}_i=-i[{\bf p}_i,H]=e{\bf E}-\nabla \Phi({\bf r}_i)\equiv e{\bf E}+{\bf F}_i\;, \end{equation} with ${\bf F}_i$ representing the force operator of the $i$-th electron. Transforming to the relative coordinate variables, Eq.\ (\ref{pprp}), and performing the statistical average of Eq.\ (\ref{jdot}), we have \begin{equation} \label{jdotav} \frac{\partial}{\partial t}\langle {\bf J}({\bf R})\rangle+ \nabla\cdot(\langle{\bf J}({\bf R})\rangle{\bf v}) =-\nabla\cdot\left\langle\sum_i\frac{{\bf p}_i^\prime}{m} \frac{{\bf p}_i^\prime}{m}\delta({\bf r}_i^\prime)\right\rangle+\frac{e n({\bf R}){\bf E}}{m}+\frac{{\bf f}({\bf R})}{m}\;, \end{equation} where \begin{equation} \label{jav} \langle {\bf J}({\bf R})\rangle=n({\bf R}){\bf v}({\bf R})\;, \end{equation} and \begin{equation} {\bf f}({\bf R})=-\langle\sum_i\nabla\Phi({\bf r}_i^\prime+{\bf R}) \delta({\bf r}_i^\prime)\rangle \end{equation} is the frictional force experienced by the fluid cell due to impurity and phonon scattering. Since \begin{equation} \left\langle \sum_i\frac{{\bf p}_i^\prime}{m}\frac{{\bf p}_i^\prime}{m} \delta({\bf r}_i^\prime)\right\rangle=\frac{2}{3m}\left\langle \sum_i\frac{{\bf p}_i^{\prime 2}}{2m}\delta({\bf r}_i^\prime)\right \rangle {\cal I}=\frac{2}{3m}u({\bf R}){\cal I}\;, \end{equation} with ${\cal I}$ as the unit tensor, it follows that \begin{equation} \label{eq2} \frac{\partial}{\partial t}\langle {\bf J}({\bf R})\rangle+\nabla\cdot( \langle {\bf J}({\bf R})\rangle{\bf v})=-\frac{2}{3m}\nabla u({\bf R}) +\frac{en({\bf R}){\bf E}}{m}+\frac{{\bf f}({\bf R})}{m}\;. \end{equation} This equation can be proved directly to be just the original Euler-type momentum balance equation obtained by Lei {\em et al.}:\cite{lei} \begin{equation} \label{eq2lei} \frac{\partial {\bf v}}{\partial t}+{\bf v}\cdot\nabla {\bf v}=-\frac{2}{3} \frac{\nabla u}{mn}+\frac{e}{m}{\bf E}+\frac{\bf f}{mn}\;, \end{equation} if one takes Eq.\ (\ref{eq1}) into account. Similarly one can derive the energy balance equation by averaging the Heisenberg equation of motion $\dot H_e({\bf R})=-i[H_e({\bf R}),H]$, which, combined with the time derivative of Eq.\ (\ref{heav}), yields \begin{eqnarray} \frac{\partial u}{\partial t}+\nabla\cdot\langle{\bf J}_H\rangle&=& \frac{2}{3}{\bf v}\cdot\nabla u+\frac{1}{2}mv^2\nabla\cdot(n{\bf v}) +\frac{1}{2}mn{\bf v}\cdot\nabla v^2\nonumber\\ \label{eq3} &&-w-{\bf v}\cdot{\bf f}\;. 
\end{eqnarray} Here \begin{equation} w({\bf R})=\frac{1}{2}\langle\sum_i\frac{{\bf p}_i^\prime}{m}\cdot \nabla\Phi({\bf r}_i^\prime+{\bf R})\delta({\bf r}_i^\prime)\rangle+ \frac{1}{2}\langle\sum_i \nabla\Phi({\bf r}_i^\prime+{\bf R})\cdot \frac{{\bf p}_i^\prime}{m}\delta({\bf r}_i^\prime)\rangle-{\bf v}({\bf R}) \cdot{\bf f}({\bf R}) \end{equation} is the energy transfer rate from the electron system to the phonon system, and \begin{equation} \label{jhoper} {\bf J}_H({\bf R})=\sum_i\frac{{\bf p}_i^2}{2m}\frac{{\bf p}_i}{m}\delta ({\bf r}_i-{\bf R}) \end{equation} is the energy flux operator, whose statistical average is \begin{equation} \label{jhl} \langle {\bf J}_H({\bf R})\rangle=\frac{5}{3}u({\bf R}){\bf v}({\bf R}) +\frac{1}{2}mn({\bf R})v^2({\bf R}){\bf v}({\bf R})\;. \end{equation} This is just the energy flux predicted by balance equation theory. Taking this equation into account, one can easily recover the original form of the energy balance equation of Ref.\ \onlinecite{lei} by substituting Eq.\ (\ref{jhl}) into Eq.\ (\ref{eq3}): \begin{equation} \label{eq3lei} \frac{\partial u}{\partial t}+{\bf v}\cdot\nabla u=-\frac{5} {3}u(\nabla\cdot{\bf v})-w-{\bf v}\cdot{\bf f}\;. \end{equation} The resistive force ${\bf f}$, the energy transfer rate $w$, together with the local kinetic energy $u$ and the local density $n$ are calculated within the framework of balance equation theory\cite{leiting}, which requires knowledge of the density matrix $\hat{\rho}$. This density matrix can be determined by solving the Liouville equation, $i\partial\hat{\rho}/ \partial t=[H,\hat{\rho}]$, with an appropriate initial condition. In the balance equation theory, the electron-impurity and electron-phonon couplings are turned on from $t=0$, together with the electric field ${\bf E}$. Meanwhile, in the present model, the interactions between different fluid cells are included approximately in the local potential with a mean field treatment. Therefore different cells are dynamically independent, and thus evolve separately from their own initial state. Thus, the ${\bf R}$-dependent initial density matrix is chosen such that the relative electron system in the fluid cell is in a local quasi-thermal equilibrium state at electron temperature $T_e({\bf R})$ and chemical potential $\mu({\bf R})$, which are parameters to be determined self-consistently from the resulting hydrodynamic balance equations; whereas the phonon system is assumed to be in thermal equilibrium: \begin{equation} \label{rho0} {\hat \rho}_0=\frac{1}{Z}\exp\{-\sum_{\bf R}[H_e({\bf R})-{\bf v}({\bf R}) \cdot{\bf P}({\bf R})-\mu({\bf R})N({\bf R})]/T_e({\bf R})\}\exp(-H_{ph}/T) \end{equation} with $H_{ph}$ and $T$ being the phonon Hamiltonian and temperature. 
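For a single fluid cell, the ansatz (\ref{rho0}) simply populates the relative-electron states with a Fermi factor; in the laboratory frame this is a drifted Fermi distribution, up to a constant shift of $\mu$ that can be absorbed into its definition. A minimal numerical sketch (an illustration assuming numpy, free-particle form, $\hbar=k_B=1$; not code from the original papers):
\begin{verbatim}
# Illustrative sketch of the occupation implied by Eq. (rho0):
# a Fermi function of the relative kinetic energy (p - m v)^2 / 2m.
import numpy as np

def drifted_fermi(p, v, mu, Te, m=1.0):
    eps_rel = (p - m * v)**2 / (2.0 * m)
    return 1.0 / (np.exp((eps_rel - mu) / Te) + 1.0)

p = np.linspace(-3.0, 3.0, 7)
print(np.round(drifted_fermi(p, v=0.0, mu=1.0, Te=0.2), 3))  # symmetric about p = 0
print(np.round(drifted_fermi(p, v=0.6, mu=1.0, Te=0.2), 3))  # centre shifted to p = m v
\end{verbatim}
Averaging the microscopic force and energy-loss operators over this drifted local-equilibrium state is what produces the $n$-, $T_e$- and ${\bf v}$-dependent functions quoted next.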
It follows that the resistive force and the energy transfer rate are given by \begin{eqnarray} {\bf f}({\bf R})&=&{\bf f}(n({\bf R}),T_e({\bf R}),{\bf v}({\bf R}))= n_i\sum_{\bf q}{\bf q}|u({\bf q})|^2\Pi_2({\bf q},{\bf q}\cdot{\bf v} ({\bf R}))\nonumber\\ &&\mbox{}-2\sum_{{\bf q}\lambda}{\bf q}|M({\bf q},\lambda)|^2\Pi_2 ({\bf q},\Omega_{{\bf q}\lambda}-{\bf q}\cdot{\bf v}_d)\left[n(\frac{\Omega _{{\bf q}\lambda}}{T})-n(\frac{\Omega_{{\bf q}\lambda}-{\bf q}\cdot{\bf v} ({\bf R})}{T_e({\bf R})})\right]\;,\\ w({\bf R})&=&w(n({\bf R}),T_e({\bf R}),{\bf v}({\bf R}))\nonumber\\ &=&2\sum_{{\bf q}\lambda}\Omega_{{\bf q}\lambda} |M({\bf q},\lambda)|^2\Pi_2 ({\bf q},\Omega_{{\bf q}\lambda}-{\bf q}\cdot{\bf v}_d)\left[n(\frac{\Omega _{{\bf q}\lambda}}{T})-n(\frac{\Omega_{{\bf q}\lambda}-{\bf q}\cdot{\bf v} ({\bf R})}{T_e({\bf R})})\right]\;, \end{eqnarray} with $n(x)=(e^x-1)^{-1}$ being the Bose distribution function; $n_i$, the impurity density; $\Omega_{{\bf q}\lambda}$, the phonon frequency of wave vector ${\bf q}$ and mode $\lambda$; $u({\bf q})$, the electron-impurity interaction potential; and $M({\bf q},\lambda)$, the electron-phonon coupling matrix element. $\Pi_2({\bf q},\omega)$ denotes the imaginary part of the electron density-density correlation function. Note that ${\bf f}$ and $w$ depend on ${\bf R}$ through the quantities $n({\bf R})$, $T_e({\bf R})$ and ${\bf v}({\bf R})$. The average local kinetic energy density of the relative electrons is \begin{equation} \label{u} u=2\sum_{\bf k}\varepsilon_{\bf k}f[(\varepsilon_{\bf k}-\mu)/T_e]\;, \end{equation} and the local chemical potential $\mu({\bf R})$ is related to the local density $n({\bf R})$ of electrons through the relation \begin{equation} \label{n} n=2\sum_{\bf k}f[(\varepsilon_{\bf k}-\mu)/T_e]\;, \end{equation} with $\varepsilon_{\bf k}=k^2/2m$ and $f(x)=1/(e^x+1)$ representing the electron energy dispersion and the Fermi distribution function, respectively. There are, altogether, eight variables which need to be determined: the cell velocity ${\bf v} ({\bf R})$, the cell electron temperature $T_e({\bf R})$, the particle flux $\langle{\bf J}\rangle$, the energy flux $\langle{\bf J}_H\rangle$, the average local kinetic energy density $u({\bf R})$, the local number density of electrons $n({\bf R})$, the local chemical potential $\mu({\bf R})$, and the total electrical potential $\phi({\bf R})$. Moreover, there are three balance equations (\ref{eq1}), (\ref{eq2}), and (\ref{eq3}), supplemented by four relations (\ref{jav}), (\ref{jhl}), (\ref{u}) and (\ref{n}), as well as the Poisson equation relating electron density with potential: \begin{equation} \nabla^2\phi=-4\pi e[n({\bf R})-n^+] \end{equation} with $n^+$ as the density of the ionized donor background. These eight equations form a closed set of equations for hydrodynamic device modeling. \section{Onsager relation in hydrodynamic balance equation approach} In this section we demonstrate the Onsager relation;\cite{onsager,mahan} more accurately, we verify the validity of the hydrodynamic balance equations in regard to the Onsager relation. It is well known that the Onsager relation is a manifestation of microscopic irreversibility for any statistical system near thermal equilibrium. Therefore any properly formulated statistical physics model should satisfy this relation. It is very easy to verify this relation in the framework of Kubo linear response theory. 
Moreover, if one can determine the distribution function from the Boltzmann equation, it is also straightforward to verify the Onsager relation by calculating the pertinent moments of the distribution function. However, for the traditional hydrodynamic model,\cite{1,2,3,4,5,6,7,8,9,10,11,12} verification has been elusive. In fact, in a very recent article, Anile {\em et al.}\cite{anile3} showed by Monte Carlo simulation that the Onsager relation breaks down in this model. Although they tried to circumvent this difficulty, they did not establish, within the model itself, the existence of the relation they employed. Here, we will examine the Onsager relation within the framework of the hydrodynamic balance equations. The Onsager relation\cite{mahan} is concerned with the linear response of the particle current $\langle{\bf J}\rangle$ and the heat flux $\langle {\bf J}_Q\rangle$ near thermal equilibrium, which flow as a result of forces ${\bf X}_i$ on the system: \begin{eqnarray} \label{onsager1} \langle{\bf J}\rangle&=&L^{11}{\bf X}_1+L^{12}{\bf X}_2\;,\\ \label{onsager2} \langle{\bf J}_Q\rangle&=&L^{21}{\bf X}_1+L^{22}{\bf X}_2\;, \end{eqnarray} with ${\bf X}_1=-\frac{1}{T}\nabla (\mu+e\phi)$ and ${\bf X}_2=\nabla(1/T)$. The Onsager relation states that \begin{equation} L^{12}=L^{21}\;. \end{equation} The heat flux $\langle{\bf J}_Q\rangle$ is related to the energy flux in Eq.\ (\ref{jhl}) through \begin{equation} \langle{\bf J}_Q\rangle=\langle{\bf J}_H\rangle-\mu\langle{\bf J}\rangle\;. \end{equation} The fluxes $\langle{\bf J}\rangle$ and $\langle{\bf J}_H\rangle$ have already been defined in the previous section by Eqs.\ (\ref{jav}) and (\ref{jhl}). Our first task is to express them in terms of linear response in the form of Eqs.\ (\ref{onsager1}) and (\ref{onsager2}). The first relation can be acquired directly by linearization of the force balance equation, Eq.\ (\ref{eq2}), near thermal equilibrium, so that we only need to consider a steady state with the external electric field ${\bf E}$ and the spatial gradients very small. Then $T_e=T$ and ${\bf v}$ is also very small. We take ${\bf E}$, $\nabla T$ and ${\bf v}$ to be in the $x$-direction and treat Eq.\ (\ref{eq2}) to first order in the small quantities. This means, for instance, that the gradient operator $\nabla_x\equiv\partial/\partial x$ is a first-order small quantity and $v_x$ is also a first-order small quantity, so that $\nabla_x v_x$ is a higher-order small quantity and can be neglected. These facts should be kept in mind in all of our following calculations. Therefore the force balance equation, Eq.\ (\ref{eq2}), can be written as \begin{equation} 0=-\frac{2}{3nm}\nabla_x u+\frac{eE_x}{m}+\frac{f_x}{nm}\;. \end{equation} All the quantities in the other two directions are zero. 
For small $v_x$, $f_x$ is proportional to $v_x$,\cite{leihoring} thus proportional to $\langle{J_x}\rangle$, and \begin{equation} \rho=-\frac{f_x}{n^2e^2v_x}=-\frac{f_x}{ne^2\langle J_x\rangle}\;, \end{equation} is the resistivity; it is independent of $v_x$ ($\langle J_x\rangle$) and is given by \begin{eqnarray} \rho&=&-\frac{4\pi}{n^2e^2}\sum_{{\bf k},{\bf q}\lambda}q_x^2|M({\bf q},\lambda)|^2 \left[-\frac{1}{T}n^\prime(\frac{\Omega_{{\bf q}\lambda}}{T})\right] \left[f(\frac{\varepsilon_{\bf k}-\mu}{T})- f(\frac{\varepsilon_{{\bf k}+{\bf q}}-\mu}{T})\right] \delta(\varepsilon_{{\bf k}+{\bf q}} -\varepsilon_{\bf k}+\Omega_{{\bf q}\lambda})\nonumber\\ \label{rho} &&\mbox{}-\frac{n_i}{n^2e^2}\sum_{\bf q}q_x^2|u({\bf q})|^2\frac{\partial} {\partial \omega}\Pi_2({\bf q},\omega)|_{\omega=0}\;. \end{eqnarray} We then have \begin{equation} \label{j1} \langle J_x\rangle=\frac{E_x}{e\rho}-\frac{2}{3}\frac{\nabla_xu}{ne^2\rho}\;. \end{equation} Employing Eqs.\ (\ref{u}) and (\ref{n}), we can express Eq.\ (\ref{j1}) in the form of Eq.\ (\ref{onsager1}), with \begin{eqnarray} L^{11}&=&\frac{T}{\rho e^2}\;,\\ \label{l12} L^{12}&=&\frac{T^2}{\rho e^2}\left[\frac{5}{3}\frac{F_{3/2}(\zeta)} {F_{1/2}(\zeta)}-\zeta\right]\;. \end{eqnarray} Here $\zeta=\mu /T$ and the function $F_{\nu}(y)$ is defined by \begin{equation} F_\nu(y)=\int_0^\infty\frac{x^\nu dx}{\exp(x-y)+1}\;. \end{equation} The procedure for identifying the linearized heat flux is, of course, similar to that for the particle flux. Therefore we consider the rate of change of the energy flux operator ${\bf J}_H$ defined by Eq.\ (\ref{jhoper}): \begin{eqnarray} \dot {\bf J}_H({\bf R})&=&-i[{\bf J}_H({\bf R}),H]=-\nabla\cdot {\cal A} \nonumber\\ &&\mbox{}+\frac{1}{2}\sum_i\frac{(e{\bf E}+{\bf F}_i)\cdot{\bf p}_i} {2m}\frac{{\bf p}_i}{m}\delta({\bf r}_i-{\bf R})+\frac{1}{2}\sum_i \frac{{\bf p}_i\cdot(e{\bf E}+{\bf F}_i)}{2m}\frac{{\bf p}_i}{m}\delta({\bf r}_i-{\bf R})\nonumber\\ &&\mbox{}+\frac{1}{2}\sum_i\frac{{\bf p}_i^2}{2m}\frac{e{\bf E}+{\bf F}_i} {m}\delta({\bf r}_i-{\bf R})+\frac{1}{2}\sum_i\frac{e{\bf E}+{\bf F}_i} {m}\frac{{\bf p}_i^2}{2m}\delta({\bf r}_i-{\bf R})\nonumber\\ \label{jhdot} &&\mbox{}+\frac{1}{2}\sum_i\frac{{\bf p}_i}{m}\frac{(e{\bf E}+{\bf F}_i) \cdot{\bf p}_i}{2m}\delta({\bf r}_i-{\bf R})+\frac{1}{2}\sum_i\frac{{\bf p}_i}{m}\frac{{\bf p}_i\cdot(e{\bf E}+{\bf F}_i)}{2m} \delta({\bf r}_i-{\bf R})\;, \end{eqnarray} where we have used Eqs.\ (\ref{rdot}) and (\ref{pdot}) again. The tensor ${\cal A}$ is defined as \begin{equation} \label{a} {\cal A}=\sum_i\frac{{\bf p}_i^2}{2m}\frac{{\bf p}_i}{m}\frac{{\bf p}_i}{m} \delta({\bf r}_i-{\bf R})\;. \end{equation} Performing the statistical average of Eq.\ (\ref{jhdot}), we have \begin{equation} \label{eq4} \langle\dot{\bf J}_H\rangle+\nabla\cdot\langle{\cal A}\rangle =\langle{\bf B}\rangle+\frac{5}{3m}eu{\bf E}+en{\bf E}\cdot{\bf v}{\bf v} +\frac{1}{2}env^2{\bf E}+\frac{1}{2}v^2{\bf f}-w{\bf v}\;. \end{equation} It is understood that the right hand side of Eq.\ (\ref{eq4}) is derived by transforming the coordinate and momentum operators to the relative variables of Eq.\ (\ref{pprp}), before performing the statistical averages. 
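The coefficients above are controlled entirely by the Fermi integrals $F_{1/2}$ and $F_{3/2}$, since for a three-dimensional parabolic band Eqs.\ (\ref{u}) and (\ref{n}) reduce to $u\propto T^{5/2}F_{3/2}(\zeta)$ and $n\propto T^{3/2}F_{1/2}(\zeta)$. A small numerical sketch (an illustration assuming scipy, not code from the paper) that evaluates $F_\nu(y)$ and the dimensionless factor entering Eq.\ (\ref{l12}):
\begin{verbatim}
# Illustrative sketch (assumes scipy; not code from the paper).
import numpy as np
from scipy.integrate import quad
from scipy.special import expit          # expit(z) = 1/(1+exp(-z))

def F(nu, y):
    """F_nu(y) = int_0^infty x^nu dx / (exp(x-y)+1)."""
    return quad(lambda x: x**nu * expit(y - x), 0.0, np.inf)[0]

def l12_factor(zeta):
    return (5.0 / 3.0) * F(1.5, zeta) / F(0.5, zeta) - zeta

# nondegenerate limit: factor -> 5/2 - zeta
print(l12_factor(-8.0), 2.5 - (-8.0))
# degenerate (Sommerfeld) limit: factor -> pi^2/(2 zeta)
print(l12_factor(40.0), np.pi**2 / (2 * 40.0))
\end{verbatim}
In the nondegenerate limit the factor tends to $5/2-\zeta$, while in the degenerate limit it falls off as $\pi^2/2\zeta$, the familiar metallic behavior for an energy-independent relaxation time.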
The expression of $\langle {\bf B}\rangle$ is given in the Appendix, and $\langle{\cal A}\rangle$ can be expressed as \begin{equation} \label{aa} \langle{\cal A}\rangle=\frac{1}{3}(S({\bf R})+uv^2){\cal I}+\langle{\bf J} _H\rangle{\bf v}+{\bf v}\langle{\bf J}_H\rangle-u{\bf v}{\bf v}-\frac{1}{2} mnv^2{\bf v}{\bf v}\;, \end{equation} with \begin{equation} S({\bf R})=\left\langle\sum_i\frac{{\bf p}_i^{\prime 4}}{2m^3} \delta({\bf r}_i^\prime)\right\rangle\;. \end{equation} This average can be calculated in the balance equation theory using the density matrix ${\hat\rho}$ discussed in the previous section, with the result \begin{equation} \label{sr} S({\bf R})=2\sum_{\bf k}\frac{k^4}{2m^3}f(\frac{\varepsilon_{\bf k}-\mu} {T_e})\;. \end{equation} It should be emphasized here that if the density matrix employed in the balance equation is exactly the real physical one, then Eq.\ (\ref{eq4}) should be consistent with Eqs.\ (\ref{eq1})-(\ref{eq3}). That is to say, if we have calculated all the unknown parameters from the hydrodynamic balance equations presented in the previous section, and substituted them in Eq.\ (\ref{eq4}), then Eq.\ (\ref{eq4}) should merely be an identity. Unfortunately, in actual fact, this is not the case, especially when the system is far from being weakly nonuniform. However, this is not a concern here, because we only need this equation to hold near thermal equilibrium. In this circumstance, the density matrix chosen in balance equation theory has already been shown to be reasonable, in particular for a system with strong electron-electron interactions.\cite{chen1,chen2} Therefore Eq.\ (\ref{eq4}) should yield agreement with the balance equations near thermal equilibrium, and we may use it to determine the linear response relation of $\langle{\bf J}_H\rangle$ with the external forces ${\bf X}_i$ and examine whether the result obtained satisfies the Onsager relation. Thus, to the first order in the small quantities, Eq.\ (\ref{eq4}) can be written in the form \begin{equation} \label{j20} \frac{5}{3m}eu({\bf R})E_x-\frac{1}{3}\nabla_x S({\bf R})+\langle B_x \rangle=0\;. \end{equation} In deriving this equation, we have used the linearized force and energy balance equations, Eqs.\ (\ref{eq2lei}) and (\ref{eq3lei}), and $\langle B_x\rangle$ has also been linearized and is proportional to $\langle{\bf J}_H\rangle$, which is $\frac{5}{3}uv_x$ to first order. Thus we may define \begin{equation} \frac{1}{\tau}=\frac{\langle{\bf B}\rangle}{n({\bf R}) \langle{\bf J}_H\rangle}\;, \end{equation} which is also independent of $v_x$ ($\langle{\bf J}_H\rangle$). Substituting this relation into Eq.\ (\ref{j20}) and calculating the gradient of $S({\bf R})$ in Eq.\ (\ref{sr}), we find that the average energy flux is given by \begin{equation} \langle{\bf J}_H\rangle=-\frac{5}{3}\frac{T^2}{m}\frac{F_{3/2}(\zeta)} {F_{1/2}(\zeta)}\tau{\bf X}_1-\frac{T^3}{m} \left[\frac{7}{3}\frac{F_{5/2}(\zeta)} {F_{1/2}(\zeta)}-\frac{5}{3}\zeta\frac{F_{3/2}(\zeta)}{F_{1/2} (\zeta)}\right]\tau{\bf X}_2\;. 
\end{equation} Subtracting $\mu\langle{\bf J}\rangle$, we obtain the linearized heat flux in terms of ${\bf X}_1$ and ${\bf X}_2$ and can identify the linear coefficients of Eq.\ (\ref{onsager2}) as \begin{eqnarray} \label{l21} L^{21}&=&\frac{T^2}{\rho e^2}\left[-\frac{\tau\rho e^2}{m}\frac{5}{3} \frac{F_{3/2}(\zeta)}{F_{1/2}(\zeta)}-\zeta\right]\;,\\ L^{22}&=&-\frac{\tau T^3}{m}\left[\frac{7}{3}\frac{F_{5/2}(\zeta)} {F_{1/2}(\zeta)}-\frac{5}{3}\zeta\frac{F_{3/2}(\zeta)}{F_{1/2} (\zeta)}\right]-\frac{\zeta T^3}{\rho e^2}\left[\frac{5}{3} \frac{F_{3/2}(\zeta)}{F_{1/2}(\zeta)}-\zeta\right]\;. \end{eqnarray} Comparing Eq.\ (\ref{l21}) with Eq.\ (\ref{l12}), we find that the condition under which the Onsager relation holds is given by \begin{equation} \label{id} I\equiv-\frac{\tau\rho e^2}{m}=1\;. \end{equation} We have closely examined Eq.\ (\ref{id}) for a GaAs system to see if it is indeed satisfied in balance equation theory. Both $\rho$ (Eq.\ (\ref{rho})) and $\langle B_x\rangle$ (Appendix) are composed of contributions due to electron-impurity, electron--LO-phonon, and electron--Ac-phonon scatterings (with the electron--acoustic-phonon scatterings due to longitudinal mode acoustic phonons via deformation potential and piezoelectric interactions, and transverse mode via piezoelectric interaction). We have examined each scattering contribution in detail to check Eq.\ (\ref{id}) separately for each interaction. It is clear that if $-\frac{e^2\rho_i/m}{(1/\tau)_i}=1$ is satisfied for each interaction, we have $-\frac{e^2\sum_i\rho_i/m}{\sum_i(1/\tau)_i}=1$. Moreover, this procedure reflects the fact that the result should be independent of the impurity concentration and of the parameters in the electron-phonon interaction matrix elements. The expressions for $I$ obtained from the balance equations are given by \begin{equation} \label{iei} I_{ei}=\frac{\sum_{\bf q}q^2|u({\bf q})|^2[\frac{\partial}{\partial \omega} \Pi_2^\varepsilon({\bf q},\omega)]|_{\omega=0}}{(\frac{5}{3})(\frac{u}{n}) \sum_{\bf q}q^2|u({\bf q})|^2[\frac{\partial}{\partial \omega} \Pi_2({\bf q},\omega)]|_{\omega=0}}\;, \end{equation} due to electron-impurity scattering; and \begin{eqnarray} I_{e-ph}(\lambda)&=&\frac{\sum_{\bf q}|M({\bf q},\lambda)|^2\Omega_{{\bf q} \lambda}(\varepsilon_{\bf q}+\Omega_{{\bf q}\lambda})n^\prime(\frac{\Omega_ {{\bf q}\lambda}}{T})\Pi_2({\bf q},\Omega_{{\bf q}\lambda})}{(\frac{5}{3}) (\frac{u}{n})\sum_{\bf q}|M({\bf q},\lambda)|^2\frac{{\bf q}^2}{m} n^\prime(\frac{\Omega_{{\bf q}\lambda}}{T})\Pi_2({\bf q}, \Omega_{{\bf q}\lambda})}\nonumber\\ \label{ieph} &&\mbox{}+\frac{-\sum_{\bf q}|M({\bf q},\lambda)|^2\frac{{\bf q}^2}{m} n^\prime(\frac{\Omega_{{\bf q}\lambda}}{T})\Pi_2^\varepsilon({\bf q}, -\Omega_{{\bf q}\lambda})}{(\frac{5}{3}) (\frac{u}{n})\sum_{\bf q}|M({\bf q},\lambda)|^2\frac{{\bf q}^2}{m} n^\prime(\frac{\Omega_{{\bf q}\lambda}}{T})\Pi_2({\bf q}, \Omega_{{\bf q}\lambda})}\;, \end{eqnarray} due to electron-phonon scattering, for phonons of mode $\lambda$. $I_{e-ph}(\lambda)$ is further composed of contributions due to electron--LO-phonon scattering, $I_{e-LO}$; due to electron--longitudinal acoustic phonons by deformation potential coupling, $I_{edl}$; and by piezoelectric interaction, $I_{epl}$; and due to electron--transverse acoustic phonons by piezoelectric interaction, $I_{ept}$. 
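The algebra behind Eq.\ (\ref{id}) is short enough to be checked by computer algebra. A minimal symbolic sketch (an illustration assuming sympy; the Fermi integrals are kept as opaque positive symbols, not evaluated):
\begin{verbatim}
# Symbolic check: L12 = L21 is equivalent to I = -tau*rho*e^2/m = 1.
import sympy as sp

T, rho, e, m, tau, F12, F32 = sp.symbols('T rho e m tau F12 F32', positive=True)
zeta = sp.symbols('zeta', real=True)

L12 = T**2 / (rho * e**2) * (sp.Rational(5, 3) * F32 / F12 - zeta)
L21 = T**2 / (rho * e**2) * (-(tau * rho * e**2 / m)
                             * sp.Rational(5, 3) * F32 / F12 - zeta)

print(sp.simplify(L12 - L21))          # proportional to (1 + tau*rho*e**2/m)
print(sp.solve(sp.Eq(L12, L21), tau))  # [-m/(e**2*rho)], i.e. I = 1
\end{verbatim}
Thus $L^{12}=L^{21}$ holds if and only if $\tau=-m/(\rho e^2)$, i.e. $I=1$, which is the statement examined numerically for each scattering mechanism below.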
$\Pi_2^\varepsilon$ in Eqs.\ (\ref{iei}) and (\ref{ieph}) is defined by \begin{equation} \Pi_2^\varepsilon({\bf q},\omega)=2\pi\sum_{\bf k}\varepsilon_{\bf k} \delta(\varepsilon_{{\bf k}+{\bf q}}-\varepsilon_{\bf k}+\omega) \left[f\left(\frac{\varepsilon_{\bf k}-\mu}{T}\right)- f\left(\frac{\varepsilon_{{\bf k}+{\bf q}}-\mu}{T}\right)\right]\;. \end{equation} For the LO phonon, $\Omega_{{\bf q},LO}=\Omega_0=35.4$\ meV, and the Fr\"ohlich matrix element is $|M({\bf q},LO)|^2=e^2(\kappa_\infty^{-1} -\kappa^{-1})\Omega_0/(2\varepsilon_0q^2)\propto 1/q^2$. (Since the constants in the matrix elements cancel in Eq.\ (\ref{ieph}), in the following we only specify their dependence on $q$.) The matrix element due to longitudinal deformation potential coupling is $|M({\bf q},dl)|^2\propto q$, that due to longitudinal piezoelectric interaction is $|M({\bf q},pl)|^2\propto(q_xq_yq_z)^2/q^7$, and for the two branches of independent transverse piezoelectric interaction: $\sum_{j=1,2}|M({\bf q},pt_j)|^2\propto (q_x^2q_y^2+q_y^2q_z^2 +q_z^2q_x^2-(3q_xq_yq_z)^2/q^2)/q^5$. For acoustic phonons $\Omega_{{\bf q} \lambda}$ can be written as $v_{s}q$, with the longitudinal sound speed $v_s$ being 5.29$\times 10^3$\ m/s, and the transverse sound speed being 2.48$\times 10^3$\ m/s. The electron effective mass is $0.07m_e$, with $m_e$ denoting the free electron mass. The results of our numerical calculations are presented in Fig.\ 1 to Fig.\ 5, where contributions to $I$ due to the various interactions discussed above are plotted against electron density for several different temperatures. As it is generally believed that the contribution of acoustic phonons is important only at low temperature, while the contribution of LO phonons is dominant at high temperature, our temperatures are chosen as 10, 20, and 40\ K for the former, and 50, 300, 500, and 1000\ K for the latter. Impurity scattering is present at any temperature, so we take $T=$10, 50, 100, 300, and 1000\ K in Fig.\ 1. From these figures it is evident that, for any temperature, when the electron density is sufficiently high, $I$ is exactly unity, indicating that the Onsager relation holds. It is also seen from the figures that, as the temperature becomes higher, the electron density needed to make the Onsager relation hold is also higher. An interesting exception is the LO phonon in Fig.\ 2, in which we can see that the density needed to assure that $I_{e-LO}=1$ is lower for $T=300$\ K than for $T=50$\ K. \section{Conclusions and Discussions} In this paper, we have clarified the role of heat flux in hydrodynamic balance equations. We have further shown that, for any temperature, when the electron density is sufficiently high, the hydrodynamic balance equation theory satisfies the Onsager relation. This is consistent with the understanding that the Lei-Ting balance equation theory holds only for strong electron-electron interactions. Our result supports the validity of this theory in a weakly nonuniform system. To our knowledge, this is the first set of hydrodynamic equations which satisfies the Onsager relation in a self-contained manner, without the {\em ad hoc} introduction of terms which do not originate within the theory. However, we should also point out that the hydrodynamic balance equations can only be used to describe weakly nonuniform systems. 
When the temperature gradient is large, and/or there is a large heat flux in the system, for example in phenomena such as impact ionization and heat generation in nonuniform systems, the energy flux equation (Eq.\ (\ref{eq4})), or heat flux equation, which is of paramount importance in describing these phenomena, is no longer consistent with the other balance equations (Eqs.\ (\ref{eq1})-(\ref{eq3})), and a contradiction emerges. This reflects the inadequacy of the assumed initial density matrix, Eq.\ (\ref{rho0}), in Lei-Ting balance equation theory, which fails to include detailed information about the physics of the heat flux. This can be further illustrated as follows: In deriving the average of the energy flux operator, Eq.\ (\ref{jhoper}), there should be another term \begin{equation} \langle{\bf j}_H\rangle=\langle\sum_i\frac{{\bf p}_i^{\prime 2}}{2m} \frac{{\bf p}_i^\prime}{m}\delta({\bf r}_i^\prime)\rangle \end{equation} on the right hand side of Eq.\ (\ref{jhl}). Moreover, in obtaining the average of the tensor ${\cal A}$ in Eq.\ (\ref{a}), there should be another term ${\bf v}\cdot\langle\sum_i\frac{{\bf p}_i^\prime}{m} \frac{{\bf p}_i^\prime} {m}\frac{{\bf p}_i^\prime}{m}\delta({\bf r}_i^\prime)\rangle$ on the right hand side of Eq.\ (\ref{aa}). These two terms do not vanish when the system is not near thermal equilibrium, and should be included in the theory if they are calculated from a {\em real} physical density matrix. Anile {\em et al}.\cite{anile3} have included such terms in their traditional hydrodynamic equations mentioned in the introduction. Unfortunately, these terms are predicted to be exactly zero by balance equation theory. It is clear that for moderately nonuniform systems and/or systems far from thermal equilibrium, an accurate prediction of the behavior of heat flux requires the inclusion of one or more additional unknown parameters in the initial density matrix (in high-order terms so that they do not violate the particle and momentum balance equations), to be followed by their determination from expanded balance equations, which now include the heat flux equation(s). This problem is currently under investigation, and the results will be published elsewhere. \acknowledgements One of the authors (MWW) would like to thank Professor X.L. Lei, who first brought this problem to his attention. This research is supported by the U.S. Office of Naval Research (Contract No. N66001-95-M-3472) and the U.S. Army Research Office.
\def\nsection#1{\section{#1}\setcounter{equation}{0}} \def\nappendix#1{\vskip 1cm\no{\bf Appendix #1} \def#1{#1} \setcounter{equation}{0}} \renewcommand{\theequation}{#1.\arabic{equation}} \baselineskip 16pt \oddsidemargin 0pt \evensidemargin 0pt \topmargin 0pt \headheight 0pt \headsep 0pt \footskip 32pt \textheight 40\baselineskip \advance \textheight by \topskip \textwidth 470pt \begin{document} \author{Mikhail S. Plyushchay\thanks{On leave from the {\it Insti\-tu\-te for High Ener\-gy Phy\-sics, Protvino, Russia;} E-mail: mikhail@posta.unizar.es}\\ \\ {\it Departamento de F\'{i}sica Te\'orica, Facultad de Ciencias}\\ {\it Universidad de Zaragoza, 50009 Zaragoza, Spain}} \title{\bf Minimal bosonization of supersymmetry} \date{To appear in {\bf Mod. Phys. Lett. A}} \maketitle \begin{abstract} The minimal bosonization of supersymmetry in terms of one bosonic degree of freedom is considered. A nontrivial relationship of the construction to the Witten supersymmetric quantum mechanics is illustrated with the help of the simplest $N=2$ SUSY system realized on the basis of the ordinary (undeformed) bosonic oscillator. It is shown that the generalization of such a construction to the case of the Vasiliev deformed bosonic oscillator gives a supersymmetric extension of the 2-body Calogero model in the phase of exact or spontaneously broken $N=2$ SUSY. The construction admits an extension to the case of the OSp(2$\vert$2) supersymmetry, and, as a consequence, the $osp(2\vert 2)$ superalgebra is revealed as a dynamical symmetry algebra for the bosonized supersymmetric Calogero model. Realizing the Klein operator as a parity operator, we construct the bosonized Witten supersymmetric quantum mechanics. Here the general case of the corresponding bosonized $N=2$ SUSY is given by an odd function playing the role of the superpotential. \end{abstract} \vskip1.5cm \begin{flushright} {\bf DFTUZ/95/09}\\ {\bf hep-th/9601141} \end{flushright} \newpage \nsection{Introduction} The possibility of describing (1+1)-dimensional fermionic systems in terms of bosonic fields has been known for a long time \cite{klei,bosf}. The corresponding Bose-Fermi transformation is the Klein transformation \cite{klei,bos1}, which has a nonlocal nature. Such a nonlocality also lies at the basis of (2+1)-dimensional anyonic constructions \cite{any}. An analogous Bose-Fermi transformation (bosonization) exists in the (0+1)-dimensional case of quantum mechanics \cite{bos1,garb}, and its generalization leads to the $q$-deformed oscillator \cite{bosq}. It is obvious that the bosonization constructions can be straightforwardly generalized to the case of supersymmetric quantum mechanics\footnote{In the last section we shall comment on the attempt to apply the bosonization technique to (1+1)-dimensional SUSY systems \cite{damg}.}. Indeed, realizing fermionic oscillator operators in terms of creation-annihilation bosonic operators and extending the bosonized fermionic system by independent bosonic oscillator operators, we can realize an $N=2$ supersymmetric system following the Nicolai-Witten supersymmetric quantum mechanical constructions \cite{nic,wit}. But, as we shall see, such a straightforward construction turns out to be a {\it nonminimal} one. In this paper, we investigate the possibility of realizing supersymmetric quantum mechanical bosonization constructions in a {\it minimal way}, in terms of one bosonic degree of freedom in the simplest case. As we shall see, the crucial difference of the minimal bosonization scheme from the nonminimal one is coded in relation (\ref{af}). 
For the first time the possibility of revealing superalgebraic structures in a quantum system of one bosonic oscillator was pointed out, probably, in ref. \cite{cromb}, where it was noted that the $osp(1\vert 2)$ superalgebra is the spectrum generating algebra of the system. Explicit realization of this algebra in terms of creation-annihilation bosonic operators was given in ref. \cite{macmaj}, and subsequently generalized in ref. \cite{mac} to the case of Vasiliev deformed bosonic oscillator. This deformed bosonic oscillator was introduced in ref. \cite{vas1} in the context of higher spin algebras, and subsequently was used for investigation of the quantum mechanical Calogero model \cite{bri1}--\cite{bri2} and for constructing (2+1)-dimensional anyonic field equations \cite{ply1}. The deformed Heisenberg algebra, corresponding to the deformed bosonic oscillator of ref. \cite{vas1}, involves the Klein operator as an essential object, which introduces $Z_2$-grading structure on the Fock space of the system. Such a structure, in turn, is an essential ingredient of the $N=2$ supersymmetry, which was interpreted in ref. \cite{mac} as a {\it hidden supersymmetry} of the {\it deformed} bosonic system. In the present paper we shall demonstrate that a `hidden' $N=2$ supersymmetry, revealed in ref. \cite{mac}, has a nonlocal nature analogous to that of the standard bosonization constructions for fermionic systems \cite{klei,bosf,bos1,garb}. We shall show that the simplest $N=2$ SUSY system, constructed on the basis of the deformed Heisenberg algebra, is the bosonized $N=2$ supersymmetric 2-body Calogero model, and that the bosonization constructions of $N=2$ SUSY can be generalized to the case of OSp(2$\vert$2) supersymmetry. Moreover, it will be shown how the simplest bosonization constructions can be generalized to the case corresponding (formally) to the Witten supersymmetric quantum mechanics. This generalization will allow us to demonstrate that the bosonized supersymmetric quantum mechanics constructed on the basis of the deformed Heisenberg algebra gives the same results as the general supersymmetric bosonization scheme realized on the basis of the Heisenberg algebra of ordinary, undeformed, bosonic oscillator. The paper is organized as follows. In section 2 we realize the simplest $N=2$ SUSY system with the help of the Heisenberg algebra supplied with the Klein operator which, in the coordinate representation, can be considered as a parity operator. We show that such a construction contains both phases of exact and spontaneously broken SUSY \cite{wit,alv}. Here these two phases are distinguished by the sign parameter being present in the bosonized $N=2$ SUSY generators. We trace out a formal analogy between this simplest bosonized SUSY system and the simplest system of the Witten supersymmetric quantum mechanics \cite{wit}. The latter one is the Nicolai superoscillator \cite{nic}, which, in contrast to the constructed system, contains only the phase of unbroken SUSY. Proceeding from the phase of the broken SUSY, we construct fermionic creation-annihilation operators, i.e. realize a Bose-Fermi (Klein) transformation. This allows us to reveal the point where the minimal bosonization of supersymmetry essentially differs from the nonminimal one. In section 3 we generalize the constructions to the case of deformed bosonic oscillator \cite{vas1}, whose creation-annihilation operators satisfy the deformed Heisenberg algebra involving the Klein operator. 
The specific feature of such a generalization is that in the phase of spontaneously broken supersymmetry, the scale of supersymmetry breaking is governed by the deformation parameter of the system. On the other hand, the phase of exact supersymmetry turns out to be isospectral to the corresponding phase of the simplest bosonized $N=2$ SUSY system from section 2. We show that this generalized construction, connected with the deformed Heisenberg algebra, gives the supersymmetric extension of the 2-body Calogero model \cite{calog} realized, in contrast to the standard approach \cite{freed,bri2}, without extending the system by fermionic degrees of freedom. In section 4 we demonstrate that the construction admits an extension to the case of the OSp(2$\vert$2) supersymmetry. This means, in particular, that the corresponding $osp(2\vert 2)$ superalgebra is realizable as an operator algebra for the quantum mechanical 2-body nonsupersymmetric Calogero model. On the other hand, it is the algebra of a dynamical symmetry for the bosonized supersymmetric extension of the 2-body Calogero model presented in section 3. Thus, we bosonize all the constructions of refs. \cite{freed,bri2} corresponding to the case of the 2-body supersymmetric Calogero model. In section 5 we generalize the $N=2$ SUSY bosonization constructions to the case which corresponds (formally) to the general case of the Witten supersymmetric quantum mechanics. Here we show that from the point of view of the bosonized supersymmetric constructions, the deformed Heisenberg algebra gives the same general results as the undeformed Heisenberg algebra. A priori, this fact is not surprising, since the bosonization scheme is based on the use of the Klein operator, and from the point of view of its realization as a parity operator, it is exactly one and the same object for both cases. In section 6 we list some problems which may be interesting for further consideration. \nsection{The simplest $N=2$ SUSY system} We shall begin with the construction of the simplest bosonized supersymmetric system. This will give the basis for subsequent generalizations and will allow us to demonstrate the nontrivial relationship of the bosonized supersymmetry to the standard supersymmetry realized by supplying the bosonic system with independent fermionic operators \cite{wit,salhol}. So, let us consider the ordinary bosonic oscillator with the operators $a^{+}$ and $a^{-}$ satisfying the commutation relation $ [a^{-},a^{+}]=1, $ and introduce the Klein operator $K$ defined by the relations \begin{equation} K^{2}=1,\quad \{K,a^{\pm}\}=0. \label{kle} \end{equation} This operator separates the complete orthonormal set of states $|n>=(n!)^{-1/2}(a^+)^n \vert 0>$, $n=0,1,\ldots$, $a^-\vert 0>=0$, into even and odd subspaces: $ K|n>=\kappa\cdot (-1)^{n}|n>, \label{z2} $ and, so, it introduces a $Z_2$-grading structure on the Fock space of the bosonic oscillator. Without loss of generality, we fix the sign factor as $\kappa=+1$. The operator $K$ can be realized as $ K=\exp(i\pi N), $ or in the explicitly hermitian form, \begin{equation} K=\cos \pi N, \label{kcos} \end{equation} with the help of the number operator $N=a^{+}a^{-}$, $ N|n>=n|n>. 
$ Before going over to the SUSY constructions, we note that in the coordinate representation, where the creation-annihilation operators are realized as $a^{\pm}=(x\mp ip)/\sqrt{2},$ $p=-id/dx$, the Klein operator can be considered as the parity operator, whose action is defined by the relation $K\psi(x)=\psi(-x)$, and, so, the space of wave functions is separated into even and odd subspaces, \begin{equation} K\psi_{\pm}=\pm\psi_\pm(x),\quad \psi_\pm(x)= \frac{1}{2}(\psi(x)\pm\psi(-x)), \label{kpsi} \end{equation} in correspondence with relations (\ref{kle}) and the above mentioned choice of the sign parameter $\kappa$. It is the relations (\ref{kpsi}) that we shall consider as defining the Klein operator in the coordinate representation. But, on the other hand, if here we write the exact analog of the expression (\ref{kcos}), $ K=\sin (\pi H_0), $ $ H_{0}=\frac{1}{2}(x^{2}-{d^{2}}/{dx^{2}}), $ we shall immediately reveal the hidden nonlocal character of the bosonization constructions. Now let us proceed to the supersymmetric constructions. For realizing $N=2$ supersymmetry, we shall construct the mutually conjugate nilpotent operators $Q^{+}$ and $Q^{-}=(Q^{+})^{\dagger}$, $Q^{\pm 2}=0$. We shall look for the simplest possible realization of such operators in the form $ Q^{+}=\frac{1}{2}a^{+}(\alpha+\beta K)+\frac{1}{2}a^{-}(\gamma+\delta K), $ which is linear in the oscillator variables $a^{\pm}$ but contains also the dependence on the Klein operator $K$. Then, the nilpotency condition, $ Q^{+2}=Q^{-2}=0, $ gives the following restriction on the complex number parameters: $ \beta=\epsilon\alpha,\quad \delta=\epsilon\gamma, $ where $\epsilon=\pm$. Therefore, we have two possibilities for choosing operator $Q^{+}$: $ Q^{+}_{\epsilon}=(\alpha a^{+}+\gamma a^{-})\Pi_{\epsilon},$ which are distinguished by the sign parameter. Here we introduced a notation $\Pi_{\epsilon}$ for hermitian operators $ \Pi_{\pm}=\frac{1}{2}(1\pm K) $ being the projectors: $\Pi_{\pm}^{2}=\Pi_{\pm},$ $\Pi_{+}\Pi_{-}=0,$ $\Pi_{+}+\Pi_{-}=1.$ {}From the explicit form of the anticommutator, $ \{Q^{+}_{\epsilon},Q^{-}_{\epsilon}\}= a^{+2}\alpha\gamma^{*}+a^{-2}\alpha^{*}\gamma +\frac{1}{2}\{a^{+},a^{-}\}(\gamma\gamma^{*}+\alpha\alpha^{*}) -\frac{1}{2}\epsilon K [a^{-},a^{+}] (\gamma\gamma^{*}-\alpha\alpha^{*}), $ we conclude that it will commute with the number operator $N$ if we choose the parameters in such a way that $\alpha\gamma^{*}=0$. As a consequence, in this case the spectra of the corresponding Hamiltonians, $H_\epsilon=\{Q^{+}_{\epsilon},Q^{-}_{\epsilon}\}$, $\epsilon=\pm$, will be the simplest, linear in $n$. We can put $\alpha=0$ and normalize the second parameter as $\gamma\gamma^*=1$. The remaining phase factor can be removed by the unitary transformation of the oscillator operators $a^{\pm}$, and, so, finally we arrive at the nilpotent operators in the compact form: \begin{equation} Q^{+}_{\epsilon}=a^{-}\Pi_{\epsilon},\quad Q^{-}_{\epsilon}=a^{+}\Pi_{-\epsilon}. \label{qexp} \end{equation} They together with the operator \begin{equation} H_{\epsilon}=\frac{1}{2}\{a^{+},a^{-}\} -\frac{1}{2}\epsilon K [a^{-},a^{+}] \label{hexp1} \end{equation} form the $N=2$ (or $s(2)$, according to the terminology of ref. \cite{cromb}) superalgebra: \begin{equation} Q^{\pm2}_{\epsilon}=0,\quad \{Q^{+}_{\epsilon},Q^{-}_{\epsilon}\} =H_{\epsilon},\quad [Q^{\pm}_{\epsilon},H_{\epsilon}]=0. 
\label{salg} \end{equation} Note that the hermitian supercharge operators $Q^{1,2}_{\epsilon}$, $ Q^{\pm}_{\epsilon}=\frac{1}{2}(Q^{1}_{\epsilon}\pm iQ^{2}_{\epsilon}), $ $ \{Q^{i}_{\epsilon},Q^{j}_{\epsilon}\}=2\delta^{ij}H_{\epsilon}, $ have the following form in terms of coordinate and momentum operators: \begin{equation} Q^{1}_{\epsilon}=\frac{1}{\sqrt{2}}(x+i\epsilon pK),\quad Q^{2}_{\epsilon} =\frac{1}{\sqrt{2}}(p-i\epsilon xK)=-i\epsilon Q^{1}_{\epsilon}K. \label{q12} \end{equation} Let us consider the spectrum of the constructed SUSY Hamiltonian (\ref{hexp1}). In the case when $\epsilon=-$, the states $|n>$ are the eigenstates of the operator $H_{-}$ with the eigenvalues \begin{equation} E_{n}^{-}=2[n/2]+1, \label{eb} \end{equation} where $[n/2]$ means the integer part of $n/2$. Therefore, here $E_{n}^{-}>0$ and all the states $|n>$ and $|n+1>$, $n=2k$, $k=0,1,\ldots$, are paired in supermultiplets, i.e. we have the case of spontaneously broken supersymmetry. For $\epsilon=+$ we have the case of exact supersymmetry, characterized by the spectrum \begin{equation} E_{n}^{+}=2[(n+1)/2]. \label{ee} \end{equation} Here the vacuum state, being a SUSY singlet, has the energy $E_{0}^{+}=0$, whereas $E_{n}^{+}=E_{n+1}^{+}>0$ for $n=2k+1, k=0,1,\ldots$. Thus, we have realized the simplest $N=2$ SUSY system in terms of one bosonic oscillator. In the coordinate representation, this system is given by the supercharge operators (\ref{q12}) and by the Hamiltonian \begin{equation} H_\epsilon=\frac{1}{2}\left(-\frac{d^2}{dx^2}+x^2 -\epsilon K\right). \label{hsch} \end{equation} The structure of these operators formally is similar to the structure of corresponding operators in Witten supersymmetric quantum mechanics \cite{wit} for the simplest system of the Nicolai superoscillator \cite{nic}, where, in particular, the diagonal Pauli matrix $\sigma_3$ is present in Hamiltonian instead of parity operator $K$. But this difference turns out to be crucial. It reveals itself in the fact that the constructed system contains both phases of exact and spontaneously broken SUSY, which are distinguished by the parameter $\epsilon$, whereas in the case of Witten supersymmetric quantum mechanics both cases $\epsilon=+$ and $\epsilon=-$ give one and the same superoscillator system with unbroken SUSY. Note also that in the present system in the phase of exact SUSY ($\epsilon=+$), the energy level intervals in spectrum (\ref{ee}) are twice as much as those for the corresponding spectrum of the superoscillator \cite{nic}. One can further extend the formal analogy with the simplest Nicolai-Witten SUSY system. Indeed, due to the property $E_{n}^{-}>0$, taking place for $\epsilon=-$, we can construct the Fermi oscillator operators \begin{equation} f^{\pm}=\frac{Q^{\mp}_{-}}{\sqrt{H_{-}}} \label{fdef} \end{equation} satisfying the relations $ \{f^{+},f^{-}\}=1$ and $f^{\pm 2}=0.$ So, we get a Bose-Fermi (Klein) transformation in terms of one bosonic oscillator. With the help of these fermionic operators, satisfying the relation $\{K,f^{\pm}\}=0$, we can present the hamiltonian $H_{\epsilon}$, given by eq. (\ref{hexp1}) or (\ref{hsch}), in the original form of the superoscillator Hamiltonian \cite{nic}: \begin{equation} H_{\epsilon}=\frac{1}{2}\{a^{+},a^{-}\}+\epsilon \frac{1}{2}[f^{+},f^{-}]. 
\label{haf} \end{equation} The formal character of such a coincidence can be stressed once more by the fact that the bosonic creation-annihilation operators and the fermionic ones do not commute, \begin{equation} [a^{\pm},f^{\pm}]\neq 0. \label{af} \end{equation} This noncommutativity reveals the essential difference between the present minimal SUSY bosonization scheme and the nonminimal one, described in the previous section. Recall that in the nonminimal scheme the fermionic operators can be constructed from some bosonic operators $\tilde{a}{}^{\pm}$ independent of $a^{\pm}$. Therefore, in the nonminimal bosonization scheme we would have the Hamiltonian in the same form (\ref{haf}) but with $[a^\pm,f^\pm]=0$, which would give the bosonized system exactly corresponding to the Nicolai superoscillator in the phase of exact supersymmetry. \nsection{Supersymmetric 2-body Calogero model} We now pass over to generalizations of the simplest supersymmetric constructions presented above, and turn to the `$\nu$-deformed' bosonic oscillator system defined by the deformed Heisenberg algebra \cite{vas1} \begin{equation} [a^{-},a^{+}]=1+\nu K. \label{def} \end{equation} Here the Klein operator $K$ is again given by the relations of the form (\ref{kle}). In the coordinate representation it can be realized as a parity operator with the help of eq. (\ref{kpsi}), whereas the deformed creation-annihilation operators can be realized in the form generalizing the ordinary case of the undeformed ($\nu=0$) bosonic oscillator \cite{bri1}--\cite{bri2}: \begin{equation} \label{pdef} a^{\pm}=\frac{1}{\sqrt{2}}(x\mp ip), \quad p=-i\left(\frac{d}{dx}-\frac{\nu}{2x}K\right). \end{equation} However, it will be more convenient to generalize the previous constructions in terms of these deformed creation-annihilation operators and the corresponding Fock space, and then to pass over to the coordinate representation. For this purpose, let us introduce the vacuum state and put the sign factor $\kappa=+1$, so that $K\vert 0>=\vert 0>.$ We find that the operator $a^{+}a^{-}$ acts on the states $|n>=C_n (a^+)^n\vert 0>$ in the following way: $ a^{+}a^{-}|n>=[n]_{\nu}|n>, $ $ [n]_{\nu}=n+\frac{\nu}{2}(1+(-1)^{n+1}). $ {}From here we conclude that in the case when $ \nu >-1, $ the space of the unitary representation of the algebra (\ref{def}), (\ref{kle}) is given by the complete set of the orthonormal states $|n>$, in which the corresponding normalization coefficients can be chosen as $ C_{n}=([n]_{\nu}!)^{-1/2}, $ $[n]_{\nu}!=\prod^{n}_{k=1}[k]_{\nu}$. Then, proceeding from eq. (\ref{def}), one can get the following expression for the number operator $N$, $N\vert n>=n\vert n>$, in terms of the operators $a^{\pm}$: $ N=\frac{1}{2}\{a^{-},a^{+}\}-\frac{1}{2}(\nu+1). $ Therefore, we can realize the Klein operator $K$ in terms of the operators $a^{\pm}$ by means of eq. (\ref{kcos}), and the constructions carried out with the use of ordinary bosonic oscillator operators can be repeated here in the same way. So, we get the supercharges and Hamiltonian in the form of eqs. (\ref{qexp}) and (\ref{hexp1}), respectively. Then, again, we find that $\epsilon=+$ corresponds to the case of exact supersymmetry. Here the states $|n>$ are the eigenstates of the Hamiltonian $H_{+}$ with the same spectrum (\ref{ee}) as in the case of the Heisenberg algebra ($\nu=0$). 
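The Fock-space construction just described is easy to check numerically on a truncated basis. A short sketch (purely illustrative, assuming numpy) that builds the deformed matrices, verifies the algebra (\ref{def}) and reproduces the $\epsilon=+$ spectrum (\ref{ee}) independently of $\nu$:
\begin{verbatim}
# Illustrative sketch: nu-deformed oscillator on a truncated Fock space.
import numpy as np

D, nu = 13, 0.4
n = np.arange(D)
box = n + 0.5 * nu * (1.0 + (-1.0)**(n + 1))   # [n]_nu
a = np.diag(np.sqrt(box[1:]), k=1)             # a^-,  a^-|n> = sqrt([n]_nu)|n-1>
ad = a.T                                       # a^+
K = np.diag((-1.0)**n)                         # Klein operator, kappa = +1

comm = a @ ad - ad @ a
# [a^-, a^+] = 1 + nu K  (the last diagonal entry suffers from the truncation)
assert np.allclose(np.diag(comm)[:-1], 1.0 + nu * np.diag(K)[:-1])

H_plus = 0.5 * (ad @ a + a @ ad) - 0.5 * (+1) * K @ comm
print(np.diag(H_plus)[:8])   # 0, 2, 2, 4, 4, 6, 6, 8  -- spectrum (ee), independent of nu
\end{verbatim}
The same matrices with $\epsilon=-$ reproduce the shifted spectrum of the spontaneously broken phase discussed next.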
On the other hand, for $\epsilon=-$ we have the case of spontaneously broken supersymmetry with the shifted energy spectrum: instead of (\ref{eb}), we have here \begin{equation} E_{n}^{-}=2[n/2]+1+\nu. \label{scale} \end{equation} Hence, in this case the shift of the energy (the scale of the supersymmetry breaking) is defined by the deformation parameter, and here we have $E_{n}^{-}>0$ for all $n$ due to the restriction $\nu>-1$, and, therefore, in the case of the deformed bosonic oscillator we can also realize the Bose-Fermi transformation with the help of the relation of the same form (\ref{fdef}) as in the case of the ordinary oscillator. So, from the point of view of the SUSY constructions, the deformation of the Heisenberg algebra reveals itself in the scale of supersymmetry breaking. Now, let us present the hamiltonian (\ref{hexp1}) in the coordinate representation with the help of relations (\ref{pdef}) and (\ref{kpsi}): \begin{eqnarray} H_\epsilon&=& -\frac{1}{2}\left(\frac{d}{dx}+ \left(\epsilon x-\frac{\nu}{2x}\right)K\right)^2 \label{q22}\\ &=&\frac{1}{2}\left(-\frac{d^2}{dx^2}+x^2+\frac{\nu^2}{4x^2}- \frac{\nu}{2x^2}K-\epsilon(\nu+K)\right). \label{supcal} \end{eqnarray} The expression (\ref{q22}) is a square of the hermitian supercharge operator $Q^2_\epsilon$ defined by eq. (\ref{q12}), and the system can be interpreted as a particle minimally coupled to a specific `gauge field' given by the operator-valued potential $V(x)=i(\epsilon x -\nu/2x)K$. For $\nu=0$, it is reduced to the potential $V=i\epsilon xK$, corresponding to the simplest supersymmetric system considered in the previous section. If we omit the last term $\epsilon(\nu+K)$ from eq. (\ref{supcal}), we reduce the present supersymmetric Hamiltonian to the Hamiltonian of the 2-body (nonsupersymmetric) Calogero model \cite{calog} (see refs. \cite{bri1,bri2}). It is the use of the deformed Heisenberg algebra that allowed the authors of refs. \cite{bri1}--\cite{bri2} to simplify considerably the problem of solving $n$-body Calogero model. So, we see that the same algebra allows us to get $N=2$ supersymmetric extension of the 2-body Calogero model without extending the initial system by fermionic creation-annihilation operators. \nsection{OSp(2$\vert$2) supersymmetry} Now we are going to demonstrate that the bosonization constructions of the $N=2$ SUSY admit the generalization to the case of more broad OSp(2$\vert 2)$ supersymmetry. This will allow us to get some further results connected with the 2-body Calogero model. For the purpose, we note that the algebra of the OSp(2$\vert$2) supersymmetry contains $s(2)$ and $osp(1\vert 2)$ superalgebras as subalgebras \cite{osp2}. Then, using the results of papers \cite{vas1,mac,ply1} on realization of $sl(2)$ algebra and more broad $osp(1\vert 2)$ superalgebra on the Fock space of the {\it deformed} bosonic oscillator, we construct the following operators: \begin{eqnarray} &T_3=\frac{1}{2}(a^{+}a^{-}+a^{-}a^{+}),\quad T_\pm=\frac{1}{2}(a^{\pm})^{2}, \quad J=-\frac{1}{2}\epsilon K[a^-,a^+],& \label{eveng}\\ &Q^\pm =Q^\mp_\epsilon ,\quad S^\pm=Q^\mp_{-\epsilon}.& \label{oddg} \end{eqnarray} The operators (\ref{eveng}) and (\ref{oddg}) are even and odd generators of OSp(2$\vert$2) supergroup. 
Indeed, they form $osp(2|2)$ superalgebra given by the nontrivial (anti)commutators \begin{eqnarray} &[T_3 ,T_\pm ]=\pm 2T_\pm,\quad [T_- ,T_+ ]=T_3,&\nonumber\\ &\{S^+ ,Q^+ \}=T_+ ,\quad \{Q^+ ,Q^- \}=T_3 +J,\quad \{S^+ ,S^- \}=T_3 -J,&\nonumber\\ &[T_+ ,Q^- ]=-S^+ ,\quad [T_+ , S^- ]=-Q^+,\quad [T_3 ,Q^+ ]=Q^+,\quad [T_3, S^- ]=-S^- ,&\nonumber\\ &[J,S^- ]=-S^- ,\quad [J,Q^+ ]=-Q^+ ,& \label{osp} \end{eqnarray} and by corresponding other nontrivial (anti)commutators which can be obtained from eqs. (\ref{osp}) by hermitian conjugation. Therefore, even generators (\ref{eveng}) form the subalgebra $sl(2)\times u(1)$, whereas $s(2)$ superalgebra (\ref{salg}), as a subalgebra, is given by the sets of generators $Q^\pm$ and $T_3+J$, or $S^\pm$ and $T_3-J$. Moreover, the operators $a^{\pm}$, being odd generators of $osp(1\vert 2)$ superalgebra (whereas operators $T_3$ and $T_{\pm}$ are its even generators) \cite{mac}, are expressed in terms of odd generators of $osp(2|2)$ superalgebra as $ a^\pm =Q^\pm + S^\pm. $ Hence, both phases of exact and spontaneously broken $N=2$ SUSY, discussed above, are contained in the extended bosonized OSp(2$|$2) supersymmetry. Thus, we can conclude that the $osp(2\vert 2)$ superalgebra can be realized as an operator algebra of the 2-body (nonsupersymmetric) Calogero model with the help of the deformed Heisenberg algebra involving the Klein (parity) operator $K$. Moreover, the given construction means that the OSp(2$\vert$2) supersymmetry is a dynamical symmetry for the bosonized supersymmetric extension of the 2-body Calogero model presented in the previous section. Note that the supersymmetric extension of the $n$-body Calogero model \cite{calog} was realized in ref. \cite{freed} in a standard way by introducing fermionic degrees of freedom into initial nonsupersymmetric system. In ref. \cite{bri2}, the supersymmetric extension of the $n$-body Calogero model was investigated with the help of the $n$-extended deformed Heisenberg algebra supplied with the corresponding set of the fermionic creation-annihilation operators, where OSp(2$\vert$2) supersymmetry was also revealed as a dynamical symmetry of the model of ref. \cite{freed}. Therefore, the constructions given here bosonize corresponding SUSY constructions of ref. \cite{bri2} for the 2-body case. \nsection{Bosonized supersymmetric quantum mechanics} We turn now to the generalization of the previous bosonization constructions of the $N=2$ SUSY to the case corresponding to the more complicated quantum mechanical $N=2$ supersymmetric systems \cite{wit,salhol,gen}. To this end, consider the operators $ \tilde{Q}^{\pm}_{\epsilon}=A^{\mp}\Pi_{\pm\epsilon} $ with mutually conjugate odd operators $A^{\pm}=A^{\pm}(a^{+},a^{-})$, $A^{-}=(A^{+})^{\dagger}$, $KA^{\pm}=-A^{\pm}K$. These properties of $A^{\pm}$ guarantee that the operators $\tilde{Q}^{\pm}_{\epsilon}$ are, in turn, mutually conjugate, $\tilde{Q}^{-}_{\epsilon}= (\tilde{Q}^{+}_{\epsilon})^{\dagger}$, and nilpotent: $ (\tilde{Q}{}^{\pm}_{\epsilon})^{2}=0. $ Taking the anticommutator $ \tilde{H}_{\epsilon}=\{\tilde{Q}^{+}_{\epsilon},\tilde{Q}^{-}_{\epsilon}\} $ as the Hamiltonian, we get the $N=2$ superalgebra with the generators $\tilde{Q}_\epsilon$ and $\tilde{H}_\epsilon$. The explicit form of the supersymmetric Hamiltonian $\tilde{H}_\epsilon$ has the form given by eq. (\ref{hexp1}) with operators $a^\pm$ replaced by $A^\pm$. 
Now, let us turn to the coordinate representation, and choose the operators $A^{\pm}$ in the form $A^{\pm}=\frac{1}{\sqrt{2}}(\mp ip+\tilde{W}(x,K))$ with odd function $\tilde{W}(x,K)$, $K\tilde{W}(x,K)=-\tilde{W}(x,K)K$, which generally can depend on the parity operator $K$, and, so, has the form $\tilde{W}=W_0(x)+iW_1(x)K$, where $W_0(x)$ and $W_1(x)$ are real odd functions. We shall call $\tilde{W}$ a superpotential. Taking into account realization (\ref{pdef}) for the deformed momentum operator $p$, as a result we get the following most general form of the $N=2$ supersymmetric Hamiltonian, quadratic in the derivative $d/dx$, \begin{equation} \tilde{H}_{\epsilon}= -\frac{1}{2}\left(\frac{d}{dx}+ i\epsilon W_1 +\left(\epsilon W_0-\frac{\nu}{2x} \right)K\right)^2. \label{swit} \end{equation} Here the Hamiltonian is written formally as a square of the corresponding supercharge operator $\tilde{Q}{}^2_\epsilon =i(\tilde{Q}{}^-_\epsilon -\tilde{Q}{}^+_\epsilon)$. Therefore, from the point of view of the present constructions, the $N=2$ supersymmetric system given by the superpotential $\tilde{W}=W_0+iW_1 K$ in the case of the deformed Heisenberg algebra (\ref{def}), (\ref{kle}) is equivalent to supersymmetric system given by the shifted superpotential $\tilde{W}=(W_0-\epsilon \nu/2x)+iW_1 K$ in the undeformed case ($\nu=0$). In particular, in terms of the ordinary ($\nu=0$) Heisenberg algebra, the $N=2$ supersymmetric extension of the 2-body Calogero model, constructed in section 4, is the supersymmetric system given by the superpotential with $W_1=0$ and $W_0=x-\epsilon \nu/2x$. Moreover, as follows from the explicit form of the supersymmetric Hamiltonian (\ref{swit}), the function $W_1$ can be eliminated from the superpotential by the phase transformation of a wave function, $\psi(x)\rightarrow \tilde{\psi}(x)= \exp(-i\epsilon\int_{}^{x}W_1(x')dx')\psi(x)$. Therefore, finally we arrive at the following general form of the supersymmetric Hamiltonian and corresponding selfconjugate supercharge operators, \begin{eqnarray} &&H=\frac{1}{2}\left(-\frac{d^2}{dx^2}+W^2(x)- W'\cdot K\right), \nonumber\\ &&Q_1=iQ_2\cdot K,\quad Q_2=-\frac{i}{\sqrt{2}}\left(\frac{d}{dx}+ W(x)\cdot K\right), \label{swit0} \end{eqnarray} which are defined by {\it odd function} $W(x)$ being a superpotential. This formally corresponds to the case of Witten supersymmetric quantum mechanics \cite{wit}, in which $N=2$ supersymmetric system is also defined by one function being a superpotential. However, it is necessary to stress once more that here the superpotential is an odd function, and, as it has been shown with the help of the simplest system given by a linear superpotential $W=\epsilon x$, the present construction relates to the Witten supersymmetric quantum mechanics in a nontrivial way. \nsection{Concluding remarks and outlook} When the operator $K$ is understood as the Klein operator given by eq. (\ref{kcos}) in terms of creation-annihilation operators, the described bosonization of supersymmetric quantum mechanics is minimal in the sense that it is realized on the Fock space of one (ordinary or deformed) bosonic oscillator. On the other hand, in the coordinate representation the operator $K$ can be considered as a parity operator defined by relation (\ref{kpsi}). 
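For instance, in this coordinate picture it is straightforward to write down completely explicit examples of the bosonized systems (\ref{swit0}). Taking, purely as an illustration, the odd superpotential $W(x)=x^{3}$, one gets \[ H=\frac{1}{2}\left(-\frac{d^2}{dx^2}+x^{6}-3x^{2}K\right),\qquad Q_2=-\frac{i}{\sqrt{2}}\left(\frac{d}{dx}+x^{3}K\right), \] and since $[H,K]=0$, the Hamiltonian reduces on the even and odd subspaces ($K\psi=\pm\psi$) to the pair of partner operators $\frac{1}{2}\left(-\frac{d^2}{dx^2}+x^{6}\mp 3x^{2}\right)$.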
We have illustrated a nontrivial relationship of the construction to the Witten supersymmetric quantum mechanics with the help of the simplest $N=2$ SUSY system, and have shown that the essential difference between the minimal SUSY bosonization scheme and the nonminimal one, discussed at the beginning of the paper, is coded in relation (\ref{af}). The general case of the bosonized Witten supersymmetric quantum mechanics is given by the Hamiltonian and supercharges (\ref{swit0}), which, in turn, are defined by odd superpotential. So, it would be interesting to investigate the general properties of the bosonized $N=2$ SUSY and establish its exact relationship to the Witten supersymmetric quantum mechanics. We have revealed the OSp(2$\vert$2) supersymmetry in the system being the bosonized supersymmetric 2-body Calogero model. The open problem here is establishing the criteria for the existence of such an extension of the $N=2$ supersymmetry in general case of the bosonized Witten supersymmetric quantum mechanics (\ref{swit0}). The classical analog of the Witten supersymmetric quantum mechanics is formulated on the superspace containing Grassmann variables being the classical analogs of the fermionic operators, and the corresponding path-integral formulation of the theory is well known (see, e.g., ref. \cite{das}). What is the classical analog and corresponding path-integral formulation for the bosonized supersymmetric system given by eq. (\ref{swit0})? This question is very interesting because a priori it is not clear at all how the supersymmetry will reveal itself without using Grassmann variables (in this respect see, however, ref. \cite{pls}). Possibly, the answer can be obtained using recent result on realization of the classical analog of the $q$-deformed oscillator with the help of constrained systems \cite{sha} and the known observation on the common structure of different bosonic deformed systems \cite{dasc}. As a further generalization of the constructions, one could investigate a possibility to bosonize S(N)-supersymmetric, $N>2$, \cite{cromb} and parasupersymmetric \cite{para} quantum mechanical systems. Another obvious development of the present bosonization constructions would be their generalization to the case of $n>1$ bosonic degrees of freedom. Possibly, in this direction $osp(2\vert 2)$ superalgebra could be revealed in the form of the operator algebra for the general case of the $(n+1)$-body nonsupersymmetric Calogero model, and, moreover, a supersymmetric extension of this model could be constructed without supplying the system with fermionic degrees of freedom. Then, taking the limit $n\rightarrow\infty$, one could try to generalize SUSY bosonization constructions to the case of (1+1)-dimensional quantum field theory. In connection with such hypothetical possible generalization it is necessary to point out that earlier some different problem was investigated by Aratyn and Damgaard \cite{damg}. They started from the (1+1)-dimensional supersymmetric field system containing free complex scalar and Dirac fields, and bosonized fermionic field in terms of an independent scalar field with the help of the Mandelstam nonlocal constructions \cite{bosf}, i.e. used nonminimal bosonization scheme according to our terminology. As a result, they arrived at the quantum field system of free bosonic scalar fields, described by a local action. 
On the other hand, due to relation (\ref{af}) one can conjecture that the corresponding quantum field generalization of the present minimal SUSY bosonization constructions will give different results. Finally, it seems interesting to investigate the possibility of realizing a bosonized supersymmetric extension of the (2+1)-dimensional anyonic equations constructed in ref. \cite{ply1} with the help of the deformed Heisenberg algebra. We hope to consider the problems listed here in future publications. $\ $ The author thanks P.H. Damgaard, T.H. Hansson, A. Niemi and P.~Di~Vecchia for useful discussions. The research was supported by MEC-DGICYT (Spain).
\section{Introduction} A {\em double quantum group} is a Hopf algebra of the form $A \Join A$ where $A$ is a braided Hopf algebra and $A\Join A$ denotes the generalized Drinfeld double constructed using the pairing defined by the braiding. This construction originated independently in Podles and Woronowicz's construction of the quantum Lorentz group and in early work of Majid on bicrossproducts. Here we analyze this construction from an algebraic point of view with emphasis on the case when $A$ is the standard quantum group ${\Bbb{C}}_q[G]$ for $G$ a connected semi-simple complex algebraic group. In this case the double ${\Bbb{C}}_q[D(G)] = {\Bbb{C}}_q[G] \Join {\Bbb{C}}_q[G]$ can be thought of as a non-standard quantization of $\Bbb{C}[G \times G]$. One reason why the double is a particularly natural object is the existence of surjective Hopf algebra maps $m : {\Bbb{C}}_q[D(G)] \to {\Bbb{C}}_q[G]$ and $\theta: {\Bbb{C}}_q[D(G)] \to U_q(\g)$. These maps are quantizations of the natural embeddings of $G$ and its dual $G_r$ into the double group $G \times G$ that occur in the analysis of the symplectic leaves of $G$ and of dressing transformations on $G$ \cite{STS,LW,HL1}. The main results we prove here are: \begin{enumerate} \item The category of right ${\Bbb{C}}_q[D(G)]$-comodules is equivalent to the category of right $\Bbb{C}_q[G\times G]$-comodules as a braided rigid monoidal category. \item The natural pairing between ${\Bbb{C}}_q[D(G)]$ and its Faddeev-Reshetikhin-Takhtadjan (FRT) dual $U_q(\frak{d}(\g))$ is non-degenerate. Hence $U_q(\frak{d}(\g))$ has a category of finite dimensional left modules which is equivalent to the category of right ${\Bbb{C}}_q[D(G)]$-comodules, and ${\Bbb{C}}_q[D(G)]$ is the restricted Hopf dual of $U_q(\frak{d}(\g))$ with respect to this category. \item The map $(m \otimes \theta)\Delta : {\Bbb{C}}_q[D(G)] \to {\Bbb{C}}_q[G] \otimes U_q(\g)^{{\text{cop}}}$ is injective. This is a generalization of the ``Iwasawa decomposition'' proved for the quantum Lorentz group in \cite{PW}. More precisely, we show that there is a finite group $\Gamma$ acting on ${\Bbb{C}}_q[G] \otimes U_q(\g)^{{\text{cop}}}$ and an Ore set $\cal{S}$ of ${\Bbb{C}}_q[D(G)]$ such that the induced map from ${\Bbb{C}}_q[D(G)]_{\cal{S}}$ to $({\Bbb{C}}_q[G] \otimes U_q(\g)^{{\text{cop}}})^\Gamma$ is an isomorphism. \item The algebra ${\Bbb{C}}_q[D(G)]$ is Noetherian. \end{enumerate} A key starting point is the observation by Doi and Takeuchi \cite{DT} that the double $A \Join A$ can be realized as a twist of the usual tensor product by a 2-cocycle (a construction dual to Drinfeld's gauge transformation \cite{D1}). From this result, assertion (1) follows easily, and then (2) and the injectivity of the Iwasawa decomposition map follow by standard calculations. The second part of (3) (the identification of the image of the Iwasawa decomposition map) appears to be new even in the case $G = SL(2)$. The proof that ${\Bbb{C}}_q[D(G)]$ is Noetherian uses the approach of Brown and Goodearl \cite{BG}. Since both ${\Bbb{C}}_q[G]$ and $U_q(\g)$ are homomorphic images of ${\Bbb{C}}_q[D(G)]$, this provides an interesting unified proof of the Noetherianity of the standard quantum function algebra and the standard quantized enveloping algebra. The algebra $\Bbb{C}_q[D(SL(2))]$ was introduced and studied (using different constructions) by Podles and Woronowicz in \cite{PW} and by Carow-Watamura, Schlieker, Scholl and Watamura in \cite{CSSW}.
Further work on this and more general ``complex quantum groups'' was done by a variety of authors (see for instance \cite{CW,Ta,DSWZ}). In \cite{M1}, Majid pointed out that these algebras could be constructed using the quantum double construction that we consider here. This approach was also taken up in \cite{CEJS}. This article is, in a sense, a continuation of the approach of \cite{M1,CEJS} with the important distinction that we do not consider the *-algebra structure. All of the above papers are concerned with quantizing the algebra of complex-valued coordinate functions on a simple complex algebraic group $G$, considered as a real Lie group. Thus the relevant notion is a complex Hopf algebra equipped with a *-operation. Here we consider these algebras rather as quantizations of the algebra of regular functions on the double complex algebraic group $G \times G$. However, the underlying Hopf algebra is the same and many of the basic structural results are relevant in both contexts. We also show how to apply the same twisting technique to construct some new non-standard quantum groups. Recall that the Poisson group structures on a semi-simple Lie group $G$ given by a solution of the modified classical Yang-Baxter equation have been classified by Belavin and Drinfeld \cite{BD}. They are given (in part) by triples of the form $(\tau, \bold{B}_1,\bold{B}_2)$ where $\bold{B}_i$ are subsets of the base $\bold{B}$ of the root system for $\frak{g}$ and $\tau$ is a bijection from $\bold{B}_1$ to $\bold{B}_2$ satisfying certain conditions. In the special case where $\bold{B}_1$ and $\bold{B}_2$ are ``disjoint'' we construct quantizations of the associated Poisson group. This construction is closely related to the construction of non-standard quantum groups by Fronsdal and Galindo using the universal $T$-matrix \cite{FG}. The Hopf algebra notation that we use is generally the standard notation used in Sweedler's book, except that we remove the parentheses from the `Sweedler notation', writing $\Delta(x) = \sum x_1 \otimes x_2$ rather than $\sum x_{(1)} \otimes x_{(2)}$. We use $m,\Delta, \epsilon$ and $S$ indiscriminately for the multiplication, comultiplication, counit and antipode in any Hopf algebra. This work originated in discussions between the author and T. Levasseur on non-standard quantum groups while the former was visiting the Universit\'{e} de Poitiers. He would like to thank Levasseur for his significant contribution and the Mathematics Department in Poitiers for their hospitality. \section{Braided Hopf algebras and their doubles} An appropriately general setting in which to view the basic constructions is that of a {\em braided Hopf algebra}. Braided Hopf algebras are also known as dual quasi-triangular, coquasi-triangular or cobraided Hopf algebras. They play the role (roughly) of the dual of a quasi-triangular Hopf algebra and include all of the standard, multi-parameter and non-standard quantizations of semi-simple algebraic groups. The notation follows closely that of Doi and Takeuchi, but many of the ideas occur in the work of Majid \cite{M1} and Larson and Towber \cite{LT}. Let $A$ and $U$ be a pair of Hopf algebras.
A {\em skew pairing} on the ordered pair $(A,U)$ is a bilinear form $\tau$ satisfying \begin{enumerate} \item $\tau(bc, u) = \sum\tau(b,u_1)\tau(c,u_2)$ \item $\tau(b,uv) = \sum\tau(b_1,v)\tau(b_2,u)$ \item $\tau(1,u) = \epsilon(u)$; $\tau(a,1)=\epsilon(a).$ \end{enumerate} A bilinear form on the pair $(A,U)$ is said to be invertible if it is invertible as an element of $(A \otimes U)^*$. Any skew pairing $\tau$ is invertible with inverse given by $$\tau^{-1}(a,u) = \tau(S(a), u).$$ For any bilinear form $\tau$ we denote by $\tau_{21}$ the pairing given by $\tau_{21}(a,b) = \tau(b,a)$. Note that if $\tau$ is a skew pairing, then so is $\tau^{-1}_{21}$. Given a skew pairing on a pair of Hopf algebras $(A,U)$, one may construct a new Hopf algebra, the {\em quantum double} $A \Join_\tau U$ (or just $A \Join U$). As a coalgebra the double is isomorphic to $A \otimes U$. The multiplication is given by the rule: $$(1 \otimes u)(a \otimes 1) = \sum \tau(a_1,u_1)a_2 \otimes u_2 \tau^{-1}(a_3,u_3)$$ or, equivalently, $$\sum (1\otimes u_1)(a_1 \otimes 1) \tau(a_2,u_2) = \sum \tau(a_1,u_1) a_2 \otimes u_2$$ A {\em braided Hopf algebra} is a Hopf algebra $A$ together with an invertible skew pairing $\beta$ on $(A,A)$ such that $$\sum b_1a_1 \beta(a_2,b_2) = \sum \beta(a_1,b_1) a_2b_2.$$ The pairing $\beta$ is called a {\em braiding} on $A$. Note that the antipode in a braided algebra is always bijective \cite{Do}. Given a braided Hopf algebra $A$ we may form the double Hopf algebra $A\Join_\beta A$. A {\em 2-cocycle} on a Hopf algebra $A$ is an invertible pairing $\sigma : A \otimes A \to k$ such that for all $x$, $y$ and $z$ in $A$, $$\sum \sigma(x_{1},y_{1}) \sigma(x_{2}y_{2},z) = \sum \sigma(y_{1},z_{1}) \sigma(x,y_{2}z_{2}) $$ and $\sigma(1,1) = 1$. Given a 2-cocycle $\sigma$ on a Hopf algebra $A$, one can twist the multiplication to get a new Hopf algebra $A_\sigma$ \cite{DT}. The new multiplication is given by $$x \cdot y = \sum \sigma(x_{1},y_{1}) x_{2} y_{2} \sigma^{-1}(x_{3},y_{3}). $$ The comultiplication in $A_\sigma$ remains the same. In particular, $A$ and $ A_\sigma$ are isomorphic as coalgebras. This construction is essentially the dual of Drinfeld's gauge transformation \cite{D1}. Recall that the category of right comodules over a braided Hopf algebra has a natural structure of a braided rigid monoidal category \cite{LT}. A 2-cocycle $\sigma$ on a braided Hopf algebra can be used to define an equivalence of such categories between $\operatorname {Comod} A$ and $\operatorname {Comod} A_\sigma$ when $A_\sigma$ is equipped with the braiding defined below. \begin{thm} Let $A$ be a braided Hopf algebra with braiding $\beta$. Let $\sigma$ be a 2-cocycle on $A$. Let $A_\sigma$ be the twisted Hopf algebra defined above. Then $\sigma_{21}* \beta* \sigma^{-1}$ (convolution product) is a braiding on $A_\sigma$. Moreover the categories of comodules over $A$ and $A_\sigma$ are equivalent as rigid braided monoidal categories. \end{thm} \begin{pf} See \cite{M2}. \end{pf} Doi and Takeuchi observed in \cite{DT} that the quantum double may be constructed from the tensor product by twisting by a 2-cocycle. Let $\tau$ be an invertible skew pairing between the Hopf algebras $B$ and $H$. Then the bilinear form $[\tau]$ defined on $B\otimes H$ by $$[\tau](b\otimes g, c\otimes h) = \epsilon(b)\epsilon(h)\tau(c,g)$$ is a 2-cocycle and the twisted Hopf algebra $(B \otimes H)_{[\tau]}$ is isomorphic as a Hopf algebra to the quantum double $B\Join_{\tau} H$ \cite[Prop. 2.2]{DT}. 
In particular, if $(A, \beta)$ is a braided Hopf algebra, then the double $A\Join _\beta A$ is isomorphic to $(A \otimes A)_{[\beta]}$. There are a number of different ways of defining a braiding on $A \Join A$. Recall that if $\beta$ is a braiding on $A$, then so is $\beta_{21}^{-1}$. The tensor algebra $A \otimes A$ can therefore be made into a braided algebra in a number of different ways using these two braidings. It turns out that the appropriate choice is the braiding given by $\beta$ on the first component and $\beta_{21}^{-1}$ on the second component which we shall denote by $(\beta,\beta_{21}^{-1})$. Using the above theorem we deduce that $$\gamma = [\beta]_{21} * (\beta,\beta_{21}^{-1})*[\beta]^{-1}$$ is a braiding on $A \Join A$ (cf. \cite[Proposition 2]{M1}). This braiding makes the category of right comodules into a braided category. \begin{thm} \label{comod} Let $(A, \beta)$ be a braided Hopf algebra. Then the double $A\Join A$ has a braiding $\gamma$ given by $$\gamma(a \otimes b, a' \otimes b') = \beta(a,a_1'b_1')\beta^{-1}(a_2'b_2',b). $$ The category $\operatorname {Comod} A\Join A$ is equivalent as a braided rigid monoidal category to $\operatorname {Comod} A\otimes A$. \end{thm} There are two important Hopf algebra homomorphisms associated to the double $A \Join A$ of a braided Hopf algebra $A$. The first is the multiplication map \cite[Prop. 3.1]{DT} $$m:A\Join A \to A.$$ The second is the map $$\theta : A \Join A \to (A^\circ)^{{\text{cop}}}$$ defined by \cite[Thm. 3.2]{DT} $$\theta(x\otimes y ) = \beta(x,-)\beta^{-1}(-,y).$$ Notice that $\theta= m(l^+\otimes l^-)$ where $l^\pm : A \to (A^\circ)^{{\text{cop}}}$ are the maps $l^+(x) = \beta(x,-)$ and $l^-(y) = \beta^{-1}(-,y)$. We denote the image of $\theta$ in $A^\circ$ by $U(A)$ and refer to it as the {\em FRT-dual} of $A$. Recall that $U(A)$ is said to be dense in $A^\circ$ if the pairing between $A$ and $U(A)$ is non-degenerate. Combining the above maps via the comultiplication yields an algebra map $$ \xi = (\theta \otimes m)\Delta : A \Join A \to U(A)^{\text{cop}}\otimes A.$$ Given the braiding $\gamma$ on $A \Join A$ we have an associated FRT-dual $U(A \Join A)$ and maps $l^{\pm}: A \Join A \to U(A \Join A)$. For any Hopf algebra $B$ we shall denote by $\pr{-}{-}$ the pairing between $B^{\circ}$ and $B$. The maps $m$ and $\theta$ have duals $m^*: U(A) \to (A \Join A)^\circ$ and $\theta^* : A^{{\text{op}}} \to (A \Join A)^\circ$ given by $\pr{m^*(u)}{b\otimes b'} = \pr{u}{m(b\otimes b')}= \pr{u}{bb'}$ and $\pr{\theta^*(a)}{b \otimes b'} = \pr{\theta(b\otimes b')}{a}= \sum \beta(b,a_1)\beta^{-1}(a_2,b')$. \begin{lem} Consider the maps $l^{\pm}: A \Join A \to U(A\Join A)$. Then $l^+ = m^* \theta$ and $l^- = S \theta^* m$. In particular, the images of both $m^*$ and $\theta^*$ are contained in $U(A \Join A)$. \end{lem} \begin{pf} Observe that $$ \gamma(a \otimes b, a' \otimes b') = \pr{\theta(a\otimes b)}{m(a' \otimes b')} = \pr{m^* \theta(a\otimes b)}{a' \otimes b'}.$$ Similarly, $$ \gamma^{-1}(a \otimes b, a' \otimes b') = \pr{\theta(a\otimes b)}{S^{-1}m(a' \otimes b')} = \pr{S\theta^* m(a' \otimes b')}{a\otimes b} $$ \end{pf} Recall that the usual pairing between $U(A)$ and $A$ becomes a skew pairing between $U(A)$ and $A^{{\text{op}}}$, allowing us to form the double $U(A)\Join A^{{\text{op}}}$. \begin{thm}\label{iwaua} The map $$\zeta = m(m^* \otimes \theta^*): U(A)\Join A^{{\text{op}}} \to U(A \Join A)$$ is a surjective Hopf algebra map. 
\end{thm} \begin{pf} The lemma implies that the image of $\zeta$ is precisely $U(A\Join A)$. The fact that $\zeta$ is a Hopf algebra map is more or less well-known (see for instance \cite{M1}). It follows from the formula $$ \sum \theta^*(a_1)m^*(u_1) \pr{u_2}{a_2} = \sum \pr{u_1}{a_1} m^*(u_2) \theta^*(a_2) $$ which may be verified directly. \end{pf} \begin{cor} \label{inj1} If $U(A\Join A)$ is dense in $(A\Join A)^\circ$, then the map $$ \xi = (m \otimes \theta)\Delta : A \Join A \to A \otimes U(A)^{{\text{cop}}} $$ is injective. \end{cor} \begin{pf} Notice that the pairing between $A \Join A$ and $U(A)\Join A^{{\text{op}}}$ induced from $\zeta$ is the same as that induced from the map $\xi$. Thus $$ \operatorname {Ker} \xi \subset (U(A)\Join A^{{\text{op}}})^\perp \subset U(A\Join A)^\perp .$$ The density of $U(A\Join A)$ implies that $U(A\Join A)^\perp =0$, so $ \operatorname {Ker} \xi =0$ also. \end{pf} We now give an alternative construction of the braiding on $A \Join A$. This approach is analogous to Drinfeld's construction of a universal $R$-matrix for the Drinfeld double $H^* \Join H$. We follow the reformulation of this construction given in \cite{Ga,Jo}. This construction is also needed later in defining the braiding on the standard quantum groups. Let $A$ and $B$ be Hopf algebras and let $\sigma$ be a skew pairing between $A$ and $B$. Define Hopf algebra maps $$ \Phi_1: A^{{\text{cop}}} \to B^\circ, \quad \Phi_2: B^{{\text{op}}} \to A^\circ$$ by $\Phi_1(a) = \sigma(a,-)$ and $\Phi_2(b) = \sigma(-,b)$. If the pairing $\sigma$ is non-degenerate these maps will be injective. Henceforth we shall assume that this is the case. Now suppose that we have a Hopf pairing $\phi$ between $C$ and $A \Join B$ which identifies $C$ with a subalgebra of the dual of $A \Join B$. There are Hopf algebra maps $$ \theta_1 : C \to B^\circ, \quad \theta_2 : C \to A^\circ$$ defined by $\pr{\theta_1(c)}{b}= \phi(c,1\otimes b)$ and $\pr{\theta_2(c)}{a}=\phi(c,a\otimes 1)$ respectively. In order to construct a braiding on $C$ we need to assume that $\operatorname {Im} \theta_i \subset \operatorname {Im} \Phi_i$ for $i =1$ and $2$. In this case we can construct maps $$ \psi_1 = \Phi_1^{-1} \theta_1 : C \to A^{{\text{cop}}}, \quad \psi_2 = \Phi_2^{-1}\theta_2 : C \to B^{{\text{op}}}$$ \begin{thm} \label{gait} The form $ \beta \in (C \otimes C)^*$ defined by $$ \beta(x,y) = \sigma(\psi_1(x), \psi_2(y))$$ is a braiding on $C$. Let $\pi: A \Join B \to C^*$ be the natural map and let $l^\pm : C \to U(C)^{{\text{cop}}}$ be the maps described above. Then $l^+ = \pi \psi_1$ and $l^- = S \pi \psi_2$. \end{thm} \begin{pf} See \cite[9.4.6]{Jo} \end{pf} We now apply this result in the case where $\sigma$ is the natural skew pairing between $U(A)$ and $A^{\text{op}}$ for some braided Hopf algebra $A$ and $C$ is $A \Join A$. When $U(A\Join A)$ is dense in $(A\Join A)^\circ$, this produces a braiding on $A \Join A$ which we will denote by $\gamma'$. \begin{thm} The braidings $\gamma$ and $\gamma'$ coincide. \end{thm} \begin{pf} The pairing between $U(A)\Join A^{{\text{op}}}$ and $A \Join A$ is given by $$\pr{u \otimes c}{a \otimes b} = \pr{m^*(u)}{a_1\otimes b_1} \pr{\theta^*(c)}{a_2\otimes b_2} =\pr{u}{a_1b_1}\pr{\theta(a_2\otimes b_2)}{c}$$ So $$\pr{\theta_1(a\otimes b)}{c} =\pr{1\otimes c}{a \otimes b}=\pr{\theta(a\otimes b)}{c} = \pr{ \Phi_1 \theta(a\otimes b)}{c}.$$ Hence $\psi_1 =\Phi_1^{-1}\theta_1 =\theta$. 
Similarly, $$\pr{\theta_2(a\otimes b)}{u} =\pr{u\otimes 1}{a \otimes b}=\pr{u}{m(a\otimes b)} = \pr{ \Phi_2 m(a\otimes b)}{u}.$$ Thus $\psi_2 = m$. Hence $$ \gamma'(a \otimes b, a' \otimes b') = \pr{\theta(a\otimes b)}{m(a' \otimes b')} = \gamma(a \otimes b, a' \otimes b')$$ as required. \end{pf} This theorem says that the braiding $\gamma$ is essentially the same object as the universal $T$-matrix of \cite{RS} and \cite{FG}. There is a natural algebra embedding of $U(A \Join A)$ into $U(A) \otimes U(A)$ given by $$ \chi(u) = \sum f(u_1) \otimes f'(u_2) $$ where $f(u)$, $f'(u)$ denotes the restriction of $u$ to the first and second copies of $A$ respectively. Of particular importance is the composition of $\chi$ and $\zeta$. \begin{lem} \label{chir} \begin{enumerate} \item For all $a \in A$, $\chi \theta^*(a) = \sum l^-S(a_1) \otimes l^+S(a_2)$. \item For all $u \in U(A)$, $\chi m^*(u) = \sum u_1 \otimes u_2 = \Delta(u)$. \end{enumerate} \end{lem} \begin{pf} Notice that \begin{align*} \pr{\theta^*(a)}{b \otimes c} & = \pr{\theta(b \otimes c)}{a} = \beta(b, a_1)\beta^{-1}(a_2,c) \\ & = \beta^{-1}(b,S(a_1))\beta(S(a_2),c) = \pr{l^-S(a_1)}{b}\pr{l^+S(a_2)}{c} \end{align*} This proves the first assertion. The second assertion is proved similarly. \end{pf} \section{Double quantum groups and the quantum Lorentz group} We now define the double quantum group associated to a connected complex semi-simple algebraic group $G$. We continue to use the notation of \cite{HLT}. For the convenience of the reader we recall briefly the relevant details. Let $\frak{g}$ be the Lie algebra of $G$. Let $\frak{h}$ be a Cartan subalgebra of $\frak{g}$, $\bold{R}$ the associated root system, $\bold{B} = \{ \alpha_1, \dots, \alpha_n\}$ a basis of $\bold{R}$, $\bold{R}_+$ the set of positive roots and $W$ the Weyl group. We denote by $\bold{P}$ and $\bold{Q}$ the lattices of weights and roots respectively and by $\bold{P}^+$ the set of dominant integral weights. Let $H$ be a maximal torus of $G$ with Lie algebra $\frak{h}$ and denote by $\bold{L}$ the character group of $H$, which we shall identify with a sublattice of $\bold{P}$ containing $\bold{Q}$. Let $(-,-)$ be the Killing form on $\frak{h}^*$ and set $d_i = (\alpha_i,\alpha_i)/2$. Set $\frak{n}^{\pm} = \oplus_{\alpha \in \bold{R}_+} \frak{g}_{\pm \alpha}, \quad \frak{b}^{\pm} = \frak{h} \oplus \frak{n}^{\pm}$. Let $q \in \Bbb{C}^*$ and assume that {\em $q$ is not a root of unity.} Since we need to consider rational powers of $q$ we adopt the following convention. Pick $\hbar \in \Bbb{C}$ such that $q=e^{- \hbar /2}$ and define $q^m = e^{-m \hbar /2}$ for all $m \in \Bbb{Q}$. We set $q_i = q^{d_i}$ for $i = 1, \dots, n$. Denote by $U^0$ the group algebra of $\bold{L}$, $$U^0 = \Bbb{C}[k_\lambda \, ; \, \lambda \in \bold{L}], \quad k_0 =1, \quad k_\lambda k_\mu = k_{\lambda + \mu}.$$ Set $k_i = k_{\alpha_i}$, $ 1 \le i \le n$. The standard quantized enveloping algebra associated to this data is the Hopf algebra $$U_q(\frak{g}) = U^0[e_i, f_i \, ; \, 1 \le i \le n]$$ with defining relations: \begin{gather*} k_\lambda e_j k_\lambda^{-1} = q^{(\lambda,\alpha_j)} e_j, \quad k_\lambda f_j k_\lambda^{-1} = q^{-(\lambda,\alpha_j)} f_j \\ e_if_j - f_je_i = \delta_{ij}(k_i-k_i^{-1})/(q^{d_i}-q^{-d_i}) \end{gather*} and the quantum Serre relations as given in \cite{HLT}. 
The Hopf algebra structure is given by \begin{gather*} \Delta(k_\lambda) = k_\lambda \otimes k_\lambda, \quad \epsilon(k_\lambda) = 1, \quad S(k_\lambda) = k_\lambda^{-1} \\ \Delta(e_i) = e_i \otimes 1 + k_i \otimes e_i, \quad \Delta(f_i) = f_i \otimes k_i^{-1} + 1 \otimes f_i \\ \epsilon(e_i) = \epsilon(f_i) =0, \quad S(e_i) = -k_i^{-1} e_i, \quad S(f_i) = -f_i k_i. \end{gather*} Notice that $U_q(\g)$ depends on $G$ rather than $\frak{g}$. Thus the notation is a little ambiguous. We define subalgebras of $U_q(\g)$ as follows: \begin{gather*} U_q(\frak{b}^+) = U^0[e_i \, ; \, 1 \le i \le n], \quad U_q(\frak{b}^-) = U^0[f_i \, ; \, 1 \le i \le n]. \end{gather*} Notice that $U^0$ and $U_q(\frak{b}^{\pm})$ are Hopf subalgebras of $U_q(\g).$ Let $M$ be a left $U_q(\g)$-module. An element $x \in M$ is said to have {\em weight $\mu \in \bold{L}$} if $k_\lambda x = q^{(\lambda,\mu)}x$ for all $\lambda \in \bold{L}$; we denote by $M_\mu$ the subspace of elements of weight $\mu$. It is known that the category of finite dimensional $U_q(\g)$-modules is a completely reducible braided rigid monoidal category. Set $\bold{L}^+ = \bold{L} \cap \bold{P}^+$ and recall that for each $\lambda \in \bold{L}^+$ there exists a finite dimensional simple module of highest weight $\lambda$, denoted by $L(\lambda)$. One has $L(\lambda)^* \cong L(-w_0 \lambda)$ where $w_0$ is the longest element of $W$. Let $\cal{C}_q$ be the subcategory of finite dimensional $U_q(\g)$-modules consisting of finite direct sums of $L(\lambda)$, $\lambda \in \bold{L}^+$. The category $\cal{C}_q$ is closed under tensor products and the formation of duals. If $M$ is an object of $\cal{C}_q$, then $M =\oplus_{\mu \in \bold{L}} M_\mu$. For $f \in M^*$, $v \in M$ we define the coordinate function $c_{f,v} \in U_q(\g)^*$ by $$\forall u \in U_q(\g), \quad c_{f,v}(u) = \pair{f}{uv}$$ where $\pair{\,}{\,}$ is the duality pairing. The quantized function algebra ${\Bbb{C}}_q[G]$ is the restricted dual of $U_q(\g)$ with respect to $\cal{C}_q$. That is, $${\Bbb{C}}_q[G] = \Bbb{C}[c_{f,v} \, ; \, v \in M, f \in M^*, \, M \in \text{obj}(\cal{C}_q)].$$ The algebra ${\Bbb{C}}_q[G]$ is a Hopf algebra. If $\{v_1, \dots, v_s; f_1, \dots, f_s\}$ is a dual basis for $M$ one has \begin{equation} \label{formulas 3.1} \Delta(c_{f,v}) = \sum_i c_{f,v_i} \otimes c_{f_i,v}, \quad \epsilon(c_{f,v}) = \pair{f}{v}, \quad S(c_{f,v}) = c_{v,f}. \end{equation} Notice that we may assume that $v_j \in M_{\nu_j}, \, f_j \in M^*_{-\nu_j}$. When $v \in L(\lambda)_\mu$ and $f \in L(\lambda)^*_{-\nu}$ we denote the element $c_{f,v}$ by $c^\lambda_{-\nu,\mu}$. Although convenient, this notation is a little ambiguous and some care has to be taken in interpreting the standard formulas such as $\Delta(c^\lambda_{\nu,\mu})=\sum_i c^\lambda_{\nu,\mu_i}\otimes c^\lambda_{-\mu_i,\mu}$ and $S(c^\lambda_{\nu,\mu}) = c^{- w_0\lambda}_{\mu,\nu}$. Recall that the Rosso form $\phi(\;,\;)$ defines a skew pairing on $(U_q(\frak{b}^-),U_q(\frak{b}^+))$. Consider the induced maps $$\Phi_1: U_q(\frak{b}^-)^{{\text{cop}}} \to U_q(\frak{b}^+)^\circ, \quad \Phi_2 : U_q(\frak{b}^+)^{{\text{op}}} \to U_q(\frak{b}^-)^\circ $$ and the maps $$ \theta_1: {\Bbb{C}}_q[G] \to U_q(\frak{b}^+)^\circ, \quad \theta_2: {\Bbb{C}}_q[G] \to U_q(\frak{b}^-)^\circ $$ as defined above. By \cite[Proposition 4.6]{HLT}, we have that $\operatorname {Im} \theta_i = \operatorname {Im} \Phi_i$.
Hence the Rosso form induces a braiding on ${\Bbb{C}}_q[G]$ defined by $$ \beta(x,y) = \phi(\psi_1(x),\psi_2(y))$$ where $\psi_i = \Phi_i^{-1} \theta_i$. \begin{defn} We define ${\Bbb{C}}_q[D(G)]$ to be the double quantum group ${\Bbb{C}}_q[G] \Join {\Bbb{C}}_q[G]$. \end{defn} \begin{ex} In the case when $G= SL(2,\Bbb{C})$, the double quantum group ${\Bbb{C}}_q[D(SL(2))]$ is the Hopf algebra underlying the quantum Lorentz group. It is often denoted by $SL_q(2,\Bbb{C})$. As an algebra it is generated by the elements $a,b,c,d$ and $\tilde{a}, \tilde{b},\tilde{c},\tilde{d}$ subject to the relations: $$ba = q ab, \; ca = q ac, \; db = q bd, \; dc = qcd $$ $$ cb = bc, \quad ad - da = (q-q^{-1}) bc, \quad da - qbc = 1$$ $$\tilde{b} \tilde{a} = q \tilde{a} \tilde{b} , \; \tilde{c} \tilde{a} = q \tilde{a} c, \; \tilde{d} \tilde{b} = q \tilde{b} \tilde{d} , \; \tilde{d} \tilde{c} = q\tilde{c} \tilde{d} $$ $$ \tilde{c} \tilde{b} = \tilde{b} \tilde{c} , \quad \tilde{a} \tilde{d} - \tilde{d} \tilde{a} = (q-q^{-1}) \tilde{b} \tilde{c} , \quad \tilde{d} \tilde{a} - q\tilde{b} \tilde{c} = 1$$ $$a \tilde{a} = \tilde{a} a, \; q a \tilde{b} = \tilde{b} a, \; a\tilde{c} = \tilde{c} a + (q-q^{-1}) c \tilde{a}, \; a \tilde{d} + (q-q^{-1}) c \tilde{b} = \tilde{d} a $$ $$qb \tilde{a} = \tilde{a} b = (q-q^{-1}) \tilde{b} a, \; b \tilde{b} = \tilde{b} b, \; b \tilde{c} + (q-q^{-1}) d \tilde{a} = (q-q^{-1}) \tilde{d} a + \tilde{a} b $$ $$ b \tilde{d} + (q-q^{-1}) d \tilde{b} = q \tilde{d} b$$ $$c \tilde{a} = q \tilde{a} c, \quad c \tilde{b} = \tilde{b} c, \quad c \tilde{c} = \tilde{c} c, \quad q c \tilde{d} = \tilde{d} c$$ $$ d \tilde{a} = \tilde{a} d + (q-q^{-1}) \tilde{b} c, \; d \tilde{b} = q \tilde{b} d, \; q d \tilde{c} =\tilde{c} d + (q-q^{-1}) \tilde{d} c, \; d \tilde{d} = \tilde{d} d$$ \end{ex} We begin by stating explicitly Theorem \ref{comod} for this case. \begin{thm} \label{comodG} The category $\operatorname {Comod} {\Bbb{C}}_q[D(G)]$ is equivalent as a braided rigid monoidal category to $\operatorname {Comod} \Bbb{C}_q[G\times G]$. \end{thm} The equivalence of the comodule categories in the case of the quantum Lorentz group was proved by Podles and Woronowicz in their original paper \cite{PW}. To our knowledge no more general result has appeared in the literature, although this result is more or less implicit in some of Majid's work. \begin{defn} Define $U_q(\frak{d}(\g))$ to be the FRT-dual $U({\Bbb{C}}_q[D(G)])$. \end{defn} Notice again that $U_q(\frak{d}(\g))$ depends not only on $\frak{g}$ but also on the choice of $G$. Theorem \ref{iwaua} gives a weak version of Iwasawa decomposition for $U_q(\frak{d}(\g))$. \begin{thm} \label{iwauqg} The map $$ \zeta = m(m^* \otimes \theta^*): U_q(\g)\Join {\Bbb{C}}_q[G]^{{\text{op}}} \to U_q(\frak{d}(\g))$$ is an epimorphism of Hopf algebras. \end{thm} We conjecture that this map $\zeta$ is in fact an isomorphism. We now show that the pairing between ${\Bbb{C}}_q[D(G)]$ and $U_q(\frak{d}(\g))$ is non-degenerate. We need a detailed description of the maps $l^\pm$. Note that Theorem \ref{gait} implies that $$\operatorname {Ker} l^+ = \{ c_{f,v} \mid f \in (U_q(\frak{b}^+) v)^\perp \}$$ and $$\operatorname {Ker} l^- = \{ c_{f,v} \mid f \in (U_q(\frak{b}^-) v)^\perp \}.$$ Hence $c^\lambda_{-\mu,\nu} \in \operatorname {Ker} l^+$ if $\mu-\nu$ is not a non-negative integer combination of positive roots. Similarly $c^\lambda_{-\mu,\nu} \in \operatorname {Ker} l^-$ if $\nu-\mu$ is not a non-negative integer combination of positive roots. 
Define $$ U^+ = \Bbb{C}[e_i \mid 1 \leq i \leq n], \quad U^- = \Bbb{C}[f_i \mid 1 \leq i \leq n]$$ and for all $\beta \in \bold{Q}$ set $$U^\pm_\beta = \{ u \in U^\pm \mid k_\lambda u k _{-\lambda} = q^{(\lambda, \beta) } u \}.$$ \begin{lem} \label{lpm} Let $\lambda \in \bold{L}^+$ and let $\mu \in \bold{L}$. \begin{enumerate} \item There exists a unique $y \in U^-_{\mu-\lambda}$ such that $l^+(c^\lambda_{-\lambda,\mu}) = y k_{-\mu}$. \item There exists a unique $x \in U^+_{\lambda-\mu}$ such that $l^-(c^\lambda_{-\mu,\lambda}) = x k_{\mu}$ \end{enumerate} \end{lem} \begin{pf} See \cite{HLT} or \cite[9.2.11]{Jo}. \end{pf} Consider the natural functor from right ${\Bbb{C}}_q[D(G)]$-comodules to left $U_q(\frak{d}(\g))$-modules. For a right ${\Bbb{C}}_q[D(G)]$-comodule $V$, we define an action of $U_q(\frak{d}(\g))$ by $$u.v = \sum \pr{v_1}{u}v_0$$ for $v \in V$ and $u \in U_q(\frak{d}(\g))$. The proof of the following theorem generalizes the proof of an analogous result for the quantum Lorentz group given in \cite{Ta}. \begin{thm} Let $V$ be a simple right ${\Bbb{C}}_q[D(G)]$-comodule and consider $V$ as a left $U_q(\frak{d}(\g))$-module. Then $V$ is simple. \end{thm} \begin{pf} By Theorem \ref{comodG} any irreducible ${\Bbb{C}}_q[D(G)]$-comodule is of the form $V=L(\nu) \otimes L(\nu')$ where $\nu,\nu' \in \bold{L}^+$. The action of $U_q(\frak{d}(\g))$ on $V$ is then given by the map $\chi$ and the usual action of $U_q(\g) \otimes U_q(\g)$. Suppose that $\lambda \in \bold{L}^+$ and $\mu \in \bold{L}$. It follows from Lemma \ref{chir} that for any $c \in {\Bbb{C}}_q[G]$, $$\chi \theta^*(c) = l^-S(c_1) \otimes l^+S(c_2)$$ Hence $$\chi \theta^*(c^\lambda_{-\mu,\lambda}) =\sum l^-S(c^\lambda_{-\mu,\nu}) \otimes l^+S(c^\lambda_{-\nu,\lambda}) = l^-S(c^\lambda_{-\mu,\lambda}) \otimes l^+S(c^\lambda_{- \lambda,\lambda})$$ and similarly $$\chi \theta^*(c^\lambda_{-\lambda, \mu}) = l^-S(c^\lambda_{-\lambda,\lambda}) \otimes l^+S(c^\lambda_{- \lambda, \mu})$$ (cf. \cite[9.2.14]{Jo}). In particular Lemma \ref{lpm} yields $$\chi\theta^*(c^\lambda_{-\lambda,\lambda}) = k_{- \lambda} \otimes k_{\lambda}$$ Thus the image of $\chi\theta^*$ contains an `antidiagonal' copy of the subalgebra $\Bbb{C}[k_\lambda \mid \lambda \in \bold{L}^+]$ of $U^0$. On the other hand, the image under $m^*$ of $U^0$ is a `diagonal' copy of $U^0$. It follows that any $U_q(\frak{d}(\g))$-submodule $V'$ of $V$ is a sum of its $U^0 \otimes U^0$-weight spaces. Now by applying Lemma \ref{lpm} again for $\mu = \lambda - \alpha_j$ yields (up to a scalar factor) $$\chi\theta^*(c^\lambda_{-\mu,\lambda}) = e_j k_{-\lambda} \otimes k_{\lambda}$$ $$\chi\theta^*(c^\lambda_{-\lambda,\mu}) = k_{- \lambda} \otimes f_j k_{\lambda}$$ Hence if $V'$ is a non-zero submodule of $V$ it must contain a weight vector $v_+\otimes v_-'$ where $v_-'$ is a highest weight vector of $V(\nu')$ and $v_+$ is a lowest weight vector of $V(\nu)$. However this element generates $V$ as a $U_q(\g)$ module via the diagonal action. Thus $V'=V$ as required. \end{pf} The theorem yields a form of Peter-Weyl theorem linking ${\Bbb{C}}_q[D(G)]$ and $U_q(\frak{d}(\g))$. Denote by $\tilde{\cal{C}}_q$ the full subcategory of $U_q(\frak{d}(\g))$-mod consisting of direct sums of modules of the form $L(\nu) \otimes L(\nu')$. Since $\tilde{\cal{C}}_q$ is closed under tensor products, duals and direct sums we may form the restricted dual of $U_q(\frak{d}(\g))$ with respect to $\tilde{\cal{C}}_q$. 
This is the Hopf algebra of coordinate functions $$U_q(\frak{d}(\g))^\circ_{\tilde{\cal{C}}_q} = \Bbb{C}[c_{f,v} \, ; \, v \in M, f \in M^*, \, M \in \text{obj}(\tilde{\cal{C}}_q)].$$ \begin{cor} The pairing between ${\Bbb{C}}_q[D(G)]$ and $U_q(\frak{d}(\g))$ is non-degenerate. The categories $\cal{C}_q$ and $\tilde{\cal{C}}_q$ are equivalent and ${\Bbb{C}}_q[D(G)]$ is the restricted dual of $U_q(\frak{d}(\g))$ with respect to $\tilde{\cal{C}}_q$. \end{cor} \begin{pf} For $\nu \in \bold{L}^+$ let $C(L(\nu)) \subset {\Bbb{C}}_q[G]$ be the subcoalgebra of coordinate functions on $L(\nu)$. Then $$ {\Bbb{C}}_q[D(G)] = \oplus_{\nu,\nu' \in \bold{L}^+} C(L(\nu)) \otimes C(L(\nu')).$$ Similarly let $C(L(\nu) \otimes L(\nu'))$ be the subcoalgebra of $U_q(\frak{d}(\g))^*$ of coordinate functions on $L(\nu) \otimes L(\nu')$. Then $$U_q(\frak{d}(\g))^\circ_{\tilde{\cal{C}}_q} = \oplus_{\nu,\nu' \in \bold{L}^+} C(L(\nu) \otimes L(\nu')).$$ Let $\eta: {\Bbb{C}}_q[D(G)] \to U_q(\frak{d}(\g))^*$ be the natural map. Then since $L(\nu) \otimes L(\nu')$ is simple $\eta$ maps $C(L(\nu)) \otimes C(L(\nu'))$ isomorphically onto $C(L(\nu) \otimes L(\nu'))$. Hence $\eta$ is an isomorphism from ${\Bbb{C}}_q[D(G)]$ to $U_q(\frak{d}(\g))^\circ_{\tilde{\cal{C}}_q}$. \end{pf} In particular this implies that the intersection of the annihilators of the simple modules $L(\nu)\otimes L(\nu')$ is zero. Hence $U_q(\frak{d}(\g))$ is semi-primitive and residually finite. Applying Corollary \ref{inj1} yields the following version of Iwasawa decomposition for ${\Bbb{C}}_q[D(G)]$. \begin{cor} The map $$\xi=(m \otimes \theta)\Delta : {\Bbb{C}}_q[D(G)] \to {\Bbb{C}}_q[G] \otimes U_q(\g)^{\text{cop}} $$ is injective. \end{cor} This result is proved for the quantum Lorentz group by Podles and Woronowicz in \cite[Theorem 1.3]{PW}. In \cite{Ta}, Takeuchi defines the quantum Lorentz group as a subalgebra of $\Bbb{C}_q[SL(2)] \otimes U_q(\frak{sl}(2))^{\text{cop}}$ and then goes on to prove that this subalgebra is isomorphic to $\Bbb{C}_q[D(SL(2))]$. A discussion of this map in a fairly general context appears in \cite{M1}. Although $\xi$ is never surjective, it is in a certain sense not far away from being surjective. We now show that the image has a localization which is the invariant subring for the action of a certain finite group, $\Gamma$. This is only to be expected if we view this map as the quantization of the map $G \times G_r \to G \times G$ described in \cite{HL1}. This map is a finite morphism onto the open subset $GG_r$ of $G \times G$ whose fibre at every point is $\Gamma$. Let $\Gamma = \{ h \in H \mid h^2 = e \}$. We may identify the dual group $\hat{\Gamma}$ with the quotient $P /2P$. \begin{lem} ${\Bbb{C}}_q[D(G)] /(\operatorname {Ker} m + \operatorname {Ker} \theta) \cong \Bbb{C}[\hat{\Gamma}]$. \end{lem} \begin{pf} Denote by $\Bbb{C}[\Gamma]$ the group algebra of $\Gamma$. Then there is a surjective Hopf algebra map $\eta: {\Bbb{C}}_q[D(G)] \to \Bbb{C}[\hat{\Gamma}] = \Bbb{C}[\Gamma]^*$ given by $$\eta(c^\lambda_{-\mu,\lambda}\otimes c^{\lambda'}_{-\mu',\lambda'})(h) = \lambda(h)\lambda(h')\epsilon(c^\lambda_{-\mu,\lambda} c^{\lambda'}_{-\mu',\lambda'}).$$ It is easily checked that $\operatorname {Ker} \eta \supset \operatorname {Ker} \theta + \operatorname {Ker} m$. Conversely, let $t_\lambda$ be the image of $c^\lambda_{- \lambda,\lambda}\otimes 1$ in ${\Bbb{C}}_q[D(G)]/(\operatorname {Ker} m + \operatorname {Ker} \theta)$. 
Then $t_\lambda^2 = 1$, $t_\lambda t_\mu = t_{\lambda+\mu}$ and the $t_\lambda$ span ${\Bbb{C}}_q[D(G)]/(\operatorname {Ker} m + \operatorname {Ker} \theta)$. Thus by comparing dimensions we obtain that $\operatorname {Ker} \eta = \operatorname {Ker} \theta + \operatorname {Ker} m$. \end{pf} Denote by $R(A)$ the group of one-dimensional representations of a Hopf algebra $A$. The group $\Gamma$ embeds in $R({\Bbb{C}}_q[G])$, $R(U_q(\g)^{{\text{cop}}})$ and $R({\Bbb{C}}_q[D(G)])$ in such a way that the embeddings commute with the induced maps $R(U_q(\g)^{{\text{cop}}}) \to R({\Bbb{C}}_q[D(G)])$ and $R({\Bbb{C}}_q[G]) \to R({\Bbb{C}}_q[D(G)])$. Recall that for any Hopf algebra $A$, there are left and right translation maps $l,r : R(A) \to \operatorname {Aut}(A)$ given by $$l_h(x) = \sum h^{-1}(x_1)x_2,\quad r_h(x) = \sum x_1h(x_2). $$ The left and right translation actions of $\Gamma$ on ${\Bbb{C}}_q[G]$ and $U_q(\g)^{{\text{cop}}}$ factor through left and right translation actions on ${\Bbb{C}}_q[D(G)]$. Define an action $\tilde{d}: \Gamma \to \operatorname {Aut} {\Bbb{C}}_q[D(G)] \otimes {\Bbb{C}}_q[D(G)]$ by $$ \tilde{d}_h (x \otimes y) = r_h(x) \otimes l_h(y).$$ Consider $\Gamma$ acting similarly on $({\Bbb{C}}_q[G]\otimes U_q(\g)^{{\text{cop}}})$. \begin{lem} The image of the comultiplication $\Delta: {\Bbb{C}}_q[D(G)] \to {\Bbb{C}}_q[D(G)] \otimes {\Bbb{C}}_q[D(G)]$ is contained in the subring of invariants, $({\Bbb{C}}_q[D(G)] \otimes {\Bbb{C}}_q[D(G)])^\Gamma$. Hence the image of ${\Bbb{C}}_q[D(G)]$ under $\xi$ is contained in the invariants $({\Bbb{C}}_q[G]\otimes U_q(\g)^{{\text{cop}}})^\Gamma$. \end{lem} \begin{pf} Notice that for $c_{\mu,\lambda} \in {\Bbb{C}}_q[G]$ and $h \in \Gamma$, $$ l_h(c_{\mu,\lambda}) = \mu(h) c_{\mu,\lambda}, \quad r_h(c_{\mu,\lambda}) = \lambda(h) c_{\mu,\lambda}.$$ Hence for $c_{\mu,\lambda} \otimes c_{\mu',\lambda'} \in {\Bbb{C}}_q[D(G)]$, \begin{align*} \tilde{d}_h(\Delta(c_{\mu,\lambda} \otimes c_{\mu',\lambda'})) & = \sum r_h(c_{\mu,\lambda_i} \otimes c_{\mu',\lambda'_j}) \otimes l_h(c_{-\lambda_i,\lambda} \otimes c_{- \lambda_j',\lambda'})\\ & = \sum\lambda_i(h)\lambda_j'(h)(-\lambda_i)(h)(- \lambda_j')(h) c_{\mu,\lambda_i} \otimes c_{\mu',\lambda'_j} \otimes c_{-\lambda_i,\lambda} \otimes c_{-\lambda_j',\lambda'}\\ & =\Delta(c_{\mu,\lambda} \otimes c_{\mu',\lambda'}) \end{align*} The second assertion follows from the first because $\theta \otimes m$ is a $\Gamma$-equivariant map. \end{pf} \begin{thm} The set $\cal{S}=\{1 \otimes k_{-\lambda} \mid \lambda \in 2\bold{L}^+\}$ is an Ore subset of $\xi({\Bbb{C}}_q[D(G)])$. The map $\xi$ extends to an algebra isomorphism $$ \xi: {\Bbb{C}}_q[D(G)]_{\cal{S}} \to ({\Bbb{C}}_q[G] \otimes U_q(\g)^{{\text{cop}}})^\Gamma. $$ \end{thm} \begin{pf} We first show that $\cal{S} \subset \xi({\Bbb{C}}_q[D(G)])$. 
Notice that for $\lambda \in \bold{L}^+$, $$ \xi( c^\lambda_{\nu,\lambda} \otimes 1) = \sum c^\lambda_{\nu,\mu_i} \otimes l^+(c^\lambda_{-\mu_i,\lambda}) = c^\lambda_{\nu,\lambda} \otimes k_{-\lambda}.$$ Similarly, $$\xi(1 \otimes c^\lambda_{\nu,w_0\lambda}) = c^\lambda_{\nu,w_0\lambda} \otimes k_{w_0 \lambda}.$$ Hence if $$\Delta(c^\lambda_{-\lambda,\lambda}) = c^\lambda_{-\lambda,\mu_i} \otimes c^\lambda_{-\mu_i,\lambda}$$ then $$1 = \epsilon(c^\lambda_{-\lambda,\lambda}) = \sum S(c^\lambda_{-\lambda,\mu_i}) c^\lambda_{-\mu_i,\lambda}$$ Hence $$\sum \xi(1\otimes c^{-w_0\lambda}_{\mu_i,-\lambda})\xi (c^\lambda_{-\mu_i,\lambda}\otimes 1) =\sum (c^{-w_0\lambda}_{\mu_i,-\lambda} c^\lambda_{-\mu_i,\lambda}) \otimes k_{-2\lambda} = 1\otimes k_{-2\lambda} $$ as required. To prove the result it now remains to show that \begin{equation} \label{star} \forall r \in ({\Bbb{C}}_q[G] \otimes U_q(\g)^{\text{cop}})^\Gamma, \;\; \exists s \in \cal{S} \text{ such that }rs \in \xi({\Bbb{C}}_q[D(G)]). \end{equation} Notice that for $\lambda \in \bold{L}$ and $h \in \Gamma$, $$ d_h(1\otimes k_\lambda) = \lambda(h) 1\otimes k_\lambda. $$ From this it follows that $$({\Bbb{C}}_q[G] \otimes U_q(\g)^{\text{cop}})^\Gamma =({\Bbb{C}}_q[G] \otimes U^0)^\Gamma (1 \otimes U_q(\g)^{\text{cop}})^\Gamma. $$ Since $\cal{S}$ is contained in the units of $U_q(\g)$, it suffices to verify condition \ref{star} for the two factors separately. By \cite[9.2.2]{Jo} ${\Bbb{C}}_q[G] \otimes U^0$ is spanned by elements of the form $$c^\lambda_{\nu,w_0\lambda} c^{\lambda'}_{\nu',\lambda'} \otimes k_\mu$$ where $\nu,\nu',\mu \in \bold{L}$ and $\lambda, \lambda' \in \bold{L}^+$. This element is invariant if and only if $w_0\lambda +\lambda'+\mu \in 2\bold{L}$. On the other hand the image of ${\Bbb{C}}_q[D(G)]$ contains $$\xi(1\otimes c^\lambda_{\nu,w_0\lambda}) \xi( c^{\lambda'}_{\nu',\lambda'}\otimes 1) = c^\lambda_{\nu,w_0\lambda} c^{\lambda'}_{\nu',\lambda'} \otimes k_{w_0\lambda-\lambda'}.$$ Now for any $\gamma \in \bold{L}$, there exists a $\gamma' \in \bold{L}^+$ such that $\gamma'-\gamma \in \bold{L}^+$. Thus if $w_0\lambda + \lambda' +\mu =2 \gamma$, then $$\mu - 2(\gamma'-w_0\lambda) = w_0\lambda -\lambda' -2(\gamma'-\gamma).$$ Hence, $$(c^\lambda_{\nu,w_0\lambda} c^{\lambda'}_{\nu',\lambda'} \otimes k_\mu) (1 \otimes k_{-2(\gamma'-w_0\lambda)}) = \xi(c^\lambda_{\nu,w_0\lambda}\otimes c^{\lambda'}_{\nu',\lambda'}) (1\otimes k_{-2(\gamma'-\gamma)}) $$ which lies in $\xi({\Bbb{C}}_q[D(G)])$. Notice that $(U_q(\g)^{{\text{cop}}})^\Gamma$ is generated over $(U^0)^\Gamma$ by the elements $f_ik_i$ and $e_i$. Hence it remains to verify condition \ref{star} for these elements. Let $\lambda \in \bold{L}^+$, let $\alpha_i$ be a simple root, set $\mu = \lambda -\alpha_i$ and let $\nu \in \bold{L}$. Then \begin{align*} \xi(c^\lambda_{\nu,\mu}\otimes 1) & = \sum c^\lambda_{\nu,\mu_j} \otimes l^+(c^\lambda_{-\mu_j,\mu})\\ & =c^\lambda_{\nu,\lambda} \otimes l^+(c^\lambda_{-\lambda,\mu}) + c^\lambda_{\nu,\mu}\otimes l^+(c^\lambda_{-\mu,\mu}) \end{align*} Now $l^+(c^\lambda_{-\lambda,\mu}) =f_ik_{-\mu}$ and $c^\lambda_{\nu,\mu} \otimes l^+(c^\lambda_{-\mu,\mu}) \in ({\Bbb{C}}_q[G] \otimes U^0)^\Gamma$. Hence by the paragraph above, $\xi({\Bbb{C}}_q[D(G)])$ contains $ c^\lambda_{\nu,\lambda} \otimes f_ik_{-\mu}k_\gamma$ for some $\gamma \in -2\bold{L}^+$. 
As noted above, $$\xi(1 \otimes c^{-w_0\lambda}_{\mu_j,-\lambda}) = c^{-w_0\lambda}_{\mu_j,-\lambda} \otimes k_{-\lambda}.$$ Hence, for a suitably chosen $\gamma \in \bold{L}^+$, $\xi({\Bbb{C}}_q[D(G)])$ contains $$\sum (c^{-w_0\lambda}_{\mu_j,-\lambda} \otimes k_{-\lambda}) (c^\lambda_{-\mu_j,\lambda} \otimes f_ik_{-\mu}k_\gamma) = ( 1 \otimes f_ik_i)(1 \otimes k_{\gamma-2\lambda})$$ as required. A similar argument works for the element $e_i$. \end{pf} \section{Noetherianity} We now prove that the double of a standard quantum group ${\Bbb{C}}_q[G]$ is again Noetherian. We use the approach of Brown and Goodearl \cite{BG}. Since both ${\Bbb{C}}_q[G]$ and $U_q(\g)$ are homomorphic images of this algebra, this provides a straightforward and unified proof that both ${\Bbb{C}}_q[G]$ and $U_q(\g)$ are Noetherian. Recall that if $\beta$ is a braiding on a Hopf algebra $A$, then the braiding on the category of right comodules is given by the maps $\beta_{V \otimes W} : V \otimes W \to W \otimes V$ where $$\beta_{V \otimes W}(v \otimes w) = \sum w_0 \otimes v_0 \beta(v_1, w_1)$$ (see, for instance, \cite{Ha}). \begin{defn} Let $(A, \beta)$ be a braided Hopf algebra and let $V$ be a finite dimensional right $A$-comodule. A full flag $0=V_0 \subset V_1 \subset \dots \subset V_n=V$ of subspaces of $V$ is said to be {\em $\beta$-invariant} if $$\beta_{V\otimes V}(V_i \otimes V) = V \otimes V_i.$$ The flag is said to be {\em strongly $\beta$-invariant} if $$\beta_{V\otimes W}(V_i \otimes W) = W \otimes V_i$$ for all right comodules $W$. \end{defn} If $V$ is a right $A$-comodule with basis $v_i$ for $i \in I$ and structure map $\rho:V \to V \otimes A$, then the subspace of $A$ spanned by $$\{ a_{ij} \mid \rho(v_j) = \sum v_i \otimes a_{ij} \}$$ is denoted by $C(V)$ \cite[p129]{Ab}. \begin{defn} A right comodule $V$ over a Hopf algebra $A$ is said to be a {\em generator} for $A$ if $A = k\langle C(V)\rangle$, or equivalently, if $A = \sum_j C(V^{\otimes j})$. \end{defn} One of the main results of \cite{BG} is the following (slightly reworked into the language of braided Hopf algebras). \begin{thm} Let $(A, \beta)$ be a braided Hopf algebra. Suppose there exists a finite dimensional right $A$-comodule $V$ such that \begin{enumerate} \item $V$ is a generator for $A$; \item $V$ has a $\beta $-invariant flag. \end{enumerate} Then $A$ is Noetherian. \end{thm} \begin{pf} See \cite[Theorem 4.4]{BG}. \end{pf} The key result is the following. \begin{lem} All finite dimensional right ${\Bbb{C}}_q[D(G)]$-comodules have a strongly $\gamma'$-invariant flag. \end{lem} \begin{pf} It suffices to prove the result for comodules of the form $V \otimes V'$ for comodules $V$ and $V'$ over ${\Bbb{C}}_q[G]$. It follows from the form of $\beta$ given above that there exist strongly $\beta$-invariant flags of $V$ and $V'$ such that $$ \beta_{V \otimes V'} (V_i \otimes V'_j ) = V'_j \otimes V_i$$ Now let $W$ and $W'$ be two other such ${\Bbb{C}}_q[G]$-comodules. Notice that $$\gamma'_{V\otimes V',W \otimes W'} = \beta_{14}(\tau \beta_{13})(\tau\beta^{- 1}_{42})(\tau\beta_{32}^{-1})$$ where $\beta_{kl}$ denotes the map induced from $\beta$ on the $k$-th and $l$-th components. Using this one can observe easily that $$\gamma'_{V\otimes V',W \otimes W'}(V_i \otimes V'_j \otimes W \otimes W')= W \otimes W'\otimes V_i \otimes V'_j $$ From this it follows easily that $V \otimes V'$ has a strongly $\gamma'$-invariant flag. \end{pf} \begin{thm} The algebra ${\Bbb{C}}_q[D(G)]$ is Noetherian for any connected semi-simple algebraic group $G$.
\end{thm} \begin{pf} It remains to notice that ${\Bbb{C}}_q[D(G)]$ has a finite dimensional right comodule generator. The argument is the same as in the classical case. \end{pf} \begin{cor} The algebras ${\Bbb{C}}_q[G]$ and $U_q(\g)$ are Noetherian. \end{cor} \section{Further Non-Standard Quantum Groups} We now briefly discuss a generalization of double quantum groups and a lifting theorem which provides a method of constructing new families of nonstandard quantum groups corresponding to certain families of solutions of the modified classical Yang-Baxter equation. We begin again with a very general result on the lifting of Hopf algebra twists. The proof is routine. \begin{thm}\label{lift} Let $\phi: A \to B$ be a homomorphism of Hopf algebras and let $\sigma$ be a 2-cocycle on $B$. Then $\sigma'=\phi^*(\sigma)$ is a 2-cocycle. Moreover, the induced map $$\phi: A_{\sigma'} \to B_{\sigma}$$ between the corresponding twisted Hopf algebras is a Hopf algebra homomorphism. In particular, if $C$ is a braided Hopf algebra and $\phi: A \to C \otimes C$ is a Hopf algebra map, then there exists a 2-cocycle $\sigma'$ on $A$ such that the map $$\phi: A_{\sigma'} \to C \Join C$$ is a Hopf algebra homomorphism. \end{thm} In order to apply this result to standard quantum groups, one needs Hopf algebra maps of the form ${\Bbb{C}}_q[G] \to \Bbb{C}_{q'}[G' \times G']$. For this it is not quite enough to find morphisms $G'\times G' \to G$. Let us say that a morphism $G' \to G$ of connected, simply-connected semi-simple groups is {\em admissible} if the induced map on the Lie algebras arises from a Dynkin diagram embedding as defined in \cite[10.4.5]{Jo}. Then Braverman has shown that there is a Hopf algebra surjection ${\Bbb{C}}_q[G] \to \Bbb{C}_{q'}[G']$ and, moreover, that all such maps arise in this way \cite{Br}. \begin{thm}\label{qglift} Let $G$ and $G'$ be connected, simply connected semi-simple algebraic groups. Let $\psi:G'\times G'\to G$ be an admissible embedding. Then there is a 2-cocycle $\sigma'$ on ${\Bbb{C}}_q[G]$ and an associated non-standard quantum group $\Bbb{C}_{q,\psi}[G]={\Bbb{C}}_q[G]_{\sigma'}$ such that \begin{enumerate} \item $\Bbb{C}_{q,\psi}[G]$ is a braided Hopf algebra; \item the category $\operatorname {Comod}$-$\Bbb{C}_{q,\psi}[G]$ is equivalent as a braided rigid monoidal category to $\operatorname {Comod}$-${\Bbb{C}}_q[G]$. \end{enumerate} Moreover, there is a natural surjective homomorphism $\Bbb{C}_{q,\psi}[G] \to \Bbb{C}_{q'}[D(G')]$, where $q'=q^r$ for some rational number $r$. \end{thm} For instance, if $2m\leq n$, then we have admissible embeddings $\psi: SL(m,\Bbb{C}) \times SL(m,\Bbb{C}) \to SL(n,\Bbb{C})$. This yields some nonstandard quantum groups of the form $\Bbb{C}_{q,\psi}[SL(n)]$. Since the braiding $\gamma$ is essentially the universal $T$-matrix, these quantizations appear to be related to some of the ``esoteric quantum groups'' constructed by Fronsdal and Galindo \cite{FG}. The connection between these quantum groups and the solutions of the modified classical Yang-Baxter equation classified by Belavin and Drinfeld appears to be the following. Let $\frak{g}$ and $\frak{g}'$ be the Lie algebras of $G$ and $G'$ respectively. Let $\bold{B}$ and $\bold{B}'$ be bases for the roots of $\frak{g}$ and $\frak{g}'$ respectively. Then we may choose $\bold{B}$ and $\bold{B}'$ such that the map $\psi$ induces an embedding of the Dynkin diagram $\bold{B}'\times \bold{B}'$ into $\bold{B}$. 
Let $\bold{B}_1$ and $\bold{B}_2$ be the images of the two copies of $\bold{B}'$ in $\bold{B}$ and let $\tau: \bold{B}_1 \to \bold{B}_2$ be the natural isomorphism. Then $\tau$ is a triple in the sense of \cite{BD}. We conjecture that $\Bbb{C}_{q,\psi}[G]$ can be regarded as a deformation in the algebraic sense \cite{DP} of the algebra of functions on the group $G$ with Poisson structure given by a solution of the modified Yang-Baxter equation associated to $\tau$. On the other hand, given any triple $\tau: \bold{B}_1 \to \bold{B}_2$ in the sense of \cite{BD} which is disjoint in the sense that $(\alpha, \alpha')=0$ for all $\alpha \in \bold{B}_1$ and $\alpha' \in \bold{B}_2$, we may construct an admissible map of the form given in the theorem. Thus we may think of the quantum group as being constructed directly from this data. Notice that these cocycle twists are deceptively simple. Although the coalgebra structure is preserved by the twist, the algebra structure is altered quite dramatically. Composing the homomorphism $\Bbb{C}_{q,\psi}[G] \to \Bbb{C}_{q'}[D(G')]$ with the map $\Bbb{C}_{q'}[D(G')] \to U_{q'}(\frak{g}')$ yields a Hopf algebra map $\Bbb{C}_{q,\psi}[G]\to U_{q'}(\frak{g}')$. Thus $\Bbb{C}_{q,\psi}[G]$ has a category of finite dimensional representations equivalent to that of $U_{q'}(\frak{g}')$. This is quite different from the case of the standard quantum groups, where all finite dimensional representations are one-dimensional \cite[9.3.11]{Jo}. On the other hand, the existence of this map is consistent with the philosophy on non-standard quantum groups advanced in \cite{H1}. Briefly, the conjecture suggested by this work is the following. Let $\Bbb{C}_q[G,r]$ be an algebraic deformation of the group $G$ with Poisson structure given by a solution $r$ of the modified classical Yang-Baxter equation associated to a triple $(\tau, \bold{B}_1,\bold{B}_2)$. Then there should be a Hopf algebra homomorphism $\Bbb{C}_q[G,r] \to U_{q'}(\tilde{\frak{g}},\tilde{r})$ where $ U_{q'}(\tilde{\frak{g}},\tilde{r})$ is the FRT-dual of the quantization $\Bbb{C}_{q'}[\tilde{G},\tilde{r}]$ of the reductive group $\tilde{G}$ associated to the reductive Lie algebra $\tilde{\frak{g}}$ constructed in \cite[Theorem 6.4]{H1}. The semi-simple part of $\tilde{\frak{g}}$ is the semi-simple Lie algebra given by $\bold{B}_1$. Using Theorem \ref{lift}, we may easily generalize the notion of the double $A\Join A$ of a braided Hopf algebra $A$ to a twisted product $A^{\Join n}$ of $n$ copies of $A$ in the following way. Using the map $m\otimes 1:(A\Join A)\otimes A \to A \otimes A$ and Theorem \ref{lift}, we may twist $(A\Join A)\otimes A$ to obtain a new Hopf algebra $A^{\Join 3}$ which itself maps onto $A$. Continuing in this way, we may iteratively construct $A^{\Join (n+1)}$ from $A^{\Join n}$ using the map $A^{\Join n} \otimes A \to A \otimes A$ and Theorem \ref{lift}. Clearly we may also view the construction $A^{\Join n}$ as a series of cocycle twists of the $n$-fold tensor product $A^{\otimes n}= A \otimes A \otimes A \dots \otimes A$. Hence there exists a single cocycle $\sigma$ such that $A^{\Join n} \cong A^{\otimes n}_\sigma$. Again we may lift this construction to obtain interesting families of non-standard quantum groups. The algebra ${\Bbb{C}}_q[G]^{\Join n}$ can be thought of as a nonstandard quantization of $G \times G \times \dots \times G$. 
If $G$ and $G'$ are simply connected, then any admissible embedding $G' \times G' \times \dots \times G' \to G$ induces a homomorphism ${\Bbb{C}}_q[G] \to \Bbb{C}_{q'}[G'\times \dots \times G']$ and we may lift the twisting of $\Bbb{C}_{q'}[G']^{\Join n}$ to a twisting on ${\Bbb{C}}_q[G]$ using Theorem \ref{lift}. Thus, for instance, the natural block-diagonal embedding $\psi: SL(2,\Bbb{C})\times SL(2,\Bbb{C})\times SL(2,\Bbb{C}) \to SL(6,\Bbb{C})$ yields a new quantum group $\Bbb{C}_{q,\psi}[SL(6)]$. One of the original motivations for this work was the desire to find a construction of the Cremmer-Gervais quantum groups \cite{H1} from the standard quantum groups using a cocycle twist. However, none of the above constructions covers this case. The existence of such a cocycle is equivalent to the existence of an equivalence of braided rigid monoidal categories between the category of right comodules over the Cremmer-Gervais quantum groups and the category of right comodules over the standard quantum groups. Thus this remains a key problem in the study of non-standard quantum groups. A positive answer to this question would presumably also lead to a far more general procedure for constructing non-standard quantum groups.
\section{Proof of Lemma~\ref{Lem_usr_detect}} \label{Sec_Lem_detct_proof} First let us consider the case of bounded $\ell_n$. In this case, one can employ a scheme where each user gets an exclusive channel use to convey whether it is active or not. For such a scheme, it is easy to show that (see the proof of Lemma~\ref{Lem_detect_uppr} in Appendix~\ref{Sec_appnd_ortho}) the probability of a detection error $P(\mbox{$\cal{D }$})$ is upper-bounded by \begin{align*} P(\mbox{$\cal{D }$}) & \leq \ell_n e^{-E_n'' t} \end{align*} for some $t > 0$. The energy $E_n'' = b c_n \ln \ell_n$ used for detection tends to infinity since $c_n \to \infty$ as $n \to\infty$. Thus, $P(\mbox{$\cal{D }$})$ tends to zero as $n \to \infty$. Next we prove Lemma~\ref{Lem_usr_detect} for the case where $\ell_n \to \infty$ as $n \to \infty$. To this end, we closely follow the proof of~\cite[Theorem~2]{ChenCG17}, but with the power constraint replaced by an energy constraint. Specifically, we analyze $\text{Pr} (\mbox{$\cal{D }$})$ for the user-detection scheme given in~\cite{ChenCG17} where signatures are drawn i.i.d. according to a zero mean Gaussian distribution. Note that the proof in~\cite{ChenCG17} assumes that \begin{align} \lim\limits_{n \to \infty} \ell_n e^{-\delta k_n} =0 \label{Eq_Guo_cond} \end{align} for all $\delta > 0$. However, as we shall show next, in our case this assumption is not necessary. To show that all signatures satisfy the energy constraint, we follow the technique used in the proof of Lemma~\ref{Lem_err_expnt}. Similar to Lemma~\ref{Lem_err_expnt}, we denote by $\tilde{q}$ the probability density function of a zero-mean Gaussian random variable with variance $E_n''/(2n'')$. We further let \begin{align*} \tilde{{\bf q}}(\bar{a}) & = \prod_{i=1}^{n} \tilde{q}(a_i), \quad \bar{a} =(a_1,\ldots,a_n) \end{align*} and \begin{align*} {\bf q}(\bar{a}) & = \frac{1}{\mu} \I{ \|\bar{a}\|^2 \leq E_n''} \tilde{{\bf q}}(\bar{a}) \end{align*} where \begin{align*} \mu & = \int \I{ \|\bar{a}\|^2 \leq E_n''} \; \tilde{{\bf q}}(\bar{a}) d \bar{a} \end{align*} is a normalizing constant. Clearly, any vector ${\bf S}_i$ distributed according to ${\bf q}(\cdot)$ satisfies the energy constraint $E''_n$ with probability one. For any index set $I \subseteq \{1,\ldots, \ell_n\}$, let the matrices $\underline{{\bf S}}_{I}$ and $ \tilde{\underline{{\bf S}}}_{I}$ denote the set of signatures for the users in $I$ that are distributed respectively as \begin{align*} \underline{{\bf S}}_{I} & \sim \prod_{i\in I} {\bf q}({\bf S}_i) \end{align*} and \begin{align*} \tilde{\underline{{\bf S}}}_{I} & \sim \prod_{i\in I} \tilde{{\bf q}}({\bf S}_i). \end{align*} As noted in the proof of Lemma~\ref{Lem_err_expnt}, we have \begin{align} {\bf q}({\bf s}_i) & \leq \frac{1}{\mu} \tilde{{\bf q}}( {\bf s}_i). \label{Eq_prob_signt_uppr} \end{align} To analyze the detection error probability, we first define the $\ell_n$-length vector ${\bf D}^a$ as \begin{align*} {\bf D}^a \triangleq ( \I{W_1\neq0}, \ldots, \I{W_{\ell_n} \neq 0}). \end{align*} For $c_n$ given in~\eqref{Eq_energy_choice}, let \begin{align*} v_n \triangleq k_n(1 + c_n). \end{align*} Further let \begin{align*} \mbox{$\cal{B}$}^n(v_n) \triangleq \{ {\bf d} \in \{0,1 \}^{\ell_n} : 1 \leq |{\bf d}| \leq v_n \} \end{align*} where $|{\bf d}|$ denotes the number of $1$'s in ${\bf d}$. 
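The truncated Gaussian signature distribution ${\bf q}(\cdot)$ is straightforward to sample by rejection: draw the $n''$ entries i.i.d. from $\tilde{q}$ and keep the draw only if it meets the energy constraint $E_n''$. The following Python sketch is purely illustrative; the numerical values of $n''$, $\ell_n$, and $E_n''$ are placeholders and not taken from this paper.

\begin{verbatim}
import numpy as np

def sample_signature(n2, E2, rng):
    # Rejection sampling from q: i.i.d. N(0, E2/(2*n2)) entries,
    # accepted only if the total energy does not exceed E2.
    while True:
        a = rng.normal(0.0, np.sqrt(E2 / (2.0 * n2)), size=n2)
        if np.dot(a, a) <= E2:
            return a

rng = np.random.default_rng(0)
n2, ell, E2 = 200, 50, 8.0          # placeholder n'', ell_n, E_n''
S = np.column_stack([sample_signature(n2, E2, rng) for _ in range(ell)])

# empirical estimate of mu = Pr(||a||^2 <= E_n'') under q-tilde
draws = rng.normal(0.0, np.sqrt(E2 / (2.0 * n2)), size=(100000, n2))
mu_hat = np.mean(np.sum(draws ** 2, axis=1) <= E2)
print(S.shape, mu_hat)
\end{verbatim}

Since the expected energy under $\tilde{{\bf q}}$ is $E_n''/2$, the acceptance probability $\mu$ is close to one, so the rejection loop terminates quickly; this is the same fact that is used below to show that the $(1/\mu)$-factors are asymptotically negligible.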
We denote by ${\bf S}^a$ the matrix of signatures of all users which are generated independently according to ${\bf q}(\cdot)$, and we denote by $\mathbf{Y}^a$ the first $n''$ received symbols, based on which the receiver performs user detection. The receiver outputs the $\hat{{\bf d}}$ given by \begin{align} \hat{{\bf d}} = \mathrm{ arg\,min}_{ {\bf d} \in \mbox{$\cal{B}$}^n(v_n) } \| {\bf Y}^a - {\bf S}^a {\bf d} \| \label{Eq_decod_rule} \end{align} as a length-$\ell_n$ vector indicating the set of active users. Then, the probability of a detection error $\text{Pr} (\mbox{$\cal{D }$})$ is upper-bounded by \begin{align} \text{Pr} (\mbox{$\cal{D }$}) & \leq \text{Pr} (|{\bf D}^a| > v_n ) + \sum_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \text{Pr} (\mbox{$\cal{E}$}_d|{\bf D}^a = {\bf d}) \text{Pr} ({\bf D}^a = {\bf d}) + \text{Pr} (\mbox{$\cal{E}$}_d| |{\bf D}^a| = 0) \text{Pr} (|{\bf D}^a| = 0) \label{Eq_detect_err_uoor} \end{align} where $|{\bf D}^a|$ denotes the number of $1$'s in ${\bf D}^a$ and $\text{Pr} (\mbox{$\cal{E}$}_d|{\bf D}^a = {\bf d})$ denotes the detection error probability for a given ${\bf D}^a = {\bf d}$. Next we show that each term on the RHS of~\eqref{Eq_detect_err_uoor} vanishes as $n \to \infty$. Using the Chernoff bound for the binomial distribution, we have \begin{align*} \text{Pr} (|{\bf D}^a| > v_n) & \leq \exp(-k_n c_n/3) \end{align*} which vanishes since $c_n \to \infty $ and $k_n = \Omega(1)$. We continue with the term $\text{Pr} (\mbox{$\cal{E}$}_d|{\bf D}^a = {\bf d})$. For a given ${\bf D}^a = {\bf d}$, let $\kappa_1$ and $\kappa_2$ denote the number of miss detections and false alarms, respectively, i.e., \begin{align*} \kappa_1 &= |\{ j: d_j \neq 0, \hat{d}_j = 0 \} |\\ \kappa_2 &= |\{ j: d_j = 0, \hat{d}_j \neq 0 \} | \end{align*} where $d_j$ and $\hat{d}_j$ denote the $j$-th components of the corresponding vectors. An error happens only if either $\kappa_1$ or $\kappa_2$ or both are strictly positive. The number of users that are either active or are declared as active by the receiver satisfies $|{\bf d}|+\kappa_2 = |\hat{{\bf d}}|+\kappa_1$, so \begin{align*} |{\bf d}|+ \kappa_2 & \leq v_n + \kappa_1 \end{align*} since $|\hat{{\bf d}}|$ is upper-bounded by $v_n$ by the decoding rule~\eqref{Eq_decod_rule}. So, the pair $(\kappa_1, \kappa_2)$ belongs to the following set: \begin{align} \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}} = & \left\{(\kappa_1,\kappa_2): \kappa_1 \in \{0,1,\ldots, |{\bf d}| \}, \kappa_2 \in \{0,1,\ldots,v_n\}, \kappa_1+\kappa_2 \geq 1, |{\bf d}|+\kappa_2 \leq v_n + \kappa_1 \right\}. \label{Eq_decision_sets} \end{align} Let $\text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}|{\bf D}^a = {\bf d})$ be the probability of having exactly $\kappa_1$ miss detections and $\kappa_2$ false alarms when ${\bf D}^a = {\bf d}$. For given $ {\bf d}$ and $\hat{{\bf d}}$, let $A^* = \{j : d_j \neq 0 \}$ and $A = \{j : \hat{d}_j \neq 0 \}$. We further define $A_1 = A^* \setminus A$, $A_2 = A \setminus A^*$, and \begin{align*} T_A & = \|{\bf Y}^a - \sum_{j \in A} {\bf S}_j \|^2 - \|{\bf Y}^a - \sum_{j \in A^*} {\bf S}_j \|^2. 
\end{align*} Using the analysis that led to~\cite[eq.~(67)]{ChenCG17}, we obtain \begin{align} \text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}|{\bf D}^a = {\bf d}) & \leq \binom{|A^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \mathrm{E}_{\underline{{\bf S}}_{A^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{{\bf S}}_{A_2}}\{ \I{T_A \leq 0} |\underline{{\bf S}}_{A^*}, {\bf Y} \}]^{\rho}|\} \notag \\ & \leq \binom{|A^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \left(\frac{1}{\mu} \right)^{\rho \kappa_2} \mathrm{E}_{\underline{{\bf S}}_{A^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{\tilde{{\bf S}}}_{A_2}}\{ \I{T_A \leq 0} |\underline{{\bf S}}_{A^*}, {\bf Y} \}]^{\rho}\} \notag\\ & \leq \binom{|A^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \left(\frac{1}{\mu}\right)^{|A^*|} \left(\frac{1}{\mu} \right)^{\rho \kappa_2} \mathrm{E}_{\underline{\tilde{{\bf S}}}_{A^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{\tilde{{\bf S}}}_{A_2}}\{ \I{T_A \leq 0} |\underline{{\bf S}}_{A^*}, {\bf Y} \}]^{\rho} \} \label{Eq_detect_err_new_distr2} \end{align} where in the second inequality we used that \begin{align} {\bf q}( \underline{{\bf s}}_{A_2}) \leq \left( \frac{1}{\mu}\right)^{\kappa_2}\prod_{i \in A_2} \tilde{{\bf q}}({\bf s}_i ) \label{Eq_Q_uppr1} \end{align} and in the third inequality we used that \begin{align} {\bf q}(\underline{{\bf s}}_{A^*}) \leq \left( \frac{1}{\mu}\right)^{|A^*|}\prod_{i \in A^*} \tilde{{\bf q}}({\bf s}_i ). \label{Eq_Q_uppr2} \end{align} Here, \eqref{Eq_Q_uppr1} and~\eqref{Eq_Q_uppr2} follow from~\eqref{Eq_prob_signt_uppr}. For every $\rho \in [0,1]$ and $\lambda \geq 0$, we obtain from~\cite[eq.~(78)]{ChenCG17} that \begin{align} \binom{|A^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \mathrm{E}_{\underline{\tilde{{\bf S}}}_{A^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{\tilde{{\bf S}}}_{A}}\{ \I{T_A \leq 0} \}]^{\rho} \} & \leq \exp[ -\tilde{E}_n g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) ] \label{Eq_detect_prob_old_distrb} \end{align} where \begin{align} \tilde{E}_n \triangleq & E_n''/2, \notag \\ g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \triangleq & -\frac{(1-\rho)n''}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'') + \frac{n''}{2\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \notag \\ & - \frac{|{\bf d}|}{\tilde{E}_n} H_2\left(\frac{\kappa_1}{|{\bf d}|}\right) - \frac{\rho \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \label{Eq_err_exp} \end{align} Thus, it follows from~\eqref{Eq_detect_err_new_distr2} and \eqref{Eq_detect_prob_old_distrb} that \begin{align} \text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}|{\bf D}^a = {\bf d}) & \leq \left(\frac{1}{\mu}\right)^{|A^*| + \rho\kappa_2} \exp[ -\tilde{E}_n g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) ].\label{Eq_err_mu_uppr} \end{align} Next we show that the RHS of~\eqref{Eq_err_mu_uppr} vanishes as $n \to \infty$. To this end, we first show that $\left(\frac{1}{\mu}\right)^{|A^*| + \rho\kappa_2} \to 1$ as $n \to \infty$ uniformly in $(\kappa_1, \kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}$ and ${\bf d} \in \mbox{$\cal{B}$}^n(v_n) $. From the definition of $\mu$, we have \begin{align*} \mu & = 1 - \text{Pr} \left(\|\tilde{{\bf S}}_1\|_2^2 \geq E_n''\right). 
\end{align*} Further, by defining $\tilde{{\bf S}}_0 \triangleq \frac{2 n''}{E_n''} \|\tilde{{\bf S}}_1\|_2^2$ and following the steps that led to~\eqref{Eq_mu_uppr2}, we obtain \begin{align} 1 & \leq \left(\frac{1}{\mu}\right)^{|A^*| + \rho\kappa_2} \notag \\ & \leq \left(\frac{1}{\mu}\right)^{2 \ell_n} \notag\\ & = (1-\text{Pr} (\tilde{{\bf S}}_0 \geq 2 n''))^{-2 \ell_n}\notag \\ & \leq \left(1 - \exp \left[-\frac{n''}{2} \tau \right]\right)^{-2 \ell_n} \label{Eq_mu_uppr} \end{align} where $\tau = (1 - \ln 2)$. Here, in the second inequality we used that $|A^*| \leq \ell_n$ and $\rho \kappa_2 \leq \ell_n$. Since $k_n \log \ell_n = o(n)$ and $k_n = \Omega(1)$, we have $\log \ell_n =o(n)$. Furthermore, $n'' = \Theta(n)$. As noted before, for any two non-negative sequences $a_n$ and $b_n$ satisfying $a_n\to 0$ and $a_nb_n \to 0$ as $n \to \infty$, it holds that $(1-a_n)^{-b_n} \to 1$ as $n \to \infty$. So, we obtain that the RHS of~\eqref{Eq_mu_uppr} goes to one as $n \to \infty$ uniformly in $(\kappa_1, \kappa_2)\in \mbox{$\cal{W}$}_{{\bf d}}^{\ell_n}$ and ${\bf d} \in \mbox{$\cal{B}$}^n(v_n)$. So there exists a positive constant $n_0$ that is independent of $\kappa_1$, $\kappa_2$, and ${\bf d}$ and satisfies \begin{align} \left( \frac{1}{\mu} \right)^{|A^*| + \rho\kappa_2} & \leq 2, \quad (\kappa_1, \kappa_2)\in \mbox{$\cal{W}$}_{{\bf d}}^{\ell_n}, {\bf d} \in \mbox{$\cal{B}$}^n(v_n), n \geq n_0. \label{Eq_mu_lowr} \end{align} Next we show that there exist constants $\gamma >0$ and $n'_0$ (independent of $\kappa_1$, $\kappa_2$, and ${\bf d}$) as well as some $\rho$ and $\lambda$ such that \begin{align} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{(\kappa_1,\kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \geq \gamma, \quad n \geq n'_0. \label{Eq_detect_err_exp} \end{align} This then implies that $\text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}| {\bf D}^a=\ {\bf d})$ vanishes as $n \to\infty$ uniformly in $(\kappa_1,\kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}$ and ${\bf d} \in \mbox{$\cal{B}$}^n(v_n)$. Indeed, if ${\bf d} \in \mbox{$\cal{B}$}^n(v_n)$, then $|{\bf d}| \leq v_n$ which implies that $\kappa_1 \leq v_n$. Furthermore, since the decoder outputs a vector in $\mbox{$\cal{B}$}^n(v_n)$, we also have $\kappa_2 \leq v_n$. It thus follows from~\eqref{Eq_err_mu_uppr}, \eqref{Eq_mu_lowr}, and~\eqref{Eq_detect_err_exp} that \begin{align} \text{Pr} (\mbox{$\cal{E}$}_d| {\bf D}^a=\ {\bf d}) &\leq 2 v_n^2 \exp[-\tilde{E}_n \gamma ] \notag \\ & = 2 \exp\left[ -\tilde{E}_n \left(\gamma - \frac{ 2 \ln v_n}{\tilde{E}_n}\right)\right], \quad {\bf d} \in \mbox{$\cal{B}$}^n(v_n), n \geq \max(n_0,n'_0). \label{Eq_detect_err_uppr} \end{align} Furthermore, by the definition of $v_n$ and $\tilde{E}_n$, \begin{align} \frac{2\ln v_n}{\tilde{E}_n} & = \frac{2 \log e \log ((1+c_n)k_n)}{\tilde{E}_n} \notag \\ & = \frac{4 \log e \log (1+c_n)}{b c_n \log \ell_n} + \frac{ 4 \log e \log k_n}{b c_n \log \ell_n} \label{Eq_sn_En2} \end{align} which tends to zero since $c_n \to \infty$ and $1 \leq k_n \leq \ell_n$. Consequently, the RHS of~\eqref{Eq_detect_err_uppr} vanishes as $n \to \infty$. 
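For small toy dimensions, the detection rule~\eqref{Eq_decod_rule} can be simulated by brute force, which makes the roles of the miss detections $\kappa_1$ and false alarms $\kappa_2$ concrete. The sketch below is only an illustration: the parameters are arbitrary placeholders, the signatures are drawn from $\tilde{{\bf q}}$ rather than ${\bf q}$ for brevity, and the exhaustive search over $\mbox{$\cal{B}$}^n(v_n)$ is exponential in $\ell_n$ and therefore only feasible for tiny examples.

\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n2, ell, v_n = 60, 8, 3            # toy n'', ell_n, search radius v_n
E2, N0 = 20.0, 2.0                 # placeholder energy and noise level
S = rng.normal(0.0, np.sqrt(E2 / (2 * n2)), size=(n2, ell))  # signatures

d_true = np.zeros(ell)
d_true[rng.choice(ell, 2, replace=False)] = 1.0               # two active users
Y = S @ d_true + rng.normal(0.0, np.sqrt(N0 / 2), size=n2)

best, d_hat = np.inf, None
for m in range(1, v_n + 1):                  # all d with 1 <= |d| <= v_n
    for idx in combinations(range(ell), m):
        d = np.zeros(ell)
        d[list(idx)] = 1.0
        score = np.linalg.norm(Y - S @ d)
        if score < best:
            best, d_hat = score, d

kappa1 = int(np.sum((d_true == 1) & (d_hat == 0)))   # miss detections
kappa2 = int(np.sum((d_true == 0) & (d_hat == 1)))   # false alarms
print(d_true, d_hat, kappa1, kappa2)
\end{verbatim}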
To obtain~\eqref{Eq_detect_err_exp}, we first note that \begin{align} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{(\kappa_1,\kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) & = \min \{ \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} g^n_{\lambda, \rho}(\kappa_1,0,{\bf d}), \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ 1 \leq \kappa_2 \leq v_n } g^n_{\lambda, \rho}(0,\kappa_2, {\bf d}), \notag \\ & \qquad \qquad \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ \substack{ 1 \leq \kappa_1 \leq v_n \\ 1 \leq \kappa_2 \leq v_n}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \}. \label{Eq_inf_gn} \end{align} Then we show that for $\lambda = 2/3$ and $\rho = 3/4$, \begin{align} \liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} g^n_{\lambda, \rho}(\kappa_1, 0,{\bf d}) & > 0 \label{Eq_w1_lowr} \\ \liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ 1 \leq \kappa_2 \leq v_n } g^n_{\lambda, \rho}(0,\kappa_2,{\bf d}) & > 0 \label{Eq_w2_lowr} \\ \liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ \substack{ 1 \leq \kappa_1 \leq v_n \\ 1 \leq \kappa_2 \leq v_n}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) & > 0 \label{Eq_w1w2_lowr} \end{align} from which~\eqref{Eq_detect_err_exp} follows. Indeed, for $0 \leq \lambda \rho \leq 1$, we have \begin{align} & 2 \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \notag \\ & \qquad \qquad \geq \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) + \log \left(1+\lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right). \label{Eq_log_lowr} \end{align} Using~\eqref{Eq_log_lowr} in the second term on the RHS of~\eqref{Eq_err_exp}, we obtain that \begin{align} g^n_{\lambda, \rho}(\kappa_1, \kappa_2,{\bf d}) & \geq a^n_{\lambda, \rho}(\kappa_1,{\bf d}) + b^{n}_{\lambda, \rho}(\kappa_2) \label{Eq_gn_lowr} \end{align} where \begin{align*} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) - \frac{|{\bf d}|}{\tilde{E}_n} H_2\left(\frac{\kappa_1}{|{\bf d}|}\right) \end{align*} and \begin{align*} b^{n}_{\lambda, \rho}(\kappa_2) \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) -\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'') - \frac{\rho \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \end{align*} We begin by proving~\eqref{Eq_w1_lowr}. We have \begin{align} g^n_{\lambda, \rho}(\kappa_1, 0,{\bf d}) & \geq a^n_{\lambda, \rho}(\kappa_1,{\bf d}) + b^{n}_{\lambda, \rho}(0) \notag \\ & \geq a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \label{Eq_gn_w1_lowr} \end{align} by~\eqref{Eq_gn_lowr} and $b^{n}_{\lambda, \rho}(0) =0$. Consequently, \begin{align*} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} g^n_{\lambda, \rho}(\kappa_1, 0,{\bf d}) & \geq \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \end{align*} so~\eqref{Eq_w1_lowr} follows by showing that \begin{align} \liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) > 0. 
\label{Eq_first_lowr} \end{align} To this end, let \begin{align*} i_n(\kappa_1) & \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \\ j_n(\kappa_1,{\bf d}) & \triangleq \frac{|{\bf d}|}{\tilde{E}_n} H_2\left(\frac{\kappa_1}{|{\bf d}|}\right) \end{align*} so that \begin{align} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) = i_n(\kappa_1) \left(1- \frac{j_n(\kappa_1,{\bf d})}{i_n(\kappa_1)}\right). \label{Eq_an} \end{align} Note that \begin{align} i_n(\kappa_1) & = \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \notag \\ & \geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right), \quad 1 \leq \kappa_1 \leq v_n \label{Eq_in_lowr} \end{align} and \begin{align} \frac{j_n(\kappa_1,{\bf d})}{i_n(\kappa_1)} & = \frac{4 |{\bf d}| H_2\left(\frac{\kappa_1}{|{\bf d}|}\right)}{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) } \notag \\ & = \frac{4 \kappa_1 \log (|{\bf d}|/\kappa_1) + |{\bf d}|(\kappa_1/|{\bf d}| - 1) \log (1 -\kappa_1/|{\bf d}|) }{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) } \label{Eq_jn_in_ratio}. \end{align} Next we upper-bound $(\kappa_1/|{\bf d}| - 1) \log (1 -\kappa_1/|{\bf d}|) $. Indeed, consider the function $f(p) = p - (p-1) \ln (1-p)$, which satisfies $f(0)=0$ and is monotonically increasing in $p$. So, $(p-1) \ln (1-p) \leq p$, $0\leq p \leq 1$ which for $ p=\kappa_1/|{\bf d}|$ gives \begin{align} (\kappa_1/|{\bf d}| - 1) \log (1 -\kappa_1/|{\bf d}|) \leq (\log e) \kappa_1/|{\bf d}|. \label{Eq_fn_uppr} \end{align} Using~\eqref{Eq_fn_uppr} in~\eqref{Eq_jn_in_ratio}, we obtain that \begin{align} \frac{j_n(\kappa_1,{\bf d})}{i_n(\kappa_1)} & \leq \frac{ 4 \log (|{\bf d}|/\kappa_1) + 4 \log e}{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right)/\kappa_1 } \notag \\ & \leq \frac{4 \log (|{\bf d}|/\kappa_1) +4 \log e }{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)/v_n} \notag \\ & \leq \frac{4 \log v_n + 4 \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \label{Eq_ratio_uppr4} \end{align} where the second inequality follows because $\frac{\log (1+x)}{x}$ is monotonically decreasing in $x > 0$, and the subsequent inequality follows because $|{\bf d}| \leq v_n$ and $1 \leq \kappa_1 \leq v_n$. Combining~\eqref{Eq_an}, \eqref{Eq_in_lowr}, and~\eqref{Eq_ratio_uppr4}, $ a^n_{\lambda, \rho}(\kappa_1,{\bf d})$ can thus be lower-bounded by \begin{align} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) &\geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{4 \log v_n + 4 \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right). \label{Eq_an_lowr} \end{align} Note that the RHS of~\eqref{Eq_an_lowr} is independent of $\kappa_1$ and ${\bf d}$. Furthermore, the term \begin{align} \frac{ v_n \tilde{E}_n}{n''} & = \frac{ (1+c_n) k_n b c_n \ln \ell_n}{2 b n} \notag \\ & = \frac{ c_n k_n \ln \ell_n + c_n^2 k_n \ln \ell_n}{2 n} \label{Eq_En_sn_zero2} \end{align} tends to zero as $n \to \infty$ since $k_n \ln \ell_n = o(n)$ and $c_n = \ln\left(\frac{n}{k_n\ln \ell_n}\right)$. Furthermore, $\tilde{E}_n\to\infty$ and, as observed in~\eqref{Eq_sn_En2}, $\frac{\log v_n}{\tilde{E}_n} \to 0$ as $n \to \infty$. 
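As a quick side check of the elementary inequality behind~\eqref{Eq_fn_uppr}, namely $(p-1)\ln(1-p)\leq p$ for $0\leq p\leq 1$, one can evaluate the function $f(p)=p-(p-1)\ln(1-p)$ numerically (a toy verification only, not part of the proof):

\begin{verbatim}
import numpy as np

p = np.linspace(0.0, 0.999, 2000)
f = p - (p - 1.0) * np.log1p(-p)       # f(p) = p - (p-1) ln(1-p)
assert f[0] == 0.0 and np.all(np.diff(f) >= 0)   # f(0)=0, f non-decreasing
assert np.all((p - 1.0) * np.log1p(-p) <= p + 1e-12)
print(f.min(), f.max())
\end{verbatim}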
It follows that \begin{align} & \liminf_{n \to \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \notag \\ & \qquad \qquad \geq \lim_{n \to \infty} \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right) \lim_{n \to \infty} \left(1- \frac{ \log v_n + \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right) \notag \\ & \qquad \qquad= \frac{(\log e) \; \lambda \rho (1-\lambda \rho)}{4} \notag \\ &\qquad \qquad = \frac{\log e}{16} \notag \end{align} which implies~\eqref{Eq_w1_lowr}. We next prove~\eqref{Eq_w2_lowr}. Since $a^{n}_{\lambda, \rho}(0,{\bf d}) =0 $, it follows that \begin{align*} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_2 \leq v_n} g^n_{\lambda, \rho}(0, \kappa_2,{\bf d}) & \geq \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2). \end{align*} Thus~\eqref{Eq_w2_lowr} follows by showing that \begin{align} \liminf_{n \rightarrow \infty} \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2) > 0. \label{Eq_w2_lowr1} \end{align} To show~\eqref{Eq_w2_lowr1}, we define \begin{align} q_n(\kappa_2) & \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) \label{Eq_qn} \\ r_n(\kappa_2) & \triangleq \frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'') \\ u_n(\kappa_2) & \triangleq \frac{\rho \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \label{Eq_sn} \end{align} Then, \begin{align} b^n_{\lambda, \rho}(\kappa_2) & = q_n(\kappa_2) \left(1- \frac{r_n(\kappa_2)}{q_n(\kappa_2)} - \frac{u_n(\kappa_2)}{q_n(\kappa_2)}\right). \label{Eq_bn_lowr} \end{align} Note that \begin{align} q_n(\kappa_2) & = \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) \notag \\ & \geq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right), \quad 1 \leq \kappa_2 \leq v_n. \label{Eq_qn_lowr} \end{align} Furthermore, \begin{align} \frac{r_n(\kappa_2)}{ q_n(\kappa_2)} &= \frac{\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'')}{\frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)} \notag \\ & \leq \frac{\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda v_n \tilde{E}_n/n'')}{\frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)} \notag \\ & \leq \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} \label{Eq_rn_qn_ratio} \end{align} for $1 \leq \kappa_2 \leq v_n$. The term \begin{align*} \frac{v_n}{n''} & = \frac{ k_n(1+c_n)}{bn} \end{align*} tends to zero since $k_n c_n = o(n)$ by the lemma's assumption that $k_n \log \ell_n = o(n)$. This, together with the fact that $v_n \tilde{E}_n/n'' \to 0$ as $n \to \infty$ (cf.~\eqref{Eq_En_sn_zero2}), and hence $\tilde{E}_n/n'' \to 0$, implies that the RHS of~\eqref{Eq_rn_qn_ratio} tends to zero as $n \to \infty$. 
Finally, \begin{align} \frac{u_n(\kappa_2)}{ q_n(\kappa_2)} & = \frac{4 \rho \ell_n H_2\left(\frac{\kappa_2}{\ell_n}\right)}{n'' \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)} \notag \\ & = \frac{4 \rho \left[ \kappa_2 \log (\ell_n/\kappa_2) + \ell_n (\kappa_2/\ell_n -1) \log (1 -\kappa_2/\ell_n) \right] }{n'' \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)}\notag\\ & \leq \frac{4 \rho \left[ \kappa_2 \log (\ell_n/\kappa_2) + \kappa_2 \log e \right] }{n'' \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)} \notag\\ & \leq \frac{4 \rho \left[ \log (\ell_n/\kappa_2) + \log e \right] }{n'' \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)/v_n} \notag \\ & \leq \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \label{Eq_sn_qn_uppr} \end{align} for $1 \leq \kappa_2 \leq v_n$, where the first inequality follows from~\eqref{Eq_fn_uppr}. Since $\frac{\log \ell_n}{ \tilde{E}_n} = \frac{\log \ell_n}{c_n \ln \ell_n} \to 0$ and $v_n\tilde{E}_n/n'' \to 0$ as $n \to \infty$, the RHS of~\eqref{Eq_sn_qn_uppr} tends to zero as $n \to \infty$. Thus, it follows from~\eqref{Eq_qn_lowr}, \eqref{Eq_rn_qn_ratio}, and \eqref{Eq_sn_qn_uppr} that \begin{align} b^n_{\lambda, \rho}(\kappa_2) & \geq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)\left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right). \label{Eq_bn_lowr_bnd} \end{align} The lower bound in~\eqref{Eq_bn_lowr_bnd} is independent of $\kappa_2$ and ${\bf d}$. It thus follows that \begin{align} &\liminf_{n \rightarrow \infty} \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2) \notag \\ & \geq \lim_{n \to \infty} \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right) \lim_{n \to \infty} \! \left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right) \! \label{Eq_lim_bn} \notag \\ & = \frac{(\log e) \; \lambda(1-\lambda \rho)}{4} \notag \\ & = \frac{\log e}{12}\notag \end{align} which implies~\eqref{Eq_w2_lowr}. 
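The two limiting constants obtained above, $(\log e)/16$ and $(\log e)/12$, can be checked numerically by evaluating the leading terms $\frac{n''}{4\tilde{E}_n} \log\left(1+c\,\tilde{E}_n/n''\right)$ with $c=\lambda\rho(1-\lambda\rho)$ and $c=\lambda(1-\lambda\rho)$ for $\lambda=2/3$ and $\rho=3/4$ as $\tilde{E}_n/n''\to 0$. The following sketch does so for a few illustrative values of $\tilde{E}_n/n''$ (placeholders, not derived from the paper), with $\log$ taken to base $2$ for concreteness:

\begin{verbatim}
import numpy as np

lam, rho = 2 / 3, 3 / 4
c1 = lam * rho * (1 - lam * rho)   # = 1/4, governs the a-term
c2 = lam * (1 - lam * rho)         # = 1/3, governs the b-term

for x in [1e-1, 1e-2, 1e-3, 1e-4]:  # x plays the role of E_tilde_n / n''
    a_lead = np.log2(1 + c1 * x) / (4 * x)
    b_lead = np.log2(1 + c2 * x) / (4 * x)
    print(x, a_lead, b_lead)

# limiting values (log e)/16 and (log e)/12 in base-2 logarithms
print(np.log2(np.e) / 16, np.log2(np.e) / 12)
\end{verbatim}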
To prove~\eqref{Eq_w1w2_lowr}, we use~\eqref{Eq_gn_lowr}, \eqref{Eq_an_lowr}, and \eqref{Eq_bn_lowr_bnd} to lower-bound \begin{align} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) & \geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{ \log v_n + \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right) \notag \\ & + \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)\left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right) \notag \end{align} which is independent of $\kappa_1, \kappa_2$, and ${\bf d}$. Consequently, \begin{align} & \liminf_{n \to \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ \substack{ 1 \leq \kappa_1 \leq v_n \\ 1 \leq \kappa_2 \leq v_n}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \notag \\ & \geq \lim_{n \to \infty} \left\{ \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{ \log v_n + \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right)\right\} \notag \\ & + \lim_{n \to \infty} \left\{ \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)\left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right)\right\} \notag \\ & = \frac{\log e}{12} + \frac{\log e}{16} \notag \end{align} which implies~\eqref{Eq_w1w2_lowr}. This was the last step required to prove~\eqref{Eq_detect_err_exp}. We finish the proof of Lemma 1 by analyzing the third term on the RHS of~\eqref{Eq_detect_err_uoor}, namely, $\text{Pr} (\mbox{$\cal{E}$}_d| |{\bf D}^a| = 0) \text{Pr} (|{\bf D}^a| = 0)$. This term is upper-bounded by \begin{align*} \text{Pr} (\mbox{$\cal{E}$}_d| |{\bf D}^a| = 0)\left((1-\alpha_n)^{\frac{1}{\alpha_n}}\right)^{k_n} \end{align*} and vanishes if $k_n$ is unbounded. Next we show that this term also vanishes when $k_n$ is bounded. When $|{\bf D}^a| = 0$, an error occurs only if there are false alarms. For $\kappa_2$ false alarms, let $\bar{{\bf S}} \triangleq \sum_{j=1}^{\kappa_2} {\bf S}_j$, and let $S'_i$ denote the $i$-th component of $\bar{{\bf S}}$. 
From~\cite[eq.~(303)]{ChenCG17}, we obtain the following upper bound on the probability that there are $\kappa_2$ false alarms when $|{\bf D}^a| = 0$: \begin{align*} P(\mbox{$\cal{E}$}_{\kappa_2}| |{\bf d}| =0) & \leq \binom{\ell_n}{\kappa_2} \mathrm{E}_{\underline{{\bf S}}_{A_2}} \left\{ \text{Pr} \left\{ \sum_{i=1}^{n''} Z_i S'_i\geq \frac{1}{2} \|\bar{{\bf S}} \|^2 \right\} |\bar{{\bf S}} \right\} \notag \\ & \leq \left(\frac{1}{\mu}\right)^{\kappa_2} \binom{\ell_n}{\kappa_2} \mathrm{E}_{\tilde{\underline{{\bf S}}}_{A_2}} \left\{ \text{Pr} \left\{ \sum_{i=1}^{n''} Z_i S'_i \geq \frac{1}{2} \|\bar{{\bf S}} \|^2 \right\} |\bar{{\bf S}} \right\} \end{align*} where in the last inequality we used~\eqref{Eq_prob_signt_uppr}. By following the analysis that led to~\cite[eq.~(309)]{ChenCG17}, we obtain \begin{align} P(\mbox{$\cal{E}$}_{\kappa_2}| |{\bf d}| =0) & \leq \left(\frac{1}{\mu}\right)^{\kappa_2} \exp \left[ -\tilde{E}_n (q'_n(\kappa_2) - u_n'(\kappa_2) ) \right] \notag \end{align} where \begin{align} q'_n(\kappa_2) &\triangleq \frac{n''}{2\tilde{E}_n}\log \left( 1+ \frac{\kappa_2 \tilde{E}_n}{4 n''} \right) \notag \end{align} and \begin{align} u'_n(\kappa_2) & \triangleq \frac{ \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \notag \end{align} As before, we upper-bound $ \left(\frac{1}{\mu}\right)^{\kappa_2} \leq 2$ uniformly in $\kappa_2$ for $n \geq \tilde{n}_0$. Furthermore, we observe that the behaviours of $q'_n(\kappa_2)$ and $u'_n(\kappa_2)$ are similar to $q_n(\kappa_2)$ and $u_n(\kappa_2)$ given in~\eqref{Eq_qn} and in~\eqref{Eq_sn}, respectively. So by following the steps as before, we can show that \begin{align} \liminf_{n \to \infty} \min_{1 \leq \kappa_2 \leq v_n} q'_n(\kappa_2) > 0 \notag \end{align} and \begin{align} \lim_{n \to \infty} \min_{1 \leq \kappa_2 \leq v_n} \frac{u'_n(\kappa_2)}{ q'_n(\kappa_2)} = 0. \notag \end{align} It follows that there exist positive constants $\tau'$ and $ \tilde{n}'_0$ (independent of $\kappa_2$) such that \begin{align} P(\mbox{$\cal{E}$}_d | |{\bf d}| =0) & = \sum_{\kappa_2=1}^{v_n}P(\mbox{$\cal{E}$}_{\kappa_2} | |{\bf d}| =0) \notag\\ & \leq 2 v_n \exp \left[ -\tilde{E}_n \tau' \right], \quad n\geq \max(\tilde{n}_0, \tilde{n}'_0). \notag \end{align} We have already shown that $ v_n^2 \exp [ -\tilde{E}_n \tau' ]$ vanishes as $n \to \infty$ (cf. \eqref{Eq_detect_err_uppr}--\eqref{Eq_sn_En2}), which implies that $ 2 v_n \exp [ -\tilde{E}_n \tau' ]$ vanishes as $n \to \infty$. It thus follows that $P(\mbox{$\cal{E}$}_d | |{\bf d}| =0)$ tends to zero as $n \to \infty$. This was the last step required to prove Lemma~\ref{Lem_usr_detect}. \section{Proof of Lemma~\ref{Lem_energy_bound}} \label{Append_prob_lemma} Let $\mbox{$\cal{W}$}$ denote the set of all $(M_n+1)^{\ell_n}$ possible message tuples of the users. To prove Lemma~\ref{Lem_energy_bound}, we represent each ${\bf w} \in \mbox{$\cal{W}$}$ using an $\ell_n$-length vector such that the $i^{\mathrm{th}}$ position of the vector is set to $j$ if user $i$ has message $j$. The Hamming distance $d_H$ between two messages ${\bf w}=(w_1,\ldots,w_{\ell_n})$ and ${\bf w}'=(w'_1,\ldots,w'_{\ell_n})$ is defined as the number of positions at which ${\bf w}$ differs from ${\bf w}'$, i.e., $d_H({\bf w},{\bf w}') := \left|\{i: w_i\neq w'_i \}\right|$. We first group the set $\mbox{$\cal{W}$}$ into $\ell_n +1$ subgroups. Two messages ${\bf w}, {\bf w}' \in \mbox{$\cal{W}$}$ belong to the same subgroup if they have the same number of zeros. 
We can observe that all the messages in a subgroup have the same probability since the probability of a message ${\bf w}$ is determined by the number of zeros in it. Let $\mbox{$\cal{T}$}_{{\bf w}}^{t} $ denote the set of all messages with $t$ non-zero entries, where $t=0, \ldots, \ell_n$. Further let \begin{align} \text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) \triangleq \text{Pr}({\bf W} \in \mbox{$\cal{T}$}_{{\bf w}}^{t} )\notag \end{align} which can be evaluated as \begin{equation} \text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) = (1-\alpha_n)^{\ell_n-t} \left( \frac{\alpha_n}{M_n}\right)^{t} |\mbox{$\cal{T}$}_{{\bf w}}^{t} |. \label{Eq_type_prob} \end{equation} We define \begin{align} P_e(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) \triangleq \frac{1}{|\mbox{$\cal{T}$}_{{\bf w}}^{t} |} \sum_{w\in \mbox{$\cal{T}$}_{{\bf w}}^{t} } P_e({\bf w}) \label{Eq_type_err_prob} \end{align} where $P_e({\bf w})$ denotes the probability of error in decoding the set of messages ${\bf w}=(w_1,\ldots,w_{\ell_n})$. It follows that \begin{align} P_{e}^{(n)} & = \sum_{w \in \mbox{$\cal{W}$}} \text{Pr}({\bf W} ={\bf w}) P_e({\bf w}) \notag \\ & = \sum_{t=0}^{\ell_n} \sum_{w\in \mbox{$\cal{T}$}_{{\bf w}}^{t} } (1-\alpha_n)^{\ell_n-t} \left( \frac{\alpha_n}{M_n}\right)^{t} |\mbox{$\cal{T}$}_{{\bf w}}^{t} | \frac{1}{|\mbox{$\cal{T}$}_{{\bf w}}^{t} |} P_e({\bf w})\notag \\ & = \sum_{t=0}^{\ell_n} \text{Pr}( \mbox{$\cal{T}$}_{{\bf w}}^{t} ) \frac{1}{|\mbox{$\cal{T}$}_{{\bf w}}^{t} |} \sum_{w\in \mbox{$\cal{T}$}_{{\bf w}}^{t} } P_e({\bf w}) \notag \\ & = \sum_{t=0}^{\ell_n} \text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) P_e(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) \notag \\ & \geq \sum_{t=1}^{\ell_n} \text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) P_e(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) \label{Eq_avg_prob_err3} \end{align} where we have used \eqref{Eq_type_prob} and the definition of $P_e(\mathcal{T}_{\mathbf{w}}^t)$ in \eqref{Eq_type_err_prob}. To prove Lemma~\ref{Lem_energy_bound}, we next show that \begin{align} P_e(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n}, \quad t=1,\ldots, \ell_n. \label{Eq_prob_typ_lowr} \end{align} To this end, we partition each $\mbox{$\cal{T}$}_{{\bf w}}^{t},t=1,\ldots, \ell_n $ into $D_t$ sets $\mbox{$\cal{S}$}_d^t$. For every $ t=1,\ldots, \ell_n $, the partition that we obtain satisfies \begin{align} \frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{{\bf w} \in \mbox{$\cal{S}$}_d^t} P_e({\bf w}) \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n}. \label{Eq_avg_prob_lowr} \end{align} This then gives~\eqref{Eq_prob_typ_lowr} since \begin{align} P_e(\mbox{$\cal{T}$}_{{\bf w}}^{t} ) & = \sum_{d=1}^{D_t} \frac{|\mbox{$\cal{S}$}_d^t|}{|\mbox{$\cal{T}$}_{{\bf w}}^{t} |} \frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{w \in \mbox{$\cal{S}$}_d^t} P_e({\bf w}). \end{align} Before defining the sets $\mathcal{S}_d^t$, we note that \begin{align} M_n \geq 2 \label{Eq_seqMn_assum} \end{align} since $M_n=1$ would contradict the assumption that $\dot{R}>0$. We further assume that \begin{align} \ell_n \geq 5. \label{Eq_seqln_assum} \end{align} This assumption comes without loss of generality since $\ell_n \to \infty$ as $n \to \infty$ by the assumption that $k_n \log \ell_n = \omega(n)$ and $k_n = \Omega(1)$. 
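Since $|\mbox{$\cal{T}$}_{{\bf w}}^{t}| = \binom{\ell_n}{t}M_n^t$, the probabilities in~\eqref{Eq_type_prob} are simply the binomial probabilities of having $t$ active users, and they sum to one over $t=0,\ldots,\ell_n$. The following short numerical sanity check uses placeholder values of $\ell_n$, $M_n$, and $\alpha_n$ (illustrative only):

\begin{verbatim}
from math import comb

ell, M, alpha = 12, 4, 0.3        # toy ell_n, M_n, alpha_n (placeholders)

def type_prob(t):
    # Pr(T^t) = (1-alpha)^(ell-t) * (alpha/M)^t * |T^t|,  |T^t| = C(ell,t) M^t
    return (1 - alpha) ** (ell - t) * (alpha / M) ** t * comb(ell, t) * M ** t

total = sum(type_prob(t) for t in range(ell + 1))
print(total)                                 # equals 1.0 up to rounding
print(type_prob(0), (1 - alpha) ** ell)      # Pr(T^0) = (1-alpha)^ell
\end{verbatim}

In particular, $\text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{0}) = (1-\alpha_n)^{\ell_n}$, which is the term handled separately at the end of the proof.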
We next define a partition of $\mbox{$\cal{T}$}_{{\bf w}}^{t}, t=1,\ldots, \ell_n $ that satisfies the following: \begin{align} |\mbox{$\cal{S}$}_d^t| \geq \ell_n +1, \quad d=1, \ldots, D_t \label{Eq_set_lowr} \end{align} and \begin{align} d_{H}({\bf w},{\bf w}') \leq 8, \quad {\bf w}, {\bf w}' \in \mbox{$\cal{S}$}_d^t. \label{Eq_distnce_uppr} \end{align} To this end, we consider the following four cases: \underline{ Case 1: $t=1$:} For $t=1$, we do not partition the set, i.e., $\mbox{$\cal{S}$}_1^1 = \mbox{$\cal{T}$}_ {{\bf w}}^{1}$. Thus, we have $|\mbox{$\cal{S}$}_1^1| = \ell_nM_n$. From~\eqref{Eq_seqMn_assum} and~\eqref{Eq_seqln_assum}, it follows that $|\mbox{$\cal{S}$}_1^1| \geq \ell_n +1$. Since any two messages ${\bf w}, {\bf w}' \in \mbox{$\cal{T}$}_ {{\bf w}}^{1}$ have only one non-zero entry, we further have that $d_{H}({\bf w},{\bf w}') \leq 2$. Consequently, \eqref{Eq_set_lowr} and~\eqref{Eq_distnce_uppr} are satisfied. \underline{ Case 2: $t=2,\ldots,\ell_n -2$:} In this case, we obtain a partition by finding a code $\mbox{$\cal{C}$}_t$ in $\mbox{$\cal{T}$}_ {{\bf w}}^{t}$ that has minimum Hamming distance $5$ and such that for every ${\bf w}\in \mbox{$\cal{T}$}_ {{\bf w}}^{t}$ there exists at least one codeword in $\mbox{$\cal{C}$}_t$ which is at most at a Hamming distance 4 from it. Such a code exists because if for some ${\bf w}\in \mbox{$\cal{T}$}_ {{\bf w}}^{t}$ all codewords were at a Hamming distance 5 or more, then we could add ${\bf w}$ to $\mbox{$\cal{C}$}_t$ without affecting its minimum distance. Thus for all ${\bf w} \notin \mbox{$\cal{C}$}_t$, there exists at least one index $j$ such that $d_H({\bf w},{\bf c}_t(j)) \leq 4$, where ${\bf c}_t(1),\ldots, {\bf c}_t(|\mbox{$\cal{C}$}_t|)$ denote the codewords of code $\mbox{$\cal{C}$}_t$. With this code $\mathcal{C}_t$, we partition $\mathcal{T}_{\mathbf{w}}^t$ into the sets $\mathcal{S}_d^t$, $d=1,\ldots,D_t$ using the following procedure: For a given $d=1, \ldots,D_t$, we assign ${\bf c}_t(d)$ to $\mbox{$\cal{S}$}_d^t$ as well as all ${\bf w} \in \mbox{$\cal{T}$}_ {{\bf w}}^{t}$ that satisfy $d_H({\bf w}, {\bf c}_t(d))\leq 2$. These assignments are unique since the code $\mbox{$\cal{C}$}_t$ has minimum Hamming distance 5. We next consider all ${\bf w}\in \mbox{$\cal{T}$}_ {{\bf w}}^{t}$ for which no codeword in $\mbox{$\cal{C}$}_t$ satisfies $d_H({\bf w}, {\bf c}_t(j))\leq 2$ and assign each such ${\bf w}$ to the set with index $d = \min \{j=1,\ldots, D_t: d_H({\bf w}, {\bf c}_t(j)) \leq 4 \}$. In this way, we obtain a partition of $ \mbox{$\cal{T}$}_ {{\bf w}}^{t}$. Since any two ${\bf w}, {\bf w}' \in \mbox{$\cal{S}$}_d^t$ are at most at a Hamming distance 4 from the codeword ${\bf c}_t(d)$, we have that $d_{H}({\bf w},{\bf w}') \leq 8$ by the triangle inequality. Consequently, \eqref{Eq_distnce_uppr} is satisfied. To show that~\eqref{Eq_set_lowr} is satisfied, too, we use the following fact: \begin{align} \text{For two natural numbers } a \text{ and } b, \text{ if } a \geq 4 \text{ and } 2\leq b \leq a-2, \text{ then } b(a-b) \geq a. \label{Eq_prod_seq} \end{align} This fact follows since $b(a-b)$ is increasing from $b=1$ to $b = \lfloor a/2\rfloor$ and is decreasing from $b = \lfloor a/2\rfloor$ to $b=a-1$. So $b(a-b)$ is minimized at $b=2$ and $b=a-2$, where it has the value $2a-4$. For $a\geq 4$, this value is greater than or equal to $a$, hence the claim follows. From~\eqref{Eq_prod_seq}, it follows that if $|\mbox{$\cal{S}$}_d^t| \geq 1+ t(\ell_n - t)$, then $|\mbox{$\cal{S}$}_d^t|\geq 1+ \ell_n$. 
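The code $\mbox{$\cal{C}$}_t$ of Case 2 can be constructed greedily: scan the sequences of $\mbox{$\cal{T}$}_{{\bf w}}^{t}$ and add a sequence to the code whenever it is at Hamming distance at least $5$ from all codewords chosen so far; maximality then guarantees that every sequence lies within distance $4$ of some codeword. The following Python sketch builds such a code for a toy alphabet (the values of $\ell_n$, $M_n$, and $t$ are placeholders) and verifies both the minimum-distance and the covering property:

\begin{verbatim}
from itertools import combinations, product

ell, M, t = 7, 2, 3                      # toy ell_n, M_n, t (placeholders)

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# enumerate T^t: vectors with exactly t non-zero entries, values in {1,...,M}
T = []
for support in combinations(range(ell), t):
    for vals in product(range(1, M + 1), repeat=t):
        w = [0] * ell
        for i, v in zip(support, vals):
            w[i] = v
        T.append(tuple(w))

# greedy construction of a code with minimum Hamming distance 5
code = []
for w in T:
    if all(hamming(w, c) >= 5 for c in code):
        code.append(w)

# maximality implies covering radius at most 4
assert all(min(hamming(w, c) for c in code) <= 4 for w in T)
assert all(hamming(c1, c2) >= 5 for c1, c2 in combinations(code, 2))
print(len(T), len(code))
\end{verbatim}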
It thus remains to show that $|\mbox{$\cal{S}$}_d^t| \geq 1+ t(\ell_n - t)$. To this end, for every codeword ${\bf c}_t(d)$, consider all sequences in $\mbox{$\cal{T}$}_ {{\bf w}}^{t}$ which differ exactly in one non-zero position and in one zero position from ${\bf c}_t(d)$. There are $ t(\ell_n - t)M_n$ such sequences in $\mbox{$\cal{T}$}_ {{\bf w}}^{t}$, each of which is at Hamming distance 2 from ${\bf c}_t(d)$ and is therefore assigned to $\mbox{$\cal{S}$}_d^t$. Furthermore, we get \begin{align} t(\ell_n - t)M_n & \geq t(\ell_n - t) \notag \\ & \geq \ell_n \label{Eq_set_lowr2} \end{align} by~\eqref{Eq_seqMn_assum}, \eqref{Eq_seqln_assum}, and~\eqref{Eq_prod_seq}. Since the codeword ${\bf c}_t(d)$ also belongs to $\mbox{$\cal{S}$}_d^t$, it follows from~\eqref{Eq_set_lowr2} that \begin{align} | \mbox{$\cal{S}$}_d^t| &\geq \ell_n +1. \notag \end{align} \underline{ Case 3: $t=\ell_n -1$:} We obtain a partition by defining a code $\mbox{$\cal{C}$}_t$ in $ \mbox{$\cal{T}$}_ {{\bf w}}^{\ell_n -1}$ that has the same properties as the code used for Case 2. We then use the same procedure as in Case 2 to assign the messages ${\bf w} \in\mbox{$\cal{T}$}_ {{\bf w}}^{\ell_n -1}$ to the sets $\mbox{$\cal{S}$}_d^t$, $d=1,\ldots,D_t$. This gives a partition of $\mbox{$\cal{T}$}_ {{\bf w}}^{\ell_n -1}$ where any two ${\bf w}, {\bf w}'\in\mbox{$\cal{S}$}_d^t$ satisfy $d_H(\mathbf{w},\mathbf{w}')\leq 8$. Consequently, this partition satisfies~\eqref{Eq_distnce_uppr}. We next show that this partition also satisfies~\eqref{Eq_set_lowr}. To this end, for every codeword ${\bf c}_t(d)$, consider all the sequences which differ exactly in two non-zero positions from ${\bf c}_t(d)$. There are $ \binom{\ell_n-1}{2} (M_n-1)^2$ such sequences in $ \mbox{$\cal{T}$}_ {{\bf w}}^{\ell_n -1}$, and each of them is at Hamming distance 2 from ${\bf c}_t(d)$ and is therefore assigned to $\mbox{$\cal{S}$}_d^t$. Since $\mbox{$\cal{S}$}_d^t$ also contains the codeword ${\bf c}_t(d)$, we obtain that \begin{align*} | \mbox{$\cal{S}$}_d^t| & \geq \binom{\ell_n-1}{2} (M_n-1)^2 + 1\\ & \geq \binom{\ell_n-1}{2} +1 \\ & \geq \ell_n +1 \end{align*} by~\eqref{Eq_seqMn_assum} and~\eqref{Eq_seqln_assum}. \underline{ Case 4: $t=\ell_n$:} We obtain a partition by defining a code $\mathcal{C}_t$ in $\mathcal{T}_{\mathbf{w}}^{\ell_n}$ that has the same properties as the code used in Case 2. We then use the same procedure as in Case 2 to assign the messages $\mathbf{w}\in\mathcal{T}_{\mathbf{w}}^t$ to the sets $\mathcal{S}_d^t$, $d=1,\ldots,D_t$. This gives a partition of $\mathcal{T}_{\mathbf{w}}^t$ where any two $\mathbf{w}, \mathbf{w}'\in\mathcal{S}_d^t$ satisfy $d_H(\mathbf{w},\mathbf{w}')\leq 8$. Consequently, this partition satisfies~\eqref{Eq_distnce_uppr}. We next show that this partition also satisfies~\eqref{Eq_set_lowr}. To this end, for every codeword ${\bf c}_t(d)$, consider all sequences which are at Hamming distance $1$ from ${\bf c}_t(d)$. There are $\ell_n(M_n-1)$ such sequences, and each of them is assigned to $\mbox{$\cal{S}$}_d^t$. Since $\mbox{$\cal{S}$}_d^t$ also contains the codeword, we have \begin{align} | \mbox{$\cal{S}$}_d^t| & \geq 1+ \ell_n(M_n-1) \notag \\ & \geq 1+\ell_n \notag \end{align} by~\eqref{Eq_seqMn_assum}. Having obtained a partition of $\mathcal{T}_{\mathbf{w}}^t$ that satisfies~\eqref{Eq_set_lowr} and~\eqref{Eq_distnce_uppr}, we next derive the lower bound~\eqref{Eq_avg_prob_lowr}. To this end, we use a stronger form of Fano's inequality known as Birg\'e's inequality. \begin{lemma}[Birg\'e's inequality] \label{Lem_Berge} Let $(\mbox{$\cal{Y}$}, \mbox{$\cal{B}$})$ be a measurable space and let $P_1,\ldots, P_N$ be probability measures defined on $\mbox{$\cal{B}$}$. 
Further let $\mbox{$\cal{A}$}_i$, $i=1, \ldots,N$ denote $N$ disjoint events defined on $\mbox{$\cal{Y}$}$, where $N\geq 2$. Then \begin{align*} \frac{1}{N} \sum_{i=1}^{N} P_i(\mbox{$\cal{A}$}_i) \leq \frac{\frac{1}{N^2} \sum_{i,j} D(P_i\|P_j)+\log 2}{\log (N-1)}. \end{align*} \end{lemma} \begin{proof} See~\cite{Yatracos88} and references therein. \end{proof} To apply Lemma~\ref{Lem_Berge} to the problem at hand, we set $N=|\mbox{$\cal{S}$}_d^t|$ and $P_j = P_{Y|{\bf X}}(\cdot|{\bf x}(j))$, where ${\bf x}(j)$ denotes the set of codewords transmitted to convey the set of messages $j \in \mbox{$\cal{S}$}_d^t$. We further define $\mbox{$\cal{A}$}_j$ as the subset of $\mbox{$\cal{Y}$}^n$ for which the decoder declares the set of messages $j\in\mbox{$\cal{S}$}_d^t$. Then, the probability of error in decoding messages $j\in\mbox{$\cal{S}$}_d^t$ is given by $P_e(j) =1-P_j(\mbox{$\cal{A}$}_j)$, and $\frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{j\in \mbox{$\cal{S}$}_d^t} P_j(\mbox{$\cal{A}$}_j)$ denotes the average probability of correctly decoding a message in $\mbox{$\cal{S}$}_d^t$. For two multivariate Gaussian distributions \mbox{${\bf Z}_1 \sim \mbox{$\cal{N}$}(\boldsymbol {\mu_1 }, \frac{N_0}{2}I)$} and ${\bf Z}_2 \sim \mbox{$\cal{N}$}(\boldsymbol {\mu_2}, \frac{N_0}{2}I)$ (where $I$ denotes the identity matrix), the relative entropy $D({\bf Z}_1\| {\bf Z}_2)$ is given by $ \frac{ ||\boldsymbol {\mu_1 - \mu_2}||^2}{N_0}$. We next note that $P_{{\bf w}} = \mbox{$\cal{N}$}(\overline{{\bf x}}({\bf w}), \frac{N_0}{2}I)$ and $P_{{\bf w}'} = \mbox{$\cal{N}$}(\overline{{\bf x}}({\bf w}'), \frac{N_0}{2}I)$, where $\overline{{\bf x}}(j)$ denotes the sum of codewords contained in ${\bf x}(j)$. Furthermore, any two messages ${\bf w}, {\bf w}' \in \mbox{$\cal{S}$}_d^t$ are at a Hamming distance of at most 8. Without loss of generality, let us assume that $w_j = w'_j$ for $j=9, \ldots, \ell_n$. Then \begin{align} \left\|\sum_{j=1}^{\ell_n} {\bf x}_j(w_j) -\sum_{j=1}^{\ell_n} {\bf x}_j(w'_j)\right\|^2 & = \left\|\sum_{j=1}^{8} \left({\bf x}_j(w_j) - {\bf x}_j(w'_j)\right)\right\|^2 \notag \\ & \leq \left(\sum_{j=1}^{8} \left\|{\bf x}_j(w_j) - {\bf x}_j(w'_j)\right\|\right)^2 \notag\\ & \leq (8 \times 2\sqrt{E_n})^2 \notag \\ & = 256 E_n \notag \end{align} where we have used the triangle inequality and that the energy of a codeword for any user is upper-bounded by $E_n$. Thus, $D(P_{{\bf w}}\| P_{{\bf w}'}) \leq 256 E_n/N_0$. It follows from Birg\'e's inequality that \begin{align} \frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{{\bf w} \in \mbox{$\cal{S}$}_d^t} P_e({\bf w}) & \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log (|\mbox{$\cal{S}$}_d^t|-1)} \notag \\ & \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n } \label{Eq_prob_set_lowr} \end{align} where the last step holds because $ |\mbox{$\cal{S}$}_d^t|-1 \geq \ell_n$. This proves~\eqref{Eq_avg_prob_lowr} and hence also~\eqref{Eq_prob_typ_lowr}. Combining~\eqref{Eq_prob_typ_lowr} and~\eqref{Eq_avg_prob_err3}, we obtain \begin{align*} P_{e}^{(n)} & \geq \left( 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n } \right) \sum_{t=1}^{\ell_n} \text{Pr}(\mbox{$\cal{T}$}_ {{\bf w}}^{t}) \\ & = \left( 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n } \right) (1-\text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{0} )). \end{align*} By the assumption $k_n = \Omega(1)$, the probability $\text{Pr}(\mbox{$\cal{T}$}_{{\bf w}}^{0} ) = \left( (1-\alpha_n)^{\frac{1}{\alpha_n}}\right)^{k_n}$ converges to a value strictly less than one. 
Consequently, $P_{e}^{(n)}$ tends to zero only if \begin{align} E_n & = \Omega\left(\log \ell_n \right).\notag \end{align} This proves Lemma~\ref{Lem_energy_bound}. \section{Proof of Lemma~\ref{Lem_detect_uppr}} \label{Sec_appnd_ortho} Let ${\bf Y}_1$ denote the received vector of length $n/\ell_n$ corresponding to user 1 in the orthogonal-access scheme. From the pilot signal, which is the first symbol $Y_{11} $ of ${\bf Y}_1$, the receiver guesses whether user 1 is active or not. Specifically, the user is estimated as active if $Y_{11} > \frac{\sqrt{tE_n}}{2}$ and as inactive otherwise. If the user is declared as active, then the receiver decodes the message from the rest of ${\bf Y}_1$. Let $\text{Pr} ( \hat{W}_1 \neq w |W_1 = w)$ denote the decoding error probability when message $w,w=0, \ldots, M_n$ was transmitted. Then, $P_1$ is given by \begin{align} P_1 & = (1-\alpha_n)\text{Pr} ( \hat{W}_1 \neq 0) + \frac{\alpha_n}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w |W_1 = w) \notag \\ & \leq \text{Pr} ( \hat{W}_1 \neq 0|W_1=0) + \frac{1}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w | W_1 = w). \label{Eq_err_prob_uppr} \end{align} If $W_1=0$, then an error occurs if $Y_{11} > \frac{\sqrt{tE_n}}{2}$. So, we have \begin{align} \text{Pr} ( \hat{W}_1 \neq 0|W_1=0) & = Q\left( \frac{\sqrt{tE_n}}{2} \right). \label{Eq_err_prob_uppr2} \end{align} Let $\mbox{$\cal{E}$}_{11}$ denote the event $Y_{11} \leq \frac{\sqrt{tE_n}}{2}$ and $D_w$ denote the error event in decoding message $w$ for the transmission scheme described in Section~\ref{Sec_proof_ortho_access} when the user is known to be active. Then, for every $w=1,\ldots,M_n$ \begin{align} \text{Pr} ( \hat{W}_1 \neq w |W_1 = w) & = \text{Pr} (\mbox{$\cal{E}$}_{11} \cup \{ \mbox{$\cal{E}$}_{11}^c \cap \hat{W}_1 \neq w \}| W_1 = w) \notag \\ & \leq \text{Pr} (\mbox{$\cal{E}$}_{11}| W_1 = w) + \text{Pr} ( \mbox{$\cal{E}$}_{11}^c | W_1 = w) \text{Pr} ( \hat{W}_1 \neq w | W_1 = w, \mbox{$\cal{E}$}_{11}^c ) \notag \\ & \leq \text{Pr} (\mbox{$\cal{E}$}_{11}| W_1 = w) + \text{Pr} ( D_w | W_1 = w) \notag \end{align} where the last step follows because $\text{Pr} (\mbox{$\cal{E}$}_{11}^c|W_1=w)\leq 1$ and by the definition of $D_w$. We next define $\text{Pr} (D) = \frac{1}{M_n} \sum_{w=1}^{M_n} \text{Pr} (D_w)$. Since $P(\mbox{$\cal{E}$}_{11} | W_1 =w) = Q\left( \frac{\sqrt{tE_n}}{2} \right)$, it follows from~\eqref{Eq_err_prob_uppr} that \begin{align} P_1 & \leq 2 Q\left( \frac{\sqrt{tE_n}}{2} \right)+ P(D). \label{Eq_singl_usr_uppr} \end{align} We next upper-bound $P(D)$. To this end, we use the following upper bound on the average probability of error $P(\mathcal{E})$ of the Gaussian point-to-point channel for a code of blocklength $n$ with power $P$~\cite[Section~7.4]{Gallager68} \begin{align} P(\mbox{$\cal{E}$}) & \leq M_n^{ \rho} \exp[-nE_0(\rho, P)], \; \mbox{ for every } 0< \rho \leq 1 \label{Eq_upp_dec_AWGN} \end{align} where \begin{align} E_0(\rho, P) & \triangleq \frac{\rho}{2} \ln \left(1+\frac{2P}{(1+\rho)N_0}\right). 
\notag \end{align} Substituting $\frac{n}{\ell_n} - 1$ for $n$ and $P_n = \frac{(1-t)E_n}{\frac{n}{\ell_n} -1}$ for $P$ in~\eqref{Eq_upp_dec_AWGN}, we obtain that $P(D)$ can be upper-bounded in terms of the rate per unit-energy $\dot{R}=\frac{\log M_n}{E_n}$ as follows: \begin{align} P(D) & \leq M_n^{ \rho} \exp\left[-\left(\frac{ n}{\ell_n}-1\right)E_0(\rho, P_n)\right] \nonumber \\ & = \exp\left[ \rho \ln M_n - \left(\frac{ n}{\ell_n}-1\right) \frac{\rho}{2} \ln \left(1+\frac{ 2E_n(1-t)}{ \left(\frac{ n}{\ell_n}-1\right)(1+\rho)N_0}\right) \right] \nonumber \\ & = \exp\left[ -E_n(1-t) \rho \left( \frac{\ln \left(1+\frac{ 2E_n(1-t)}{ \left(\frac{ n}{\ell_n}-1\right)(1+\rho)N_0}\right)}{ \frac{2E_n(1-t)}{ \left(\frac{ n}{\ell_n}-1\right)} } -\frac{\dot{R}}{(1-t) \log e} \right)\right]. \label{Eq_err_uppr} \end{align} We next choose $E_n = c_n \ln n$ with $c_n \triangleq \ln\bigl(\frac{n}{\ell_n\ln n}\bigr)$. Since, by assumption, $\ell_n = o(n / \log n)$, this implies that $\frac{\ell_nE_n}{n} \to 0$ as $n \to \infty$, hence $\frac{E_n}{n/\ell_n -1} \to 0$. Thus, the first term in the innermost bracket in \eqref{Eq_err_uppr} tends to $1/((1+\rho)N_0)$ as $n \to \infty$. It follows that for $\dot{R} < \frac{\log e}{N_0}$, there exist a sufficiently large $n'_0$, a $ t > 0$, a $\rho > 0$, and a $\delta>0$ such that, for $n\geq n'_0$, the RHS of \eqref{Eq_err_uppr} is upper-bounded by $\exp[-E_n(1-t) \rho \delta]$. It follows that, for our choice $E_n=c_n\ln n$, we have for $n\geq n_0'$ \begin{align} P(D) & \leq \exp \left[ \ln \left(\frac{1}{n}\right)^{c_n\delta \rho(1-t)} \right]. \notag \end{align} Since $c_n \to \infty $ as $n\to\infty$, and hence also $c_n\delta \rho(1-t) \to \infty$, this yields \begin{align} P(D) & \leq \frac{1}{n^2} \label{Eq_act_dec_uppr} \end{align} for sufficiently large $n \geq n_0'$. Similarly, for $n\geq \tilde{n}_0$ and sufficiently large $\tilde{n}_0$, we can upper-bound \begin{equation} 2 Q\left(\frac{\sqrt{t E_n}}{2}\right) \leq \frac{1}{n^2} \label{Eq_usr_det_uppr} \end{equation} by upper-bounding the $Q$-function as $Q(\beta)\leq \frac{e^{-\beta^2/2}}{\sqrt{2\pi}\beta}$ and evaluating the resulting bound for $E_n=c_n\ln n$. Using~\eqref{Eq_act_dec_uppr} and~\eqref{Eq_usr_det_uppr} in~\eqref{Eq_singl_usr_uppr}, we obtain for $n \geq \max(\tilde{n}_0,n_0')$ that \begin{align} P_1 \leq \frac{2}{n^2}. \notag \end{align} This proves Lemma~\ref{Lem_detect_uppr}. \section{Introduction} Chen \emph{et al.} \cite{ChenCG17} introduced the many-access channel (MnAC) as a multiple-access channel (MAC) where the number of users grows with the blocklength and each user is active with a given probability. This model is motivated by systems consisting of a single receiver and many transmitters, the number of which is comparable to or even larger than the blocklength, a situation that may occur, \emph{e.g.}, in a machine-to-machine communication system with many thousands of devices in a given cell that are active only sporadically. In \cite{ChenCG17}, Chen \emph{et al.} considered a Gaussian MnAC with $\ell_n$ users, each of which is active with probability $\alpha_n$, and determined the number of messages $M_n$ each user can transmit reliably with a codebook of average power not exceeding $P$. Since then, MnACs have been studied in various papers under different settings. For example, Polyanskiy \cite{Polyanskiy17} considered a Gaussian MnAC where the number of active users grows linearly in the blocklength and each user's payload is fixed.
Zadik \emph{et al.} \cite{ZadikPT19} presented improved bounds on the tradeoff between user density and energy-per-bit of this channel. Low-complexity schemes for the MnAC were studied in \cite{OrdentlichP17,VemNCC17}. Generalizations to quasi-static fading MnACs can be found in \cite{KowshikPISIT19,KowshikP19,KowshiKAFPISIT19,KowshiKAFP19}. Shahi \emph{et al.} \cite{ShahiTD18} studied the capacity region of strongly asynchronous MnACs. Recently, we studied the capacity per unit-energy of the Gaussian MnAC as a function of the order of growth of the number of users when all users are active with probability one \cite{RaviKISIT19}. We showed that if the order of growth is above $n/ \log n$, then the capacity per unit-energy is zero, and if the order of growth is below $n/ \log n$, then each user can achieve the single-user capacity per unit-energy. Thus, there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible. We further showed that the capacity per unit-energy can be achieved by an \emph{orthogonal-access scheme} where the codewords of different users are orthogonal to each other. In this paper, we extend the analysis of \cite{RaviKISIT19} to a random-access setting. In particular, we consider a setting where the total number of users $\ell_n$ may grow as an arbitrary function of the blocklength and the probability $\alpha_n$ that a user is active may be a function of the blocklength, too. Let $k_n = \alpha_n \ell_n$ denote the average number of active users. We demonstrate that if $k_n \log \ell_n$ is sublinear in $n$, then each user can achieve the single-user capacity per unit-energy. Conversely, if $k_n \log \ell_n$ is superlinear in $n$, then the capacity per unit-energy is zero. Hence, there is again a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible, but the transition threshold depends on the behaviors of both $\ell_n$ and $k_n$. We further show that orthogonal-access schemes, which are optimal when $\alpha_n=1$, are strictly suboptimal when $\alpha_n \to 0$. The rest of the paper is organized as follows. Section~\ref{Sec_model} introduces the system model. Section~\ref{sec_joint} presents our main results. Section~\ref{sec_average} briefly discusses the capacity per unit-energy when the error probability is replaced by the so-called \emph{per-user probability of error} considered, e.g., in \cite{Polyanskiy17, OrdentlichP17,VemNCC17,ZadikPT19,KowshikPISIT19,KowshikP19,KowshiKAFPISIT19,KowshiKAFP19}. \section{Problem Formulation and Preliminaries} \label{Sec_model} \subsection{Model and Definitions} \label{Sec_Def} Consider a network with $\ell$ users that, if they are active, wish to transmit their messages $W_i, i=1, \ldots, \ell$ to one common receiver. The messages are assumed to be independent and uniformly distributed on $\{1,\ldots,M_n^{(i)}\}$. To transmit their messages, the users send a codeword of $n$ symbols over the channel, where $n$ is referred to as the \emph{blocklength}. We consider a many-access scenario where the number of users $\ell$ grows with $n$, hence, we denote it as $\ell_n$. We further assume that a user is active with probability $\alpha_n$, where $\alpha_n \to \alpha \in [0,1]$ as $n$ tends to infinity.
Since an inactive user is equivalent to a user transmitting the all-zero codeword, we can express the distribution of the $i$-th user's message as \begin{align} \text{Pr} \{W_i = w\} = \begin{cases} 1 - \alpha_n, & \quad w=0 \\ \frac{\alpha_n}{M_n^{(i)}}, & \quad w \in \{1,\ldots,M_n^{(i)}\} \end{cases} \label{Eq_messge_def} \end{align} and assume that the codebook is such that message $0$ is mapped to the all-zero codeword. We denote the average number of active users at blocklength $n$ by $k_n$, i.e., $k_n = \alpha_n \ell_n$. We consider a Gaussian channel model where the received vector ${\bf Y}$ is given by \begin{align*} {\bf Y} & = \sum_{i=1}^{\ell_n} {\bf X}_i(W_i) + {\bf Z}. \end{align*} Here $ {\bf X}_i(W_i)$ is the $n$-length transmitted codeword from user $i$ for message $W_i$ and ${\bf Z}$ is a vector of $n$ i.i.d. Gaussian components $Z_j \sim \mbox{$\cal{N}$}(0, N_0/2)$ independent of ${\bf X}_i$. \begin{definition} \label{Def_nMCode} For $0 \leq \epsilon < 1$, an $(n,\bigl\{M_n^{(\cdot)}\bigr\},\bigl\{E_n^{(\cdot)}\bigr\}, \epsilon)$ code for the Gaussian many-access channel consists of: \begin{enumerate} \item Encoding functions $f_i: \{0, 1,\ldots,M_n^{(i)}\} \rightarrow \mathbb{R}^n$, \mbox{$i =1,\ldots, \ell_n$} which map user $i$'s message to the codeword ${\bf X}_i(W_i)$, satisfying the energy constraint \begin{align} \label{Eq_energy_consrnt} \sum_{j=1}^{n} x_{ij}^2(w_i) \leq E_n^{(i)} \end{align} where $x_{ij}$ is the $j$-th symbol of the transmitted codeword. If $W_i =0$, then $x_{ij} =0$ for $j=1, \ldots, n$. \item Decoding function $g: \mathbb{R}^n \rightarrow \{ 0,1,\ldots,M_n^{(1)}\} \times \ldots \times \{ 0,1,\ldots,M_n^{(\ell_n)}\} $ which maps the received vector ${\bf Y}$ to the messages of all users and whose probability of error $P_{e}^{(n)}$ satisfies \end{enumerate} \begin{align} \label{Eq_prob_err} P_{e}^{(n)} \triangleq \text{Pr} \{ g({\bf Y}) \neq (W_1,\ldots,W_{\ell_n}) \} \leq \epsilon. \end{align} \end{definition} An $(n,\{M_n^{(\cdot)}\},\{E_n^{(\cdot)}\}, \epsilon)$ code is said to be \emph{symmetric} if $M_n^{(i)} = M_n$ and $E_n^{(i)} = E_n$ for all $i=1, \ldots, \ell_n$. For compactness, we denote such a code by $(n, M_n, E_n, \epsilon)$. In this paper, we restrict ourselves to symmetric codes. \begin{definition} \label{Def_Sym_Rate_Cost} For a symmetric code, the rate per unit-energy $\dot{R}$ is said to be $\epsilon$-achievable if for every $\delta > 0$ there exists an $n_0$ such that if $n \geq n_0$, then an $(n,M_n,E_n, \epsilon)$ code can be found whose rate per unit-energy satisfies $\frac{\log M_n}{ E_n} > \dot{R} - \delta$. Furthermore, $\dot{R}$ is said to be achievable if it is $\epsilon$-achievable for all $0 < \epsilon < 1$. The capacity per unit-energy $\dot{C}$ is the supremum of all achievable rates per unit-energy. \end{definition} \if 0 1 \subsection{Order Notations} Let $\{a_n\}$ and $\{b_n\}$ be two sequences of nonnegative real numbers. We write $a_n = O(b_n)$ if there exists an $n_0$ and a positive real number $S$ such that for all $n \geq n_0$, $a_n \leq S b_n$. We write $a_n = o(b_n)$ if $ \lim\limits_{n\rightarrow \infty} \frac{a_n}{b_n} = 0$, and $a_n = \Omega(b_n)$ if $\liminf\limits_{n \rightarrow \infty} \frac{a_n}{b_n} >0$. Similarly, $a_n = \Theta (b_n)$ indicates that there exist $ 0 < l_1<l_2$ and $n_0$ such that $l_1 b_n \leq a_n \leq l_2 b_n$ for all $n \geq n_0$. We write $a_n = \omega (b_n)$ if $\lim\limits_{n\rightarrow \infty} \frac{a_n}{b_n} = \infty$. 
\else \subsection{Order Notations} Let $\{a_n\}$ and $\{b_n\}$ be two sequences of nonnegative real numbers. We write $a_n = O(b_n)$ if there exists an $n_0$ and a positive real number $S$ such that for all $n \geq n_0$, $a_n \leq S b_n$. We write $a_n = o(b_n)$ if $ \lim\limits_{n\rightarrow \infty} \frac{a_n}{b_n} = 0$, and $a_n = \Omega(b_n)$ if $\liminf\limits_{n \rightarrow \infty} \frac{a_n}{b_n} >0$. Similarly, $a_n = \Theta (b_n)$ indicates that there exist $ 0 < l_1<l_2$ and $n_0$ such that $l_1 b_n \leq a_n \leq l_2 b_n$ for all $n \geq n_0$. We further write $a_n = \omega (b_n)$ if $\lim\limits_{n\rightarrow \infty} \frac{a_n}{b_n} = \infty$. \fi \section{Capacity per Unit-Energy} \label{sec_joint} In this section, we discuss our results on the behavior of capacity per unit-energy for Gaussian random MnACs. Our main result is Theorem~\ref{Thm_random_JPE}, which characterizes the capacity per unit-energy in terms of $\ell_n$ and $k_n$. In Theorem~\ref{Thm_ortho_accs}, we characterize the behavior of the largest rate per unit-energy that can be achieved by an orthogonal-access scheme. These results are presented in Subsection~\ref{Sec_results}. The proofs of Theorems~\ref{Thm_random_JPE} and~\ref{Thm_ortho_accs} are given in Subsections~\ref{Sec_proof_JPE} and~\ref{Sec_proof_ortho_access}, respectively. Before presenting our results, we first note that the case where $k_n$ vanishes as $n \to \infty$ is uninteresting. Indeed, this case only happens if $\alpha_n \to 0$. Then, the probability that all the users are inactive, given by $\bigl( (1-\alpha_n)^{\frac{1}{\alpha_n}}\bigr)^{k_n}$, tends to one since $(1-\alpha_n)^{\frac{1}{\alpha_n}} \to 1/e $ and $k_n \to 0$. Consequently, a code with $M_n=2$ and $E_n =0$ for all $n$ and a decoding function that always declares that all users are inactive achieve an error probability $P_{e}^{(n)}$ that vanishes as $n \to \infty$. This implies that $\dot{C}= \infty$. In the following, we avoid this trivial case and assume that $\ell_n$ and $\alpha_n$ are such that $k_n$ is bounded away from zero. \subsection{Our Main Results} \label{Sec_results} \begin{theorem} \label{Thm_random_JPE} Assume that $k_n =\Omega(1)$. Then the capacity per unit-energy of the Gaussian random MnAC has the following behavior: \begin{enumerate} \item If $k_n \log \ell_n = o(n)$, then $\dot{C} = (\log e )/ N_0$. \label{Thm_achv_part} \item If $k_n \log \ell_n = \omega(n)$, then $\dot{C} =0$. \label{Thm_conv_part} \end{enumerate} \end{theorem} \begin{proof} See Subsection~\ref{Sec_proof_JPE}. \end{proof} Theorem~\ref{Thm_random_JPE} demonstrates that there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where no positive rate per unit-energy is feasible. The same behavior was observed for the non-random-access case, where the transition threshold separating these two regimes is at $n/ \log n$~\cite{RaviKISIT19}. When $\alpha_n$ converges to a positive value, the order of growth of $k_n \log \ell_n$ coincides with that of both $k_n \log k_n$ and $\ell_n \log \ell_n$. In this case, the transition threshold in the random-access case is also at $n /\log n$. However, when $\alpha_n \to 0$, the orders of growth of $k_n$ and $\ell_n$ are different and the transition threshold for $\ell_n$ is in general larger than $n / \log n$, so random user-activity enables interference-free communication at an order of growth above the limit $n/ \log n$ of the non-random-access case.
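For instance, if $\ell_n = n$ and $\alpha_n = 1/(\log n)^2$, then $k_n = n/(\log n)^2 = \Omega(1)$ and $k_n \log \ell_n = n/\log n = o(n)$, so Part~\ref{Thm_achv_part}) of Theorem~\ref{Thm_random_JPE} yields $\dot{C} = (\log e)/N_0$, even though $\ell_n = \omega(n/\log n)$.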
Similarly, when $\alpha_n \to 0$, the transition threshold for $k_n$ is in general smaller than $n/ \log n$, so treating a random MnAC with $\ell_n$ users as a non-random MnAC with $k_n$ users may be overly-optimistic. In~\cite{RaviKISIT19}, it was shown that, when \mbox{$k_n =o(n/ \log n)$} and \mbox{$\alpha_n =1$}, an orthogonal-access scheme is sufficient to achieve the capacity per unit-energy. It turns out that this is not the case anymore when $\alpha_n \to 0$. \begin{theorem} \label{Thm_ortho_accs} Assume that $k_n = \Omega(1)$. The largest rate per unit-energy $\dot{C}_{\bot}$ achievable with an orthogonal-access scheme satisfies the following: \begin{enumerate}[1)] \item If $ \ell_n = o(n/ \log n)$, then $\dot{C}_{\bot} = (\log e )/ N_0$. \label{Thm_ortho_accs_achv} \item If $ \ell_n = \omega(n/ \log n)$, then $\dot{C}_{\bot} =0$. \label{Thm_ortho_accs_conv} \end{enumerate} \end{theorem} \begin{proof} See Subsection~\ref{Sec_proof_ortho_access}. \end{proof} Observe that there is again a sharp transition between the orders of growth of $\ell_n$ where interference-free communication is feasible and orders of growth where no positive rate per unit-energy is feasible. In contrast to the optimal transmission scheme, the transition threshold for orthogonal-access schemes happens at $n/ \log n$, irrespective of the behavior of $\alpha_n$. Thus, by using an orthogonal-access scheme, we treat the random MnAC as if it were a non-random MnAC. Theorem~\ref{Thm_ortho_accs} also implies that there are orders of growth of $\ell_n$ and $k_n$ where non-orthogonal-access schemes are necessary to achieve the capacity per unit-energy. \subsection{Proof of Theorem~\ref{Thm_random_JPE}} \label{Sec_proof_JPE} To prove Part~\ref{Thm_achv_part}), we use an achievability scheme with a decoding process consisting of two steps. First, the receiver determines which users are active. If the number of estimated active users is less than or equal to $\xi k_n$ for some positive integer $\xi$, then the receiver decodes the messages of all active users. If the number of estimated active users is greater than $\xi k_n$, then it declares an error. The total error probability of this scheme is upper-bounded by \begin{equation*} P(\mbox{$\cal{D }$}) + \sum_{k'_n=1}^{\xi k_n}\text{Pr} \{K'_n=k_n'\}P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr) + \text{Pr} \{K'_n>\xi k_n\} \end{equation*} where $K'_n$ is the number of active users, $P(\mbox{$\cal{D }$})$ is the probability of a detection error, and $P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr)$ is the probability of a decoding error when the receiver has correctly detected that there are $k'_n$ users active. In the following, we show that these probabilities vanish as $n\to\infty$ for any fixed, positive integer $\xi$. Furthermore, by Markov's inequality, we have that $\text{Pr} \{K'_n>\xi k_n\}\leq 1/\xi$. It thus follows that the total probability of error vanishes as we let first $n\to\infty$ and then $\xi\to\infty$. To enable user detection at the receiver, out of $n$ channel uses, each user uses the first $n''$ channel uses to send its signature and \mbox{$n'=n -n''$} channel uses for sending the message. Furthermore, the signature uses energy $E_n''$ out of $E_n$, while the energy used for sending message is given by $E_n' = E_n -E_n''$. Let ${\bf s}_i$ denote the signature of user $i$ and $\tilde{{\bf x}}_i(w_i)$ denote the codeword of length $n'$ for sending the message $w_i$, where $w_i =1,\ldots, M_n$. 
Then the codeword ${\bf x}_i(w_i)$ is given by \begin{align*} {\bf x}_i(w_i) = ({\bf s}_i, \tilde{{\bf x}}_i(w_i)). \end{align*} Explicitly, for a given arbitrary $0 < b < 1$, we let \begin{equation} n'' = bn, \quad \label{Eq_channel_choice} \end{equation} and \begin{equation} \label{Eq_energy_choice} E_n'' = bE_n, \quad E_n = c_n \ln \ell_n \end{equation} with $c_n = \ln (\frac{n}{k_n\ln \ell_n})$. Based on the first $n''$ received symbols, the receiver detects which users are active. We need the following lemma to show that the detection error probability vanishes as $n \to \infty$. \begin{lemma} \label{Lem_usr_detect} If $k_n \log \ell_n = o(n)$, then there exist signatures ${\bf s}_i, i=1, \ldots, \ell_n$ with $n''$ channel uses and energy $E_n''$ such that $P(\mbox{$\cal{D }$})$ vanishes as $n \to \infty$. \end{lemma} \begin{proof} \if 0 1 The proof follows along similar lines as that of~\cite[Theorem~2]{ChenCG17}. For details, see the extended version of this paper~\cite{RaviKISIT20}. \else The proof follows along similar lines as that of~\cite[Theorem~2]{ChenCG17}. For details, see Appendix~\ref{Sec_Lem_detct_proof}. \fi \end{proof} We next use the following lemma to show that $P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr)$ vanishes as $n \to \infty$ uniformly in $k'_n \in \mbox{$\cal{K}$}_n$, where $\mbox{$\cal{K}$}_n \triangleq \{1, \ldots, \xi k_n\}$. \begin{lemma} \label{Lem_err_expnt} Let $A_{k'_n} \triangleq \frac{1}{k_n'} \sum_{i=1}^{k_n'} \I{ \hat{W}_i \neq W_i}$ and \mbox{$\mbox{$\cal{A}$}_{k'_n} \triangleq \{1/k_n', \ldots,1 \}$}, where $\I{\cdot}$ denotes the indicator function. Then for any arbitrary $0<\rho \leq 1$, we have \begin{equation} \textnormal{Pr}\{A_{k'_n} = a\} \leq \left(\frac{1}{\mu}\right)^{2k'_n} {k_n' \choose a k_n'} M_n^{a k_n' \rho} e^{-nE_0(a, \rho)}, \quad a\in\mbox{$\cal{A}$}_{k'_n}\label{Eq__random_prob_err} \end{equation} where \begin{align} E_0(a, \rho) \triangleq \frac{\rho}{2} \ln \left(1+\frac{a 2k_n' E_n'}{n'(\rho +1)N_0}\right) \label{Eq_random_expnt} \end{align} and \begin{align} \mu & \triangleq \int \I{ \|\bar{a}\|^2 \leq E_n'} \prod_{i=1}^{n} \tilde{q}(a_i) d \bar{a} \label{Eq_def_mu} \end{align} is a normalizing constant. In~\eqref{Eq_def_mu}, $\tilde{q}$ denotes the probability density function of a zero-mean Gaussian random variable with variance $E_n'/(2n')$. \end{lemma} \begin{proof} The upper bound in~\eqref{Eq__random_prob_err} without the factor $(1/\mu)^{2k'_n}$ can be obtained using random coding with i.i.d. Gaussian inputs~\cite[Theorem~2]{Gallager85}. However, while i.i.d. Gaussian codebooks satisfy the energy constraint on average (averaged over all codewords), there may be some codewords in the codebook that violate it. We therefore need to adapt the proof of~\cite[Theorem~2]{Gallager85} as follows. Let \begin{align*} \tilde{{\bf q}}(\bar{a}) & = \prod_{i=1}^{n} \tilde{q}(a_i), \quad \bar{a} =(a_1,\ldots,a_n). 
\end{align*} For codewords distributed according to $\tilde{{\bf q}}(\cdot)$, the probability $\text{Pr} (A_{k'_n}=a)$ can be upper-bounded as~\cite[Theorem~2]{Gallager85} \begin{align} \text{Pr} (A_{k'_n} = a) & \leq {k_n' \choose a k_n'} M_n^{a k_n' \rho} \int \tilde{{\bf q}}(\tilde{{\bf x}}_{ak'_n+1}) \cdots \tilde{{\bf q}}(\tilde{{\bf x}}_{k'_n}) \; G ^{1+\rho} \; d\tilde{{\bf x}}_{ak'_n+1} \cdots d\tilde{{\bf x}}_{k'_n} \;d\tilde{{\bf y}} \label{eq_a1} \end{align} where \begin{align*} G & = \int \tilde{{\bf q}}(x_{1}) \cdots \tilde{{\bf q}}(x_{ak'_n}) \left( p(\tilde{{\bf y}} \mid \tilde{{\bf x}}_{1},\cdots, \tilde{{\bf x}}_{k'_n})\right) ^{1/(1+\rho)} d\tilde{{\bf x}}_{1} \cdots d\tilde{{\bf x}}_{ak'_n} \notag. \end{align*} Using the fact that the channel is memoryless, the RHS of~\eqref{Eq__random_prob_err} without the factor $(1/\mu)^{2k'_n}$ follows from~\eqref{eq_a1}. The case of $k'_n =2$ was analyzed in~\cite[Eq.~(2.33)]{Gallager85}. Now suppose that all codewords are generated according to the distribution \begin{align*} {\bf q}(\bar{a}) & = \frac{1}{\mu} \I{ \|\bar{a}\|^2 \leq E_n'} \tilde{{\bf q}}(\bar{a}). \end{align*} Clearly, such codewords satisfy the energy constraint $E'_n$ with probability one. Furthermore, \begin{align} {\bf q}(\bar{a}) & \leq \frac{1}{\mu} \tilde{{\bf q}}(\bar{a}). \label{Eq_prob_signt_uppr1} \end{align} By replacing $\tilde{{\bf q}}(\cdot)$ in~\eqref{eq_a1} by ${\bf q}(\cdot)$ and upper-bounding ${\bf q}(\cdot)$ by~\eqref{Eq_prob_signt_uppr1}, we obtain that \begin{align} \textnormal{Pr}\{A_{k'_n} = a\} \leq \left(\frac{1}{\mu}\right)^{(1+\rho)(ak'_n)}\left(\frac{1}{\mu}\right)^{k'_n - ak'_n} {k_n' \choose a k_n'} M_n^{a k_n' \rho} e^{-nE_0(a, \rho)}, \quad a\in\mbox{$\cal{A}$}_{k'_n}. \label{Eq_Prob_An_uppr} \end{align} From the definition of $\mu$, we have that $0 \leq \mu \leq 1$. Since we further have $\rho\leq 1$ and $a\leq 1$, it follows that $(1/\mu)^{(1+\rho)(ak'_n)} \leq (1/\mu)^{a k_n'+k_n'}$. Consequently, \eqref{Eq__random_prob_err} follows from~\eqref{Eq_Prob_An_uppr}. \end{proof} Next we show that $\left(\frac{1}{\mu}\right)^{2k'_n} \to 1$ as $n \to \infty$ uniformly in $k'_n \in \mbox{$\cal{K}$}_n$. By the definition of $\mu$, we have \begin{align} \mu & = 1 - \text{Pr} \left(\|\tilde{{\bf X}}_1\|_2^2 \geq E_n'\right) \notag \end{align} so $(1/\mu)^{2 k'_n} \geq 1$. Let us consider $\tilde{{\bf X}}_0 \triangleq \frac{2 n'}{E_n'} \|\tilde{{\bf X}}_1\|_2^2$. Then, \begin{align*} \text{Pr} \left(\|\tilde{{\bf X}}_1\|_2^2 \geq E_n'\right) & = \text{Pr} (\tilde{{\bf X}}_0 \geq 2 n'). \end{align*} Furthermore, $\tilde{{\bf X}}_0$ has a central chi-square distribution with $n'$ degrees of freedom. So, from the Chernoff bound we obtain that \begin{align*} \text{Pr} (\tilde{{\bf X}}_0 \geq a) & \leq \frac{E(e^{t\tilde{{\bf X}}_0})}{e^{ta}} \\ & = \frac{(1-2t)^{-n'/2}}{e^{ta}} \end{align*} for every $0 < t < \frac{1}{2}$. By choosing $a= 2 n'$ and $t= \frac{1}{4}$, this yields \begin{align} \text{Pr} (\tilde{{\bf X}}_0 \geq 2 n') & \leq \frac{ \left(\frac{1}{2}\right)^{-n'/2}}{ \exp(n'/2) } \notag \\ & = \exp \left[-\frac{n'}{2} \tau \right] \notag \end{align} where $\tau \triangleq \left( 1 - \ln 2 \right)$ is strictly positive. Thus, \begin{align} 1 & \leq \left(\frac{1}{\mu}\right)^{2k'_n} \notag \\ & \leq \left(\frac{1}{\mu}\right)^{2\xi k_n} \notag \\ &= (1-\text{Pr} (\tilde{{\bf X}}_0 \geq 2 n'))^{-(2\xi k_n)}\notag \\ & \leq \left(1 - \exp \left[-\frac{n'}{2} \tau \right]\right)^{-(2\xi k_n)}, \quad k'_n \in \mbox{$\cal{K}$}_n.
\label{Eq_mu_uppr2} \end{align} We have that $k_n=o(n)$ and $n' = \Theta(n)$. Since for any two non-negative sequences $a_n$ and $b_n$ such that $a_n\to 0$ and $a_nb_n \to 0$ as $n \to \infty$, it holds that $(1-a_n)^{-b_n} \to 1$ as $n \to \infty$, we obtain that the RHS of~\eqref{Eq_mu_uppr2} tends to one as $n \to \infty$ uniformly in $k'_n\in \mbox{$\cal{K}$}_n$. So there exists a positive constant $n_0$ that is independent of $k'_n$ and satisfies \begin{align*} \left(\frac{1}{\mu}\right)^{2k'_n} \leq 2, \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n_0. \end{align*} The probability of error $P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr)$ can be written as \begin{equation} P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr) = \sum\limits_{a\in\mbox{$\cal{A}$}_{k'_n}} \textnormal{Pr}\{A_{k'_n} = a\}. \label{Eq_prob_err_def} \end{equation} So, from Lemma~\ref{Lem_err_expnt}, we obtain \begin{align} \textnormal{Pr}\{A_{k'_n} = a\} & \leq 2 {k_n' \choose a k_n'} M_n^{a k_n' \rho} \exp[-n'E_0(a, \rho)] \notag\\ & \leq 2 \exp\left[ k_n'H_2(a) + a \rho k_n' \log M_n - n'E_0(a, \rho) \right]\notag \\ & = 2 \exp \left[-E_n'f_{k'_n}(a, \rho)\right], \quad n \geq n_0 \end{align} where \begin{align} f_{k'_n}(a, \rho) \triangleq \frac{n'E_0(a, \rho)}{E_n'} - \frac{a \rho k_n' \log M_n}{E_n'} - \frac{k_n' H_2(a)}{E_n'}. \label{Eq_fn_def} \end{align} We next show that, for sufficiently large $n$, we have \begin{align} \textnormal{Pr}\{A_{k'_n} = a\} \leq 2 \exp \left[-E_n'f_{\xi k_n}(1/(\xi k_n), \rho)\right], \quad a\in\mbox{$\cal{A}$}_{k'_n}, k'_n \in \mbox{$\cal{K}$}_n. \label{Eq_err_upp_bnd} \end{align} To this end, we first note that using basic algebra, we obtain \begin{align*} \frac{d f_{k'_n}(a, \rho)}{da} & \geq \rho k'_n \left[ \frac{1}{1+\frac{2k'_nE'_n}{n'(\rho+1)N_0}} \frac{1}{(1+\rho)N_0} - \frac{\dot{R}}{(1-b)\log e} \right]\\ & \geq \rho \left[ \frac{1}{1+\frac{2 \xi k_nE'_n}{n'(\rho+1)N_0}} \frac{1}{(1+\rho)N_0} - \frac{\dot{R}}{(1-b)\log e} \right]. \end{align*} This implies that for any fixed value of $\rho$ and our choices of $E_n'$ and \mbox{$\dot{R} = \frac{(1-b)\log e}{(1+\rho)N_0} - \delta$} (for some arbitrary $0<\delta<\frac{(1-b)\log e}{(1+\rho)N_0}$), \begin{equation*} \liminf_{n\to\infty} \min_{k'_n \in \mbox{$\cal{K}$}_n} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} \frac{d f_{k'_n}(a, \rho)}{da} > 0. \end{equation*} This follows from the fact that $\frac{k_n E_n'}{n'} \to 0$ as $n \to \infty$, which in turn follows from our choice of $E_n'$ and since $k_n = o(n / \log n)$. So there exists a positive constant $n'_0$ that is independent of $k'_n$ and satisfies \begin{equation*} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} f_{k'_n}(a, \rho) \geq f_{ k'_n}(1/k'_n, \rho), \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n'_0 \end{equation*} Furthermore, from the definition of $f_{k'_n}(a, \rho) $ in~\eqref{Eq_fn_def}, it follows that for $a = 1/k'_n$ and for a given $\rho$, $f_{ k'_n}(a, \rho)$ is decreasing in $k'_n$ since in this case the first two terms on the RHS of~\eqref{Eq_fn_def} are independent of $k'_n$ and the third term is increasing in $k'_n$. Hence, we can further lower-bound \begin{equation*} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} f_{k'_n}(a, \rho) \geq f_{\xi k_n}(1/(\xi k_n), \rho), \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n'_0. \end{equation*} Next we show that, for our choice of $E_n'$ and \mbox{$\dot{R}$}, we have \begin{equation} \label{eq:lim_pos} \liminf_{n \rightarrow \infty} f_{\xi k_n}(1/(\xi k_n), \rho) >0. 
\end{equation} Let \begin{IEEEeqnarray*}{rCl} i_n(1/(\xi k_n),\rho) & \triangleq & \frac{n' E_0(1/(\xi k_n),\rho)}{E_n'}\\ j(\rho) & \triangleq & \frac{\rho \dot{R}}{(1-b)\log e}\\ h_n(1/(\xi k_n)) & \triangleq & \frac{\xi k_n H_2(1/(\xi k_n))}{E_n'}. \end{IEEEeqnarray*} Note that $\frac{h_n(1/(\xi k_n))}{j(\rho)}$ vanishes as $n \to \infty$ for our choice of $E_n'$. Consequently, \begin{IEEEeqnarray*}{lCl} \liminf_{n \rightarrow \infty} f_{\xi k_n}(1/(\xi k_n), \rho) & = & j(\rho) \biggl\{\liminf_{n\to\infty} \frac{i_n(1/(\xi k_n),\rho)}{j(\rho)} - 1 \biggr\}. \end{IEEEeqnarray*} The term $j( \rho) =\rho \dot{R}/((1-b)\log e)$ is bounded away from zero for our choice of $\dot{R}$ and $\delta < \frac{(1-b)\log e}{(1+\rho)N_0}$. Furthermore, since $E_n'/n' \to 0$, we get \begin{equation*} \lim_{n\to\infty} \frac{i_n(1/(\xi k_n),\rho)}{j(\rho)} = \frac{(1-b)\log e}{(1+\rho)N_0 \dot{R}} \end{equation*} which is strictly larger than $1$ for our choice of $\dot{R}$. So, \eqref{eq:lim_pos} follows. Consequently, there exist two positive constants $\gamma$ and $n''_0$ that are independent of $k'_n$ and satisfy $f_{k'_n}(a, \rho) \geq \gamma $ for $a\in \mbox{$\cal{A}$}_{k'_n}$, $k'_n\in \mbox{$\cal{K}$}_n$, and $n \geq n''_0$. We conclude that for $n \geq \max (n_0,n'_0,n''_0)$, \begin{align} \textnormal{Pr}\{A_{k'_n} = a\} \leq 2 e^{-E_n'\gamma}, \quad a \in \mbox{$\cal{A}$}_{k'_n}, k'_n \in \mbox{$\cal{K}$}_n. \label{Eq_type_uppr} \end{align} Since $|\mbox{$\cal{A}$}_{k'_n}| = k_n'$, it follows from~\eqref{Eq_prob_err_def} and~\eqref{Eq_type_uppr} that \begin{align} P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr) \leq 2 k_n' e^{-E_n'\gamma}, \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq \max (n_0,n'_0,n''_0). \notag \end{align} Further upper-bounding $k'_n \leq \xi k_n$, this implies that \begin{align} \sum_{k'_n=1}^{\xi k_n}\text{Pr} \{K'_n=k_n'\}P\bigl(\mbox{$\cal{E}$}_m(k'_n)\bigr) & \leq 2 \xi k_n e^{-E_n'\gamma}, \quad n \geq \max (n_0,n'_0,n''_0). \label{Eq_sum_prob_uppr} \end{align} Since $E_n' = (1-b)c_n \ln \ell_n$ and $k_n = O(\ell_n)$, it follows that the RHS of~\eqref{Eq_sum_prob_uppr} tends to 0 as $n \to \infty$ for our choice of \mbox{$\dot{R} = \frac{(1-b)\log e}{(1+\rho)N_0} - \delta$}. Since $\rho,\delta,$ and $b$ are arbitrary, any rate $\dot{R} < \frac{\log e}{N_0}$ is thus achievable. This proves Part~\ref{Thm_achv_part}) of Theorem~\ref{Thm_random_JPE}. Next we prove Part~\ref{Thm_conv_part}). Let $\hat{W_i}$ denote the receiver's estimate of $W_i$, and denote by ${\bf W}$ and ${\bf \hat{W}}$ the vectors $(W_1,\ldots, W_{\ell_n})$ and $(\hat{W_1},\ldots,\hat{W}_{\ell_n})$, respectively. The messages $W_1,\ldots, W_{\ell_n}$ are independent, so it follows from~\eqref{Eq_messge_def} that \begin{align*} H({\bf W}) = \ell_n H(W_1) = \ell_n \left(H_2(\alpha_n) + \alpha_n \log M_n \right) \end{align*} where $H_2(\cdot)$ denotes the binary entropy function. Since $ H({\bf W}) = H({\bf W}|{\bf Y})+I({\bf W};{\bf Y})$, we obtain \begin{align} \ell_n \left(H_2(\alpha_n) + \alpha_n \log M_n \right) & =H({\bf W}|{\bf Y})+I({\bf W};{\bf Y}). \label{Eq_messge_entrpy} \end{align} To bound the two terms on the RHS of~\eqref{Eq_messge_entrpy}, we use the upper bounds~\cite[Lemma~2]{ChenCG17} \begin{align} H({\bf W}|{\bf Y}) \leq & \log 4 + 4 P_{e}^{(n)}\big(k_n \log M_n + k_n + \ell_n H_2(\alpha_n) + \log M_n \big) \label{Eq_messg_cond_entrpy} \end{align} and~\cite[Lemma~1]{ChenCG17} \begin{align} I({\bf W};{\bf Y}) \leq \frac{n}{2} \log \left(1+\frac{ 2k_nE_n}{nN_0}\right).
\label{Eq_mutl_info_uppr} \end{align} Using~\eqref{Eq_messg_cond_entrpy} and~\eqref{Eq_mutl_info_uppr} in~\eqref{Eq_messge_entrpy}, rearranging terms, and dividing by $k_nE_n$, yields \begin{align} \left(1-4 P_{e}^{(n)}(1+1/k_n)\right) \dot{R} \leq & \frac{\log 4}{k_nE_n} + \frac{H_2(\alpha_n)}{\alpha_n E_n} \! \left(4 P_{e}^{(n)} -1\right) \notag \\ & \quad + 4 P_{e}^{(n)} (1/E_n + 1/k_n) +\frac{n}{2 k_nE_n} \log \left(1+\frac{ 2k_nE_n}{nN_0}\right)\! . \label{Eq_rate_joint_uppr} \end{align} We next show that if $k_n \log \ell_n = \omega(n)$, then the right-hand side (RHS) of~\eqref{Eq_rate_joint_uppr} tends to a non-positive value. To this end, we need the following lemma. \begin{lemma} \label{Lem_energy_bound} If $\dot{R} > 0$, then $P_{e}^{(n)}$ vanishes as $n\to\infty$ only if \mbox{$E_n = \Omega(\log \ell_n)$}. \end{lemma} \begin{proof} \if 0 1 The proof of this lemma follows along similar lines as that of~\cite[Lemma~2]{RaviKISIT19}. For details, see~\cite{RaviKISIT20}. \else See Appendix~\ref{Append_prob_lemma}. \fi \end{proof} Part~\ref{Thm_conv_part}) of Theorem~\ref{Thm_random_JPE} follows now by contradiction. Indeed, let us assume that $k_n \log \ell_n = \omega(n)$, $P_{e}^{(n)} \to 0$, and $\dot{R} >0$. Then, Lemma~\ref{Lem_energy_bound} together with the assumption that $k_n = \Omega(1)$ implies that $k_nE_n = \omega(n)$. It follows that the last term on the RHS of~\eqref{Eq_rate_joint_uppr} tends to zero as $n \to \infty$. The assumption $k_n \log \ell_n = \omega(n)$ in turn implies that $\ell_n \to \infty$ as $n \to \infty$. So, by Lemma~\ref{Lem_energy_bound}, $E_n \to \infty$. Together with the assumption that $k_n = \Omega(1)$, this implies that the first and third term on the RHS of~\eqref{Eq_rate_joint_uppr} vanish as $n \to \infty$. Finally, $\frac{H_2(\alpha_n)}{\alpha_n E_n}$ is a sequence of non-negative numbers and $(4 P_{e}^{(n)} -1) \to -1$ as $n \to \infty$, so the second term converges to a non-positive value. Thus, we obtain that $\dot{R}$ tends to a non-positive value as $n \to \infty$. This contradicts the assumption $\dot{R} > 0$, so Part~\ref{Thm_conv_part}) of Theorem~\ref{Thm_random_JPE} follows. \subsection{Proof of Theorem~\ref{Thm_ortho_accs}} \label{Sec_proof_ortho_access} To prove Part~\ref{Thm_ortho_accs_achv}), we present a scheme that is similar to the one given in~\cite{RaviKISIT19} for the non-random-access case. Specifically, each user is assigned $n/\ell_n$ channel uses out of which the first one is used for sending a pilot signal and the rest are used for sending the message. Out of the available energy $E_n$, $t E_n$ for some arbitrary $0 < t < 1$ is used for the pilot signal and $(1-t)E_n$ is used for sending the message. Let $\tilde{{\bf x}}(w) $ denote the codeword of length $\frac{n}{\ell_n}-1$ for sending message $w$. Then user $i$ sends in his assigned slot the codeword \begin{align*} {\bf x}(w_i) = \left(\sqrt{t E_n}, \tilde{{\bf x}}(w_i)\right). \end{align*} The receiver first detects from the pilot signal whether user $i$ is active or not. If the user is estimated as active, then it decodes the user's message. Let $P_i = \textnormal{Pr}\{\hat{W_i} \neq W_i\}$ denote the probability that user $i$'s message is decoded erroneously. Since all users follow the same coding scheme, the probability of correct decoding is given by \begin{align} P_c^{(n)} = \left(1-P_1\right)^{\ell_n}. 
\label{Eq_ortho_corrct} \end{align} By employing the transmission scheme that was used to prove~\cite[Theorem~2]{RaviKISIT19}, we get an upper bound on the probability of error $P_1$ as follows. \begin{lemma} \label{Lem_detect_uppr} For $n\geq n_0$ and sufficiently large $n_0$, the probability of error in decoding user 1's message can be upper-bounded as: \begin{align*} P_1 & \leq \frac{2}{n^2}. \end{align*} \end{lemma} \begin{proof} \if 0 1 See~\cite{RaviKISIT20}. \else See Appendix~\ref{Sec_appnd_ortho}. \fi \end{proof} From Lemma~\ref{Lem_detect_uppr} and~\eqref{Eq_ortho_corrct}, \begin{align} P_c^{(n)} & \geq \left(1-\frac{2}{n^{2}}\right)^{\ell_n} \notag \\ & \geq \left(1-\frac{2}{n^{2}}\right)^{\frac{n}{\log n}} \notag \end{align} which tends to one as $n \to \infty$. Thus, Part~\ref{Thm_ortho_accs_achv}) of Theorem~\ref{Thm_ortho_accs} follows. To prove Part~\ref{Thm_ortho_accs_conv}), we first note that we consider symmetric codes, i.e., the pair $(M_n,E_n)$ is the same for all users. However, each user may be assigned different numbers of channel uses. Let $n_i$ denote the number of channel uses assigned to user $i$. For an orthogonal-access scheme, if $\ell_n = \omega(n/ \log n)$, then there exists at least one user, say $i=1$, such that $n_i = o(\log n)$. Using that $H(W_1 | W_1 \neq 0 ) = \log M_n$, it follows from Fano's inequality that \begin{align} \log M_n & \leq 1+P_1 \log M_n + \frac{n_1}{2 }\log\left(1+\frac{ 2 E_n}{n_1N_0}\right). \nonumber \end{align} This implies that the rate per unit-energy $\dot{R}=(\log M_n)/E_n$ for user 1 is upper-bounded by \begin{align} \dot{R} \leq \frac{ \frac{1}{E_n} + \frac{n_1}{2 E_n}\log\left(1+\frac{ 2E_n}{n_1N_0}\right)}{1 -P_1}.\label{Eq_R_avg} \end{align} Since $\ell_n = \omega(n/ \log n)$, it follows from Lemma~\ref{Lem_energy_bound} that $P_{e}^{(n)}$ goes to zero only if \begin{align} E_n = \Omega(\log n). \label{Eq_ortho_enrg_lowr} \end{align} In contrast, \eqref{Eq_R_avg} implies that $\dot{R}>0$ only if $E_n = O(n_1)$. Since $n_1 = o(\log n)$, this further implies that \begin{align} E_n = o(\log n). \label{Eq_ortho_enrg_uppr} \end{align} No sequence $\{E_n\}$ can satisfy both~\eqref{Eq_ortho_enrg_uppr} and~\eqref{Eq_ortho_enrg_lowr} simultaneously. We thus obtain that if $\ell_n =\omega(n/ \log n)$, then the capacity per unit-energy is zero. This is Part~\ref{Thm_ortho_accs_conv}) of Theorem~\ref{Thm_ortho_accs}. \section{Capacity per Unit-Energy under the Average Probability of Error} \label{sec_average} Many works in the literature on many-access channels, including \cite{Polyanskiy17, OrdentlichP17,VemNCC17,ZadikPT19,KowshikPISIT19,KowshikP19,KowshiKAFPISIT19,KowshiKAFP19}, consider a \emph{per-user probability of error} \begin{equation} \label{eq:Pe_A} P_{e,A}^{(n)} \triangleq \frac{1}{\ell_n} \sum_{i=1}^{\ell_n} \textnormal{Pr}\{\hat{W_i} \neq W_i\} \end{equation} rather than the joint error probability \eqref{Eq_prob_err}. In the following, we briefly discuss the behavior of the capacity per unit-energy when the error probability is $P_{e,A}^{(n)}$, which in this paper we shall refer to as \emph{average probability of error (APE)}. To this end, we define an $(n,\{M_n^{(\cdot)}\},\{E_n^{(\cdot)}\}, \epsilon)$ code under APE with the same encoding and decoding functions defined in Section~\ref{Sec_model}, but with the probability of error \eqref{Eq_prob_err} replaced with \eqref{eq:Pe_A}. We denote the capacity per unit-energy under APE by $\dot{C}^A$. Under APE, if $\alpha_n \to 0$ as $n \to \infty$, then $\text{Pr} \{W_i =0\} \to 1$ for all $i=1,\ldots,\ell_n$.
Consequently, a code with $M_n =2$ and $E_n =0$ for all $n$ and a decoding function that always declares that all users are inactive achieves an APE that vanishes as $n \to \infty$. This implies that $\dot{C}^A = \infty$ for vanishing $\alpha_n$. In the following, we avoid this trivial case and assume that $\alpha_n$ is bounded away from zero. For a Gaussian MnAC with APE and $\alpha_n=1$ (non-random-access case), we showed in~\cite{RaviKIZS20} that if the number of users grows sublinearly in $n$, then each user can achieve the single-user capacity per unit-energy, and if the order of growth is linear or superlinear, then the capacity per unit-energy is zero. Perhaps not surprisingly, the same result holds in the random-access case since, when $\alpha_n$ is bounded away from zero, $k_n$ is of the same order as $\ell_n$. \begin{theorem} \label{Thm_capac_PUPE} If $k_n = \Theta( \ell_n)$ and $\alpha_n \to \alpha \in (0,1]$, then $\dot{C}^A$ has the following behavior: \begin{enumerate} \item If $\ell_n = o(n)$, then $\dot{C}^A = \frac{\log e}{N_0}$. Moreover, the capacity per unit-energy can be achieved by an orthogonal-access scheme where each user uses a codebook with orthogonal codewords. \label{Thm__avg_achv_part} \item If $\ell_n = \Omega(n)$, then $\dot{C}^A =0$. \label{Thm__avg_conv_part} \end{enumerate} \end{theorem} \begin{proof} To prove Part~\ref{Thm__avg_achv_part}), we first argue that $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$. Indeed, we have \begin{align*} P_{e,A}^{(n)} & \geq \min_{i} \text{Pr}\{\hat{W}_i\neq W_i\} \\ & \geq \alpha_n \text{Pr}(\hat{W_i} \neq W_i | W_i \neq 0) \; \text{ for some } i. \end{align*} Since $\alpha_n\to\alpha > 0$, this implies that $P_{e,A}^{(n)} $ vanishes only if $ \text{Pr}(\hat{W_i} \neq W_i | W_i \neq 0)$ vanishes. We next note that \mbox{$\text{Pr}(\hat{W_i} \neq W_i | W_i \neq 0)$} is lower-bounded by the error probability of the Gaussian single-user channel. By following the arguments in the proof of \cite[Theorem~2, Part~1)]{RaviKIZS20}, we obtain that $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$, which also implies that $\dot{C}^A \leq \frac{\log e}{N_0}$. We next show that any rate per unit-energy $\dot{R} < \frac{\log e}{N_0}$ is achievable by an orthogonal-access scheme where each user uses an orthogonal codebook of blocklength $n/\ell_n$. Out of these $n/\ell_n$ channel uses, the first one is used for sending a pilot signal to convey whether the user is active or not, and the remaining channel uses are used to send the message. Specifically, to transmit message $w_i$, user $i$ sends in his assigned slot the codeword ${\bf x}(w_i) = (x_1(w_i), \ldots, x_{n/\ell_n }(w_i))$, which is given by \begin{align*} x_{k}(w_i) = \begin{cases} \sqrt{t E_n}, & \text{ if } k=1 \\ \sqrt{(1-t) E_n}, & \text{ if } k=w_i+1\\ 0, & \text{ otherwise}. \end{cases} \end{align*} \if 0 1 From the pilot signal, the receiver first detects whether the user is active or not. As shown in the proof of Lemma~\ref{Lem_detect_uppr}, the detection error vanishes as $n \to \infty$. Using the upper bound on the decoding-error probability for an orthogonal code with $M$ codewords and rate per unit-energy $\dot{R}$ given in~\cite[Lemma~3]{RaviK19}, we can then show that $P_i, i=1,\ldots, \ell_n$ vanishes as $n$ tends to infinity. This implies that also $P_{e,A}^{(n)} $ vanishes as $n \to \infty$. More details can be found in~\cite{RaviKISIT20}. \else From the pilot signal, the receiver first detects whether the user is active or not.
As shown in the proof of Lemma~\ref{Lem_detect_uppr}, the detection error vanishes as $n \to \infty$. Furthermore, the probability of error in decoding for an orthogonal code with $M$ codewords and rate per unit-energy $\dot{R}$ for the AWGN channel is upper-bounded by~\cite[Lemma~3]{RaviK19}: \begin{align} P_e \leq \begin{cases} \exp\left\{- \frac{\ln M}{\dot{R}}\left(\frac{\log e}{2 N_0} - \dot{R} \right) \right\}, \text{ if } 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0}\\ \exp\left\{- \frac{\ln M}{\dot{R}} \left(\sqrt{\frac{\log e}{N_0}} - \sqrt{ \dot{R}}\right)^2\right\}, \text{ if } \frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}. \end{cases} \label{Eq_ortho_prob_uppr} \end{align} It follows from~\eqref{Eq_ortho_prob_uppr} that if $\dot{R} < \frac{\log e}{N_0}$ and $M \to \infty$ as $n \to \infty$, then $P_e$ tends to zero as $n \to \infty$. Since $\ell_n =o(n)$, it follows that $M = n/\ell_n -1$ tends to $\infty$, as $n \to \infty$. Thus, for any $\dot{R} < \frac{\log e}{N_0}$, the probability of error in decoding vanishes. We thus obtain that $P_i$, $i=1,\ldots, \ell_n$, vanishes as $n$ tends to infinity. This implies that also $P_{e,A}^{(n)} $ vanishes as $n \to \infty$. \fi \if 0 1 The proof of Part~\ref{Thm__avg_conv_part}) follows from Fano's inequality and is similar to that of~\cite[Theorem~2, Part~2)]{RaviKIZS20}. Details can be found in~\cite{RaviKISIT20}. \else Now we prove Part~\ref{Thm__avg_conv_part}). Fano's inequality yields that $H(W_i|\hat{W}_i) \leq 1+ P_i\log M_n$. Since $H(W_i) = H_2(\alpha_n) + \alpha_n \log M_n$, we have \begin{equation*} H_2(\alpha_n) + \alpha_n \log M_n \leq 1+ P_i\log M_n+ I(W_i; \hat{W}_i) \end{equation*} for $i=1,\ldots, \ell_n$. Averaging over all $i$'s then gives \begin{align} H_2(\alpha_n) + \alpha_n \log M_n & \leq 1+ \frac{1}{\ell_n} \sum_{i=1}^{\ell_n} P_i\log M_n+ \frac{1}{\ell_n} I({\bf W}; {\bf \hat{W}}) \nonumber\\ & \leq 1+P_{e,A}^{(n)}\log M_n+ \frac{1}{\ell_n} I({\bf W}; {\bf Y}) \nonumber\\ & \leq 1 + P_{e,A}^{(n)} \log M_n+\frac{n}{2\ell_n} \log \left(1+\frac{2 k_nE_n}{nN_0}\right). \label{Eq_avg_prob_uppr} \end{align} Here, the first inequality follows because the messages $W_i, i=1, \ldots, \ell_n$ are independent and because conditioning reduces entropy, the second inequality follows from the definition of $P_{e,A}^{(n)}$ and the data processing inequality, and the third inequality follows by upper-bounding $I({\bf W};{\bf Y})$ by $\frac{n}{2} \log \bigl(1+\frac{2 k_nE_n}{nN_0}\bigr)$ \cite[Lemma~1]{ChenCG17}. Dividing both sides of \eqref{Eq_avg_prob_uppr} by $E_n$, and rearranging terms, yields an upper-bound on the rate per unit-energy $\dot{R}^A$ as \begin{equation} \label{eq:Part2_Th1_end} \dot{R}^{A}\leq \frac{ \frac{1 - H_2(\alpha_n)}{E_n} + \frac{n}{2 \ell_nE_n}\log(1+\frac{ 2k_nE_n}{nN_0})}{\alpha_n -P_{e,A}^{(n)}}. \end{equation} As noted before, $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$. It follows that $\frac{1 - H_2(\alpha_n)}{E_n}$ vanishes as $n \to \infty$. Furthermore, together with the assumptions $\ell_n=\Omega(n)$ and $\alpha_n \to \alpha>0$, $E_n\to\infty$ yields that $k_nE_n/n=\alpha_n\ell_nE_n/n$ tends to infinity as $n\to\infty$. This in turn implies that \begin{equation*} \frac{n}{2\ell_n E_n} \log\left(1+\frac{2 k_n E_n}{n N_0}\right)=\frac{n \alpha_n}{2 k_n E_n}\log\left(1+\frac{2k_n E_n}{n N_0}\right) \end{equation*} vanishes as $n\to \infty $.
It thus follows from~\eqref{eq:Part2_Th1_end} that $\dot{R}^A$ vanishes as $n\to\infty$, thereby proving Part~\ref{Thm__avg_conv_part}) of Theorem~\ref{Thm_capac_PUPE}. \end{proof} \if 0 0 \appendices \input{appendix.tex} \fi
\section{Introduction} The motivation of this work comes from the hedging problem in the presence of basis risk. When a derivative product is based on a non traded or illiquid underlying, the specification of a hedging strategy becomes problematic. In practice, a frequent methodology consists in building a portfolio based on an additional (traded and liquid) asset which is correlated with the original one. The use of a non perfectly correlated asset induces a residual risk, often called \textbf{basis risk}, which makes the market incomplete. A common example is the hedging of a basket (or index-based) option using only a subset of the assets composing the contract. Commodity markets also present many situations where basis risk plays an essential role, since many goods (such as kerosene) do not have liquid futures markets. For instance, kerosene consumers such as airline companies, who want to hedge their exposure to the fuel, use alternative futures contracts, written for example on crude oil or heating oil. The latter two commodities are strongly correlated with kerosene and their corresponding futures markets are liquid. Weather derivatives constitute another example of contracts written on a non-traded underlying, since they are based on temperature; natural gas or electricity are in general used to hedge these contracts. In this work, we consider a maturity $T > 0$, a pair of processes $(X,S)$ and a contingent claim of the type $h:=g(X_T)$ or even $h:=g(X_T,S_T)$. $X$ is a non traded or illiquid, but observable asset and $S$ is a traded asset, correlated with $X$. In order to hedge this derivative, practitioners generally use the \textit{proxy} asset $S$ as a hedging instrument, but since the two assets are not perfectly correlated, a basis risk exists. Because of the incompleteness of this market, one should define a risk aversion criterion. One possibility is to use a utility-function-based approach to define the hedging strategy, see for example \cite{Davis2006}, \cite{Henderson2002}, \cite{Monoyios2004}, \cite{Monoyios2007}, \cite{CeciGirardi1, CeciGirardi2}, \cite{Ankirchner2010NonTradable}. We also mention \cite{Ankirchner2013BasisRiskLiq}, who consider the case when an investor has two possibilities: either hedge with an illiquid instrument, which implies liquidity costs, or hedge using a liquid correlated asset, which entails basis risk. Another approach is based on a quadratic hedging error criterion: it follows the idea of the seminal work \cite{FS1991}, which introduces the theoretical basis of quadratic hedging in incomplete markets. In particular, they show the close relation between the quadratic hedging problem and a special semimartingale decomposition, known as the F\"ollmer-Schweizer (F-S) decomposition. The reader can consult \cite{FS1994, Schweizer2001} for basic information on the F-S decomposition, which provides the so-called {\it local risk minimizing hedging strategy} and is a significant tool for solving the {\it mean variance hedging problem} in an incomplete market. \cite{Hulley2008} applied this general framework to the quadratic hedging under basis risk in a simple log-normal model. They consider for instance the two-dimensional Black-Scholes model for the non traded (but observable) $X$ and the hedging asset $S$, described by \begin{eqnarray*} d X_t &=& \mu_X X_t dt + \sigma_X X_t dW^X_t, \\ d S_t &=& \mu_S S_t dt + \sigma_S S_t dW^S_t, \end{eqnarray*} where $(W^X,W^S)$ is a standard correlated two-dimensional Brownian motion.
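Writing $W^X = \rho\, W^S + \sqrt{1-\rho^2}\, W^{\perp}$, where $\rho$ denotes the correlation coefficient and $W^{\perp}$ is a Brownian motion independent of $W^S$, the dynamics of $X$ contain the term $\sqrt{1-\rho^2}\,\sigma_X X_t\, dW^{\perp}_t$, which cannot be replicated by trading in $S$; this residual component is precisely the source of the basis risk mentioned above.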
In this setting, the authors of \cite{Hulley2008} derive the F-S decomposition of a European payoff $h = g(X_T)$, i.e. \begin{equation} \label{introFS} g(X_T) = h_0 + \int_0^T Z^h_u dS_u + L^h_T, \end{equation} where $L^h$ is a martingale which is strongly orthogonal to the martingale part of the hedging asset process $S$. Using the Feynman-Kac theorem, they relate the decomposition components $h_0$ and $Z^h$ to a PDE terminal-value problem. This yields, as a byproduct, the price and hedging portfolio of the European option $h$. These quantities can be expressed in closed form in the case of call and put options. Extensions of those results to the case of stochastic correlation between the two assets $X$ and $S$ have been obtained by \cite{Ankirchner2012StochCorrel}. Coming back to the general case, the F-S decomposition of $h$ with respect to the $\cF_t$-semimartingale $S$ can be seen as a special case of the well-known backward stochastic differential equations (BSDEs). We look for a triplet of processes $(Y,Z,O)$ solving an equation of the form \begin{equation} \label{BSDEI} Y_t = h + \int_t^T \hat{f}(\w, u, Y_{u-}, Z_u)d V^S_u - \int_t^T Z_u dM^S_u - (O_T-O_t), \end{equation} where $M^S$ (resp. $V^S$) is the local martingale (resp. the bounded variation process) appearing in the semimartingale decomposition of $S$, $O$ is a martingale strongly orthogonal to $M^S$, and $ \hat f(\omega,s, y, z) = - z$. BSDEs were first studied in the Brownian framework by \cite{Pardoux1990adapted}, after an early paper by \cite{Bismut73}. \cite{Pardoux1990adapted} showed existence and uniqueness of the solutions when the coefficient $ \hat f$ is globally Lipschitz with respect to $(y, z)$ and $h$ is square integrable. It was followed by a long series of contributions; see for example \cite{ElKarouiSurveyBSDE} for a survey on Brownian-based BSDEs and applications to finance. For instance, the Lipschitz condition remains essential in $z$, while only a monotonicity condition is required for $y$. Many other generalizations were considered. We also draw attention to the recent monograph by \cite{pardoux2014stochastic}. When the driving martingale in the BSDE is a Brownian motion, $h = g(S_T)$, and $S$ is a Markov diffusion, a solution of a BSDE constitutes a probabilistic representation of a semilinear parabolic PDE. In particular, if $u$ is a solution of the mentioned PDE, then, roughly speaking, setting $Y_t = u(t,S_t)$, $Z_t = \partial_s u (t,S_t)$, $O \equiv 0$, the triplet $(Y,Z,O)$ is a solution to \eqref{BSDEI}. That PDE is a deterministic problem naturally related to the BSDE. When $M^S$ is a general c\`adl\`ag martingale, the link between a BSDE \eqref{BSDEI} and a deterministic problem is less obvious. As far as backward stochastic differential equations driven by a general martingale are concerned, the first paper seems to be \cite{Buckdahn93}. Later, several authors have contributed to that subject, for instance \cite{Briand2002robustness} and \cite{ElKarouiHuang1997}. More recently, \cite[Theorem~3.1]{CarFerSan08} gives sufficient conditions for existence and uniqueness for BSDEs of the form \eqref{BSDEI}. BSDEs with partial information driven by c\`adl\`ag martingales were investigated by \cite{CCR1, CCR2}. In this paper we consider a forward-backward SDE, derived from \eqref{BSDEI}, where the forward process solves a sort of martingale problem (in the strong probability sense, i.e. where the underlying filtration is fixed) instead of the usual stochastic differential equation appearing in the Brownian case.
More precisely, we suppose the existence of an adapted continuous bounded variation process $A$ and of an operator $\a: \Da \subset \cC([0,T] \times \R^2) \rightarrow \cL$, where $\cL$ is a suitable space of functions $[0,T] \times \R^2 \rightarrow \C$ (see \eqref{E23}), such that $(X,S)$ verifies \begin{equation*} \forall y \in \Da,\quad \left(y(t,X_t,S_t) - \int_0^t \a(y)(u,X_{u-},S_{u-}) d A_u\right)_{0\leq t\leq T} \quad \text{is an $\cF_t$-local martingale.} \end{equation*} With $\a$ we associate the operator $\widetilde \a$ defined by $$ \widetilde{\a}(y):= \a(\widetilde{y}) - y \a(id) - id \a(y), \quad id(t,x,s) = s, \widetilde y = y \times id.$$ In the forward-backward BSDE we are interested in, the driver $\hat f$ verifies \begin{equation} \label{EFormTildef} \a (id)(t, X_{t-}(\omega), S_{t-}(\omega)) \hat f(\omega, t, y, z) = f(t, X_{t-}(\omega), S_{t-}(\omega), y, z), \ (t,y,z) \in [0,T] \times \C^2, \omega \in \Omega, \end{equation} for some $f: [0,T] \times \R^2 \times \C^2 \rightarrow \C$. The main idea is to set up a deterministic problem which is naturally associated with the forward-backward SDE \eqref{BSDEI}. The deterministic problem consists in looking for a pair of functions $(y,z)$ which solves \begin{align} \label{E134} \begin{split} \a(y)(t, x, s) &= - f(t, x, s, y(t, x, s), z(t, x, s)) \\ \widetilde{\a}(y)(t, x, s)&=z(t, x, s) \widetilde{\a}(id)(t, x, s), \end{split} \end{align} for all $t\in [0,T]$ and $(x,s)\in\R^2$, with the terminal condition $y(T,.,.) = g(.,.)$. Any solution to the deterministic problem \eqref{E134} will provide a solution $(Y,Z,O)$ to the corresponding BSDE, setting $ Y_t = y(t, X_t, S_t), Z_t = z(t, X_{t-}, S_{t-}).$ For illustration, let us consider the elementary case when $S$ is a diffusion process fulfilling $ dS_t = \sigma_S(t,S_t) dW_t + b_S (t,S_t) dt,$ and $X \equiv 0$. Then $A_t \equiv t$, $\langle M^S\rangle = \int_0^\cdot (\sigma_S)^2(u,S_u)du$, $V^S = \int_0^\cdot b_S(u,S_u) du =\int_0^\cdot \a(id)(u,S_u) du; $ $ \a$ is the parabolic generator of $S$ and $\Da = C^{1,2}([0,T] \times \R^2; \C)$. In that case \eqref{E134} becomes \begin{align} \label{E134Diff} \begin{split} \partial_t y(t, x, s) + (b_S \partial_s y + \frac{1}{2} \sigma_S^2 \partial_{ss} y) (t,x,s) &= - f(t, x, s, y(t, x, s), z(t, x, s)) \\ z &= \partial_s y. \end{split} \end{align} In that situation $\widetilde \a$ is closely related to the classical derivation operator. When $S$ models the price of a traded asset and $f(t,x,s,y,z) = - b_S (t,s) z$, the resolution of \eqref{E134Diff}, emerging from the BSDE \eqref{BSDEI} with \eqref{EFormTildef}, allows one to solve the usual (complete market Black-Scholes type) hedging problem with underlying $S$.
In particular we revisit the case when $(X,S)$ is a diffusion process, the particular Black-Scholes case of which was treated by \cite{Hulley2008}, discussing some analysis related to a corresponding PDE. \item To furnish a quasi-explicit solution when the pair of processes $(X,S)$ is an exponential of additive processes, which constitutes a generalization of the results of \cite{gor2013variance} and \cite{Hubalek2006}, established in the absence of basis risk. This yields a characterization of the hedging strategy in terms of the Fourier-Laplace transform and the moment generating function. \end{enumerate} The paper is organized as follows. In Section \ref{S2}, we state the strong inhomogeneous martingale problem, and we give several examples, such as Markov flows and exponentials of additive processes. In Section \ref{S3_BSDE}, we state the general form of a BSDE driven by a martingale and we associate a deterministic problem with it. We show in particular that a solution for this deterministic problem yields a solution for the BSDE. In Section \ref{S4_BSDE}, we apply the previous methodology to the F-S decomposition problem under basis risk. In the case of exponentials of additive processes, we obtain a quasi-explicit expression for the mentioned F-S decomposition. \section{Strong inhomogeneous martingale problem} \label{S2} \setcounter{equation}{0} \subsection{General considerations} In this paper $T$ will be a strictly positive number. We consider a complete probability space $(\Omega, \cF, \P)$ with a filtration $(\cF_t)_{t\in[0,T]}$, fulfilling the usual conditions. By default, all the processes will be indexed by $[0,T]$. Let $(X,S)$ be a couple of $\cF_t$-adapted processes. We will often mention concepts such as {\it martingale, semimartingale, adapted, predictable} without mentioning the underlying filtration $(\cF_t)_{t\in[0,T]}$. Given a bounded variation function $\phi:[0,T] \rightarrow \R$, we will denote by $t \mapsto \Vert \phi \Vert_t$ the associated total variation function. We introduce a notion of martingale type problem related to $(X,S)$, which is a generalization of a stochastic differential equation. We emphasize that the present notion looks similar to the classical notion of \cite{StroockVaradhanBook} but here the notion is probabilistically strong and relies on a fixed filtered probability space. In the context of Stroock and Varadhan, however, a solution is a probability measure and the underlying process is the canonical process on some canonical space. Here a filtered probability space is given at the beginning. A similar notion was introduced in \cite[Definition~5.1]{trutnau}. A priori, we will not suppose that our strong martingale problem is well-posed (existence and uniqueness). \begin{definition} \label{DSMP} Let $\cO$ be an open set of $\R^2$. Let $(A_t)$ be an $\cF_t$-adapted bounded variation continuous process, such that a.s. $ dA_t \ll d\rho_t,$ for some bounded variation function $\rho$, and $\a$ a map \begin{equation} \label{D26} \a : \Da\subset \mathcal{C}([0,T]\times \cO, \C) \longrightarrow \mathcal{L}, \end{equation} where \begin{align} \label{E23} \begin{split} \mathcal{L} = \lbrace f :& [0,T]\times \cO \rightarrow \C, \text{such that for every compact $K$ of $\cO$ } \\ & \norm{f}_K(t):=\sup_{(x,s)\in K}|f(t,x,s)| <\infty \quad d\rho_t\; a.e.\rbrace.
\end{split} \end{align} We say that a couple of c\`adl\`ag processes $(X,S)$ is a solution of the {\bf strong martingale problem} related to $(\Da, \a, A)$, if for any $g\in \Da$, $(g(t,X_t,S_t))$ is a semimartingale with continuous bounded variation component such that \begin{equation} \label{aFinite} \int_{0}^{t} |\a(g)(u,X_{u-}, S_{u-})| d\norm{A}_u <\infty \; a.s. \end{equation} and \begin{equation} \label{SMgDecomp} t \longmapsto g(t,X_t, S_t) - \int_{0}^{t} \a(g)(u,X_{u-}, S_{u-}) dA_u \end{equation} is an $\cF_t$-local martingale. \end{definition} We start by introducing some notation. \begin{notation} \label{NSpecial}\ \begin{enumerate} [label=\arabic*)] \item $id: (t,x,s)\longmapsto s$. \item For any $y \in \mathcal{C}([0,T]\times \cO)$, we denote by $\widetilde{y}$ the function $\widetilde{y}:=y\times id$, i.e. \begin{equation} \label{EidS} (y\times id)(t,x,s) = s y(t,x,s). \end{equation} \item Suppose that $id\in \Da$. For $y \in \Da$ such that $\widetilde{y}\in\Da$, we set \begin{equation} \label{EAtilde} \widetilde{\a}(y):= \a(\widetilde{y}) - y \a(id) - id \a(y). \end{equation} \end{enumerate} \end{notation} As we have mentioned in the introduction, the map $\widetilde{\a}$ will play the role of a generalized derivative. We first state a preliminary lemma. \begin{lemma}\label{LC} Let $(X,S)$ be a solution of the strong martingale problem related to $(\Da, \a, A)$ (as in Definition \ref{DSMP}). Let $y$ be a function such that $y, id, y\times id \in \Da$. We set $Y = y(\cdot,X_\cdot,S_\cdot)$ and let $M^Y$ be its martingale component given in \eqref{SMgDecomp}. Then $ \langle M^Y, M^S \rangle = \int_0^\cdot \widetilde\a(y)(u,X_{u-},S_{u-})dA_u. $ \end{lemma} \begin{proof} In order to compute the angle bracket $\langle M^Y, M^S \rangle$, we start by expressing the corresponding square bracket. First, notice that, since $y, id \in \Da$ and $A$ is a continuous process, the bounded variation parts of the semimartingales $S$ and $y(\cdot,X_\cdot, S_\cdot)$ are continuous. We have, for $t\in[0,T]$, $$ [M^Y,M^S]_t = [Y,S]_t= (SY)_{t} - \int_0^t Y_{u-}dS_u - \int_0^t S_{u-} dY_u, $$ where the first equality is justified by the fact that the square bracket of any process with a continuous bounded variation process vanishes. Using moreover the fact that $y\times id \in \Da$, the process \begin{multline*} [M^Y,M^S] - \int_0^\cdot \a(y\times id)(u,X_{u-},S_{u-})dA_u + \int_0^\cdot y(u,X_{u-},S_{u-}) \a(id)(u,X_{u-},S_{u-})dA_u \\ + \int_0^\cdot S_{u-} \a(y)(u,X_{u-},S_{u-})dA_u \end{multline*} is an $\cF_t$-local martingale. Consequently, $[M^Y,M^S]$ is a special $\cF_t$-semimartingale, because the integrals above with respect to $A$ are predictable. Finally, since $\langle M^Y, M^S\rangle - [M^Y,M^S]$ is a local martingale, the uniqueness of the canonical decomposition of the special semimartingale $[M^Y,M^S]$ yields the desired result. \end{proof} In the sequel, we will make the following assumption. \begin{assumption} \label{E0} $(\Da, \a, A)$ verifies the following conditions. i) $id \in \Da$. ii) $ (t,x,s) \mapsto s^2 \in \Da$. \end{assumption} \begin{corollary} \label{RSpecial} Let $(X,S)$ be a solution of the strong martingale problem introduced in Definition \ref{DSMP}. Then, under Assumption \ref{E0}, $S$ is a special semimartingale whose decomposition $M^S + V^S$ is given below. \begin{enumerate}[label=\roman*)] \item \label{RSpecial_1} $V^S_t= \int_0^t \a (id)(u, X_{u-},S_{u-}) dA_u$. 
\item \label{RSpecial_2} $\langle M^S \rangle_t = \int_0^t {\widetilde \a} (id)(u, X_{u-},S_{u-}) dA_u$. \end{enumerate} \end{corollary} \begin{proof} \ref{RSpecial_1} is obvious since $id \in \Da$, and \ref{RSpecial_2} is a consequence of Lemma \ref{LC} and the fact that $(t,x,s) \mapsto s^2$ belongs to $\Da$. \end{proof} In many situations, the operator $\a$ is related to the classical infinitesimal generator, when it exists. We will make this relation explicit in the example of Markov processes below. \subsection{The case of Markov semigroup} \label{SMarkov} In this section we only consider a single process $S$ instead of a couple $(X,S)$. Without loss of generality $\cO$ will be chosen to be $\R$. Here $(\cF_t)$ will indicate the canonical filtration associated with $S$. For this reason, it is convenient to restate Definition \ref{DSMP} in the following simplified version. \begin{definition} \label{DSMPMod} We say that $S$ is a solution of the {\bf strong martingale problem} related to $(\Da, \a, A)$ with $A_t \equiv t$, if there is a map \begin{equation} \label{pbMg} \a : \Da\subset \mathcal{C}([0,T]\times \R) \longrightarrow \cL, \end{equation} where \begin{align} \label{ELX} \begin{split} \mathcal{L} = \lbrace f:&[0,T]\times \R \rightarrow \R, \text{such that for every compact $K$ of $\R$ } \\ & \norm{f}_K(t):=\sup_{s\in K}|f(t,s)| <\infty \quad dt \; a.e.\rbrace, \end{split} \end{align} such that for any $g\in \Da$, $g(t,S_t)$ is a (special) $\cF_t$-semimartingale with continuous bounded variation component verifying \begin{equation} \label{aFiniteMarkov} \int_{0}^{T} |\a(g)(u,S_{u-})| du <\infty \; a.s. \end{equation} and \begin{equation} \label{SMgDecompMarkov} t \longmapsto g(t,S_t) - \int_{0}^{t} \a(g)(u,S_{u-}) du \end{equation} is an $\cF_t$-local martingale. \end{definition} Let $(X^{u,x}_t)_{t\geq u, x\in \R}$ be a time-homogeneous Markovian flow. In particular, if $S=X^{0,x}$ and $f$ is a bounded Borel function, then \begin{equation} \label{MarkovDef} \E{f(S_t) | \cF_u} = \E{f(X^{0,y}_{t-u})}\vert_{y = S_u}, \end{equation} where $0 \leq u \leq t \leq T$. We suppose moreover that $X_t^{0,x}$ is square integrable for any $0 \le t \le T$ and $x \in \R$. We denote by $E$ the linear space of functions \begin{equation} \label{setE} E = \Big\{ f\in \cC \; \text{such that }\; \widetilde{f}:= s\mapsto \frac{f(s)}{1+s^2}\; \text{is uniformly continuous and bounded}\;\Big\}, \end{equation} equipped with the norm $$\norm{f}_E:= \sup_s{\frac{|f(s)|}{1+s^2}}.$$ The set $E$ can easily be shown to be a Banach space equipped with the norm $\norm{.}_E$. Moreover, $E$ is a suitable space for Markov processes which are square integrable. In particular, \eqref{MarkovDef} remains valid if $f \in E$. From now on we consider the family of linear operators $(P_t, t \ge 0)$ defined on the space $E$ by \begin{equation} \label{EPT} P_t f(x) = \E{f(X_t^{0,x})}, \text{for } t\in[0,T], x\in \R, \quad \forall f \in E. \end{equation} We now formulate a fundamental assumption. \begin{assumption}\ \label{A_Markov} \begin{enumerate}[label=\roman*)] \item \label{EStable} $P_t E \subset E$ for all $t\in[0,T]$. \item \label{PBounded} The linear operator $P_t$ is bounded, for all $t\in[0,T]$. \item \label{strongCont} $(P_t)$ is {\bf strongly continuous}, i.e. $\displaystyle\lim_{t \rightarrow 0} P_t f = f$ in the $E$ topology. 
\end{enumerate} \end{assumption} Using the Markov flow property \eqref{MarkovDef}, it is easy to see that the family of continuous operators $(P_t)$ defined above has the semigroup property. In particular, under Assumption \ref{A_Markov}, the family $(P_t)$ is a strongly continuous semigroup on $E$. Assumption \ref{A_Markov} is fulfilled in many common cases, as mentioned in Proposition \ref{PropMarkov} and Remarks \ref{R25} and \ref{RStrongCont}. The proposition below concerns the validity of items \ref{EStable} and \ref{PBounded}. \begin{proposition} \label{PropMarkov} Let $t\in[0,T]$. Suppose that $x\mapsto X^{0,x}_t$ is differentiable in $L^2(\Omega)$ such that \begin{equation} \label{derFlow} \sup_{x\in\R}\E{\vert\partial_x X^{0,x}_t\vert^2} < \infty. \end{equation} Then $P_t f \in E$ for all $f\in E$ and $P_t$ is a bounded operator. \end{proposition} The proof of this proposition is reported in Appendix \ref{PropMarkovProof}. \begin{remark} \label{R25} Condition \eqref{derFlow} of Proposition \ref{PropMarkov} is fulfilled in the following two cases. \begin{enumerate}[label=\arabic*)] \item \label{R25i} If $\Lambda$ is a L\'evy process, the Markov flow $X^{0,x} = x + \Lambda$ verifies $\partial_x X^{0,x} = 1$. \item \label{R25ii} If $(X^{0,x}_t)$ is a diffusion process verifying $$ X^{0,x}_t = x + \int_0^t b(X^{0,x}_u)du + \int_0^t \sigma(X^{0,x}_u) dW_u, $$ where $b$ and $\sigma$ are $C^1_b$ functions; in that case $\partial_x X^{0,x}$ satisfies a linear SDE with bounded coefficients and \eqref{derFlow} follows by standard moment estimates. \end{enumerate} \end{remark} \begin{remark} \label{RStrongCont} Item \ref{strongCont} of Assumption \ref{A_Markov} is verified in the case of square integrable L\'evy processes, cf. Proposition \ref{LevyPtStongCont} in Appendix \ref{appA_BSDE}. \end{remark} For the rest of this subsection we work under Assumption \ref{A_Markov}. Item \ref{strongCont} of Assumption \ref{A_Markov} allows us to introduce the definition of the generator of $(P_t)$ as follows. \begin{definition}\label{DGen} The {\bf generator} $L$ of $(P_t)$ in $E$ is defined on the domain $D(L)$, which is the subspace of $E$ defined by \begin{equation} \label{MarkovGen} D(L) = \Big\{ f\in E \; \text{such that }\; \lim_{t \to 0} \dfrac{P_t f-f}{t} \; \text{exists in} \;E\;\Big\}. \end{equation} We denote by $Lf$ the limit above. We refer to \cite[Chapter 4]{JacobBookVol1} for more details. \end{definition} \begin{remark} \label{R100} If $f\in E$ is such that there is $g\in E$ with $$ (P_t f)(x) - f(x) - \int_0^t P_ug(x) du = 0, \; \forall t\geq 0, \;x\in \R,$$ then $f\in D(L)$ and $g = Lf$. The previous integral is well-defined as an $E$-valued Bochner integral. Indeed, since $(P_t)$ is strongly continuous, by \cite[Lemma 4.1.7]{JacobBookVol1} we have \begin{equation} \label{LStable} \Vert P_t \Vert \le M_w e^{wt}, \end{equation} for some real $w$ and a related constant $M_w$. Here $\Vert \cdot \Vert$ denotes the operator norm. \end{remark} A useful result which allows us to deal with time-dependent functions is given below. \begin{lemma} \label{lemmaMarkov} Let $f: [0,T] \to D(L) \subset E$. We suppose the following properties to be verified. \begin{enumerate}[label=\roman*)] \item \label{lemMA1} $f$ is continuous as a $D(L)$-valued function, where $D(L)$ is equipped with the graph norm. \item \label{lemMA2} $f:[0,T] \to E$ is of class $C^1$. \end{enumerate} Then the following $E$-valued equality holds: \begin{equation} \label{lemmaSGroup} P_t f(t,.) = f(0,.) + \int_{0}^{t} P_u(Lf(u,.)) du + \int_{0}^{t} P_u(\frac{\partial f}{\partial u}(u,.)) du , \; \forall t \in [0,T]. 
\end{equation} \end{lemma} \begin{remark} \label{RStable} We observe that the two integrals above can be considered as $E$-valued Bochner integrals because, by hypothesis, $t \mapsto Lf(t,\cdot)$ and $ t \mapsto \frac{\partial f}{\partial t}(t,.)$ are continuous with values in $E$, and so we can apply again \eqref{LStable} in Remark \ref{R100}. \end{remark} \begin{proof} It will be enough to show that \begin{equation} \label{ElemmaSGroup} \frac{d}{dt} P_t f(t,.) = P_t(Lf(t,.)) + P_t \left(\frac{\partial f}{\partial t}(t,.) \right), \; \forall t \in [0,T]. \end{equation} In fact, even if Banach space valued, a differentiable function at each point is also absolutely continuous. Since the right-hand side of \eqref{ElemmaSGroup} is continuous it is enough to show that the right-derivative of $t \mapsto P_tf(t,\cdot)$ coincides with the right-hand side of \eqref{ElemmaSGroup}. Let $h>0$. We evaluate $ P_{t+h}f(t+h,.) - P_tf(t,.) = I_1(t,h) + I_2(t,h), $ where $ I_1(t,h) = P_{t+h}f(t+h,.) - P_tf(t+h,.), I_2(t,h) = P_{t}f(t+h,.) - P_tf(t,.).$ Now by \cite[Lemma 4.1.14]{JacobBookVol1}, we get $ I_1(t,h) := P_{t+h}f(t+h,.) - P_tf(t+h,.) = \int_{t}^{t+h} P_u Lf(t+h, .)du.$ We divide by $h$ and we get \begin{eqnarray*} \norm{ \frac{1}{h} \int_{t}^{t+h} (P_u Lf(t+h, .) - P_u Lf(t, .))du}_E &\leq& \frac{1}{h} \int_{t}^{t+h} du \norm{P_u\left\{ Lf(t+h, .)-Lf(t, .)\right\}}_E \\ &\leq& \norm{Lf(t+h, .)-Lf(t, .)}_E \frac{1}{h} \int_{t}^{t+h} \norm{P_u}du \\ &\leq& \norm{f(t+h, .)-f(t, .)}_{D(L)} \frac{1}{h} \int_{t}^{t+h} \norm{P_u}du, \end{eqnarray*}where $\|.\|_{D(L)}$ is the graph norm of $L$. This converges to zero (notice that $\norm{P_u}$ is bounded by \eqref{LStable}), and we get that $ \frac{1}{h}I_1(t,h) \xrightarrow{h\to 0} P_t(Lf(t,.)). $ We estimate now $I_2(t,h)$. $$ \norm{\dfrac{P_tf(t+h,.) - P_tf(t,.)}{h} -P_t(\frac{\partial f}{\partial t}(t,.)) }_{E} \leq \norm{P_t} \norm{\dfrac{f(t+h,.) - f(t,.)}{h} -\frac{\partial f}{\partial t}(t,.) }_{E}. $$ This goes to zero as $h$ goes to zero, by Assumption \ref{lemMA2}. This concludes the proof of Lemma \ref{lemmaMarkov}. \end{proof} We can now discuss the fact that a process $S = X^{0,x}$, where $(X^{u,x}_t)_{t\geq u, x\in \R}$ is a Markovian flow (as precised at the beginning of Section \ref{SMarkov}) is a solution to our (time inhomogeneous) strong martingale problem introduced in Definition \ref{DSMPMod}. \begin{theorem} \label{thMarkov} We denote $$ \Da = \lbrace g:[0,T] \to D(L) \; \text{such that assumptions \ref{lemMA1} and \ref{lemMA2} of Lemma \ref{lemmaMarkov} are fulfilled} \rbrace $$ and, for $g \in \Da$, $\;\a(g)(t,s) = \frac{\partial g}{\partial t}(t,s) + Lg(t,\cdot)(s), \; \forall t \in [0,T], s \in \R.$ Then $S$ is a solution of the strong martingale problem introduced in Definition \ref{DSMPMod}. \end{theorem} \begin{remark} \label{DALMarkov} Let $g \in \Da$. Since for each $t \in [0,T]$, by assumptions \ref{lemMA1} and \ref{lemMA2} of Lemma \ref{lemmaMarkov}, $\a(g)(t,\cdot) \in E$, then, obviously $\a(g) \in \cL$. Moreover, the same assumptions imply that $t\mapsto \frac{\partial g}{\partial t} (t,\cdot)$ and $t\mapsto Lg(t,\cdot)$ are continuous on $[0,T]$ and hence are bounded, i.e. $ \sup_{t\in[0,T]} \norm{\frac{\partial g}{\partial t} (t,\cdot)}_E < \infty,\quad \sup_{t\in[0,T]} \norm{Lg(t,\cdot)}_E < \infty. $ This yields in particular that Condition \eqref{aFiniteMarkov} is fulfilled. \end{remark} \begin{proof}[Proof of Theorem \ref{thMarkov}]\ It remains to show the martingale property \eqref{SMgDecompMarkov}. 
We fix $0\le u<t\leq T$ and a bounded $\cF_u$-measurable random variable $G$. It will be sufficient to show that \begin{equation} \label{proofMg} \E{A(u,t)} = 0, \end{equation} where $ A(u,t) = G \left( g(t,S_t) - g(u,S_u) - \int_{u}^{t} \partial_r g(r,S_r)dr - \int_{u}^{t} L g(r,.)(S_r)dr \right)$. By taking the conditional expectation of $A(u,t)$ with respect to $\cF_u$, using \eqref{MarkovDef} and Fubini's theorem, we get $ \E{A(u,t) | \cF_u} = G \phi(S_u), $ where $ \phi(x) = \left( P_{t-u}g(t,.) (x) - g(u,x) - \int_{u}^{t} (P_{r-u}\partial_r g(r,.))(x)dr - \int_{u}^{t} (P_{r-u}L g(r,.)) (x)dr \right), \; \forall x\in\R. $ We define $f:[0,T-u] \times \R \rightarrow \R$ by $f(\tau, \cdot) = g(\tau+u, \cdot)$. $f$ fulfills the assumptions of Lemma \ref{lemmaMarkov} with $T$ being replaced by $T-u$. By the change of variable $v=r-u$, setting $\tau = t-u$, the equality above becomes $ \phi(x) = \left( P_{\tau}f(\tau,.) (x) - f(0,x) - \int_{0}^{\tau} (P_{v}\partial_v f(v,.))(x)dv - \int_{0}^{\tau} (P_{v}L f(v,.)) (x)dv \right). $ Now by Lemma \ref{lemmaMarkov} we get that $\phi(x)=0, \; \forall x\in\R$. Consequently $ \E{A(u,t) | \cF_u} = 0$ and \eqref{proofMg} is fulfilled. \end{proof} \begin{remark} \label{R23Markov} We introduce the following subspace $E^2_0$ of $C^2$. \begin{equation} \label{SetE2_0} E^2_0=\lbrace f \in C^2 \; \text{such that $f''$ vanishes at infinity} \rbrace. \end{equation} Notice that only the second derivative is supposed to vanish at infinity. $E^2_0$ is included in $E$. Indeed, if $f\in E^2_0$, then the Taylor expansion $f(x) = f(0) + x f^\prime(0) + x^2 \int_0^1 (1-\alpha) f''(x\alpha) d\alpha$ implies that $\widetilde{f}$ is bounded. On the other hand, by straightforward calculus we see that the first derivative $\frac{d \widetilde{f}}{dx}$ is bounded. This implies that $\widetilde{f}$ is uniformly continuous. In several examples it is easy to identify $E^2_0$ as a significant subspace of $D(L)$; see for instance the example of L\'evy processes described below. \end{remark} \subsubsection{A significant particular case: L\'evy processes} \label{SSLevy} As anticipated above, an insightful example of a Markov flow is the case of L\'evy processes. We specify below the corresponding infinitesimal generator. Let $(\Lambda_t)$ be a square integrable L\'evy process with characteristic triplet $(A, \nu, \gamma)$, such that $\Lambda_0=0$. We refer to, e.g., \cite[Chapter 3]{ContTankovBook} for more details. We suppose that $(\Lambda_t)$ is of pure jump type, i.e. $A=0$ and $\gamma=0$. Since $\Lambda$ is square integrable, we have (cf. \cite[Proposition 3.13]{ContTankovBook}) \begin{equation} \label{LevySqInt} \int_{\R} |s|^2 \nu(ds) < \infty \end{equation} and \begin{equation} \label{LevySqInt12} c_1 := \frac{\E{\Lambda_t}}{t} = \int_{|s|>1} s \nu(ds) < \infty, \quad c_2 := \frac{\mathrm{Var}[\Lambda_t]}{t} = \int_{\R} |s|^2 \nu(ds) < \infty. \end{equation} Clearly the corresponding Markov flow is given by $X_t^{0,x} = x + \Lambda_t, t \ge 0, x\in \R$. The classical semigroup theory for L\'evy processes is developed, for instance, in Chapter 6, Section 31 of \cite{SatoBook}. There one defines the semigroup $P$ on the set $C_0$ of continuous functions vanishing at infinity, equipped with the sup-norm $\norm{u}_\infty = \sup_s{|u(s)|}$. By \cite[Theorem 31.5]{SatoBook}, the semigroup $P$ is strongly continuous on $C_0$, with $\norm{P_t}=1$, and its generator $L_0$ is given by \begin{equation} \label{L0} L_0f(s) = \int \left(f(s+y)-f(s)-yf^\prime(s)\1_{|y|<1}\right)\nu(dy). 
\end{equation} Moreover, the set $C^2_0$ of functions $f\in C^2$ such that $f$, $f^\prime$ and $f^{''}$ vanish at infinity is included in $D(L_0)$. We remark that the domain $D(L)$ includes the classical domain $D(L_0)$. In fact, we have $ \Vert g \Vert_E \le \Vert g \Vert_{C_0}, \ \forall g \in C_0.$ Consequently, if $f\in D(L_0) \subset C_0$, then for $t>0$ $$\norm{\dfrac{P_t f-f}{t} - L_0 f}_E \leq \norm{\dfrac{P_t f-f}{t} - L_0 f }_{C_0}.$$ So $f\in D(L)$ and $Lf = L_0 f$. Assumption \ref{A_Markov} is verified because of Proposition \ref{PropMarkov}, item \ref{R25i} of Remark \ref{R25} and Remark \ref{RStrongCont}. The theorem below shows that the space $E^2_0$, defined in Remark \ref{R23Markov}, is a subset of $D(L)$. \begin{theorem} \label{thLevyGen} Let $L$ be the infinitesimal generator of the semigroup $(P_t)$. Then $E^2_0\subset D(L)$ and \begin{equation} \label{L_Levy} Lf(s) = \int \left(f(s+y)-f(s)-yf^\prime(s)\1_{|y|<1}\right)\nu(dy), \ f \in E^2_0. \end{equation} \end{theorem} A proof of this result, using arguments in \cite{Figueroa2008small}, is developed in Appendix \ref{appA_BSDE}. In conclusion, $C^2$ functions whose second derivative vanishes at infinity belong to $D(L)$. For instance, $id:s \mapsto s \in D(L)$. On the other hand, the function $f: s \mapsto s^2$ also belongs to $D(L)$. In fact, for every $s\in \R, t> 0$ we have $$ \dfrac{P_t f(s) - f(s)}{t} = \dfrac{\E{(s+\Lambda_t)^2} - s^2}{t} = \dfrac{2s c_1t + c_2t +c_1^2 t^2}{t} = 2s c_1 + c_2 + c_1^2 t. $$ Obviously, this converges, as $t \to 0$, to the function $s \mapsto 2s c_1 + c_2$ according to the $E$-norm. Finally it follows that $L(s^2) = 2s c_1 + c_2$. \begin{corollary} \label{CGenerator} We have the inclusion $$ E^2_0 \oplus \{s \mapsto s^2 \} \subset D(L). $$ \end{corollary} \subsection{Diffusion processes} \label{E22} Here we will suppose again $\cO = \R\times E$, where $E=\R$ or $]0,\infty[$. A function $f: [0,T] \times \cO \rightarrow \R$ will be said to be {\bf globally Lipschitz} if it is Lipschitz with respect to the second and third variables, uniformly with respect to the first. We consider here the case of a diffusion process $(X,S)$ whose dynamics is described as follows: \begin{align} \label{expleDiff} \begin{split} d X_t &= b_X(t,X_t,S_t) dt + \Sum_{i=1}^{2}\sigma_{X,i}(t,X_t,S_t) d W^i_t\\ d S_t &= b_S(t,X_t,S_t) dt + \Sum_{i=1}^{2}\sigma_{S,i}(t,X_t,S_t) d W^i_t, \end{split} \end{align} where $W = (W^1,W^2)$ is a standard two-dimensional Brownian motion with canonical filtration $(\cF_t)$, and $b_X$, $b_S$, $\sigma_{X,i}$, $\sigma_{S,i}: [0,T] \times \R^2 \rightarrow \R$, for $i=1,2$, are continuous functions which are globally Lipschitz. We also suppose $(X_0,S_0)$ to have all moments and that $S$ takes values in $E$. For instance a geometric Brownian motion takes values in $E=]0,\infty[$ if it starts from a positive point. \begin{remark}\label{Rallmoments} Let $ p \ge 1$. It is well known that there is a constant $C(p)$, depending only on $p$, $T$ and on the coefficients, such that $$ \E{\sup_{t \le T} (\vert X_t \vert^ p + \vert S_t \vert^ p)} \le C(p) \left( 1 + \E{ \vert X_0 \vert ^ p + \vert S_0 \vert ^ p} \right).$$ \end{remark} By It\^o formula, for $f \in \mathcal{C}^{1,2}([0,T[\times \cO) $, we have \begin{eqnarray*} d f(t,X_t,S_t) &=& \partial_t f(t,X_t,S_t) dt + \partial_s f(t,X_t,S_t) dS_t + \partial_x f(t,X_t,S_t) dX_t \\ &+& \frac{1}{2} \left\lbrace \partial_{ss} f(t,X_t,S_t) d[S]_t + \partial_{xx} f(t,X_t,S_t) d[X]_t + 2 \partial_{sx} f(t,X_t,S_t) d[S, X]_t \right\rbrace. 
\end{eqnarray*} We denote $|\sigma_S|^2 = \Sum_{i=1}^{2}\sigma_{S,i}^2$, $\;|\sigma_X|^2 = \Sum_{i=1}^{2}\sigma_{X,i}^2$ and $\langle \sigma_S, \sigma_X\rangle = \Sum_{i=1}^{2}\sigma_{S,i}\sigma_{X,i}$. Hence, the operator $\a$ can be defined as $$ \a(f) = \partial_t f + b_S \partial_s f + b_X \partial_x f + \frac{1}{2} \left\lbrace |\sigma_S|^2\partial_{ss} f + |\sigma_X|^2 \partial_{xx} f + 2 \langle \sigma_S, \sigma_X\rangle \partial_{sx} f \right\rbrace, $$ associated with $A_t \equiv t$ and the domain $\Da = \mathcal{C}^{1,2}([0,T[\times \cO) \cap \mathcal{C}^{1}([0,T]\times \cO)$. \\ Notice that Assumption \ref{E0} is verified since $id$ and $id \times id$ belong to $\Da$. Moreover, a straightforward calculation gives $$ \widetilde{\a}(f) = |\sigma_S|^2 \partial_s f(t,x,s) + \langle \sigma_S, \sigma_X\rangle \partial_x f(t,x,s). $$ In particular, $\;\widetilde{\a}(id) = |\sigma_S|^2.$ \begin{remark} \label{RE22} By It\^o formula, for $0\leq u \leq T$, we obviously have \begin{align*} \begin{split} f(u,X_u,S_u) - f(0,X_0,S_0) - \int_0^u \a(f)(r,X_r,S_r) dr &= \int_0^u \partial_x f(r,X_r,S_r) \left(\sigma_{X,1}(r,X_r,S_r) d W^1_r+\sigma_{X,2}(r,X_r,S_r) d W^2_r\right) \\ &+ \int_0^u \partial_s f(r,X_r,S_r) \left(\sigma_{S,1}(r,X_r,S_r) d W^1_r+\sigma_{S,2}(r,X_r,S_r) d W^2_r\right). \end{split} \end{align*} \end{remark} \subsection{Variant of diffusion processes} Let $(W_t)$ be an $\cF_t$-standard Brownian motion and let $S$ be a solution of the SDE \begin{equation} \label{exp3} d S_t = \sigma(t,S_t) dW_t + b_1(t,S_t)da_t + b_2(t,S_t)dt, \end{equation} where $b_1, b_2, \sigma: [0,T] \times \R \rightarrow \R$ are continuous functions which are globally Lipschitz, and $a: [0,T] \rightarrow \R$ is an increasing function such that $da$ is singular with respect to Lebesgue measure. We set $A_t = a_t + t$. The equation \eqref{exp3} can be written as $$ d S_t = \sigma(t,S_t) dW_t + \left( b_1(t,S_t)\frac{da_t}{dA_t} + b_2(t,S_t)\frac{dt}{dA_t} \right) dA_t. $$ A solution $S$ of \eqref{exp3} verifies the strong martingale problem related to $(\Da, \a, A)$ with $A_t = a_t + t$, where $\Da=\mathcal{C}^{1,2}([0,T]\times \R)$ and, for $f\in \Da$, $$ \a(f)(t,s) = \left( \partial_t f(t,s) \frac{dt}{dA_t} + \partial_s f(t,s) \widetilde{b}(t,s) + \frac{1}{2}\partial_{ss} f(t,s) \widetilde{\sigma}^2(t,s) \right), $$ where $\widetilde{b}(t,s) = b_1(t,s) \frac{da_t}{dA_t} + b_2(t,s) \frac{dt}{dA_t}$ and $\widetilde{\sigma}^2(t,s) = \sigma^2(t,s)\frac{dt}{dA_t}$. Indeed, by It\^o formula, the process $\; t \mapsto f(t,S_t) - \int_0^t \a (f)(r,S_r) dA_r $ is a local martingale. \subsection{Exponential of additive processes} \label{expleExpAdd} A c\`adl\`ag process $(Z^1,Z^2)$ is said to be an {\bf additive process} if $(Z^1,Z^2)_0=0$, $(Z^1,Z^2)$ is continuous in probability and it has independent increments, i.e. $(Z^1_t-Z^1_u,Z^2_t-Z^2_u)$ is independent of $\cF_u$ for $0 \leq u \leq t \leq T$, where $(\cF_t)$ is the canonical filtration associated with $(Z^1,Z^2)$. In this section we restrict ourselves to the case of exponentials of additive processes which are semimartingales (shortly, semimartingale additive processes), and we specify a corresponding martingale problem $(\a, \Da, A)$ for this couple of processes. This will be based on Fourier-Laplace transform techniques. The couple of processes $(X, S)$ is defined by $$ X = \exp(Z^1), \quad S = \exp(Z^2), $$ where $(Z^1, Z^2)$ is a semimartingale additive process taking values in $\R^2$. 
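As a minimal illustration of this setting (a sketch which is not needed in the sequel, and whose parameters $\mu_1$, $\mu_2$, $\sigma_1$, $\sigma_2$, $\bar\rho$ are introduced only for this example), one may take for $(Z^1,Z^2)$ a correlated Brownian motion with drift, $$ Z^1_t = \mu_1 t + \sigma_1 W^1_t, \qquad Z^2_t = \mu_2 t + \sigma_2\left(\bar\rho\, W^1_t + \sqrt{1-\bar\rho^2}\, W^2_t\right), $$ where $(W^1,W^2)$ is a standard two-dimensional Brownian motion, $\sigma_1,\sigma_2>0$ and $\bar\rho\in[-1,1]$. Then $X = \exp(Z^1)$ and $S = \exp(Z^2)$ are two correlated geometric Brownian motions, which corresponds to the Black-Scholes type basis risk model treated in \cite{Hulley2008}. A direct Gaussian computation gives, for every $(z_1,z_2)\in\C^2$, $$ \E{\exp(z_1 Z^1_t + z_2 Z^2_t)} = \exp\left( t \Big( z_1\mu_1 + z_2\mu_2 + \tfrac{1}{2}\big(z_1\sigma_1 + z_2\sigma_2\bar\rho\big)^2 + \tfrac{1}{2} z_2^2\sigma_2^2(1-\bar\rho^2) \Big)\right), $$ so that the generating function $\kappa$ introduced below is defined on the whole of $\C^2$ and is affine in $t$; in particular the reference measure $\rho^S_{dt}$ of \eqref{ExpAddRhoS} reduces in this example to $\sigma_2^2\, dt$.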
\\ We denote by $D$ the set $$ D :=\lbrace z=(z_1, z_2)\in\C^2 |\quad \E{ |X_T^{{\rm Re}(z_1)} S_T^{{\rm Re}(z_2)}|}<\infty \rbrace. $$ We convene that $\C^2 = \R^2 + i \R^2$, associating the couple $(z_1,z_2)$ with $({\rm Re} z_1, {\rm Re} z_2) + i ({\rm Im} z_1, {\rm Im} z_2)$. Clearly we have $D = (D \cap \R^2) + i \R^2$. We also introduce the notation $$ D/2 := \lbrace z \in\C^2| \quad 2z\in D\} \subset D . $$ \begin{remark} \label{R23} By Cauchy-Schwarz inequality, $z,y \in D/2$ implies that $z+y\in D$. \end{remark} We denote by $\kappa: D \rightarrow \C $, the generating function of $(Z^1, Z^2)$, see for instance \cite[Definition 2.1]{gor2013variance}. In particular $\kappa$ verifies $ \exp(\kappa_t(z_1, z_2)) = \E{ \exp(z_1 Z_t^1 + z_2 Z_t^2)} = \E{ X_t^{z_1} S_t^{z_2}}. $ We will adopt similar notations and assumptions as in \cite{gor2013variance}, which treated the problem of variance optimal hedging for a one-dimensional exponential of additive process. We introduce a function $\rho$, defined, for each $t\in[0,T]$, as follows: \begin{eqnarray} \label{ExpAddRhoS} \rho_t(z_1,z_2,y_1,y_2) &:=& \kappa_t(z_1+y_1, z_2+y_2) - \kappa_t(z_1, z_2) - \kappa_t(y_1, y_2), \quad \text{ for } (z_1,z_2),(y_1,y_2)\in D/2, \nonumber\\ \rho_t(z_1,z_2) &:=& \rho_t(z_1,z_2,\bar{z_1},\bar{z_2}), \quad \text{ for } (z_1,z_2)\in D/2, \\ \rho^S_t &:=& \rho_t(0,1) = \kappa_t(0,2) - 2\kappa_t(0,1), \text{ if } (0,1) \in D/2. \nonumber \end{eqnarray} We remark that for $(z_1,z_2)\in D/2$, $t\mapsto \rho_t(z_1,z_2)$ is a real function. These functions appear naturally in the expression of the angle brackets of $(M^X,M^S)$ where $M^X$ (resp. $M^S$) is the martingale part of $X$ (resp. $S$). From now on, in this section, the assumption below will be in force. \begin{assumption}\ \label{A_PAI} \begin{enumerate}[label=\arabic*)] \item $\rho^S$ is strictly increasing. \item $(0,2) \in D$. \end{enumerate} \end{assumption} Notice that item 2) is equivalent to the existence of the second order moment of $S_T$. Moreover, 2) implies, by Cauchy-Schwarz, that $D/2 + (0,1) \subset D$. We recall that previous assumption implies that $Z^2$ has no deterministic increments, see \cite[Lemma 3.9]{gor2013variance}. Similarly as in \cite[Propositions 3.4 and 3.15]{gor2013variance}, one can prove the following result. \begin{proposition} \label{propAdd}\ \begin{enumerate}[label=\arabic*)] \item \label{propAddPt1} For every $(z_1,z_2)\in D$, $\left(X_t^{z_1}S_t^{z_2}e^{-\kappa_t(z_1,z_2)}\right)$ is a martingale. \item \label{propAddPt2} $t\mapsto \kappa_t(z_1,z_2)$ is a bounded variation continuous function, for every $(z_1,z_2)\in D$. In particular, $t \mapsto \rho_t(z_1,z_2)$ is also a bounded variation continuous function, for every $(z_1,z_2)\in D/2$. \item \label{propAddPt3} Let $I$ be a compact real set included in $D$. Then $$\sup_{(x,y)\in I}\sup_{t\leq T} \E{X_t^x S_t^y}= \sup_{(x,y)\in I}\sup_{t\leq T} e^{\kappa_t(x,y)}< \infty.$$ \item \label{propAddPt4} $\forall (z_1,z_2)\in D/2$, $t\mapsto \rho_{t}(z_1,z_2)$ is non-decreasing. \item \label{propAddPt5} $ \kappa_{dt}(z_1,z_2) \ll \rho^S_{dt} \text{ , for every } z\in D$. \item \label{propAddPt6} $ \rho_{dt}(z_1,z_2,y_1,y_2) \ll \rho^S_{dt} \text{ , for every } (z_1,z_2),(y_1,y_2)\in D/2$. \end{enumerate} \end{proposition} \begin{remark} \label{R13} Notice that, for any $(z_1,z_2) \in D$, $X^{z_1} S^{z_2}$ is a special semimartingale. 
Indeed, by Proposition \ref{propAdd}, $X_t^{z_1} S_t^{z_2} = N_t e^{\kappa_t(z_1,z_2)}$ where $\kappa(z_1,z_2)$ is a bounded variation continuous function and $N$ is a martingale. Hence, integration by parts implies that $X^{z_1} S^{z_2}$ is a special semimartingale whose decomposition is given by \begin{equation} \label{ER13} X^{z_1} S^{z_2}= M(z_1,z_2) + V(z_1,z_2), \end{equation} where $M_t(z_1,z_2)= X_0^{z_1} S_0^{z_2} + \int_0^t e^{\kappa_u(z_1,z_2)} dN_u$ and $V_t(z_1,z_2)= \int_0^t X^{z_1}_{u-} S^{z_2}_{u-} \kappa_{du}(z_1,z_2)$, $\forall t\in[0,T]$. \end{remark} The proposition below shows that the local martingale part of the decomposition above is a square integrable martingale if $(z_1,z_2) \in D/2$ and gives its angle bracket in terms of the generating function. \begin{proposition} \label{ExpAddSqMg} Let $z=(z_1,z_2), y=(y_1,y_2) \in D/2$. Then $X^{z_1} S^{z_2}$ is a special semimartingale, whose decomposition $X^{z_1} S^{z_2}= M(z_1,z_2) + V(z_1,z_2)$ satisfies, for $t\in[0,T]$, \begin{eqnarray*} V(z_1,z_2)_t &=& \int_0^t X^{z_1}_{u-} S^{z_2}_{u-} \kappa_{du}(z_1, z_2) \\ \langle M(z_1,z_2), M(y_1,y_2) \rangle_t &=& \int_0^t X^{z_1+y_1}_{u-} S^{z_2+y_2}_{u-} \rho_{du}(z_1, z_2,y_1, y_2). \end{eqnarray*} In particular, \begin{equation*} \langle M(z_1,z_2)\rangle_t := \langle M(z_1,z_2), \overline{M(z_1,z_2)} \rangle_t = \int_0^t X^{2{\rm Re}(z_1)}_{u-} S^{2{\rm Re}(z_2)}_{u-} \rho_{du}(z_1,z_2). \end{equation*} Moreover, $M(z_1,z_2)$ is a square integrable martingale. \end{proposition} \begin{proof} This can be done adapting the techniques of \cite[Lemma 3.2]{Hubalek2006} and its generalization to one-dimensional additive processes, i.e. \cite[Proposition 3.17 and Lemma 13.19]{gor2013variance}. \end{proof} The measure $d \rho^S$, called \textbf{reference variance measure} in \cite{gor2013variance}, plays a central role in the expression of the canonical decomposition of special semimartingales depending on the couple $(X,S)$. \begin{corollary} \label{CCS} The semimartingale decomposition of $S$ is given by $S=M^S+V^S $, where, for $t\in[0,T]$ $$ V^S_t = \int_0^t S_{u-} \kappa_{du}(0,1) \quad \langle M^S \rangle_t = \int_0^t S_{u-}^2 \rho^S_{du}. $$ \end{corollary} \begin{proof} It follows from Proposition \ref{ExpAddSqMg} setting $z_1 = 0, z_2=1$. \end{proof} Now we state some useful estimates. \begin{lemma} \label{expAddMoments} Let $(a,b)\in D \cap \R^2$. Then $\E{\sup_{t\leq T} X_t^a S_t^b} < \infty.$ \end{lemma} \begin{proof} Let $(a,b)\in D \cap \R^2$, then $(a/2,b/2)\in D/2$. By Proposition \ref{ExpAddSqMg}, we have $$X_t^{a/2} S_t^{b/2}= M_t(a/2,b/2) + \int_0^t X^{a/2}_{u-} S^{b/2}_{u-} \kappa_{du}(a/2,b/2), \; \forall t \in [0,T]$$ and $M(a/2,b/2)$ is a square integrable martingale. Hence, by Doob inequality, we have $$ \E{\sup_{t\leq T} \left| M_t(a/2,b/2)\right|^2 } \leq 4 \E{\left| M_T(a/2,b/2)\right|^2 }<\infty. $$ On the other hand, using Cauchy-Schwarz inequality and Fubini theorem, we obtain \begin{eqnarray*} \E{\sup_{t\leq T} \left| \int_0^t X^{a/2}_{u-} S^{b/2}_{u-} \kappa_{du}(a/2, b/2)\right|^2} &\leq & \norm{\kappa(a/2, b/2)}_{T} \int_0^T \E{X^{a}_{u-} S^{b}_{u-}} \norm{\kappa(a/2, b/2)}_{du} \\ &= & \norm{\kappa(a/2, b/2)}_{T} \int_0^T e^{\kappa_u(a, b)} \norm{\kappa(a/2, b/2)}_{du}\\ &\leq & e^{\norm{\kappa(a, b)}_T} \norm{\kappa(a/2, b/2)}_{T}^2 < \infty. 
\end{eqnarray*} Finally $ \E{\sup_{t\leq T} X_t^{a} S_t^{b} } = \E{\sup_{t\leq T} \left| X_t^{a/2} S_t^{b/2} \right|^2}< \infty.$ \end{proof} In the general case, when $(z_1,z_2) \in D$, the local martingale part of the special semimartingale $X^{z_1} S^{z_2}$ is a true (not necessarily square integrable) martingale. \begin{proposition} \label{ExpAddMg} Let $(z_1,z_2) \in D$. Then $M(z_1,z_2)$, the local martingale part of $X^{z_1} S^{z_2}$, is a true martingale such that $ \E{\sup_{t\leq T} \left| M_t(z_1,z_2)\right| } < \infty .$ \end{proposition} \begin{proof} Let $(z_1,z_2) \in D$. Adopting the notations of \eqref{ER13}, we recall that, by Proposition \ref{ExpAddSqMg}, $\forall t\in [0,T]$, $M_t(z_1,z_2)= X_t^{z_1} S_t^{z_2} - \int_0^t X^{z_1}_{u-} S^{z_2}_{u-} \kappa_{du}(z_1,z_2)$. For this local martingale we can write \begin{eqnarray*} \E{\sup_{t\leq T} \left| M_t(z_1,z_2)\right| } &\leq & \E{\sup_{t\leq T} \left| X_t^{z_1} S_t^{z_2}\right|} + \E{\int_0^T \left|X^{z_1}_{t-} S^{z_2}_{t-}\right| \norm{\kappa(z_1,z_2)}_{dt}} \\ &\leq & \E{\sup_{t\leq T} \left| X_t^{{\rm Re}(z_1)} S_t^{{\rm Re}(z_2)}\right|} \left( 1 + \norm{\kappa(z_1,z_2)}_{T} \right). \end{eqnarray*} Since $({\rm Re}(z_1),{\rm Re} (z_2))$ belongs to $D$, by Lemma \ref{expAddMoments}, the right-hand side is finite. Consequently the local martingale $M(z_1,z_2)$ is indeed a true martingale. \end{proof} The goal of this section is to show that $(X,S)$ is a solution of a strong martingale problem, with related triplet $(\Da, \a, A)$, which will be specified below. For this purpose, we determine the semimartingale decomposition of $(f(t,X_t,S_t))$ for functions $f:[0,T]\times \cO \rightarrow \C$, where $\cO=]0,\infty[^2$, of the form \begin{equation} \label{DFLaplace} f(t,x,s) := \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2} \lambda(t,z_1,z_2),\; \forall t\in[0,T], x,s> 0, \end{equation} where $\Pi$ is a finite complex Borel measure on $\C^2$ and $\lambda: [0,T]\times \C^2\longrightarrow \C$. The family of those functions will include the set $\Da$ defined later. Proposition \ref{ExpAddMg} and item \ref{propAddPt5} of Proposition \ref{propAdd} say that, for $z=(z_1,z_2) \in D$, $$ t \mapsto X_t^{z_1} S_t^{z_2} - \int_0^t X^{z_1}_{u-} S^{z_2}_{u-} \kappa_{du}(z_1, z_2) = X_t^{z_1} S_t^{z_2} - \int_0^t X^{z_1}_{u-} S^{z_2}_{u-} \frac{d\kappa_{u}(z_1, z_2)}{d \rho^S_u} \rho^S_{du}$$ is a martingale. This provides the semimartingale decomposition of the basic functions $(t,x,s) \mapsto x^{z_1} s^{z_2}$, for $(z_1,z_2) \in D$, applied to $(X,S)$. Those functions are expected to be elements of $\Da$, and one candidate for the bounded variation process $A$ is $\rho^S$. It remains to precisely define $\Da$ and the operator $\a$. A first step in this direction is to consider a Borel function $\lambda: [0,T]\times \C^2 \rightarrow \C$ such that, for any $(z_1,z_2)\in D$, $t\in[0,T] \mapsto \lambda(t, z_1, z_2)$ is absolutely continuous with respect to $\rho^S$. \begin{lemma} \label{ExpAddLemma} Let $\lambda: [0,T]\times \C^2 \rightarrow \C$ be such that, for any $(z_1,z_2)\in D$, $t\in[0,T] \mapsto \lambda(t, z_1, z_2)$ is absolutely continuous with respect to $\rho^S$. Then for any $(z_1,z_2)\in D$, \begin{equation} \label{MLambda} t \mapsto M^\lambda_t(z_1,z_2) := X_t^{z_1} S_t^{z_2}\lambda(t,z_1,z_2) - \int_0^t X_{u-}^{z_1} S_{u-}^{z_2} \left\lbrace \dfrac{d\lambda(u,z_1,z_2)}{d\rho^S_u} + \lambda(u,z_1,z_2)\dfrac{d\kappa_u(z_1,z_2)}{d\rho^S_u} \right\rbrace \rho^S_{du}, \end{equation} is a martingale. 
Moreover, if $(z_1,z_2)\in D/2$ then $M^\lambda(z_1,z_2)$ is a square integrable martingale and \begin{equation} \label{MLambda2Moment} \E{|M^\lambda_t(z_1,z_2)|^2} = \int_0^t e^{\kappa_u(2{\rm Re}(z_1), 2{\rm Re}(z_2))} |\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2). \end{equation} \end{lemma} \begin{proof} Let $(z_1,z_2) \in D$, $M(z_1,z_2)$ and $V(z_1,z_2)$ be the random fields introduced in Remark \ref{R13}. Since $\lambda(dt, z_1, z_2) \ll \rho^S_{dt}$, then $t\mapsto \lambda(t,z_1,z_2)$ is a bounded continuous function on $[0,T]$. By item \ref{propAddPt5} of Proposition \ref{propAdd} $M^\lambda(z_1,z_2)$ is well-defined. Integrating by parts and taking into account Remark \ref{R13} allows to show \begin{equation} \label{EMLambda} M^\lambda_t(z_1,z_2) = \lambda(0,z_1,z_2) M_0(z_1,z_2) + \int_0^t \lambda(u,z_1,z_2) dM_u(z_1,z_2),\; \forall t\in[0,T]. \end{equation} Obviously $M^\lambda(z_1,z_2)$ is a local martingale. In order to prove that it is a true martingale, we establish that $$ \E{\sup_{t\leq T} \left| M^\lambda_t(z_1,z_2) \right| } < \infty. $$ Indeed, by integration by parts in \eqref{EMLambda}, for $t\in [0,T]$ we have $$ M^\lambda_t(z_1,z_2) = \lambda(t,z_1,z_2) M_t(z_1,z_2) -\int_0^t M_{u-}(z_1,z_2)\lambda(du,z_1,z_2). $$ Hence, as in the proof of Lemma \ref{expAddMoments}, \begin{align} \label{EMartlambda} \begin{split} \E{\sup_{t\leq T} \left| M^\lambda_t(z_1,z_2) \right| } \leq & \E{\sup_{t\leq T} \left|\lambda(t,z_1,z_2) M_t(z_1,z_2)\right|} + \E{\int_0^T \left|M_{u-}(z_1,z_2)\right| \norm{\lambda(.,z_1,z_2)}_{dt}} \\ \leq & 2 \E{\sup_{t\leq T} \left| M_t(z_1,z_2)\right|} \norm{\lambda(.,z_1,z_2)}_T. \end{split} \end{align} Thanks to Proposition \ref{ExpAddMg}, the right-hand side of \eqref{EMartlambda} is finite and finally $ M^\lambda(z_1,z_2)$ is shown to be a martingale so that the first part of Lemma \ref{ExpAddLemma} is proved. Now, suppose that $(z_1,z_2) \in D/2$. By \eqref{EMLambda} and Proposition \ref{ExpAddSqMg}, we have \begin{align} \label{E224} \begin{split} \E{ \langle M^\lambda(z_1, z_2) \rangle_T } &= \E{\int_0^T |\lambda(u,z_1,z_2)|^2\langle M(z_1, z_2)\rangle_{du}} \\ &= \E{\int_0^T X_{u-}^{2 {\rm Re}(z_1)} S_{u-}^{2{\rm Re}(z_2)}|\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2)} \\ &= \int_0^T e^{\kappa_u(2{\rm Re}(z_1), 2{\rm Re}(z_2))} |\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2)\\ \leq & \sup_{u\leq T} e^{\kappa_u(2{\rm Re}(z_1), 2{\rm Re}(z_2))} \int_0^T |\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2) < \infty. \end{split} \end{align} The latter term is finite by point \ref{propAddPt3} of Proposition \ref{propAdd} and by the fact that $\lambda(.,z_1,z_2)$ is bounded on $[0,T]$. Consequently, $M^\lambda(z_1,z_2)$ is a square integrable martingale and since $|M^\lambda(z_1,z_2)|^2 - \langle M^\lambda(z_1, z_2) \rangle$ is a martingale, the estimate \eqref{E224} yields the desired identity \eqref{MLambda2Moment}. \end{proof} Now, let $\Pi$ be a finite Borel measure on $\C^2$ and let us formulate the following assumption on it. \begin{assumption} \label{A_Pi} We set $I_0:={\rm Re}(supp\; \Pi)$. \begin{enumerate} \item $I_0$ is bounded. \item $I_0 \subset D.$ \end{enumerate} \end{assumption} Notice that this assumption implies that $supp\;\Pi \subset D$. \begin{theorem} \label{T17} Suppose that Assumptions \ref{A_PAI} and \ref{A_Pi} are verified. 
Let $\lambda: [0,T]\times \C^2 \rightarrow \C$ be a function such that \begin{eqnarray} \forall(z_1,z_2)\in supp\;\Pi, \; \lambda(dt, z_1, z_2) &\ll& \rho^S_{dt}, \label{CondLambda0}\\ \forall t\in[0,T], \; \int_{\C^2}d|\Pi|(z_1,z_2) |\lambda(t,z_1,z_2)|^2 &<& \infty,\label{CondLambda1}\\ \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left| \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} + \lambda(t,z_1,z_2) \dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t}\right| &<& \infty. \label{CondLambda3} \end{eqnarray} Then the function $f$ of the form \eqref{DFLaplace} is continuous. Moreover \begin{equation} \label{EMartFubini} t \mapsto \widehat{M}^\lambda_t :=f(t,X_t,S_t) - \int_0^t \rho^S_{du} \int_{\C^2} d \Pi(z_1, z_2) X_{u-}^{z_1} S_{u-}^{z_2} \left\lbrace \dfrac{d\lambda(u,z_1,z_2)}{d\rho^S_u} + \lambda(u,z_1,z_2) \dfrac{d\kappa_u(z_1,z_2)}{d\rho^S_u} \right\rbrace, \end{equation} is a martingale. \end{theorem} \begin{remark} \label{R226} \ In \eqref{CondLambda3} and \eqref{EMartFubini}, part of the integrand with respect to $\Pi$ is only defined for $(z_1,z_2) \in D$. By convention the integrand will be set to zero for $(z_1,z_2) \notin D$. In the sequel we will adopt the same rule. \end{remark} \begin{proof} Let $\lambda: [0,T]\times \C^2 \rightarrow \C$ verifying the hypotheses of the theorem. The function $f$ is well-defined. Indeed, for $t\in[0,T], x,s> 0$, $$|f(t,x,s)| \leq \sup_{(a,b)\in I_0}x^{a} s^{b} \int_{\C^2} d |\Pi|(z_1, z_2) |\lambda(t,z_1,z_2)|, $$ which is finite because of Condition \eqref{CondLambda1} and Assumption \ref{A_Pi}, taking into account Cauchy-Schwarz inequality. Moreover, by Fubini theorem and using the definition of $f$ in \eqref{DFLaplace}, we get \begin{eqnarray} \label{EfFinite} \E{|f(t,X_t,S_t)|} & \leq & \int_{\C^2} d |\Pi|(z_1, z_2) \E{X_t^{{\rm Re}(z_1)}S_t^{{\rm Re}(z_2)}} |\lambda(t,z_1,z_2)| \nonumber\\ &\leq & \sup_{u\in[0,T], (a,b)\in I_0} \E{ X_{u}^{a} S_{u}^{b}} \int_{\C^2} d |\Pi|(z_1, z_2)|\lambda(t,z_1,z_2)|. \end{eqnarray} The right-hand side is finite by item \ref{propAddPt3} of Proposition \ref{propAdd} and Condition \eqref{CondLambda1}. We observe that $t \mapsto \lambda(t,z_1,z_2)$ is a continuous function since it is absolutely continuous with respect to $\rho^S$ for fixed $(z_1, z_2) \in \C^2$. On the other hand, Condition \eqref{CondLambda1} implies that the family $(\lambda(t,z_1,z_2), t \in [0,T]) $ is $\vert \Pi \vert$ -uniformly integrable. These properties, together with Lebesgue dominated convergence theorem imply that $f$ defined in \eqref{DFLaplace} is continuous with respect to all the variables. We show now that the process $t \mapsto \widehat{M}^\lambda_t$ is well-defined. This holds because \begin{eqnarray} \label{E2350} &&\E{\int_0^t \rho^S_{du} \int_{\C^2} d |\Pi|(z_1, z_2) |X_{u-}^{z_1} S_{u-}^{z_2}| \left| \dfrac{d\lambda(u,z_1,z_2)}{d\rho^S_u} + \lambda(u,z_1,z_2) \dfrac{d\kappa_u(z_1,z_2)}{d\rho^S_u} \right| } \\ &&\leq \displaystyle \sup_{u\in[0,T], (a,b)\in I_0} \E{ X_{u}^{a} S_{u}^{b}} \int_0^t \rho^S_{du}\int_{\C^2} d |\Pi|(z_1, z_2) \left| \frac{d\lambda(u,z_1,z_2)}{d\rho^S_u} + \lambda(u,z_1,z_2) \frac{d\kappa_u(z_1,z_2)}{d\rho^S_u} \right| \nonumber, \end{eqnarray} which is finite by point \ref{propAddPt3} of Proposition \ref{propAdd} and Condition \eqref{CondLambda3}. 
Inequality \eqref{E2350} allows to apply Fubini theorem to the integral term in \eqref{EMartFubini}, so that we get \begin{eqnarray} \label{E153} \widehat{M}^\lambda_t &=& \int_{\C^2} d \Pi(z_1, z_2) \left( X_{t}^{z_1} S_{t}^{z_2} \lambda(t,z_1,z_2) - \int_0^t X_{u-}^{z_1} S_{u-}^{z_2} \left\lbrace \dfrac{d\lambda(u,z_1,z_2)}{d\rho^S_u} + \lambda(u,z_1,z_2) \dfrac{d\kappa_u(z_1,z_2)}{d\rho^S_u} \right\rbrace \rho^S_{du} \right) \nonumber\\ &=& \int_{\C^2} d \Pi(z_1, z_2) M^\lambda_t(z_1,z_2), \end{eqnarray} where $M^\lambda(z_1,z_2)$ is defined in \eqref{MLambda} for any $(z_1,z_2)\in D$. We observe that \begin{equation} \label{E2351} \E{ \int_{\C^2} d |\Pi|(z_1, z_2) \left|M^\lambda_t(z_1,z_2)\right|} < \infty, \end{equation} taking into account \eqref{EfFinite} and \eqref{E2350}. It remains to show that $\widehat{M}^\lambda$ is a martingale. \\ Let $0\leq s \leq t \le T$ and a bounded, $\cF_s$-measurable random variable $G$. By Fubini theorem and Lemma \ref{ExpAddLemma} it follows \begin{eqnarray*} \E{\widehat{M}^\lambda_t G} &=& \int_{\C^2} d \Pi(z_1, z_2) \E{M^\lambda_t(z_1,z_2) G} = \int_{\C^2} d \Pi(z_1, z_2) \E{M^\lambda_s(z_1,z_2) G} \\ &=& \E{\int_{\C^2} d \Pi(z_1, z_2) M^\lambda_s(z_1,z_2) G} = \E{\widehat{M}^\lambda_s G}, \end{eqnarray*} which implies the desired result. \end{proof} We proceed now to the definition of the domain $\Da$ in view of the specification of the corresponding martingale problem. We set \begin{align} \label{ExpAddADom} \begin{split} \Da &= \Big\{ f : (t,x,s) \mapsto \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2} \lambda(t,z_1,z_2), \forall t\in [0,T], x,s>0, \\ &\text{where } \Pi \text{ is a finite Borel measure on } \C^2\text{ verifying Assumption \ref{A_Pi},} \\ &\text{ with } \lambda:[0,T]\times\C^2\rightarrow \C \text{ Borel such that conditions \eqref{CondLambda0}, \eqref{CondLambda1}} \\ &\text{and \eqref{CondLambda3} are fulfilled} \Big\}. \end{split} \end{align} \begin{corollary} \label{ExpAddMgPb} Suppose that Assumptions \ref{A_PAI} and \ref{A_Pi} are verified. Then $(X,S)$ is a solution of the strong martingale problem related to $(\Da, \a, \rho^S)$ where, for $f \in \Da$ of the form \eqref{DFLaplace}, $\a(f)$ is defined by \begin{equation} \label{ExpAddA} \a(f)(t,x,s) = \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2} \left\lbrace \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} + \lambda(t,z_1,z_2) \dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t} \right\rbrace, \end{equation} for all $t \in [0,T], x,s >0.$ \end{corollary} \begin{proof} By Theorem \ref{T17} notice that $f \in \Da $ defined by \eqref{DFLaplace} is continuous, which implies that \eqref{D26} is fulfilled. By \eqref{CondLambda3}, $\a(f)$ belongs to $\cL$ defined in \eqref{E23} and Condition \eqref{aFinite} is fulfilled. Finally \eqref{SMgDecomp} is a consequence of \eqref{EMartFubini} in Theorem \ref{T17}. \end{proof} Under additional conditions, one can say more about the martingale decomposition given by the strong martingale problem related to $(\Da, \a, \rho^S)$. \begin{proposition} \label{ExpAddSqIntgMg} Suppose that Assumptions \ref{A_PAI} and \ref{A_Pi} are verified. Let $f\in \Da$ as defined in \eqref{ExpAddADom}. Suppose that the following conditions are verified. 
\begin{enumerate}[label=\alph*)] \item \label{ExpAddSqIntgMg_1} $\displaystyle I_0:=\textit{{\rm Re}(supp }\Pi\text{)} \subset D/2,$ \item \label{ExpAddSqIntgMg_2} $\displaystyle \int_{\C^2}d|\Pi|(z_1,z_2) \int_0^T |\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2) < \infty.$ \end{enumerate} Then, the martingale $\displaystyle t\mapsto \widehat{M}^\lambda_t=f(t,X_t,S_t) - \int_0^t \a(f)(u,X_{u-},S_{u-}) \rho^S_{du}$ is square integrable. \end{proposition} \begin{proof} Let $t\in[0,T]$ and $\widehat{M}^\lambda$ as defined in \eqref{EMartFubini}, which is a martingale by Theorem \ref{T17}. By \eqref{E153} we have \begin{equation} \label{EMMLambda} \widehat{M}^\lambda_t = \int_{\C^2} d\Pi(z_1,z_2) M^\lambda_t(z_1,z_2), \end{equation} where $M^\lambda(z_1,z_2)$ was defined in \eqref{MLambda}. By Lemma \ref{ExpAddLemma}, For every $(z_1,z_2)\in D/2$, $M^\lambda(z_1,z_2)$ is a square integrable martingale and \eqref{MLambda2Moment} holds. By Fubini theorem, integrating both sides of \eqref{MLambda2Moment} with respect to $\vert \Pi \vert$, gives \begin{align*} \begin{split} \E{\int_{\C^2}d|\Pi|(z_1,z_2) |M^\lambda_t(z_1,z_2)|^2} & = \int_{\C^2}d|\Pi|(z_1,z_2) \E{ |M^\lambda_t(z_1,z_2)|^2 } \\ & = \int_{\C^2}d|\Pi|(z_1,z_2) \int_0^t e^{\kappa_u(2{\rm Re}(z_1), 2{\rm Re}(z_2))} |\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2) \\ &\leq \sup_{u\in [0,T], (a,b)\in I_0} e^{\kappa_u(a, b)} \int_{\C^2}d|\Pi|(z_1,z_2) \int_0^t |\lambda(u,z_1,z_2)|^2 \rho_{du}(z_1,z_2). \end{split} \end{align*} Now, by point \ref{propAddPt3} of Proposition \ref{propAdd} and condition \ref{ExpAddSqIntgMg_2}, the right-hand side is finite. This together with \eqref{EMMLambda} and Cauchy-Schwarz show that $\widehat{M}^\lambda$ is square integrable. \end{proof} \begin{proposition} \label{P24}\ We suppose the validity of Assumptions \ref{A_PAI}. \begin{enumerate}[label=\arabic*)] \item \label{P24_1} Assumption \ref{E0} is verified. More precisely \begin{enumerate}[label=\roman*)] \item \label{P24_1_i} $id: (t,x,s)\longmapsto s \in \Da$ and \begin{equation}\label{ExpAddId} \a(id)(t,x,s)=s\dfrac{d\kappa_t(0,1)}{d\rho^S_t},\quad \forall t\in[0,T], x,s>0. \end{equation} \item \label{P24_1_ii} $(t,x,s)\longmapsto s^2 \in \Da$ and \begin{equation}\label{ExpAddAlpha} \widetilde\a(id)(t,x,s)= s^2,\quad \forall t\in[0,T], x,s>0. \end{equation} \end{enumerate} \item \label{P24_2} Let $\Pi$ be a finite signed Borel measure on $\C^2$ verifying Assumption \ref{A_Pi}. Let $f\in \Da$ of the form \eqref{ExpAddADom}, such that $\widetilde f = f\times id \in \Da$. Then $\widetilde{\a}$, defined in \eqref{EAtilde}, is given by, $\forall t\in[0,T], x,s>0$, \begin{equation} \label{ExpAddATilde} \widetilde{\a}(f)(t,x,s) = \int_{\C^2} d \Pi(z_1, z_2) \lambda(t,z_1,z_2) x^{z_1} s^{z_2+1} \dfrac{d\rho_t(z_1, z_2,0,1)}{d\rho^S_t}. \end{equation} \end{enumerate} \end{proposition} \begin{proof} \ We first address item \ref{P24_1}. \begin{enumerate}[label=\roman*)] \item Let $\Pi_1(z_1,z_2)=\delta_{\{z_1=0,z_2=1\}}$ and $\lambda\equiv1$. Since by Assumption \ref{A_PAI} $(0,1) \in D$, $\Pi_1$ fulfills Assumption \ref{A_Pi}. The other conditions \eqref{CondLambda0}, \eqref{CondLambda1}, \eqref{CondLambda3} defining $\Da$ in \eqref{ExpAddADom} are trivially satisfied. Consequently $id \in \Da$ and \eqref{ExpAddId} follows from \eqref{ExpAddA}. \item Let $\Pi_2(z_1,z_2)=\delta_{\{z_1=0,z_2=2\}}$ and $\lambda\equiv1$. Again, by Assumption \ref{A_PAI} $(0,2) \in D$, and $\Pi_2$ fulfills Assumption \ref{A_Pi}. 
Arguments similar to those for \ref{P24_1_i} allow us to show that $(t,x,s)\longmapsto s^2 \in \Da$. \end{enumerate} Formula \eqref{ExpAddATilde} constitutes a direct application of \eqref{ExpAddA}, taking into account \eqref{ExpAddADom}, which establishes item \ref{P24_2}. In particular \eqref{ExpAddAlpha} holds. \end{proof} \section{The basic BSDE and the deterministic problem} \label{S3_BSDE} \setcounter{equation}{0} \subsection{Forward-backward SDE} \label{SGF} We consider two $\cF_t$-adapted processes $(X,S)$ fulfilling the martingale problem related to $(\Da, \a, A)$ stated in Definition \ref{DSMP} under Assumption \ref{E0}. We denote by $M^S$ (resp. $V^S$) the martingale part (resp. the predictable bounded variation component) of the special semimartingale $S$. Let $f: [0,T] \times \cO \times \C^2 \longrightarrow \C$ be a locally bounded function and let $h := g(X_T, S_T)$ be a random variable, for some continuous function $g: \cO \rightarrow \C$. In this section we concentrate on forward-backward SDEs of the type \begin{equation} \label{BSDEMgMarkov} Y_t = h + \int_t^T f(u,X_{u-}, S_{u-}, Y_{u-},Z_u)d A_u - \int_t^T Z_u dM^S_u - (O_T-O_t), \quad t\in [0,T]. \end{equation} \begin{definition} \label{DefBSDE} A triplet $(Y,Z,O)$ of processes is called {\bf solution} of \eqref{BSDEMgMarkov} if the following conditions hold. \begin{enumerate}[label=\arabic*)] \item \label{DefBSDE_1} $(Y_t)$ is $\cF_t$-adapted; \item \label{DefBSDE_2} $(Z_t)$ is $\cF_t$-predictable and \begin{enumerate} \item $\int_0^T |Z_u|^2 d\langle M^S\rangle_u < \infty$ a.s. \item $\int_0^T |f(u,X_{u-}, S_{u-}, Y_{u-},Z_u)| d \norm{A}_u < \infty $ a.s. \end{enumerate} \item \label{DefBSDE_3} Equality \eqref{BSDEMgMarkov} holds and $(O_t)$ is an $\cF_t$-local martingale such that $\langle O,M^S\rangle=0$ and $O_0 = 0$ a.s. \end{enumerate} \end{definition} Our object of study is the formulation of a deterministic problem linked to the BSDE \eqref{BSDEMgMarkov}, generalizing the ``classical'' semilinear PDE arising in the Brownian motion case. In particular we look for solutions $(Y,Z,O)$ for which there is a function $y \in \Da$ such that $\widetilde y = y \times id \in \Da$ and a locally bounded Borel function $ z: [0,T]\times \cO \longrightarrow \C, $ such that \begin{equation} \label{solForm} Y_t = y(t,X_t,S_t), \quad Z_t = z(t,X_{t-},S_{t-}), \quad \forall t \in [0,T], \end{equation} and \begin{equation} \label{condIntegYZ} \int_0^t \vert Z_u\vert^2 d\langle M^S\rangle_u < \infty \text{ a.s.} \quad \int_0^t |f(u, X_{u-}, S_{u-}, Y_{u-}, Z_u)|d \norm{A}_u <\infty \text{ a.s.} \end{equation} By \eqref{solForm} and \eqref{condIntegYZ}, Conditions \ref{DefBSDE_1} and \ref{DefBSDE_2} of Definition \ref{DefBSDE} are obviously fulfilled. Consequently the triplet $(Y,Z,O)$, where \begin{equation} \label{orthMG} O_t := Y_t-Y_0-\int_0^t Z_u dM^S_u + \int_0^t f(u,X_{u-},S_{u-},Y_{u-},Z_u) dA_u, \end{equation} is a solution of \eqref{BSDEMgMarkov} provided that \begin{eqnarray} &1.&(O_t) \text{ is an } \cF_t\text{-local martingale}, \label{cond1} \\ &2.&\langle O,M^S\rangle=0, \label{cond2} \\ &3.& Y_T = g(X_T,S_T). \label{cond3} \end{eqnarray} Since $(X,S)$ solves the strong martingale problem related to $(\Da, \a, A)$, replacing \eqref{solForm} in expression \eqref{orthMG}, Condition \eqref{cond1} can be rewritten by saying that $ \int_0^t \a(y)(u,X_{u-}, S_{u-}) dA_u + \int_0^t f(u,X_{u-},S_{u-},Y_{u-},Z_u)dA_u $ is an $\cF_t$-local martingale. 
This implies that \begin{equation} \label{cond1bis} \int_0^t \a(y)(u,X_{u-}, S_{u-}) dA_u + \int_0^t f(u,X_{u-},S_{u-},Y_{u-},Z_u)dA_u = 0. \end{equation} On the other hand, Condition \eqref{cond2} implies \begin{equation} \label{cond2bis} \langle M^Y, M^S\rangle_t = \int_0^t Z_u d\langle M^S\rangle_u, \end{equation} where $M^Y$ denotes the martingale part of $Y$. By Lemma \ref{LC} and item \ref{RSpecial_2} of Corollary \ref{RSpecial}, Condition \eqref{cond2bis} can be re-expressed as \begin{equation} \label{cond2ter} \int_0^t \widetilde\a(y)(u,X_{u-},S_{u-})dA_u = \int_0^t z(u,X_{u-},S_{u-}) \widetilde\a(id)(u,X_{u-},S_{u-})dA_u. \end{equation} Condition \eqref{cond3} requires $y(T,\cdot,\cdot) = g(\cdot,\cdot)$. This allows to state below a representation theorem. \begin{theorem} \label{T19} Suppose the existence of a function $y,$ such that $y, \widetilde{y}:=y\times id$ belong to $\Da$, and a Borel locally bounded function $z$, solving the system \begin{eqnarray} \a(y)(t, x, s) &=& - f(t, x, s, y(t, x, s), z(t, x, s))\label{suffConds1}\\ \widetilde{\a}(y)(t, x, s) &=& z(t, x, s) \widetilde{\a}(id)(t, x, s) \label{suffConds2}, \end{eqnarray}for $t\in [0,T]$ and $(x,s)\in\cO$, where the equalities hold in $\cL$, with the terminal condition $y(T,.,.) = g(.,.)$. Then the triplet $(Y,Z,O)$ defined by \begin{equation} \label{ESolYZ} Y_t = y(t,X_t,S_t), \; Z_t = z(t,X_{t-},S_{t-}) \end{equation} and $(O_t)$ given by \eqref{orthMG}, is a solution to the BSDE \eqref{BSDEMgMarkov}. \end{theorem} \begin{proof} The triplet $(Y,Z,O)$ fulfills the three conditions of Definition \ref{DefBSDE} provided that \eqref{condIntegYZ} is verified. Indeed, since $y \in \Da$ then the integral $\int_0^t |f(u,X_{u-},S_{u-},Y_{u-},Z_u)| d\norm{A}_u,$ is finite taking into account \eqref{aFinite}. Since $z$ is locally bounded, then $\int_0^T \vert Z_u\vert^2 d\langle M\rangle_u$ is also finite. This concludes the proof of the theorem. \end{proof} \begin{remark}\label{RC33}\ \begin{enumerate}[label=\arabic*.] \item \label{RC33_1} The statement of Theorem \ref{T19} can be generalized relaxing the assumption on $z$ to be locally bounded. We replace this with the condition \begin{equation} \label{ERC43} \int_0^T z^2(u,X_{u-},S_{u-})\widetilde\a(id)(u,X_{u-},S_{u-}) dA_u < \infty \text{ a.s.} \end{equation} This is equivalent to $\int_0^T \vert Z_u\vert ^2 d\langle M \rangle_u < \infty \text{ a.s.}$ \item \label{RC33_2} In particular, if $z$ is locally bounded a.s., then \eqref{ERC43} is fulfilled. \end{enumerate} \end{remark} \begin{remark}\label{RUniqueness} Theorem \ref{T19} constitutes also an existence theorem for particular BSDEs. If $M^S$ is a square integrable martingale and the function $\hat f$ associated with $f$, fulfills some Lipschitz type conditions then the solution $(Y,Z,O)$ provided by \eqref{ESolYZ} is unique in the class of processes introduced in \cite[Theorem 3.1]{CarFerSan08}. \end{remark} The presence of the local martingale $O$ is closely related to the classical martingale representation property. In fact, if $(\Omega, \cF, \P)$ verifies the local martingale representation property with respect to $M^S$, then $O$ vanishes. \begin{proposition} Suppose that $(\Omega, \cF, \P)$ fulfills the local martingale representation property with respect to $M$. Then, if $(Y,Z,O)$ is a solution to \eqref{BSDEMgMarkov} in the sense of Definition \ref{DefBSDE}, then, necessarily $O_t=0, \; \forall t\in [0,T]$. 
\end{proposition} \begin{proof} Since $(O_t)$ is an $\cF_t$-local martingale, there is a predictable process $(U_t)$ such that $ O_t=O_0+\int_0^t U_u dM^S_u, \; \forall t \in [0,T], \text{ with } O_0=0. $ So the condition $\langle O,M^S\rangle \equiv 0$ implies $\int_0^. U_u d \langle M^S\rangle_u=0.$ Consequently, $\;U \equiv 0 \; d \P \otimes d\langle M^S\rangle \; \text{a.e.,}\;$ and so $\;O_t=O_0=0 \; \forall t \in [0,T].$ \end{proof} \begin{remark} \label{RLink} We end this section recalling that the forward-backward SDE \eqref{BSDEMgMarkov} that we study is a particular of the general BSDE \eqref{BSDEI} driven by a martingale, provided in the Introduction. The link is given by \eqref{EFormTildef}. \end{remark} \subsection{Illustration 1: the Markov semigroup case} Let us consider the case of Section \ref{SMarkov} with related notations. Let $S = X^{0,x}$ be a solution of the strong martingale problem related to $(\Da, \a, A)$, see Definition \ref{DSMPMod}. Let $(P_t)$ be the semigroup introduced in \eqref{EPT}, fulfilling Assumption \ref{A_Markov} with generator $L$ defined in Definition \ref{DGen}. Let $f: [0,T] \times \R \times \C \longrightarrow \C$ be a locally bounded function and a continuous function $g: \R \longrightarrow \C$.\\ Here we have of course $S = M^S + V^S$ where $V^S = \int_0^\cdot \a(id)(u,S_{u-}) du$ and $id(s) \equiv s$. Theorem \ref{T19} gives rise to the following proposition. \begin{proposition} Suppose the existence of a function $y:[0,T]\times \R \to \C$ and a Borel locally bounded function $z:[0,T]\times \R \to \C$ fulfilling the following conditions. \begin{enumerate}[label=\roman*)] \item $t\mapsto y(t,\cdot)\; (\text{resp. } \tilde y(t,\cdot))$ takes value in $D(L)$ and it is continuous with respect to the graph norm. \item $t\mapsto y(t,\cdot)\; (\text{resp. }\tilde y(t,\cdot))$ is of class $C^1$ with values in $E$. \item For $(t,x) \in [0,T] \times \R$, \begin{eqnarray*} \partial_t y(t,x) + Ly(t,\cdot)(x) &=& - f(t,x,y(t,x),z(t,x)), \\ z(t,\cdot) \widetilde L(id) &=& \widetilde L y(t,\cdot), \\ y(T,.) &=& g, \end{eqnarray*} where $\widetilde L \varphi = L \tilde \varphi - \varphi L id - id L \varphi$. \end{enumerate} Then the triplet $(Y,Z,O)$, where $$ Y_t = y(t,X_t,S_t), \; Z_t = z(t,X_t,S_t),$$ and $(O_t)$ is given by \eqref{orthMG} is a solution to the BSDE \eqref{BSDEMgMarkov}. \end{proposition} \begin{remark} \label{RMSC} If $S = \sigma W$ with $\sigma > 0$ and $\varphi:[0,T] \times \R \rightarrow \C$ is of class $C^{1,2}$ then $\a(\varphi) = \partial_t \varphi + \frac{\sigma^2}{2} \partial_{ss} \varphi $ and $\widetilde \a(\varphi) = \sigma^2 \partial_s \varphi = \widetilde \a(id) \partial_s \varphi $. In the case where $L$ is a generic generator, the formal quotient $\frac{\widetilde \a(\varphi)}{\widetilde \a(id)}$ can be considered as a sort of generalized derivative. \end{remark} \subsection{Illustration 2: the diffusion case} Consider the case of where $(X,S)$ is diffusion process as given in equations \eqref{expleDiff}. We recall that in that case, the operator $\a$, for $\varphi\in \mathcal{C}^{1,2}([0,T]\times \R^2)$, is given by \begin{eqnarray*} \a(\varphi) &=& \partial_t \varphi + b_S \partial_s \varphi + b_X \partial_x \varphi\\ &+& \frac{1}{2} \left\lbrace |\sigma_S|^2\partial_{ss} \varphi + |\sigma_X|^2 \partial_{xx} \varphi + 2 \langle \sigma_S, \sigma_X\rangle \partial_{sx} \varphi \right\rbrace . 
\end{eqnarray*} \begin{corollary} Let $(y,z)$ be a solution of the PDE \begin{eqnarray} \a(y)(t,x,s) &=& -f(t,x,s,y(t,x,s),z(t,x,s))\\ |\sigma_S|^2 z(t,x,s) &=& |\sigma_S|^2 \partial_s y(t,x,s) + \langle \sigma_S, \sigma_X\rangle \partial_x y(t,x,s), \end{eqnarray} with terminal condition $y(T,.,.) = g(.,.)$. Then the triplet $(Y,Z,O)$, where $$ Y_t = y(t,X_t,S_t), \; Z_t = z(t,X_t,S_t),$$ and $(O_t)$ is given by \eqref{orthMG} is a solution to the BSDE \eqref{BSDEMgMarkov}. \end{corollary} \section{Explicit solution for F\"ollmer-Schweizer decomposition in the basis risk context} \label{S4_BSDE} \setcounter{equation}{0} \subsection{General considerations} We will discuss in this section the important F\"ollmer-Schweizer decomposition, denoted shortly F-S decomposition. It is a generalization of the well-known Galtchouk-Kunita-Watanabe decomposition for martingales, to the more general case of semimartingales. Our task will consist in providing explicit expressions for the F-S decomposition in several situations. Let $S$ be a special semimartingale with canonical decomposition $S=M^S+V^S$. In the sequel we will convene that the space $L^2(M^S)$ consists of the predictable processes $(Z_t)_{t \in [0,T]}$ such that $\E{\int_0^T | Z_u|^2d\langle M^S \rangle_u} < \infty$ and $L^2(V^S)$ will denote the set of all predictable processes $(Z_t)_{t \in [0,T]}$ such that $\E{\left(\int_0^T |Z_u|d\| V^S \|_u\right)^2} < \infty$. The intersection of these two spaces is denoted \begin{equation}\label{ETheta} \Theta:= L^2(M^S) \cap L^2(V^S). \end{equation} The F\"ollmer-Schweizer decomposition is defined as follows. \begin{definition} \label{FSDefinition} Let $h$ be a (possibly complex valued) square integrable $\cF_T$-measurable random variable. We say that $h$ admits an F-S decomposition with respect to $S$ if it can be written as \begin{equation} \label{FSDec} h = h_0 + \int_{0}^{T} Z_u d S_u + O_T, \P-a.s., \end{equation} where $h_0$ is an $\cF_0$-measurable r.v., $Z \in \Theta$ and $O = (O_t)_{t\in[0,T]}$ is a square integrable martingale, strongly orthogonal to $M^S$ \end{definition} \begin{remark}\label{RFS}\ \begin{enumerate}[label=\arabic*)] \item \label{RFS_1} The notion of weak and strong orthogonality is discussed for instance in \cite[Section 4.3]{ProtterBook} and \cite[Section 1.4b]{JacodBook}. Let $L$ and $N$ be two $\cF_t$-local martingales, with null initial value. $L$ and $N$ are said to be {\it strongly orthogonal} if $LN$ is a local martingale. If $L$ and $N$ are locally square integrable, then they are strongly orthogonal if and only if $\langle L,N\rangle=0$. The definition of locally square integrable martingale is given for instance just before \cite[Theorem 49 in Chapter 1]{ProtterBook}. \item \label{RFS_2} The F-S decomposition makes also sense for complex valued square integrable random variable $h$. In that case the triplet $(h_0,Z,O)$ is generally complex. \item \label{RFS_3} If $h$ admits an F-S decomposition \eqref{FSDec} then the complex conjugate $\bar{h}$ admits an F-S decomposition given by \begin{equation} \label{FSDecBar} \bar{h} = \bar{h}_0 + \int_{0}^{T} \bar{Z}_u d S_u + \bar{O}_T, \P-a.s. \end{equation} \end{enumerate} \end{remark} The F-S decomposition has been extensively studied in the literature: sufficient conditions on the process $S$ were given so that every square integrable random variable has such a decomposition. A well-known condition ensuring the existence of such a decomposition is the so called \textbf{structure condition} (SC). 
\begin{definition} \label{SC} We say that a special semimartingale $S=V^S+M^S$ satisfies the \textbf{structure condition} (SC) if there exists a predictable process $ \alpha$ such that \begin{enumerate} \item $V^S_t=\int_0^t \alpha_u d \langle M^S \rangle_u$, \item $\int_0^T \alpha_u^2 d \langle M^S \rangle_u < \infty$ a.s. \end{enumerate} \end{definition} The latter quantity plays a central role in the F-S decomposition. The associated process \begin{equation} \label{MeanVarTradOff} K_t := \int_0^t \alpha_u^2 d \langle M^S \rangle_u \text{ for } t \in [0,T], \end{equation} is called \textbf{mean variance trade-off} process. \begin{remark}\label{RMS}\ \cite{monat} proved that, under (SC) and the additional condition that the process $K$ is uniformly bounded, the F-S decomposition of any real valued square integrable random variable exists and it is unique. More recent papers about the subject are \cite{Schweizer2001}, \cite{Cerny} and references therein. \end{remark} This general decomposition refers to the process $S$ as underlying and it will be applied in the context of mean-variance hedging under basis risk, where $X$ is an observable price process of a non-traded asset. \\ As in previous sections, we consider a couple $(X,S)$ verifying the martingale problem \eqref{pbMg}, and we suppose Assumption \ref{E0} to be fulfilled. In the sequel we do not necessarily assume (SC) for $S$. \begin{definition} \label{DWeakFS} Let $h$ be a square integrable $\cF_T$-measurable random variable. We say that $h$ admits a {\bf weak F-S decomposition} with respect to $S$ if it can be written as \begin{equation} \label{FSDecWeak} h = h_0 + \int_{0}^{T} Z_u d S_u + O_T, \P{\rm-a.s.}, \end{equation} where $h_0$ is an $\cF_0$-measurable r.v., $Z$ is a predictable process such that $\int_0^T | Z_u|^2d\langle M^S \rangle_u < \infty$ a.s., $\int_0^T | Z_u|d\Vert V^S \Vert_u < \infty$ a.s. and $O$ is a local martingale such that $\langle O,M^S \rangle = 0$ with $O_0 = 0$. \end{definition} Finding a weak F-S decomposition \eqref{FSDecWeak} $(h_0,Z,O)$ for some r.v. $h$ is equivalent to finding a solution $(Y,Z,O)$ of the BSDE \begin{equation} \label{FSasBSDE} Y_t = h - \int_{t}^{T} Z_u d S_u - (O_T-O_t). \end{equation} The link is given by $Y_0 = h_0$. Equation \eqref{FSasBSDE} can be seen as a special case of BSDE \eqref{BSDEMgMarkov}, where the driver $f$ is linear in $z$, of the form \begin{equation} \label{BSpecialf} f(t, x, s, y, z) = -\a(id)(t, x, s) z. \end{equation} This point of view was taken for instance by \cite{FS1994}. \begin{remark}\label{FS-BSDE} Let $(Y,Z,O)$ be a solution of \eqref{FSasBSDE} with $Z \in \Theta$, where $\Theta$ has been defined in \eqref{ETheta} and $O$ is a square integrable martingale. Then $h$ admits an F-S decomposition \eqref{FSDec} with $Y_0 = h_0$. \end{remark} We consider the case of the final value $h = g(X_T,S_T)$ for some continuous function $g$. Theorem \ref{T19} can be applied to obtain the result below. \begin{corollary} \label{FSConditions} Let $y$ (resp. $z$): $[0,T] \times \cO \rightarrow \C$. We suppose that the following conditions are verified. \begin{enumerate}[label=\arabic*)] \item \label{FSConditions1} $y, \widetilde{y}:=y\times id$ belong to $\Da$. \item \label{FSConditions2} $z$ verifies \eqref{ERC43} of Remark \ref{RC33}. 
\item \label{FSConditions3} $(y,z)$ solve the problem \begin{eqnarray} \a(y)(t, x, s) &=& \a(id)(t, x, s) z(t, x, s), \label{FSConds1} \\ \widetilde{\a}(y)(t, x, s) &=& \widetilde\a(id)(t, x, s) z(t, x, s), \label{FSConds2} \end{eqnarray} where the equalities hold in $\cL$, with the terminal condition $y(T,.,.) = g(.,.)$. \end{enumerate} Then the triplet $(Y,Z,O)$, where $$ Y_t = y(t,X_{t},S_{t}), \; Z_t = z(t,X_{t-},S_{t-}), \; O_t=Y_t - Y_0 - \int_{0}^{t} Z_u d S_u,$$ is a solution to the linear BSDE \eqref{FSasBSDE} linked to the weak F-S decomposition. \end{corollary} \begin{remark} \label{RBSDEFS} We recall that, setting $h_0=y(0, X_0, S_0)$, the triplet $(h_0,Z,O)$ is a candidate for a true F-S decomposition, see Definition \ref{FSDefinition}. Sufficient conditions for this are stated below. \begin{enumerate}[label=\alph*)] \item \label{RBSDEFS_1} $h=g(X_T,S_T) \in L^2(\Omega).$ \item \label{RBSDEFS_2} $(z(t, X_{t-}, S_{t-})) \in \Theta$ i.e. \begin{itemize} \item $\E{\int_0^T \left|z(t, X_{t-}, S_{t-})\right|^2 \widetilde\a(id)(t,X_{t-}, S_{t-}) dA_t } < \infty$. \item $\E{\left(\int_0^T \left|z(t, X_{t-}, S_{t-})\right| \Vert \a(id)(t, X_{t-}, S_{t-}) dA \Vert_t \right)^2} < \infty$. \end{itemize} \item \label{RBSDEFS_3} $\left(y(t,X_t,S_t) - \int_0^t \a(y)(u,X_{u-}, S_{u-}) dA_u\right)$ is an $\cF_t$-square integrable martingale. \end{enumerate} \end{remark} We remark that \ref{RBSDEFS_2} and \ref{RBSDEFS_3} imply by additivity that $O$ is a square integrable martingale. In fact, $\forall t\in[0,T]$ \begin{equation} \label{E_RBSDEFS} O_t = y(t,X_t,S_t) -y(0,X_0,S_0)- \int_0^t \a(y)(u,X_{u-}, S_{u-}) dA_u - \int_0^t z(u,X_{u-}, S_{u-}) dM^S_u. \end{equation} \subsection{Application: exponential of additive processes} \label{PAI} We will investigate in this section a significant context where the equations in Corollary \ref{FSConditions} can be solved, yielding the weak F-S decomposition; we also give sufficient conditions so that the true F-S decomposition is fulfilled. We focus on exponentials of additive processes. Another example will be given in Section \ref{S412}. Let $(X,S)$ be a couple of exponentials of semimartingale additive processes, as introduced in Section \ref{expleExpAdd}. \begin{proposition}\label{PSC} Under Assumption \ref{A_PAI}, $S$ verifies the (SC) condition given in Definition \ref{SC} if and only if \begin{equation} \int_0^T \left(\frac{d\kappa_t(0,1)}{d\rho^S_t}\right)^2 d\rho^S_t <\infty. \end{equation} In this case, the mean variance trade-off process $K$ is deterministic and given by \begin{equation} \label{MVT_PAI} K_t = \int_0^t \left(\frac{d\kappa_u(0,1)}{d\rho^S_u}\right)^2 d\rho^S_u <\infty, \;\forall t\in[0,T]. \end{equation} \end{proposition} \begin{proof} It follows from Corollary \ref{CCS} and item \ref{propAddPt5} of Proposition \ref{propAdd}. \end{proof} We look for the F-S decomposition of an $\cF_T$-measurable random variable $h$ of the form $h:=g(X_T,S_T)$ for a function $g$ such that \begin{equation} \label{Eh} g(x,s) = \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2}, \end{equation} where $\Pi$ is a finite Borel complex measure. \begin{remark} This family of random variables includes many examples, for instance the call and put option payoffs. Indeed, we have, for $K,s>0$ and arbitrary $0<R<1$, $$ (s-K)^+-s = \frac{1}{2\pi i} \int_{R-i\infty}^{R+i\infty}s^z \frac{K^{1-z}}{z(z-1)}dz. $$ Moreover, for arbitrary $U<0$, $$ (K-s)^+ = \frac{1}{2\pi i} \int_{U-i\infty}^{U+i\infty}s^z \frac{K^{1-z}}{z(z-1)}dz.
$$ For more details, see for example \cite{Hubalek2006}, \cite{EberleinGlau2010} and \cite{gor2013variance}. \end{remark} In Section \ref{expleExpAdd}, Corollary \ref{ExpAddMgPb} states that $(X,S)$ fulfills the martingale problem with respect to $(\Da, \a, \rho^S)$ where the objects $\Da, \a$ and $\rho^S$ were introduced respectively in \eqref{ExpAddADom}, \eqref{ExpAddA}, \eqref{ExpAddRhoS}. In order to determine the F-S decomposition (in its weak form given in Definition \ref{DWeakFS}) we make use of Corollary \ref{FSConditions}. We look for a function $y$ (resp. $z$): $[0,T] \times \R^2 \rightarrow \C$ such that Hypotheses \ref{FSConditions1}, \ref{FSConditions2} and \ref{FSConditions3} are fulfilled. In agreement with the definition of $\Da$ given in \eqref{ExpAddADom} we select $y$ of the form \begin{equation}\label{Eh1} y(t,x,s) = \int_{\C^2} d \Pi(z_1, z_2)x^{z_1} s^{z_2} \lambda(t,z_1,z_2), \end{equation} where $\Pi$ is the same finite complex measure as in \eqref{Eh} and $\lambda:[0,T] \times \C^2 \rightarrow \C$. We will start by writing ``necessary'' conditions for a couple $(y,z)$, such that $y$ has the form \eqref{Eh1}, to be a solution of \eqref{FSConds1} and \eqref{FSConds2}. Suppose that the couple $(y,z)$ fulfills \eqref{FSConds1} and \eqref{FSConds2} of Corollary \ref{FSConditions}. We consider the expressions of $ \a(id), \widetilde{\a}(id)$ given by \eqref{ExpAddId}, \eqref{ExpAddAlpha}, and $\a(y), \widetilde{\a}(y)$ given by \eqref{ExpAddA} and \eqref{ExpAddATilde}, for $f = y$. We replace them in the two above-mentioned conditions \eqref{FSConds1} and \eqref{FSConds2} to obtain the following equations for $\lambda$ ($d \rho^S_t$ a.e.). \begin{equation} \label{E450} \begin{aligned} \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2} \left\lbrace \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} + \lambda(t,z_1,z_2) \dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t} \right\rbrace &= s \dfrac{d\kappa_t(0,1)}{d\rho^S_t} z(t, x, s) \\ \int_{\C^2} d \Pi(z_1, z_2) \lambda(t,z_1,z_2) x^{z_1} s^{z_2+1} \dfrac{d\rho_t(z_1, z_2,0,1)}{d\rho^S_t} &= s^2 z(t, x, s). \end{aligned} \end{equation} The final condition $y(T,\cdot, \cdot) = g(\cdot,\cdot)$ produces \begin{equation} \label{E450bis} \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2} \lambda(T,z_1,z_2) = \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2}. \end{equation} Replacing $z$ from the second line of \eqref{E450} into the first one and identifying the inverse Fourier-Laplace transforms, it follows that $\lambda$ verifies \begin{eqnarray} \label{lambdaODE} \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} &=& \lambda(t,z_1,z_2) \left\lbrace \dfrac{d\kappa_t(0,1)}{d\rho^S_t}\dfrac{d\rho_t(z_1, z_2,0,1)}{d\rho^S_t} - \dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t}\right\rbrace \label{lambdaODE1} \\ \lambda(T,z_1,z_2) &=& 1, \label{lambdaODE2} \end{eqnarray} for all $(z_1,z_2)\in supp\;\Pi$. Without loss of generality we can clearly set $\lambda(\cdot, z_1,z_2) = 0$ for $(z_1,z_2) $ outside the support of $\Pi$. We observe that for fixed $z_1, z_2$, \eqref{lambdaODE} constitutes an ordinary differential equation (in the Lebesgue-Stieltjes sense) in time $t$. We now solve the linear differential equation \eqref{lambdaODE}.
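Before treating the general case, we sketch a simple illustration of \eqref{lambdaODE}; the scaling assumed here is purely illustrative and is not used in the sequel. Suppose (as one would expect, for instance, in a time-homogeneous L\'evy-type situation) that $\kappa_t(z_1,z_2) = t\,\kappa(z_1,z_2)$, $\rho_t(z_1,z_2,0,1) = t\,\rho(z_1,z_2,0,1)$ and $\rho^S_t = t\,\rho^S$, where $\kappa(\cdot)$, $\rho(\cdot)$ and $\rho^S>0$ denote constants introduced only for this illustration. Then all the Radon-Nikodym derivatives appearing in \eqref{lambdaODE} are constant in time, \eqref{lambdaODE}--\eqref{lambdaODE2} reduces to a linear ODE with constant coefficients and terminal value $1$, and its solution is
\begin{equation*}
\lambda(t,z_1,z_2) = \exp\left( (T-t)\left[ \kappa(z_1,z_2) - \frac{\rho(z_1,z_2,0,1)}{\rho^S}\, \kappa(0,1) \right]\right), \quad t\in[0,T].
\end{equation*}
One can check directly that this expression agrees with the general formula \eqref{E4600} obtained below.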
Provided that \begin{equation} \label{EIntegrability} u \mapsto \dfrac{d\rho_u(z_1, z_2,0,1)}{d\rho^S_u} \dfrac{d\kappa_u(0,1)}{d\rho^S_u} \in L^1([0,T], d\rho^S), \end{equation} the (unique) solution of \eqref{lambdaODE} is given by \begin{eqnarray} \label{E4600} \lambda(t,z_1,z_2) &=& \exp \left( \int_t^T \left[ \dfrac{d\kappa_u(z_1,z_2)}{d\rho^S_u} - \dfrac{d\rho_u(z_1, z_2,0,1)}{d\rho^S_u} \dfrac{d\kappa_u(0,1)}{d\rho^S_u} \right] d\rho^S_u \right) \nonumber\\ &=& \exp \left( \int_t^T \kappa_{du}(z_1,z_2) - \dfrac{d\rho_u(z_1, z_2,0,1)}{d\rho^S_u} \kappa_{du}(0,1) \right) \\ &=& \exp \left( \int_t^T \eta(z_1,z_2, du) \right)\nonumber , \end{eqnarray} where \begin{equation} \label{eta} \eta(z_1,z_2, t) := \kappa_{t}(z_1,z_2) - \int_0^t\dfrac{d\rho_u(z_1, z_2,0,1)}{d\rho^S_u} \kappa_{du}(0,1), \end{equation} which is clearly absolutely continuous with respect to $d \rho^S$. \\ At this point, we have an explicit form of $\lambda$ defining the function $y$ intervening in the weak F-S decomposition. In the sequel we will show that such a choice of $\lambda$ constitutes a sufficient condition so that $(y,z)$, where $y$ is defined by \eqref{Eh1} and $z$ is determined by the second line of \eqref{E450}, is a solution of the deterministic problem given by \eqref{FSConds1} and \eqref{FSConds2}. In order to check \eqref{EIntegrability} and the validity of \eqref{E450} and \eqref{E450bis}, we formulate a hypothesis which reinforces Assumptions \ref{A_PAI} and \ref{A_Pi}. \begin{assumption} \label{setD} Recall $I_0:= {\rm Re}(supp\;\Pi) \; (\subset \R^2),$ where we convene that ${\rm Re}(z_1,z_2) = ({\rm Re}(z_1), {\rm Re}(z_2)). $ We denote $I:= 2I_0\cup \{(0,1)\}$ and $\cD$ the set \begin{equation} \cD = \Big\{z=(z_1,z_2)\in D, \; \int_0^T \left| \frac{d\kappa_u(z_1,z_2)}{d \rho^S_u}\right|^2 d\rho^S_u < \infty \Big\}. \end{equation} We assume the validity of the properties below. \begin{enumerate}[label=\arabic*)] \item $\rho^S$ is strictly increasing. \label{setD0} \item $I_0$ is bounded. \label{setD1} \item $\forall z\in supp\; \Pi,\; z, z+(0,1) \in \cD$ \label{setD2}. \item $\displaystyle\sup_{x\in I} \norm{\frac{d(\kappa_t(x))}{d\rho^S_t}}_{\infty} < \infty$. \label{setD3} \end{enumerate} \end{assumption} \begin{remark} \label{R411} \ \begin{enumerate}[label=\arabic*)] \item \label{R411Item1} Assumptions \ref{A_PAI} and \ref{A_Pi} are consequences of Assumption \ref{setD}. \item \label{R411Item2} Taking into account Remark \ref{R226}, we emphasize that, for the rest of this section, the statements would not change if we consider that the quantities integrated with respect to the measure $\Pi$ are null outside its support. \item \label{R411Item3} $I \subset \cD$, in particular $(0,1) \in \cD$ because of item \ref{setD3} of Assumption \ref{setD}. \item \label{R411Item4} By the previous item and Proposition \ref{PSC}, $S$ verifies the (SC) condition and the mean variance trade-off process $K$ given by \eqref{MVT_PAI} is deterministic. \item \label{R411Item5} $I_0 \subset D/2$ (i.e. $supp\;\Pi \subset D/2$). This follows again by item \ref{setD3} of Assumption \ref{setD}. \end{enumerate} \end{remark} In the sequel we will use the following notation. \begin{equation}\label{Egamma} \gamma_t(z_1,z_2) := \frac{d\rho_t(z_1,z_2,0,1)}{d\rho^S_t}, \; \forall (z_1,z_2)\in D/2, t\in[0,T]. \end{equation} Similarly to \cite[Lemma 3.28]{gor2013variance}, we can show the upper bounds below. \begin{lemma}\ \label{LemmaEta} Under Assumption \ref{setD}, the properties below hold.
\begin{enumerate}[label=\arabic*)] \item \label{lambdaWellDefined} Condition \eqref{EIntegrability} is verified for $t\in[0,T], (z_1,z_2)\in supp\; \Pi$. \item \label{pt2LemmaEta} There is a positive constant $c_1$, such that, $d\rho^S_t$ a.e., $\displaystyle \sup_{(z_1,z_2)\in I_0+i\R^2} \frac{d {\rm Re}(\eta(z_1,z_2,t))}{d\rho^S_t} \leq c_1$. \item \label{pt3LemmaEta} There are positive constants $c_2$, $c_3$ such that, $d\rho^S_t$ a.e., the following property holds. \\ For any $\displaystyle (z_1,z_2)\in I_0+i\R^2, \; \bigg| \gamma_t(z_1,z_2) \bigg|^2 \leq \frac{d\rho_t(z_1,z_2)}{d\rho^S_t} \leq c_2 - c_3 \frac{d {\rm Re}(\eta(z_1,z_2,t))}{d\rho^S_t}$. \item \label{pt4LemmaEta} $\displaystyle \sup_{(z_1,z_2)\in I_0+i\R^2} - \int_0^T 2 {\rm Re}(\eta(z_1,z_2,dt))\exp \left( \int_t^T 2 {\rm Re}(\eta(z_1,z_2,du)) \right) < \infty$. \end{enumerate} \end{lemma} \begin{proof} For illustration we prove item \ref{lambdaWellDefined}; the other points can be shown by techniques similar to those of \cite[Lemma 3.28]{gor2013variance}. \\ Let $t\in[0,T], (z_1,z_2)\in supp\; \Pi$. Condition \eqref{EIntegrability} is valid since $(0,1)\in \cD$, $z, z+(0,1) \in \cD$ and $$\left(\int_0^t\left|\dfrac{d\rho_u(z_1, z_2,0,1)}{d\rho^S_u} \frac{d\kappa_{u}(0,1)}{d\rho^S_u} \right| \rho^S_{du}\right)^2 \leq \int_0^t\left|\frac{d\rho_u(z_1, z_2,0,1)}{d\rho^S_u}\right|^2 \rho^S_{du} \int_0^t\left|\frac{d\kappa_{u}(0,1)}{d\rho^S_u} \right|^2 \rho^S_{du}.$$ \end{proof} Now, we can state a proposition that indeed gives the weak F-S decomposition of a random variable $h= g(X_T,S_T)$. \begin{proposition} \label{FSExpAdd} We suppose the validity of Assumption \ref{setD}. Let $\lambda$ be defined as \begin{equation} \label{lambda} \lambda(t,z_1,z_2) = \exp \left( \int_t^T \eta(z_1,z_2, du) \right), \forall (z_1,z_2) \in D/2, \end{equation} where $\eta$ has been defined at \eqref{eta}. Then $(Y,Z,O)$ is a solution of the BSDE \eqref{FSasBSDE}, where \begin{eqnarray*} Y_t &=& \int_{\C^2} d \Pi(z_1, z_2) X_{t}^{z_1} S_{t}^{z_2} \lambda(t,z_1,z_2) \\ Z_t &=& \int_{\C^2} d \Pi(z_1, z_2) X_{t-}^{z_1} S_{t-}^{z_2-1} \lambda(t,z_1,z_2) \gamma_t(z_1,z_2) \\ O_t &=& Y_t - Y_0 - \int_{0}^{t} Z_u d S_u, \end{eqnarray*} recalling that $\gamma$ has been defined in \eqref{Egamma}. \end{proposition} \begin{proof} The result will follow from Corollary \ref{FSConditions} for which we need to check the assumptions. \\ First we prove that the function $y$ defined by \eqref{Eh1}, where $\lambda$ is defined in \eqref{lambda}, is indeed an element of $\Da$. Second, we prove that the associated $\widetilde y$ also belongs to $\Da$. Third, we check Condition \eqref{ERC43} for $z$. Finally we need to check the validity of the system of equations \eqref{FSConds1} and \eqref{FSConds2}. \\ Concerning $y$, the function $\lambda(\cdot,z_1,z_2)$ is well-defined for $ (z_1,z_2) \in supp\; \Pi$, thanks to point \ref{lambdaWellDefined} of Lemma \ref{LemmaEta}, and by definition we have $\lambda(dt, z_1, z_2) \ll \rho^S_{dt}, \quad \forall(z_1,z_2)\in D$, which is Condition \eqref{CondLambda0}. \\ In order to prove that $y \in \Da$, which was defined in \eqref{ExpAddADom}, it remains to prove Conditions \eqref{CondLambda1} and \eqref{CondLambda3} of Theorem \ref{T17}. Let $t\in[0,T],\;(z_1,z_2)\in D/2$.
By \eqref{lambda}, we have $$\left|\lambda(t,z_1,z_2)\right|= \exp\left( \int_t^T \frac{d{\rm Re}(\eta(z_1,z_2,u))}{d\rho^S_u} \rho^S_{du}\right),$$ which implies, by item \ref{pt2LemmaEta} of Lemma \ref{LemmaEta}, that \begin{equation} \label{lambdaBounded} |\lambda(t,z_1,z_2)| \leq \exp\left(c_1 \rho^S_T\right), \end{equation} which gives in particular \eqref{CondLambda1}: in fact $\int_{\C^2}d|\Pi|(z_1,z_2) |\lambda(t,z_1,z_2)|^2 \leq e^{2 c_1 \rho^S_T} |\Pi|(\C^2) < \infty$. Finally, to conclude that $y\in \Da$, we need to show \eqref{CondLambda3}. By construction, $\lambda$ verifies equation \eqref{lambdaODE1}. Hence, by \eqref{lambdaODE1} and the Cauchy-Schwarz inequality, we get \begin{align} \label{E4250} \begin{split} \Big(\int_0^T d\rho^S_t \Big| \frac{d\lambda(t,z_1,z_2)}{d\rho^S_t} &+ \lambda(t,z_1,z_2) \frac{d\kappa_t(z_1,z_2)}{d\rho^S_t}\Big| \Big)^2 \\ &= \left(\int_0^T d\rho^S_t \left| \lambda(t,z_1,z_2)\right|\left|\frac{d\kappa_t(0,1)}{d\rho^S_t}\frac{d\rho_t(z_1, z_2,0,1)}{d\rho^S_t} \right|\right)^2 \\ &\leq \int_0^T \left| \lambda(t,z_1,z_2)\right|^2\left|\gamma_t(z_1,z_2) \right|^2 d\rho^S_t \int_0^T \left|\frac{d\kappa_t(0,1)}{d\rho^S_t}\right|^2 d\rho^S_t\\ &\leq \left( I_1(z_1,z_2) + I_2(z_1,z_2)\right) \int_0^T \left|\frac{d\kappa_t(0,1)}{d\rho^S_t}\right|^2 d\rho^S_t , \end{split} \end{align} with \begin{align} \label{EI12} \begin{split} I_1(z_1,z_2) &:= c_2 \int_0^T \left| \lambda(t,z_1,z_2) \right|^2 d\rho^S_t \\ I_2(z_1,z_2) &:= -c_3 \int_0^T \left| \lambda(t,z_1,z_2) \right|^2 \frac{d {\rm Re}(\eta(z_1,z_2,t))}{d\rho^S_t} d\rho^S_t \end{split} \end{align} where we have used item \ref{pt3LemmaEta} of Lemma \ref{LemmaEta}. Since $\lambda$ is uniformly bounded, see \eqref{lambdaBounded}, we have \begin{equation} \label{I_1Bounded} I_1(z_1,z_2)\leq c_2 \rho^S_T \exp\left(2c_1 \rho^S_T\right). \end{equation} On the other hand, \begin{align} \label{I_2Bounded} \begin{split} I_2(z_1,z_2) &= -c_3 \int_0^T {\rm Re}(\eta(z_1,z_2,dt)) \exp \left( \int_t^T 2 {\rm Re}(\eta(z_1,z_2,du))\right) \\ &\leq c_3 \displaystyle \sup_{(y_1,y_2)\in I_0+i\R^2} - \int_0^T {\rm Re}(\eta(y_1,y_2,dt))\exp \left( \int_t^T 2 {\rm Re}(\eta(y_1,y_2,du)) \right), \end{split} \end{align} which is finite by item \ref{pt4LemmaEta} of Lemma \ref{LemmaEta}. Integrating \eqref{E4250} with respect to $\vert \Pi \vert$, taking into account the two uniform bounds in $(z_1,z_2)$, i.e. \eqref{I_1Bounded} and \eqref{I_2Bounded}, we conclude the validity of \eqref{CondLambda3}, so that $y\in \Da$. We show similarly that $\widetilde y:=y\times id\in \Da$. In fact, for $t\in[0,T]$ and $x,s>0$, we have \begin{equation*} \widetilde y(t,x,s) = \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2+1} \lambda(t,z_1,z_2) \quad = \int_{\C^2} d \widetilde\Pi(z_1, z_2) x^{z_1} s^{z_2} \widetilde \lambda(t,z_1,z_2), \end{equation*} where $\widetilde \lambda(t,z_1,z_2) = \lambda(t,z_1,z_2-1)$ and $\widetilde\Pi$ is the Borel complex measure defined by $$ \int_{\C^2} d \widetilde\Pi(z_1, z_2) \varphi(z_1,z_2) = \int_{\C^2} d \Pi(z_1, z_2) \varphi(z_1,z_2+1), $$ for every bounded measurable function $\varphi$. Hence, $ supp\; \widetilde{\Pi} = supp\; \Pi + (0,1) $. By \ref{R411Item1} and \ref{R411Item5} in Remark \ref{R411}, we have $(0,1)\in D/2$ and $ supp\; \Pi \subset D/2$. Then, by Remark \ref{R23}, $ supp\; \widetilde{\Pi}\subset D$, so that Assumption \ref{A_Pi} is verified for $\widetilde{\Pi}$.
Moreover, by definition of $\widetilde \Pi$, the conditions \eqref{CondLambda0} and \eqref{CondLambda1} are fulfilled replacing $\Pi$ and $\lambda$ with $\widetilde \Pi$ and $\widetilde \lambda$. In order to conclude that $\widetilde y\in \Da$, we need to show \begin{equation} \label{ECondb} A := \int_0^T d\rho^S_t \int_{\C^2}d|\widetilde\Pi|(z_1,z_2) \left| \dfrac{d\lambda(t,z_1,z_2-1)}{d\rho^S_t} + \lambda(t,z_1,z_2-1) \dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t}\right| < \infty, \end{equation} which corresponds to Condition \eqref{CondLambda3} for $\Pi$ and $\lambda$ replaced by $\widetilde \Pi$ and $\widetilde \lambda$. Notice that \begin{align*} \begin{split} A &= \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left| \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} + \lambda(t,z_1,z_2) \dfrac{d\kappa_t(z_1,z_2+1)}{d\rho^S_t}\right| \\ &= \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left| \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} + \lambda(t,z_1,z_2) \left(\dfrac{d\rho_t(z_1,z_2,0,1)}{d\rho^S_t} + \dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t} + \dfrac{d\kappa_t(0,1)}{d\rho^S_t}\right)\right| \\ &\leq A_1 + A_2+A_3, \end{split} \end{align*} where \begin{eqnarray*} A_1 &:=& \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left| \dfrac{d\lambda(t,z_1,z_2)}{d\rho^S_t} + \lambda(t,z_1,z_2)\dfrac{d\kappa_t(z_1,z_2)}{d\rho^S_t}\right|, \\ A_2 &:=& \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left|\lambda(t,z_1,z_2) \dfrac{d\kappa_t(0,1)}{d\rho^S_t} \right|, \\ A_3 &:=& \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left|\lambda(t,z_1,z_2) \dfrac{d\rho_t(z_1,z_2,0,1)}{d\rho^S_t} \right|. \end{eqnarray*} The first term $A_1$ is finite, since we already proved that $y\in \Da$ and so condition \eqref{CondLambda3} is fulfilled. Moreover \begin{eqnarray*} A_2 &\leq & \norm{\dfrac{d\kappa_t(0,1)}{d\rho^S_t}}_{\infty} \int_0^T d\rho^S_t \int_{\C^2}d|\Pi|(z_1,z_2) \left|\lambda(t,z_1,z_2)\right|. \end{eqnarray*} The right-hand side is finite, thanks to point \ref{setD3} of Assumption \ref{setD} and the fact that $\lambda$ is uniformly bounded. Finally, by Cauchy-Schwarz and item \ref{pt3LemmaEta} of Lemma \ref{LemmaEta}, taking into account Notation \eqref{Egamma}, by similar arguments as \eqref{E4250}, we have \begin{eqnarray*} \left(A_3\right)^2 &\leq & |\Pi|(\C^2)\rho^S_T \int_{\C^2}d|\Pi|(z_1,z_2)\int_0^T d\rho^S_t \left|\lambda(t,z_1,z_2)\right|^2 \vert \gamma_t(z_1,z_2) \vert^2 \\ &\leq & |\Pi|(\C^2)\rho^S_T \int_{\C^2} d|\Pi|(z_1,z_2) (I_1(z_1,z_2) + I_2(z_1,z_2)), \end{eqnarray*} where $I_1(z_1,z_2)$ and $I_2(z_1,z_2)$ have been defined in \eqref{EI12}. We have already shown in \eqref{I_1Bounded} and \eqref{I_2Bounded} that $I_1$ and $I_2$ are bounded on $ supp\; \Pi$, hence $A_3 <\infty$. In conclusion, it follows indeed that $\widetilde y \in \Da$ and Hypothesis \ref{FSConditions1} of Corollary \ref{FSConditions} is verified. We define $(t,x,s) \mapsto z(t,x,s)$ so that $s^2 z(t,x,s) = \widetilde \a(y)(t,x,s)$. This gives \begin{equation} \label{zExpAdd} z(t,x,s) = \int_{\C^2} d \Pi(z_1, z_2) x^{z_1} s^{z_2-1} \lambda(t,z_1,z_2) \gamma_t(z_1,z_2), \; \forall t\in[0,T],\; x,s>0, \end{equation} Lemma \ref{LemmaZInt} below shows that \eqref{ERC43} is fulfilled and so Hypothesis \ref{FSConditions2} of Corollary \ref{FSConditions} is verified. \medskip We go on verifying Hypothesis \ref{FSConditions3} of Corollary \ref{FSConditions}, i.e. the validity of \eqref{E450} and \eqref{E450bis}. Condition \eqref{E450bis} is straightforward since $\lambda(T,\cdot,\cdot) = 1$. 
The second equality in \eqref{E450} takes place by definition of $z$. The first equality holds true by integrating \eqref{lambdaODE}, thanks to \eqref{CondLambda3}. This proves \ref{FSConditions3} of Corollary \ref{FSConditions}. Finally, Corollary \ref{FSConditions} implies that $(Y,Z,O)$ is a solution of the BSDE \eqref{FSasBSDE}, up to the proof of Lemma \ref{LemmaZInt} below. \end{proof} \begin{lemma} \label{LemmaZInt} Let $z$ be as in \eqref{zExpAdd}, where $\lambda, \gamma$ have been respectively defined in \eqref{lambda} and \eqref{Egamma}. We have $$ \E{\int_0^T \left|z (u, X_{u-}, S_{u-})\right|^2 S^2_{u-} \rho^S_{du} } < \infty. $$ In particular \eqref{ERC43} is fulfilled. \end{lemma} \begin{proof} First, let us show that \begin{equation} \label{eqCor37} \int_{\C^2} d|\Pi|(z_1,z_2) \int_0^T |\lambda(t,z_1,z_2)|^2 \rho_{dt}(z_1,z_2) < \infty. \end{equation} For this, using points \ref{pt3LemmaEta} and \ref{pt4LemmaEta} of Lemma \ref{LemmaEta}, together with \eqref{lambdaBounded} and \eqref{E4600}, we get \begin{eqnarray*} \int_0^T |\lambda(t,z_1,z_2)|^2 \rho_{dt}(z_1,z_2) & = & \int_0^T |\lambda(t,z_1,z_2)|^2 \frac{d\rho_{t}(z_1,z_2) }{d\rho_t^S}\rho_{dt}^S \\ &\leq & \int_0^T |\lambda(t,z_1,z_2)|^2 \left(c_2 - c_3 \frac{d {\rm Re}(\eta(z_1,z_2,t))}{d\rho^S_t}\right)\rho_{dt}^S \\ &\leq & c_2 e^{2c_1 \rho_{T}^S} \rho_{T}^S - c_3 \int_0^T {\rm Re}(\eta(z_1,z_2,dt))\exp \left( \int_t^T 2 {\rm Re}(\eta(z_1,z_2,du)) \right) \\ &\leq & c_2 e^{2c_1 \rho_{T}^S} \rho_{T}^S +\\ && c_3\sup_{(\xi_1,\xi_2) \in I_0+i\R^2} - \int_0^T {\rm Re}(\eta(\xi_1,\xi_2,dt))\exp \left( \int_t^T 2 {\rm Re}(\eta(\xi_1,\xi_2,du)) \right). \end{eqnarray*} Hence \eqref{eqCor37} is fulfilled. Using the Cauchy-Schwarz inequality, the Fubini theorem and point \ref{pt3LemmaEta} of Lemma \ref{LemmaEta}, we have \begin{eqnarray*} &&\E{\int_0^T \left| z (t, X_{t-}, S_{t-})\right|^2 S^2_{t-} \rho^S_{dt} } =\E{\int_0^T \left|\int_{\C^2} d \Pi(z_1, z_2) X_{t-}^{z_1} S_{t-}^{z_2} \lambda(t,z_1,z_2) \gamma_t(z_1,z_2)\right|^2 \rho^S_{dt}}\\ &\leq &|\Pi|(\C^2) \sup_{t\in[0,T],(a,b)\in I_0} \E{X_{t}^{2a} S_{t}^{2b}} \int_{\C^2} d |\Pi|(z_1, z_2) \int_0^T \left|\lambda(t,z_1,z_2) \gamma_t(z_1,z_2)\right|^2 \rho^S_{dt} \\ &\leq &|\Pi|(\C^2) \sup_{t\in[0,T],(a,b)\in I_0} \E{X_{t}^{2a} S_{t}^{2b}} \int_{\C^2} d |\Pi|(z_1, z_2) \int_0^T \left|\lambda(t,z_1,z_2) \right|^2 \rho_{dt}(z_1,z_2). \end{eqnarray*} The right-hand side is finite, thanks to \eqref{eqCor37}. \end{proof} One can prove that the \textbf{weak} F-S decomposition in Proposition \ref{FSExpAdd} is actually a strong F-S decomposition in the sense of Definition \ref{FSDefinition}. \begin{theorem} \label{TFS} Under Assumption \ref{setD}, the random variable $$h=\int_{\C^2} d \Pi(z_1, z_2) X_{T}^{z_1} S_{T}^{z_2}$$ admits an F-S decomposition \eqref{FSDec} where $h_0 = Y_0$ and $(Y,Z,O)$ is given in Proposition \ref{FSExpAdd}. Moreover, if $h$ is real-valued then the decomposition $(Y,Z,O)$ is real-valued and it is therefore the unique F-S decomposition. \end{theorem} \begin{remark} \label{RGeneraliz} This statement is a generalization of the results of \cite{gor2013variance} (and \cite{Hubalek2006}) to the case of hedging under basis risk. This yields a characterization of the hedging strategy in terms of the Fourier-Laplace transform and the moment generating function. \end{remark} \begin{proof} Since $\Pi$ is a finite measure, $h$ is square integrable.
Indeed, by Cauchy-Schwarz, \begin{equation} \label{Ehcarre} \E{|h|^2} \leq |\Pi|(\C^2) \int_{\C^2} \E{|X_T|^{2{\rm Re}(z_1)}|S_T|^{2{\rm Re}(z_2)}} d|\Pi|(z_1,z_2) \leq \left(|\Pi|(\C^2)\right)^2 \sup_{(a,b)\in I} \E{|X_T|^{a}|S_T|^b}, \end{equation} where $I$ is a bounded subset of $\R^2$ defined in Assumption \ref{setD}. By item \ref{setD1} of Assumption \ref{setD} and item \ref{propAddPt3} of Proposition \ref{propAdd}, the previous quantity is finite. By item \ref{R411Item4} of Remark \ref{R411} and by Remark \ref{RMS}, the real-valued F-S decomposition of any real valued square integrable $\cF_T$-measurable random variable is unique. As a consequence, if $h$ is real-valued then its F-S decomposition is also real-valued. In fact, if $(Y_0,Z,O)$ is an F-S decomposition of $h$, then $(\overline Y_0,\overline Z,\overline O)$ is also an F-S decomposition of $\overline h$ by item \ref{RFS_3} of Remark \ref{RFS}. Thus, by subtraction, $(\operatorname{Im}(Y_0), \operatorname{Im}(Z), \operatorname{Im}(O))$ is an F-S decomposition with real-valued triplet of the real-valued r.v. $\operatorname{Im}(h)=0$. By uniqueness $\operatorname{Im}(Y_0)$, $\operatorname{Im}(Z)$ and $\operatorname{Im}(O)$ are null and the decomposition $(Y_0,Z,O)$ is real valued. Now, let $(Y,Z,O)$ be defined as in Proposition \ref{FSExpAdd}. It remains to prove that $(Y_0,Z,O)$ is a strong (possibly complex) F-S decomposition in the sense of Definition \ref{FSDefinition}. For this we need to show items \ref{RBSDEFS_1},\ref{RBSDEFS_2},\ref{RBSDEFS_3} of Remark \ref{RBSDEFS}. Item \ref{RBSDEFS_1} has been the object of \eqref{Ehcarre}. We show below item \ref{RBSDEFS_2}, i.e. $\E{\int_0^T | Z_u|^2d\langle M^S \rangle_u} < \infty$ and $\E{\left(\int_0^T |Z_u|d\| V^S \|_u\right)^2} < \infty$. The first inequality is stated in Lemma \ref{LemmaZInt}. In order to prove the second one, we recall that, by Corollary \ref{CCS}, $$ dV^S _t = S_{t-} \kappa_{dt}(0,1) = S_{t-} \frac{d\kappa_{t}(0,1)}{d\rho^S_t} \rho^S_{dt}. $$ Consequently \begin{eqnarray*} \E{\left(\int_0^T |Z_u|d\| V^S \|_u\right)^2} &=& \E{\left(\int_0^T |Z_u| \left|\frac{d\kappa_{u}(0,1)}{d\rho^S_u}\right|S_{u-} \rho^S_{du}\right)^2} \\ &\leq & \int_0^T \left|\frac{d\kappa_{u}(0,1)}{d\rho^S_u}\right|^2 \rho^S_{du} \E{\int_0^T |Z_u|^2 S_{u-}^2 \rho^S_{du}}, \end{eqnarray*} which is finite by item \ref{R411Item3} of Remark \ref{R411} (which says that $(0,1) \in \cD$), taking into account Lemma \ref{LemmaZInt}. To end this proof, we need to show item \ref{RBSDEFS_3} of Remark \ref{RBSDEFS}. For this we use Proposition \ref{ExpAddSqIntgMg} for which we need to check conditions \ref{ExpAddSqIntgMg_1} and \ref{ExpAddSqIntgMg_2}. By item \ref{R411Item5} of Remark \ref{R411} we have $I_0 \subset D/2$ which constitutes item \ref{ExpAddSqIntgMg_1}. Item \ref{ExpAddSqIntgMg_2} is verified since condition \eqref{eqCor37} holds. Hence Proposition \ref{ExpAddSqIntgMg} implies that $ t\mapsto y(t,X_t,S_t) - \int_0^t \a(y)(u,X_{u-},S_{u-}) \rho^S_{du} $ is a square integrable martingale. \end{proof} \subsection{Diffusion processes} \label{S412} We set $\cO = \R\times E$, where $E=\R$ or $]0,\infty[$. In this Section we apply Corollary \ref{FSConditions} to the diffusion process $(X,S)$ modeled in Section \ref{E22}, whose dynamics is given by \eqref{expleDiff}. We are interested in the F-S decomposition of $h = g(X_T,S_T)$. We recall the assumption in that context. \begin{assumption}\ \label{ADiff} \begin{itemize} \item $b_X$, $b_S$, $\sigma_X$ and $\sigma_S$ are continuous and globally Lipschitz.
\item $g: \cO \rightarrow \R$ is continuous. \end{itemize} \end{assumption} We recall that $(X,S)$ solves the strong martingale problem related to $(\Da, \a, A)$ where $A_t = t$, $\Da = \mathcal{C}^{1,2}([0,T[\times \cO) \cap \mathcal{C}^{1}([0,T]\times \cO) $. For a function $y \in \Da$, obviously $\tilde y \in \Da$ and the operators $\a$ and $\widetilde{\a}$ are given by \begin{eqnarray*} \a(y) &=& \partial_t y + b_S \partial_s y + b_X \partial_x y + \frac{1}{2} \left\lbrace |\sigma_S|^2\partial_{ss} y + |\sigma_X|^2 \partial_{xx} y + 2 \langle \sigma_S, \sigma_X\rangle \partial_{sx} y \right\rbrace, \\ \widetilde{\a}(y) &=& |\sigma_S|^2 \partial_s y + \langle \sigma_S, \sigma_X\rangle \partial_x y. \end{eqnarray*} Condition \ref{FSConditions3} of Corollary \ref{FSConditions} translates into \begin{eqnarray} \label{pdeFSdiff} b_S z &=& \partial_t y + b_S \partial_s y + b_X \partial_x y + \frac{1}{2} \left\lbrace |\sigma_S|^2\partial_{ss} y + |\sigma_X|^2 \partial_{xx} y + 2 \langle \sigma_S, \sigma_X\rangle \partial_{sx} y \right\rbrace, \nonumber \\ y(T,.,.) &=& g(.,.), \\ |\sigma_S|^2 z &=& |\sigma_S|^2 \partial_s y + \langle \sigma_S, \sigma_X\rangle \partial_x y. \nonumber \end{eqnarray} If, moreover, $\frac{1}{|\sigma_S|}$ is locally bounded, then we have \begin{equation} \label{E496a} \left\{ \begin{aligned} \partial_t y + B \partial_x y + \frac{1}{2} \left( |\sigma_S|^2\partial_{ss} y + |\sigma_X|^2 \partial_{xx} y + 2 \langle \sigma_S, \sigma_X\rangle \partial_{sx} y \right) =0, \\ y(T,.,.) = g(.,.) \end{aligned} \right. \end{equation} and \begin{equation} \label{E496b} z = \partial_s y + \dfrac{\langle \sigma_S, \sigma_X\rangle}{|\sigma_S|^2} \partial_x y, \end{equation} where \begin{equation} \label{EB} B=b_X - b_S \dfrac{\langle \sigma_S, \sigma_X\rangle}{|\sigma_S|^2}. \end{equation} $z$ is then locally bounded since $\sigma_S, \sigma_X$ and $\frac{1}{\vert \sigma_S\vert}$ are locally bounded and because $y \in \Da$. \begin{proposition} \label{WeakFSDiffusion} We suppose the validity of Assumption \ref{ADiff} and that $\vert \sigma_S \vert$ is always strictly positive. If $(y,z)$ is a solution of the system \eqref{E496a} and \eqref{E496b}, such that $y \in \Da$, then $(Y,Z,O)$ is a solution of the BSDE \eqref{FSasBSDE}, where $$ Y_t = y(t, X_{t}, S_{t}), \quad Z_t = z(t, X_{t}, S_{t}),\quad O_t = Y_t - Y_0 - \int_{0}^{t} Z_u d S_u. $$ \end{proposition} \begin{proof} It follows from Corollary \ref{FSConditions}, for which we need to check conditions \ref{FSConditions1}, \ref{FSConditions2} and \ref{FSConditions3}. Indeed, since $y, \tilde y \in \Da$, Condition \ref{FSConditions1} holds; since $z$ is locally bounded, by item \ref{RC33_2} of Remark \ref{RC33}, Condition \ref{FSConditions2} is fulfilled. Condition \ref{FSConditions3} has been the object of the considerations above the statement of the Proposition. \end{proof} The result above yields the weak F-S decomposition for $h$. In order to show that $(Y_0, Z, O)$ constitutes a true F-S decomposition, we need to make use of Remark \ref{RBSDEFS}. First we introduce another assumption. \begin{assumption}\ \label{AFSDiff} Suppose that the process $(X,S)$ takes values in $\cO$ and that the following conditions hold. \begin{enumerate}[label=\roman*)] \item $g\in C^1$ such that $g$, $\partial_x g$ and $\partial_s g$ have polynomial growth. \item $B$ is globally Lipschitz.
\item $\partial_x B$, $\partial_s B$, $\partial_x \sigma_X$, $\partial_s \sigma_X,$ $\partial_x \sigma_S$ and $\partial_s \sigma_S$ exist, are continuous and have polynomial growth. \item $\sigma_S$ never vanishes. \end{enumerate} \end{assumption} \begin{theorem} \label{FSDiffusion} Suppose that Assumptions \ref{ADiff} and \ref{AFSDiff} are fulfilled and suppose the existence of a function $y:[0,T]\times \cO \rightarrow \R$ such that \begin{equation} \label{solY} y \in C^0([0,T]\times \cO) \cap C^{1,2}([0,T[\times \cO) \text{ verifies the PDE } \eqref{E496a} \text{ and has polynomial growth.} \end{equation} Then the F-S decomposition \eqref{FSDec} of $h=g(X_T,S_T)$ is provided by $(h_0,Z,O)$ where, $h_0 = Y_0$, $$Y_t=y(t,X_t,S_t),\; Z_t=z(t,X_t,S_t), O_t = Y_t-Y_0-\int_0^t Z_u dS_u,$$ and $z:[0,T] \times \cO \rightarrow \R$ is given by \eqref{E496b}. \end{theorem} \begin{proof} Let $y:[0,T]\times \cO \rightarrow \R$ verifying \eqref{solY} and $z$ defined by \eqref{E496b}. In order to show that the triplet given in Proposition \ref{WeakFSDiffusion} yields a true F-S decomposition, we need to show items \ref{RBSDEFS_1}, \ref{RBSDEFS_2}, \ref{RBSDEFS_3} of Remark \ref{RBSDEFS}. First notice that the random variable $g(X_T,S_T)$ is square integrable, because $g$ has polynomial growth and $X$ and $S$ admit all moments, see Remark \ref{Rallmoments}. So \ref{RBSDEFS_1} is verified. In view of verifying item \ref{RBSDEFS_2} of Remark \ref{RBSDEFS} we recall that $$ \a(id) = b_S, \widetilde\a(id) = \vert \sigma_S \vert^2, A_t \equiv t \text{ and } \; z = \partial_s y + \dfrac{\langle \sigma_S, \sigma_X\rangle}{|\sigma_S|^2} \partial_x y.$$ Indeed, since $y$ has polynomial growth, it is forced to be unique since \cite[Theorem 7.6, chapter 5]{Karatzas1991Brownian} implies that \begin{equation} \label{StochRep} y(t,x,s) = \E{g(X^{t,x,s}_T, S^{t,x,s}_T)}, \end{equation} where $(\widetilde X=X^{t,x,s}, \widetilde S=S^{t,x,s})$ is a solution of $$ d \left( \begin{smallmatrix} \widetilde X_r \\ \widetilde S_r \end{smallmatrix} \right) = \Sigma(r,\widetilde X_r, \widetilde S_r) d\widetilde W_r + \left( \begin{smallmatrix} B(r, \widetilde X_r, \widetilde S_r) \\ 0 \end{smallmatrix} \right)dr, $$ with $\widetilde X_t=x, \; \widetilde S_t=s$, where $\widetilde W=(\widetilde W^1,\widetilde W^2)$ is a standard two-dimensional Brownian motion, and $$ \Sigma = \left( \begin{smallmatrix} \sigma_{X,1} & \sigma_{X,2} \\ \sigma_{S,1} & \sigma_{S,2} \end{smallmatrix} \right). $$ We recall that $B$ has been defined in \eqref{EB}. By \eqref{StochRep}, a straightforward adaptation of \cite[Theorem 5.5]{Friedman1975StochasticVol1} yields that the partial derivatives $\partial_x y$ and $\partial_s y$ exist and are continuous on $[0,T] \times \cO$ and they have polynomial growth. Using \eqref{E496b}, we have $z b_S= b_S\partial_s y + b_X \partial_x y - B \partial_x y.$ Now, since $\partial_x y$ and $\partial_s y$ have polynomial growth, and by assumption $b_S$, $b_X$ and $B$ have linear growth, we get that $z b_S$ has polynomial growth. This gives, by Remark \ref{Rallmoments}, $$\E{\left(\int_0^T \left|z b_S\right|(t,X_t, S_t) dt\right)^2} < \infty.$$ On the other hand, using \eqref{E496b} and Cauchy-Schwarz, we have \begin{eqnarray*} |z \sigma_S| &=& ||\sigma_S|\partial_s y + \frac{\langle \sigma_X,\sigma_S\rangle}{|\sigma_S|} \partial_x y | \leq |\sigma_S| |\partial_s y| + |\sigma_X||\partial_x y |. 
\end{eqnarray*} Since $\sigma_X$, $\sigma_S$ have linear growth and $\partial_x y$ and $\partial_s y$ have polynomial growth, we get that $z \sigma_S$ has polynomial growth, which implies, by Remark \ref{Rallmoments}, that $ \E{\int_0^T \left|z \sigma_S\right|^2(t,X_t, S_t) dt} < \infty. $ Consequently, item \ref{RBSDEFS_2} of Remark \ref{RBSDEFS} is fulfilled. In order to show the last item \ref{RBSDEFS_3}, taking into account Remark \ref{RE22}, we need to prove that \begin{eqnarray*} u\mapsto M^Y_u&=& \int_0^u \partial_x y(r,X_r,S_r) \left(\sigma_{X,1}(r,X_r,S_r) d W^1_r+\sigma_{X,2}(r,X_r,S_r) d W^2_r\right) \\ &+& \int_0^u \partial_s y(r,X_r,S_r) \left(\sigma_{S,1}(r,X_r,S_r) d W^1_r+\sigma_{S,2}(r,X_r,S_r) d W^2_r\right) \end{eqnarray*} is a square integrable martingale. This is due to the fact that $\partial_x y$ and $\partial_s y$ have polynomial growth, and that $\sigma_X$ and $\sigma_S$ have linear growth, and Remark \ref{Rallmoments}, which implies that $$ \E{\int_0^T \lbrace(\partial_x y(r,X_r,S_r))^2 |\sigma_X(r,X_r,S_r)|^2 +(\partial_s y(r,X_r,S_r))^2 |\sigma_S(r,X_r,S_r)|^2 \rbrace dr} < \infty. $$ This concludes the proof of Theorem \ref{FSDiffusion}. \end{proof} Below we show that, under Assumptions \ref{ADiff} and \ref{AFSDiff}, Condition \eqref{solY} is not really restrictive. \begin{proposition} \label{PFSDiffusion} We assume the validity of Assumptions \ref{ADiff} and \ref{AFSDiff}. Moreover, we suppose the validity of one of the three items below. \begin{enumerate}[label=\arabic*)] \item \label{RegularCase} We set $\cO = \R^2$. Suppose that the second (partial, with respect to $(x,s)$) derivatives of $B$, $\sigma_X,$ $\sigma_S$ and $g$ exist, are continuous and have polynomial growth. \item \label{UEllip} We set $\cO = \R^2$. We suppose $B$, $\sigma_X$, $\sigma_S$ to be bounded and there exist $\lambda_1,\lambda_2>0$ such that $$ \lambda_1 |\xi|^2 \leq (\xi_1,\xi_2)C(t,x,s)(\xi_1,\xi_2)^T \leq \lambda_2 |\xi|^2, \; \forall \xi=(\xi_1,\xi_2)\in \R^2, $$ where $C(t,x,s) = \left( \begin{smallmatrix} |\sigma_X|^2(t,x,s) & \langle \sigma_X, \sigma_S\rangle(t,x,s) \\ \langle \sigma_X, \sigma_S\rangle(t,x,s) & |\sigma_S|^2(t,x,s) \end{smallmatrix} \right)$. \item \label{BScase} (Black-Scholes case). We suppose $\cO = ]0,+\infty[^2$. \begin{eqnarray*} b_S(t,x,s) = s\hat b_S, && \sigma_{S}(t,x,s) = (s \hat \sigma_{S,1} ,\; s \hat\sigma_{S,2}), \\ b_X(t,x,s) = x\hat b_X, && \sigma_{X}(t,x,s) = (x \hat \sigma_{X,1} ,\; x\hat \sigma_{X,2}), \end{eqnarray*}where $\hat b_S$, $\hat b_X$, $\hat\sigma_{S,1}$, $\hat\sigma_{S,2}$, $\hat\sigma_{X,1}$ and $\hat\sigma_{X,2}$ are constants, such that $\langle \hat\sigma_X, \hat\sigma_S\rangle < |\hat\sigma_X||\hat\sigma_S|$. \end{enumerate} We have the following results. \begin{description} \item{i)} There is a (unique) strict solution $y$ of \eqref{E496a} of class $C^{1,2}([0,T[ \times \cO) \cap C^{0}([0,T] \times \cO)$ with polynomial growth. \item{ii)} The F-S decomposition \eqref{FSDec} of $h = g(X_T,S_T)$ is provided by $(h_0,Z,O)$ where $h_0=Y_0$ and $(Y,Z,O)$ fulfills $$Y_t=y(t,X_t,S_t),\; Z_t=z(t,X_t,S_t) \text{ and } O_t = Y_t-Y_0-\int_0^t Z_u dS_u,$$ where $z$ is given by \eqref{E496b}. \end{description} \end{proposition} \begin{remark} \label{RT420} We will show below that, under the hypotheses of Proposition \ref{PFSDiffusion}, conclusion i) holds, i.e. there is a function $y$ fulfilling \eqref{solY}.
We observe that, by the proof of Theorem \ref{FSDiffusion}, if such a $y$ exists then it admits the probabilistic representation \eqref{StochRep} and so it is necessarily the unique $C^{1,2}([0,T[ \times \cO) \cap C^{0}([0,T] \times \cO)$, with polynomial growth, solution of \eqref{E496a}. \end{remark} \begin{proof} We proceed to discuss the existence of the function $y$ mentioned in Remark \ref{RT420}. We now distinguish the three cases mentioned above. Suppose first that item \ref{RegularCase} holds. The function $y$ defined by \eqref{StochRep} is a continuous function, by the fact that the flow $(\widetilde X, \widetilde S)$ is continuous in all variables and by Remark \ref{Rallmoments}, taking into account the Lebesgue dominated convergence theorem. \cite[Theorem 6.1]{Friedman1975StochasticVol1} states that $y$ belongs to $C^{1,2}([0,T] \times \cO)$, and it verifies the PDE \eqref{E496a}. \cite[Theorem 5.5]{Friedman1975StochasticVol1} says in particular that $y$ has polynomial growth. In that case conclusion i) is established. Under the assumption described in item \ref{UEllip}, conclusion i) can be obtained by simply adapting the proof of \cite[Theorem 12, p.25]{Friedman1983PDE}. Indeed, according to \cite[Theorem 8, p.19]{Friedman1983PDE} there is a fundamental solution $\Gamma: \{(t_1,t_2), 0\leq t_1<t_2\leq T\}\times \R^2\times \R^2 \rightarrow \R$ such that \begin{equation} \label{EGamma} \Gamma(t_1,t_2; \gamma, \xi) \leq \frac{1}{a_1(t_2-t_1)} \exp\left(-\frac{|\gamma-\xi|^2}{a_1(t_2-t_1)} \right), \end{equation} where $a_1$ is a positive constant. Now, by \cite[Theorem 12, p.25]{Friedman1983PDE}, the function $y$ defined by \begin{equation} \label{SolByFond} y(t,x,s) = \int_{\R^2} \Gamma(t,T; (x,s),(\xi_1,\xi_2)) g(\xi_1, \xi_2) d\xi_1d\xi_2, \end{equation} is a strict solution of \eqref{E496a}, in particular it belongs to $\mathcal{C}^{1,2}([0,T[\times \R^2) \cap \mathcal{C}^{0}([0,T]\times \R^2)$. Since $g$ has polynomial growth, there exist $a_2>0$, $p>1$ such that, $\forall x,s \in\R$, \begin{equation} \vert g(x,s) \vert \leq a_2(1+|x|^p +|s|^p). \label{E4102} \end{equation} Thus, by \eqref{SolByFond}, \eqref{EGamma} and \eqref{E4102}, for $x,s\in\R$ and $0\leq t\leq T$, we have \begin{equation*} \vert y(t,x,s)\vert \leq \frac{a_2}{a_1(T-t)} \int_{\R^2} (1+|\xi_1|^p +|\xi_2|^p) \exp\left(-\frac{\vert x-\xi_1\vert^2 + \vert s-\xi_2\vert^2}{a_1 (T-t)} \right) d\xi_1d\xi_2. \end{equation*} So there is a constant $C_1(p, T)>0$ such that $ \vert y(t,x,s)\vert \leq C_1(p, T) \left( 1 + \E{\vert x+G_1 \vert^p+\vert s+G_2 \vert^p}\right), $ where $G=(G_1,G_2)$ is a two dimensional centered Gaussian vector with covariance matrix equal to $\frac{a_1(T-t)}{2}$ times the identity matrix. Since $p>1$, there is a constant $C_2(p, T)$ such that \begin{eqnarray*} \vert y(t,x,s)\vert &\leq & C_2(p,T) \left( 1 + |x|^p + |s|^p + \E{\vert G_1 \vert^p+\vert G_2 \vert^p} \right) \\ &\leq & C_3(p,T) \left( 1 + |x|^p + |s|^p \right), \end{eqnarray*} where $C_3(p,T)$ is another positive constant. In conclusion, the solution $y$ given by \eqref{SolByFond} has polynomial growth. We now discuss the Black-Scholes case \ref{BScase}, showing that, also in that case, there is a function $y$ such that \eqref{solY} is fulfilled. First notice that the uniform ellipticity condition in \ref{UEllip} is not fulfilled for this dynamics, so we consider a logarithmic change of variable.
For a function $y\in \Da$, we introduce the function $\hat y:[0,T] \times \R^2 \rightarrow \R$ defined by $ \hat y(t, x, s) = y(t, e^{x}, e^{s}),\; \forall t\in[0,T],\; x,s\in\R. $ By inspection we can show that $y$ is a solution of \eqref{E496a} if and only if $\hat y$ fulfills \begin{eqnarray}\label{PDE-BS} 0&=& \partial_t \hat y + \left(\hat b_X-\hat b_S \dfrac{\langle \hat\sigma_S, \hat\sigma_X\rangle}{|\hat\sigma_S|^2} - \frac{1}{2}|\hat\sigma_X|^2\right) \partial_x \hat y - \frac{1}{2}|\hat \sigma_S|^2 \partial_s \hat y + \nonumber \\ && +\frac{1}{2} \left( |\hat \sigma_S|^2\partial_{ss} \hat y+ |\hat \sigma_X|^2 \partial_{xx} \hat y + 2 \langle \hat \sigma_S, \hat \sigma_X\rangle \partial_{sx} \hat y \right), \\ \hat y(T,.,.) &=& \hat g(.,.), \nonumber \end{eqnarray} where $\hat g(x,s)=g(e^x,e^s),\;\forall x,s \in\R$. Notice that the PDE problem \eqref{PDE-BS} has constant coefficients and it verifies the uniform ellipticity condition in \ref{UEllip}. Moreover, since $g$ has polynomial growth, there exist $c>0$, $p>1$ such that $g(x,s) \leq c( 1+x^p+s^p),\; \forall x,s>0$. Hence $\hat g(x,s) \leq c( 1+e^{px}+e^{ps}),\; \forall x,s\in\R$. Again, by simple adaptation of the proof of \cite[Theorem 12, p.25]{Friedman1983PDE}, we observe that equation \eqref{PDE-BS} admits a solution $\hat y$ in $C^{1,2}([0,T[\times\R^2) \cap C^0([0,T]\times\R^2)$, such that $\hat y(t,x,s) \leq K( 1+e^{px}+e^{ps}),\; \forall x,s\in\R,$ where $K>0$. This yields that $y$ has polynomial growth, since $y(t, x, s) = \hat y(t, \log(x), \log(s)),\; \forall t\in[0,T], x,s>0$, so $y(t,x,s) \leq K( 1+x^{p}+s^{p}),\; \forall t\in[0,T], x,s>0.$ This concludes the proof of conclusion i). Conclusion ii) is now a direct consequence of Theorem \ref{FSDiffusion} together with condition i). \end{proof} \begin{remark} \label{RHulley} The last item of Proposition \ref{PFSDiffusion} allows us to recover the results already found in \cite{Hulley2008}, by replacing \begin{eqnarray*} \hat b_S = (\mu_S-r), && \hat \sigma_{S} = (\sigma_S ,\; 0), \\ \hat b_X = (\mu_U-r), && \hat \sigma_{X} = (\rho \sigma_U ,\; \sqrt{1-\rho^2} \sigma_U), \end{eqnarray*}where $\mu_S$, $\mu_U$, $r$, $\sigma_S$ and $\sigma_U$ are constants. \end{remark} \begin{appendices} \numberwithin{equation}{section} \section{Proof of Proposition \ref{PropMarkov}} \label{PropMarkovProof} \setcounter{equation}{0} \begin{proof} Let $f\in E$ and set $\widetilde f(x) = \frac{f(x)}{1+x^2}, \; \forall x\in \R$. Condition \eqref{derFlow} implies, by the mean value theorem, that there exists a constant $c(t)$ such that $ \E{\left| X_t^{0,x}- X_t^{0,y} \right|^2} \leq c(t) \left|x-y\right|^2, \; \forall x,y\in \R. $ Then, by the Garsia-Rodemich-Rumsey criterion, see for instance \cite[Section 3]{BarlowYor1982}, there exists a r.v. $\Gamma_t$ such that $\E{\Gamma_t^2}<\infty$ and $\forall x,y \in\R$ \begin{equation} \label{XHolder} \left| X_t^{0,x}- X_t^{0,y} \right| \leq \Gamma_t \left|x-y\right|^\alpha, \; \text{for}\; 0 < \alpha <\frac{1}{2}, \end{equation} possibly up to a modified version of the flow. This implies in particular that for $x\in\R$ \begin{eqnarray*} \frac{|X_t^{0,x}|^2}{1+x^2} & \leq & \frac{2}{1+x^2} \left( |X_t^{0,0}|^2 + |X_t^{0,x} - X_t^{0,0}|^2 \right) \\ & \leq & \frac{2}{1+x^2} \left( |X_t^{0,0}|^2 + \left|\Gamma_t\right|^2 |x|^{2\alpha} \right) \leq 2 \left( |X_t^{0,0}|^2 + \left|\Gamma_t\right|^2 \right). \end{eqnarray*} Hence \begin{equation} \label{XMoment2} \sup_{x\in\R} \E{\frac{|X_t^{0,x}|^2}{1+x^2}} < \infty.
\end{equation} Consequently, for $x\in \R$, we have $$ \frac{\left|P_tf(x)\right|}{1+x^2} = \frac{\left|\E{f(X_t^{0,x})}\right|}{1+x^2} \leq \norm{f}_E \frac{1+\E{|X_t^{0,x}|^2}}{1+x^2} \leq \norm{f}_E \sup_{\xi \in\R} \frac{1+\E{|X_t^{0,\xi}|^2}}{1+\xi^2}. $$ The right-hand side is finite, thanks to \eqref{XMoment2}, so that \begin{equation} \label{PtBounded} \norm{P_tf}_E \leq \norm{f}_E \sup_{\xi \in\R} \frac{1+\E{|X_t^{0,\xi}|^2}}{1+\xi^2}. \end{equation} After we will have shown that $\widetilde {P_tf}$ is also uniformly continuous, \eqref{PtBounded} will also imply that $P_tf \in E$ and that $P_t$ is a bounded linear operator. Therefore it remains to show that $\widetilde{ P_tf}$ is uniformly continuous. For this, let $x,y\in\R$. We have \begin{equation} \label{EI1I2} \frac{P_tf(x)}{1+x^2}-\frac{P_tf(y)}{1+y^2} = \E{\frac{f(X^{0,x}_t)}{1+x^2} - \frac{f(X^{0,y}_t)}{1+y^2}} = \E{I_1+I_2}, \end{equation} where \begin{eqnarray*} I_1 &=& \left(\widetilde f(X^{0,x}_t) - \widetilde f(X^{0,y}_t) \right) \frac{1+(X^{0,x}_t)^2}{1+x^2} \\ I_2 &=& \widetilde f(X^{0,y}_t) \left( \frac{1+(X^{0,x}_t)^2}{1+x^2} - \frac{1+(X^{0,y}_t)^2}{1+y^2}\right). \end{eqnarray*} Let $\epsilon>0$. By uniform continuity of $\widetilde f$, there exists $\delta_1>0$ such that \begin{equation} \label{fTildeUCont} \forall a,b\in \R,\; |a-b|\leq \delta_1 \Rightarrow \left| \widetilde f(a) - \widetilde f(b) \right| < \epsilon. \end{equation} Since $\displaystyle \lim_{M\to \infty} \E{|I_1| \1_{|\Gamma_t| \ge M}}=0$, there exists $M_1>0$ such that \begin{equation} \label{E216} \E{|I_1| \1_{|\Gamma_t| \ge M_1}} < \epsilon. \end{equation} We fix $0 < \alpha < \frac{1}{2}$ and we choose $\delta_2 = \left(\frac{\delta_1}{M_1}\right)^{1/\alpha}$. Taking into account \eqref{XHolder} and \eqref{fTildeUCont}, for $|x-y|<\delta_2$ we have \begin{eqnarray*} \E{|I_1| \1_{|\Gamma_t|<M_1}} & \leq & \E{\frac{1+(X^{0,x}_t)^2}{1+x^2} \left(\widetilde f(X^{0,x}_t) - \widetilde f(X^{0,y}_t) \right) \1_{\left| X_t^{0,x}- X_t^{0,y} \right|<\delta_1}} \nonumber\\ & < & \sup_{\xi\in\R} \E{\frac{1 + |X_t^{0,\xi}|^2}{1+\xi^2}} \epsilon. \end{eqnarray*} The right-hand side is finite thanks to \eqref{XMoment2}. Consequently, if $|x-y|<\delta_2$, then \eqref{E216} implies that \begin{equation} \label{I_1} \E{|I_1|} < A_1 \epsilon, \end{equation} where $A_1= 1+ \sup_{\xi\in\R} \E{1 + \frac{|X_t^{0,\xi}|^2}{1+\xi^2}}$. Concerning $I_2$, we define \begin{equation} \label{FI_2} F(\omega, z) = \frac{1+|X_t^{0,z}(\omega)|^2}{1+z^2}, \omega \in \Omega, z \in \R. \end{equation} Since $z\mapsto F(\cdot, z)$ is differentiable in $L^2(\Omega)$, by mean value theorem we get $$ \E{|I_2|} = |x-y| \E{\left|\widetilde f(X_t^{0,y}) \int_0^1 \partial_z F(\cdot, a x + (1-a) y) da \right|} \leq |x-y| \norm{f}_E \sup_{z}\E{\left|\partial_z F(\cdot, z)\right|}. $$ It remains to estimate the previous supremum. We have for $z\in\R$ $$ \partial_z F(\cdot, z) = 2\frac{X^{0,z}_t \partial_z X^{0,z}_t}{1+z^2} - 2 z \frac{1+|X^{0,z}_t|^2}{(1+z^2)^2}. $$ So by Cauchy-Schwarz we get $$ \E{|\partial_z F(\cdot, z)|} \leq 2\left( \frac{\E{|X^{0,z}_t|^2}}{1+z^2} \frac{\E{|\partial_z X^{0,z}_t|^2}}{1+z^2} \right)^{1/2} + 2 \frac{|z|}{1+z^2} \frac{1+\E{|X^{0,z}_t|^2}}{1+z^2} \leq A_2, $$ where $ A_2 = 2\left( \displaystyle\sup_{z}\frac{\E{|X^{0,z}_t|^2}}{1+z^2} \displaystyle\sup_{z}\E{|\partial_z X^{0,z}_t|^2} \right)^{1/2} + \left( 1 + \displaystyle\sup_{z}\frac{\E{|X^{0,z}_t|^2}}{1+z^2} \right). 
$ By \eqref{derFlow} and \eqref{XMoment2}, $A_2$ is finite and we get \begin{equation} \label{I_2} \E{|I_2|} \leq A_2 \norm{f}_E |x-y|. \end{equation} Combining inequalities \eqref{I_1} and \eqref{I_2}, \eqref{EI1I2} gives the existence of $\delta>0$ such that \begin{equation*} |x-y|<\delta \Rightarrow \left|\frac{P_tf(x)}{1+x^2}-\frac{P_tf(y)}{1+y^2}\right| < \epsilon, \end{equation*} so that the function $x\mapsto \frac{P_tf(x)}{1+x^2}$ is uniformly continuous. In conclusion we have proved that $P_tf\in E$. The fact that $P_t$ is a bounded linear operator follows as a consequence of \eqref{PtBounded}. \end{proof} \section{Proof of Theorem \ref{thLevyGen}} \label{appA_BSDE} \setcounter{equation}{0} We recall that the semigroup $P$ is here given by $ P_t f(x) = \E{f(x + X_t)}, x \in \R, t \ge 0$ and $X$ is a square integrable L\'evy process vanishing at zero. The classical theory of semigroups for L\'evy processes defines the semigroup $P$ on the set $C_0$ of continuous functions vanishing at infinity, equipped with the sup-norm $\norm{u}_\infty = \sup_x{|u(x)|}$, cf. for example \cite[Theorem 31.5]{SatoBook}. On $C_0$, the semigroup $P$ is strongly continuous, with norm $\norm{P}=1$, and its generator $L_0$ is given by \begin{eqnarray} \label{L0App} L_0f(x) = \int \left(f(x+y)-f(x)-yf^\prime(x)\1_{|y|<1}\right)\nu(dy), \ f \in C^2_0. \end{eqnarray} Indeed, \cite[Theorem 31.5]{SatoBook} shows that $C^2_0 \subset D(L_0)$, where $C^2_0$ is the set of functions $f\in C^2$ such that $f$, $f^\prime$ and $f^{''}$ vanish at infinity. To prove Theorem \ref{thLevyGen}, which concerns the infinitesimal generator of the semigroup $P$ defined on the set $E$ (cf. \eqref{setE}) related to a square integrable pure jump L\'evy process, we adapt the classical theory. Since we consider a space $(E, \norm{.}_E)$, different from the classical one, i.e. $(C_0, \norm{.}_\infty)$, we need to show that $(P_t)$ is still a strongly continuous semigroup. \begin{proposition} \label{LevyPtStongCont} Let $X$ be a square integrable L\'evy process. Then the semigroup $(P_t) : E \rightarrow E$ is strongly continuous. \end{proposition} \begin{proof} The idea of the proof is an adaptation of the proof in \cite[Theorem 31.5]{SatoBook}. Let $f\in E$ and let $\widetilde{f}$ be defined by $\widetilde{f}(x)=\frac{f(x)}{1+x^2}, \; \forall x\in \R$. We evaluate, for $t>0$ and $x \in \R$, $$ \dfrac{P_tf(x) - f(x)}{1+x^2} = \E{\widetilde{f}(x+X_t)-\widetilde{f}(x)} + \E{\widetilde{f}(x+X_t)\frac{X_t^2+2x X_t}{1+x^2} }. $$ So \begin{equation} \label{EB11} \norm{P_tf - f}_E \leq \sup_{x\in \R} \left| \E{ \widetilde{f}(x+X_t)-\widetilde{f}(x) } \right| + \sup_{x\in \R} \left| \E{ \widetilde{f}(x+X_t)\frac{X_t^2+2x X_t}{1+x^2} } \right|. \end{equation} First, notice that $$ \left| \E{\widetilde{f}(x+X_t)\frac{X_t^2+2x X_t}{1+x^2}} \right| \leq \norm{f}_E \E{ \frac{X_t^2+2|x X_t|}{1+x^2} } \leq \norm{f}_E \left(\E{X_t^2} + \E{|X_t|} \right), $$ hence $$\sup_{x\in \R} \left| \E{ \widetilde{f}(x+X_t)\frac{X_t^2+2x X_t}{1+x^2}} \right| \leq \norm{f}_E \left(\E{X_t^2} + \E{|X_t|} \right).$$ Since $X$ is a square integrable L\'evy process, $\E{X_t^2} =c_2t + c_1^2t^2$ where $c_1, c_2$ were defined in \eqref{LevySqInt12}. Hence, the right-hand side of the inequality above goes to zero as $t$ goes to zero. Now we prove that the first term $\sup_{x\in \R}\left|\E{\widetilde{f}(x+X_t)-\widetilde{f}(x)}\right|$ in the right-hand side of \eqref{EB11} goes to zero as well. Let $\epsilon>0$ be a fixed positive real.
Since $\widetilde{f}$ is uniformly continuous, there is $\delta >0$ such that $\forall x,y \; |x-y|<\delta \; \Rightarrow \; |\widetilde{f}(x)-\widetilde{f}(y)|<\frac{\epsilon}{2}.$ Moreover, since $X$ is continuous in probability, $$\exists t_0>0, \; \text{such that} \; \forall t<t_0,\; \P(|X_t|>\delta) < \frac{\epsilon}{4 \norm{f}_E}.$$ For all $x\in\R$, $t<t_0$ we have \begin{align*} \begin{split} \left| \E{\widetilde{f}(x+X_t)-\widetilde{f}(x)} \right|&\leq \E{\left|\widetilde{f}(x+X_t)-\widetilde{f}(x)\right| \1_{\{|X_t|\leq\delta\}} } + \E{\left|\widetilde{f}(x+X_t)-\widetilde{f}(x)\right| \1_{\{|X_t|>\delta\}} }\\ &\leq \frac{\epsilon}{2} + 2 \norm{f}_E \P(|X_t|>\delta) \leq \epsilon. \end{split} \end{align*} Since the inequality above is valid for every $x\in\R$, it follows that $ \sup_{x\in \R}\left|\E{\widetilde{f}(x+X_t)-\widetilde{f}(x) }\right| \xrightarrow{t\to 0} 0.$ This concludes the proof that $P$ is a strongly continuous semigroup. \end{proof} \begin{remark} \label{RContraction} Notice that the semigroup $(P_t)$ is not a contraction. In fact, if $f\in E, t>0$, then \begin{equation} \label{ENormPt} \norm{P_t f}_E = \sup_{x\in \R} \left| \E{\frac{f(x+X_t)}{1+x^2}}\right|. \end{equation} Let $f_0(x)=1+x^2$ and denote again $c_1 = \E{X_1}$ and $c_2 = {\rm Var} (X_1)$. Obviously $f_0 \in E$, $\norm{f_0}_E=1$ and \begin{equation} \label{ENormPt1} \norm{P_t f_0}_E = \sup_{x\in \R} \E{\frac{1+(x+X_t)^2}{1+x^2}} = 1+\sup_{x\geq 0}\frac{2x|c_1|t + c_2t + c_1^2t^2}{1+x^2} \leq 1 + (\vert c_1 \vert + c_2) t +c_1^2 t^2, \end{equation} where we have used $\frac{2x}{1+x^2}\leq 1$. On the other hand, evaluating the supremum at $x=0$ gives $\norm{P_t f_0}_E \geq 1 + c_2 t + c_1^2 t^2 > 1$, unless $X$ vanishes identically. Hence $(P_t)$ cannot be a contraction, since $ \norm{P_t} \ge \norm{P_t f_0}_E > 1. $ \end{remark} \bigskip Nevertheless, for $f\in E$, \eqref{ENormPt} gives $ \norm{P_t f}_E \leq \norm{f}_E \norm{P_t f_0}_E. $ By \eqref{ENormPt1} this implies that $ \norm{P_t} \leq 1+(|c_1| + c_2)t + c_1^2t^2. $ So, there exists a positive real $\omega>0$ such that $ \norm{P_t} \leq e^{\omega t}. $ Semigroups verifying the latter inequality are called {\bf quasi-contractions}, see \cite{PazyBook}. For instance, \cite[Corollary 3.8]{PazyBook} implies that \begin{equation} \label{HilleYosidaQuasiContr} \forall \lambda>\omega,\; \lambda I - L \; \text{is invertible}. \end{equation} At this point we show that the space $E^2_0$, defined in \eqref{SetE2_0}, is a subset of $D(L)$ and that formula \eqref{L0App} remains valid in $E^2_0$. This will be done by adapting a technique described in \cite[Theorem 31.5]{SatoBook}, where it is stated that $C^2_0$ is included in $D(L_0)$. The main tool used for the proof of \cite[Theorem 31.5]{SatoBook} is the small time asymptotics \begin{equation} \label{EAsymp} \lim_{t \to 0} \frac{1}{t} \E{ g(X_t)} = \int g(x) \nu(dx), \end{equation} which holds for bounded continuous functions $g$ vanishing on a neighborhood of the origin, see \cite[Corollary 8.9]{SatoBook}. This result has been extended to a class of unbounded functions by \cite[Theorem 1.1]{Figueroa2008small}. Equation \eqref{EAsymp} is used in \cite[Proposition 2.3]{Figueroa2008small} to prove that the quantity $\displaystyle\lim_{t \to 0} \dfrac{P_t g - g}{t}(x)$ converges point-wise, under some suitable conditions on the function $g$. We state a similar lemma below. \begin{lemma} \label{appLem1} Let $f\in E^2_0$. For all $x\in\R,$ the quantity \begin{equation} \label{PointWiseCv} \lim_{t \to 0} \dfrac{P_t f-f}{t}(x) \end{equation} exists and equals the right-hand side of \eqref{L0App}.
\end{lemma} \begin{remark} \label{RappLem1}\ \begin{enumerate} [label=\arabic*)] \item To be self-contained, we give below a simple proof of Lemma \ref{appLem1}, in the case when $X$ is a square integrable pure jump process. \item Later we will need to show that the point-wise convergence \eqref{PointWiseCv} holds according to the norm of $E$. \end{enumerate} \end{remark} \begin{proof} Let $f\in E^2_0$. First, we verify that the integral \begin{equation} \label{proofL} \int\left(f(x+y)-f(x)-y f^\prime(x)\1_{|y|<1}\right)\nu(dy) \end{equation} is well-defined for all $x\in \R$, taking into account that $\int y^2 \nu(dy) < \infty$ by \eqref{LevySqInt}. In fact, by Taylor expansion and since $f \in E^2_0$, for every $x\in \R$ there exist $a, b\geq 0$ such that, for all $y\in \R$ \begin{eqnarray*} |f(x+y)-f(x)|\1_{|y|\geq 1} &\leq& a (y^2 + 1) \1_{|y|\geq 1},\\ |f(x+y)-f(x)-f^\prime(x)y|\1_{|y|<1} &\leq& b y^2 \1_{|y|<1}. \end{eqnarray*} Let $t>0, x\in \R$. By Taylor expansion and the Fubini theorem, recalling that $P_t f(x)= \E{f(x + X_t)}$, we have $$ \dfrac{P_t f-f}{t}(x) = c_1 f^\prime(x) + \int_0^1 (1-a)\frac{1}{t} \E{f^{''}(aX_t+x)X_t^2} da.$$ By abuse of notation, we denote by $L_0f(x)$ the integral \eqref{proofL}. Taking into account \eqref{LevySqInt12} we have \begin{eqnarray} \label{GenLevyTaylor} L_0f(x) &=& c_1 f^\prime(x) + \int\left(f(x+y)-f(x)-yf^\prime(x)\right)\nu(dy) \nonumber\\ &=& c_1 f^\prime(x) + \int_0^1 (1-a) \int_\R y^2 f^{''}(ay+x) \nu(dy) da. \end{eqnarray} Hence, it remains to show that ($x$ being fixed) \begin{align} \label{EB4bis} \begin{split} \dfrac{P_t f-f}{t}(x) - L_0f(x) &= \int_0^1 (1-a) \Big(\frac{1}{t}\E{X_t^2f^{''}(aX_t+x)} - \int_\R y^2 f^{''}(ay+x) \nu(dy)\Big) da \\ & \xrightarrow[t \to 0]{} 0. \end{split} \end{align} For $a \in [0,1]$, we denote $g(y) = y^2 f^{''}(ay+x)$. We have $g(y) \underset{y \to 0}{\sim} y^2 f^{''}(x).$ If $ f^{''}(x) \neq 0$, then \cite[Theorem 1.1]{Figueroa2008small} (ii) implies that \begin{equation} \label{EFig} \lim_{t \to 0}\frac{1}{t}\E{g(X_t)} = \int_\R g(y) \nu(dy). \end{equation} If $ f^{''}(x) = 0$, then $g(y) = o(y^2)$ and \eqref{EFig} is still valid by \cite[Theorem 1.1]{Figueroa2008small} (i). The validity of \eqref{EB4bis} then follows by the Lebesgue dominated convergence theorem, taking into account that $ f^{''} $ is bounded. \end{proof} As observed in a similar case in \cite[Remark 2.4]{Figueroa2008small}, we will prove that the point-wise convergence \eqref{PointWiseCv} established in Lemma \ref{appLem1} also holds in the strong sense. For this purpose, we introduce the linear subspace \begin{equation*} \widetilde{E} = \Big\{ f\in \cC \; \text{such that }\; \widetilde{f}:= x\mapsto \frac{f(x)}{1+x^2}\; \text{vanishes at infinity}\;\Big\} \end{equation*} of $E$. It is easy to show that $\widetilde E$ is closed in $E$, so that it is a Banach subspace of $E$. \begin{lemma} \label{appLem2} Let $f, \; g\in \widetilde{E}$ be such that \begin{equation} \label{Ldiaz} \lim_{t \to 0} \dfrac{P_t f-f}{t}(x) = g(x), \; \forall x\in\R. \end{equation} Then $f\in D(L)$ and $ L f = g$. \end{lemma} \begin{proof} We first introduce a restriction $\widetilde{P}$ of the semigroup $P$ to the linear subspace $\widetilde E$. By the Lebesgue dominated convergence theorem and the fact that $\frac{1+ (X_t+x)^2}{1+ x^2} \le 2(\vert X_t \vert^2 + 1),$ one can show that $P_t f \in \widetilde E$ for any $f \in \widetilde E$, $t \ge 0$. Hence $(\widetilde P_t)$ is a semigroup on $\widetilde E$; we denote by $\widetilde{L}$ its infinitesimal generator.
As in \cite[Lemma 31.7]{SatoBook}, we denote by $L^\#f=g$, the operator defined by the equation \eqref{Ldiaz} for $f, g \in \widetilde E$ and by $D(L^\#)$ its domain, i.e. the set of functions $f$ for which \eqref{Ldiaz} exists. Then $L^\#$ is an extension of $\widetilde L$. Fix $q>|c_1|+ c_2$. We prove first that \begin{equation} \label{Eqlf} \forall f\in D(L^\#) \quad (qI - L^\#)f=0 \Rightarrow f=0. \end{equation} Let $f\in D(L^\#)$ such that $(qI - L^\#)f=0$. We denote $f^-=-(f\wedge 0)$ and $f^+=f \vee 0$. Suppose that $f^+ \neq 0$. Since $\widetilde {f^+}$ is continuous and vanishing at infinity, there exists $x_1$ such that $\frac{f^+(x_1)}{1+ x_1^2}=\displaystyle\max_{x} \dfrac{f^+(x)}{1+x^2} >0$. Moreover $f(x_1) = f^+(x_1)$ . Then $$ \frac{\E{f(x_1+X_t)-f(x_1)}}{t} \leq \frac{1}{t} \left( f(x_1)\frac{\E{1+(x_1+X_t)^2}}{1+x_1^2}- f(x_1)\right). $$ Passing to the limit when $t \rightarrow 0$ it follows $ L^\#f(x_1) \leq f(x_1) (|c_1|+ c_2). $ Then $(q-|c_1|- c_2)f(x_1) \leq 0, $ which contradicts the fact that $f(x_1)>0$. Hence, $f^+ = 0$. With similar arguments, we can show that $f^- = 0$ and so $f=0$, which proves \eqref{Eqlf}. By restriction, $(\widetilde P_t)$ fulfills $\Vert \widetilde P_t \Vert \le e^{\omega t}$, in particular it is a quasi-contraction semigroup, so by \eqref{HilleYosidaQuasiContr}, we can certainly choose $q>\max(|c_1|+ c_2,\omega)$, so that $qI-\widetilde{L}$ is invertible and $R(qI-\widetilde{L})=\widetilde E$.\\ We observe that $ D(\widetilde L) \subset D(L^\#)$. Let now $f\in D(L^\#)$; then $(qI-L^\#)f \in \widetilde E =R(qI-\widetilde{L})$. Consequently, there is $v\in D(\widetilde{L})$ such that $(qI-L^\#)f = (qI-\widetilde{L})v$. So, $(qI-L^\#)(f-v)=0$. By \eqref{Eqlf}, $(qI-L^\#)$ is injective, so $f=v$ and $f\in D(\widetilde{L})$. Consequently $\widetilde L f$ is given by $g$ defined in \eqref{Ldiaz}. Finally, the fact that $D(\widetilde{L}) \subset D(L)$ and $\widetilde{L}$ is a restriction of $L$ allow to conclude the proof of Lemma \ref{appLem2}. \end{proof} We continue the proof of Theorem \ref{thLevyGen} making use of Lemmas \ref{appLem1} and Lemma \ref{appLem2}.\\ First, let us prove that $E^2_0 \subset \widetilde E$. Indeed by Taylor expansion, we have, for $f\in E_0^2$ $$\frac{f(x)}{1+x^2} =\frac{f(0)}{1+x^2} + \frac{x}{1+x^2} f^\prime(0) + \frac{x^2}{1+x^2} \int_0^1 (1-\alpha) f''(x\alpha) d\alpha.$$ Since $\lim_{x\rightarrow\infty} f''(x\alpha)=0$ for all $\alpha\in ]0,1[$, then by Lebesgue theorem, we have that $\lim_{x\rightarrow\infty} \frac{f(x)}{1+x^2}=0$, so $f\in \widetilde E$. By Lemma \ref{appLem1}, it follows \begin{equation*} \lim_{t \to 0} \dfrac{P_t f-f}{t}(x) = L_0f(x), \; \forall x\in\R, \end{equation*} where $L_0$ is given in \eqref{proofL}. In order to apply Lemma \ref{appLem2}, it remains to show that $L_0f \in\widetilde E$. Using relation \eqref{GenLevyTaylor}, for $x\in\R$ we get $$ \frac{L_0f(x)}{1+x^2}= c_1 \frac{f^\prime(x)}{1+x^2} + \int_0^1 (1-a) \int_\R y^2 \frac{f^{''}(ay+x)}{1+x^2} \nu(dy) da. $$ Since $f\in E^2_0$, then $f''$ is bounded and $f^\prime$ has linear growth. So, the fact that $\int_\R y^2 \nu(dy) <\infty$ implies indeed $\lim_{x\to \infty}\frac{L_0f(x)}{1+x^2}=0$ and $L_0f\in \widetilde E$. Finally, Lemma \ref{appLem2} implies that $E_0^2 \subset D(L)$ and for $f\in E^2_0 $, $Lf$ is given by \eqref{L_Levy}. 
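To make Lemma \ref{appLem1} concrete, the following purely illustrative numerical sketch (not part of the proofs above) checks the point-wise limit \eqref{PointWiseCv} for a toy square integrable pure jump L\'evy process: a compound Poisson process with intensity $\lambda$ and jumps $\pm 1$, for which the compensator term $y f^\prime(x)\1_{|y|<1}$ in \eqref{L0App} vanishes, $c_1=0$, and $P_tf(x)$ can be evaluated exactly by conditioning on the number of jumps. The intensity, the test function and the evaluation point are arbitrary illustrative choices.
\begin{verbatim}
# Toy check of Lemma appLem1: (P_t f - f)/t -> L_0 f pointwise as t -> 0.
# X is a compound Poisson process with intensity lam and jumps +-1 (prob. 1/2 each),
# so nu = (lam/2)(delta_{+1} + delta_{-1}), c_1 = 0 and the compensator term vanishes.
import math

lam = 2.0                      # jump intensity (illustrative value)
f   = lambda x: math.atan(x)   # smooth test function, f'' bounded and vanishing at infinity

def P_t_f(x, t, kmax=40):
    # E[f(x + X_t)], computed exactly by conditioning on the number of jumps N_t
    total = 0.0
    for k in range(kmax + 1):
        p_k = math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
        # a sum of k independent +-1 jumps equals 2j - k with binomial weights
        mean_f = sum(math.comb(k, j) * 0.5 ** k * f(x + 2 * j - k) for j in range(k + 1))
        total += p_k * mean_f
    return total

def L0_f(x):
    # generator (L0App) for this Levy measure
    return 0.5 * lam * (f(x + 1.0) + f(x - 1.0)) - lam * f(x)

x = 0.3
for t in (1e-1, 1e-2, 1e-3):
    print(t, (P_t_f(x, t) - f(x)) / t, L0_f(x))
\end{verbatim}
As $t$ decreases, the printed difference quotient approaches $L_0f(x)$, as stated in Lemma \ref{appLem1}.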
\end{appendices} \noindent {\bf ACKNOWLEDGEMENTS}: The authors are grateful to both referees, the associate editor and the editor in chief for helping them to improve the first version of the paper. Financial support was provided by the ANR Project MASTERIE 2010 BLAN--0121--01. The second-named author also partially benefited from the support of the ``FMJH Program Gaspard Monge in optimization and operation research'' (Project 2014-1607H). \bibliographystyle{plainnatMod}
\section{Lattice action and operators}\label{app:A} The gluons are discretized with the Wilson plaquette action, while the doublet of mass-degenerate quarks with the O$(a)$-improved Wilson action\footnote{The correction proportional to $b_{\rm g}$ is neglected.}~\cite{Sheikholeslami:1985ij,Luscher:1996sc} with its coefficient $c_\mathrm{sw}$ determined non-perturbatively~\cite{Jansen:1998mx}. We are interested in the flavour non-singlet ($r,s=1,2$; $r\neq s$) fermion bilinears \begin{equation} P^{rs} = \overline \psi_{\; r} \gamma_5 \psi_s\;, \qquad A_0^{rs} = \overline \psi_r \gamma_0 \gamma_5 \psi_s\; . \end{equation} The corresponding O$(a)$-improved renormalised operators are given by \begin{eqnarray} P_{\rm R}^{rs} & = & Z_{\rm P}\, (1+ ({\bar b}_{\rm P} + {\tilde b}_{\rm P})\, a m)\, P^{rs}\nonumber\; ,\\[0.125cm] A_{0,\rm R}^{rs} & = & Z_{\rm A}\, (1+ ({\bar b}_{\rm A} + {\tilde b}_{\rm A})\, a m)\, \left\{A_0^{rs} + c_{\rm A}\, \frac{a}{2}\,(\partial_0^* + \partial_0)\, P^{rs} \right\}\; ,\label{eq:AR} \end{eqnarray} where $\partial_0$ and $\partial_0^*$ are the forward and the backward lattice derivatives respectively. The coefficient $c_{\rm A}$ has been determined non-perturbatively for the $N_f=2$ theory in Ref.~\cite{DellaMorte:2005se}, while the $b$-coefficients are known in perturbation theory up to one loop only~\cite{Sint:1997jx,Sint:1997dj}. The multiplicative renormalization constants $Z_{\rm A}$ and $Z_{\rm P}$ have been computed non-perturbatively in Ref.~\cite{Fritzsch:2012wq}. For the lattices considered in this paper, the numerical values of the improvement coefficients and of the renormalization constants are summarized in Table~\ref{tab:impr}. \begin{table} \small \begin{center} \begin{tabular}{@{\extracolsep{0.0cm}}ccccccccccc} \hline $\beta$& run & $c_{\rm SW}$ & $c_{\rm A}$ & ${\tilde b}_{\rm P}$ & ${\tilde b}_{\rm A}$ & ${\bar b}_\mu$& $Z_{\rm P}$ & $Z_{\rm A}$\\ \hline 5.2 &all&2.01715 & -0.06414 & 1.07224 & 1.07116 & -0.576 & 0.5184(53) &0.7703(57)\\[0.125cm] 5.3 &all&1.90952 & -0.05061 & 1.07088 & 1.06982 & -0.575 & 0.5184(53) &0.7784(52)\\[0.125cm] 5.5 &N5&1.751496 & -0.03613 & 1.06830 & 1.06728 & -0.572 & 0.5184(53) &0.7932(43)\\[0.125cm] 5.5 &N6,O7&1.751500 & -0.03613 & 1.06830 & 1.06728 & -0.572 & 0.5184(53) &0.7932(43)\\[0.125cm] \hline \end{tabular} \end{center} \caption{\label{tab:impr} Improvement coefficients and renormalization constants for the $\beta$ values considered in the paper.} \end{table} The matching factors between $Z_{\rm P}$ in the Schr\"odinger functional scheme and the renormalization-group invariant $Z^{\rm RGI}_{\rm P}$ (with the overall normalization convention of Ref.~\cite{Fritzsch:2012wq}) and $Z^{{\rm \overline{MS\kern-0.05em}\kern0.05em}}_{\rm P}(2~\mbox{GeV})$ are \begin{equation} Z^{\rm RGI}_{\rm P} = \frac{1}{1.308(16)}\, Z_{\rm P}\;, \qquad Z^{{\rm \overline{MS\kern-0.05em}\kern0.05em}}_{\rm P}(2~\mbox{GeV})= \frac{1}{0.740(12)}\, Z^{\rm RGI}_{\rm P}\; . \end{equation} Using the PCAC relation, we can define \begin{equation} m(x_0)=\frac{\frac{1}{2}(\partial_0 +\partial_0^*) f_\mathrm{AP}(x_0)+ c_{\rm A} a \partial_0^* \partial_0 f_\mathrm{PP}(x_0)}{2 f_\mathrm{PP}(x_0)}\; , \label{eq:m} \end{equation} where \begin{eqnarray} f_\mathrm{PP}(x_0)& = &-a^3\sum_{\vec{x}} \langle P^{12}(x) P^{21}(0)\rangle\; ,\nonumber\\[0.125cm] f_\mathrm{AP}(x_0)& = & -a^3\sum_{\vec{x}} \langle A_0^{12}(x) P^{21}(0) \rangle\; . 
\label{eq:2pt} \end{eqnarray} At asymptotically large values of $x_0$, the mass $m(x_0)$ has a plateau which defines the value of $m$ to be used in Eqs.~(\ref{eq:AR}). From this the renormalized quark mass is obtained as \begin{equation}\label{eq:mR} m_{\rm R}= \frac{Z_{\rm A}\, (1+ ({\bar b}_{\rm A} + {\tilde b}_{\rm A}) \, a m)} {Z_{\rm P}\, (1+ ({\bar b}_{\rm P} + {\tilde b}_{\rm P})\, a m )}\, m\; . \end{equation} The bare pseudoscalar decay constant is given by~\cite{DelDebbio:2007pz} \begin{equation} {\cal F}_{\pi} = 2 m \frac{G_{\pi}}{M^2_{\pi}}\; , \end{equation} where $G_{\pi}$ is extracted from the behaviour of the correlator $f_\mathrm{PP}(x_0)$ at asymptotically large values of $x_0$ \begin{equation} f_\mathrm{PP}(x_0) = \frac{G^2_{\pi}}{M_{\pi}} e^{-M_{\pi} x_0}\; . \end{equation} Thanks to Eq.~(\ref{eq:AR}), the pseudoscalar decay constant is finally given by \begin{equation}\label{eq:Fps} F_{\pi} = Z_{\rm A}\, (1+ ({\bar b}_{\rm A} + {\tilde b}_{\rm A})\, a m)\; {\cal F}_{\pi}\; . \end{equation} \section{Quark masses, pion masses and decay constants\label{app:mmpif}} On all ensembles in Table~\ref{tab:ens} we have computed the two-point functions of the flavour non-singlet bilinears operators in Eqs.~(\ref{eq:m}) and (\ref{eq:2pt}). They have been estimated by using 10 to 20 $U(1)$ noise sources located on randomly chosen time slices. The bare quark mass $m(x_0)$ in Eq.~(\ref{eq:m}) has a plateau for large enough $x_0$ over which we average. The pion mass $M_{\pi}$ and the bare pion decay constant ${\cal F}_{\pi}$ are extracted from $f_\mathrm{PP}(x_0)$ and the quark mass following Ref.~\cite{Fritzsch:2012wq}. In particular we determine the region $x_0 \in [x^{\rm min}_0; T - x^{\rm min}_0]$ where we can neglect the excited state contribution by first fitting the pseudoscalar two-point function with a two-exponential fit \begin{equation}\label{eq:twoexp} f_\mathrm{PP}(x_0) = d_1 \big[e^{-E_1 x_0} + e^{-E_1(T-x_0)}\big] + d_2\big[e^{-E_2 x_0} + e^{-E_2(T-x_0)}\big] \end{equation} in a range where this function describes the data well for the given statistical accuracy. We then determine $x^{\rm min}_0$ to be the smallest value of $x_0$ where the statistical uncertainty on the effective mass $m_{\rm eff}(x_0) = -\frac{{\rm d}}{{\rm d x_0}} \log[f_\mathrm{PP}(x_0)] $ is four times larger than the contribution of the excited state to $m_{\rm eff}(x_0)$ as given by the result of the fit. In the second step only the first term of Eq.~(\ref{eq:twoexp}) is fitted to the data restricted to this region, and $E_1$ and $d_1$ are determined. The pion mass and its decay constant are then fixed to be $M_{\pi}=E_1$ and ${\cal F}_{\pi} = 2 \sqrt{d_1} m/M_{\pi}^{3/2}$ respectively. 
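For illustration, the two-step fitting procedure just described can be sketched as follows. The snippet is not part of the analysis code: the lattice time extent, the quark mass $am$, the amplitudes and the noise level are hypothetical placeholders (not the values of Table~\ref{tab:spect}), and the selection of $x^{\rm min}_0$ is a simplified proxy of the criterion defined above.
\begin{verbatim}
# Illustrative sketch of the extraction of M_pi and F_pi from f_PP(x0).
import numpy as np
from scipy.optimize import curve_fit

T, am = 96, 0.003                      # hypothetical time extent and PCAC quark mass
x0 = np.arange(1, T)

def two_state(x, d1, E1, d2, E2):
    return (d1 * (np.exp(-E1 * x) + np.exp(-E1 * (T - x)))
            + d2 * (np.exp(-E2 * x) + np.exp(-E2 * (T - x))))

def one_state(x, d1, E1):
    return d1 * (np.exp(-E1 * x) + np.exp(-E1 * (T - x)))

rng   = np.random.default_rng(1)
truth = two_state(x0, 2.0e-3, 0.066, 8.0e-3, 0.30)     # synthetic stand-in for f_PP(x0)
err   = 0.01 * truth
fpp   = truth + rng.normal(0.0, err)

# step 1: two-exponential fit, used only to locate the excited-state contamination
p2, _ = curve_fit(two_state, x0, fpp, p0=(1e-3, 0.07, 1e-2, 0.4), sigma=err)

# step 2: x0_min = first point where the excited state is negligible w.r.t. the error
excited = np.abs(p2[2] * np.exp(-p2[3] * x0))
x0_min  = int(x0[np.argmax(excited < 0.25 * err)])

# step 3: one-state fit in [x0_min, T - x0_min] and F_pi = 2 sqrt(d1) m / M_pi^(3/2)
mask = (x0 >= x0_min) & (x0 <= T - x0_min)
(d1, Mpi), _ = curve_fit(one_state, x0[mask], fpp[mask], p0=(p2[0], p2[1]), sigma=err[mask])
Fpi_bare = 2.0 * np.sqrt(d1) * am / Mpi**1.5
print(f"x0_min = {x0_min}, aM_pi = {Mpi:.4f}, a*F_pi(bare) = {Fpi_bare:.5f}")
\end{verbatim}
The bare decay constant obtained this way still has to be renormalized with $Z_{\rm A}$ as in Eq.~(\ref{eq:Fps}).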
\begin{table} \small \begin{center} \begin{tabular}{@{\extracolsep{0.0cm}}clll} \hline id &~~~~$am$ &~~~$a M_{\pi}$ &~~~~$a F_{\pi}$ \\ \hline A3 & 0.00985(6) & 0.1883(8) & 0.04583(37) \\ A4 & 0.00601(6) & 0.1466(8) & 0.04200(35) \\ A5 & 0.00444(6) & 0.1263(11) & 0.04023(34)\\ B6 & 0.00321(4) & 0.1073(8) & 0.03883(31)\\ \hline E5 & 0.00727(3) & 0.1454(5) & 0.03803(29) \\ F6 & 0.00374(3) & 0.1036(5) & 0.03479(29) \\ F7 & 0.002721(20) & 0.0886(4) & 0.03331(24) \\ G8 & 0.001395(18) & 0.0638(4) & 0.03162(23) \\ \hline N5 & 0.00576(3) & 0.1085(8) & 0.02816(21) \\ N6 & 0.003444(15) & 0.0837(3) & 0.02589(19) \\ O7 & 0.002131(9) & 0.06574(23) & 0.02475(16) \\ \hline \end{tabular} \end{center} \caption{\label{tab:spect} The bare quark mass $a m$ as defined in Eq.~(\ref{eq:m}), the pion mass $a M_{\pi}$ and pion decay constant $a F_{\pi}$ as defined in Eq.~(\ref{eq:Fps}). } \end{table} The numerical results for all lattices are reported in Table~\ref{tab:spect}, and those for the pseudoscalar decay constant and for the cubic root of the ratio $M^2_{\pi}/(2m_R F)$ are shown in Fig.~\ref{fig:FpiMpi} versus $y=M_{\pi}^2/(4 \pi F_{\pi})^2$. We fit $F_{\pi}$ to the function \begin{equation} a F_{\pi} = (a F)\, \{1 - y \ln(y) + b y\}\; , \end{equation} where $b$ is common to all lattice spacings, restricted to the points with $M_{\pi}<\, 400$~MeV (see left plot of Fig.~\ref{fig:FpiMpi}). This function rests on the Symanzik expansion and is compatible with Wilson ChPT (WChPT) at the NLO \cite{Aoki:2009ri}. To estimate the systematic error, we performed a number of fits to different functions: linear in $y$ with $M_{\pi}<\, 400$~MeV, and next-to-next-to-leading order in ChPT with all data included. As a final result we quote $a F=0.0330(4)(8)$, $0.0287(3)(7)$ and $0.0211(2)(5)$ at $a=0.075$, $0.065$ and $0.048$~fm respectively, where the second (systematic) error takes into account the spread of the results from the various fits. By fixing the scale from $F_K$, and by performing a continuum-limit extrapolation we obtain our final result $F=85.8(7)(20)$~MeV. We further compute the ratio $M_{\pi}^2/(2m_R F)$ for all data points. We fit the data restricted to $M_{\pi}<\, 400$~MeV to \begin{equation}\label{eq:Mpicont} \Big[\frac{M_{\pi}^2}{2m_R F}\Big]^{1/3}=(s_0+s_1 (aF)^2)\{1+ \frac{y}{6} \ln(y) +d\, y\}\; , \end{equation} where $s_0$, $s_1$ and $d$ are common to all lattice spacings, and the fit function is again the one resting on the Symanzik expansion and compatible with WChPT at the NLO. Also in this case we checked several variants although the data look very flat up to the heaviest mass. From the fits we get $s_0=3.06(3)(4)$, where the systematic error is determined as for $F$. This translates to a value for the renormalisation-group-invariant dimensionless ratio of $[\Sigma^{\rm RGI}]^{1/3}/F =2.77(2)(4)$, which in turn corresponds to $[\Sigma^{\rm \overline{MS\kern-0.05em}\kern0.05em}(2\, \mbox{GeV})]^{1/3} =263(3)(4)$~MeV if again $F_K$ is used to set the scale. \begin{figure} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/fpi_plot.eps} \end{minipage} \hspace{20mm} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/Mpi2.eps} \end{minipage} \caption{Left: the pseudoscalar decay constant $a F_\pi$ versus \mbox{$y=M_\pi^2/(4 \pi F_\pi)^2$}. Right: The ratio $M_\pi^2/(2 m_R F)$ versus $y$. 
The bands are the result of a combined fit, see main text.} \label{fig:FpiMpi} \end{figure} \section{Mode number in chiral perturbation theory }\label{app:modenumber:chpt} When chiral symmetry is spontaneously broken, the mode number can be computed in the chiral effective theory. At the NLO it reads~\cite{Giusti:2008vb} (see also Ref.~\cite{Giusti:2008paa}) \begin{equation}\label{eq:RNLO} \nu^{\rm nlo}(\Lambda_{\rm R},m_{\rm R}) = \frac{2 \Sigma \Lambda_{\rm R} V}{\pi} \Big\{ 1 + \frac{m_{\rm R} \Sigma}{(4\pi)^2 F^4}\Big[3\, \bar l_6 + 1 - \ln(2) - 3 \ln\Big(\frac{\Sigma m_{\rm R}}{F^2 \bar\mu^2}\Big) + f_\nu\left(\frac{\Lambda_{\rm R}}{m_{\rm R}}\right)\Big]\Big\}\; , \end{equation} where \begin{equation} f_\nu(x) = x \left[{\rm arctan}(x) - \frac{\pi}{2}\right] - \frac{1}{x} {\rm arctan}(x) - \ln(x) - \ln(1+x^2)\; . \end{equation} The constants $F$ and $\bar l_6$ are, respectively, the pion decay constant in the chiral limit and a SU$(3|1)$ low-energy effective coupling renormalized at the scale $\bar\mu$. The formula in Eq.~(\ref{eq:RNLO}) has some interesting properties: \begin{itemize} \item for $x \rightarrow \infty$ \begin{equation} f_\nu(x) \longrightarrow_{\!\!\!\!\!\!\!\!\!\!\!\!_{x\rightarrow\infty}} -3 \ln(x)\;, \end{equation} and therefore at fixed $\Lambda_{\rm R}$ the mode number has no chiral logs when $m_{\rm R}\rightarrow 0$; \item since in the continuum the operator $D^\dagger_m D_m$ has a threshold at $\alpha=m^2$, the mode number must satisfy \begin{equation} \lim_{\Lambda_{\rm R}\rightarrow 0} \nu^{\rm nlo}(\Lambda_{\rm R},m_{\rm R}) = 0\; , \end{equation} a property which is inherited by the NLO ChPT formula; \item in the chiral limit $\nu^{\rm nlo}(\Lambda_{\rm R},m_{\rm R})/\Lambda_{\rm R}$ {\it becomes independent on} $\Lambda_{\rm R}$. This is an accident of the $N_f=2$ ChPT theory at NLO \cite{Smilga:1993in}; \item the $\Lambda_{\rm R}$-dependence in the square brackets on the r.h.s. of (\ref{eq:RNLO}) is parameter-free. Since $\frac{m_{\rm R} \Sigma^2}{(4\pi)^2 F^4}>0$, the behaviour of the function $f_\nu(x)$ implies that $\nu^{\rm nlo}(\Lambda_{\rm R},m_{\rm R})/\Lambda_{\rm R}$ is a decreasing function of $\Lambda_{\rm R}$ at fixed $m_{\rm R}$, and no ambiguity is left due to free parameters. \end{itemize} \vspace{0.125cm} \noindent At the NLO the effective spectral density defined in Eq.~(\ref{eq:discD}) reads \begin{equation}\label{eq:sigmatilde} \ensuremath{\tilde\rho_{\R}} ^{\rm nlo} = \Sigma \Big\{ 1 + \frac{m_{\rm R} \Sigma}{(4\pi)^2 F^4} \Big[3\, \bar l_6 + 1 - \ln(2) - 3 \ln\Big(\frac{\Sigma m_{\rm R}}{F^2 \bar\mu^2}\Big) + \tilde g_\nu\left(\frac{\Lambda_{1,\rm R}}{m_{\rm R}},\frac{\Lambda_{2,\rm R}}{m_{\rm R}}\right) \Big]\Big\}\, , \end{equation} where \begin{equation} \tilde g_\nu\left(x_1,x_2\right) = \frac{f_\nu(x_1)+f_\nu(x_2)}{2} + \frac{1}{2}\,\frac{x_1+x_2}{x_2-x_1}\,\Big[f_\nu(x_2)-f_\nu(x_1)\Big]\; . \end{equation} The quantity ${\ensuremath{\tilde\rho_{\R}} }^{\rm nlo}$ inherits the same peculiar properties of $\nu^{\rm nlo}(\Lambda_{\rm R},m_{\rm R})/\Lambda_{\rm R}$ at NLO: at fixed $\Lambda_{1,\rm R}$ and $\Lambda_{2,\rm R}$ it has no chiral logarithms when $m_{\rm R}\rightarrow 0$, it is independent from $\Lambda_{1,\rm R}$ and $\Lambda_{2,\rm R}$ in the chiral limit, and at non-zero quark mass it is a decreasing parameter-free (apart the overall factor) function of $(\Lambda_{1,\rm R} + \Lambda_{2,\rm R})/2$. It is very weakly dependent on $(\Lambda_{1,\rm R}-\Lambda_{2,\rm R})$ in the range we are interested in. 
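For orientation, the NLO expressions above are straightforward to evaluate numerically. The sketch below is only an illustration of how $f_\nu$ and $\tilde g_\nu$ behave and is not part of the analysis code; the parameter values mirror the quantitative example given just below in the text.
\begin{verbatim}
# Evaluation of f_nu and g~_nu entering (eq:RNLO) and (eq:sigmatilde).
import math

def f_nu(x):
    return (x * (math.atan(x) - math.pi / 2) - math.atan(x) / x
            - math.log(x) - math.log(1.0 + x * x))

def g_nu_tilde(x1, x2):
    return (0.5 * (f_nu(x1) + f_nu(x2))
            + 0.5 * (x1 + x2) / (x2 - x1) * (f_nu(x2) - f_nu(x1)))

Sigma, F, m_sea = 260.0**3, 85.0, 10.0           # MeV^3, MeV, MeV

pref = Sigma / ((4.0 * math.pi)**2 * F**4)       # ~ 0.00213 MeV^-1
var  = m_sea * pref * (g_nu_tilde(20/10, 25/10) - g_nu_tilde(40/10, 55/10))
print(pref, var)                                 # variation of a few percent (~ 0.047)
\end{verbatim}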
To have a quantitative idea of the $(\Lambda_{1,\rm R} + \Lambda_{2,\rm R})/2$ dependence of $\ensuremath{\tilde\rho_{\R}} ^{\rm nlo}$ we can choose $\Sigma=(260~\mbox{MeV})^3$, $F=85$~MeV, $m^{\rm sea}_{\rm R}=10$~MeV, $\Lambda_{1,\rm R}=20,40$~MeV, $\Lambda_{2,\rm R}=25,55$~MeV to obtain \begin{equation} \frac{\Sigma}{(4\pi)^2 F^4} = 0.00213~\mbox{MeV}^{-1}\; , \quad 0.0213\cdot\left[ \tilde g_\nu\Big(\frac{20}{10},\frac{25}{10}\Big) - \tilde g_\nu\Big(\frac{40}{10},\frac{55}{10}\Big)\right] = 0.0467\, . \end{equation} For light values of the quark masses the variations are rather mild , i.e. of the order of few percent. The next-to-next-to leading corrections in $\ensuremath{\tilde\rho_{\R}} $ are of the form ${\cal O}(\Lambda_{\rm R}^2, m_{\rm R} \Lambda_{\rm R},m_{\rm R}^2)$. They are expected to spoil some of the peculiar properties of the NLO formula. In the chiral limit the ${\cal O}(\Lambda_{\rm R}^2)$ corrections can induce a $\Lambda_{\rm R}$-dependence, and the ${\cal O}(m_{\rm R} \Lambda_{\rm R})$ can change the parameter-free dependence on $\Lambda_{\rm R}$ within the square brackets on the r.h.s.~of Eq.~(\ref{eq:sigmatilde}). \subsection{Finite volume effects}\label{sec:finvol} Finite volume effects in the mode number were computed in the chiral effective theory at the NLO in Refs.~\cite{Giusti:2008vb,Giusti:2008paa} (see also \cite{Necco:2011vx}). They are given by \begin{eqnarray} \left(\frac{\Delta \nu_{V}}{\nu}\right)^{\rm{nlo}} & = & \frac{\Sigma}{(4\pi)^2 F^4} \sum_{\{n_1,\dots,n_4\}}\,\!\!\!\!\!\!\!\! '\, \lim_{\epsilon\rightarrow 0} \left\{\frac{2}{\Lambda_{\rm R}} {\rm Im} \Big[F_{-2}\left(\frac{\Sigma q_n^2}{4F^2},i\Lambda_{\rm R}+m_{\rm R}+\epsilon\right)\Big] - \right. \nonumber \\[0.25cm] && \left. \frac{m_{\rm R}}{\Lambda_{\rm R}}{\rm Im}\Big[F_{-1} \Big(\frac{\Sigma q_n^2}{2F^2},i\Lambda_{\rm R}+\epsilon\Big)\Big]+ {\rm Re}\Big[F_{-1}\Big(\frac{\Sigma q_n^2}{2F^2},i\Lambda_{\rm R}+\epsilon\Big)\Big]\; \right\}\;, \label{eq:chpt:finvol} \end{eqnarray} where \begin{equation}\displaystyle\label{eq:Fnu} F_\nu(b,z) = 2\, \left(\frac{b}{z}\right)^{\nu/2}\, K_\nu(2 \sqrt{b z})\; , \end{equation} with ${\rm Re}\, b>0$, ${\rm Re}\, z>0$, and $K_\nu$ is a modified Bessel function \cite{Abramowitz:1970}. Furthermore, $q_n^2 = \sum_{\mu=1}^{d} (n_\mu L_\mu)^2$ and $\sum_{\{n_1,\dots,n_d\}}'$ denotes the sum over all integers without $n=(0,\dots,0)$. By expanding the Bessel functions for large arguments~\cite{Abramowitz:1970}, it is straightforward to show that the most significant terms in the sum on the r.h.s of Eq.~(\ref{eq:chpt:finvol}) are proportional to the exponentials $\exp\{-M_1 L/\sqrt{2}\}$ and $\exp\{- M_{2} L/2\}$, where $M_1$ and $M_2$ are the leading-order expressions in ChPT for the mass of a pseudoscalar meson made of two valence quarks of mass $\Lambda_{\rm R}$ and $(\sqrt{\Lambda_{\rm R}^2+m_{\rm R}^2}+m_{\rm R})$ respectively. \subsection{Discretization effects\label{eq:WChPT}} At finite lattice spacing and volume, the threshold region should be treated carefully in ChPT~\cite{Damgaard:2010cz}. The latter can be avoided by considering the quantity $\ensuremath{\tilde\rho_{\R}} $, with $\Lambda_{2,\rm R}>\Lambda_{1,\rm R}\gg 1/\Sigma V$. In this case the computation in the GSM power-counting regime of the Wilson ChPT gives~\cite{Necco:2011vx} \begin{equation}\label{eq:sigmatildelat} \ensuremath{\tilde\rho_{\R}} ^{\rm nlo}(a) = \ensuremath{\tilde\rho_{\R}} ^{\rm nlo} -32 \,(W_0 a)^2\, W_8' m_{\rm R} \frac{1}{\Lambda_{1,\rm R} \Lambda_{2,\rm R}}\; . 
\end{equation} Since $W_8'$ is expected to be negative \cite{Hansen:2011kk,Splittorff:2012gp}, if we rewrite \begin{equation} \Lambda_{1,\rm R} \Lambda_{2,\rm R} =\left(\frac{\Lambda_{1,\rm R} +\Lambda_{2,\rm R}}{2}\right)^2 -\frac{1}{4}(\Lambda_{2,\rm R}-\Lambda_{1,\rm R})^2\; \end{equation} and we keep constant $(\Lambda_{2,\rm R}-\Lambda_{1,\rm R})$, then $\ensuremath{\tilde\rho_{\R}} ^{\rm nlo}(a)$ is a decreasing function of $\Lambda_{\rm R}=(\Lambda_{2,\rm R}+\Lambda_{1,\rm R})/2$ on the lattice too. At variance with the continuum case, however, a free parameter $W_0^2 W_8'$ appears in the function, and its magnitude cannot be predicted. Remarkably $\ensuremath{\tilde\rho_{\R}} ^{\rm nlo}(a)$ is free from discretization effects in the chiral limit, and therefore it is independent on $\Lambda_{1,\rm R}$ and $\Lambda_{2,\rm R}$. The continuum extrapolation of the chiral value of $\ensuremath{\tilde\rho_{\R}} ^{\rm nlo}(a)$ then removes the discretization effects due to the reference scale used. \section{Numerical results for the mode number}\label{sec:parms} We collect the results for the mode number in Tables \ref{tab:lambda}, \ref{tab:lambda2} and \ref{tab:lambda3}. For each lattice the values of $a M$ correspond to approximatively $\Lambda_{\rm R}=$20, 25, 30, 40, 55, 71, 86, 101, 116 MeV with the exception of the lattice E5 for which also $\Lambda_{\rm R}=151, 202, 303, 505$~MeV were computed. \begin{table}[hb] \small \begin{center} \begin{tabular}{@{\extracolsep{0.4cm}}cccccccc} \hline id & $N_{\text{cnfgs}}$ & $aM$ & $\nu$ \\ \hline A3 & 55 & 0.008673 & 13.3(6) \\ & & 0.009208 & 16.2(6) \\ & & 0.009821 & 20.5(7) \\ & & 0.011235 & 29.6(9) \\ & & 0.013665 & 47.3(10) \\ & & 0.016322 & 66.9(12) \\ & & 0.019110 & 88.2(14) \\ & & 0.021979 & 111.1(16) \\ & & 0.024901 & 134.6(18) \\ A4 & 55 & 0.006205 & 11.6(6) \\ & & 0.006929 & 15.9(7) \\ & & 0.007723 & 20.6(7) \\ & & 0.009447 & 30.8(8) \\ & & 0.012228 & 48.8(10) \\ & & 0.015127 & 68.6(12) \\ & & 0.018088 & 89.6(13) \\ & & 0.021085 & 110.9(15) \\ & & 0.024103 & 132.5(15) \\ A5 & 55 & 0.005352 & 11.4(6) \\ & & 0.006176 & 15.6(6) \\ & & 0.007054 & 20.6(7) \\ & & 0.008905 & 31.9(8) \\ & & 0.011810 & 50.1(11) \\ & & 0.014786 & 68.3(13) \\ & & 0.017799 & 88.7(14) \\ & & 0.020831 & 108.7(16) \\ & & 0.023877 & 129.2(18) \\ B6 & 50 & 0.004800 & 59.5(10) \\ & & 0.005703 & 82.5(11) \\ & & 0.006642 & 108.4(13) \\ & & 0.008580 & 162.3(16) \\ & & 0.011563 & 253.0(22) \\ & & 0.014586 & 346.5(25) \\ & & 0.017629 & 443(3) \\ & & 0.020683 & 543(3) \\ & & 0.023743 & 647(4) \\ \hline \end{tabular} \end{center} \caption{\label{tab:lambda} Values of $aM$ and the corresponding results for $\nu$ for each lattice at $\beta=5.2$.} \end{table} \begin{table} \small \begin{center} \begin{tabular}{@{\extracolsep{0.4cm}}ccccccc} \hline id & $N_{\text{cnfgs}}$ & $aM$ & $\nu$ \\ \hline D5 & 345 & 0.006720 &2.09(9) \\ & & 0.007239 &2.77(10) \\ & & 0.007826 &3.42(10) \\ & & 0.009153 &5.26(12) \\ & & 0.011385 &8.38(16) \\ & & 0.013782 &11.69(19) \\ & & 0.016271 &15.16(22) \\ & & 0.018815 &18.61(25) \\ & & 0.021396 &22.3(3) \\ E5 & 92 & 0.006720 & 7.3(3) \\ & & 0.007239 & 9.3(3) \\ & & 0.007826 & 11.5(3) \\ & & 0.009153 & 17.1(4) \\ & & 0.011385 & 26.9(5) \\ & & 0.013782 & 37.4(7) \\ & & 0.016271 & 47.3(8) \\ & & 0.018815 & 58.0(9) \\ & & 0.021396 & 68.8(10) \\ & & 0.027499 & 93.7(10) \\ & & 0.036321 & 138.6(12) \\ & & 0.054110 & 259.7(16) \\ & & 0.089863 & 689(3) \\ F6 & 50 & 0.004618 &34.7(9) \\ & & 0.005342 &47.6(11) \\ & & 0.006111 &60.7(12) \\ & & 0.007732 &90.8(16) 
\\ & & 0.010268 &135.8(17) \\ & & 0.012865 &183.0(20) \\ & & 0.015492 &230.9(23) \\ & & 0.018137 &280(3) \\ & & 0.020791 &330(3) \\ F7 & 50 & 0.004159 & 34.7(9) \\ & & 0.004950 & 47.0(10) \\ & & 0.005770 & 59.3(10) \\ & & 0.007464 & 87.1(12) \\ & & 0.010065 & 128.9(16) \\ & & 0.012701 & 172.0(21) \\ & & 0.015354 & 217.2(23) \\ & & 0.018015 & 265(3) \\ & & 0.020682 & 314(3) \\ G8 & 50 & 0.003737 &113.7(16) \\ & & 0.004599 &153.8(18) \\ & & 0.005472 &196.7(22) \\ & & 0.007233 &282.3(25) \\ & & 0.009892 &409(3) \\ & & 0.012560 &543(3) \\ & & 0.015233 &682(4) \\ & & 0.017910 &828(4) \\ & & 0.020587 &981(5) \\ \hline \end{tabular} \end{center} \caption{\label{tab:lambda2} As in Table \ref{tab:lambda} but for $\beta=5.3$. } \end{table} \begin{table} \small \begin{center} \begin{tabular}{@{\extracolsep{0.4cm}}ccccccc} \hline id & $N_{\text{cnfgs}}$ & $aM$ & $\nu$ \\ \hline N5 & 60 & 0.005287 &12.0(6) \\ & & 0.005647 &15.6(6) \\ & & 0.006058 &19.3(7) \\ & & 0.006998 &27.3(8) \\ & & 0.008599 &40.2(9) \\ & & 0.010334 &52.3(10) \\ & & 0.012146 &65.0(11) \\ & & 0.014005 &77.7(12) \\ & & 0.015895 &91.2(13) \\ N6 & 60 & 0.003797 & 11.0(4) \\ & & 0.004284 & 14.9(5) \\ & & 0.004812 & 18.3(5) \\ & & 0.005949 & 25.6(7) \\ & & 0.007765 & 37.3(8) \\ & & 0.009646 & 49.1(8) \\ & & 0.011562 & 60.4(9) \\ & & 0.013496 & 72.6(10) \\ & & 0.015444 & 85.8(11) \\ O7 & 50 & 0.003137 & 34.3(9) \\ & & 0.003710 & 45.9(10) \\ & & 0.004309 & 57.5(11) \\ & & 0.005548 & 78.5(12) \\ & & 0.007459 & 111.9(15) \\ & & 0.009399 & 147.8(16) \\ & & 0.011354 & 184.0(18) \\ & & 0.013316 & 220.8(19) \\ & & 0.015284 & 260.2(21) \\ \hline \end{tabular} \end{center} \caption{\label{tab:lambda3} As in Table \ref{tab:lambda} but for $\beta=5.5$. } \end{table} \begin{table} \small \begin{center} \begin{tabular}{@{\extracolsep{0.4cm}}c|ccc} \hline $\Lambda_{\rm R}/m_{\rm R}$ & 12.9 & 20.9 & 32.0 \\ \hline 22.7 & 0.0289(20) & 0.032(3) & 0.033(3) \\ 27.7 & 0.0249(21) & 0.023(3) & 0.029(3) \\ 35.3 & 0.0191(16) & 0.025(3) & 0.0308(24) \\ 47.9 & 0.0192(15) & 0.0239(22) & 0.0288(19) \\ 63.0 & 0.0221(15) & 0.0228(24) & 0.0229(18) \\ 78.2 & 0.0210(16) & 0.0174(20) & 0.0224(18) \\ 93.3 & 0.0212(14) & 0.0221(21) & 0.0211(18) \\ 108.4 & 0.0237(15) & 0.0257(22) & 0.0243(19) \\ \hline \end{tabular} \end{center} \caption{\label{tab:sigmatildecont} The effective density $\ensuremath{\tilde\rho_{\R}} $ in the continuum is given for various values of the cutoff $\Lambda_{\rm R}$ and the quark mass $m_{\rm R}$. These data are obtained by first interpolating $\ensuremath{\tilde\rho_{\R}} $ linearly in $m_{\rm R}$ for each $\Lambda_{\rm R}$ and lattice spacing $a$, followed by an extrapolation linear in $a^2$ to the continuum for each pair of $(\Lambda_{\rm R},m_{\rm R})$, as described in Sections \ref{sec:firstlook} and \ref{sec:seccont}. $\ensuremath{\tilde\rho_{\R}} $ is given in GeV$^3$, $\Lambda_{\rm R}$ and $m_{\rm R}$ are given in MeV. } \end{table} \clearpage \section{Numerical analysis of discretization effects\label{app:disc}} In this appendix we report more details on the discretization effects that we have observed in our data. We limit ourselves to an empirical discussion of the results obtained by following the strategy described in Section~\ref{sec:seccont}. 
\begin{figure} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/ur1000-betamulti-nuratio-vs-lam.eps} \end{minipage} \hspace{20mm} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/oh1010-lammulti-discr-vs-m.eps} \end{minipage} \caption{Left: mode number at $m_{\rm R}=32$ MeV for all three lattice spacings and all cutoffs $\Lambda_{\rm R}$, normalized with respect to its value at $\Lambda_{\rm R}=40$ MeV. Right: discretization effects $\Delta$ of the effective spectral density as defined in Eq.~\eqref{eq:disc1}, shown vs.~$m_{\rm R}$ for three values of $\Lambda_{\rm R}$. The fit in the plot follows Eq.~\eqref{eq:disc2}, the resulting parameters of which are shown in Figure~\ref{fig:disc2}.} \label{fig:disc1} \end{figure} A first look into the data reveals that discretization effects in $\nu$ show a non-trivial dependence on $\Lambda_{\rm R}$ and $m_{\rm R}$. We plot the mode number at $m_{\rm R}=32$ MeV, normalized with respect to its value at $\Lambda_{\rm R}=40$ MeV, for all three lattice spacings and all values of $\Lambda_{\rm R}$ in Figure~\ref{fig:disc1}, left hand side. After interpolating the effective spectral density in $m_{\rm R}$, we fit the data linearly in $a^2$ \begin{equation}\label{eq:disc1} \ensuremath{\tilde\rho_{\R}} (\Lambda_{\rm R},m_{\rm R},a) = \ensuremath{\tilde\rho_{\R}} (\Lambda_{\rm R},m_{\rm R},0) + a^2\Delta(\Lambda_{\rm R},m_{\rm R}) \end{equation} for each pair of $(\Lambda_{\rm R},m_{\rm R})$. By fitting $\Delta$ linearly in $m_{\rm R}$ (Figure~\ref{fig:disc1}, right plot) \begin{equation}\label{eq:disc2} \Delta(\Lambda_{\rm R},m_{\rm R}) = c_{0,1}(\Lambda_{\rm R}) + c_{1,1}(\Lambda_{\rm R})m_{\rm R} \end{equation} for each $\Lambda_{\rm R}$, we obtain the values for $c_{0,1}(\Lambda_{\rm R})$ shown in the left plot of Figure~\ref{fig:disc2}. Within errors, $c_{0,1}(\Lambda_{\rm R})$ turns out to be compatible with a constant. To reduce the noise in $c_{1,1}(\Lambda_{\rm R})$, we repeat the fit in Eq.~(\ref{eq:disc2}) but constraining $c_{0,1}(\Lambda_{\rm R})$ to be a constant. The results of this fit are shown in the right plot of Figure~\ref{fig:disc2}. The coefficient $c_{1,1}(\Lambda_{\rm R})$ tends to a constant for large $\Lambda_{\rm R}$, while a significant drop is observed towards the origin. In an intermediate range, the opposite signs of $c_{0,1}$ and $c_{1,1}$ allow for a compensation of the different effects, implying an effectively flat dependence of $\ensuremath{\tilde\rho_{\R}} $ in the lattice spacing. Within the large errors, the mass-dependent discretization effects could be compatible with the functional form given in Eq.~\eqref{eq:sigmatildelat} \cite{Necco:2011vx}. The sign of the pole, however, appears to be opposite than predicted in Refs.~\cite{Splittorff:2012gp,Hansen:2011kk}. In this respect it must be said that it is not clear that the GSM power-counting scheme used in Ref.~\cite{Necco:2011vx} applies in the range of parameters of our data. \begin{figure} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/oh1010-m0-a0-discr.fit-c0-vs-lam.eps} \end{minipage} \hspace{20mm} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/oh1021-m0-a0-discr.fit-c1-vs-lam.eps} \end{minipage} \caption{Left: mass-independent discretization effects $c_{0,1}$ as defined in Eq.~\eqref{eq:disc2} vs.~$\Lambda_{\rm R}$. A fit of the plateau gives 3.8(3) GeV$^3$/fm$^2$. 
Right: mass-dependent discretization effects $c_{1,1}$ as defined in Eq.~\eqref{eq:disc2} (but with $c_{0,1}(\Lambda_{\rm R})$ constrained to be constant), as a function of $\Lambda_{\rm R}$.} \label{fig:disc2} \end{figure} \section{Introduction} There is overwhelming evidence that the chiral symmetry group $SU(N_f)_L\times SU(N_f)_R$ of Quantum Chromodynamics (QCD) with a small number $N_f$ of light flavours breaks spontaneously to $SU(N_f)_{L+R}$. This evidence has accumulated over the last decade thanks to the impressive speed-up of the numerical simulations of lattice QCD with light dynamical fermions~\cite{Hasenbusch:2001ne,Luscher:2005rx, DelDebbio:2006cn,Urbach:2005ji,Luscher:2012av}; for a recent compilation of results see~\cite{Aoki:2013ldr}. By now it is standard practice to assume this fact, and extrapolate phenomenologically interesting observables in the quark mass by applying the predictions of chiral perturbation theory (ChPT) \cite{Weinberg:1978kz,Gasser:1983yg}. The distinctive signature of spontaneous symmetry breaking in QCD is the set of relations among pion masses and matrix elements which are expected to hold in the chiral limit~\cite{Weinberg:1978kz}. Pions interact only if they carry momentum, and their matrix elements near the chiral limit can be expressed as known functions of two low-energy constants (LECs), the decay constant $F$ and the chiral condensate $\Sigma$. The simplest of these relations is the Gell-Mann-Oakes-Renner (GMOR) one, which fixes the slope of the pion mass squared with respect to the quark mass to be $2 \Sigma/F^2$. On the one hand, lattice simulations have become so powerful that we now have the tools to verify some of these relations with confidence. On the other hand, very little is known about the dynamical mechanism which breaks chiral symmetry. The spectrum of the Dirac operator is perhaps the simplest quantity to look at for insight. Indeed, many years ago Banks and Casher suggested that chiral symmetry breaks if the low modes of the Dirac operator condense at the origin, and vice versa~\cite{Banks:1979yr}. Remarkably, we now know that the spectral density~\cite{Banks:1979yr,Leutwyler:1992yt,Shuryak:1992pi} is a renormalisable quantity to which a universal meaning can be assigned~\cite{Giusti:2008vb}. The present paper is the second of two devoted to the computation of the spectral density of the Dirac operator in QCD with two flavours near the origin\footnote{ Preliminary results of this work were presented in Refs.~\cite{Engel:2013rwa,Engel:2014lat}.}. This is achieved by extrapolating the numerical results obtained with $O(a)$-improved Wilson fermions at several lattice spacings to the universal continuum limit. In the first paper the focus was on the physics results~\cite{Engel:2014cka}, while here we report the full set of results together with the technical and numerical details of the computation. The notation and the parameters of the simulated lattices are given in the second and third sections, while the fourth and fifth ones are devoted to two different numerical analyses of the data. Results and conclusions are given in the last section.
\section{Spectral density of the Dirac operator} In a space-time box of volume $V$ with periodic boundary conditions the spectral density of the Euclidean massless Dirac operator $D$ is defined as \begin{equation} \rho(\lambda,m)=\frac{1}{V}\sum_{k=1}^{\infty} \left\langle\delta(\lambda-\lambda_k)\right\rangle\; , \end{equation} where $i\lambda_1$, $i\lambda_2$, $\ldots$ are its (purely imaginary) eigenvalues ordered with their magnitude in ascending order. As usual the bracket $\langle\ldots\rangle$ denotes the QCD expectation value and $m$ the quark mass. The spectral density is a renormalisable observable~\cite{Giusti:2008vb,DelDebbio:2005qa}. Once the free parameters in the action (coupling constant and quark masses) have been renormalized, no renormalisation ambiguity is left in $\rho(\lambda,m)$. The Banks--Casher relation \cite{Banks:1979yr} \begin{equation} \lim_{\lambda \to 0}\lim_{m \to 0}\lim_{V \to \infty}\rho(\lambda,m) =\frac{\Sigma}{\pi} \end{equation} links the spectral density to the chiral condensate \begin{equation}\label{eq:Sigrho} \Sigma=-\frac{1}{2}\lim_{m \to 0}\lim_{V \to \infty} \left\langle \bar\psi \psi\right\rangle\; , \end{equation} where $\psi$ is the quark doublet. It can be read in either directions. If chiral symmetry is spontaneously broken by a non-zero value of the condensate, the density of the quark modes in infinite volume does not vanish at the origin. Conversely a non-zero density implies that the symmetry is broken. The mode number of the Dirac operator \begin{equation} \nu(\Lambda,m)=V \int_{-\Lambda}^{\Lambda} {\rm d}\lambda\,\rho(\lambda,m), \end{equation} corresponds also to the average number of eigenmodes of the massive Hermitean operator $D^{\dagger}D+m^2$ with eigenvalues $\alpha\leq M^2 = \Lambda^2+m^2$. It is a renormalisation-group invariant quantity as it stands. Its (normalized) discrete derivative \begin{equation}\label{eq:discD} {\tilde\rho}(\Lambda_1,\Lambda_2,m) = \frac{\pi}{2 V} \frac{\nu(\Lambda_2)-\nu(\Lambda_1)} {\Lambda_2 - \Lambda_1}\; \end{equation} carries the same information as $\rho(\lambda,m)$, but this {\it effective spectral density} is a more convenient quantity to consider in practice on the lattice. \subsection{Mode number on the lattice} We discretize two-flavour QCD with the Wilson plaquette action for the gauge field, and O$(a)$-improved Wilson action for the doublet of mass-degenerate quarks~\cite{Sheikholeslami:1985ij,Luscher:1996sc}, see appendix \ref{app:A} for more details. The mode number\footnote{We use the same notation for lattice and continuum quantities, since any ambiguity is resolved from the context. As usual the continuum limit value of a renormalised lattice quantity, identified with the subscript ${\rm R}$, is the one to be identified with its continuum counterpart.} $\nu(\Lambda,m)$ is defined as the average number of eigenmodes of the massive Hermitean O$(a)$-improved Wilson-Dirac operator $D^\dagger_m D_m$ with eigenvalues $\alpha\leq M^2$. In the continuum limit this definition converges to the universal one~\cite{Giusti:2008vb} \begin{equation}\label{eq:nuR} \nu_{\rm R}(\Lambda_{\rm R},m_{\rm R}) = \nu(\Lambda,m) \end{equation} provided $m_{\rm R}$ is defined as in Eq.~(\ref{eq:mR}), and $\Lambda_{\rm R}$ as \begin{equation} \Lambda_{\rm R}=\sqrt{M_{\rm R}^2-m_{\rm R}^2}\; , \qquad M_{\rm R} = Z^{-1}_{\rm P} (1 + {\bar b}_\mu\, a m)\, M\; . 
\end{equation} The counter-term proportional to ${\bar b}_\mu$ ensures that at finite lattice spacing $\nu_{\rm R}(M_{\rm R},m_{\rm R})$ is an O$(a)$-improved quantity. This improvement coefficient has been computed in Ref.~\cite{Giusti:2008vb}, and its values for the inverse couplings $\beta$ considered in this paper are given in Table~\ref{tab:impr}. For Wilson fermions chiral symmetry is violated at finite lattice spacing. As a consequence the fine details of the spectrum of the Wilson--Dirac operator near the threshold $\Lambda_{\rm R}=0$ is not protected from large lattice effects~\cite{DelDebbio:2005qa,Damgaard:2010cz,Splittorff:2012gp}. While this region may be of interest for studying the peculiar details of those fermions, it is easier to extract universal information about the continuum theory far away from it. In this respect the effective spectral density in Eq.~(\ref{eq:discD}) is a good quantity to consider on the lattice to extract the value of the chiral condensate\footnote{Once the renormalisability of the spectral density is proven, a generic finite integral of $\rho(\lambda,m)$ can be used to measure the condensate, see Ref.~\cite{Giusti:2007cn} for a different choice.} . \section{Numerical setup} The CLS community\footnote{https://wiki-zeuthen.desy.de/CLS/CLS.} and the ALPHA Collaboration have generated the gauge configurations of the two-flavour QCD with the $O(a)$-improved Wilson action by using the MP-HMC (lattices A5, B6, G8, N6 and O7) and the DD-HMC (all other lattices) algorithms as implemented in Refs.~\cite{Marinkovic:2010eg,Luscher:DD-HMC}. The primary observables that we have computed are the two-point functions of bilinear operators in Eq.~(\ref{eq:2pt}), and the mode number $\nu(\Lambda,m)$. The former were already computed by the ALPHA Collaboration, see Appendix~\ref{app:mmpif} and Refs.~\cite{Fritzsch:2012wq,alphalec} for more details. \begin{table} \small \begin{center} \setlength{\tabcolsep}{.10pc} \begin{tabular}{@{\extracolsep{0.4cm}}cccccccccc} \hline id &$L/a$&$\beta$&$\kappa$&MDU&$m_{\rm R}$[MeV] &$F_\pi$[MeV]&$M_\pi$[MeV]&$M_\pi L$&$a$[fm]\\ \hline A3 &$32$&$5.2$&$0.13580$ &$7040$ &$37.4(9)$ &$120.8(7)$& $496(6)$ & $6.0$ & 0.0749(8)\\ A4 &$32$& &$0.13590$ &$7920$ &$22.8(6)$ &$110.7(6)$&$386(5)$ & $4.7$ & \\ A5 &$32$& &$0.13594$ &$1980$ &$16.8(4)$ &$106.0(6)$&$333(5)$ & $4.0$ & \\ B6 &$48$& &$0.13597$ &$1200$ &$12.2(3)$ &$102.3(5)$&$283(4)$ & $5.2$ & \\ \hline E5 &$32$&$5.3$&$0.13625$ &$8832$ &$32.0(8)$ &$115.2(6)$&$440(5)$ & $4.7$ & 0.0652(6)\\ F6 &$48$& &$0.13635$ &$4000$ &$16.5(4)$ &$105.3(6)$&$314(3)$ & $5.0$ & \\ F7 &$48$& &$0.13638$ &$3600$ &$12.0(3)$ &$100.9(4)$&$268(3)$ & $4.3$ & \\ G8 &$64$& &$0.136417$&$1680$ &$6.1(2)$ &$95.8(4)$&$193(2)$& $4.1$ & \\ \hline N5 &$48$&$5.5$&$0.13660$ &$3840$ &$34.8(8)$ &$115.1(7)$&$443(4)$ & $5.2$ & 0.0483(4)\\ N6 &$48$& &$0.13667$ &$7680$ &$20.9(5)$ &$105.8(5)$&$342(3)$ & $4.0$ & \\ O7 &$64$& &$0.13671$ &$3800$ &$12.9(3)$ &$101.2(4)$&$269(3)$ & $4.2$ & \\ \hline \end{tabular} \end{center} \caption{\label{tab:ens} Overview of the ensembles and statistics used in this study. 
We give the label, the spatial extent of the lattice, $\beta=6/g_0^2$, the hopping parameter $\kappa$ for the quark fields, the number of molecular dynamics units (MDU), the quark mass $m_{\rm R}$ renormalized in the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme at $\mu=2$~GeV, the pion mass $M_\pi$ and its decay constant $F_\pi$, the product $M_\pi L$, and the (updated) value of the lattice spacing determined as in~\cite{Fritzsch:2012wq} (see also \cite{Marinkovic:2011pa}).} \end{table} \subsection{Computation of the mode number} The stochastic computation of the mode number has been carried out as in Ref.~\cite{Giusti:2008vb}. A numerical approximation of the orthogonal projector ${\Bbb P}_M$ to the subspace spanned by the eigenmodes of $D_m^\dagger D_m$ with eigenvalues $\alpha\leq M^2$ is computed as \begin{equation} {\Bbb P}_M\simeq h({\Bbb X})^4, \qquad {\Bbb X}= 1-\frac{2 M_*^2}{D_m^\dagger D_m+ M_*^2}\; . \end{equation} where $M/M_\star=0.96334$. The function $h(x)$ is an approximation to the step function $\theta(-x)$ by a minmax polynomial of degree $n=32$ in the range $-1\leq x \leq 1$, see Ref.~\cite{Giusti:2008vb} for more details. This choice, together with the value of $M_\star$ given, guarantees a systematic error well below our statistical errors. The mode number is then computed as \begin{equation} \nu(M,m)=\langle{\cal O}_N\rangle, \qquad {\cal O}_N={1\over N}\sum_{k=1}^N\left(\eta_k,{\Bbb P}_M\eta_k\right), \end{equation} where we have added to the theory a set of pseudo-fermion fields $\eta_1,\ldots,\eta_N$ with Gaussian action. In the course of a numerical simulation, one such field ($N=1)$ for each gauge-field configuration is generated randomly, and the mode number is estimated in the usual way by averaging the observable ${\cal O}_N$ over the generated ensemble of fields. The mode number is an extensive quantity, and at fixed $N$ and for a given statistics, the relative statistical error of the calculated mode number is therefore expected to decrease like $V^{-1/2}$. \subsection{Ensembles generated} The details of the lattices are listed in Tables~\ref{tab:ens} and \ref{tab:ens2}. All of them have a size of $2L \times L^3$, and the spatial dimensions are always large enough so that $M_\pi L\geq4$. The three values of the coupling constant $\beta=5.2,\,5.3,\,5.5$ correspond to lattice spacings of $\;a=0.075,\, 0.065,\, 0.048$\,fm respectively, which have been fixed from $F_K$ by supplementing the theory with a quenched ``strange'' quark~\cite{Fritzsch:2012wq}. The pion masses range from $190$ MeV to $500$ MeV. To explicitly check for finite-size effects in the mode number we have generated an additional set of lattices (D5) with the same spacing and quark mass as E5, but with a smaller lattice volume $48\times 24^3$. 
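The stochastic estimate can be illustrated with a small self-contained sketch, given below, which is not part of the production code: a random Hermitian positive matrix stands in for $D_m^\dagger D_m$ and the projector ${\Bbb P}_M$ is built exactly from an eigendecomposition, so the minmax polynomial approximation $h({\Bbb X})^4$ used in the actual computation is not reproduced. The sketch only demonstrates that the noise average $\langle(\eta,{\Bbb P}_M\eta)\rangle$ reproduces the mode count, together with its statistical error.
\begin{verbatim}
# Illustration of the stochastic estimate nu = <(eta, P_M eta)>.
import numpy as np

rng = np.random.default_rng(0)
n   = 400                              # toy matrix dimension
B   = rng.normal(size=(n, n))
A   = B @ B.T / n                      # Hermitian, positive semi-definite stand-in
M2  = 0.5                              # threshold M^2 (illustrative)

evals, evecs = np.linalg.eigh(A)
V_low = evecs[:, evals <= M2]
P_M   = V_low @ V_low.T                # exact projector onto modes with alpha <= M^2
nu_exact = int(np.sum(evals <= M2))

n_cfg = 200                            # one Gaussian noise vector per "configuration" (N = 1)
est = []
for _ in range(n_cfg):
    eta = rng.normal(size=n)
    est.append(eta @ P_M @ eta)
est = np.array(est)

print("exact nu      :", nu_exact)
print("stochastic nu :", est.mean(), "+/-", est.std(ddof=1) / np.sqrt(n_cfg))
\end{verbatim}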
\begin{table} \small \begin{center} \begin{tabular}{@{\extracolsep{0.0cm}}cc|cc|cc|c} \hline id & $R_{\rm act}$ & $R_{\rm act}\tau_{\rm int}(M_{\pi})$&$R_{\rm act} n_{\rm it}(M_{\pi})$ & $R_{\rm act}\tau_{\rm int}(\nu)$ &$R_{\rm act} n_{\rm it} (\nu)$ & $R_{\rm act}\tau_{\rm exp}$\\ \hline A3 &$0.37$& 7 & 2.96 & & \phantom{1}47.36 & 40\\ A4 &$0.37$& 5 & 2.96 & & \phantom{1}53.28 & \\ A5 & 1 & 5 & 4.00 & 3 & \phantom{1}36.00 & \\ B6 & 1 & 6 & 2.00 & & \phantom{1}24.00 & \\ \hline E5 &$0.37$& 9 & 5.92 & 6 & \phantom{1}35.52 & 55\\ F6 &$0.37$& 8 & 2.96 & & \phantom{1}29.60 & \\ F7 &$0.37$& 7 & 2.96 & & \phantom{1}26.64 & \\ G8 & 1 & 8 & 2.00 & & 24--48 & \\ \hline N5 &$0.44$&30 & 3.52 & 11 & \phantom{1}28.16 & 100\\ N6 & 1 &10 & 4.00 & & 128 & \\ O7 & 1 &15 & 4.00 & & \phantom{1}76 & \\ \hline \end{tabular} \end{center} \caption{\label{tab:ens2} The integrated autocorrelation time $\tau_{\rm int}$ of the pion mass and of the mode number, multiplied by the fraction of active links in the HMC $R_{\rm act}$, is given in units of MDU. The parameters $\tau_{\rm int}$ have a typical error of $25$--$35\%$. The number $n_{\rm it}$ of MDUs skipped between two consecutive measurements of the two-point functions and of the mode number is also reported. The value of $\tau_{\rm exp}$ of the Markov chain given in the last column is taken from Ref.~\cite{Bruno:2014ova}. The value of $R_{\rm act}\tau_{\rm int}(\nu)$ for N5 is a conservative estimate obtained from the one of E5 and a scaling proportional to $\tau_{\rm exp}$.} \end{table} The autocorrelation times of the two-point functions and of the mode number are reported in Table~\ref{tab:ens2}. For the lattice E5 we have computed $\tau_{\rm int}(\nu)$ for three values of $aM$ corresponding to $\Lambda_{\rm R} = 30,\, 40$ and $86$~MeV, and no significant difference was observed. We thus space the measurements so that the mode number has time to decorrelate, while we properly bin the (cheaper) measurements of the two-point functions. To measure $\nu$, the number of configurations to be processed is chosen so that the statistical error of the effective spectral density receives roughly equally-sized contributions from the scale and the mode number. To ensure a proper Monte Carlo sampling, a minimum of 50 configurations is processed in any case. The value of $\tau_{\rm exp}$ of the Markov chain, defined as in Ref.~\cite{Fritzsch:2012wq}, is taken from~\cite{Bruno:2014ova}. It gets significantly longer towards finer lattice spacings. For the ensembles where $n_{\rm it}<\tau_{\rm exp}$, we estimate the contributions of the tails in the autocorrelation functions of the observables as described in Ref.~\cite{Schaefer:2010hu}. When needed, we take them into account to have a more conservative error estimate. \section{A first look into the numerical results}\label{sec:firstlook} We have computed the mode number $\nu$ for nine values\footnote{If not explicitly stated, the scheme- and scale-dependent quantities such as $\Sigma$, $m_{\rm R}$, $\Lambda_{\rm R}$ and $\ensuremath{\tilde\rho_{\R}} $ are renormalized in the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme at $\mu=2$~GeV.} of $\Lambda_{\rm R}$ in the range $20$--$120$~MeV with a statistical accuracy of a few percent on all lattices listed in Table~\ref{tab:ens}. Four larger values of $\Lambda_{\rm R}$ in the range $150$--$500$~MeV have also been analysed for the ensemble E5. The results are collected in Tables~\ref{tab:lambda}--\ref{tab:lambda3} of the Appendix~\ref{sec:parms}.
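Before turning to the figures, we note that the conversion from the tabulated mode numbers to the effective spectral density of Eq.~(\ref{eq:discD}) is just a normalized discrete derivative; the following minimal sketch illustrates this step with placeholder numbers (not the actual table entries) and is not part of the analysis code.
\begin{verbatim}
# Effective spectral density, Eq. (eq:discD), as a discrete derivative of nu(Lambda_R).
import numpy as np

Lambda_R = np.array([20., 25., 30., 40., 55., 71., 86., 101., 116.])      # MeV (illustrative)
nu       = np.array([34., 46., 58., 79., 112., 148., 184., 221., 260.])   # illustrative
V        = 1.0e-5                                                         # MeV^-4 (illustrative)

rho_eff    = (np.pi / (2.0 * V)) * np.diff(nu) / np.diff(Lambda_R)
Lambda_mid = 0.5 * (Lambda_R[1:] + Lambda_R[:-1])
for lam, rho in zip(Lambda_mid, rho_eff):
    print(f"Lambda_R = {lam:6.1f} MeV   rho_eff = {rho:.3e} MeV^3")
\end{verbatim}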
\begin{figure} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/O7-cs1000-nu.eps} \end{minipage} \hspace{20mm} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/O7-cs1000-st.eps} \end{minipage} \caption{Left: the mode number $\nu$ as a function of $\Lambda_{\rm R}$ for the ensemble O7. A quadratic fit of the data gives $\nu= -9.0(13) + 2.07(7)\Lambda_{\rm R} + 0.0022(4)\Lambda_{\rm R}^2$. Right: the effective spectral density $\ensuremath{\tilde\rho_{\R}} $ as defined in Eq.~(\ref{eq:discD}) for the same ensemble as a function of $\Lambda_{\rm R}=(\Lambda_{1,\rm R}+\Lambda_{2,\rm R})/2$. Since we are interested in the $\Lambda_{\rm R}$-dependence only, the errors in this plot do not include those of the lattice spacing and of $Z_{\rm P}$. The errors from $Z_{\rm A}$ and $m_{\rm R}$ appear to be invisible in the figure.} \label{fig:firstlook-O7} \end{figure} In Figure~\ref{fig:firstlook-O7} we show $\nu$ as a function of $\Lambda_{\rm R}$ for the lattice O7, corresponding to the smallest reference quark mass (see below) at the smallest lattice spacing. On all other lattices an analogous qualitative behaviour is observed. The mode number is a nearly linear function in $\Lambda_{\rm R}$ up to approximatively $100$--$150$~MeV. A clear departure from linearity is observed for $\Lambda_{\rm R} > 200$~MeV on the lattice E5. At the percent precision, however, the data show statistically significant deviations from the linear behavior already below $100$ MeV. To guide the eye, a quadratic fit in $\Lambda_{\rm R}$ is shown in Figure~\ref{fig:firstlook-O7}, and the values of the coefficients are given in the caption. The bulk of $\nu$ is given by the linear term, while the constant and the quadratic term represent $O(10\%)$ corrections in the fitted range. The nearly linear behaviour of the mode number is manifest on the right plot of Figure~\ref{fig:firstlook-O7}, where its discrete derivative, defined as in Eq.~(\ref{eq:discD}) for each couple of consecutive values of $\Lambda_{\rm R}$, is shown as a function of $\Lambda_{\rm R}=(\Lambda_{1,\rm R}+\Lambda_{2,\rm R})/2$. Since it is not affected by threshold effects, the effective spectral density $\ensuremath{\tilde\rho_{\R}} $ is the primary observable we focus on in the next sections. \begin{figure} \begin{center} \includegraphics[width=7.0 cm,angle=0]{figs/oh1011-m0,0129-lammulti-st-vs-a2.eps} \includegraphics[width=7.0 cm,angle=0]{figs/oh1011-m0,0320-lammulti-st-vs-a2} \caption{Effective spectral density $\ensuremath{\tilde\rho_{\R}} $ vs.~the lattice spacing squared for the lightest (left hand side) and the heaviest reference quark mass $m_{\rm R}$ (right hand side), and for the lightest, an intermediate, and the heaviest cutoff $\Lambda_{\rm R}$ in both panels. In general, the data are well described by a linear fit in $a^2$, which suggests that, within our statistical errors, we are in the asymptotic regime of Symanzik effective theory. As evident from the figures, there are competing (positive and negative) discretization effects, which can approximately compensate for each other in specific domains of parameter space. } \label{fig:contlim} \end{center} \end{figure} \subsection{Continuum-limit extrapolation} In general for $\ensuremath{\tilde\rho_{\R}} $ we observe quite a flat behaviour in $\Lambda_{\rm R}$ toward finer lattice spacings and light quark masses, similar to the one shown in Figure~\ref{fig:firstlook-O7}. 
Because the action and the mode number are $O(a)$-improved, the Symanzik effective theory analysis predicts that discretization errors start at $O(a^2)$. In order to remove them, at every lattice spacing we match three quark mass values ($m_{\rm R}=12.9$, $20.9$, $32.0$~MeV) by interpolating $\ensuremath{\tilde\rho_{\R}} $ linearly in $m_{\rm R}$ (see the next section for more details). The values of $\ensuremath{\tilde\rho_{\R}} $ show mild discretization effects at light $m_{\rm R}$ and $\Lambda_{\rm R}$, while they differ by up to $15\,\%$ per linear dimension among the three lattice spacings towards larger $\Lambda_{\rm R}$. Within the statistical errors all data sets are compatible with a linear dependence on $a^2$, and we thus independently extrapolate each triplet of points to the continuum limit accordingly. We show six of these extrapolations in Figure \ref{fig:contlim}, considering the lightest and the heaviest reference quark masses for the lightest, an intermediate, and the heaviest cutoff $\Lambda_{\rm R}$. The difference between the values of $\ensuremath{\tilde\rho_{\R}} $ at the finest lattice spacing and the continuum-extrapolated ones is within the statistical errors for light $m_{\rm R}$ and $\Lambda_{\rm R}$, and it remains within a few standard deviations towards larger values of $m_{\rm R}$ and $\Lambda_{\rm R}$. This fact makes us confident that the extrapolation removes the cutoff effects within the errors quoted.\\ The results for $\ensuremath{\tilde\rho_{\R}} $ at $m_{\rm R}=12.9$~MeV in the continuum limit are shown as a function of $\Lambda_{\rm R}$ in the left plot of Figure~\ref{fig:firstlook-contlim}. A similar $\Lambda_{\rm R}$-dependence is observed at the two other reference masses.

\begin{figure} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/oh1011-m0.0129-st-vs-lam.eps} \end{minipage} \hspace{20mm} \begin{minipage}{0.35\textwidth} \includegraphics[width=7.0 cm,angle=0]{figs/oh1014-m0-a0-st.fit-c0-vs-lam.eps} \end{minipage} \caption{Effective spectral density $\ensuremath{\tilde\rho_{\R}} $ in the continuum limit at the smallest reference quark mass $m_{\rm R}=12.9$~MeV (left), and in the chiral limit (right). Note the flat dependence on $\Lambda_{\rm R}$, which agrees with the expectation from NLO ChPT. The result of the fit to a constant is also shown in the right plot. } \label{fig:firstlook-contlim} \end{figure}

It is worth noting that no assumption on the presence of spontaneous symmetry breaking was needed so far. These results, however, point to the fact that the spectral density of the Dirac operator in two-flavour QCD is (almost) constant in $\Lambda_{\rm R}$ near the origin at small quark masses. This is consistent with the expectation from the Banks--Casher relation in the presence of spontaneous symmetry breaking. In this case next-to-leading order (NLO) ChPT indeed predicts \begin{equation}\label{eq:eqChPTtext} \ensuremath{\tilde\rho_{\R}} ^{\rm nlo} = \Sigma \Big\{ 1 + \frac{m_{\rm R} \Sigma}{(4\pi)^2 F^4} \Big[3\, \bar l_6 + 1 - \ln(2) - 3 \ln\Big(\frac{\Sigma m_{\rm R}}{F^2 \bar\mu^2}\Big) + \tilde g_\nu\left(\frac{\Lambda_{1,\rm R}}{m_{\rm R}},\frac{\Lambda_{2,\rm R}}{m_{\rm R}}\right) \Big]\Big\}\; , \end{equation} i.e.~an almost flat function in (small) $\Lambda_{\rm R}$ at (small) finite quark masses, see Appendix~\ref{app:modenumber:chpt} for the notation not defined here.
At fixed quark mass the $\Lambda_{\rm R}$-dependence of $\ensuremath{\tilde\rho_{\R}} ^{\rm nlo}$ in Eq.~(\ref{eq:eqChPTtext}) is parameter-free once the pion mass and decay constant are measured.

\subsection{Chiral limit} The extrapolation to the chiral limit requires an assumption on how the effective spectral density $\ensuremath{\tilde\rho_{\R}} $ behaves when $m_{\rm R} \rightarrow 0$. In this respect it is interesting to note that the NLO function in Eq.~(\ref{eq:eqChPTtext}) is linear in $m_{\rm R}$ near the chiral limit, since there are no chiral logarithms at fixed $\Lambda_{\rm R}$, see Appendix \ref{app:modenumber:chpt}. A fit to Eq.~(\ref{eq:eqChPTtext}) shows that the data are compatible with the NLO formula. A prediction of NLO ChPT in the two-flavour theory is that in the chiral limit $\ensuremath{\tilde\rho_{\R}} ^{\rm nlo}=\Sigma$ also at non-zero $\Lambda_{\rm R}$, since all NLO corrections in Eq.~(\ref{eq:eqChPTtext}) vanish~\cite{Smilga:1993in}. To check this property we extrapolate $\ensuremath{\tilde\rho_{\R}} $ with Eq.~(\ref{eq:NLO-min-extended}), a generalization of Eq.~(\ref{eq:eqChPTtext}) discussed below, and we obtain the results shown in the right plot of Figure~\ref{fig:firstlook-contlim} with a $\chi^2/{\rm dof}=16.4/14$. Within errors the $\Lambda_{\rm R}$-dependence is clearly compatible with a constant up to $\approx 80$~MeV. Moreover, the difference between the values of $\ensuremath{\tilde\rho_{\R}} $ in the chiral limit and those at $m_{\rm R}=12.9$~MeV is of the order of the statistical error, i.e.~the extrapolation is very mild. A fit of the data to a constant gives $\Sigma^{1/3}=261(6)$~MeV. As in any numerical computation, the chiral limit inevitably requires an extrapolation of the results with a pre-defined functional form. The distinctive feature of spontaneous symmetry breaking, however, is that the behaviour of $\ensuremath{\tilde\rho_{\R}} $ near the origin is predicted by ChPT, and its extrapolated value has to agree with that of $M_\pi^2 F_\pi^2/(2 m_{\rm R})$. We have thus complemented our computations with those of $m_{\rm R}$, $M_\pi$ and $F_\pi$, and extrapolated the above-mentioned ratio to the chiral limit as prescribed by ChPT, see Appendix \ref{app:mmpif} and Ref.~\cite{Engel:2014cka}. We obtain $\Sigma^{1/3}_{\rm GMOR}= 263(3)(4)$~MeV, where the first error is statistical and the second is systematic, in excellent agreement with the value quoted above. These results show that the spectral density at the origin has a non-zero value in the chiral limit. In the rest of this paper we assume this conclusion, and we apply standard field theory arguments to remove with confidence the (small) contributions to the raw data due to discretization effects, the finite quark mass and the finite $\Lambda_{\rm R}$.

\section{Detailed discussion of numerical results\label{sec:glbfit}} We have analysed the numerical results for the effective spectral density $\ensuremath{\tilde\rho_{\R}} $ following two different fitting strategies. In the first one, whose main results are reported in the previous section, we have extrapolated the results at fixed kinematics $(\Lambda_{\rm R},m_{\rm R})$ to the continuum limit independently. The results of this analysis call for an alternative strategy to extract the chiral condensate which uses ChPT from the start, i.e.~one based on fitting the data in all three directions $(\Lambda_{\rm R},m_{\rm R},a)$ at the same time.
This procedure reduces the number of fit parameters, allows us to include all generated data in the fit, and avoids the need for an interpolation in the quark mass. It is important to stress that also in this case ChPT is used only to remove (small) higher-order corrections in the spectral density. The details of these fits are reported in the next two sub-sections.

\subsection{Continuum limit fit\label{sec:seccont}} In the first strategy outlined in Section \ref{sec:firstlook} we start by interpolating the data in the quark mass at fixed $\Lambda_{\rm R}$ and $a$. We choose three reference values ($m_{\rm R}=12.9$, $20.9$, $32.0$~MeV) which lie within the range of simulated quark masses at all $\beta$ values and are as close as possible to the values at the finest lattice spacing. Most of the data sets look perfectly linear in $m_{\rm R}$ in the vicinity of the interpolation points, with small deviations only when coarse lattices, light $\Lambda_{\rm R}$ and heavy quark masses occur simultaneously (see Figure~\ref{fig:globalfit1}). In all cases, however, the systematic error associated with the linear interpolation is negligible with respect to the statistical one. The interpolation and all subsequent fits are performed using the jackknife technique to take into account the correlations of the data. At fixed $(\Lambda_{\rm R},m_{\rm R})$, each data set is well fitted by a linear function in $a^2$, see Figure~\ref{fig:contlim}, a fact which supports the assumption of being in the Symanzik asymptotic regime within the errors quoted\footnote{A detailed analysis of discretization effects in the spectral density is beyond the scope of this paper. For completeness we report the results of these fits in Appendix D for the interested reader.}. Once extrapolated to the continuum limit, we fit the effective spectral density with the functional form \begin{equation}\label{eq:NLO-min-extended} \ensuremath{\tilde\rho_{\R}} = c_0(\Lambda_{\rm R}) + m_{\rm R} \Big[c_1 + c_2 \Big(-3\ln\left(\frac{m_{\rm R}}{\bar\mu}\right) +\tilde g_\nu\left(\frac{\Lambda_{\rm R,1}}{m_{\rm R}},\frac{\Lambda_{\rm R,2}}{m_{\rm R}}\right)\Big)\Big] \; , \end{equation} which rests on NLO ChPT but is capable of accounting for $O(\Lambda_{\rm R}^2)$ effects. The latter are expected to be the dominant higher-order effects in ChPT in this range of parameters. Within the given accuracy, $c_0(\Lambda_{\rm R})$ is consistent with a plateau behaviour in the range $20\leq\Lambda_{\rm R}\leq 80$~MeV, see the right plot of Figure~\ref{fig:firstlook-contlim}. By fitting $c_0(\Lambda_{\rm R})$ to a constant in this range, we obtain $\Sigma^{1/3}=261(6)$~MeV. If we also include a $\Lambda_{\rm R}^2$ term in the fit and consider the entire range $20\leq\Lambda_{\rm R}\leq120$~MeV we find $253(9)$~MeV, which differs from the previous result by roughly one standard deviation. At the level of our statistical errors of $O(10\%)$, the spectral density of the Dirac operator in the continuum and chiral limits is a constant function up to $\Lambda_{\rm R} \approx 80$~MeV.

\subsection{Combined fit\label{sec:seccombined}} \begin{figure} \begin{center} \includegraphics[width=7.0 cm,angle=0]{figs/s1052-b5,50-lammulti-st-vs-m.eps} \includegraphics[width=7.0 cm,angle=0]{figs/s1052-st.fit-c0-vs-lam.eps} \caption{Left: effective spectral density $\ensuremath{\tilde\rho_{\R}} $ vs.~the quark mass $m_{\rm R}$ for the finer lattice spacings and three cutoffs $\Lambda_{\rm R}$, together with the combined fit of all data to Eq.~\eqref{eq:NLO-extended-reduced}.
Right: effective spectral density $\ensuremath{\tilde\rho_{\R}} $ vs.~the cutoff $\Lambda_{\rm R}$ in the continuum and chiral limits. The squares are the results for $c_{0,0}(\Lambda_{\rm R})$ of the fit to the function in Eq.~\eqref{eq:NLO-extended-reduced}, and the plateau fit shown gives the value of the chiral condensate.} \label{fig:globalfit1} \end{center} \end{figure}

In this section we present an alternative strategy to extract the chiral condensate, based on fitting the data in all three directions $(\Lambda_{\rm R},m_{\rm R},a)$ at the same time. Compared to the first strategy, the shortcomings are that we cannot disentangle the different corrections as clearly and that ChPT is used from the very beginning. We remark, however, that also in this case ChPT is used only to remove higher-order corrections, while the bulk of the chiral condensate is still given through the Banks--Casher relation. The statistical analysis is based on a double-elimination jackknife fit which takes into account all errors and correlations (no fit of fitted quantities is needed). We start with the fit form \begin{equation}\label{eq:NLO-extended} \ensuremath{\tilde\rho_{\R}} = c_0(\Lambda_{\rm R},a) + m_{\rm R} \Big[c_1(\Lambda_{\rm R},a) + c_2 \Big(-3\ln\left(\frac{m_{\rm R}}{\bar\mu}\right) +\tilde g_\nu\left(\frac{\Lambda_{\rm R,1}}{m_{\rm R}},\frac{\Lambda_{\rm R,2}}{m_{\rm R}}\right)\Big)\Big]\; , \end{equation} where $\Lambda_{\rm R}=(\Lambda_{\rm R,1} + \Lambda_{\rm R,2})/2$, and we constrain the fit parameters as suggested by NLO chiral and Symanzik effective theories. As already verified in the first strategy, the discretization effects obey an $a^2$-dependence in the range of parameters simulated. We thus constrain our fit parameters to obey \footnote{Note that this expression also includes the functional form of discretization effects predicted at NLO in the GSM regime of ChPT~\cite{Necco:2011vx}, see Appendices \ref{app:modenumber:chpt} and \ref{app:disc}.} \begin{equation} c_0(\Lambda_{\rm R},a) = c_{0,0}(\Lambda_{\rm R}) + a^2 c_{0,1}(\Lambda_{\rm R})\; , \qquad c_1(\Lambda_{\rm R},a) = c_{1,0}(\Lambda_{\rm R}) + a^2 c_{1,1}(\Lambda_{\rm R})\; . \end{equation} NLO ChPT predicts that $c_{0,0}(\Lambda_{\rm R})$ and $c_{1,0}(\Lambda_{\rm R})$ should both be constant. Allowing for the time being an arbitrary $\Lambda_{\rm R}$-dependence in the parameter $c_{0,0}(\Lambda_{\rm R})$, we arrive at the fit function \begin{eqnarray} \label{eq:NLO-extended-reduced} \hspace{-8.0cm} \ensuremath{\tilde\rho_{\R}} & = & c_{0,0}(\Lambda_{\rm R}) + a^2 c_{0,1}(\Lambda_{\rm R}) + m_{\rm R} \Big[c_{1,0} + a^2 c_{1,1}(\Lambda_{\rm R}) + \nonumber\\ & &\qquad\qquad\qquad\qquad\qquad\qquad\quad c_2 \Big(-3\ln\left(\frac{m_{\rm R}}{\bar\mu}\right) +\tilde g_\nu\Big(\frac{\Lambda_{\rm R,1}}{m_{\rm R}},\frac{\Lambda_{\rm R,2}}{m_{\rm R}}\Big)\Big)\Big] \;. \end{eqnarray} The fit of the data is shown versus the quark mass in the left plot of Figure~\ref{fig:globalfit1} for the finer lattice spacings and three values of $\Lambda_{\rm R}$. The resulting effective spectral density in the continuum and chiral limits, corresponding to $c_{0,0}(\Lambda_{\rm R})$, is shown in the right plot of Figure~\ref{fig:globalfit1}. The results are very well compatible with the ones determined in Section~\ref{sec:firstlook}. If we fix $c_{0,0}$ to a constant in the region $20\leq \Lambda_{\rm R} \leq 80$~MeV, we extract the condensate $\Sigma^{1/3}=259(6)$~MeV, which is well compatible with the one obtained with the previous strategy.
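The jackknife analysis underlying these error estimates can be illustrated by the simplified, delete-one (single-elimination) sketch below for a generic fitted quantity. It is only an illustration: the actual analysis uses a double-elimination variant so that errors and correlations of fitted quantities are propagated consistently, and this is not reproduced by the fragment.
\begin{verbatim}
import numpy as np

def jackknife(samples, estimator):
    """Delete-one jackknife estimate and error of `estimator`.
    `samples` has shape (N, ...); `estimator` maps an ensemble
    average (same trailing shape) to a single number."""
    n = len(samples)
    total = samples.sum(axis=0)
    # estimator evaluated on each delete-one average
    replicas = np.array([estimator((total - samples[k]) / (n - 1))
                         for k in range(n)])
    central = estimator(total / n)
    error = np.sqrt((n - 1) / n * np.sum((replicas - replicas.mean())**2))
    return central, error
\end{verbatim}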
To assess the stability of the fit we have amended the fit function with higher-order terms of the form $\mathcal{O}(\Lambda_{\rm R}^2,\Lambda_{\rm R} m_{\rm R},m_{\rm R}^2)$. Note that when including $\Lambda_{\rm R}^2$ terms, we always consider the entire range $20\leq \Lambda_{\rm R} \leq120$~MeV. The coefficient of $\Lambda_{\rm R} m_{\rm R}$ is consistent with zero, while the $m_{\rm R}^2$ and $\Lambda_{\rm R}^2$ effects are non-zero by 2 and 3 standard deviations respectively and shift our final result downwards by roughly one standard deviation. We remark, however, that in the truncated range $20\leq \Lambda_{\rm R} \leq80$~MeV the data are perfectly compatible with a flat dependence on $\Lambda_{\rm R}$. We also investigated the effect of truncating the amount of data included in the fit. Removing the smallest values of $\Lambda_{\rm R}$ slightly improves the fit, while removing the largest ones does not make a noteworthy difference. To check again whether all data obey the assumed linear $a^2$-dependence, we also perform fits excluding those data at the coarsest lattice spacing ($a=0.075$~fm) which show larger discretization effects (12 out of 32 data points at this lattice spacing are kept). This does not improve the quality of the fit significantly, and it gives $\Sigma^{1/3}=267(6)$~MeV, which differs from the previous result by roughly one standard deviation upwards. We remark, however, that the linear $a^2$-dependence has been checked and confirmed explicitly for each pair of $(\Lambda_{\rm R},m_{\rm R})$ in the first strategy. A further reduction of the number of fit parameters can be achieved by noting that $c_2$ is known in ChPT: it can be rewritten as a function of $M_\pi$ and $m_{\rm R}$. We have also tried to fix $c_{0,1}(\Lambda_{\rm R})$ to a constant, as suggested by the results of the several fits we have performed (see Appendix~\ref{app:disc}). In either case we obtain results which are well compatible with those quoted. For this strategy the best value of the chiral condensate is $\Sigma^{1/3}=259(6)$~MeV. It is extracted from the fit function in Eq.~\eqref{eq:NLO-extended-reduced} where $c_{0,0}$ is fitted to a constant in the range $20\leq \Lambda_{\rm R} \leq 80$~MeV. This fit confirms that in the chiral and continuum limits the spectral density is a flat function of $\Lambda_{\rm R}$ up to $\approx 80$~MeV at the level of our precision in the continuum limit of roughly $10\%$, and that it can be parameterized by NLO ChPT. We presented preliminary results of this study, at only two lattice spacings, in Ref.~\cite{Engel:2013rwa}. There we observed effects of $\mathcal{O}(\Lambda_{\rm R}^2)$ already for $\Lambda_{\rm R}\gtrsim50$~MeV, in particular for $a=0.065$~fm. Once the data are extrapolated to the continuum limit, these effects are not visible anymore up to $\Lambda_{\rm R}\approx 80$~MeV. In this respect it must be noted, however, that once the uncertainties in the scale and renormalisation constants are included, the final errors of the extrapolated results are significantly larger than those used to study the $\Lambda_{\rm R}$-dependence at fixed lattice spacing. It is therefore not surprising that the window extends to larger values of $\Lambda_{\rm R}$. By estimating the spectral density of the twisted-mass Hermitian Dirac operator, the dimensionless quantity $r_0\Sigma^{1/3}$ was computed in Ref.~\cite{Cichy:2013gja}. Since they have a smaller set of data, the analysis described in Section \ref{sec:seccont} is not a viable option for them.
They opt for the strategy adopted in Ref.~\cite{Giusti:2008vb}, which is inspired by NLO ChPT. They fit the mode number linearly in $M$ in the range $50$--$120$~MeV, and they extrapolate the results to the chiral and continuum limits linearly. The smaller quark masses and in particular the smaller values of $\Lambda_{\rm R}$ that we considered were instrumental in properly quantifying and eventually reducing our systematic error.

\subsection{Finite-size effects} We estimate finite-volume effects using NLO ChPT (see Appendix \ref{app:modenumber:chpt}), and choose the parameters such that they are negligible within the statistical accuracy. For the lattice E5 we have explicitly checked that finite-size effects are within the expectations of ChPT by comparing the values of the mode number with those obtained on a $48\times 24^3$ lattice, the lattice D5 in Table~\ref{tab:lambda2} of Appendix \ref{sec:parms}.

\section{Results and conclusions} Our results show that in QCD with two flavours the low modes of the Dirac operator do condense in the continuum limit, as expected from the Banks--Casher relation in the presence of spontaneous symmetry breaking. The spectral density of the Dirac operator at the origin in the chiral limit is $[\pi\rho^{\rm \overline{MS\kern-0.05em}\kern0.05em}(2\, \mbox{GeV})]^{1/3}= 261(6)(8)~\mbox{MeV}$, where the first error is statistical and the second is systematic. The latter is estimated so that the results from the various fits are within the range covered by the systematic error: in particular the smaller value that we find in Section~\ref{sec:seccont} when a $\Lambda_{\rm R}^2$ term is included in the fit function, and the larger one obtained in Section~\ref{sec:seccombined} when some of the data at the coarsest lattice spacing are excluded from the fit. From the GMOR relation the best value of the chiral condensate that we obtain is $[\Sigma^{\rm \overline{MS\kern-0.05em}\kern0.05em}_{\rm GMOR}(2\, \mbox{GeV})]^{1/3}= 263(3)(4)$~MeV, where again the first error is statistical and the second is systematic. The spectral density at the origin thus agrees with $M_\pi^2 F_\pi^2/(2m_{\rm R})$ when both are extrapolated to the chiral limit. For the sake of clarity, the above values of the condensate have been expressed in physical units by supplementing the theory with a quenched ``strange'' quark, and by fixing the lattice spacing from the kaon decay constant $F_K$. They are therefore affected by an intrinsic ambiguity due to the matching of $F_K$ in the $N_f=2$ partially quenched theory with its experimental value. The renormalisation-group-invariant dimensionless ratio \begin{equation} \frac{[\Sigma^{\rm RGI}]^{1/3}}{F} = 2.77(2)(4)\; , \end{equation} however, is a parameter-free prediction of the $N_f=2$ theory. It belongs to the family of unambiguous quantities that should be used for comparing computations in the two-flavour theory, rather than those expressed in physical units~\cite{Aoki:2013ldr}.

\acknowledgments{ Simulations have been performed on BlueGene/Q at CINECA (CINECA-INFN agreement), on HLRN, on JUROPA/JUQUEEN at J\"ulich JSC, on PAX at DESY--Zeuthen, and on Wilson at Milano--Bicocca. We thank these institutions for the computer resources and the technical support. We are grateful to our colleagues within the CLS initiative for sharing the ensembles of gauge configurations. G.P.E.~and L.G.~acknowledge partial support by the MIUR-PRIN contract 20093BMNNPR and by the INFN SUMA project. S.L.~and R.S.~acknowledge support by the DFG Sonderforschungsbereich/Transregio SFB/TR9. }
\section{Introduction} Informational bias broadly exists in news articles. As a sort of framing bias, it frames a certain entity through specific aspects, using narrow, speculative or suggestive information to guide a particular interpretation and thus sway readers' opinions. For most people, news articles are the main source of information, and they therefore play a central role in shaping individual and public opinion. However, news reports often contain bias. Current research is often limited to lexical bias, a form of bias that rarely depends on the context of the sentence and can be eliminated by deleting or replacing a small number of biased words. In contrast, \citet{fan-etal-2019-plain} found that informational bias is more common and more difficult to detect. Unlike other types of bias, sentence-level informational bias detection depends largely on the context, which makes the task very challenging. A sentence may appear neutral on its own but be revealed as biased once the context is taken into account. Take the second row in Table \ref{tab:basil} as an example: the sentence \textit{``Mr. Mattis, a retired four-star Marine general, was rebuffed.''} \label{sent:mattis} seems to be a very simple declarative sentence stating a fact. However, if we read the previous sentence \textit{``Officials said Mr. Mattis went to the White House with his resignation letter already written, but nonetheless made a last attempt at persuading the president to reverse his decision about Syria, which Mr. Trump announced on Wednesday over the objections of his senior advisers.''} (the first row in Table \ref{tab:basil}), we understand that \textit{`a retired four-star Marine general'} carries a negative, even ironic tone towards Mr. Mattis and his last attempt. Therefore, sentence-level informational bias can only be revealed by collecting information from various sources and analyzing the entire article together with its background. Such subtleties of informational bias are more likely to affect unsuspecting readers, which underlines the need for research into new detection methods. In this paper, we propose MultiCTX (Multi-level ConTeXt), a model composed of contrastive learning and sentence graph attention networks that encodes three different levels of context: \textbf{1) Neighborhood-context}: adjacent sentences, i.e. sentences in the same article around the target sentence; \textbf{2) Article-context}: the whole article containing the target sentence; \textbf{3) Event-context}: articles from various news media reporting the same event. These three levels encompass contextual information from the most local to the most global. In order to make use of the context rather than be overwhelmed by the noise it introduces, MultiCTX first applies contrastive learning, which learns sentence embeddings by discriminating among (target, positive sample, negative sample) triplets so as to distill the essence of the target sentence. The quality of the learned CSE (Contrastive Sentence Embedding) relies on the quality of the triplets. Unlike the traditional brute-force approach of selecting triplets based only on their labels, MultiCTX also considers article-level information, which yields higher-quality triplets. Such a triplet formulation ensures that our CSEs absorb the context and reflect the inherent semantics of sentences rather than shallow lexical features. MultiCTX then builds a relational sentence graph using CSEs.
Two sentences are connected by an edge if they are logically related within the same \textit{neighborhood}, or if they share entities or are semantically similar within the same \textit{event}. Finally, we apply a Self-supervised Graph Attention Network (SSGAT) on our sentence graph to make the final informational bias prediction. The SSGAT structure encodes neighborhood-level and event-level context via edges, making it possible for textually distant but contextually close sentences to be connected directly. The flexible graph structure also extends beyond the sequential arrangement of traditional LSTMs, which likewise consider the surrounding context. Although document graphs are not rare in NLP tasks, they are usually built over short texts by token-wise dependency parsing; such graphs may suffer from high complexity and considerable noise when applied to long texts, which is our case with news articles. Our relational sentence graph uses sentence nodes and focuses on inter-sentence relationships. It requires only minimal syntactic parsing, introduces less noise and has better interpretability. Little research studies sentence-level informational bias detection by incorporating context. \citet{fan-etal-2019-plain} first published a human-annotated dataset for this task, taking the context into account during annotation. However, sentences are still treated individually in their model. \citet{van-den-berg-markert-2020-context} conducted preliminary research on incorporating different levels of context in informational bias detection; however, they consider only one kind of context in each model. To the best of our knowledge, our model is the first to incorporate multi-level contextual information in a sentence-level classification task. In summary, we present the following contributions: \begin{itemize} \item We are the first to incorporate three different levels of context together in the sentence-level bias detection task. By introducing context, we aim to simulate how people learn new things in real life: reading widely, forming a general picture and reasoning thoroughly. \item We propose a novel triplet formation for contrastive learning in bias detection. The methodology can be generalized to other tasks. \item We are the first to use a sentence graph to encode textual context information in the bias detection task. \item Our model MultiCTX significantly outperforms the current state-of-the-art model by 2 percentage points in F1 score. This indicates that contextual information effectively helps sentence-level informational bias detection and that our model successfully incorporates multi-level context. \end{itemize}

\section{Methodology} Figure \ref{fig:model} illustrates our model MultiCTX (Multi-level ConTeXt). First, we carefully construct triplets from the original dataset and then apply supervised contrastive learning on them to obtain sentence embeddings. Second, we build relational sentence graphs by joining sentence nodes according to their discourse relationships and semantic similarity. Finally, we apply a Self-supervised Graph Attention Network \citep{kim2021how} to perform bias detection as a node classification task. In essence, MultiCTX has two modules, Contrastive Sentence Embedding (CSE) and Self-supervised Sentence Graph Attention Network (SSGAT). In order to investigate the role of the context and to imitate the way people learn from news reports, we also apply a more reasonable and challenging cross-event data splitting.
\subsection{Data splitting} First, consider the nature of news reports and the way people learn about the world in real life. News articles emerge almost simultaneously in large numbers around a particular event, over which people reason based on the experience learnt from previous events. Moreover, people usually read an article as a whole instead of randomly picking out several sentences, and they are unlikely to encounter a sentence from news events that happened before. Additionally, people tend to collect information from more than one article to get a bigger picture of the new event. Therefore, in order to simulate this human learning process, instead of the commonly used data splitting which randomly distributes sentences to one of the three subsets (train/val/test), we use the event-wise data splitting of \citet{van-den-berg-markert-2020-context} and \citet{chen-etal-2020-detecting}. We treat the articles reporting the same event as a unit and keep sentences from the same event in the same subset. Part of the data is shown in Table~\ref{tab:basil} with a clear `adjacent sentences', `article' and `event' structure. Furthermore, splitting by events is more reasonable and more demanding in terms of model generalizability, since it requires identifying informational bias in unseen events. Experiments in \citet{van-den-berg-markert-2020-context} and \citet{chen-etal-2020-detecting} also show that common models, including BERT-based models, all experience a considerable performance drop when switching from random splitting to event-based splitting.

\begin{table*}[th] \centering \begin{tabular}{cccp{12cm}c} \toprule \textbf{Event} & \textbf{Source} & \textbf{Index} & \textbf{Sentence} & \textbf{Label} \\ \midrule 86 & nyt & 3 & Officials said Mr. Mattis went to the White House with his resignation letter already written, but nonetheless made a last attempt at persuading the president to reverse his decision about Syria, which Mr. Trump announced on Wednesday over the objections of his senior advisers. & 0\\ \hline 86 & nyt & 4 & Mr. Mattis, a retired four-star Marine general, was rebuffed. & 1\\ \hline 86 & nyt & 5 & Returning to the Pentagon, he asked aides to print out 50 copies of his resignation letter and distribute them around the building. & 0 \\ \midrule 11 & fox & 20 & However, Democrats rejected the plan even before Trump announced it, and a Senate version of the plan failed to get the 60 votes needed on Thursday. & 1\\ \hline 11 & fox & 21 & A second bill, already passed by the Democrat-controlled House to re-open the government, also fell short. & 0\\ \midrule 2 & hpo & 10 & There were roughly 520,000 arrests for unauthorized border crossings last year, which is about one-third of the 1.6 million arrests that happened in 2000. & 0\\ \hline 2 & hpo & 11 & Since 2014, a high proportion of those crossing have been Central American children and families seeking to make humanitarian claims such as asylum. & 1\\ \bottomrule \end{tabular} \caption{Example sentences from the BASIL dataset.} \label{tab:basil} \end{table*}

\subsection{Sentence Embedding using Contrastive Learning} The idea of contrastive learning is that humans discriminate objects by ``comparison'': similar objects should be close to each other in the representation space, and different objects should be as far apart as possible. However, news sentences inherently show only small differences at the level of surface text.
Two sentences with opposite stances might differ in only a few words, while two sentences expressing the same idea are likely to be formulated completely differently. To address this problem, we apply supervised contrastive learning with hard negatives, as described in \citet{gao2021simcse}. The idea is to build, from the original dataset, triplets $(x_i,x^+_i,x^-_i)$ denoting the target sentence, a positive sample and a negative sample, respectively. Using the representations $\mathbf{h}_i,\mathbf{h}_i^+,\mathbf{h}_i^-$ of $x_i,x^+_i,x^-_i$, the objective function to minimize is the InfoNCE loss. The difficulty is to mine the positive and negative samples for each target sentence from the original dataset. A good positive sample is supposed to capture the most essential features of the sentence, rather than being influenced by other factors such as the writing styles of different news media. Therefore, the best positive sample is expected to be significantly different from the target sentence in terms of sentence formation, while the best negative sample should be similar to the target sentence in terms of syntactic structure. In short, samples whose labels differ from that of the target sentence but whose initial embeddings lie in its vicinity are likely to be the most useful, as they provide significant gradient information during training. Inspired by \citet{baly-etal-2020-detect}, which applies a triplet loss in training and uses the news medium in triplet selection, our final triplets follow article-based criteria and are composed of: $x_i$, the target sentence; $x_i^+$, with the same label and event as $x_i$ but from a different article; and $x_i^-$, from the same article as $x_i$ but with a different label. Figure \ref{fig:triplet} illustrates our triplet construction. In this way we essentially augment the original 7977-sentence corpus into a much larger dataset of around 300k triplets, in which sentences are no longer isolated but linked to two others. More importantly, the triplets sharing the same target sentence together provide a microscopic `context' for the target sentence that helps its representation learning. This process naturally incorporates article-level and event-level context information: \begin{itemize} \item The negative samples are from the same article as the target sentence, so they provide an article-level context. Written by the same author, these sentences are similar in writing style and lexical patterns. Therefore, the article-level context formed by the negative samples not only supplies the necessary background of the story; more importantly, it exposes the stylistic ``skin'' of the article, forcing the contrastive learning model to look past the superficial rhetoric and wording and to truly understand the article. \item All samples are from the same event, so they provide event-level contextual information. Therefore, from the perspective of the target sentence, the positive and negative samples together provide a small but complete world that covers most of the event, from which the model can freely draw information and obtain a broad, general overview of the event. The target representations can therefore be both comprehensive and fair. Additionally, since the positive samples come from different news media and the negative samples come from the same news medium as the target sentence, our model can factor out the influence of media-specific writing styles and is encouraged to learn the essential meaning. In summary, our triplet construction process simulates the human learning principle of wide reading. \end{itemize}
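For concreteness, the supervised InfoNCE objective with hard negatives mentioned above takes, in the form used by \citet{gao2021simcse} (up to their optional hard-negative weighting), the following shape for a mini-batch of $N$ triplets with representations $(\mathbf{h}_i,\mathbf{h}_i^+,\mathbf{h}_i^-)$, cosine similarity $\mathrm{sim}(\cdot,\cdot)$ and temperature $\tau$:
\[
\ell_i = -\log \frac{e^{\mathrm{sim}(\mathbf{h}_i,\mathbf{h}_i^+)/\tau}}{\sum_{j=1}^{N}\left(e^{\mathrm{sim}(\mathbf{h}_i,\mathbf{h}_j^+)/\tau}+e^{\mathrm{sim}(\mathbf{h}_i,\mathbf{h}_j^-)/\tau}\right)}\; .
\]
The triplet selection rule itself can be sketched as follows; this is an illustrative simplification (it enumerates all admissible pairs rather than sampling them) and not our actual implementation:
\begin{verbatim}
def build_triplets(sentences):
    """Illustrative sketch of the triplet selection rule.
    `sentences` is a list of dicts with keys
    'text', 'label', 'event', 'article'."""
    triplets = []
    for t in sentences:
        # positive: same label and same event, but a different article
        positives = [s for s in sentences
                     if s['event'] == t['event']
                     and s['article'] != t['article']
                     and s['label'] == t['label']]
        # negative: same article (hence same event), different label
        negatives = [s for s in sentences
                     if s['article'] == t['article']
                     and s['label'] != t['label']]
        for p in positives:
            for n in negatives:
                triplets.append((t['text'], p['text'], n['text']))
    return triplets
\end{verbatim}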
\begin{figure}[!htbp]\centering \includegraphics[width=.7\linewidth]{img/triplet.PNG} \caption{Triplet construction. The positive sample $x_i^+$ has the same label (red) and event as the target sentence $x_i$; the negative sample $x_i^-$ has a different label (blue) but comes from the same article as $x_i$; note that all three sentences must be from the same event.} \label{fig:triplet} \end{figure}

\subsection{Relational Sentence Graph} Sentences are naturally suitable as nodes when encoding long documents, so we borrow the idea of \citet{christensen-etal-2013-towards} and \citet{summpip} from extractive text summarization to construct our graphs. The graphs are formed by connecting the sentences in four different ways, illustrated in Figure~\ref{subfig:edgetype}: \begin{enumerate} \label{para:edge} \item Deverbal noun reference: when an action in verb form occurs in the current sentence, it is likely to be mentioned in noun form in the following sentences. We therefore connect the current sentence to a downstream sentence when at least one semantically similar deverbal noun is found in the latter. \item Discourse marker: if the immediately subsequent sentence begins with a discourse marker (e.g., however, meanwhile, furthermore), the two sentences are linked. \item Entity continuation: we connect two sentences in the same event if they contain the same entity. \item Sentence similarity: sentence pairs in the same event with high cosine similarity are joined. \end{enumerate}

\begin{figure}[!htbp] \centering \begin{subfigure}{0.8\linewidth} \centering \includegraphics[width=\linewidth]{img/edgetype.PNG} \caption{Four types of edges in the relational sentence graph. \underline{Underline}: deverbal noun reference; \textit{italic}: discourse marker; \textbf{bold}: entity continuation; \textcolor{blue!60}{colored}: sentence similarity} \label{subfig:edgetype} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{img/mattis0_src.png} \caption{Partial relational sentence graph, where nodes are colored by news source: HPO (yellow), NYT (orange), FOX (purple)} \label{subfig:mattissrc} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{img/mattis0_bias.png} \caption{Partial relational sentence graph, where nodes are colored by bias label: biased (yellow), non-biased (purple)} \label{subfig:mattisbias} \end{subfigure} \caption{Relational sentence graph} \label{fig:adg} \end{figure}

The four types of edge formation take different degrees of context into account: Type 1 and Type 2 consider only subsequent sentences in the same article (neighborhood-context); in particular, Type 2 considers only the immediately following sentence. Type 3 and Type 4 are not limited to adjacent sentences; rather, they consider the whole event (event-context). Note that edges occur only between sentences of the same event, which is consistent with our event-based splitting. Figures \ref{subfig:mattissrc} and \ref{subfig:mattisbias} present the same subgraph, taken directly from the actual relational sentence graph in our study. The two figures differ only in their coloring: the nodes in Figure \ref{subfig:mattissrc} are colored according to the news source, i.e. HPO (yellow), NYT (orange), FOX (purple), while Figure \ref{subfig:mattisbias} on the right is colored according to the bias labels, i.e. biased (yellow), non-biased (purple).
This subgraph presents all edges connected to the sentence \textit{``Mr. Mattis, a retired four-star Marine general, was rebuffed.''} (NYT) discussed in Section \ref{sent:mattis}, and we see that the first sentence in Table \ref{tab:basil} is indeed linked to it. Moreover, we see that the relational sentence graph integrates information from different news media. Most sentences related to the target sentence that come from HPO (yellow) and FOX (purple) in Figure \ref{subfig:mattissrc} are biased (yellow) according to Figure \ref{subfig:mattisbias}. Therefore, the event context contained in articles from different news outlets effectively helps identify biased sentences. Our graph composition is intended to mimic the way humans develop views: people acquire information through the immediate context within an article and reason by aggregating background knowledge from different news reports of the whole event. Note that during the formation of our relational sentence graph, edges only appear between sentences of the same event. This is compatible with our event-based data splitting and ensures that SSGAT can be trained on the entire graph without data leakage.

\subsection{Graph Attention Network} As a representative class of graph convolutional networks, Graph Attention Networks (GATs) introduce an attention mechanism to achieve better neighbor aggregation. By learning weights for the neighbors, a GAT obtains the representation of the target node as a weighted aggregation of the neighbor node representations. However, it may suffer from graph noise introduced by incorrect node linking. In our study, we use the Self-supervised Graph Attention Network of \citet{kim2021how}, which adds an edge-presence prediction task on top of the GAT and thus puts more emphasis on distinguishing misconnected neighbors. The graph structure naturally places each sentence within its context, so that different sentences are no longer isolated. The flexibility of the graph structure also allows it to move beyond the ordered arrangement of a traditional LSTM: two sentences can be directly connected by an edge even if they are far apart in the original article or belong to different articles. Note that our sentence graph contains no edges between two events, which again ensures that no data leakage occurs when training the GAT on the whole graph.
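For reference, a single attention head of a standard GAT layer (in the original formulation of Veli\v{c}kovi\'c et al., 2018) computes, for a sentence node $i$ with neighborhood $\mathcal{N}(i)$ in our relational graph,
\[
e_{ij} = \mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j]\big), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k\in\mathcal{N}(i)}\exp(e_{ik})}, \qquad
\mathbf{h}_i' = \sigma\Big(\sum_{j\in\mathcal{N}(i)}\alpha_{ij}\mathbf{W}\mathbf{h}_j\Big),
\]
where $\Vert$ denotes concatenation, $\mathbf{W}$ and $\mathbf{a}$ are learned parameters and $\sigma$ is a nonlinearity; the self-supervised variant of \citet{kim2021how} additionally optimizes the edge-presence prediction objective mentioned above on top of this aggregation.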
\begin{figure*}[!htbp]\centering \includegraphics[width=\linewidth]{img/multictx.PNG} \caption{Our model MultiCTX.} \label{fig:model} \end{figure*}

\section{Experiment and Results} We use the BASIL (Bias Annotation Spans on the Informational Level) dataset proposed by \citet{fan-etal-2019-plain} for the sentence-level informational bias detection task. We experiment with four baselines, including the current state-of-the-art model, and four variants of MultiCTX in order to fully demonstrate the utility of each module. Our results suggest that MultiCTX greatly outperforms the current SOTA and effectively incorporates contextual information in sentence-level informational bias detection.

\subsection{Data} The BASIL dataset provides sentence-by-sentence, span-level annotation of informational bias for 300 English news articles grouped in 100 triplets, each discussing the same event from three news outlets. The articles are selected to ensure fair coverage in terms of time and ideology: 1) from 2010 to 2019, 10 events per year are included in the dataset; 2) Fox News (FOX), New York Times (NYT) and Huffington Post (HPO), representative of conservative, neutral and liberal US journalism respectively, are chosen as the three news sources. For the sentence-level informational bias detection task, we use the same data formulation as \citet{van-den-berg-markert-2020-context}. In this sentence-wise binary classification task, a sentence is labeled as biased if it contains at least one informational bias span; seven empty sentences are removed, resulting in a total of 7977 sentences, of which 1221 are annotated as biased. Examples are shown in Table \ref{tab:basil}.

\subsection{Set-up} We use the same 10-fold cross-validation event split as \citet{van-den-berg-markert-2020-context} to facilitate the comparison. Each fold has 80/10/10 non-overlapping events for the train/val/test partition, and sentences from the same event never appear simultaneously in two different subsets within one fold. There are on average 6400/780/790 sentences in the train/val/test sets, respectively. We use 5 different seeds for each method, and use the F1 score, precision and recall (with `biased' as the positive class) as evaluation metrics. For each experiment, the mean value and standard deviation across the 5 seeds are reported where applicable. Note that in the contrastive learning module, the triplets are constructed only from the training set; the trained model then performs sentence-by-sentence inference on the test set (no triplets are constructed there) to obtain its sentence embeddings. The CSEs are thus obtained via inference, during which no labels are seen. We use the same hyper-parameters provided in \citet{van-den-berg-markert-2020-context} to reimplement the BERT, RoBERTa and WinSSC baselines. However, for EvCIM we reduce the training epochs from 150 to 75 and increase the batch size from 32 to 64 due to the considerable training time. For MultiCTX, we train a RoBERTa-based contrastive learning model following the implementation of \citet{gao2021simcse}. Due to unavoidable non-deterministic atomic operations in the implementation of GAT, the results presented below may not be exactly reproducible, but we report averages over our experiments to reflect their range. All models are trained and evaluated on a GeForce GTX 1080 Ti GPU with 11GB of memory and an Intel(R) Xeon(R) CPU E5-2630 with 128GB of RAM.

\subsection{Baselines} There are few existing models for sentence-level informational bias detection. \citet{fan-etal-2019-plain} proposed the BASIL dataset and corresponding BERT and RoBERTa benchmarks. \citet{van-den-berg-markert-2020-context} proposed several models trying to incorporate context in different ways. We take two of them, WinSSC and their best and currently state-of-the-art model EvCIM, as our baselines. A few other works use the BASIL dataset, but with objectives other than sentence-level informational bias detection. Thus we have four baseline models: \begin{itemize} \item \textbf{BERT} \citep{devlin-etal-2019-bert} and \textbf{RoBERTa} \citep{liu2019roberta}: we finetune $\text{BERT}_{base}$ and $\text{RoBERTa}_{base}$ on the individual-sentence informational bias detection task. \item \textbf{WinSSC} \citep{van-den-berg-markert-2020-context} WinSSC (windowed Sequential Sentence Classification) is a variant of SSC \citep{cohan-etal-2019-pretrained}.
We include it as one of the baselines because SSC implements the most natural idea that comes to mind when thinking of using context: directly feeding sequences of consecutive sentences to BERT. SSC feeds the concatenation of sentences from a chunk of the document to pretrained language models (PLMs), and then classifies each sentence using the embedding of the separator token \texttt{[SEP]} at its end. SSC uses non-overlapping chunks, while WinSSC builds chunks that overlap at both ends, which retains the contextual information for the sentences at the chunk boundaries. \item \textbf{EvCIM}\label{para:evcim}: PLM embeddings + BiLSTM. EvCIM (Event Context-Inclusive Model), proposed by \citet{van-den-berg-markert-2020-context}, is the SOTA model on the BASIL dataset and also uses contextual information. It takes the average of the last four layers of a fine-tuned $\text{RoBERTa}_{base}$ as the sentence embedding, and then uses a BiLSTM to encode each article of the same event as the target sentence. Finally, it concatenates the three article representations and the target sentence embedding to make the sentence-level prediction. Besides using the hyper-parameters from the original paper, we also generate results with a separate set of reasonable hyper-parameters. Below we present results both from the original paper and from our experiments. \end{itemize}

\subsection{Our Models} \begin{itemize} \item \textbf{CSE: Contrastive Sentence Embedding} CSE denotes the sentence embeddings obtained directly from contrastive learning. Here we use the term to refer to the classification model consisting of a logistic regression on top of CSE. \item \textbf{MultiCTX w/o SSGAT}: CSE + BiLSTM. Similar to EvCIM described in Section \ref{para:evcim}, we utilize the BiLSTM-encoded context as well as the target sentence to perform the sentence-wise classification. However, instead of the average of the last four layers of fine-tuned $\text{RoBERTa}_{base}$ (RoBERTa embedding, or PLM embedding) used in EvCIM, we use CSE in our study. Moreover, we also add news source embeddings before the final fully connected classification layer, on top of the BiLSTM-encoded in-event article embeddings. In the original paper \citep{van-den-berg-markert-2020-context}, adding news source embeddings hurts EvCIM's performance, but since it is useful for EvCIM w/ CSE according to our experiments, we use this version here. This also indicates that CSE captures the inherent properties of sentences better than PLM embeddings, and can therefore incorporate extra news-source information rather than be disturbed by it. \item \textbf{MultiCTX w/o CSE}: RoBERTa embeddings + SSGAT. We use the original sentence embedding of EvCIM, i.e. the average of the last four layers of fine-tuned $\text{RoBERTa}_{base}$, to build the relational sentence graph. We then apply the Self-supervised GAT on the graph (SSGAT, Self-supervised Sentence GAT). In other words, we replace CSE in MultiCTX with EvCIM's sentence embedding. \item \textbf{MultiCTX}: our full model (CSE + SSGAT). MultiCTX first performs contrastive learning on carefully composed triplets to obtain CSE. It then builds the relational sentence graph according to inter-sentence relationships. Finally, MultiCTX applies the Self-supervised GAT described above to obtain the final sentence-level informational bias prediction.
\end{itemize}

\begin{table*}[!htbp] \centering \begin{tabular}{l|l|p{5em}p{5em}ccc} \toprule \multicolumn{2}{c}{ \textbf{Model}$^{*}$} & \textbf{Sentence embedding} & \textbf{Structure to encode context} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\ \midrule \multirow{5}{*}{baselines} & $\text{BERT}_{base}$ & NA & No context & & $40.44 \pm 1.07^{**}$ & $35.49\pm 0.67$ \\ \cline{2-7} &$\text{RoBERTa}_{base}$ & NA & No context & $38.40 \pm 0.64$ & $48.53\pm 1.45$ & $42.13 \pm 1.02$ \\ \cline{2-7} & WinSSC & [SEP] embed. & Text chunks & $41.47\pm1.31$& $34.37\pm 0.57$ &$37.58 \pm 0.77$ \\ \cline{2-7} & {EvCIM (original paper)} & \multirow{2}{*}{RoBERTa} & \multirow{2}{*}{BiLSTM} & $39.72\pm 0.59$ & $49.60 \pm 1.20$ & $44.10\pm 0.15$ \\ & {EvCIM (our reproduction)} && & $38.40 \pm 0.64$ & $48.53\pm 1.45$ & $42.87 \pm 0.69$ \\ \hline \multirow{4}{*}{models}&CSE & CSE & No context & 47.53 & 40.13 & 43.51 \\ \cline{2-7} & MultiCTX w/o SSGAT & CSE& BiLSTM &$48.53 \pm 0.73$ & $41.98\pm 0.36$ & $45.01 \pm 0.26$\\ \cline{2-7} & MultiCTX w/o CSE & RoBERTa & SSGAT & $46.89 \pm 0.71$ & $42.88\pm 0.67$ & $44.79 \pm 0.63$ \\ \cline{2-7} & MultiCTX & CSE & SSGAT & $47.78 \pm 0.94$ & $44.50 \pm 0.65$ & $\mathbf{46.08 \pm 0.21}^{***}$ \\ \bottomrule \multicolumn{7}{l}{$^{*}$ All results are implemented or reproduced by ourselves, except for EvCIM (original paper).} \\ \multicolumn{7}{l}{$^{**}$ Mean value and standard deviation across 5 seeds are reported where applicable.} \\ \multicolumn{7}{l}{$^{***}$ The best result on a single run obtained in our experiments is \textbf{F1=46.74}.} \\ \end{tabular}
\caption{Results, showing the sentence embedding method, the structure used to encode the context, and the precision, recall and F1 score of each model.} \label{tab:res}
\end{table*}

The results are shown in Table \ref{tab:res}. By varying the sentence embedding method and the structure used to encode contextual information, we are able to demonstrate the respective utility of the two modules, CSE and SSGAT. The table presents the results (precision, recall, F1 score) of the four baseline models, our model MultiCTX and its variants. The baseline results are our reproductions, most of which are close to the original work, except for EvCIM, for which we report both. In addition, we used 5 different random seeds, and the table shows their means and standard deviations (where applicable). From the results, we can draw the following conclusions: \paragraph{1. Contrastive learning helps improve sentence embeddings.} With the structure used to encode context kept fixed, models using CSE as the sentence embedding method perform best. When the context is not structurally introduced, the performance of CSE (F1=43.51), classified purely by logistic regression, is better than that of BERT (F1=35.49) and RoBERTa (F1=42.13); moreover, with a BiLSTM as the structure to encode the context, MultiCTX w/o SSGAT (F1=45.01) outperforms EvCIM as reported in its original paper \citep{van-den-berg-markert-2020-context}. Since EvCIM uses the mean of the last four layers of the fine-tuned $\text{RoBERTa}_{base}$ as the sentence embedding, contrastive learning produces better sentence representations than BERT-based PLMs; the same conclusion can also be drawn from the comparison of MultiCTX w/o CSE (F1=44.79) and MultiCTX (F1=46.08).
The reasons may be as follows: \begin{itemize} \item BERT-based PLMs tend to encode all sentences into a small region of the representation space, which results in high similarity scores for most sentence pairs, even those that are semantically completely unrelated. Specifically, when the sentence embeddings are computed by averaging the word vectors, they are easily dominated by high-frequency words, making it difficult for them to reflect the original semantics. \item Instead of individual sentences, CSE considers for each target sentence a context built up by all its positive and negative counterparts in the related triplets. Among them, the negative samples provide an article-level context and the positive samples provide an event-level context. Driven by the contrastive-learning goal of ``distilling the essence'', CSE learns from this context and naturally suppresses such shallow high-frequency-word features, thus avoiding similar representations for semantically different sentences. \end{itemize} \paragraph{2. Brute-force encoding of sequential sentences by a PLM may fail.} WinSSC attempts to exploit adjacent sentences by directly feeding sequences of consecutive sentences into the PLM, and it is the least successful among all models attempting to incorporate contextual information. It is even worse (F1=37.58) than the original $\text{RoBERTa}_{base}$ (F1=42.13). There are two possible reasons. First, sentence chunks instead of individual sentences are taken as input, which reduces the effective amount of training data. Second, BERT-based pretrained language models are not good at processing long text: simply joining neighboring sentences may introduce more noise and complexity rather than help integrate the context. Therefore, brute-force concatenation of sequential sentences can rarely make use of the contextual information, and it probably brings in more noise while reducing the data quantity. \paragraph{3. Contextual information can effectively improve performance.} Except for WinSSC, all models using some kind of context-introducing structure (BiLSTM or SSGAT) outperform the BERT and RoBERTa baselines. This shows that the introduction of context is indeed helpful for the detection of informational bias. \paragraph{4. SSGAT is the best structure to integrate context.} With the sentence embedding method kept fixed, the models using SSGAT as the structure to introduce context achieve better results. When both use RoBERTa embeddings, MultiCTX w/o CSE with SSGAT (F1=44.79) outperforms EvCIM with BiLSTM (F1=44.10 in the original paper and F1=42.87 in our reproduction); when both use CSE, MultiCTX with SSGAT (F1=46.08) outperforms MultiCTX w/o SSGAT (with BiLSTM, F1=45.01) and CSE without context (F1=43.51). These results show that our sentence graph structure is more effective in encoding contextual information than sequential models such as BiLSTM. \paragraph{5. Contrastive learning together with the sentence graph achieves the best performance.} Our full model MultiCTX achieves F1=46.08 in the sentence-level informational bias detection task, significantly outperforming the current state-of-the-art model EvCIM \citep{van-den-berg-markert-2020-context} (F1=44.10 as reported in the original paper). Possible reasons are: 1) the BiLSTMs in EvCIM are limited to the event context; 2) MultiCTX uses better sentence representations (CSE); 3) MultiCTX incorporates context at several levels, explicitly through the graph structure and implicitly via contrastive learning.
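The anisotropy argument in conclusion~1 above, namely that mean-pooled PLM embeddings tend to be dominated by a shared high-frequency-word component, can be illustrated with a small synthetic example: two vectors that share a dominant common component have a cosine similarity close to 1 even though their distinguishing parts are unrelated. The snippet below is only a toy illustration with random vectors, not an experiment on actual sentence embeddings.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
common = 5.0 * rng.normal(size=768)   # shared dominant component
u = common + rng.normal(size=768)     # two "sentence embeddings" that
v = common + rng.normal(size=768)     # differ only in a small residual
cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(round(float(cos), 3))  # ~0.96: high despite unrelated residuals
\end{verbatim}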
\section{Ablation Analysis} We have shown that both CSE and SSGAT are essential for MultiCTX; in this section, we further explore the roles of the different inter-sentence relationships in our model. We keep the CSEs fixed and modify our relational sentence graph by removing certain types of edges, and then report the results to see how each part contributes to MultiCTX in our informational bias detection task. The edge types described in Section \ref{para:edge} can be briefly summarized in two categories: \begin{itemize} \item Type 1, 2 and 3 are discourse relationships \item Type 4 is semantic similarity \end{itemize} Besides, they can also be partitioned by level of context: \begin{itemize} \item Type 3 and 4 are event-level \item Type 1 and 2 are neighborhood-level and article-level \end{itemize} We focus on their utility in our ablation study. Table \ref{tab:ablation} shows the ablation results. Horizontally, the first row represents the comparison between discourse relationships and semantic similarity, and the second row represents the comparison between event-level context and neighborhood-level context. Vertically, the three ablation experiments in the first column compare the utility of Type [1,2] / Type 3 / Type 4 edges. Here we analyze Types 1 and 2 together.

\begin{table}[htbp] \begin{tabular}{rc|p{6em}p{6em}} \multirow{1}{*}{\rotatebox[origin=r]{90}{\textcolor{red!60}{\textbf{$\xLeftarrow{\hspace*{5cm}\textbf{Vertical comparison: Edge types}}$}}}} & \multicolumn{3}{l}{\textcolor{blue!60}{$\boldsymbol{\xRightarrow{\hspace*{4cm}\textbf{Horizontal comparison}}}$}}\\ &\textcolor{blue!60}{\textbf{Horizontal}} & \textcolor{blue!60}{\textbf{Discourse relationship }} & \textcolor{blue!60}{\textbf{Semantic similarity}} \\ & & Type [1,2]$^{*}$,3 & Type 4 \\ &\textcolor{red!60}{\textbf{Vertical}} & \textcolor{red!60}{\textbf{(w/o Type 4) }} & \\ \cmidrule[1pt]{2-4} &Precision & $47.43 \pm 0.96$ & $47.16 \pm 0.27$ \\ &Recall & $44.39\pm 0.84$ & $43.47\pm 0.38$ \\ &F1 & $45.85\pm0.35$ & $45.24\pm0.18$ \\ \cmidrule[1pt]{2-4} & \textcolor{blue!60}{\textbf{Horizontal}} & \textcolor{blue!60}{\textbf{Event-context}} & \textcolor{blue!60}{\textbf{Neighborhood-context}} \\ & & Type 3,4 & Type [1,2] \\ &\textcolor{red!60}{\textbf{Vertical}} & \textcolor{red!60}{\textbf{(w/o Type 1,2)}} & \\\cmidrule[1pt]{2-4} &Precision & $47.07\pm 0.99$ & $47.18 \pm 1.08$ \\ &Recall & $44.64\pm 0.37$ & $44.01\pm 0.91$ \\ &F1 & $45.81\pm0.42 $ & $45.53\pm0.29$ \\ \cmidrule[1pt]{2-4} & & Type [1,2],4 & \\ & \textcolor{red!60}{\textbf{Vertical}} & \textcolor{red!60}{\textbf{(w/o Type 3) }} & \\\cmidrule[1pt]{2-4} & Precision & $47.56 \pm 0.62$ &\\ &Recall & $43.72\pm 0.76$ &\\ &F1 & $45.55\pm0.34$ &\\ \cmidrule[1pt]{2-4} & \multicolumn{3}{p{18em}}{$^{*}$ Type 1,2 represent the neighborhood-context, so we treat them as a whole.}\\ \end{tabular}
\caption{Ablation study on different types of edges in SSGAT. Horizontally, the first two blocks compare discourse relationship vs. semantic similarity and event-context vs. neighborhood-context, respectively. Vertically, the first column compares the utility of the Type [1,2] vs. Type 3 vs. Type 4 edges.
Mean and standard deviation across 5 seeds are reported.} \label{tab:ablation} \end{table} In addition to the numerical results, we also examine the utility of the various edge types visually: we take one event from the dataset, connect its sentence nodes with each type of edge in turn, and plot the resulting graphs for comparison. Figure \ref{tab:ablationfig} shows these graphs, where purple nodes are unbiased sentences and yellow nodes are biased sentences. The nodes of each graph are the same; the only difference is the edge types. \begin{table}[htbp] \addtolength{\leftskip} {-0.7cm} \begin{tabular}{rc|p{8em}p{8em}} \multirow{1}{*}{\rotatebox[origin=r]{90}{\textcolor{red!60}{\textbf{$\xLeftarrow{\hspace*{12cm}\textbf{Vertical comparison: Edge types}}$}}}} & \multicolumn{3}{l}{\textcolor{blue!60}{$\boldsymbol{\xRightarrow{\hspace*{5cm}\textbf{Horizontal comparison}}}$}}\\ &\textcolor{blue!60}{\textbf{Horizontal}} & \textcolor{blue!60}{\textbf{Discourse relationship}} & \textcolor{blue!60}{\textbf{Semantic similarity}} \\ & & Type [1,2]$^{*}$,3 & Type 4 \\ &\textcolor{red!60}{\textbf{Vertical}} & \textcolor{red!60}{\textbf{(w/o Type 4)}} & \\ \cmidrule[1pt]{2-4} &&\begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{img/only123.png}\captionof{figure}{}\label{subfig:ab123} \end{minipage} &\begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{img/only4.png}\captionof{figure}{}\label{subfig:ab4} \end{minipage} \\ \cmidrule[1pt]{2-4} & \textcolor{blue!60}{\textbf{Horizontal}} & \textcolor{blue!60}{\textbf{Event-context}} & \textcolor{blue!60}{\textbf{Neighborhood-context}} \\ & & Type 3,4 & Type [1,2] \\ &\textcolor{red!60}{\textbf{Vertical}} & \textcolor{red!60}{\textbf{(w/o Type 1,2)}} & \\ \cmidrule[1pt]{2-4} &&\begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{img/only34.png}\captionof{figure}{}\label{subfig:ab34} \end{minipage} &\begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{img/only12.png}\captionof{figure}{}\label{subfig:ab12} \end{minipage} \\ \cmidrule[1pt]{2-4} & & Type [1,2],4 & Type 3\\ & \textcolor{red!60}{\textbf{Vertical}} & \textcolor{red!60}{\textbf{(w/o Type 3)}} & \textcolor{red!60}{\textbf{only Type 3}}\\ \cmidrule[1pt]{2-4} &&\begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{img/only124.png}\captionof{figure}{}\label{subfig:ab124} \end{minipage} &\begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{img/only3.png}\captionof{figure}{}\label{subfig:ab3} \end{minipage} \\ \cmidrule[1pt]{2-4} \end{tabular} \caption{Graphs of the ablation studies on different types of edges in SSGAT. All panels are based on the same event in the dataset. Purple nodes are unbiased sentences and yellow nodes are biased sentences. The nodes of each graph are the same; the only difference is the edge types.} \label{tab:ablationfig} \end{table} We can conclude the following. \begin{itemize} \item \textbf{Discourse relationships contribute more than semantic similarity to SSGAT.} SSGAT with only discourse relationship edges (F1=45.85) still performs close to the full MultiCTX, while SSGAT with only semantic similarity edges (F1=45.24) suffers a considerable drop in performance. Note that the semantic similarity is calculated from the CSEs, so SSGAT with only such edges adds little extra information and may introduce redundancy.
This can be explained by Figure \ref{subfig:ab4}: connections are mostly confined to unbiased nodes, whereas communication between biased and unbiased nodes is more frequent in Figure \ref{subfig:ab123}. \item \textbf{Event-level context is more important than neighborhood-level context.} While both are important according to our results, global event-level context contributes more than local neighborhood-level context: SSGAT with only adjacent-sentence edges (Type 1,2) obtains F1=45.53, whereas SSGAT with only Type 3,4 edges obtains F1=45.81. This result is intuitive, because edges of Type 3,4 not only connect adjacent sentences within an article but also extend across the whole event. The sparsity of Type 1,2 edges in Figure \ref{subfig:ab12}, compared with the densely linked graph in Figure \ref{subfig:ab34}, illustrates the same point. \item \textbf{Entity continuation is the most important edge type.} Among the three ablation experiments that respectively remove edges of Type 4 (F1=45.85), Type 1,2 (F1=45.81), and Type 3 (F1=45.55), the last one, without Type 3 (entity continuation), suffers the largest performance drop. This suggests that entity continuation, or coreference, is the most important relation in our setting. Figure \ref{subfig:ab3} and Figure \ref{subfig:ab124} also clearly show that Type 3 edges are the main source of inter-class communication. \end{itemize} \section{Related Work} \paragraph{Media bias detection.} With the rise of deep learning, neural approaches have been widely used in media bias detection. \citet{iyyer-etal-2014-political} used RNNs to aggregate the polarity of each word to predict political ideology at the sentence level. \citet{gangula-etal-2019-detecting} made use of headline attention to classify article bias. \citet{li-goldwasser-2019-encoding} captured social information with a Graph Convolutional Network to identify political bias in news articles. \citet{fan-etal-2019-plain} used BERT and RoBERTa, and \citet{van-den-berg-markert-2020-context} used BiLSTMs as well as BERT-based models, to detect sentence-level informational bias. \paragraph{Contextual information in media bias detection.} Contextual information has been explored in media bias detection, though primarily at the article or media level. \citet{baly2020detect} employed adversarial news-media adaptation using a triplet loss; \citet{kulkarni-etal-2018-multi} proposed an attention-based model to capture views from news articles' titles, content, and link structure; \citet{chen-etal-2020-detecting} explored the impact of sentence-level bias on article-level bias; \citet{li-goldwasser-2019-encoding} encoded social information using a GCN; \citet{baly-etal-2018-predicting} made use of news media's cyber-features in news factuality prediction; and \citet{10.1145/3366423.3380158} explored cross-media context with a news article graph. Sentence-level informational bias has been studied in only a few works, and the methods described above are not directly applicable to this task. In order to infuse contextual information, we instead draw on the extractive summarization work of \citet{10.1145/3397271.3401327} and \citet{christensen-etal-2013-towards}, which used sentence graphs to encode context. \section{Conclusion} Our work focuses on incorporating different levels of context (neighborhood-level, article-level, and event-level) into sentence-level informational bias detection. We proposed MultiCTX, a model that combines contrastive learning and a relational sentence graph attention network to encode such multi-level context at different stages.
Our model (F1=46.08) significantly outperforms the current state-of-the-art model (F1=44.10) by about 2 points of F1. We therefore conclude that our model successfully learns from contextual information and that multi-level contextual information can effectively improve the identification of sentence-level informational bias. Moreover, our design aims to mimic the way people learn new things in real life: reading multiple news reports covering the whole event to form a general picture, and then using past experience to reason about the unknown.
\section{Introduction} Magnetic B-type stars exhibiting strong emission \citep[e.g. $\sigma$ Ori E, HD 142184, HD 182180,][]{Landstreet1978,Grunhut2012b,Rivinius2013} serve as important testbeds for understanding how stellar winds interact with magnetic fields. Models such as the Rigidly Rotating Magnetosphere (RRM) model \citep{Townsend2005} provide a qualitative description of these systems \citep[e.g.][]{Townsend2005a,Krticka2009, Oksala2010}; however, detailed comparisons with observations of $\sigma$ Ori E have uncovered important discrepancies which require explanations \citep{Oksala2012, Oksala2015}. By relaxing the RRM model's condition that the magnetic field remains undistorted, \citet{Townsend2005} proposed the centrifugal breakout scenario in which the field loops episodically break and reconnect in response to an accumulating magnetospheric mass. Although magnetohydrodynamic simulations support this hypothesis \citep{ud-Doula2006_oth}, no observational evidence of the breakout events (e.g. optical flares) has yet been reported \citep{Townsend2013}. While these tests of the current theoretical framework provide useful information, their conclusions are based on a relatively small number of case studies. Lately, this number has been increasing as demonstrated by the recent confirmation of HD 23478's centrifugal magnetosphere (CM) \citep{Sikora2015}, as well as the discovery of the candidate CM-host, HD 345439 \citep{Hubrig2015}. The latest addition to this particular subset of magnetic B-type stars, HD~35502, is the focus of this paper. Over the past 60 years, the nature of HD~35502 has been redefined in various ways. Located within the Orion OB1 association \citep[likely within the OB1a subgroup,][]{Landstreet2007}, it was initially identified as a B5V star \citep{Sharpless1952, Crawford1958}. Higher resolution spectra later obtained by \citet{Abt1962} revealed both narrow and broad spectral lines, the latter of which were characterized by a $v\sin{i}$ of $290\,{\rm km\,s}^{-1}$. Moreover, He~{\sc i} lines were reported to be relatively weak; an analysis of early-type stars within Ori OB1 carried out by \citet{Nissen1976} demonstrated that HD~35502's He abundance was approximately half that of the nearby chemically normal field stars. These results motivated its eventual reclassification as a B5IVsnp star \citep{Abt1977}. HD~35502's magnetic field was first detected by \citet{Borra1981} and later confirmed by subsequent studies \citep{Bychkov2005,Glagolevskij2010}. Following the initial detection, it was suggested that some of the unusual features apparent in its spectrum may be related to this strong field. In this paper, we use high-resolution spectra to provide a new interpretation of HD~35502 as a spectroscopic triple system whose primary component is a magnetic B-type star hosting a centrifugally supported magnetosphere. In Section~\ref{sect_obs}, we discuss both the polarized and unpolarized spectroscopic observations used in this study. Section~\ref{sect_phys_param} focuses on our derivation of some of the physical parameters of the system including its orbital configuration, along with the effective temperatures, surface gravities, masses, radii, and projected rotational velocities of the three stellar components. The various analytical methods used to derive these parameters, such as the modelling of spectroscopic and photometric data, are also described.
In Section \ref{P_rot} we discuss the evidence of rotational modulation from which we derive the rotational period of HD~35502's primary component. In Section~\ref{mag_field}, the magnetic field measurements of this component are derived along with the field geometry and strength. In Section~\ref{variability}, we discuss and characterize the magnetic B star's magnetosphere. Finally, our conclusions along with our recommendations for further analytical work to be carried out are summarized in Section \ref{conclusions}. \begin{table*} \caption{ESPaDOnS and Narval spectropolarimetric observations. The SNRs per $1.8\,\text{km/s}$ pixel are reported at $5400\,{\rm \AA}$. The fifth, sixth, and seventh columns list the radial velocities of the three stellar components (see Section~\ref{orb_sol}). The two right-most columns indicate the longitudinal magnetic field derived from H$\beta$ (see Section~\ref{mag_field}) along with the associated detection status: definite detection (DD), marginal detection (MD), and no detection (ND) as outlined by \citet{Donati1997}.} \label{obs_tbl} \begin{center} \begin{tabular*}{1.91\columnwidth}{@{\extracolsep{\fill}}l c c c r r r r r} \hline \hline \noalign{\vskip0.5mm} HJD & Total Exp. & SNR & Instrument & \multicolumn{1}{c}{$v_{r,B}$} & \multicolumn{1}{c}{$v_{r,A_1}$} & \multicolumn{1}{c}{$v_{r,A_2}$} & \multicolumn{1}{c}{$\langle B_z\rangle_{{\rm H}\beta}$} & Detection \\ & Time (s) & (pix$^{-1}$) & & \multicolumn{1}{c}{$({\rm km\,s}^{-1})$} & \multicolumn{1}{c}{$({\rm km\,s}^{-1})$} & \multicolumn{1}{c}{$({\rm km\,s}^{-1})$} & \multicolumn{1}{c}{(kG)} & Status \\ \noalign{\vskip0.5mm} \hline \noalign{\vskip0.5mm} 2454702.138 & 1800 & 677 & ESPaDOnS & $19.4\pm1.3$ & $ 55.9\pm1.4$ & $ 4.8\pm2.7$ & $-0.23\pm0.19$ & DD \\ 2455849.677 & 3600 & 662 & Narval & $20.0\pm1.3$ & $-15.3\pm2.6$ & $ 71.3\pm1.7$ & $-0.03\pm0.16$ & DD \\ 2455893.623 & 3600 & 522 & Narval & $21.2\pm1.4$ & $ -5.1\pm1.5$ & $ 61.3\pm2.9$ & $-2.05\pm0.20$ & DD \\ 2455910.518 & 3600 & 263 & Narval & $22.7\pm2.0$ & $ 0.7\pm2.0$ & $ 55.9\pm5.3$ & $-1.30\pm0.50$ & DD \\ 2455934.528 & 3600 & 587 & Narval & $19.6\pm1.5$ & $-20.9\pm1.1$ & $ 76.4\pm2.0$ & $-2.12\pm0.19$ & DD \\ 2455936.534 & 3600 & 472 & Narval & $18.6\pm1.4$ & $ 76.5\pm1.6$ & $-19.4\pm2.8$ & $-2.32\pm0.24$ & DD \\ 2455938.525 & 3600 & 564 & Narval & $20.4\pm1.5$ & $ 18.1\pm5.5$ & $ 37.0\pm5.9$ & $ 0.08\pm0.19$ & ND \\ 2455944.500 & 3600 & 494 & Narval & $20.9\pm1.4$ & $ 1.9\pm2.5$ & $ 54.0\pm2.9$ & $ 0.16\pm0.23$ & ND \\ 2455949.429 & 3600 & 545 & Narval & $20.5\pm1.3$ & $ 45.4\pm1.1$ & $ 11.2\pm2.5$ & $-0.95\pm0.20$ & DD \\ 2455950.472 & 3600 & 478 & Narval & $21.4\pm1.8$ & $-12.5\pm2.4$ & $ 68.1\pm2.9$ & $-0.18\pm0.23$ & ND \\ 2455951.471 & 3600 & 431 & Narval & $22.8\pm1.5$ & $-22.6\pm1.0$ & $ 77.7\pm2.0$ & $-1.00\pm0.27$ & DD \\ 2455966.376 & 3600 & 550 & Narval & $19.8\pm1.5$ & $ 48.6\pm1.5$ & $ 7.5\pm3.2$ & $-1.87\pm0.20$ & DD \\ 2455998.332 & 3600 & 397 & Narval & $20.9\pm1.4$ & $ 53.6\pm2.6$ & $ 3.4\pm3.4$ & $ 0.28\pm0.28$ & ND \\ 2455999.362 & 3600 & 402 & Narval & $20.7\pm1.9$ & $ 81.8\pm2.9$ & $-25.2\pm1.0$ & $-1.64\pm0.28$ & DD \\ 2456001.309 & 3600 & 523 & Narval & $20.0\pm1.5$ & $ -4.6\pm2.4$ & $ 60.0\pm1.6$ & $-2.39\pm0.23$ & DD \\ 2456003.329 & 3600 & 528 & Narval & $21.0\pm1.6$ & $ 13.5\pm2.2$ & $ 42.8\pm1.9$ & $ 0.51\pm0.21$ & DD \\ 2456202.665 & 3600 & 604 & Narval & $21.4\pm1.3$ & $ 65.4\pm3.4$ & $ -9.4\pm1.9$ & $-2.30\pm0.17$ & DD \\ 2456205.618 & 3600 & 494 & Narval & $21.1\pm1.5$ & $-15.1\pm2.3$ & $ 69.5\pm2.8$ & 
$-0.34\pm0.22$ & DD \\ 2456224.646 & 3600 & 450 & Narval & $23.0\pm1.3$ & $ 27.7\pm1.4$ & $ 27.7\pm1.4$ & $-0.52\pm0.24$ & ND \\ 2456246.505 & 3600 & 505 & Narval & $20.5\pm1.3$ & $-15.6\pm2.6$ & $ 69.8\pm1.5$ & $-1.72\pm0.22$ & DD \\ 2456293.881 & 1600 & 710 & ESPaDOnS & $22.6\pm1.4$ & $ 79.4\pm2.5$ & $-24.9\pm0.9$ & $-1.39\pm0.20$ & DD \\ 2456295.787 & 1600 & 190 & ESPaDOnS & $21.0\pm2.6$ & $ 10.4\pm1.6$ & $ 44.1\pm1.4$ & $-3.06\pm0.71$ & ND \\ 2456295.808 & 1600 & 231 & ESPaDOnS & $22.8\pm2.3$ & $ 9.3\pm1.3$ & $ 45.2\pm1.9$ & $-2.83\pm0.57$ & MD \\ 2456556.002 & 1600 & 582 & ESPaDOnS & $24.7\pm1.9$ & $ 43.0\pm1.9$ & $ 10.8\pm1.8$ & $-1.37\pm0.28$ & DD \\ 2456557.140 & 1600 & 670 & ESPaDOnS & $19.2\pm1.5$ & $-18.2\pm1.9$ & $ 70.9\pm1.6$ & $-3.27\pm0.20$ & DD \\ 2456560.077 & 1600 & 612 & ESPaDOnS & $20.8\pm1.4$ & $ 74.5\pm1.9$ & $-20.5\pm2.1$ & $-0.05\pm0.25$ & ND \\ \noalign{\vskip0.5mm} \hline \\ \end{tabular*} \end{center} \end{table*} \section{Observations} \label{sect_obs} \subsection{ESPaDOnS \& Narval spectropolarimetry} Spectropolarimetric observations of HD~35502 were obtained over the course of 5 years (Aug. 23, 2008 to Sept. 24, 2013) in the context of the MiMeS \citep{Wade2015} and BinaMIcS \citep{Alecian2015} surveys. Nineteen Stokes $V$ observations were obtained using the high-resolution ($R\simeq65\,000$) spectropolarimeter Narval installed at the T\'{e}lescope Bernard Lyot (TBL) over a wavelength range of approximately $3\,600-10\,000\,{\rm \AA}$. Ten Stokes $V$ spectra were also obtained using the twin instrument ESPaDOnS installed at the Canada-France-Hawaii Telescope (CFHT). Three of these observations exhibited signal-to-noise ratios (SNRs) $\lesssim100$ and were removed from the analysis. A median SNR of $522$ was obtained from the twenty-six observations. Both the ESPaDOnS and Narval observations were reduced using the {\sc Libre-ESpRIT} pipeline \citep{Donati1997} yielding final Stokes $I$ and $V$ spectra \citep[for a detailed description of the reduction procedure, see e.g.][]{Silvester2012}. The Heliocentric Julian Dates (HJDs), total exposure times, and SNRs are listed in Table~\ref{obs_tbl}. \subsection{dimaPol spectropolarimetry} Twenty-four medium-resolution spectropolarimetric observations were obtained with dimaPol ($R\simeq10\,000$) installed at the Dominion Astrophysical Observatory (DAO) \citep{Monin2012} from Feb. 7, 2009 to Feb. 15, 2012. Two of these observations had SNRs $\lesssim100$ and were removed from the analysis. The remaining twenty-two Stokes $V$ observations of H$\beta$ were used to derive longitudinal field measurements; the HJDs, exposure times, SNRs, and longitudinal field measurements are listed in Table \ref{dao_tbl}. \subsection{FEROS spectroscopy} Thirty-two unpolarized spectra were acquired from Dec. 30, 2013 to Jan. 3, 2014 using the spectrograph FEROS mounted on the $2.2\,{\rm m}$ MPG/ESO telescope located at La Silla Observatory. The instrument has a resolving power of $R=48\,000$ across a wavelength range of $3\,600-9\,200\,\rm{\AA}$ \citep{Kaufer1999}. The spectra were reduced using the FEROS Data Reduction System. The pipeline automatically carries out bias subtraction, flat fielding, and extraction of the spectral orders; wavelength calibration is carried out using ThAr and ThArNe lamps. Uncertainties in the measured intensities were estimated from the root mean square (RMS) of the continuum intensity at multiple points throughout each spectrum \citep[e.g.][]{Wade2012b}.
The HJDs, total exposure times, and SNRs are listed in Table~\ref{FEROS_tbl}. \subsection{H$\alpha$ spectroscopy} A total of 131 spectroscopic observations of H$\alpha$ covering various wavelength ranges from approximately $6\,300-6\,800\,{\rm \AA}$ are also used in this study. One hundred thirteen of these observations were obtained at the DAO from Nov. 26, 1991 to Feb. 4, 2012. Seven of the spectra were removed from the analysis on account of their SNRs being $\lesssim50$. Both the McKellar spectrograph installed at the $1.2\,{\rm m}$ Plaskett telescope and the spectrograph mounted at the Cassegrain focus of DAO's $1.8\,{\rm m}$ telescope were used to acquire the spectra. The remaining eleven observations were obtained at CFHT from Nov. 21, 1991 to Oct. 3, 1995 using the now decommissioned Coud\'{e} f/8.2 spectrograph. \subsection{$uvby$ photometry} 149 $uvby$ photometric measurements were obtained from Jan. 27, 1992 to Mar. 13, 1994 using the $0.75\,{\rm m}$ Four College Automated Photoelectric Telescope (FCAPT) on Mt. Hopkins, AZ. The dark count was first measured and then in each filter the sky-ch-c-v-c-v-c-v-c-ch-sky counts were obtained, where sky is a reading of the sky, ch that of the check star, c that of the comparison star, and v that of the variable star. No corrections have been made for neutral density filter differences among each group of variable, comparison, and check stars. HD~35575 was the comparison and HD~35008 the check (i.e. second comparison) star. The standard deviations of the ch-c values were 0.006 mag, except for u for which it was 0.008 mag. We adopted uncertainties of 0.005 mag for each measurement based on the highest precision typically achieved with FCAPT. Table~\ref{uvby_tbl} contains the complete list of photometry. \begin{table} \caption{Spectropolarimetric observations of HD~35502 obtained with dimaPol. Columns $1$ to $3$ list the HJDs, exposure times, and SNRs. Column $4$ lists the longitudinal field measurements derived from the H$\beta$ Stokes $V$ profiles.} \label{dao_tbl} \begin{center} \begin{tabular*}{0.39\textwidth}{@{\extracolsep{\fill}}l c c r} \noalign{\vskip-0.2cm} \hline \hline \noalign{\vskip0.5mm} HJD & Total Exp. 
& SNR & $\langle B_z\rangle_{\rm H\beta}$\\ & Time (s) & (pix$^{-1}$) & (kG) \\ \noalign{\vskip0.5mm} \hline \noalign{\vskip0.5mm} 2454869.764 & 4800 & 410 & $-1.95\pm0.27$ \\ 2454872.773 & 3600 & 240 & $-0.64\pm0.25$ \\ 2455109.047 & 5400 & 240 & $-2.88\pm0.51$ \\ 2455110.964 & 5400 & 260 & $-1.60\pm0.25$ \\ 2455167.855 & 6000 & 290 & $-2.27\pm0.21$ \\ 2455168.829 & 6000 & 320 & $-2.21\pm0.38$ \\ 2455169.881 & 7200 & 280 & $-1.72\pm0.43$ \\ 2455170.867 & 7200 & 190 & $ 0.31\pm0.31$ \\ 2455190.766 & 7200 & 230 & $-2.57\pm0.33$ \\ 2455191.781 & 7200 & 260 & $-2.29\pm0.30$ \\ 2455192.790 & 7200 & 130 & $-3.95\pm1.09$ \\ 2455193.742 & 7200 & 210 & $-1.61\pm0.33$ \\ 2455261.655 & 6000 & 290 & $-2.04\pm0.41$ \\ 2455262.685 & 6000 & 300 & $-2.31\pm0.26$ \\ 2455264.661 & 6000 & 270 & $-0.89\pm0.37$ \\ 2455580.803 & 6000 & 270 & $-0.07\pm0.38$ \\ 2455583.800 & 4800 & 140 & $-2.38\pm0.94$ \\ 2455594.711 & 7200 & 230 & $-2.39\pm0.60$ \\ 2455611.691 & 5700 & 210 & $-2.07\pm0.47$ \\ 2455904.864 & 6600 & 170 & $-3.60\pm0.56$ \\ 2455964.666 & 5400 & 190 & $-2.48\pm0.34$ \\ 2455972.644 & 7200 & 260 & $ 0.58\pm0.44$ \\ \hline \\ \noalign{\vskip-0.7cm} \end{tabular*} \end{center} \end{table} \section{Physical parameters} \label{sect_phys_param} Based on the high-resolution spectra obtained of HD~35502, three distinct sets of spectral lines are apparent: the strong and broad lines associated with a hot star and two nearly identical components attributable to two cooler stars which are observed to change positions significantly. Based on HD~35502's reported spectral type, the bright, dominant component is presumed to be a hot B5 star \citep{Abt1977}; the weaker components are inferred to be two cooler A-type stars based on the presence of Fe~{\sc ii} lines and the absence of Fe~{\sc iii} lines. As will be shown in the next section, the lines of the A stars show velocity variations consistent with a binary system. Hence we conclude that HD~35502 is an SB3 system. Some of the B star lines (the He~{\sc i} lines, in particular) appear to exhibit intrinsic variability. Such features are commonly found in magnetic He peculiar stars \citep[e.g.][]{Borra1983,Bolton1998,Shultz2015}. \subsection{Orbital solution} \label{orb_sol} The radial velocity ($v_r$) of the central B star ($B$) in each observation was determined using spectral lines for which no significant contribution from the two A stars ($A_1$ and $A_2$) was apparent. C~{\sc ii}$\,\lambda4267$ was found to be both relatively strong (with a depth of $10$ per cent of the continuum) and only weakly variable. The H$\alpha$ spectra encompassed a limited range of wavelengths with few lines from which $v_r$ could be accurately determined. We used C~{\sc ii}$\,\lambda6578$ and He~{\sc i}$\,\lambda6678$ in order to estimate $v_r$ from all of the spectra (spanning a $22$ year period). However, measurements made from He~{\sc i}$\,\lambda6678$ were subject to systematic errors associated with strong variability (see Section~\ref{variability}). Moreover, the shallower depth of C~{\sc ii}$\,\lambda6578$ ($<4$ per cent of the continuum) and its blending with H$\alpha$ resulted in both a decrease in precision and a larger scatter in $v_r$ compared with those values derived from C~{\sc ii}$\,\lambda4267$. The radial velocities were calculated by fitting a rotationally-broadened Voigt function to the C~{\sc ii}$\,\lambda4267$, C~{\sc ii}$\,\lambda6578$, and He~{\sc i}$\,\lambda6678$ lines.
The uncertainties were estimated through a bootstrapping analysis involving the set of normalized flux measurements ($I/I_c$) spanning each line. A random sample of 61 per cent of the data points was selected to be removed at each iteration. These points were then replaced by another set that was randomly sampled from $I/I_c$. The fitting routine was then repeated on this new data set. 1000 iterations of the bootstrapping routine were carried out and a probability distribution was obtained for each fitting parameter. The uncertainties in each of the fitting parameters were then taken as the $3\sigma$ standard deviations associated with each probability distribution. The value of $v_r$ inferred from C~{\sc ii}$\,\lambda4267$ was found to exhibit a median uncertainty of $1.5\,{\rm km\,s}^{-1}$ and a standard deviation of $1.4\,{\rm km\,s}^{-1}$. Larger uncertainties were derived using He~{\sc i}$\,\lambda6678$ and C~{\sc ii}$\,\lambda6578$ ranging from $1-53\,{\rm km\,s}^{-1}$. Similarly, $v_r$ inferred from C~{\sc ii}$\,\lambda6578$ and He~{\sc i}$\,\lambda6678$ yielded larger standard deviations of $9$ and $18\,{\rm km\,s}^{-1}$, respectively. In the case of He~{\sc i}$\,\lambda6678$, the decrease in precision and increase in scatter relative to the more stable C~{\sc ii}$\,\lambda4267$ measurements is likely the result of the intrinsic line variability. No significant $v_r$ variability was detected using C~{\sc ii}$\,\lambda4267$ ($\langle v_r\rangle=21\pm2\,{\rm km\,s}^{-1}$), C~{\sc ii}$\,\lambda6578$ ($\langle v_r\rangle=30\pm15\,{\rm km\,s}^{-1}$), or He~{\sc i}$\,\lambda6678$ ($\langle v_r\rangle=30\pm8\,{\rm km\,s}^{-1}$). The radial velocities of the two A stars were calculated from Stokes I profiles produced using the Least Squares Deconvolution (LSD) method \citep{Donati1997,Kochukhov2010}. The LSD line mask used to carry out the procedure was compiled using data taken from the Vienna Atomic Line Database (VALD) \citep{Kupka2000}. In order to isolate the A stars from the dominant B star component in the Stokes I LSD profiles, we used a line list associated with an $8000\,{\rm K}$ star having a surface gravity of $\log{g}=4.0$ (cgs) and a microturbulence of $v_{\rm mic}=0$. Fig.~\ref{lsd_full} shows the LSD profiles generated using a different line mask in which both the A and B star components are apparent. The majority of the radial velocities were then determined by simultaneously fitting two Gaussians to the sharp components of the Stokes I profiles. In the case of the Narval observation obtained at ${\rm HJD}=2456224.646$, the sharp line profiles were completely blended and the radial velocities were estimated by fitting a single Gaussian and adopting the resultant velocity for both components. The $v_r$ errors were estimated using a $1000$ iteration bootstrapping analysis. We note that the contribution of the B star to the Stokes $I$ LSD profiles was generally weak; however, in certain observations, small contributions were present which resulted in small deformations in the continuum between the two A star profiles. In these cases, the A star line appearing closest to these deformations was more affected than the other A star thereby yielding slightly higher uncertainties in the fitting procedure. The values of the two A stars' radial velocities were found to range from $-30.4$ to $78.6\,{\rm km\,s}^{-1}$ with an average uncertainty of $2.4\,{\rm km\,s}^{-1}$. 
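As a rough illustration of the bootstrapping procedure described above, the sketch below refits a simple absorption-line model to partially resampled flux points and takes the spread of the fitted centroids as the uncertainty; the Gaussian line shape, resampling fraction, and variable names are simplifying assumptions and do not reproduce the rotationally-broadened Voigt fits used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(v, depth, v0, width):
    # Simple absorption-line model standing in for the broadened Voigt profile.
    return 1.0 - depth * np.exp(-0.5 * ((v - v0) / width) ** 2)

def bootstrap_vr(v, flux, n_iter=1000, frac=0.61, seed=0):
    # At each iteration a fraction `frac` of the points is replaced by values
    # drawn (with replacement) from the observed fluxes, and the line is refit.
    rng = np.random.default_rng(seed)
    centroids = []
    for _ in range(n_iter):
        f = flux.copy()
        idx = rng.choice(v.size, size=int(frac * v.size), replace=False)
        f[idx] = rng.choice(flux, size=idx.size, replace=True)
        try:
            popt, _ = curve_fit(gaussian_line, v, f, p0=(0.1, 0.0, 30.0))
            centroids.append(popt[1])
        except RuntimeError:
            continue  # skip iterations that fail to converge
    centroids = np.array(centroids)
    return centroids.mean(), 3.0 * centroids.std()  # value and 3-sigma spread

# Toy usage on a synthetic line centred at +20 km/s.
v = np.linspace(-200, 200, 201)
flux = gaussian_line(v, 0.1, 20.0, 30.0)
flux += np.random.default_rng(1).normal(0, 0.005, v.size)
v0, err_3sigma = bootstrap_vr(v, flux)
\end{verbatim}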
\begin{figure} \centering \includegraphics[width=0.999\columnwidth]{./hd35502_dynamic_lsd_Astar_plot-eps-converted-to.pdf} \caption{ESPaDOnS, Narval, and FEROS Stokes $I$ LSD profiles (right) and dynamic Stokes $I$ LSD profiles (left) generated such that the three spectral components are emphasized. The observations are phased by the A star orbital period of $5.66866(6)\,{\rm d}$. The vertical dashed black lines indicate the surface of the B star located at $v=\pm v\sin{i}=\pm75\,{\rm km\,s}^{-1}$. The dashed black sinusoids correspond to the fits obtained for $v_r$ of the two A stars.} \label{lsd_full} \end{figure} \begin{table*} \caption{Unpolarized spectra obtained using FEROS. The SNRs per $2.8\,{\rm km\,s}^{-1}$ pixel listed in column 3 are estimated from the RMS of the continuum near $\lambda=5400\,{\rm \AA}$. The fourth, fifth, and sixth columns list the radial velocities of the three stellar components (see Section~\ref{orb_sol}).} \label{FEROS_tbl} \begin{center} \begin{tabular*}{1.5\columnwidth}{@{\extracolsep{\fill}}l c c r r r} \hline \hline \noalign{\vskip0.5mm} HJD & Total Exp. & RMS & \multicolumn{1}{c}{$v_{r,B}$} & \multicolumn{1}{c}{$v_{r,A_1}$} & $v_{r,A_2}$ \\ & Time (s) & SNR & \multicolumn{1}{c}{$({\rm km\,s}^{-1})$} & \multicolumn{1}{c}{$({\rm km\,s}^{-1})$} & $({\rm km\,s}^{-1})$ \\ \noalign{\vskip0.5mm} \hline \noalign{\vskip0.5mm} 2456656.568 & 300 & 271 & $19.5\pm1.6$ & $ 75.4\pm2.4$ & $-25.4\pm1.5$ \\ 2456656.609 & 600 & 315 & $20.9\pm1.4$ & $ 76.0\pm2.1$ & $-26.2\pm1.1$ \\ 2456658.535 & 600 & 262 & $19.2\pm1.9$ & $ 10.3\pm3.1$ & $ 39.2\pm2.9$ \\ 2456658.542 & 124 & 145 & $19.2\pm2.7$ & $ 10.1\pm1.8$ & $ 39.2\pm1.5$ \\ 2456658.679 & 300 & 215 & $20.5\pm1.8$ & $ 2.9\pm1.8$ & $ 46.5\pm2.5$ \\ 2456658.683 & 300 & 200 & $21.8\pm1.8$ & $ 2.8\pm1.7$ & $ 46.5\pm3.0$ \\ 2456658.686 & 300 & 195 & $22.3\pm1.5$ & $ 2.8\pm1.7$ & $ 46.8\pm2.0$ \\ 2456658.690 & 300 & 178 & $20.2\pm2.0$ & $ 2.2\pm2.1$ & $ 47.0\pm2.4$ \\ 2456658.694 & 300 & 215 & $20.9\pm1.8$ & $ 2.3\pm2.3$ & $ 47.2\pm2.7$ \\ 2456658.698 & 300 & 248 & $21.4\pm1.5$ & $ 2.1\pm2.0$ & $ 47.4\pm2.9$ \\ 2456658.701 & 300 & 219 & $20.4\pm1.7$ & $ 1.9\pm1.8$ & $ 47.6\pm2.1$ \\ 2456658.702 & 300 & 181 & $21.3\pm1.6$ & $ 1.6\pm2.3$ & $ 47.6\pm1.6$ \\ 2456658.709 & 300 & 286 & $21.8\pm1.9$ & $ 1.3\pm1.5$ & $ 48.1\pm2.2$ \\ 2456658.713 & 300 & 251 & $21.5\pm3.3$ & $ 1.3\pm2.3$ & $ 48.1\pm2.4$ \\ 2456658.763 & 600 & 252 & $19.7\pm1.6$ & $ -1.5\pm4.7$ & $ 51.1\pm2.4$ \\ 2456659.628 & 600 & 267 & $18.4\pm1.6$ & $-29.3\pm1.3$ & $ 77.8\pm2.3$ \\ 2456659.671 & 300 & 222 & $19.1\pm1.5$ & $-29.9\pm1.2$ & $ 78.1\pm1.7$ \\ 2456659.677 & 400 & 284 & $19.6\pm1.6$ & $-30.1\pm1.5$ & $ 78.3\pm1.8$ \\ 2456659.682 & 300 & 233 & $19.9\pm1.9$ & $-30.0\pm1.6$ & $ 78.5\pm2.3$ \\ 2456659.685 & 300 & 209 & $19.7\pm1.7$ & $-29.8\pm1.3$ & $ 78.0\pm1.6$ \\ 2456659.689 & 300 & 299 & $18.8\pm1.8$ & $-29.9\pm1.7$ & $ 78.0\pm1.6$ \\ 2456659.693 & 300 & 245 & $17.5\pm2.9$ & $-30.3\pm1.9$ & $ 78.1\pm2.0$ \\ 2456659.697 & 300 & 310 & $19.9\pm1.7$ & $-30.0\pm1.1$ & $ 78.1\pm1.3$ \\ 2456659.700 & 300 & 240 & $20.5\pm1.7$ & $-30.1\pm2.0$ & $ 78.2\pm1.8$ \\ 2456659.704 & 300 & 222 & $20.4\pm1.6$ & $-30.1\pm1.6$ & $ 78.1\pm1.9$ \\ 2456659.708 & 300 & 231 & $20.7\pm2.0$ & $-30.2\pm1.3$ & $ 78.2\pm1.3$ \\ 2456659.746 & 600 & 300 & $20.5\pm1.5$ & $-29.8\pm1.3$ & $ 78.0\pm1.4$ \\ 2456660.614 & 600 & 249 & $20.6\pm1.6$ & $ -4.4\pm3.6$ & $ 53.2\pm4.3$ \\ 2456660.653 & 600 & 276 & $21.6\pm1.6$ & $ -2.4\pm3.1$ & $ 51.7\pm4.1$ \\ 2456660.724 & 600 & 265 & $21.5\pm1.6$ & 
$ 1.1\pm2.7$ & $ 48.1\pm2.9$ \\ 2456660.763 & 600 & 299 & $19.7\pm1.7$ & $ 2.8\pm2.2$ & $ 46.5\pm3.1$ \\ 2456660.801 & 600 & 250 & $21.5\pm2.0$ & $ 5.1\pm1.9$ & $ 43.9\pm2.6$ \\ \noalign{\vskip0.5mm} \hline \end{tabular*} \end{center} \end{table*} The spectral characteristics of the two A-type components are nearly identical; therefore, it is not possible to unambiguously attribute a particular line profile in each spectrum to a particular star. Nevertheless, the importance of this ambiguity can be reduced by making simplifying assumptions. First, we assumed that the two A stars are gravitationally bound and therefore orbit a common center of mass (having a radial velocity $v_{\rm cm}$) with a period $P_{\rm orb}$. Furthermore, we assumed that the orbits are circular, implying that the A star $v_r$ variations are purely sinusoidal and described by \begin{equation}\label{eqn:vr_sin} v_{r,i}(t)=v_{\rm cm}+K_i\sin{(2\pi t/P_{\rm orb}+\phi_i)} \end{equation} where $K_i$ and $\phi_i$ are the semi-amplitude and phase shift of the $i^{\rm th}$ A-type component, respectively. The fact that the radial velocities are observed to oscillate symmetrically about a constant average radial velocity of $\langle v_{r,A}\rangle=(v_{r,1}+v_{r,2})/2=25\pm3\,{\rm km\,s}^{-1}$ suggests that (1) $K_1=K_2$ and (2) $|\phi_1-\phi_2|=\pi$. With these assumptions, we applied the following procedure: \begin{enumerate}[leftmargin=0.8cm,labelwidth=16pt] \item define a grid of possible orbital periods; \item define an amplitude and phase shift of the radial velocity variations based on the maximum observed $v_r$ separation; \item for every period, determine which sinusoidal model the blue and red shifted spectral lines must be associated with in order to minimize the residuals. \end{enumerate} The two components in each observation were then identified using whichever period returned the minimal residual fit. A traditional period fitting routine (e.g. Lomb-Scargle) could then be applied to the $v_r$ time series of each star separately, thereby yielding more precise periods, amplitudes, and phase shifts for each model. \begin{figure} \centering \includegraphics[width=1\columnwidth]{./hd35502_RV_A_stars-eps-converted-to.pdf} \caption{\emph{Top:} The nearly sinusoidal fits to the radial velocities of the two A stars phased by a period of $5.66866\,\text{d}$. \emph{Bottom:} The reduced $\chi^2$ distribution yielded by the period fitting routine applied to one of the A stars. The $5.66866\,\text{d}$ period is indicated by the red arrow.} \label{A_orbit} \end{figure} We chose a grid of periods ranging from $0.1$ to $10\,{\rm d}$ in increments of $10^{-5}\,{\rm d}$ ($\sim1\,{\rm s}$). The amplitudes ($K_1=K_2$) and phase shifts ($\phi_1=|\phi_2-\pi|$) were defined by the maximum $v_r$ separation of $109\,{\rm km\,s}^{-1}$ (i.e. phase $0.994$, which corresponds to the phase of the B star's maximum longitudinal magnetic field derived in Section~\ref{mag_field}). The analysis then involves assigning radial velocities to each of the A star components based on a best-fitting period of $5.6687\,{\rm d}$. An alternative means of identifying the orbital period uses the fact that the quantity $|v_{r,1}-v_{r,2}|$ varies with a period of $P_{\rm orb}/2$, as outlined by \citet{Hareter2008}. Applying this method yields a similar value of $P_{\rm orb}=5.6680(6)\,{\rm d}$.
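The component-assignment step of this grid search can be sketched as follows: for each trial period, the two velocities measured in each observation are matched to whichever of the two antiphase sinusoids of Eqn.~\ref{eqn:vr_sin} yields the smaller residual, and the period with the lowest total residual is retained. The amplitude, systemic velocity, and array names below are placeholders rather than the values derived in this work.
\begin{verbatim}
import numpy as np

def assign_and_score(t, v_pair, period, K, v_cm, phi0):
    # Residual for one trial period, assigning each RV pair to the pair of
    # antiphase sinusoids (phi_2 = phi_1 + pi) that fits it best.
    phase = 2.0 * np.pi * t / period + phi0
    model1 = v_cm + K * np.sin(phase)
    model2 = v_cm - K * np.sin(phase)
    r_a = (v_pair[:, 0] - model1) ** 2 + (v_pair[:, 1] - model2) ** 2
    r_b = (v_pair[:, 0] - model2) ** 2 + (v_pair[:, 1] - model1) ** 2
    return np.sum(np.minimum(r_a, r_b))

def grid_search(t, v_pair, periods, K=54.0, v_cm=25.0, phi0=0.0):
    scores = np.array([assign_and_score(t, v_pair, P, K, v_cm, phi0)
                       for P in periods])
    return periods[np.argmin(scores)], scores

# Toy usage: unlabeled RV pairs sampled from a 5.67 d circular orbit.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 60, 26))
v1 = 25.0 + 54.0 * np.sin(2 * np.pi * t / 5.67)
v2 = 25.0 - 54.0 * np.sin(2 * np.pi * t / 5.67)
v_pair = np.column_stack([v1, v2]) + rng.normal(0, 2.0, (t.size, 2))
periods = np.arange(5.0, 6.0, 1e-4)
best_period, _ = grid_search(t, v_pair, periods)
\end{verbatim}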
With the radial velocities of the two A stars correctly assigned to each individual component, a more precise analysis of the binary orbital parameters was carried out using {\sc orbitx}, a {\sc fortran} code later adapted to {\sc idl}, which determines the best-fitting $P_{\rm orb}$, time of periastron passage ($T$), eccentricity ($e$), longitude of the periastron ($\omega$), semi-amplitudes of each component's radial velocities ($K_1$ and $K_2$), and the radial velocity of the center of mass ($\gamma$) \citep{Tokovinin1992}. This calculation yielded $P_{\rm orb}=5.66866(6)\,{\rm d}$, $T=2456658.172\pm1.652$, $e=0.003_{-0.003}^{+0.006}$, $\omega=82\pm105\degree$, $K_1=55.5\pm0.4\,{\rm km\,s^{-1}}$, $K_2=52.7\pm0.4\,{\rm km\,s^{-1}}$, and $\gamma=26.5\pm0.2\,{\rm km\,s^{-1}}$. These results imply a mass ratio of $M_1/M_2=1.05\pm0.02$, a projected total mass of $(M_1+M_2)\sin^3{i}=0.186\pm0.008\,M_\odot$, and a projected semi-major axis of $a\sin{i}=0.0564\pm0.0008\,{\rm AU}$. These values are listed in Table~\ref{orbital_tbl}. Fig.~\ref{A_orbit} shows the radial velocities of the A stars phased by the $5.66866\,{\rm d}$ orbital period and compared with the radial velocities computed from the orbital solution. Comparing $\langle v_{r,A}\rangle=25\pm3\,{\rm km\,s}^{-1}$ with the average B star $v_r$ of $20.5\pm1.6\,{\rm km\,s}^{-1}$ and noting that no significant variability in $\langle v_{r,A}\rangle$ was detected over the $22$ year observing period implies a very long orbital period of the A binary about the B star. A lower limit for this period is derived in Section~\ref{SED_fit}. \begin{table} \caption{Orbital parameters of the A+A binary.} \label{orbital_tbl} \begin{center} \begin{tabular*}{0.72\columnwidth}{@{\extracolsep{\fill}}l r} \hline \hline \noalign{\vskip1mm} $P_{\rm orb}\,({\rm d})$ & $5.66866(6)$ \vspace{0.8mm}\\ $T$ & $2456658.172\pm1.652$\vspace{0.8mm}\\ $e$ & $0.003^{+0.006}_{-0.003}$ \vspace{0.8mm}\\ $\omega\,(\degree)$ & $82\pm105$ \vspace{0.8mm}\\ $K_1\,{\rm (km/s)}$ & $55.5\pm0.4$ \vspace{0.8mm}\\ $K_2\,{\rm (km/s)}$ & $52.7\pm0.4$ \vspace{0.8mm}\\ $\gamma\,{\rm (km/s)}$ & $26.5\pm0.2$ \vspace{0.8mm}\\ $M_1/M_2$ & $1.05\pm0.02$ \vspace{0.8mm}\\ $(M_1+M_2)\sin^3{i}\,(M_\odot)$ & $0.186\pm0.008$ \vspace{0.8mm}\\ $a\sin{i}\,{\rm (AU)}$ & $0.0564\pm0.0008$ \vspace{0.8mm}\\ \hline \end{tabular*} \end{center} \end{table} \subsection{SED fitting} \label{SED_fit} Photometric fluxes of HD~35502 have been measured throughout the UV, visible, and near infrared spectral regions, thereby allowing the temperatures and radii of the three stellar components to be constrained. Ultraviolet measurements were previously obtained at four wavelengths -- $1565\,\rm{\AA}$, $1965\,\rm{\AA}$, $2365\,\rm{\AA}$, and $2749\,\rm{\AA}$ -- by the $S2/68$ instrument on board the $TD1$ satellite \citep{Thompsons1978}. Photometry spanning the visible spectrum was taken from the Geneva Observatory's catalogue of $U$, $B$, $V$, $B_1$, $B_2$, $V_1$, and $G$ filters \citep{Rufener1981}. Additionally, infrared observations obtained by 2MASS ($J$, $H$, and $Ks$ filters) \citep{Skrutskie2006} and WISE ($W1$ and $W2$ filters) \citep{Wright2010} were used. The reported Geneva, 2MASS, and WISE magnitudes were converted to the flux units of ${\rm ergs\,s^{-1}\,cm^{-2}\,{\AA}^{-1}}$ using the zero points reported by \citet{Rufener1988}, \citet{Cohen2003}, and \citet{Wright2010}. The reported photometric measurements of HD~35502 include the contributions from each of the three stellar components.
This renders an SED fitting analysis particularly susceptible to degenerate solutions; however, speckle interferometry measurements obtained by \citet{Balega2012} provide additional photometric constraints on the system. They detected magnitude differences of $1.45\pm0.02\,{\rm mag}$ and $1.21\pm0.02\,{\rm mag}$ using filters centered on $5500\,\rm{\AA}$ and $8000\,\rm{\AA}$, respectively. The sources were reported to have angular separations of $69\pm1\,{\rm mas}$ and $68\pm1\,{\rm mas}$ in the two filters. The speckle companion is also identified in observations obtained by \citet{Horch2001} with a consistent angular separation of $\rho<59\,{\rm mas}$. In conjunction with the distance to HD~35502, the angular separations may be used to determine the associated linear separation. A distance of $d=430\pm120\,\rm{pc}$ was inferred from the $2.35\pm0.68\,\rm{mas}$ Hipparcos parallax \citep{VanLeeuwen2007}. However, assuming HD~35502 to be a member of the Orion OB1a subassociation, we inferred a moderately more precise value of $400\pm80\,{\rm pc}$ based on the subassociation's reported average distance modulus of $\langle {\rm dm}\rangle=8.00\pm0.46\,{\rm mag}$ \citep{Brown1994}. The projected linear separation between the two speckle sources was then found to be $27\pm5\,\rm{AU}$. The minimum orbital period of the A star binary system around the B star can then be approximated by assuming upper limit masses of $8\,M_\odot$ and $3\,M_\odot$ for the B and A stars, respectively (the actual masses are derived in Section~\ref{hrd}). This implies an orbital period of $P_{\rm orb}\gtrsim40\,\rm{yrs}$, which is consistent with the fact that no significant variations were detected in either the B star radial velocities or the A star binary's systemic radial velocity. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{./hd35502_sed_fit_mcmc_triangle_select_nohist-eps-converted-to.pdf} \caption{Marginalized posterior probability distributions returned by the MCMC algorithm that was applied to the SED fitting. Each frame demonstrates the correlations that are apparent between various parameters, while the vertical and horizontal blue lines indicate the value of each parameter associated with the maximum likelihood solution (i.e. $T_B$, $T_A$, $R_B$, and $R_A$). The contours approximately correspond to $1-3\sigma$ confidence regions in increments of $0.5\sigma$; $\sigma_{T_B}$, $\sigma_{T_A}$, $\sigma_{R_B}$, and $\sigma_{R_A}$ indicate the value of each parameter's $1\sigma$ region.} \label{SEDa} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{./hd35502_sed_fit-eps-converted-to.pdf} \caption{Comparisons between the observed photometry (red points) and the model SED (solid black curve). The dashed blue and dot-dashed black curves correspond to the model SEDs of the composite A star components (i.e. $2F_A(\lambda)$) and the B star, respectively. The black crosses indicate the flux obtained by multiplying the model SED by the transmission function of the associated filter.} \label{SEDb} \end{figure} The observed photometry was fit using {\sc atlas9} synthetic spectral energy distributions (SEDs) generated from the atmospheric models of \citet{Castelli2004}. The grid consists of models with effective temperatures ranging from $3.5-50.0\,{\rm kK}$ and surface gravities spanning $\log{g}=0.5-5.0\,\rm{(cgs)}$, as described in detail by \citet{Howarth2011}.
This grid was linearly interpolated in order to produce models with a uniform temperature and surface gravity resolution of $125\,\rm{K}$ and $0.01\,\rm{dex}$ for $T_{\rm eff}=5-25\,\rm{kK}$ and $\log{g}=3.0-4.75\,\rm{(cgs)}$. All of the SEDs were then multiplied by the transmission functions associated with each of the narrow band filters: TD1 $UV$ \citep{Carnochan1982}, Geneva \citep{Rufener1988}, 2MASS \citep{Cohen2003}, and WISE \citep{Wright2010}. Modelling the photometry of un-resolved multi-star systems using synthetic SEDs requires a large number of fitting parameters, and therefore the solution is expected to be highly degenerate. The contribution to the total flux from each of the three stellar components depends on, among other factors, their effective temperatures, surface gravities, and radii. In order to reduce the number of solutions, we adopted a solar metallicity and a microturbulence velocity of $v_{\rm mic}=0\,{\rm km\,s}^{-1}$. As with many Bp stars, HD~35502's primary exhibits chemical spots on its surface (see Section~\ref{variability}); however, on average, a solar metallicity may be adopted. The high-resolution spectra of HD~35502 obtained by Narval, ESPaDOnS, and FEROS suggest that the two cooler A star components are approximately identical in terms of their $T_{\rm eff}$, $\log{g}$, and line-broadening parameters (see Section~\ref{line_fit}). If we assume that the two A stars contribute identically to the SED, the number of independent models required in the fitting routine is reduced from three to two, thereby resulting in a total of six free parameters: $T_{\rm eff}$, $\log{g}$, and the stellar radius, $R$, for both the B star and the (identical) A stars. The effective temperature of a star inferred from fitting model SEDs to photometry is highly dependent on the assumed colour excess, $E(B-V)$. Given HD~35502's probable location within the Orion OB1a association \citep{Landstreet2007}, the extinction caused by gas and dust is expected to be significant. Indeed, \citet{Sharpless1952} and \citet{Lee1968} report values (without uncertainties) of $E(B-V)$ of $0.13$ and $0.14$, respectively. We used the method of \citet{Cardelli1989} with an adopted selective-to-total extinction ratio of $R(V)=3.1$ in order to deredden the observed photometry. Small differences in the resulting best-fitting parameters of $<3$ per cent were found by using an $E(B-V)$ of $0.13$ or $0.14$. Although we investigated how our analysis was affected by varying the colour excess from $0.0-0.2$, the final effective temperatures are reported after assuming $E(B-V)=0.14$. We found that a Markov Chain Monte Carlo (MCMC) fitting routine provided a suitable means of determining the most probable solution while simultaneously revealing any significant degeneracies. This was carried out by evaluating the likelihood function yielded by a set of randomly selected fitting parameters drawn from a prior probability distribution \citep[see e.g.][]{WallJenkins2003}. For each iteration, the derived likelihood is compared with that produced by the previous iteration. If a new solution is found to yield a higher quality of fit (higher likelihood), these parameters are adopted; otherwise, the previous solution is maintained. In order to broadly sample the solution space, the MCMC algorithm is designed to adopt poorer fitting solutions at random intervals, thereby preventing a local (but not global) maximum likelihood from being returned.
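The accept/reject logic described above can be illustrated with a minimal Metropolis-style sampler applied to a generic log-likelihood; the proposal scale, parameter vector, and toy likelihood below are stand-ins and are not the actual SED model or priors used in this analysis.
\begin{verbatim}
import numpy as np

def metropolis(log_like, theta0, step, n_iter=100_000, seed=0):
    # Random-walk proposals are accepted with probability min(1, L_new/L_old),
    # so poorer solutions are occasionally kept and the chain can escape
    # local maxima of the likelihood.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        ll_new = log_like(proposal)
        if np.log(rng.uniform()) < ll_new - ll:   # accept/reject step
            theta, ll = proposal, ll_new
        chain[i] = theta
    return chain

# Toy usage: recover two "temperatures" from fake photometric constraints.
truth = np.array([18.4, 8.9])
data = truth + 0.1 * np.random.default_rng(1).standard_normal(2)
log_like = lambda th: -0.5 * np.sum(((th - data) / 0.1) ** 2)
chain = metropolis(log_like, theta0=[15.0, 10.0],
                   step=np.array([0.05, 0.05]), n_iter=20_000)
posterior_medians = np.median(chain[5_000:], axis=0)  # discard burn-in
\end{verbatim}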
Uniform prior probability distributions (flat priors) were defined for $T_{\rm eff}$, $\log{g}$, and $R$, where the latter was constrained within $1.0-10.0\,R_\odot$. The two speckle observations \citep{Balega2012} were then included in the total prior probability as monochromatic flux ratios (i.e. magnitude differences) at $5500\,{\rm \AA}$ and $8000\,{\rm \AA}$. We assumed that the reported $0.02\,{\rm mag}$ uncertainties correspond to $1\sigma$ significance. The marginalized posterior probability distributions produced after $10^6$ iterations in the Markov Chain are shown in Fig.~\ref{SEDa}. \begin{figure*} \centering \includegraphics[width=2.1\columnwidth]{./hd35502_synth_fit-eps-converted-to.pdf} \caption{\emph{Top:} Comparisons between the best-fitting synthetic (red) and observed (black) H lines, H$\gamma$ (left) and H$\beta$ (right). The filled blue region indicates the total uncertainty associated with $\log{g_B}$ and $\log{g_A}$. The observations occur at phase $0.49$ in the B star's rotational period (see Section~\ref{P_rot}) when the weakest emission is visible in the wings of H$\beta$. \emph{Bottom:} The residuals associated with the model spectrum (red).} \label{B_lines} \end{figure*} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./hd35502_synth_fit_gen-eps-converted-to.pdf} \caption{A subsample of the wavelength region used to determine the luminosity ratio, $L_B/L_A$. The red curve corresponds to the best-fitting model spectrum; the filled blue region indicates the total uncertainty associated with a $v_B\sin{i}$ of $\pm5\,{\rm km\,s}^{-1}$ as determined by fitting various metal lines. The black curve shows the observed spectrum. The Fe~{\sc i}$\,\lambda4273$ lines of the two A star components are labeled as `A$_1$' and `A$_2$'.} \label{B_vsini} \end{figure} The most probable effective temperatures for the B and A star models were found to be $18.4\pm1.2\,{\rm kK}$ and $8.9\pm0.6\,{\rm kK}$, respectively, where the uncertainties correspond to the $93^{\rm rd}$ percentile (approximately $2\sigma$). The fitting parameters used to derive the stellar radii, $R_B$ and $R_A$, depend on the distance to HD~35502 (i.e. as a scaling factor given by $R_\ast^2/d^2$). Although the posterior probability distributions for $R_B$ and $R_A$ both yield $2\sigma$ uncertainties of $0.2\,R_\odot$, the consideration of the relatively large distance uncertainty ($d=400\pm80\,{\rm pc}$) implies larger uncertainties of $\delta R_B=0.6\,R_\odot$ and $\delta R_A=0.4\,R_\odot$. The most probable radii and their uncertainties found from the MCMC analysis are then given by $R_B=3.0\pm0.6\,R_\odot$ and $R_A=2.1\pm0.4\,R_\odot$. The derived temperatures and stellar radii are listed in Table~\ref{param_tbl}. The analysis was insensitive to changes in $\log{g}$ as indicated by an essentially flat posterior probability distribution; therefore, no definitive surface gravity can be reported. Comparisons between the observed photometry and the best-fitting model are shown in Fig.~\ref{SEDb}, where we have adopted $\log{g}=4.3$ for both the A and B models as derived in Section~\ref{line_fit}. The model B star flux ($F_B$) and the model binary A star flux ($2F_A$) can be used to verify our initial assumption that the two components detected in the speckle observations do indeed correspond to the central B star and the A star binary system. 
The models can be compared with the speckle observations by calculating the flux ratios, $2F_A(\lambda)/F_B(\lambda)$, at the speckle observation wavelengths. Both $F_A(\lambda)$ and $F_B(\lambda)$ are integrated over wavelength intervals of $200\,{\rm \AA}$ and $1000\,{\rm \AA}$ \citep[i.e. the FWHM of the filters used by][] {Balega2012} centered at $5500\,{\rm \AA}$ and $8000\,{\rm \AA}$, respectively. We then obtain magnitude differences of $\Delta m_{\rm syn}(5500\,{\rm \AA})=1.46$ and $\Delta m_{\rm syn}(8000\,{\rm \AA})=1.23$. These values yield a negligible discrepancy with the speckle observations of $1$ per cent at $\lambda=5500\,{\rm \AA}$ and $2$ per cent at $\lambda=8000\,{\rm \AA}$. \begin{table} \caption{Stellar parameters of HD~35502.} \label{param_tbl} \begin{center} \begin{tabular*}{0.35\textwidth}{@{\extracolsep{\fill}}l r} \hline \hline \noalign{\vskip1mm} Sp. Type$^{1}$ & B5IVsnp+A+A\\ ${\rm \pi}$ (mas)$^{2}$ & $2.35\pm0.68$\\ $\langle {\rm dm}\rangle$ (mag)$^{3}$ & $8.00\pm0.46$\\ $d$ (pc) & $400\pm80$\vspace{2mm}\\ Photometry &\\ \hline \noalign{\vskip1mm} $V$ (mag)$^{4}$ & $7.331\pm0.004$\vspace{0.8mm}\\ $E(B-V)$ (mag)$^{5}$ & $0.14$\vspace{2mm}\\ B Star Parameters &\\ \hline \noalign{\vskip1mm} $T_{\rm eff}\,(\text{kK})$ & $18.4\pm0.6$\vspace{0.8mm}\\ $\log(g)\,(cgs)$ & $4.3\pm0.2$\vspace{0.8mm}\\ $v\sin{i}\,({\rm km\,s}^{-1})$ & $75\pm5$\vspace{0.8mm}\\ $\log{L/L_\odot}$ & $3.0\pm0.3$\vspace{0.8mm}\\ $M/M_\odot$ & $5.7\pm0.6$\vspace{0.8mm}\\ $R_p/R_\odot$ & $3.0^{+1.1}_{-0.5}$\vspace{0.8mm}\\ $R_{\rm eq}/R_\odot$ & $3.1^{+1.8}_{-0.4}$\vspace{0.8mm}\\ $\tau_{\rm age}\,(\text{Myr})$ & $20\pm20$\vspace{0.8mm}\\ $P_{\rm rot}\,(\text{d})$ & $0.853807(3)$\vspace{2mm}\\ A Star Parameters &\\ \hline \noalign{\vskip1mm} $T_{\rm eff}\,(\text{kK})$ & $8.9\pm0.3$\vspace{0.8mm}\\ $\log(g)\,(cgs)$ & $4.3\pm0.3$\vspace{0.8mm}\\ $v\sin{i}\,({\rm km\,s}^{-1})$ & $12\pm2$\vspace{0.8mm}\\ $\log{L/L_\odot}$ & $1.4\pm0.3$\vspace{0.8mm}\\ $M/M_\odot$ & $2.1\pm0.2$\vspace{0.8mm}\\ $R/R_\odot$ & $2.1\pm0.4$\vspace{0.8mm}\\ $\tau_{\rm age}\,(\text{Myr})$ & $<630$\vspace{0.8mm}\\ \hline \end{tabular*}\par \begin{tablenotes} \small \item Table references: $^1$\citet{Abt1977}, $^2$\citet{VanLeeuwen2007}, $^3$\citet{Brown1994}, $^4$\citet{Rufener1981}, $^5$\citet{Lee1968}. \end{tablenotes} \end{center} \end{table} \subsection{Spectral line fitting} \label{line_fit} Several properties of HD~35502's three stellar components may be estimated through comparisons with synthetic spectra (e.g. the surface gravity, line broadening characteristics, etc.). We carried this out using local thermodynamic equilibrium (LTE) models generated with {\sc synth3} \citep{Kochukhov2007a}. The code computes disc-integrated spectra using spectral line data provided by VALD \citep{Kupka2000} obtained using an {\rm extract stellar} request for a specified effective temperature, surface gravity, and microturbulence velocity in conjunction with {\sc atlas9} atmospheric models \citep{Kurucz1993_oth}. The synthetic spectra can then be convolved with the appropriate functions in order to account for instrumental and rotational broadening effects. The ESPaDOnS, Narval, and FEROS observations were normalized using a series of polynomial fits to the continuum. The relatively shallow ($\approx5$ per cent of the continuum) and narrow lines produced by the two A stars made the spectral line modelling inherently uncertain. 
For instance, the typical root mean square of the continuum near the A stars' Mg~{\sc i}$\,\lambda4703$ lines was found to be approximately $14$ per cent of the line depth. Thus, the SNRs of the majority of the A star lines were relatively low. This was mitigated to some extent by binning the observed spectra with a bin width of $\approx0.03\,{\rm \AA}$ (i.e. $2$ pixels). In order to account for the instrumental profile of the ESPaDOnS and Narval observations, the synthetic spectra were convolved with a Gaussian function assuming a resolving power of $R=65\,000$; similarly, the FEROS spectra were fit after convolving the synthetic spectra assuming $R=48\,000$. The quality of fit yielded by the total normalized synthetic spectrum ($F_{\rm tot}$) depends not only on $T_{\rm eff}$ of the three models but also on their (relative) luminosities: ${F_{\rm tot}=(\sum_iL_iF_i)/\sum_iL_i}$, where $L_i$ and $F_i$ are the luminosities and synthetic spectra of the $i^{\rm th}$ component. We adopted the $18.4\,{\rm kK}$ and $8.9\,{\rm kK}$ values associated with the B and two A stars obtained from the SED fitting (Section~\ref{SED_fit}). Moreover, we assumed that the luminosities of the two A stars are equal, thereby reducing the number of degrees of freedom in the spectral line fitting analysis. With $T_{\rm eff}$ specified, the stellar luminosities can be estimated through various methods. We found that the best results were obtained by letting the luminosity ratio of the B and A star models, $L_B/L_A$, be a free parameter and subsequently finding the minimum $\chi^2$ fit for a given surface gravity ($\log{g}$) and rotational broadening ($v\sin{i}$). This was carried out using the observed spectra for which the two A stars were most widely separated in wavelength (phase $0.994$ in Fig.~\ref{A_orbit}) in the wavelength range of $4200-4300\,{\rm \AA}$. This region was chosen because of the presence of the strong and essentially non-variable C~{\sc ii} line produced by the B star along with many A star lines of various elements (e.g. Fe, Ti, Cr, Mn). Most importantly, this wavelength range is free of B star lines exhibiting obvious chemical abundance anomalies and variability such as those observed from He. A subsample of this region containing C~{\sc ii}$\,\lambda4267$ is shown in Fig.~\ref{B_vsini}. This technique yielded $L_B/L_A=5.2$, which is consistent with the median value implied by the SED fitting analysis -- within $4000\,{\rm \AA}\le\lambda\le6000\,{\rm \AA}$ -- of $8.4^{+4.1}_{-2.4}$. Ultimately, using an $L_B/L_A$ of $5.2$ instead of $8.4$ produced a moderate increase of $1.1\sigma$ in the best-fitting solution's overall quality of fit. $\log{g}$ and $v\sin{i}$ of HD~35502's three components were then fit using various B and A star lines while recalculating the best-fitting $L_B/L_A$ for every change in the parameters. As a result of the presumed chemical peculiarities and line variability of the B star (see Section~\ref{variability}) and the limited number of lines, $\log{g_B}$ could not be reliably constrained using He or metal lines (e.g. Mg~{\sc i} and Mg~{\sc ii}). Instead, we relied upon the wings of the strong and broad Balmer lines. In particular, the observations of H$\beta$, H$\gamma$, and H$\delta$ obtained at a phase of $0.49$ in the B star's rotational period (the phase of minimum emission) were used in order to minimize the effects of emission. 
Several metal lines were used to constrain $v_B\sin{i}$ such as C~{\sc ii}$\,\lambda4267$, S~{\sc ii}$\,\lambda5640$, and Fe~{\sc ii}$\,\lambda5780$. $\log{g_B}$ and $v_B\sin{i}$ were found to be $4.3\pm0.2\,{\rm (cgs)}$ and $75\pm5\,{\rm km\,s}^{-1}$, respectively. Examples of the best-fitting model spectra are shown in Fig.~\ref{B_lines}; the adopted range in $v_B\sin{i}$ is shown in Fig.~\ref{B_vsini}. The surface gravity and rotational broadening of the two A stars were fit simultaneously using several Fe~{\sc ii} and Mg~{\sc i} lines. Their best-fitting $\log{g}$ and $v\sin{i}$ values were found to be $4.3\pm0.3\,{\rm (cgs)}$ and $12\pm2\,{\rm km\,s}^{-1}$, respectively. Two examples of the modelled A star lines are shown in Fig.~\ref{A_lines}. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./hd35502_synth_fit_A_stars-eps-converted-to.pdf} \caption{Comparisons between best-fitting model (red) and observed (black points) Fe~{\sc ii} (left) and Mg~{\sc i} (right) lines used to constrain the surface gravity and rotational broadening of the two A stars; the filled blue region indicates the total uncertainties associated with $v_A\sin{i}$ and $\log{g_A}$. The phase corresponds to the maximum observed separation between the two binary components.} \label{A_lines} \end{figure} \subsection{Hertzsprung-Russell Diagram} \label{hrd} The masses, ages, and polar radii of HD~35502's three stellar components may be estimated by comparing their positions on the Hertzsprung-Russell diagram (HRD) with theoretical isochrones. In order to determine the B star's luminosity, we used the $18.4\pm0.6\,{\rm kK}$ effective temperature and $3.0\pm0.6\,R_\odot$ radius derived from the three star SED fit discussed in Section~\ref{SED_fit}. The Stefan-Boltzmann law then yields a luminosity of $\log{L/L_\odot}=3.0^{+0.4}_{-0.5}$. Similarly, the position of the A stars on the HRD can be identified using $T_{\rm eff}=8.9\pm0.3\,{\rm kK}$ and $R_A=2.1\pm0.4\,R_\odot$. We calculate an A star luminosity of $\log{L/L_\odot}=1.4\pm0.3$. The HRD positions of the B and A stars are shown in Fig.~\ref{hrd}. The masses ($M$) and polar radii ($R_p$) associated with a given $T_{\rm eff}$ and $\log{L/L_\odot}$ were determined using a grid of Geneva model isochrones generated by \citet{Ekstrom2012}. The grid is calculated for the evolutionary timescale beginning with the zero-age main sequence up until the core carbon-burning phase for masses of $0.8-120\,M_\odot$. The microturbulence velocity was fixed at $v_{\rm mic}=0.0\,{\rm km\,s}^{-1}$ and a solar metallicity of $Z=0.014$ was assumed. In the case of HD~35502's central B star, the ratio of the angular velocity to the critical angular velocity, $\Omega/\Omega_c$, is known to be significant based on the $0.853807(3)\,{\rm d}$ rotational period (see Section~\ref{P_rot}). Its position on the HRD was therefore compared against several additional grids calculated using $\Omega/\Omega_c=0.4-0.9$ in increments of $0.1$ \citep{Georgy2013a}. While no significant difference in the inferred $M$ was apparent (i.e. $<4$ per cent), $R_{\rm p}$ was found to decrease by as much as $15$ per cent. In order to select the most accurate grid of isochrones and thus, the most accurate $R_{\rm p}$, $\Omega/\Omega_c$ must first be estimated. Since $\Omega_c$ depends on both the mass and polar radius, it was calculated using the parameters derived from each grid of isochrones. 
Using $P_{\rm rot}$ inferred in Section~\ref{P_rot} to determine $\Omega$, a range of $\Omega/\Omega_c$ values were found. The appropriate grid was then chosen as the one whose assumed $\Omega/\Omega_c$ most closely agreed with the value calculated from that grid's derived parameters. A calculated $\Omega/\Omega_c$ of $0.53$ yielded the best agreement; we found $R_{{\rm p},B}=3.0^{+1.1}_{-0.5}\,R_\odot$ and $M_B=5.7\pm0.6\,M_\odot$ using the $\Omega/\Omega_c=0.5$ isochrones. These results imply an equatorial radius of $R_{{\rm eq},B}=3.2^{+1.6}_{-0.6}\,R_\odot$. Using von Zeipel's law \citep{VonZeipel1924}, we estimate that the ratio of $T_{\rm eff}$ between the pole and the equator is approximately $1.02$. The two A stars' $M$ and $R_{\rm p}$ were inferred using the $\Omega/\Omega_c=0.0$ isochrone grid. The $T_{\rm eff}$ and $\log{L/L_\odot}$ derived from the three star SED fit then yielded $R_{{\rm p},A}=2.0^{+0.8}_{-0.5}\,R_\odot$ and $M_A=2.1\pm0.2\,M_\odot$. We note that both $R_{{\rm p},B}$ and $R_{{\rm p},A}$ are consistent with $R_B$ and $R_A$ derived in Section~\ref{SED_fit}. \begin{figure} \centering \includegraphics[width=0.999\columnwidth]{./hd35502_hrd-eps-converted-to.pdf} \caption{The positions of HD~35502's A and B star components are indicated by the blue diamond and black square. The evolutionary tracks (black dot-dashed lines) and isochrones (red dotted lines) assume a non-rotating star of solar metallicity \citep{Ekstrom2012}.} \label{hrd} \end{figure} \begin{figure*} \centering \includegraphics[width=1.9\columnwidth]{./hd35502_lsd_plot_test_wide-eps-converted-to.pdf} \caption{H, He+metal, He, C, Si, and Fe Stokes $I$ and $V$ LSD profiles generated from a $T_{\rm eff}=26\,{\rm kK}$, $\log{g}=4.0\,{\rm (cgs)}$ line mask. The profiles are all phased by the B star's rotational period of $0.853807\,{\rm d}$.} \label{multi_phased_lsd} \end{figure*} \section{Rotational period} \label{P_rot} Several observed properties of HD~35502 exhibit periodic variability with varying significance. In order to correctly interpret the origin of these variations, it is crucial to identify the periods, the phases at which the maxima and minima occur, and the amplitudes of the variations. This was carried out using the same procedure discussed in Section~\ref{orb_sol} in which the orbital solution of the A star binary was derived. We first assumed a sinusoidal fit to the data, $f(t)$, given by $f(t)=C_0+C_1\sin{(2\pi[t-t_0]/P+C_2)}$, where $P$ is the period of variability, $t$ is the observation's HJD, $t_0$ is the HJD corresponding to phase $0$, and $C_0$, $C_1$, and $C_2$ are fitting parameters. A $\chi^2$ distribution was then generated using periods ranging from $0.1$ to $10.0\,{\rm d}$ in increments of $\sim1\,{\rm s}$. The best-fitting period was inferred from the minimal $\chi^2$ solution and the $3\sigma$ $\chi^2$ interval was taken as the associated uncertainty. The uncertainties in the three fitting parameters were estimated using a $1\,000$ iteration bootstrapping analysis. The statistical significance of each derived period was evaluated by comparing the quality of the sinusoidal fit to that yielded by a constant fitting function given by $f(t)=C_0$, where $C_0$ is a time-independent fitting parameter. The difference between the minimal $\chi^2$ values associated with the constant fit ($\chi^2_{\rm const}$) and sinusoidal fit ($\chi^2_{\rm sin}$) was then calculated. Any sinusoidal fit having $\chi^2_{\rm const}-\chi^2_{\rm sin}\geq3\sigma$ was considered to be statistically significant.
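A minimal sketch of this period search is given below. It is illustrative only: the variable names and the linear-least-squares shortcut used to obtain the sinusoid coefficients at each trial period are our own, and the bootstrap and the $3\sigma$ bookkeeping are omitted for brevity.

\begin{verbatim}
import numpy as np

def sine_chi2(t, y, yerr, period, t0=0.0):
    # Best chi^2 of f(t) = C0 + C1*sin(2*pi*(t - t0)/P + C2) at a fixed trial
    # period. Writing C1*sin(phase + C2) = A*sin(phase) + B*cos(phase) makes
    # the model linear in (C0, A, B), so a weighted linear solve is enough.
    phase = 2.0 * np.pi * (t - t0) / period
    M = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)]) / yerr[:, None]
    coeff, *_ = np.linalg.lstsq(M, y / yerr, rcond=None)
    resid = y / yerr - M @ coeff
    return np.sum(resid ** 2)

def period_search(t, y, yerr, pmin=0.1, pmax=10.0, step_sec=1.0):
    # Scan trial periods (in days) in ~1 s increments; return the chi^2 curve.
    periods = np.arange(pmin, pmax, step_sec / 86400.0)
    chi2 = np.array([sine_chi2(t, y, yerr, p) for p in periods])
    return periods, chi2

# The adopted period is periods[np.argmin(chi2)]; its significance can be
# judged against chi^2 of a constant fit, chi2_const = sum(((y - mean)/yerr)**2).
\end{verbatim}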
Various periods were found when this procedure was applied to the longitudinal field measurements ($\langle B_z\rangle$, see Section~\ref{mag_field}) along with the multiple photometry and equivalent width (EW) measurements (see Section~\ref{variability}). The analyses of nearly all datasets yielded statistically significant variability, with over half corresponding to a unique period near $0.85\,{\rm d}$. They were found to be equal to one another within $\approx10\,{\rm s}$ with typical uncertainties $\lesssim10\,{\rm s}$ and were therefore averaged to obtain a period of $0.85382(7)\,{\rm d}$. However, when the H$\alpha$ EWs were phased with this period, the oldest measurements showed an $\approx0.1$ phase offset relative to the more recent measurements. This discrepancy was resolved by adopting the best-fitting ephemeris derived from the H$\alpha$ EWs of \begin{equation}\label{Prot_eqn} JD=2456295.812850(3)+0.853807(3)\cdot E \end{equation} where the reference JD ($2456295.812850(3)$) corresponds to the epoch of $\langle B_z\rangle$ maximum magnitude. Therefore, while the general accuracy of the rotational period is established by the diverse photometric, spectroscopic, and magnetic data sets, the adopted value and its precision correspond to those implied by the H$\alpha$ EWs. The periodic variability of $\langle B_z\rangle$ can be explained, in part, as a consequence of a stable oblique magnetic field configuration that is modulated by the star's rotation. Similarly, rotationally-modulated variations exhibited by the equivalent widths of various spectral lines can be produced by at least two mechanisms: (1) non-uniform distributions of chemicals on the stellar surface and (2) hot plasma accumulating in the star's magnetosphere resulting in emission and absorption. All of these phenomena are commonly exhibited by magnetic B-type stars \citep[e.g.][]{Landstreet1978,Leone2010,Bohlender2011}. Therefore, we conclude that the ephemeris given by Eqn.~\ref{Prot_eqn} is the B star's rotational period. \section{Magnetic field} \label{mag_field} Zeeman signatures produced by a magnetic star in the HD~35502 system were detected in circularly polarized (Stokes $V$) ESPaDOnS and Narval observations. They were found to be coincident with the B star's spectral lines regardless of the inferred velocities of the two A stars. We therefore assumed the detected field to be entirely produced by the B star. The Stokes $V$ Zeeman signature associated with H$\beta$ yielded 18 definite detections (DDs), 1 marginal detection (MD), and 7 non-detections (NDs) based on the detection criterion outlined by \citet{Donati1997}. The SNRs of the observed signatures were optimized using the LSD procedure \citep{Donati1997,Kochukhov2010} introduced in Section~\ref{orb_sol}. A master line mask containing He and metal lines was generated using data obtained from VALD \citep{Kupka2000} with a specified $T_{\rm eff}$, $\log{g}$, and microturbulence velocity ($v_{\rm mic}$). All Balmer lines were also removed along with any regions affected by atmospheric absorption (i.e. telluric lines). Several single element line masks were subsequently generated from the He+metal mask by retaining only specific chemical elements including He, C, Si, Fe, and Mg. Clearly, the magnitudes of $\langle B_z\rangle$ derived using different elements will be affected by any non-uniform distribution of chemicals across the star's surface.
Therefore, our analysis also includes measurements obtained using H lines (both from LSD profiles and H$\beta$) which do not typically exhibit non-solar abundances or non-homogeneous surface distributions (i.e. chemical spots). A H line mask was generated in which the H lines exhibiting moderate emission (e.g. H$\beta$ and H$\alpha$) were removed. The resultant mask contained three H~{\sc i} lines: H~{\sc i}$\,\lambda3970$, H~{\sc i}$\,\lambda4102$, and H~{\sc i}$\,\lambda4340$. Two approaches were used to isolate the B star lines from the A star lines using LSD. The first method used a mask generated with $T_{\rm eff}=15\,{\rm kK}$, $\log{g}=4.0\,{\rm (cgs)}$, and $v_{\rm mic}=0\,{\rm km\,s^{-1}}$ which yielded LSD profiles with clear Stokes $I$ contributions from all three stellar components. The narrow line components associated with the two A stars were then fit by Gaussian functions which were subsequently subtracted from the Stokes $I$ profiles. We found that this method could not be consistently applied to all observations. Moreover, the quality of the Gaussian fits was dramatically reduced when applied to the C, Si, and Fe line masks. The second method used the same $\log{g}$ and $v_{\rm mic}$ with a significantly higher temperature of $T_{\rm eff}=26\,{\rm kK}$. This yielded LSD profiles with minimal contributions from the two A stars. The Stokes $I$ and $V$ LSD profiles generated using the $T_{\rm eff}=26\,{\rm kK}$ line mask for H, He+metal, He, C, Si, and Fe are shown in Fig.~\ref{multi_phased_lsd}, phased according to Eqn.~\ref{Prot_eqn}. Aside from the Si and Fe LSD profiles, no strong contributions from the A star Stokes $I$ profiles can be discerned. Each ESPaDOnS and Narval spectropolarimetric observation includes a diagnostic null which may be used to evaluate the significance of any polarized signal. No spurious signals were detected in any of the diagnostic null profiles. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./hd35502_phased_Hbeta_Bz-eps-converted-to.pdf} \caption{H$\beta$ longitudinal magnetic field measurements phased by the rotational period (Eqn.~\ref{Prot_eqn}). The measurements obtained from Narval (open circles), ESPaDOnS (filled circles), and dimaPol (filled triangles) are shown along with the best fitting sinusoid (solid black).} \label{bz_Hbeta_phase} \end{figure} $\langle B_z\rangle$ was inferred from each of the Stokes $I$ and $V$ LSD profiles, as well as from H$\beta$, using equation (1) of \citet{Wade2000}. We used a wavelength of $500\,{\rm nm}$ with a Land\'{e} factor of $1.2$ for the He and metal mask measurements and a Land\'{e} factor of unity for the H mask and H$\beta$ measurements. The Doppler shift produced by the B star's radial velocity of $\approx20\,{\rm km\,s}^{-1}$ was subtracted from each LSD profile. The Stokes $I$ and $V$ profiles were then normalized to the continuum intensity at a velocity of $v=-125\,{\rm km\,s}^{-1}$, where the average Stokes $V$ intensity is approximately zero. An integration range of $v\in[-110,110]\,{\rm km\,s}^{-1}$ was then used in the calculation of $\langle B_z\rangle$ for each of the LSD profiles (i.e. H, He+metal, He, C, Si, and Fe); for the H LSD profiles and H$\beta$ line, this integration range corresponds to the width of the Doppler core. The values of $\langle B_z\rangle_{{\rm H}\beta}$ inferred from the Narval, ESPaDOnS, and dimaPol observations, along with the status of their detections, are listed in Tables~\ref{obs_tbl} and~\ref{dao_tbl}. 
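The measurement itself can be summarized by the short sketch below, which evaluates the standard first-moment expression for $\langle B_z\rangle$ (cf. equation 1 of \citet{Wade2000}) on a velocity grid. The function and argument names are illustrative, and details such as the continuum normalization at $v=-125\,{\rm km\,s}^{-1}$ and the per-element Land\'{e} factors are simplified.

\begin{verbatim}
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def longitudinal_field(v, I, V, lambda_nm=500.0, lande=1.2,
                       vrad=20.0, vint=110.0):
    # First-moment estimate of <Bz> (in gauss) from continuum-normalized
    # Stokes I and V profiles sampled on a velocity grid v (km/s):
    #   <Bz> = -2.14e11 * int(v V dv) / (lambda * g * c * int(1 - I dv)),
    # with the profiles shifted to the star's rest frame and integrated
    # over +/- vint about line centre.
    vr = v - vrad
    m = np.abs(vr) <= vint
    num = np.trapz(vr[m] * V[m], vr[m])
    den = np.trapz(1.0 - I[m], vr[m])
    return -2.14e11 * num / (lambda_nm * lande * C_KMS * den)
\end{verbatim}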
The $\langle B_z\rangle$ values derived from the H, He, and metal LSD profiles are listed in Table~\ref{bz_full_tbl}. High resolution spectropolarimetry is essentially insensitive to the polarization in the wings of the Balmer lines. As an example of how the Doppler cores of Balmer lines may be used to infer $\langle B_z\rangle$, see Fig. 2 of \citet{Landstreet2015}, who explain the method in some detail. Our measurements obtained in the context of the current paper, as well as those obtained by \citet{Sikora2015}, demonstrate that this method results in longitudinal field intensity and variability in good agreement with other approaches. \begin{figure*} \centering \includegraphics[width=2.1\columnwidth]{./hd35502_phased_lsd_Bz_multi-eps-converted-to.pdf} \caption{Longitudinal magnetic field measurements phased by the rotational period (Eqn.~\ref{Prot_eqn}). The measurements obtained from the H, He+metal, He, C, Si, and Fe line masks are shown along with their best fitting sinusoids (solid black). The fit to $\langle B_z\rangle_{\rm H}$ is shown as dashed black curves in the other panels.} \label{bz_phase} \end{figure*} All of the $\langle B_z\rangle$ measurements were found to exhibit statistically significant variations with best-fitting periods ranging from $0.85380$ to $0.85389\,{\rm d}$. Only the $\langle B_z\rangle$ values obtained using the Fe LSD profiles ($\langle B_z\rangle_{\rm Fe}$) yielded more than one period. Figures~\ref{bz_Hbeta_phase} and~\ref{bz_phase} show the $\langle B_z\rangle$ measurements inferred from H$\beta$ and the LSD profiles, respectively, phased by the B star's rotational period (Eqn.~\ref{Prot_eqn}). It is clear that the scatter of the $\langle B_z\rangle_{\rm He}$, $\langle B_z\rangle_{\rm Si}$, and $\langle B_z\rangle_{\rm Fe}$ measurements is significantly larger than that yielded by $\langle B_z\rangle_{\rm H}$, $\langle B_z\rangle_{\rm He+metal}$, and $\langle B_z\rangle_{\rm C}$. This is likely caused by the presence of He, Si, and Fe chemical spots which are commonly observed on the surfaces of Bp stars. As discussed in Section~\ref{variability}, we find strong evidence for He and Si spots. The mean and amplitude of the phased $\langle B_z\rangle$ measurements are defined by the fitting parameters $B_0$ and $B_1$ associated with the sinusoidal fitting function $\langle B_z\rangle=B_0+B_1\sin{(2\pi\theta+\phi)}$, where $\theta$ is the phase calculated using Eqn.~\ref{Prot_eqn} and $\phi$ is the phase shift. The most precise $B_0$ and $B_1$ values -- as indicated by the uncertainties estimated using a $1\,000$ iteration bootstrapping analysis -- were derived using H$\beta$, and the H and He+metal LSD profiles. They were found to be consistent within $2\sigma$. The lowest uncertainties were obtained from the $\langle B_z\rangle_{\rm H}$ measurements, which yielded a mean and amplitude of $B_0=-1.41\pm0.11\,{\rm kG}$ and $B_1=1.64\pm0.16\,{\rm kG}$, where the uncertainties correspond to $3\sigma$. If we assume that the field is characterized by an important dipole component, the sinusoidal variations in $\langle B_z\rangle$ imply that the dipole's axis of symmetry is inclined (i.e. has an obliquity angle $\beta$) with respect to the star's rotational axis. This interpretation, first described by \citet{Stibbs1950}, is known as the Oblique Rotator Model (ORM).
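Under this model the shape of the $\langle B_z\rangle$ curve follows from simple geometry. The sketch below (illustrative only) evaluates the oblique-dipole relation of \citet{Preston1967} with linear limb darkening; for the parameters derived below it reproduces the extrema implied by the $B_0$ and $B_1$ values quoted above.

\begin{verbatim}
import numpy as np

def bz_dipole(phase, B_p, incl_deg, beta_deg, u=0.265):
    # <Bz> of a centred oblique dipole seen at rotational phase `phase`
    # (Stibbs 1950; Preston 1967), with linear limb-darkening coefficient u:
    #   <Bz> = (B_p/20)*(15+u)/(3-u)*[cos(beta)cos(i) + sin(beta)sin(i)cos(2*pi*phase)]
    i, b = np.radians(incl_deg), np.radians(beta_deg)
    geom = np.cos(b) * np.cos(i) + np.sin(b) * np.sin(i) * np.cos(2.0 * np.pi * phase)
    return B_p / 20.0 * (15.0 + u) / (3.0 - u) * geom

# With B_p ~ 14 kG, i ~ 24 deg and beta ~ 63 deg this gives |<Bz>| extrema of
# roughly 3.0 kG and 0.2 kG, matching (up to the sign of the visible pole)
# the sinusoidal fit parameters B_0 and B_1 quoted above.
\end{verbatim}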
Under the assumptions of the ORM, $\beta$ can be calculated from equation (3) of \citet{Preston1967}, which depends on $r\equiv|\langle B_z\rangle|_{\rm min}/|\langle B_z\rangle|_{\rm max}$ and the inclination angle, $i$, of the star's axis of rotation. The value of $i$ can be determined using $P_{\rm rot}$ given by Eqn.~\ref{Prot_eqn}, $v\sin{i}=75\pm5\,{\rm km\,s}^{-1}$ derived in Section~\ref{line_fit}, and $R_{\rm eq}=3.2^{+1.6}_{-0.6}\,R_\odot$ listed in Table~\ref{param_tbl}. We obtained a value of $i=24^{+8}_{-9}\,\degree$. The value of $r$ was determined from $B_0$ and $B_1$. Using the values inferred from the $\langle B_z\rangle_{\rm H}$ measurements, we obtained $r=0.08^{+0.09}_{-0.07}$. Finally, the obliquity angle was found to be $\beta=63\pm13\,\degree$ using Eqn. (3) of \citet{Preston1967}. In addition to the obliquity, the strength of the magnetic field's dipole component, $B_p$, can be calculated by inverting equation (1) of \citet{Preston1967} and letting $t=0$ correspond to $\langle B_z\rangle_{\rm max}$. We used a linear limb darkening constant that was averaged over the values derived by \citet{vanHamme1993} for the $U$, $B$, $V$, $R$, and $I$ bandpasses. These specific filters were selected because of the approximate correspondence with the ESPaDOnS and Narval wavelength range. A value of $u=0.265$ was obtained after interpolating the published table for an effective temperature and surface gravity of $18.4\,{\rm kK}$ and $4.3\,{\rm (cgs)}$. $i$, $\beta$, and $u$ then yield $B_p=14^{+9}_{-3}\,{\rm kG}$. Similar obliquity angles and dipolar field strengths are derived using H$\beta$ along with the He+metal, He, C, and Si LSD profiles. $\langle B_z\rangle_{\rm Fe}$ exhibits significantly weaker and more uncertain values of $\beta=33^{+36}_{-27}\,\degree$ and $B_p=8^{+6}_{-3}\,{\rm kG}$. \section{Emission and variability} \label{variability} Hot magnetic B-type stars are commonly found to exhibit spectral line variability either as a result of chemical spots \citep[e.g.][]{Kochukhov2015,Yakunin2015} or from the presence of a hot plasma beyond the stellar surface \citep[e.g.][]{Landstreet1978}. Furthermore, photometric variability correlated with both of these phenomena, as well as with strong, coherent magnetic fields, has been previously reported \citep[e.g.][]{Shore1990a,Oksala2010}. Along with the $uvby$ photometric measurements listed in Table~\ref{uvby_tbl}, we also analyzed \emph{Hipparcos} Epoch Photometry for variability. The catalogue \citep{Perryman1997} contains 98 observations of HD~35502 which were obtained over a period of $3.1\,{\rm yrs}$. Three of these measurements have multiple quality flags reported and were therefore removed from our analysis. The remaining measurements have an average of $7.331\,{\rm mag}$, a standard deviation of $0.011\,{\rm mag}$, and an average uncertainty of $0.009\,{\rm mag}$. The period searching routine described in Section \ref{P_rot} was applied to both the $uvby$ and \emph{Hipparcos} data sets. All of the $uvby$ measurements were found to exhibit statistically significant variability; however, only $u$(v-c) and $v$(v-c) yielded unique periods of $0.8537(3)\,{\rm d}$. The analysis of the \emph{Hipparcos} magnitudes ($H_p$) resulted in a best-fitting period of $0.8630(2)\,{\rm d}$ along with five other statistically significant periods ranging from $0.46$ to $1.7\,{\rm d}$.
Fig.~\ref{photometry_plt} shows the sinusoidal fits to $u$(v-c), $v$(v-c), $b$(v-c), $y$(v-c), and $H_p$ obtained when phased by the B star's $0.853807\,{\rm d}$ rotational period given by Eqn.~\ref{Prot_eqn}. The variability of the spectral lines associated with HD~35502's central B star is most easily detected by calculating EWs. We carried this out for a number of lines for which no significant absorption produced by the two A stars was evident. This included He~{\sc i}, C~{\sc ii}, and Si~{\sc iii} lines. The EWs of the He and metal lines were calculated using integration ranges of $[-100,100]\,{\rm km\,s}^{-1}$ and were normalized to the continuum just outside these limits. The Balmer line EWs (H$\alpha$, H$\beta$, and H$\gamma$) were measured by normalizing to the flux at $|v|\gtrsim700\,{\rm km\,s}^{-1}$ and integrating over a velocity range of $[-600,600]\,{\rm km\,s}^{-1}$. The uncertainties in the EW measurements were then estimated using a bootstrapping analysis with $1\,000$ iterations. All of the calculated EWs and uncertainties are listed in Tables~\ref{halpha_ew_tbl} and \ref{ew_tbl}. The contributions of the two A stars to the total measured Balmer line EWs were approximated by comparing the synthetic EWs associated with the {\sc synth3} models discussed in Section~\ref{line_fit}. We found that the total synthetic spectrum (including the B star and the two A stars) yielded EWs of $5.6\,{\rm \AA}$ averaged over H$\alpha$, H$\beta$, H$\gamma$, and H$\delta$. A similar calculation applied to the single B star model yielded average Balmer line EWs of $4.7\,{\rm \AA}$, suggesting that the presence of the two A stars increases the EW measurements by a factor of $\approx1.2$. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./hd35502_phased_phot-eps-converted-to.pdf} \caption{Photometric observations obtained using various filters and phased by the B star's rotational period of $0.853807\,{\rm d}$. The top four panels show the $u$(v-c), $v$(v-c), $b$(v-c), and $y$(v-c) magnitude differences between HD~35502 (`v') and the non-variable comparison star, HD~35575 (`c'). The bottom panel shows the \emph{Hipparcos} Epoch Photometry measurements. The horizontal dotted lines indicate the best constant fit to the data.}\label{photometry_plt} \end{figure} Statistically significant variations were detected from EW measurements of H$\alpha$, H$\beta$, H$\gamma$, He~{\sc i}$\,\lambda4713$, and Si~{\sc iii}$\,\lambda4553$. We note that telluric absorption lines were not removed or minimized in the calculation of these EWs. A range of best-fitting periods were derived; however, only the H$\alpha$ and He~{\sc i}$\,\lambda4713$ EW measurements yielded unique periods of $0.853807(3)\,{\rm d}$ and $0.85377(3)\,{\rm d}$, respectively. The strongest variability was measured from H$\alpha$ for which an amplitude of $0.51\pm0.03\,{\rm \AA}$ was derived. Similar variability -- both in terms of the phase of maximum emission and the best-fitting period -- was also detected in H$\beta$ although at a much lower amplitude of $0.08\pm0.01\,{\rm \AA}$. The phased H$\alpha$ and H$\beta$ EWs exhibit a maximum emission at a phase of $0.99\pm0.04$ and $0.0\pm0.3$, respectively, and are therefore in phase with the $\langle B_z\rangle$ measurements. The He~{\sc i}$\,\lambda4713$ EWs are approximately in anti-phase with respect to the $\langle B_z\rangle$ variation with minimum absorption occurring at a phase of $0.5\pm0.1$.
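A minimal illustration of the EW integration and bootstrap described above is given below. The function name, the simple continuum estimate and the pixel-resampling scheme are our own simplifications; the exact bootstrap implementation used in the analysis is not specified in the text.

\begin{verbatim}
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def equivalent_width(v, flux, lambda0, vint=100.0, n_boot=1000, rng=None):
    # EW (in Angstrom) of a line given on a velocity grid v (km/s), with the
    # continuum taken just outside +/- vint and the uncertainty estimated by
    # resampling the pixel line depths inside the integration window.
    rng = np.random.default_rng() if rng is None else rng
    cont = np.median(flux[np.abs(v) > vint])
    inside = np.abs(v) <= vint
    depth = 1.0 - flux[inside] / cont
    dv = np.mean(np.diff(v[inside]))
    ew = lambda0 / C_KMS * dv * depth.sum()
    boot = [lambda0 / C_KMS * dv * rng.choice(depth, depth.size).sum()
            for _ in range(n_boot)]
    return ew, np.std(boot)
\end{verbatim}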
Along with EWs, dynamic spectra were also computed by comparing various spectral lines with their respective average normalized intensity ($\langle I/I_c\rangle$). Both the dynamic spectra and the EWs of C~{\sc ii}$\,\lambda4267$, He~{\sc i}$\,\lambda4713$, Si~{\sc iii}$\,\lambda4553$, H$\beta$, and H$\gamma$ are shown in Fig. \ref{ew_He_metal}. It is evident that He~{\sc i}$\,\lambda4713$, Si~{\sc iii}$\,\lambda4553$, and to a lesser extent, C~{\sc ii}$\,\lambda4267$, show absorption features crossing from negative to positive velocities. These features suggest the presence of chemical spots on the B star's surface. The most obvious spot is associated with He~{\sc i}, which exhibits a maximum absorption at a phase of $0.0\pm0.1$ and is therefore coincident with the epoch of maximum $\langle B_z\rangle$ magnitude. Assuming that the star's magnetic field consists of a strong dipole component as discussed in Section~\ref{mag_field}, this result suggests that He is more concentrated near the field's negative pole. Enhanced He abundances on the surfaces of magnetic Bp stars have been commonly reported to coincide with either the magnetic equator or magnetic poles \citep[e.g.][]{Neiner2003,Bohlender2011, Grunhut2012b,Rivinius2013}. A similar plot of the dynamic spectrum and EWs of H$\alpha$ is shown in Fig.~\ref{ew_Halpha}, where the additional DAO and CFHT spectra are also included. In order to reduce normalization errors, all of the H$\alpha$ spectra were consistently normalized using a linear fit to the measured flux at velocities of $\pm600\,{\rm km\,s}^{-1}$. The observed spectra are compared with the synthetic spectrum ($I_{\rm syn}$) discussed in Section~\ref{line_fit} rather than the average observed spectrum. $I_{\rm syn}$ includes the contributions from the A stars which move (in velocity space) relative to the B star throughout the B star's rotational period. Therefore, this method results in a greater contrast between the emission and absorption features associated only with the B star. Strong, nearly symmetrical emission peaks are observed at a distance of $\approx4\,R_\ast$ at a phase of $0.0$. The intensity of this emission is observed to decrease by a factor of $\approx2$ at a phase of $0.5$. Similarly, the ratio between the maximum core emission (at phase $0.25$) and minimum core emission (at phase $0.5$) is also found to be $\approx2$. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./hd35502_dynamic_plot_test-eps-converted-to.pdf} \caption{\emph{Left:} Dynamic spectra of various H~{\sc i}, He~{\sc i}, C~{\sc ii}, and Si~{\sc iii} lines. A low-pass filter has been applied to the spectrum in order to minimize the A star contributions and tellurics. Each set of observations are compared with the average spectrum (dashed red). \emph{Right:} Measured equivalent widths associated with the lines shown in the dynamic spectra plots. All of the measurements are phased by the B star's rotational period of $0.853807\,{\rm d}$.} \label{ew_He_metal} \end{figure} The standard interpretation of the broad H$\alpha$ emission peaks that are associated with a small number of magnetic B-type stars is that they are produced by two dense clouds of hot plasma, trapped in the magnetic field above the stellar surface, which co-rotate with the star \citep[e.g.][]{Walborn1976,Landstreet1978}. 
Under the assumption that the cloud is optically thin, one would expect the same blue shifted emission feature to be observed half a rotational cycle later shifted towards redder wavelengths (e.g. in Fig.~\ref{ew_Halpha}, the blue emission peak occurring at phase 0.0 should reappear red shifted at phase 0.5). The fact that the strength of both the blue and red shifted emission peaks decreases between phase 0.0 and phase 0.5 suggests that the plasma clouds are, to an extent, optically thick. The relatively large decrease in emission is currently unprecedented amongst the known centrifugal magnetosphere (CM) hosting stars; however, a more moderate decrease in HR~5907's H$\alpha$ emission is shown in Fig.~15 of \citet{Grunhut2012b}. Adopting the standard interpretation, the trajectories of the H$\alpha$-emitting clouds may be approximately inferred by fitting the velocities at which the peak emission is found on either side of the H$\alpha$ core as a function of rotational phase. The resulting fits suggest that the plasma clouds follow nearly circular trajectories as indicated by the two dashed curves shown in Fig.~\ref{ew_Halpha}. The mechanism by which this plasma is confined is discussed in the following section. \section{Magnetosphere} \label{sect_magnetosphere} As described by \citet{ud-Doula2008_oth}, various characteristics of a star's magnetosphere may be inferred by comparing two parameters: the Kepler radius, $R_K$, and the Alfv\'{e}n radius, $R_{\rm Alf}$. $R_K$ is the radius at which the gravitational force is balanced by the centrifugal force in a reference frame that is co-rotating with the star. $R_{\rm Alf}$ characterizes the point within which the magnetic field dominates over the wind and approximately corresponds to the extent of the closed field loops \citep{ud-Doula2002_oth,ud-Doula2008_oth}. Their ratio, $R_{\rm Alf}/R_K$, can therefore be used to define a magnetosphere as either dynamical ($R_{\rm Alf}/R_K<1$) or centrifugal ($R_{\rm Alf}/R_K>1$) \citep{Petit2013}. It also serves as an indicator of the volume of the magnetosphere: those stars having comparatively larger $R_{\rm Alf}/R_K$ will be capable of confining the emitted wind at larger radii. Furthermore, since a stronger field would be capable of confining more mass, a correlation between the Alfv\'{e}n radius and the magnetosphere's density may be expected. Using the mass and rotational period of HD~35502's B star, we find a Kepler radius of $R_K=2.1^{+0.4}_{-0.7}\,R_\ast$, where $R_\ast$ is the stellar radius at the magnetic equator. We approximate $R_\ast$ using $R_{\rm eq}$ since this corresponds to the stellar radius at the latitude where the plasma is expected to accumulate. The Alfv\'{e}n radius is estimated using equation (9) of \citet{ud-Doula2008_oth} for a dipole magnetic field. This expression requires the calculation of the wind confinement parameter, $\eta_\ast$, which in turn depends on the dipole magnetic field strength, the equatorial radius, the terminal wind speed ($V_\infty$), and the wind mass loss rate in the absence of a magnetic field ($\dot{M}_{B=0}$). Following the recipe outlined by \citet{Vink2000}, $\dot{M}_{B=0}$ and $V_\infty$ are derived for a B star having $12.5<T_{\rm eff}\leq22.5\,{\rm kK}$ using $V_\infty/V_{\rm esc}=1.3$, where $V_{\rm esc}$ is the escape velocity. We obtain $\dot{M}_{B=0}=(1.3^{+6.0}_{-1.0})\times10^{-10}\,M_\odot/{\rm yr}$ and $V_\infty=1100^{+40}_{-110}\,{\rm km\,s}^{-1}$.
Finally, $\eta_\ast$ is found to be $(2.6^{+7.9}_{-1.3})\times10^6$ using the value of $B_p$ derived from $\langle B\rangle_{\rm H}$, which then yields $R_{\rm Alf}=41^{+17}_{-6}\,R_\ast$. The magnetospheric parameters associated with both the H and He+metal longitudinal field measurements derived in Section~\ref{mag_field} are listed in Table~\ref{magneto_tbl}. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./hd35502_dynamic_plot_Halpha_0_8538-eps-converted-to.pdf} \caption{\emph{Top:} Observed H$\alpha$ spectrum ($I/I_c$, solid black curve) compared with the average synthetic spectrum (dotted red). \emph{Middle:} Dynamic spectra of $I/I_c-I_{\rm syn}$ where $I_{\rm syn}$ is the synthetic spectrum. A low-pass filter has been applied to the spectrum in order to minimize the A star contributions and tellurics. \emph{Bottom:} Measured equivalent widths of H$\alpha$ using ESPaDOnS (open red triangles), Narval (filled green triangles), FEROS (yellow squares), CFHT f/8.2 spectrograph (filled blue circles), and DAO (open black circles) observations. The black curve indicates the best-fitting sinusoid. Both the EWs and dynamic spectra are phased by the B star's rotational period of $0.853807\,{\rm d}$.} \label{ew_Halpha} \end{figure} Given that the hot plasma surrounding HD~35502's B star is co-rotating with the star at a distance of $\approx4\,R_\ast$, i.e. between $R_K$ and $R_{\rm Alf}$, it is likely that the plasma is being confined by the strong magnetic field. Similar examples of magnetic B-type stars producing H emission well beyond the stellar radius (at distances of $\approx2-4\,R_\ast$) have been previously reported \citep[e.g.][]{Bohlender2011, Oksala2012,Grunhut2012b}. In each of these cases, the star's Alfv\'{e}n radius exceeds its Kepler radius by approximately an order of magnitude \citep{Petit2013,Shultz2014}. Using the $R_{\rm Alf}$ value obtained from the $\langle B\rangle_{\rm H}$ measurements, we derived an $R_{\rm Alf}$ to $R_K$ ratio of $19^{+20}_{-5}$. Therefore, the fact that we observe strong H$\alpha$ emission is in agreement with this $R_{\rm Alf}/R_K\gtrsim10$ empirical limit. The magnetic confinement-rotation diagram compiled by \citet{Petit2013} allows $R_K$ and $R_{\rm Alf}$ to be understood within the broader context of all known O and B stars that host magnetospheres. Our characterization of the magnetosphere hosted by HD~35502's B star suggests that it is well within the centrifugal magnetosphere regime. Only two other stars have been discovered exhibiting similar $R_{\rm Alf}$ and $R_K$ values within the derived $R_{\rm Alf}>40\,R_\ast$ and $1.5\leq R_K\leq2.5\,R_\ast$. Although approximately four other stars have lower limits of $R_{\rm Alf}$ and $R_K$ that are consistent with HD~35502's, only HD$\,182180$ \citep{Rivinius2013} and HD$\,142184$ \citep{Grunhut2012b} have reported upper and lower uncertainties. These two examples have similar effective temperatures, surface gravities, radii, and masses to HD~35502's magnetic B star. However, HD$\,182180$ and HD$\,142184$ are slightly faster rotators ($P_{\rm rot}\approx0.5\,{\rm d}$) and host magnetic fields with weaker dipolar components ($B_p\approx10\,{\rm kG}$). 
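These numbers follow directly from the adopted stellar, wind and field parameters. The sketch below is illustrative only; it uses the approximate dipole scaling $R_{\rm Alf}/R_\ast\simeq0.3+(\eta_\ast+0.25)^{1/4}$ of \citet{ud-Doula2008_oth} and, with the quoted inputs, recovers the central values listed in Table~\ref{magneto_tbl}.

\begin{verbatim}
import numpy as np

# cgs constants and unit conversions
G, MSUN, RSUN = 6.674e-8, 1.989e33, 6.957e10
YR, DAY = 3.156e7, 86400.0

def magnetosphere(M_msun=5.7, R_eq_rsun=3.2, P_d=0.853807,
                  Bp_G=14.0e3, Mdot_msun_yr=1.3e-10, Vinf_kms=1100.0):
    # Kepler radius, wind-confinement parameter and Alfven radius (in units
    # of the equatorial radius) for a dipolar magnetosphere:
    #   R_K    = (G M P^2 / 4 pi^2)^(1/3)
    #   eta*   = (B_p/2)^2 R*^2 / (Mdot_{B=0} V_inf)
    #   R_Alf ~= [0.3 + (eta* + 0.25)^(1/4)] R*
    M, Rs = M_msun * MSUN, R_eq_rsun * RSUN
    P, Mdot, Vinf = P_d * DAY, Mdot_msun_yr * MSUN / YR, Vinf_kms * 1e5
    R_K = (G * M * P**2 / (4.0 * np.pi**2))**(1.0 / 3.0) / Rs
    eta = (0.5 * Bp_G)**2 * Rs**2 / (Mdot * Vinf)
    R_Alf = 0.3 + (eta + 0.25)**0.25
    return R_K, eta, R_Alf

# With the default inputs this returns R_K ~ 2.1, eta* ~ 2.7e6 and
# R_Alf ~ 41, consistent with the central values quoted above.
\end{verbatim}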
\begin{table} \caption{Magnetospheric parameters derived from $\langle B_z\rangle_{\rm H}$ and $\langle B_z\rangle_{\rm He+metal}$ measurements.} \label{magneto_tbl} \begin{center} \begin{tabular*}{0.9\columnwidth}{@{\extracolsep{\fill}}l c c} \noalign{\vskip1mm} \hline \noalign{\vskip1mm} & H$_{\rm LSD}$ & He+metal$_{\rm LSD}$ \vspace{0.8mm}\\ \hline \noalign{\vspace{0.8mm}} $i\,(\degree)$ & $24^{+8}_{-9}$ & $24^{+8}_{-9}$ \vspace{0.8mm}\\ $\beta\,(\degree)$ & $63\pm13$ & $66^{+9}_{-13}$ \vspace{0.8mm}\\ $B_p\,{\rm (kG)}$ & $14^{+9}_{-3}$ & $15^{+10}_{-4}$ \vspace{0.8mm}\\ $\dot{M}_{B=0}\,(M_\odot\,{\rm yr}^{-1})$ & $(1.3^{+6.0}_{-1.0})\times10^{-10}$ & $(1.3^{+6.0}_{-1.0})\times10^{-10}$ \vspace{0.8mm}\\ $V_\infty\,({\rm km\,s}^{-1})$ & $1100^{+40}_{-110}$ & $1100^{+40}_{-110}$ \vspace{0.8mm}\\ $\eta_\ast$ & $(2.6^{+7.9}_{-1.3})\times10^6$ & $(3.0^{+6.5}_{-1.5})\times10^6$ \vspace{0.8mm}\\ $R_{\rm Alf}\,(R_\ast)$ & $41^{+17}_{-6}$ & $42^{+14}_{-7}$ \vspace{0.8mm}\\ $R_K\,(R_\ast)$ & $2.1^{+0.4}_{-0.7}$ & $2.1^{+0.4}_{-0.7}$ \vspace{0.8mm}\\ $R_{\rm Alf}/R_K$ & $19^{+20}_{-5}$ & $20^{+18}_{-6}$ \vspace{0.8mm}\\ \hline \end{tabular*} \end{center} \end{table} \section{Conclusion} \label{conclusions} The analysis presented here demonstrates a number of new discoveries regarding the nature of HD~35502. The high resolution spectroscopic and spectropolarimetric observations obtained using ESPaDOnS, Narval, and FEROS indicate that it is an SB3 system containing a central magnetic B-type star and two cooler A-type stars, all of which lie on the main sequence. We confirm that HD~35502's speckle companion reported by \citet{Balega2012} is indeed the A star binary system. Our analysis indicates that both the A stars are physically nearly identical with a mass ratio of $1.05\pm0.02$, masses of $2.1\,M_\odot$, and effective temperatures of $8.9\,{\rm kK}$. Based on radial velocity measurements, we find that the two A stars form a binary system with an orbital period of $5.66866(6)\,{\rm d}$. No radial velocity variations of the B star were detected over the 22 year observing period, which can be explained by the inferred orbital period of $P_{\rm orb}\gtrsim40\,{\rm yrs}$. However, two other explanations can account for the lack of detected radial velocity variations: (1) the inclination angle associated with the A star binary's orbit about the B star may be $\sim0\,\degree$ or (2) the A star binary may lie along the line of sight but not be gravitationally bound to the B star. A number of factors favour the triple system description such as the consistent flux ratios between the three components derived here. Specifically, if the two A stars are significantly closer or further than HD 35502's $400\pm80\,\rm{pc}$ distance, they would no longer lie on the main sequence. Furthermore, the radial velocity of the Orion OB1a subassociation, of which HD~35502 is most likely a member, has a reported velocity of $\approx24\,{\rm km\,s}^{-1}$ \citep{Morrell1991} and is therefore consistent with both the B star's average radial velocity of $21\pm2\,{\rm km\,s}^{-1}$ and the radial velocity of the A star binary's center of mass ($27\pm3\,{\rm km\,s}^{-1}$). 
Our analysis of HD~35502's central B star revealed the following: \begin{enumerate}[leftmargin=0.5cm] \item it has an effective temperature of $18.4\,{\rm kK}$, a mass of $5.7\,M_\odot$, and a polar radius of $3.0\,R_\odot$; \item it rotates relatively rapidly with a rotational period of $0.853807(3)\,{\rm d}$; \item we detect a strong magnetic field and derive the magnitude of its dipolar component ($B_p\approx14\,{\rm kG}$) and its obliquity ($\beta=63\,\degree$); \item it exhibits significant line variability in the form of emission and chemical spots. The emission is predominantly observed in H$\alpha$ at a distance of approximately four times the stellar radius. Strong He abundance variations indicate a higher concentration near the negative pole of the magnetic field's dipole component; \item we derive Alfv\'{e}n and Kepler radii of $R_{\rm Alf}\approx41\,R_\ast$ and $R_K\approx2.1\,R_\ast$, unambiguously indicating that HD~35502's B star hosts a large centrifugally supported magnetosphere. \end{enumerate} Our analysis indicates that the `sn' classification appearing in HD~35502's historical B5IVsnp spectral type \citep{Abt1977} is most likely related to the presence of the sharp-lined binary companion along with the strong H emission. We therefore propose that this system be reclassified as a B5IVpe+A+A system. Stars hosting strongly emitting centrifugal magnetospheres can provide useful insights towards our understanding of both stellar winds and stellar magnetism. Therefore, it is important that these rare systems be studied in detail. While this particular class of magnetic stars is certainly growing, the number of confirmed examples is still insufficient to solve various outstanding issues. For instance, the inferred magnetospheric material densities of CM-hosting stars are largely inconsistent with currently predicted values \citep[e.g.][]{Rivinius2013,Townsend2013,Shultz2014}. Moreover, testing the validity of theoretical models describing the physical nature of these magnetospheres \citep[e.g. the Rigidly Rotating Magnetosphere model derived by][]{Townsend2005} requires detailed comparisons with a diversity of observations such as those recently carried out by \citet{Oksala2012,Oksala2015}. We conclude that the strong variability -- both in terms of $\langle B_z\rangle$ and the observed line variability -- makes HD~35502 a favourable subject for a magnetic Doppler imaging analysis \citep{Piskunov2002}. However, any attempt would require the three spectral components to be disentangled. Our orbital solution provides the necessary first step towards accomplishing this task. \section*{Acknowledgments} GAW acknowledges support in the form of a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada. SIA thanks the United States National Science Foundation for support on his differential photometric studies. We thank Dr. Jason Grunhut for the helpful discussion regarding the interpretation of the magnetospheric emission.
\section{Introduction} Bitcoin is a digital currency alternative to the legal ones, as any other crypto currency. Nowadays, Bitcoin is the most popular cryptocurrency. It was created by a cryptologist known as "Satoshi Nakamoto", whose real identity is still unknown \cite{Satoshi}. Like other cryptocurrencies, Bitcoin uses cryptographic techniques and, thanks to an open source system, anyone is allowed to inspect and even modify the source code of the Bitcoin software. The Bitcoin network is a peer-to-peer network that monitors and manages both the generation of new Bitcoins and the consistency verification of transactions in Bitcoins. This network is composed of a high number of computers connected to each other through the Internet. They perform complex cryptographic procedures which generate new Bitcoins (mining) and manage the Bitcoin transactions register, verifying their correctness and truthfulness. Mining is the process which allows to find the so called "proof of work" that validates a set of transactions and adds them to the massive and transparent ledger of every past Bitcoin transaction known as the "Blockchain". The generation of Bitcoins is the reward for the validation process of the transactions. The Blockchain was generated starting since January 3, 2009 by the inventor of the Bitcoin system himself, Satoshi Nakamoto. The first block is called "genesis block" and contains a single transaction, which generates 50-Bitcoin for the benefit of the creator of the block. The whole system is set up to yield just 21 million Bitcoins by 2040, and over time the process of mining will become less and less profitable. The main source of remuneration for the miners in the future will be the fees on transactions, and not the mining process itself. In this work, we propose an agent-based model with the aim to study and analyse the mining process and the Bitcoin market starting from September 1, 2010, the approximate date when miners started to buy mining hardware to mine Bitcoins. The proposed model simulates the mining process and the Bitcoin transactions, by implementing a mechanism for the formation of the Bitcoin price, and specific behaviors for each typology of trader. We try to reproduce the generation of Bitcoins, the main stylized facts present in the real Bitcoin market and the economy of the mining process. The model described is built on a previous work of the authors \cite{Cocco2014}, which modeled the Bitcoin market under a purely financial perspective. In this work, we fully consider also the economics of mining. The paper is organized as follows. In Section \textit{Related Work} we discuss other works related to this paper, in Section \textit{Mining Process} we describe briefly the mining process and we give an overview on the mining hardware and on its evolution over time. In Section \textit{The Model} we present the proposed model in detail. Section \textit{Simulation Results} presents the values given to several parameters of the model and reports the results of the simulations, including an analysis of Bitcoin real prices, and a robustness analysis. The conclusions of the paper are reported in Section \textit{Conclusions}. Finally, appendix deals with the calibration to some parameters of the model. \section{Related Work}\label{sec:2} The study and analysis of the cryptocurrency market is a relatively new field. 
In the last years, several papers appeared on this topic given its potential interest and the many issues related to it (see for instance the works \cite{Androulaki,Bergstra,Brezo,Eyal,Hanley,Hout,Moore,Singh}). However, very few works were made to model the cryptocurrencies market. We can cite the works by Luther \cite{Luther}, who studied why some cryptocurrencies failed to gain widespread acceptance using a simple agent model; by Bornholdt et al. \cite{Bornholdt}, who proposed a model based on Moran process to study the cryptocurrencies able to emerge; by Garcia et al. \cite{Garcia}, who studied the role of social interactions in the creation of price bubbles; by Kristoufek \cite{Kristoufek} who analysed the main drivers of the Bitcoin price; and by Kaminsky et al. \cite{Kaminsky} who related the Bitcoin market with its sentiment analysis on social networks. In this paper we propose a complex agent-based model in order to reproduce the economy of the mining process and the main stylized facts of the Bitcoin price series. Our model is inspired by business, economic and financial agent-based models that depict how organizations, or in general the economy of a country, create, deliver, and capture value. As regards the business models, Amini et al. \cite{Amini} presented a agent-based model with the aim to analyze the impact of alternative production and sales policies on the diffusion of a new product; Cocco et al. \cite{Cocco2013,Cocco2011,Cocco2014bis} proposed agent-based models to simulate the software market and analyze the business processes and policies adopted by proprietary software firms and Open Source software firms; Li et al. \cite{Li} researched the dominant players’ behavior in supply chains and the relationship between the selling prices and purchasing prices in supply chains by using a multi-agent simulation model; Rohitratana et al. \cite{Rohitratana} studied the pricing schemes of the market of the Software as-a-Service and on the market of the proprietary or traditional software; finally, Xiaoming et al. \cite{Xiaoming} studied how a firm maximizes its profit by determining the production and sales policies for a new product during the lifetime of the product. Concerning economic models, in \cite{EURACE} the authors presented one of the most significant agent-based model developed to date in order to study the European economy. In particular, they show how monetary policies, i.e, credit money supplied by commercial banks as loans to firms, influence the economy of a country. In \cite{Dosi2010,Dosi2013} agent based keynesian models are presented in order to investigate the properties of macro economic dynamics and the impact of public polices on supply, demand and the fundamentals of the economy, and to study the interactions between income distribution and monetary and fiscal policies. As regards artificial financial market models, they reproduce the real functioning of markets, trying to explain the main stylised facts observed in financial markets, such as the fat-tailed distribution of returns, the volatility clustering, and the unit-root property. For a review, see works \cite{Chakra} and \cite{Chen}. Raberto et al. \cite{Raberto2001} and Cincotti et al. \cite{Cincotti} proposed the Genoa Artificial Stock Market (GASM) an agent-based artificial financial market characterized by actual tracking of status and wealth of each agent, and by a realistic trading and price clearing mechanisms. 
GASM is able to reproduce some of the main stylised facts observed in real financial markets. This paper is built on GASM, adding specific features and a mix of zero-intelligence and trend-following traders with the aim to model the Bitcoin exchange market and the economics of mining. \section{Mining Process}\label{sec:3} Today, every few minutes thousands of people send and receive Bitcoins through the peer-to-peer electronic cash system created by Satoshi Nakamoto. All transactions are public and stored in a distributed database called Blockchain which is used to confirm transactions and prevent the double-spending problem. People who confirm transactions of Bitcoins and store them in the Blockchain are called "miners". As soon as new transactions are notified to the network, miners check their validity and authenticity and collect them in a block. Then, they take the information contained in the block of the transactions, which include a variable number called "nonce" and run the SHA-256 hashing algorithm on this block, turning the initial information into a sequence of 256 bits, known as Hash \cite{CourtoisGrajek}. There is no way of knowing how this sequence will look before calculating it, and the introduction of a minor change in the initial data causes a drastic change in the resulting Hash. The miners cannot change the data containing the information of transactions, but can change the "nonce" number used to create a different hash. The goal is to find a Hash having a given number of leading zero bits. This number can be varied to change the difficulty of the problem. The first miner who creates a proper Hash with success (he finds the "proof-of-work"), gets a reward in Bitcoins, and the successful Hash is stored with the block of the validated transactions in the Blockchain. In a nutshell, \begin{quote} \small{"Bitcoin miners make money when they find a 32-bit value which, when hashed together with the data from other transactions with a standard hash function gives a hash with a certain number of 60 or more zeros. This is an extremely rare event}", \cite{CourtoisGrajek}. \end{quote} The steps to run the network are the followings: \begin{quote} \small{" New transactions are broadcast to all nodes; each node collects new transactions into a block; each node works on finding a difficult proof-of-work for its block; when a node finds a proof-of-work, it broadcasts the block to all nodes; nodes accept the block only if all transactions in it are valid and not already spent; nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash"}, \cite{Satoshi}. \end{quote} Producing a single hash is computationally very easy, consequently in order to regulate the generation of Bitcoins, over time the Bitcoin protocol makes this task more and more difficult. The proof-of-work is implemented by incrementing the nonce in the block until a value is found that gives the block's hash with the required leading zero bits. If the hash does not match the required format, a new nonce is generated and the Hash calculation starts again \cite{Satoshi}. Countless attempts may be necessary before finding a nonce able to generate a correct Hash. The computational complexity of the process necessary to find the proof-of-work is adjusted over time in such a way that the number of blocks found each day is more or less constant (approximately 2016 blocks in two weeks, one every 10 minutes). 
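The nonce search at the heart of mining can be illustrated with a toy Python fragment, shown below. It is a didactic sketch only: real Bitcoin mining applies a double SHA-256 to an 80-byte block header and compares the result against the full 256-bit difficulty target, but the principle of repeatedly re-hashing the same block data with a new nonce is the same.

\begin{verbatim}
import hashlib

def mine(block_data: bytes, difficulty_bits: int, max_nonce: int = 2**32):
    # Toy proof-of-work: find a nonce such that SHA-256(block_data + nonce)
    # has at least `difficulty_bits` leading zero bits.
    target = 1 << (256 - difficulty_bits)   # hashes below this value qualify
    for nonce in range(max_nonce):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None, None

# Example: with a 20-bit requirement, on average ~2^20 (about one million)
# attempts are needed before a valid nonce is found.
# nonce, h = mine(b"toy transactions", 20)
\end{verbatim}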
In the beginning, each generated block corresponded to the creation of 50 Bitcoins, this number being halved each four years, after 210,000 blocks additions. So, the miners have a reward equal to 50 Bitcoins if the created blocks belong to the first 210,000 blocks of the Blockchain, 25 Bitcoins if the created blocks range from the 210,001th to the 420,000th block in the Blockchain, 12.5 Bitcoins if the created blocks range from the 420,001th to the 630,000th block in the Blockchain, and so on. Over time, mining Bitcoin is getting more and more complex, due to the increasing number of miners, and the increasing power of their hardware. We have witnessed the succession of four generations of hardware, i.e. CPU's, GPU's, FPGA's and ASIC's generation, each of them characterized by a specific hash rate (measured in H/sec) and power consumption. With time, the power and the price of the mining hardware has been steadly increasing, though the price of H/sec has been decreasing. To face the increasing costs, miners are pooling together to share resources. \subsection{The evolution of the mining hardware}\label{sec:3.1} In January 3, 2009, Satoshi Nakamoto created the first block of the Blockchain, called "Genesis Block", hashing on the central processing unit (CPU) of his computer. Like him, the early miners mined Bitcoin running the software on their personal computers. The CPU's era represents the first phase of the mining process, the other eras being GPU's, FPGA's and ASIC's eras (see web site https://tradeblock.com/blog/ \\the-evolution-of-mining/). Each era announces the use of a specific typology of mining hardware. In the second era, started about on September 2010, boards based on graphics processing unit (GPU) running in parallel entered the market, giving rise to the GPU era. About in December 2011, the FPGA's era started and hardware based on field programmable gate array cards (FPGA) specifically designed to mine Bitcoins was available in the market. Finally, in 2013 fully customized application specific integrated circuit (ASIC) appeared, substantially increasing the hashing capability of the Bitcoin network and marking the beginning of the fourth era. Over time, the different mining hardware available was characterized by an increasing hash rate, a decreasing power consumption per hash, and increasing costs. For example, NVIDIA Quadro NVS 3100M, 16 cores, belonging to the GPU generation, has a hash rate equal to 3.6 MH/s and a power consumption equal to 14 W \cite{Courtois}; ModMiner Quad, belonging to the FPGA generation, has a hash rate equal to 800 MH/s and power consumption equal to 40 W \cite{Courtois}; Monarch(300), belonging to the ASIC generation, has a hash rate equal to 300 GH/s and power consumption equal to 175 W (see web site https://tradeblock.com/mining/. \subsection{Modeling the Mining Hardware Performances}\label{sec:3.2} The goal of our work is to model the economy of the mining process, so we neglected the first era, when Bitcoins had no monetary value, and miners used the power available on their PCs, at almost no cost. We simulated only the remaining three generations of mining hardware. We gathered information about the products that entered the market in each era to model these three generations of hardware, in particular with the aim to compute: \begin{itemize} \item the average hash rate per US\$ spent on hardware, $R(t)$, expressed in $\frac{H}{sec * \$}$; \item the average power consumption per $H/sec$, $P(t)$, expressed in $\frac{W}{H/sec}$. 
\end{itemize} The average hash rate and the average power consumption were computed by averaging the real market data at specific times and constructing two fitting curves. To calculate the hash rate and the power consumption of the mining hardware of the GPU era, which we estimate as ranging from September 1st, 2010 to September 29th, 2011, we computed an average for $R$ and $P$, taking into account some representative products in the market during that period and neglecting the costs of the motherboard. In that era, motherboards with more than one Peripheral Component Interconnect Express (PCIe) slot started to enter the market, allowing miners to install, by using adapters, multiple video cards in a single system and to mine cryptocurrency thanks to the power of the GPUs. In Table \ref{tab:GPU}, we describe the features of some GPUs in the market in that period. The data reported are taken from the web site http://coinpolice.com/gpu/. \begin{table} \caption{\textit{GPU Mining Hardware}.\label{tab:GPU}} \begin{tabular}{|l|l|l|l|} \hline Date&Product&Hash Rate GH/\$&Consumption W/GH\\ \hline \multirow{4}{*}{23/09/2009} & Radeon 5830 & 0.001475 & 593.22\\ &Radeon 5850 & 0.0015 & 398.94\\ &Radeon 5870 & 0.0015 & 467.66\\ &Radeon 5970 & 0.0023 & 392\\ \hline \multirow{3}{*}{22/10/2010} & Radeon 6870 & 0.0015 & 503.33\\ &Radeon 6950 & 0.002 & 500\\ &Radeon 6990 & 0.0018 & 328.95\\ \hline \end{tabular} \end{table} As regards the FPGA and ASIC eras, starting around September 2011 and December 2013, respectively, we tracked the history of the mining hardware by following the introduction into the market of the Butterfly Labs company's products. We extracted the data illustrated in Table \ref{prodotti} from the history of the web site \textit{http://www.butterflylabs.com/} through the web site \textit{web.archive.org}. For hardware in the market in 2014 and 2015, we referred to the Bitmain Technologies Ltd company and, in particular, to the mining hardware called AntMiner (see web site https://bitmaintech.com and Table \ref{prodotti}).
\begin{table} \caption{\textit{Butterfly Labs Mining Hardware}: FPGA Hardware from 09/29/2011 to 12/17/2012, ASIC Hardware from 12/17/2012 to December 2013 and AntMiner Hardware for 2014 and 2015.\label{prodotti}} \scalebox{0.63}{ \begin{tabular}{|l|l|l|l|l|l|} \hline Date&Product &Price \$&Hash Rate GH/s&Hash Rate $\frac{GH}{sec*\$}$&Power Consumption $\frac{W}{GH/sec}$\\ \hline 09/29/2011- 12/2/2011&The Single&699&1&0.0014&19.8\\ \hline \multirow{2}{*}{12/2/2011- 12/28/2011}&The Single&699&1&0.0014&19.8\\ &Rig Box&24980&50.4&0.0021&49\\ \hline \multirow{2}{*}{12/28/2011- 05/1/2012}&The Single&599&0.832&0.0014&96.15\\ &Rig Box&24980&50.4&0.0021&49\\ \hline \multirow{2}{*}{05/1/2012- 12/17/2012}&The Single&599&0.832&0.0014&96.15\\ &Mini Rig &15295&25.2&0.0016&49\\ \hline \multirow{2}{*}{12/17/2012- 04/10/2013 }&BitForce Jalapeno&149&4.5&0.0302&1\\ &BitForce Little Single SC&649&30&0.0462&1\\ &BitForce Single SC&1299&60&0.0462&1\\ &BitForce Mini Rig SC&29899&1500&0.0502&1\\ \hline \multirow{2}{*}{04/10/2013- 05/31/2013 }&Bitcoin Miner&274&5&0.0182&6\\ &Bitcoin Miner&1249&25&0.02&6\\ &Bitcoin Miner&2499&50&0.02&6\\ \hline \multirow{2}{*}{ 05/31/2013- 10/15/2013 }&Bitcoin Miner&274&5&0.0182&6\\ &Bitcoin Miner&1249&25&0.02&6\\ &Bitcoin Miner&2499&50&0.02&6\\ &Bitcoin Miner&22484&500&0.0222&6\\ \hline \multirow{2}{*}{ 10/15/2013- 12/10/2013}&Bitcoin Miner&274&5&0.0182&6\\ &Bitcoin Miner&2499&50&0.02&6\\ &Bitcoin Miner&22484&500&0.0222&6\\ &Bitcoin Minin Card&2800&300&0.1071&0.6\\ &Bitcoin Minin Card&4680&600&0.1282&0.6\\ \hline 12/10/2013- 01/22/2014&AntminerS1&734.18&180&0.245&2\\ \hline 01/22/2014- 07/4/2014&AntminerS2&1715&1000&0.583&1.1\\ \hline 07/4/2014- 10/23/2014&AntminerS4-B2&1250&2000&1.6&0.69\\ \hline 10/23/2014- 03/25/2015&AntminerS5-B5&419&1155&2.756&0.51\\ \hline 03/25/2015-30/09/2015&AntminerS7-B8&454&4730&10.42&0.27\\ \hline \end{tabular}}} \end{table} Starting from the mining products in each period (see Tables \ref{tab:GPU} and \ref{prodotti}), we fitted a "best hash rate per \$" and a "best power consumption function" (see Table \ref{average}). We call the fitting curves $R(t)$ and $P(t)$, respectively. \begin{table \caption{\textit{Average of Hash Rate and of Power Consumption over time. }\label{average}} \scalebox{0.82}{ \begin{tabular}{|l|l|l|} \hline Date $\Rightarrow$ Simulation Step&Average of Hash Rate $\frac{GH}{sec*\$}$&Average of power Consumption $\frac{W}{GH/sec}$\\ \hline September 1, 2010 $\Rightarrow$ 1&0.0017&454.87\\ \hline September 29, 2011 $\Rightarrow$ 394&0.0014&19.8\\ \hline December 2,2011 $\Rightarrow$ 458&$0.00175$&34.4\\ \hline December 28,2011 $\Rightarrow$ 484&$0.0017$&72.575\\ \hline May 1, 2012 $\Rightarrow$ 608&$0.0029$&72.575\\ \hline December 17, 2012 $\Rightarrow$ 835&0.03565&1\\ \hline April 10, 2013 $\Rightarrow$ 953&0.0194&6\\ \hline May 31, 2013 $\Rightarrow$ 1004&0.0201&6\\ \hline October 15, 2013 $\Rightarrow$ 1141&0.1351&3.84\\ \hline December 10, 2013 $\Rightarrow$ 1197&0.0595&3.84\\ \hline January 22, 2014 $\Rightarrow$ 1240&0.245&2\\ \hline July 4, 2014 $\Rightarrow$ 1403&0.583&1.1\\ \hline October 23, 2014 $\Rightarrow$ 1484&1.6&0.69\\ \hline March 25, 2015 $\Rightarrow$ 1667&2.756&0.51\\ \hline September 30, 2015 $\Rightarrow$ 1856&10.42&0.27\\ \hline \end{tabular}}} \end{table} We used a general exponential model to fit the curve of the hash rate, $R(t)$ obtained by using eq. \ref{R}: \begin{equation}\label{R} R(t) = a*e^{(b*t)} \end{equation} where $a= 8.635*10^4$ and $b=0.006318$. 
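As an illustration of how such a trend can be obtained, the fragment below fits an exponential to a subset of the tabulated averages of Table~\ref{average}. The exact fitting procedure behind the quoted $a$ and $b$ is not specified beyond ``general exponential model'', so this sketch (a log-linear initial guess refined by nonlinear least squares) should be read as one reasonable choice, not as the authors' code.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# A few (simulation step, average hash rate) pairs from the table of
# averages, with the hash rate converted from GH/(sec*$) to H/(sec*$).
steps = np.array([1, 394, 835, 1141, 1403, 1667, 1856], dtype=float)
rate = np.array([0.0017, 0.0014, 0.03565, 0.1351, 0.583, 2.756, 10.42]) * 1e9

def model(t, a, b):
    return a * np.exp(b * t)

# A straight-line fit in log space gives a starting guess, which curve_fit
# then refines on the raw scale (dominated by the latest, largest values).
b0, ln_a0 = np.polyfit(steps, np.log(rate), 1)
(a, b), _ = curve_fit(model, steps, rate, p0=(np.exp(ln_a0), b0))
\end{verbatim}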
The fitting curve of the power consumption $P(t)$ is also a general exponential model: \begin{equation}\label{P} P(t) = a*e^{(b*t)} \end{equation} where $a= 4.649*10^{-7}$ and $b=-0.004055$. Fig. \ref{fig:1} (a) and (b) show in logarithmic scale the fitting curves and how the hash rate increases over time, whereas the power consumption decreases. \begin{figure}[!ht] \centering \subfigure[]{ \includegraphics[width=0.45\textwidth]{hashRateFittingCurve-eps-converted-to}} \hspace{7mm} \subfigure[]{ \includegraphics[width=0.45\textwidth]{consumptionFittingCurve-eps-converted-to}} \caption{(a) Fitting curve of R(t) and (b) fitting curve of P(t). \label{fig:1}} \end{figure} \section{The Model}\label{sec:4} We used \textit{Blockchain.info}, a web site which displays detailed information about all transactions and Bitcoin blocks, providing graphs and statistics on different data, for extracting the empirical data used in this work. This web site provides several graphs and statistical analyses of data about Bitcoins. In particular, we can observe the time trend of the Bitcoin price in the market, the total number of Bitcoins, the total hash rate of the Bitcoin network and the total number of Bitcoin transactions. The proposed model presents an agent-based artificial cryptocurrency market in which agents mine, buy or sell Bitcoins. We modeled the Bitcoin market starting from September 1st, 2010, because one of our goals is to study the economy of the mining process. It was only around this date that miners started to buy mining hardware to mine Bitcoins, denoting a business interest in mining. Previously, they typically just used the power available on their personal computers. The features of the model are: \begin{itemize} \item there are various kinds of agents active on the BTC market: Miners, Random traders and Chartists; \item the trading mechanism is based on a realistic order book that keeps sorted lists of buy and sell orders, and matches them, fulfilling compatible orders and setting the price; \item agents typically have limited financial resources, initially distributed following a power law; \item the number of agents engaged in trading at each moment is a fraction of the total number of agents; \item a number of new traders, endowed only with cash, enter the market; they represent people who decided to start trading or mining Bitcoins; \item miners belong to mining pools. This means that at each time $t$ they always have a positive probability of mining at least a fraction of a Bitcoin. Indeed, since 2010 miners have been pooling together to share resources, in order to avoid duplicating effort and to mine Bitcoins more effectively. A consequence of this fact is that gains are smoothly distributed among miners. On July 18th, 2010, \begin{quote} \small{"ArtForz establishes an OpenGL GPU hash farm and generates his first Bitcoin block"} \end{quote} and on September 18th, 2010, \begin{quote} \small{"Bitcoin Pooled Mining (operated by slush), a method by which several users work collectively to mine Bitcoins and share in the benefits, mines its first block"}, \end{quote} (news from the web site http://historyofBitcoin.org/). \end{itemize} Since then, the difficulty of the problem of mining has increased exponentially, and nowadays it would be almost unthinkable to mine without participating in a pool. In the next subsections we describe in detail the model simulating the mining, the Bitcoin market and the related mechanism of Bitcoin price formation.
\subsection{The Agents}\label{sec:4.1}
Agents, or traders, are divided into three populations: Miners, Random traders and Chartists. Every $i$-th trader enters the market at a given time step, $t_i^E$. Such a trader can be either a Miner, a Random trader or a Chartist. All traders present in the market at the initial time $t_i^E=0$ hold an amount $c_i(0)$ of fiat currency (cash, in US dollars) and an amount $b_i(0)$ of cryptocurrency (Bitcoins), where $i$ is the trader's index. They represent the persons present in the market, mining and trading Bitcoins, before the period considered in the simulation. Each $i$-th trader entering the market at $t_i^E>0$ holds only an amount $c_i(t_i^E)$ of fiat currency (cash, in dollars). These traders represent people interested in entering the market and investing their money in it.

The wealth distribution of traders follows a Zipf law \cite{Levy}. The set of all traders entering the market at time $t_i^E>0$ is generated before the beginning of the simulation, with fiat cash following a Pareto distribution; traders are then randomly extracted from this set when a given number of them must enter the market at a given time step. Also, the wealth distribution in crypto cash of the traders in the market at the initial time follows a Zipf law. Indeed, the wealth share in the world of Bitcoin is even more unevenly distributed than in the world at large (see the web site http://www.cryptocoinsnews.com/owns-Bitcoins-infographic-wealth-distribution/). More details on the trader wealth endowment are illustrated in the Appendix.

\paragraph{Miners}\label{sec:4.1.1}
\textit{Miners} are in the Bitcoin market aiming to generate wealth by gaining Bitcoins. At the initial time, the simulated Bitcoin network is calibrated following Satoshi's original idea of a Bitcoin network in which each node participates equally in the process of checking and validating transactions, and in mining. We assumed that miners in the market at the initial time ($t_i^E=0$) own a personal PC such as a Core i5 2600K, and hence they are initially endowed with a hashing capability $r_i(0)$ equal to 0.0173~GH/sec, which implies a power consumption equal to 75~W \cite{Courtois}. Core i5 is a brand name of a series of x64 microprocessors developed by Intel and brought to market in 2009. Miners entering the market at time $t_i^E>0$ acquire mining hardware, and hence a hashing capability $r_i(t)$, which implies a specific electricity cost $e_i(t)$, by investing a fraction $\gamma_{1,i}(t)$ of their fiat cash $c_i(t)$. In addition, over time all miners can improve their hashing capability by buying new mining hardware, investing both their fiat and crypto cash. Consequently, the total hashing capability of the $i$-th trader at time $t$, $r_{i}(t)$, expressed in $[H/sec]$, and the total electricity cost $e_{i}(t)$, expressed in \$ per day, associated with her mining hardware units, are defined respectively as:
\begin{equation}
r_{i}(t) = \sum_{s=t_i^E}^{t}r_{i,u}(s)
\end{equation}
and
\begin{equation}
e_i(t)= \sum_{s=t_i^E}^{t} \epsilon\, P(s)\, r_{i,u}(s)\, 24
\end{equation}
where:
\begin{equation}
r_{i,u}(t=t_i^E>0) = \gamma_{1,i}(t)\, c_i(t)\, R(t)
\end{equation}
\begin{equation}
r_{i,u}(t>t_i^E) = [\gamma_{1,i}(t)\, c_i(t)+ \gamma_i(t)\, b_i(t)\, p(t)]\,R(t)
\end{equation}
\begin{itemize}
\item $R(t)$ and $P(t)$ are, respectively, the hash rate which can be bought with one US\$, expressed in $\frac{H}{sec*\$}$, and the power consumption, expressed in $\frac{W}{H/sec}$.
At each time $t$, their values are given by the fitting curves described in subsection \textit{Modeling the Mining Hardware Performances};
\item $r_{i,u}(t)$ is the hashing capability of the hardware unit $u$ bought at time $t$ by the $i$-th miner;
\item $\gamma_i=0$ and $\gamma_{1,i}=0$ if no hardware is bought by the $i$-th trader at time $t$. When a trader decides to buy new hardware, $\gamma_{1,i}$ represents the percentage of the miner's cash devoted to buying it. It is a random variable characterized by a lognormal distribution with average 0.15 and standard deviation 0.15. $\gamma_{i}$ represents the percentage of the miner's Bitcoins to be sold for buying the new hardware. It is a random variable characterized by a lognormal distribution with average 0.175 and standard deviation 0.075. The term $\gamma_{1,i}(t) c_i(t)+ \gamma_i(t) b_i(t) p(t)$ expresses the amount of personal wealth that the miner wishes to devote to buying new mining hardware, meaning that on average the miner will devote 15\% of her cash and 17.5\% of her Bitcoins to this purpose. If $\gamma_i>1$ or $\gamma_{1,i}>1$, they are set equal to one;
\item $\epsilon$ is the fiat price per Watt and per hour. It is assumed equal to $1.4 \times 10^{-4}$~\$, considering a cost of 0.14~\$ per kWh, which we assumed to be constant throughout the simulation. This electricity price is computed by averaging the electricity prices in the countries where most Bitcoin nodes are located; see the web sites \textit{https://getaddr.bitnodes.io} and \textit{http://en.wikipedia.org/wiki/Electricity\_pricing}.
\end{itemize}
The decision whether to buy new hardware is taken by every miner from time to time, on average every two months (60 days). If the $i$-th miner decides at time $t$ whether to buy new hardware and/or to divest the old hardware units, the next time she will decide again, $t^{I-D}_i(t)$, is given by eq. \ref{ID}:
\begin{equation}\label{ID}
t^{I-D}_i(t)=t+ int(60+ N(\mu^{id},\sigma^{id}))
\end{equation}
where $int$ rounds to the nearest integer and $N(\mu^{id},\sigma^{id})$ is a normal distribution with average $\mu^{id}=0$ and standard deviation $\sigma^{id}= 6$. $t^{I-D}_i(t)$ is updated each time the miner takes her decision. Miners active in the simulation since the beginning take their first decision within 60 days, at random times uniformly distributed. Miners entering the simulation at time $t>1$ take this decision immediately.

In more detail, at time $t=t^{I-D}_i(t)$ every miner can decide to buy new hardware units, if her fiat cash is positive, and/or to divest the old hardware units. If the trader's cash is zero, she issues a sell market order to get the cash needed to support her electricity expenses, $c_{i,a}(t)=\gamma_i(t) b_i(t) p(t)$. Each miner belongs to a pool, and consequently at each time $t$ she always has a non-zero probability of mining at least some sub-units of a Bitcoin. This probability is inversely proportional to the hashing capability of the whole network.
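The bookkeeping implied by the equations above can be illustrated by the following minimal sketch (ours, in Python; it is not the original implementation, and the class and attribute names are our own). In the model, the fractions \texttt{gamma1} and \texttt{gamma} are drawn from the lognormal distributions described above; here they are passed as plain arguments, and cash and Bitcoin balances are left untouched for brevity.
\begin{verbatim}
import math

def hash_rate_per_dollar(t):   # fitted R(t) of eq. (R), in H/(s*$)
    return 8.635e4 * math.exp(0.006318 * t)

def power_per_hash(t):         # fitted P(t) of eq. (P), in W/(H/s)
    return 4.649e-7 * math.exp(-0.004055 * t)

EPSILON = 1.4e-4               # epsilon: fiat price in $ per Watt and per hour

class Miner:
    def __init__(self, cash, bitcoins):
        self.cash = cash           # fiat cash c_i(t), in $
        self.bitcoins = bitcoins   # crypto cash b_i(t), in BTC
        self.units = []            # (hash rate in H/s, power in W) per unit

    def buy_hardware(self, t, gamma1, gamma, price):
        """Buy a unit investing gamma1 of cash and gamma of Bitcoins (r_{i,u})."""
        budget = gamma1 * self.cash + gamma * self.bitcoins * price   # in $
        r_new = budget * hash_rate_per_dollar(t)                      # H/s bought
        self.units.append((r_new, r_new * power_per_hash(t)))

    def hash_rate(self):
        """Total hashing capability r_i(t), in H/s."""
        return sum(r for r, _ in self.units)

    def electricity_cost_per_day(self):
        """Daily electricity cost e_i(t), in $."""
        return sum(EPSILON * watt * 24 for _, watt in self.units)
\end{verbatim}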
Knowing the number of blocks discovered per day, and consequently the number of new Bitcoins $B$ to be mined per day, the number of Bitcoins $b_{i}$ mined by the $i$-th miner per day can be defined as follows:
\begin{equation}
b_{i}(t)= \frac{r_{i}(t) }{ r_{Tot}(t)}B(t)
\end{equation}
where:
\begin{itemize}
\item $r_{Tot}(t)$ is the hashing capability of the whole population of $N_m$ miners at time $t$, defined as the sum of the hashing capabilities of all miners at time $t$, $\sum_{i=1}^{N_m} r_{i}(t)$;
\item the ratio $\frac{r_{i}(t) }{ r_{Tot}(t)}$ defines the relative hash rate of the $i$-th miner at time $t$.
\end{itemize}
Note that, as already described in section \textit{Mining Process}, the parameter $B$ decreases over time. At first, each generated block corresponds to the creation of 50 Bitcoins, but after four years this number is halved. So, until November 27, 2012, 100,800 Bitcoins were mined every 14 days (7200 Bitcoins per day), and then 50,400 Bitcoins every 14 days (3600 per day).

The decision of a miner to buy and/or divest a hardware unit $u$ depends on the Bitcoins potentially obtained by mining with that hardware. A miner buys new hardware units if the daily cost, given by the expense in electricity $e_{i,u}(t)$ associated with these units, is smaller than the gain expected in Bitcoins. Hence, before buying new hardware units the following constraint has to be evaluated:
\begin{equation}\label{ConstraintToBuy}
e_{i,u}(t) < b_{i,u}(t)\, p(t)
\end{equation}
where:
\begin{itemize}
\item $b_{i,u}$ are the Bitcoins potentially mined by unit $u$ at time $t$: $b_{i,u}(t)= \frac{r_{i,u}(t) }{ r_{Tot}(t)}B(t)$;
\item $p(t)$ is the Bitcoin price at time $t$.
\end{itemize}
Only if this constraint is satisfied can the miner buy new hardware. In this case, she issues a market order acquiring an amount of fiat cash $c_{i,a}(t)=\gamma_i(t) b_i(t) p(t)$ in the next time steps. She invests 50\% of this amount to buy new hardware and keeps the remaining 50\% as cash, to pay the electricity bill for her hardware. If the constraint in eq. \ref{ConstraintToBuy} is not satisfied, the miner nevertheless issues a market order equal to $c_{i,a}(t)=\frac{\gamma_i(t) b_i(t) p(t)}{2}$ to support her electricity expenses.

A miner divests an old hardware unit if the expense in electricity associated with that unit is more than 20\% higher than the gain expected in Bitcoins using that hardware, at the current price. Therefore, for each hardware unit bought at time $k$, with $k$ going from 0 to the current time $t$, the following constraint has to be satisfied, otherwise the unit is divested:
\begin{equation}\label{ConstraintToDivest}
e_{i,u}(k) \le 1.2\, \frac{r_{i,u}(k) }{ r_{Tot}(t)}\, B(t)\, p(t)
\end{equation}
The model also includes a mechanism that enables 10\% of miners to invest and/or divest their hardware also at a time $t \ne t^{I-D}_i(t)$. This mechanism is triggered when the price relative variation, in a time window $\tau^M$ equal to 15 days, is positive and higher than a threshold $Th^M$ equal to 0.016. This is because, in the real market, the investments of miners grow when the profitability of the mining activity increases. So, increasing the interest of miners in buying new hardware in these periods is a plausible assumption.

\paragraph{Random Traders}\label{sec:4.1.2}
\textit{Random Traders} represent persons who enter the cryptocurrency market for various reasons, but not for speculative purposes. They issue orders for reasons linked to their needs; for instance, they invest in Bitcoins to diversify their portfolio, or they disinvest to satisfy a need for cash.
They issue orders in a random way, compatibly with their available resources. In particular, buy and sell orders are always issued with the same probability. The specifics of their behaviour are described in section \textit{Buy and Sell Orders}.

\paragraph{Chartists}
\textit{Chartists} represent speculators, who aim to gain by placing orders in the Bitcoin market. They speculate that, if prices are rising, they will keep rising, and if prices are falling, they will keep falling. In particular, the $i$-th chartist issues a buy order when the price relative variation in a time window $\tau_i^C$ is higher than a threshold $Th^C=0.01$, and issues a sell order if this variation is lower than $-Th^C$. $\tau_i^C$ is specific for each chartist, and is characterized by a normal distribution with average equal to 20 and standard deviation equal to 1. Chartists usually issue buy orders when the price is increasing and sell orders when the price is decreasing. However, 10\% of Chartists decide, instead, to adopt a contrarian strategy, and place a sell order instead of a buy order, or vice-versa. This contrarian behaviour is common in financial markets, and is typically modelled also in market models \cite{Raberto2003}. Note that a Chartist will issue an order only when the magnitude of the price variation is above the given threshold. So, in practice, the extent of Chartist activity varies over time. In general, the modelled Chartists' behaviour is key to producing large price variations, and to reproducing the basic statistical properties of the real returns.

All Random traders and Chartists entering the market at $t=t^E > 0$ issue a buy order to acquire their initial Bitcoins. Over time, at time $t>t^E$, only a fraction of Random traders and Chartists is active, and hence enabled to issue orders. Active traders can issue only one order per time step, which can be a sell order or a buy order. Orders already placed but not yet satisfied or withdrawn are accounted for when determining the amount of Bitcoins a trader can buy or sell. Details on the percentage of active traders, the number of traders in the market and the probability of each trader belonging to a specific population are given in the Appendix.

\subsection{Buy and Sell Orders}\label{sec:4.2}
The Bitcoin market is modelled as a steady inflow of buy and sell orders, placed by the traders as described in \cite{Cocco2014}. Both buy and sell orders are expressed in Bitcoins, that is, they refer to a given amount of Bitcoins to buy or sell. In deeper detail, all orders have the following features:
\begin{itemize}
\item amount, expressed in \$ for a buy order and in Bitcoins for a sell order; the latter amount is a real number, because Bitcoins can be bought and sold in fractions as small as a ``Satoshi'';
\item residual amount (Bitcoins or \$): used when an order is only partially satisfied by previous transactions;
\item limit price (see below), which in turn can be a real number;
\item time when the order was issued;
\item expiration time: if the order is not (fully) satisfied, it is removed from the book at this time.
\end{itemize}
The amount of each buy order depends on the amount of cash, $c_i(t)$, owned by the $i$-th trader at time $t$, less the cash already committed to other pending buy orders still in the book. Let us call $c^b_i$ the available cash. The number of Bitcoins to buy, $b_a$, is given by eq.~\ref{eq-buy}:
\begin{equation}\label{eq-buy}
b_a = \frac{c^b_i\, \beta}{ p(t)}
\end{equation}
where $p(t)$ is the current price and $\beta$ is a random variable drawn from a lognormal distribution with average and standard deviation equal to $0.25$ and $0.2$, respectively, for Random traders, and equal to $0.4$ and $0.2$, respectively, for Chartists. In the unlikely case that $\beta > 1$, $\beta$ is set equal to 1. Similarly, the amount of each sell order depends on the number of Bitcoins, $b_i(t)$, owned by the $i$-th trader at time $t$, less the Bitcoins already committed to other pending sell orders still in the book, overall called $b^s_i$. The number of Bitcoins to sell, $s_a$, is given by
\begin{equation}\label{eq-sell}
s_a = b^s_i\, \beta
\end{equation}
where $\beta$ is a lognormal random variable as above. Short selling is not allowed.

The limit price models the price at which a trader desires to conclude her transaction. An order can also be issued with no limit (market order), meaning that its originator wishes to perform the trade at the best price she can find. In this case, the limit price is set to zero. The probability of placing a market order, $P_{lim}$, is set at the beginning of the simulation and is equal to 1 for Miners, to 0.2 for Random Traders and to 0.7 for Chartists. This is because, unlike Random Traders, Miners and Chartists who issue orders wish to perform the trade at the best available price: the former because they need cash, the latter to be able to gain by following the price trend.

Let us suppose that the $i$-th trader issues a limit order to buy $a_i^b(t)$ Bitcoins at time $t$. Each buy order can be executed if the trading price is lower than, or equal to, its buy limit price $b_{i}$. In the case of a sell order of $a_i^s(t)$ Bitcoins, it can be executed if the trading price is higher than, or equal to, its sell limit price $s_{i}$. As said above, if the limit prices $b_{i}=0$ or $s_{i}=0$, then the orders can always be executed, provided there is a pending complementary order. The buy and sell limit prices, $b_{i}$ and $s_{i}$, are given respectively by the following equations:
\begin{equation}\label{buyLimit}
b_{i}(t)=p(t)\, N_i(\mu,\sigma_i)
\end{equation}
\begin{equation}\label{sellLimit}
s_{i}(t)=\frac{p(t)}{N_i(\mu,\sigma_i)}
\end{equation}
where
\begin{itemize}
\item $p(t)$ is the current Bitcoin price;
\item $N_i(\mu,\sigma_i)$ is a random draw from a Gaussian distribution with average $\mu \simeq 1$ and standard deviation $\sigma_i \ll 1$.
\end{itemize}
The limit prices have a random component, modelling the different perception of the Bitcoin value, that is, the fact that what traders ``feel'' is the right price to buy or to sell is not constant, and may vary for each single order. In the case of buy orders, we stipulate that a trader wishing to buy must offer a price that is, on average, slightly higher than the market price. The value of $\sigma_i$ is proportional to the ``volatility'' $\sigma(T_i)$ of the price $p(t)$ through the equation $\sigma_i=K\sigma(T_i)$, where $K$ is a constant and $\sigma(T_i)$ is the standard deviation of price absolute returns, calculated in the time window $T_i$. $\sigma_i$ is constrained between a minimum value $\sigma_{min}$ and a maximum value $\sigma_{max}$ (this is an approach similar to that of \cite{Raberto2001}). For buy orders $\mu=1.05$, $K=2.5$, $\sigma_{min}=0.01$ and $\sigma_{max}=0.003$. In the case of sell orders, the reasoning is dual.
For symmetry, the limit price is divided by a random draw from the same Gaussian distribution $N_i(\mu,\sigma_i)$.

An expiration time is associated with each order. For Random Traders, the value of the expiration time is equal to the current time plus a number of days (time steps) drawn from a lognormal distribution with average and standard deviation equal to 3 and 1 days, respectively. In this way, most orders will expire within 4 days of being posted. Chartists, who act in a more dynamic way to follow the market trend, post orders whose expiration time is at the end of the same trading day. Miners issue market orders, so the value of their expiration time is set to infinity.

\subsection{Price Clearing Mechanism}\label{sec:4.3}
We implement the price clearing mechanism by using an Order Book similar to that presented in \cite{Raberto2005}. At every time step, the order book holds the list of all the orders received and still to be executed. Buy orders are sorted in descending order with respect to the limit price $b_{i}$, and sell orders are sorted in ascending order with respect to the limit price $s_{j}$; in both cases, orders with the same limit price are sorted in ascending order with respect to the order issue time. At each simulation step, various new orders are inserted into the respective lists. As soon as a new order enters the book, the first buy order and the first sell order of the lists are inspected to verify if they match. If they match, a transaction occurs. The order with the smaller residual amount is fully executed, whereas the order with the larger amount is only partially executed, and remains at the head of the list, with its residual amount reduced by the amount of the matching order. Clearly, if both orders have the same residual amount, they are both fully executed. After the transaction, the next pair of orders at the head of the lists are checked for matching. If they match, they are executed, and so on until they do not match anymore. Hence, before the book can accept new orders, all the matching orders are satisfied.

A sell order of index $j$ matches a buy order of index $i$, and vice versa, only if $s_{j} \le b_{i}$, or if one of the two limit prices, or both, are equal to zero. As regards the price $p_T$ at which the transaction is performed, the price formation mechanism follows the rules described below, summarized in the sketch at the end of this subsection. Here, $p(t)$ denotes the current price:
\begin{itemize}
\item when one of the two orders has limit price equal to zero:
\begin{itemize}
\item if $b_{i} > 0$, then $p_T=min(b_{i}, p(t))$,
\item if $s_{j} > 0$, then $p_T=max(s_{j}, p(t))$,
\end{itemize}
\item when both orders have limit price equal to zero, $p_T = p(t)$;
\item when both orders have limit price higher than zero, $p_T = \frac{b_{i}+s_{j}}{2}$.
\end{itemize}
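The matching and price-formation rules just listed can be condensed into the following minimal sketch (ours, in Python, not the original implementation; the function names are our own):
\begin{verbatim}
def orders_match(buy_limit, sell_limit):
    """A buy/sell pair matches if s_j <= b_i, or if either is a market order."""
    return buy_limit == 0 or sell_limit == 0 or sell_limit <= buy_limit

def transaction_price(buy_limit, sell_limit, current_price):
    """Price p_T at which a matching pair of orders is executed."""
    if buy_limit == 0 and sell_limit == 0:
        return current_price                 # two market orders
    if sell_limit == 0:                      # only the buy order has a limit
        return min(buy_limit, current_price)
    if buy_limit == 0:                       # only the sell order has a limit
        return max(sell_limit, current_price)
    return 0.5 * (buy_limit + sell_limit)    # both limit prices are positive
\end{verbatim}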
\section{Simulation Results}\label{sec:6}
The model described in the previous section was implemented in the Smalltalk language. Before the simulation, it had to be calibrated in order to reproduce the real stylized facts and the mining process in the Bitcoin market in the period between September 1st, 2010 and September 30, 2015. The simulation period was thus set to 1856 steps, one simulation step corresponding to one day. We also included weekends and holidays, because the Bitcoin market is, by its very nature, accessible and working every day.

We set the initial values of several key parameters of the model by using data recovered from the Blockchain.info web site. The main assumption we made is to size the artificial market at about 1/100 of the real market, to be able to manage the computational load of the simulation. Table \ref{tab:initial} shows the parameter values and the assumptions behind them in detail. Other details about the calibration of the model are given in the Appendix; specifically, the calibration of the trader wealth endowment, the number of active traders, the total number of traders in the market and the probability of a trader belonging to a specific population are described there in detail.
\begin{table}
\caption{Values of simulation parameters and the assumptions behind them. \label{tab:initial}}
\begin{tabular}{ccp{8cm}}
\hline\noalign{\smallskip}
Param.&Initial Value&Description and discussion\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$N_t(0)$&160&Number of initial traders. Obtained by dividing by 100 the number of traders on September 1st, 2010, estimated through the fitting curve shown in eq. \ref{N_T} (see Appendix).\\
\hline
$N_t(T)$&39,649&Total number of traders at the end of the simulation. Obtained by dividing by 100 the number of traders on September 30, 2015, estimated through the fitting curve shown in eq. \ref{N_T}.\\
\hline
$B$&72 or 36&Bitcoins mined per day. Obtained by dividing by 100 the Bitcoins which are mined every day. They are 72 until the 853rd simulation step (November 27th, 2012), and 36 from the 853rd simulation step onwards.\\
\hline
$p(0)$&0.0649 \$&Initial price. The average price as of September 2010.\\
\hline
$B_T(0)$&23,274 BTC&Total initial crypto cash. Obtained by dividing by 100 the number of Bitcoins on September 1st, 2010 and keeping just 60\% of this value, because we assume that 40\% of Bitcoins are not available for trade.\\
\hline
$q$&200,000 \$&Constant used in Zipf's law ($\frac{q}{i^{0.6}}$), used to assign the initial cash of traders entering at $t>1$.\\
\hline
$c^s_1$&20,587 \$&Initial cash of the richest trader entering the simulation at $t=1$.\\
\hline
$b^s_1$&4,117 BTC&Initial Bitcoin cash of the richest trader entering the simulation at $t=1$.\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}

The model was run to study the main features which characterize the Bitcoin market and the traders who operate in it. In order to assess the robustness of our model and the validity of our statistical analysis, we repeated 100 simulations with the same initial conditions, but with different seeds of the random number generator. The results of all simulations were consistent, as shown in the following.

\subsection{Bitcoin prices in the real and simulated market}\label{sec:6.1}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{realPrice-eps-converted-to}
\caption{Price of Bitcoins in US\$.}
\label{fig:realPrice}
\end{figure}
We started by studying the real Bitcoin price series between September 1st, 2010 and September 30, 2015, shown in Fig. \ref{fig:realPrice}. The figure shows an initial period in which the price trend is relatively constant, until about the $950^{th}$ day. Then, a period of volatility follows between the $950^{th}$ and the $1150^{th}$ day, followed by a period of strong volatility until the end of the considered interval. The Bitcoin price started to fall at the beginning of 2014, and continued on its downward slope until September 2015.
It is well known that the price series encountered in financial markets typically exhibit some statistical features, also known as ``stylized facts'' \cite{Pagan,Lux}. Among these, the three univariate properties which appear to be the most important and pervasive for price series are (i) the unit-root property, (ii) the fat tail phenomenon, and (iii) \emph{volatility clustering}. We examined daily Bitcoin prices and found that these prices also exhibit these properties, as discussed in detail in \cite{Cocco2014}.

As regards the prices in the simulated market, we report in Fig. \ref{fig:Price} the Bitcoin price in one typical simulation run. It is possible to observe that, as in the case of the real price, at first the price keeps its value constant; but then, after about 1000 simulation steps, contrary to what happens in reality, it grows and continues on its upward slope until the end of the simulation period.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{simulatedPrice-eps-converted-to}
\caption{Simulated Bitcoin price in one simulation run.}
\label{fig:Price}
\end{figure}
Figs. \ref{fig:averagePrice} (a) and (b) report the average and the standard deviation of the simulated price, taken over all 100 simulations. Note that the average value of the price steadily increases with time, in contrast with what happens in reality. Fig. \ref{fig:averagePrice} (b) shows that the price variations among different simulation runs increase with time, as the number of traders, the number of transactions and the total wealth in the market are all increasing.
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{averagePrice-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{stdPrice-eps-converted-to}}
\caption{(a) Average price and (b) standard deviation computed on the 100 Monte Carlo simulations performed. \label{fig:averagePrice}}
\end{figure}
In the proposed model, the upward trend of the price depends on an intrinsic mechanism: in fact, the average price tends to the ratio of total available cash to total available Bitcoins. Since new traders bring in more cash than newly mined Bitcoins, the price tends to increase. In reality, the Bitcoin price is also heavily affected by exogenous factors. For instance, in the past the price strongly reacted to reports such as those regarding the Bitcoin ban in China, or the Mt. Gox exchange going bust. Moreover, the total capitalization of the Bitcoin market is of the order of just some billions of US\$, so if a large hedge fund decided to invest in Bitcoins, or if large amounts of Bitcoins disappeared because of theft, fraud or mismanagement, the effect on the price would potentially be quite large. All these exogenous events, which can trigger strong and unexpected price variations, obviously cannot be part of our base model. In section \textit{Other Results}, we shall describe the results obtained when some Random traders adopt speculative behaviours, in addition to the speculative behaviour that characterizes Chartists. Simulating this behaviour allows us to reproduce the Bitcoin price peak in December 2013 and its subsequent fall.

Despite its inability to reproduce the decreasing trend of the price, the model presented in Section \textit{The Model} is able to reproduce quite well the statistical properties of real Bitcoin prices and returns. The stylized facts, robustly replicated by the proposed model, are the same as in a previous work by Cocco et al.
\cite{Cocco2014}, and do not depend on the addition of the miners to the model.

\subsection{Traders' Statistics}\label{sec:6.2}
Figs. \ref{fig:AvgStdBtc}--\ref{fig:AvgStdTotCash} show the average and the standard deviation of the crypto cash, of the fiat cash, and of the total wealth, $A(t)$, of the trader populations, across all 100 simulations. These simulations were carried out with miners buying new hardware using an average percentage of 15\% of their wealth, which proved to be optimal. Figure \ref{fig:AvgStdTotCash}(a) highlights how Miners represent the richest population of traders in the market at the beginning of the simulation. However, from about the 1400th step onwards, Random traders become the richest population in the market. This is mainly due to the higher number of Random traders with respect to Miners. Note also that the standard deviation of the total wealth is much more variable than those shown in the two previous figures. This is due to the fact that the wealth is obtained by multiplying the number of Bitcoins by their price, which is very volatile among the various simulations, as shown in Fig. \ref{fig:averagePrice}(b).
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{cryptoCashAverage-eps-converted-to}}
\hspace{4mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{cryptoCashStd-eps-converted-to}}
\caption{(a) Average and (b) standard deviation of the Bitcoin amount for all trader populations during the simulation period, across all Monte Carlo simulations. \label{fig:AvgStdBtc}}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{fiatCashAverage-eps-converted-to}}
\hspace{4mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{fiatCashStd-eps-converted-to}}
\caption{(a) Average and (b) standard deviation of the cash amount for all trader populations during the simulation period, across all Monte Carlo simulations. \label{fig:AvgStdCash}}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{totalWealthAverage-eps-converted-to}}
\hspace{4mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{totalWealthStd-eps-converted-to}}
\caption{(a) Average and (b) standard deviation of the total wealth for all trader populations during the simulation period, across all Monte Carlo simulations. \label{fig:AvgStdTotCash}}
\end{figure}
Fig. \ref{fig:AvgTotCashPerCapitaOUT7} shows the average of the total wealth per capita for all trader populations, across all 100 Monte Carlo simulations. Miners are clearly the winners from about the 380th simulation step onwards, thanks to their ability to mine new Bitcoins. Specifically, thanks to the percentage of cash that Miners devote to buying new mining hardware, Miners are able to acquire a wealth per capita that ranges from about \$1,000 at the beginning of the simulation to about \$14,000 at the end. This is due to the optimal percentage of cash devoted to buying new hardware, which is drawn from a lognormal distribution $\gamma$ with both average and standard deviation set to 0.15, as already mentioned in Section \textit{The Agents}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{totalWealthPerCapitaOUT7-eps-converted-to}
\caption{Average across all Monte Carlo simulations of the total wealth per capita, for all trader populations.
\label{fig:AvgTotCashPerCapitaOUT7}}
\end{figure}
We varied the average percentage of wealth that Miners devote to buying new hardware, $\gamma$, to verify how this variation impacts Miners' success. Remember that the actual percentage for a given Miner is drawn from a lognormal distribution, so these percentages are fairly different among Miners. Figures \ref{fig:AvgTotCashPerCapita48} (a) and (b) show the total wealth per capita for Miners, for increasing values of the average of $\gamma$. It is apparent that Miners' gains decrease as $\gamma$ increases, so the general strategy of devoting more money to buying hardware is not successful for Miners. This is because, if all Miners devote an increasing amount of money to buying new mining hardware, the overall hashing power of the network increases, and each single Miner does not obtain the expected advantage of having more hash power, whereas the money spent on hardware and energy increases. The wealth per capita ranges between about \$1,000 at the beginning of the simulation and \$8,000 at the end for $\gamma=0.25$ (see Fig. \ref{fig:AvgTotCashPerCapita48} (a)), and between about \$1,000 and \$6,000 for $\gamma=0.35$ (see Fig. \ref{fig:AvgTotCashPerCapita48} (b)).
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{totalWealthPerCapitaOUT4-eps-converted-to}}
\hspace{4mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{totalWealthPerCapitaOUT8-eps-converted-to}}
\caption{Average across all Monte Carlo simulations of the total wealth per capita for all trader populations, (a) for $\gamma=0.25$ and (b) for $\gamma=0.35$. \label{fig:AvgTotCashPerCapita48}}
\end{figure}
Having found that Miners' wealth decreases when too much of it is used to buy new hardware, we studied whether increasing the money spent on mining hardware is a successful strategy for single Miners, when most other Miners do not follow it. Fig. \ref{fig:crossCorr} (a) shows the ratio of the initial Miners' total wealth computed at the end and at the beginning of a single simulation, $\frac{A_i^{f_m}(T)}{A_i^{f_m}(0)}$, versus their actual value of $\gamma_i$, that is, their propensity to spend money on mining hardware. The average is $\langle\gamma\rangle = 0.15$ in this simulation. The correlation coefficient is equal to $-0.14$, so there appears to be no meaningful correlation between mining success and the propensity to invest in hardware. In Fig. \ref{fig:crossCorr} we can see that two of the three most successful Miners, able to increase their wealth by about 100 and 45 times, have a very low value of $\gamma_i$ (less than 0.1), whereas the third one, who was able to increase his wealth forty times, has a high propensity to invest ($\gamma_i \simeq 0.62$). On the contrary, we found that the total wealth, $A_i^{f_m}(T)$, of the miners at the end of the simulation is correlated with their hashing capability $r_i^{f_m}(T)$, the correlation coefficient being equal to 0.788, as shown in Fig. \ref{fig:crossCorr} (b). This result is not unexpected, because wealthy Miners can buy more hardware, which in turn helps them to increase their mined Bitcoins.
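The correlation values quoted above can be reproduced, for instance, with the following minimal sketch (ours, in Python); it assumes that the per-Miner quantities of a single run have been collected into equally long lists, whose names are our own:
\begin{verbatim}
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equally long lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# gamma_i, wealth_ratio (A_i(T)/A_i(0)), hash_rate_T and wealth_T are
# collected from the simulation output, one entry per initial Miner, e.g.:
# print(pearson(gamma_i, wealth_ratio))   # about -0.14 in the run above
# print(pearson(hash_rate_T, wealth_T))   # about  0.788 in the run above
\end{verbatim}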
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{a1-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{c1-eps-converted-to}}
\caption{Scatterplots of (a) the increase in wealth of single Miners versus their average wealth percentage used to buy mining hardware, and (b) the total wealth of Miners versus their hashing power at the end of the simulation. \label{fig:crossCorr}}
\end{figure}
Figures \ref{fig:HR}--\ref{fig:STDelectHardwExpenses} show some significant quantities related to the Miners' population.
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{hashRateMCAvgReal-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{electExpTOTAL-eps-converted-to}}
\caption{(a) Comparison between the real hashing capability and the average of the simulated hashing capability across all Monte Carlo simulations, in log scale, and (b) average and standard deviation of the total expenses in electricity across all Monte Carlo simulations, in log scale. \label{fig:HR}}
\end{figure}
Fig. \ref{fig:HR}(a) shows the average hashing capability of the whole network in the simulated market across all Monte Carlo simulations, and the hashing capability in the real market, both quantities being expressed in log scale. Note that the simulated hashing capability should be about 100 times lower than the real one, due to the reduced size of the simulated market with respect to the real one. The simulated hash rate does not follow the upward trend shown by the real one at about the 1200th time step, which is due to an exogenous cause (the steep price increase at the end of 2013) that is obviously not present in our simulations. However, in Fig. \ref{fig:HR}(a) the simulated hashing capability is indeed about two orders of magnitude lower than the real one, as it should be.

In general, Bitcoin mining hardware becomes obsolete from a few months to one year after purchase. ``Serious'' miners usually buy new equipment every month, re-investing their profits into new mining equipment, if they want their Bitcoin mining operation to run long term (see the web site http://coinbrief.net/profitable-bitcoin-mining-farm/). In our model, miners divest their mining equipment about every ten months.

Figure \ref{fig:electExpenses} (a) shows the average and standard deviation of the power consumption across all Monte Carlo simulations. Figure \ref{fig:electExpenses} (b) shows an estimated minimum and maximum power consumption of the Bitcoin mining network, together with the average of the power consumption of Fig. \ref{fig:electExpenses} (a), in logarithmic scale. The estimated theoretical minimum power consumption is obtained by multiplying the actual hash rate of the network at time $t$ (as shown in Fig. \ref{fig:HR}(a)) by the power consumption $P(t)$ given in eq. \ref{P}. This would mean that the entire hashing capability of miners is obtained with the most recent hardware. The estimated theoretical maximum power consumption is obtained by multiplying the actual hash rate of the network by the power consumption $P(t-360)$, referring to one year earlier. This would mean that the entire hashing capability of miners is obtained with hardware one year old, and thus less efficient. The estimated obsolescence of mining hardware is between six months and one year, so the period of one year should give a reliable maximum value for power consumption.
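A minimal sketch of this estimate (ours, in Python; the function and parameter names are our own, and the fitted curve $P(t)$ of eq. \ref{P} is redefined inline):
\begin{verbatim}
import math

def power_per_hash(t):          # fitted P(t) of eq. (P), in W/(H/s)
    return 4.649e-7 * math.exp(-0.004055 * t)

def power_bounds_mw(network_hash_rate, t):
    """Estimated (min, max) network power consumption in MW at step t.

    network_hash_rate is the real network hash rate at step t, in H/s.
    The minimum assumes only current hardware, the maximum assumes
    one-year-old hardware (P evaluated 360 steps earlier).
    """
    p_min = network_hash_rate * power_per_hash(t)
    p_max = network_hash_rate * power_per_hash(max(t - 360, 1))
    return p_min / 1e6, p_max / 1e6
\end{verbatim}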
The simulation results, averaged over 100 simulations, show a much more regular trend, steadily increasing with time, which is natural given the absence of external perturbations in the model. However, the power consumption value is of the same order of magnitude as the ``real'' case. Note that the simulated consumption shown in Fig. \ref{fig:electExpenses} (b) is multiplied by 100, the scaling factor of our simulations, which have $1/100$ of the real number of Bitcoin traders and miners.
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{powerConsSimulated-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{powerMW-eps-converted-to}}
\caption{(a) Average and standard deviation of the power consumption across all Monte Carlo simulations, and (b) estimated minimum and maximum power consumption of the real Bitcoin mining network (solid lines), together with the average of the power consumption across all Monte Carlo simulations, multiplied by 100, the scaling factor of our simulations (dashed line). For the meaning of the circles, see text. \label{fig:electExpenses}}
\end{figure}
Fig. \ref{fig:electExpenses} (b) also shows a white circle, at the time step corresponding to April 2013, with a value of 38.8 MW. This value is taken from Courtois et al., who in \cite{CourtoisGrajek} write:
\begin{quote}
\small{``In April 2013 it was estimated that Bitcoin miners already used about 982 Megawatt hours every day. At that time the hash rate was about 60 Tera Hash/s. (Refer to article by Gimein Mark `Virtual Bitcoin Mining Is a Real-World Environmental Disaster', 13 April 2013, published on web site www.Bloomberg.com.)''}
\end{quote}
In fact, the hash rate quoted is correct, but the consumption value looks overestimated by one order of magnitude, even with respect to our maximum power consumption limit. We believe this is due to the fact that the authors still referred to FPGA consumption rates, not fully appreciating how quickly ASIC adoption had spread among miners. As of 2015, the combined electricity consumption was estimated to be 1.46 TWh per year, which corresponds to about 167 MW (see the article ``The magic of mining'', 13 January 2015, published on the web site www.economist.com). This value is reported in Fig. \ref{fig:electExpenses} (b) as a black circle. This time, the value is slightly underestimated, being at the lower bound of the power consumption estimate, and is practically coincident with the average value of our simulations.

Figures \ref{fig:AVGelectHardwExpenses} (a) and (b) show an estimate of the expenses incurred every six days in electricity (a) and in hardware (b) for the new hardware bought each day in the real and simulated markets. Note that the values of the simulated expenses are also averages across all Monte Carlo simulations.
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{electExpMCAvgReal-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{hardExpMCAvgReal-eps-converted-to}}
\caption{(a) Real expenses and average simulated expenses in electricity across all Monte Carlo simulations, and (b) real expenses and average simulated expenses in hardware across all Monte Carlo simulations, every six days.
\label{fig:AVGelectHardwExpenses}}
\end{figure}
These expenses were computed assuming that the new hardware bought each day in the real (simulated) market, and hence the additional hashing capability acquired each day, is equal to the difference between the real (simulated) hash rate at $t$ and the real (simulated) hash rate at $t-1$. For both these expenses, contrary to what happens for the respective real quantities, the simulated quantities do not follow the upward trend of the price, because of the constant investment rate in mining hardware. Figures \ref{fig:STDelectHardwExpenses} (a) and (b) show the average and standard deviation, across all Monte Carlo simulations, of the expenses incurred every six days in electricity and in new hardware, respectively, showing the level of variation across the simulations.
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{electExpAvgStd-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{hardExpAvgStd-eps-converted-to}}
\caption{Average and standard deviation of the expenses in electricity (a) and of the expenses in new hardware (b) across all Monte Carlo simulations. \label{fig:STDelectHardwExpenses}}
\end{figure}
Remembering that our model sizes the artificial market at about 1/100 of the real market, and that the number of traders, their cash and their trading probabilities are rough estimates of the real ones, the simulated market outputs can be considered reasonably close to the real ones.

\subsection{Other Results}\label{sec:6.3}
It is known that the Bitcoin price is driven by speculation, government regulation and investor behaviour, and its volatility also depends on Bitcoin acceptance and usage. In 2012 and 2013 prices had a wild ride, until they reached a peak of \$1,150 in December 2013. In 2014 the Bitcoin price fell, following the shutdown of the historical Mt. Gox exchange site and reports regarding the Bitcoin ban in China. To try to reproduce this market trend, we introduced into the model a particular speculative behaviour by some traders.

The speculative mechanism implemented stems from a report, called the ``Willy Report'', published by an anonymous researcher, which alleges suspicious trading activity at Mt. Gox. ``The Willy Report: proof of massive fraudulent trading activity at Mt. Gox, and how it has affected the price of Bitcoin'' was posted on May 25, 2014 on the web site https://willyreport.wordpress.com/. The anonymous researcher claims to have noted a suspicious bot on Mt. Gox, whose trading activity was spread over many accounts, and describes how this massive fraudulent trading activity impacted the price, causing a bubble and a crash.
\vspace{0.5cm}
In the report the researcher writes:
\begin{quote}
\small{``Somewhere in December 2013, a number of traders including myself began noticing suspicious bot behavior on Mt. Gox. Basically, a random number between 10 and 20 bitcoin would be bought every 5--10 minutes, nonstop, for at least a month on end until the end of January. The bot was dubbed `Willy' \dots its trading activity was spread over many accounts. \dots Their trading activity went back all the way to September 27th.\dots In total, a staggering about \$112 million was spent to buy close to 270,000 BTC - the bulk of which was bought in November. So if you were wondering how Bitcoin suddenly appreciated in value by a factor of 10 within the span of one month, well, this may be why.
Not Chinese investors, not the Silkroad bust - these events may have contributed, but they may not have been the main reason. \dots \dots there was another timetraveller account with an ID of 698630 - and this account, after being active for close to 8 months, became completely inactive just 7 hours before the first Willy account became active! So it is a reasonable assumption that these accounts were controlled by the same entity. \dots There were several peculiar things about Markus. First, its fees paid were always 0 (unlike Willy, who paid fees as usual). Second, its fiat spent when buying coins was all over the place, with seemingly completely random prices paid per bitcoin.\dots \dots Since there are no logs past November 2013, the following arguments are largely based on personal speculation, and that of other traders\dots on January 26th, Willy suddenly became inactive -- and with it, the price retraced back to a more reasonable spread with the other exchanges. Shortly after -- on February 3rd to be precise -- it seemed as if Willy had begun to run in reverse, although with a slightly altered pattern: it seemed to sell around 100 BTC every two hours. \dots There's some additional evidence on the chart that a dump bot may have been at play. At several points in time, starting from Feb. 18th, it seemed that some bot was programmed to sell down to various fixed price levels.\dots At this point, I guess the straightforward conclusion would be that this is how the coins were stolen: a hacker gained access to the system or database, was able to assign himself accounts with any amount of USD at will, and just started buying and withdrawing away.\dots''}
\end{quote}
Following this description, we modelled a similar behaviour. We assumed that, until the end of January 2014, 40\% of the Random traders entering the market were Mt. Gox accounts. The Mt. Gox accounts behave like the Random traders described in Paragraph \textit{Random Traders} until July 2012. Then, from August 2012 until the end of January 2014, they issue only buy orders. Next, from February 2014 onwards, they issue only sell orders. Their trading probability is set equal to 0.1 in every period.

Fig. \ref{fig:PriceMtGox} shows the Bitcoin price in one typical simulation run under these conditions. At first, the price keeps its value constant; then, at about 700 simulation steps, it grows, as happens in reality. The price stays high for about 500 simulation steps, then falls, but after a short delay it resumes its upward slope until the end of the simulation, due to the intrinsic mechanism of our model already described previously.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{simulatedPriceMtGox-eps-converted-to}
\caption{Price of Bitcoin in the simulated market including the Mt. Gox accounts.}
\label{fig:PriceMtGox}
\end{figure}
The Mt. Gox accounts' behaviour plays a key role in reproducing a price trend more similar to the real one (shown in Fig. \ref{fig:realPrice}) than that described in section \textit{Bitcoin prices in the real and simulated market}. Figs. \ref{fig:averagePriceMtGox} (a) and (b) report the average and the standard deviation of the simulated price across all Monte Carlo simulations, showing the consistency of the results.
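A minimal sketch of this behaviour (ours, in Python; the step thresholds are approximate conversions of the dates above, with step 1 corresponding to September 1st, 2010 and one step per day):
\begin{verbatim}
import random

AUG_2012 = 700        # approximately August 1st, 2012
FEB_2014 = 1250       # approximately February 1st, 2014
TRADING_PROB = 0.1    # trading probability of a Mt. Gox account

def mtgox_order(step):
    """Return 'buy', 'sell' or None for a Mt. Gox account at a given step."""
    if random.random() > TRADING_PROB:
        return None                              # not active at this step
    if step < AUG_2012:
        return random.choice(["buy", "sell"])    # behaves like a Random trader
    if step < FEB_2014:
        return "buy"                             # buy-only phase
    return "sell"                                # sell-only phase
\end{verbatim}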
\begin{figure}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{averagePriceMtGox-eps-converted-to}}
\hspace{7mm}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{stdPriceMtGox-eps-converted-to}}
\caption{(a) Average price and (b) standard deviation computed on the 100 Monte Carlo simulations performed, with the Mt. Gox accounts included. \label{fig:averagePriceMtGox}}
\end{figure}
All the analyses described in the previous sections were also performed for the model including the Mt. Gox accounts, producing consistent results.

\section{Conclusions} \label{sec:7}
In this work, we propose a heterogeneous agent model of the Bitcoin market, with the aim of studying and analysing the mining process and the Bitcoin market for five years, starting from September 1st, 2010, the approximate date when miners started to buy mining hardware to mine Bitcoins. The proposed model simulates the mining process and the Bitcoin transactions by implementing a mechanism for the formation of the Bitcoin price, and specific behaviours for each type of trader. It includes different trading strategies, an initial distribution of wealth following a Pareto law, a realistic trading and price clearing mechanism based on an order book, the increase with time of the total number of Bitcoins due to mining, and the arrival of new traders interested in Bitcoins.

The model was simulated and its main outputs were analysed and compared to the respective real quantities, with the aim of demonstrating that an artificial financial market model can reproduce the stylized facts of the Bitcoin financial market. The main result is that some key stylized facts of the real Bitcoin price series and of the Bitcoin market are very well reproduced. Specifically, the model reproduces quite well the unit-root property of the price series, the fat tail phenomenon, the volatility clustering of the price returns, the price peak in November 2013, its subsequent fall in April 2014, the generation of Bitcoins, the hashing capability, the power consumption, and the hardware and electricity expenses incurred by Miners.

The proposed model is fairly complex. It is intrinsically stochastic and, of course, it includes endogenous mechanisms affecting the market dynamics. The Zipf distribution of traders' wealth, which impacts the size of the orders, and the ``herding'' effect of Chartists when a price trend is established, play a key role in the distribution of the price returns, and hence in the reproduction of the fat tail phenomenon. The Chartist behaviour, and also the variability of the spread of limit prices as a function of past price volatility, contribute to the volatility clustering. The threshold of activation of Chartists, based on the price relative variation, and the past price volatility used to determine the spread of limit prices, impact the unit-root property of the price series. The percentage of each trader population, the choice of which traders trade at a given time step, the type of trading (buy or sell), and the setting of the quantity to trade, impact the price trend. The setting of the amount of cash devoted to buying new hardware impacts the wealth and hashing capability of Miners, and consequently their hardware and electricity expenses. Future research will be devoted to studying in deeper detail the mechanisms impacting the model dynamics.
In particular, we will perform a comprehensive analysis of the sensitivity of the model to the various parameters, and will add traders with more sophisticated trading strategies, to assess their profitability in the simulated market. In addition, since the calibration of our model is based on very little specific real data and on many assumptions aimed at deriving the needed data from indirect real data, we plan to perform a deeper analysis of the blockchain, and to gather financial data from the existing exchanges, in order to extract the specific information needed for a better calibration of our model.
\section{Introduction}
The discovery of neutrino oscillations has proven that neutrinos have a non-zero mass~\cite{pdg}. Yet, the absolute neutrino mass scale is still unknown, since oscillation experiments are only sensitive to the squared mass differences of the three neutrino mass eigenstates $m_{\nu_i}$. The knowledge of the neutrino mass is crucial both for Particle Physics and for Cosmology. It will be an essential ingredient in answering the question of the neutrino mass generation mechanism, and an important input parameter to reduce degeneracies in cosmological models.

Until more stringent results from laboratory-based experiments are established, cosmological observations themselves provide powerful probes of the neutrino mass. Current limits based on a combination of cosmological probes are $m_{\nu} = \sum_i m_{\nu_i}<120$~meV (95\% C.L.)~\cite{cosmo}. Future experiments aim to reach a precision of $\sigma({m_{\nu} }) = 17$~meV~\cite{cosmofuture}. It is important to note, however, that these results will depend on the underlying cosmological model.

Another sensitive probe of the neutrino mass is the search for neutrinoless double $\beta$-decay ($0\nu\beta\beta$). Here, one exploits the fact that the half-life of the decay depends on the so-called Majorana neutrino mass $m_{\beta\beta} = |\sum_i U^2_{ei}m_{\nu_i}|$. Current best limits are at $m_{\beta\beta} = 120 - 250$~meV~\cite{0nbb} and, thanks to their scalability, future $0\nu\beta\beta$ experiments plan to reach sensitivities down to $m_{\beta\beta} \approx 25$~meV~\cite{Avi08}.

The least model-dependent technique is solely based on the kinematics of single $\beta$-decay. Here, the impact of the so-called effective electron (anti-)neutrino mass $m^2_{\nu_e} = \sum_i |U_{ei}|^2 m_{\nu_i}^2$ is a reduction of the endpoint energy and a distortion of the spectrum close to the endpoint. Near-future experiments are designed to reach a sensitivity of $m_{\nu_e} = 200$~meV (90\% C.L.)~\cite{Drex13}, probing the entire regime in which the neutrino mass eigenstates are quasi-degenerate. New ideas are being explored to push the sensitivity beyond this value, into the inverted or normal hierarchical neutrino mass regime.

\section{Kinematic determination of the neutrino mass}
For a kinematic determination of the neutrino mass, a single $\beta$-decay is generally considered. Neglecting the small recoil of the heavy daughter nucleus, only the emitted electron and neutrino statistically share the energy released in the decay. The electron, however, can never obtain the entire decay energy, since the neutrino takes away at least the amount of energy that corresponds to its mass. Consequently, the maximum electron energy is reduced and the spectrum is distorted in the close vicinity of the spectrum's endpoint $E_0$, see figure~\ref{fig:spectra}.
\begin{figure}[]
\begin{center}
\includegraphics[width = 0.85\textwidth]{./TritiumHolmium.png}
\caption{a: Tritium $\beta$-decay spectrum. b: Holmium-163 electron-capture spectrum. The insets depict a zoom into the endpoint region and demonstrate the impact of a finite effective electron neutrino mass.}
\label{fig:spectra}
\end{center}
\end{figure}
\subsection{Isotopes under consideration}
The classical isotope in the field of neutrino mass measurement is tritium (${}^{3}$H). ${}^{3}$H has an endpoint of 18.6~keV and decays with a half-life of 12.3~years via a super-allowed $\beta$-decay to helium-3 (${}^{3}$He).
A short half-life and a low endpoint are preferable, since in this case the total decay rate per amount of isotope and the relative fraction of events in the region of interest are maximized. Holmium-163 (${}^{163}$Ho) constitutes a new player in the field. It has an endpoint of about 2.8~keV~\cite{eli15} and decays with a half-life of 4570~years via electron-capture to dysprosium-163 (${}^{163}$Dy). In this case there is no electron in the final state, and a neutrino is emitted instead of an anti-neutrino. Here, the decay energy is shared between the neutrino and the excitation of the daughter atom ${}^{163}$Dy, which in turn de-excites via the emission of X-rays and Auger and Coster-Kronig electrons.

\section{Current experimental efforts}
Independent of the isotope, a major experimental requirement is an excellent energy resolution of about 2~eV at $E_0$ in order to resolve the spectral distortion, which only extends over an energy range of a few eV at the endpoint. To allow for a measurement as close as possible to the endpoint, where the signal rate is small but the neutrino mass signal is large, an extremely low background level is mandatory.
\subsection{The KATRIN Experiment}
The Karlsruhe Tritium Neutrino (KATRIN) experiment is a large-scale tritium-$\beta$-decay experiment~\cite{KAT04}. It is currently being commissioned at the Karlsruhe Institute of Technology, Germany. KATRIN is designed to achieve a neutrino mass sensitivity of 200~meV (90\% C.L.) after 3 full-beam years of measurement time.
\subsubsection{Working principle}
\begin{figure}[]
\begin{center}
\includegraphics[width = 0.8\textwidth]{./KATRIN-Beamline.png}
\caption{Main components of the 70-m-long KATRIN experimental setup. a: rear section, b: windowless gaseous tritium source, c: differential and cryogenic pumping section, d: pre-spectrometer, e: main spectrometer, f: focal plane detector.}
\label{fig:KATRIN}
\end{center}
\end{figure}
Tritium of very high isotopic purity ($>95\%$) is injected through capillaries into the windowless gaseous tritium source (WGTS) tube, see figure~\ref{fig:KATRIN}. The $\mathrm{T}_2$ molecules then diffuse over a distance of 5~m to both ends of the WGTS. With about 30~$\mu$g of tritium present in the WGTS at all times, an ultra-high and stable decay rate of $10^{11}$ decays/s is achieved. The WGTS beam tube is situated in a magnetic field, which is oriented in the beam direction. All $\beta$-electrons that are emitted in the forward direction are guided along the field lines towards the spectrometers. On the way from the WGTS to the spectrometers, the flow of tritium has to be reduced by 14 orders of magnitude to avoid tritium-related background in the spectrometer section. This large suppression factor is achieved by a combination of differential and cryogenic pumping. The spectrometers work as electrostatic filters, allowing only those electrons with enough kinetic energy to be transmitted; electrons with less kinetic energy than the filter potential are electrostatically reflected and absorbed at the rear end. The high-energy transmitted electrons reach a focal-plane detector where they are counted. By varying the filter potential and counting the transmitted electrons for each setting, the integral tritium spectrum is determined. In addition to electrostatically filtering the electrons, the spectrometer also aligns the electron momenta via the magnetic gradient force.
This combination of magnetic adiabatic collimation and electrostatic filtering is called the MAC-E filter principle~\cite{Lob85, Pic92} and allows for a high energy resolution with a large angular acceptance. For the electromagnetic design of KATRIN the maximal acceptance angle is $51^{\circ}$ and the sharpness of the electrostatic filter (or energy resolution) is 0.93~eV. \subsubsection{Status and Sensitivity} Since September 2015 all KATRIN components are on-site at KIT. The windowless gaseous tritium source and the cryogenic and differential pumping sections are currently being commissioned and integrated~\cite{Luk11,Bab12}. During two commissioning phases in 2013--2015 the background and transmission properties of the main spectrometer and focal plane detector~\cite{fpd} were studied. The transmission measurements, performed with an angular-selective electron gun, revealed an excellent energy resolution and confirmed that the spectrometer is working as a MAC-E filter as expected, see figure~\ref{fig:trans}~\cite{Groh}. The anticipated radon-induced background~\cite{Mer13} could be reduced to a negligible level by making use of a liquid-nitrogen-cooled baffle system. However, a remaining background level of $\sim$~100~mcps (as opposed to the desired 10~mcps) is still under investigation~\cite{Harms}. The neutrino mass measurement will prospectively start in 2017. With the start of the measurement, the sensitivity of KATRIN improves rapidly, reaching the sub-eV level already after a few months of measurement time. After three years of data taking (5 calendar years) a balance between statistical and systematic error is reached. At this point, a 5$\sigma$ discovery level of $m_{\nu_e}=350$~meV and a 90\% upper limit of $m_{\nu_e}=200$~meV are attained. \begin{figure} \centering \subfigure[]{\includegraphics[width = 0.41\textwidth]{KATRINSpectrometer.png}} \hspace{0.2cm} \subfigure[]{\includegraphics[width = 0.38\textwidth]{TransmissionFunction2.png}} \caption{a: Photograph of the KATRIN main spectrometer surrounded by its large air coil system used to fine-shape the magnetic field. The inset depicts the inner surface of the spectrometer, which is equipped with the inner electrode system to fine-tune the retarding potential. b: Preliminary transmission function. The larger the starting angle of the electrons, the more surplus energy they need to overcome the retarding potential. The shift of the transmission function between $0^{\circ}$ and the maximal angle determines the energy resolution of the spectrometer.} \label{fig:trans} \end{figure} \subsection{The Project 8 Experiment} Project 8 is exploring a new technique for $\beta$-spectrometry based on cyclotron radiation~\cite{p8_idea}. Using molecular tritium this approach could in principle reach the same sensitivity as the KATRIN experiment, but with quite different systematic uncertainties. \subsubsection{Working principle} The general idea of this technique is to measure the coherent electromagnetic cyclotron radiation of the $\beta$-electron. As opposed to KATRIN, where the electron has to be extracted from the gaseous tritium source to measure its energy, here the tritium source is transparent to the cyclotron radiation. The cyclotron frequency depends on the kinetic energy via the relativistic $\gamma$ factor. The technical realization of this approach consists of a magnetic trap inside a wave guide.
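The underlying relation is the relativistic cyclotron frequency (a standard kinematic relation, quoted here for orientation),
\begin{equation}
f_c = \frac{1}{2\pi}\,\frac{e B}{\gamma m_e} \,, \qquad \gamma = 1 + \frac{E_{\rm kin}}{m_e c^2} \,,
\end{equation}
where $E_{\rm kin}$ is the electron kinetic energy. With $eB/(2\pi m_e) \approx 28.0$~GHz per tesla and $\gamma \approx 1.036$ at the tritium endpoint, this reproduces the operating frequency quoted below, and a small energy loss translates directly into a measurable frequency shift.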
The magnetic field determines the frequency range of the $\beta$-electrons and the wave guide dimensions are chosen accordingly to match the frequency band of interest. For 18.6-keV electrons in a 1-T magnetic field the cyclotron frequency is 27.009~GHz. The radiated power scales with $B^2$ and $\sin^2\theta$, where $\theta$ is the angle between the momentum vector of the electron and the direction of the magnetic field. Hence, large angles and a sufficiently strong magnetic field are required. For an electron with an energy near the tritium endpoint, approximately 1.2~fW is radiated in a 1-T magnetic field at a pitch angle of $90^{\circ}$. By choosing a magnetic field setting with a very shallow trap, the pitch angle spread and magnetic field inhomogeneity can be reduced, which improves the energy resolution. \subsubsection{Status and Sensitivity} With a first prototype setup, the Project 8 collaboration successfully provided a proof of principle of the new technique~\cite{p8_signal}. The wave guide of 10.7~$\times$~4.3~$\mathrm{mm}^2$ cross section and 7.6~cm length was placed inside a warm-bore magnet of about 1~T. An additional coil operated with a current of up to 2~A provided a shallow magnetic trap of -8.2~mT depth, which confined all electrons with pitch angles larger than $85^{\circ}$. For test measurements the cell was filled with krypton gas. ${}^{83m}$Kr is a meta-stable state which decays via internal conversion processes, emitting electrons in the keV energy range. Figure~\ref{fig:P8}b shows the signature of a trapped electron. By reducing the depth of the trap an impressive energy resolution of FWHM(@30.4~keV) = 15~eV could be reached. The collaboration is currently aiming to use the prototype setup in conjunction with tritium to test its performance for a continuous energy spectrum. At the same time the options for scaling up the setup and the usage of atomic tritium are being investigated. Preliminary and optimistic sensitivity studies~\cite{p8_sensi} show that with $\sim$1 year of data taking, a density of $10^{11}$ molecules/$\mathrm{cm}^{3}$, and a sensitive volume of 10 cubic meters, a sensitivity of $ m_{\nu_e}\approx100$~meV (90\%~C.L.) could be reached. This is the intrinsic limit dictated by the energy broadening due to the molecular final-state distribution. An instrument with an atomic tritium source of $10^{12}$ atoms/$\mathrm{cm}^{3}$ and a sensitive volume of 100 cubic meters could, in principle, reach a sensitivity of $m_{\nu_e}=40$~meV. \begin{figure} \centering \subfigure[]{\includegraphics[width = 0.41\textwidth]{Project8.png}} \subfigure[]{\includegraphics[width = 0.41\textwidth]{P8_Signal_ll.png}} \caption{a: Basic working principle of the Project 8 experiment. Electrons from tritium $\beta$-decay are trapped in a magnetic field. Their relativistic cyclotron radiation is detected by a wave guide (here depicted as an antenna array). b: First detection of a single electron via cyclotron radiation. The onset of the frequency yields the initial energy of the $\beta$-electron. As it radiates and scatters, it loses energy and hence its cyclotron frequency increases. } \label{fig:P8} \end{figure} \subsection{Electron Capture on Holmium} Currently, three experiments explore the approach of using electron capture on ${}^{163}$Ho to probe the neutrino mass: ECHo, HOLMES, and NuMECS.
These experiments are complementary to tritium-based techniques, both from a technical point of view and because in this case the effective electron neutrino (as opposed to anti-neutrino) mass is measured. \subsubsection{Working principle} The basic idea is to place the ${}^{163}$Ho source inside an absorber material with low heat capacity. X-rays and electrons emitted in the de-excitation of the ${}^{163}$Dy$^{*}$ daughter atom create phonons in the absorber material and cause a small temperature increase. This temperature change is detected by ultra-sensitive thermometers such as transition edge sensors (TES) or metallic magnetic calorimeters (MMC). The calorimetric concept avoids a number of systematic effects as compared to the MAC-E-filter technology. In particular, energy losses due to scattering during the extraction of the electron from the gaseous tritium source are completely circumvented. Furthermore, the intrinsic energy broadening due to the final-state distribution of molecular tritium is not present. However, the micro-calorimetric technique involves a different class of systematic effects and technical challenges. As opposed to the KATRIN experiment, where only the electrons in the region of interest are considered, in these experiments every single decay is detected. The total decay rate is typically twelve orders of magnitude higher than the decay rate in the last few eV below the endpoint. Hence, pile-up becomes a serious concern. To limit pile-up, 1) a fast rise time is needed and 2) the source needs to be spread over a large number of detectors. To operate a large number of detectors in a cryogenic environment, however, a sophisticated multiplexed read-out technology is necessary. Compared with the well-understood super-allowed tritium $\beta$-decay, the theoretical description of the ${}^{163}$Ho spectrum is still a challenge. This topic is addressed by several groups, who found that two- and three-hole excitations due to shake-up and shake-off processes need to be included and might significantly change the expected statistics at the endpoint~\cite{fas15, ruj15}. \begin{figure} \centering \subfigure[]{\includegraphics[width = 0.41\textwidth]{ECHo.png}} \hspace{0.2cm} \subfigure[]{\includegraphics[width = 0.38\textwidth]{EchoResult.png}} \caption{a: Experimental setup of a micro-calorimetric detector. The source (red) is enclosed by a gold absorber (yellow). The paramagnetic temperature sensor (orange) is read out by a SQUID system~\cite{echo_spectrum}. b: ${}^{163}$Ho spectrum measured by the ECHo collaboration. This spectrum represents the first calorimetric measurement of the OI-line~\cite{echo_spectrum_future}.} \label{fig:Kink} \end{figure} \subsubsection{Status and Sensitivity} Three groups are currently developing neutrino mass experiments based on electron capture on ${}^{163}$Ho: ECHo~\cite{echo} is using MMCs for read-out. The holmium source is enclosed in a gold absorber, which is attached to an Au:Er paramagnetic sensor at 30~mK. The temperature change causes a drop of the magnetization of the sensor, which is detected by a SQUID. With their first prototype ECHo could demonstrate an excellent energy resolution of 7.6~eV @ 6~keV and fast rise times of $\tau = 130$~ns. A larger detector array with 16 pixels, increased purity, and higher activity (0.1~Bq) is being tested at the moment. HOLMES~\cite{holmes} is making use of the TES technology. The collaboration is currently performing detector and read-out R\&D.
In particular, a custom ion-implanter is being assembled in Genova to embed the ${}^{163}$Ho in the detectors. The first test with a ${}^{163}$Ho source will prospectively begin in 2017. NuMECS~\cite{numecs} is also pursuing the TES technology. This group's focus is on ${}^{163}$Ho production via proton activation of dysprosium, as opposed to the more common neutron irradiation of ${}^{162}$Er. With their prototype, in which the source was enclosed as a liquid drop in a nanoporous gold absorber, NuMECS successfully measured a ${}^{163}$Ho spectrum with an energy resolution of about 40~eV FWHM. A total statistics of $10^{14}$ events is needed to reach a sub-eV sensitivity. Assuming a rise time that allows for 10~Bq per detector, $10^5$ detectors are needed to reach a $m_{\nu_e}=1$~eV sensitivity within one year of measurement time. \section{Conclusion} The kinematics of $\beta$-decay provides a unique, model-independent means to measure the absolute neutrino mass. KATRIN will start taking data in the near future, reaching a final sensitivity of 200~meV (90\%~C.L.) after 3 years of data collection. The Project 8 collaboration has demonstrated a completely novel concept of measuring the $\beta$-electron's energy via its cyclotron frequency, and holmium-based cryogenic experiments are advancing towards sub-eV sensitivity. These new approaches will provide complementary results and may show a path towards exploring the hierarchical neutrino mass regime.
\section{Introduction} The search for leptonic CP violation constitutes one of the major challenges in particle physics today~\cite{Branco:2011zb}. Although CP violation studies are interesting in their own right, they may also shed light upon the general CP symmetries of the neutrino mass matrices in a rather model--independent way~\cite{Chen:2016ica}, such as the case of the generalized $\mu-\tau$ reflection symmetry~\cite{Chen:2015siy}. Likewise, they can probe the predictions made by specific flavor models and hence put to test the structure of the corresponding symmetries~\cite{Morisi:2012fg,King:2014nza}. This type of CP violation is associated with the Dirac phase $\delta_{CP}$ present in the simplest three-neutrino mixing matrix, which is simply the leptonic analogue of the phase in the CKM matrix, describing the quark weak interactions~\cite{kobayashi:1973fv,Schechter:1980gr,PhysRevLett.51.1945}. It is known to directly affect lepton number conserving processes such as neutrino oscillations. So far neutrino oscillation experiments have measured the two squared neutrino mass differences, as well as the three corresponding mixing angles~\cite{Maltoni:2004ei}. These measurements provide a rather precise determination of all neutrino oscillation parameters, except for the atmospheric mixing angle $\theta_{23}$, whose octant is still uncertain, and the leptonic Dirac CP phase $\delta_{CP}$, which is poorly determined~\cite{Forero:2014bxa}. The precision era in neutrino physics has come with new experimental setups that will provide enough statistics for measuring all of the neutrino parameters to an unprecedented level of accuracy. These include T2K~\cite{Abe:2015awa}, Hyper-K~\cite{Abe:2011ts}, and TNT2K~\cite{TNT2K}. The TNT2K (Tokai 'N Toyama to Kamioka) project is a combination of $\mu$Kam (with $\mu$DAR source and Super-K ($\mu$SK) or Hyper-K ($\mu$HK) detectors at Kamioka) and T2(H)K. All of the above facilities aim at measuring this single Dirac phase $\delta_{CP}$. However, one is likely to depart from such a simple picture, if neutrinos get their mass \textit{a la seesaw}. In this case, neutrino mass arises through the tree level exchange of heavy, so far undetected, $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$ singlet messenger fermions such as ``right-handed'' neutrinos, as in the type-I seesaw mechanism. If the seesaw scheme responsible for generating neutrino mass is accessible to the LHC, then it is natural to expect that neutrino oscillations will be described by a non-unitary mixing matrix. Examples of such mechanisms are the inverse and linear seesaw schemes~\cite{Mohapatra:1986bd,GonzalezGarcia:1988rw,Akhmedov:1995vm,Akhmedov:1995ip,Malinsky:2005bi,Bazzocchi:2010dt}. In these schemes one expects sizeable deviations from the simplest three--neutrino benchmark, in which there are only three families of orthonormal neutrinos. The generic structure of the leptonic weak interaction was first given in Ref.~\cite{Schechter:1980gr} and contains new parameters in addition to those of the simplest three--neutrino paradigm. In this case the description of neutrino oscillations involves an effectively non-unitary mixing matrix~\cite{Escrihuela:2015wra,Li:2015oal}. As a consequence, there are degeneracies in the neutrino oscillation probability involving the ``standard'' three-neutrino CP phase and the ``new'' phase combination arising from the non-unitarity of the neutrino mixing matrix~\cite{Miranda:2016wdr,Miranda:2016ptb}. 
In this paper we examine some strategies to lift the degeneracies present between ``standard'' and ``new'' leptonic CP violation effects, so as to extract with precision the Dirac CP phase from neutrino oscillations in the presence of non-unitary mixing. Such an effort also provides an indirect way to help probe the mass scale involved in neutrino mass generation through the seesaw mechanism. A precise measurement of the genuine Dirac CP phase would also provide direct tests of residual symmetries that can predict correlations between the Dirac CP phase and the mixing angles \cite{Ge:2010js,He:2011kn,Dicus:2010yu,Ge:2011ih,Ge:2011qn,Hanlon:2013ska,He:2015xha}. Note also that probing the non-unitarity of the neutrino mixing matrix in oscillation searches could provide indirect indications for the associated (relatively low--mass) seesaw messenger responsible for inducing neutrino mass. This would also suggest that the corresponding charged lepton flavour violation and CP violation processes could be sizeable, irrespective of the observed smallness of neutrino masses~\cite{bernabeu:1987gr,branco:1989bn,rius:1989gk,Deppisch:2004fa,Deppisch:2005zm}. The spectrum of possibilities becomes even richer in low--scale seesaw theories beyond the $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$ gauge structure~\cite{Deppisch:2013cya,Das:2012ii}. Unfortunately, however, no firm model--independent predictions can be made in the charged sector. As a result, searches for exotic features such as non--unitary neutrino propagation effects may provide a unique and irreplaceable probe of the theory that lies behind the canonical three--neutrino benchmark. This paper is organized as follows. In \gsec{sec:formalism} we summarize the generalized formalism describing neutrino mixing in the presence of non-unitarity. This convenient parametrization is then used to derive the non-unitarity effects upon the three--neutrino oscillation probabilities, by decomposing their dependence on the CP phases and the atmospheric mixing angle $\theta_a$, see details in \gapp{sec:decomposition}. This is useful to demonstrate, in \gsec{sec:effect}, that the size of the non-unitary CP effects can be as large as the standard CP terms, given the current limits on leptonic unitarity violation. In addition, we also implement the inclusion of matter effects~\cite{Mikheev:1986gs,Wolfenstein:1977ue}, as detailed in \gapp{sec:matter}, and illustrate how they can modify the oscillation probabilities. With the formalism established, we show explicitly in \gsec{sec:fake} how the ``non-unitary'' CP phase can fake the standard ``unitary'' one at accelerator neutrino experiments like T2(H)K. In \gsec{sec:muDAR} we show that the degeneracy between unitary and non-unitary CP phases can be partially resolved with TNT2K. Moreover, we further propose a near detector $\mu$Near, with 20\,ton of liquid scintillator and 20\,m of baseline, in order to disentangle the effects of the two physical CP phases and recover the full $\delta_{CP}$ sensitivity at TNT2K. Our numerical simulations for T2(H)K, $\mu$SK, $\mu$HK, and $\mu$Near are carried out with the NuPro package \cite{NuPro}. The conclusion of this paper can be found in \gsec{sec:conclusion}.
\section{Neutrino Mixing Formalism} \label{sec:formalism} Within the standard three--neutrino benchmark scheme the neutrino flavor and mass eigenstates are connected by a unitary mixing matrix $U$~\cite{Valle:2015pba}, \begin{equation} \nu_\alpha = U_{\alpha i} \nu_i \,, \label{eq:mix} \end{equation} where we use the subscript $\alpha$ for flavor and $i$ for mass eigenstates. This lepton mixing matrix may be expressed as \begin{equation} U = \mathcal P \left\lgroup \begin{array}{ccc} c_s c_r & s_s c_r & s_r e^{- i \delta_{CP}} \\ - c_a s_s - s_a s_r c_s e^{i \delta_{CP}} & c_a c_s - s_a s_r s_s e^{i \delta_{CP}} & s_a c_r \\ s_a s_s - c_a s_r c_s e^{i \delta_{CP}} &-s_a c_s - c_a s_r s_s e^{i \delta_{CP}} & c_a c_r \end{array} \right\rgroup \mathcal Q \,, \label{eq:U} \end{equation} in which we have adopted the PDG variant~\cite{Agashe:2014kda} of the original symmetric parametrization of the neutrino mixing matrix~\cite{Schechter:1980gr}, with the three mixing angles $\theta_{12}$, $\theta_{23}$ and $\theta_{13}$ denoted as $\theta_{s}$, $\theta_{a}$ and $\theta_{r}$, for solar, atmospheric and reactor, respectively. Within this description, three of the CP phases in the diagonal matrices $\mathcal P \equiv \mbox{diag}\{e^{- i \beta_1}, e^{- i \beta_2}, e^{- i \beta_3}\}$ and $\mathcal Q \equiv \mbox{diag}\{e^{- i \alpha_1}, e^{- i \alpha_2}, e^{- i \alpha_3}\}$ can be eliminated by redefining the charged lepton fields, while one is an overall phase that can be rotated away. The remaining phases correspond to the two physical Majorana phases~\cite{Schechter:1980gr}~\footnote{The absence of invariance under rephasings of the Majorana neutrino Lagrangean leaves these extra two physical Majorana phases~\cite{Schechter:1980gr}. They do not affect oscillations~\cite{Schechter:1981gk,Doi:1980yb}, entering only in lepton number violating processes, such as neutrinoless double beta decay or $\rm 0\nu\beta\beta$ \cite{Schechter:1981bd}.}. This leaves only the Dirac CP phase $\delta_{CP}$ characterizing CP violation in neutrino oscillations. If neutrinos acquire mass from the general seesaw mechanism through the exchange of $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$ singlet heavy messenger fermions, these extra neutrino states mix with the standard $\nu_e$, $\nu_\mu$, $\nu_\tau$, and the neutrino mixing matrix needs to be extended beyond $3 \times 3$, \begin{equation} U^{n \times n} = \left\lgroup \begin{matrix} N & W \\ V & T \end{matrix} \right\rgroup \,. \label{eq:Unn} \end{equation} Note that the total mixing matrix $U^{n \times n}$ (with $n > 3$) shall always be unitary, regardless of its size. The leptonic weak interaction mixing matrix is promoted to rectangular form~\cite{Schechter:1980gr} where each block can be systematically determined within the seesaw expansion~\cite{Schechter:1981cv}. However, if the extra neutrinos are heavy, they cannot be produced in low energy experiments, nor will they be accessible to oscillations. In such a case only the first $3 \times 3$ block $N$ is visible~\cite{valle:1987gv,nunokawa:1996tg,Antusch:2006vwa}. In other words, the original $3 \times 3$ unitary mixing $U$ in \geqn{eq:U} is replaced by a truncated non-unitary mixing matrix $N$ which will effectively describe neutrino propagation. This can be written as \begin{equation} N = N^{NP} U = \left\lgroup \begin{array}{ccc} \alpha_{11} & 0 & 0\\ \alpha_{21} & \alpha_{22} & 0\\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{array} \right\rgroup U \,.
\label{eq:N} \end{equation} This convenient parametrization follows from the symmetric one in \cite{Schechter:1980gr} and applies for any number of additional neutrino states~\cite{Escrihuela:2015wra}. Irrespective of the number of heavy singlet neutrinos, it involves three real parameters ($\alpha_{11},\alpha_{22}$ and $\alpha_{33}$, all close to one) and three small complex parameters ($\alpha_{21},\alpha_{31}$ and $\alpha_{32}$). In the standard model one has, of course, $\alpha_{ii}=1$ and $\alpha_{ij}=0$ for $i\neq j$. Current experiments, mainly involving electron and muon neutrinos, are sensitive to three of these parameters: $\alpha_{11}$, $\alpha_{22}$ and $\alpha_{21}$. Note that the latter is complex and therefore we end up with three additional real parameters and one new complex phase $$\phi \equiv -{\rm arg}(\alpha_{21}).$$ The above definition matches the notation in Refs.~\cite{Escrihuela:2015wra,Miranda:2016wdr}. There are a number of constraints on non-unitarity, such as those that follow from weak universality considerations. In~\cite{Escrihuela:2015wra} updated constraints on unitarity violation parameters at 90\% C.L. have been given as \begin{equation} \alpha_{11}^2 \geq 0.989 \,, \quad \alpha_{22}^2 \geq 0.999 \,, \quad |\alpha_{21}|^2 \leq 6.6 \times 10^{-4} \,. \label{eq:bounds} \end{equation} These include both universality as well as oscillation limits. Concerning the former, these constraints are all derived on the basis of charged current induced processes and under the assumption that there is no new physics other than that of non-unitary mixing. Such bounds rely on many simplifying assumptions. Departure from such simplifying approximations could result in different bounds on the non-unitarity parameters. Indeed, although naively one might think that new physics interactions would always enhance the deviation from the standard model prediction, strengthening the non-unitarity bounds, the opposite can happen. For example, new physics can weaken the non-universality bounds as a result of subtle cancellations involving the new physics effects contributing to the relevant weak processes~\footnote{ Though less likely, cancellations between new physics and standard model contributions to a given weak process can also be envisaged. }. It is not inconceivable that such cancellations amongst new physics contributions might even result from adequately chosen symmetry properties of the new interactions. Given the fragility of existing constraints, the main emphasis of our paper will be on experiments providing robust model-independent bounds on non-unitarity relying only on neutrino processes. For this reason here we will concentrate on the following bound on $\alpha_{21}$ due to the non-observation of $\nu_\mu$ to $\nu_e$ conversion at the NOMAD experiment, the only relevant neutrino oscillation experiment. We implement this bound as a \textit{prior} in the NuPro package~\cite{NuPro} as \begin{equation} \left[ \sin^2 (2 \theta_{\mu e}) \right]_{eff} = 2 |\alpha_{21}|^2 \leq 0.0014 \quad \quad @ \,\, 90\% \text{C.L.}\, \label{eq:prior2} \end{equation} In contrast to non-oscillation phenomena, the NOMAD experiment puts direct constraints on neutrino oscillations, which can be used as a prior in our simulation. Indeed, the presence of new physics affecting the charged lepton sector would not change the previous bound, since NOMAD results were derived by assuming the standard model values for observables such as $R^\pi_{e\mu}$.
These values are in agreement with current experimental observations and therefore they will not be affected by any other process of new physics in the charged sector. In contrast, new physics in the neutrino sector such as non-standard interactions with matter or light sterile neutrinos could affect the bound in Eq.~(\ref{eq:prior2}). Besides, these additional physics phenomena would in general have different effects in NOMAD and T2K, and therefore the above limit would not be directly applicable to T2K. In order to simplify the physics scenario, here we focus on non-unitarity as the only source of new physics in the neutrino sector. Since no sensitivity to the non-unitary CP phase $\phi$ has been obtained so far, we will take this parameter to be free in our analysis. We will show how non-unitary mixing can degrade the CP measurement in neutrino oscillation experiments under the current model-independent constraints. What we propose in this paper can improve not only the constraint on non-unitary mixing but also the resulting CP sensitivity~\cite{Miranda:2016wdr}. As a reference benchmark value for $\alpha_{21}$ we may take the above bound given by the NOMAD experiment. \section{Effect of the Non-Unitary CP Phase} \label{sec:effect} As demonstrated in \cite{Ge:2013zua}, the three currently unknown parameters in neutrino oscillations, namely the neutrino mass hierarchy, the leptonic Dirac CP phase $\delta_{CP}$, and the octant of the atmospheric angle $\theta_a$, can be analytically disentangled from each other. This decomposition formalism is extremely useful to study the effect of different unknown parameters in various types of neutrino oscillation experiments. Here, we generalize the formalism to accommodate the effect of non-unitary neutrino mixing, $N = N^{NP} U$, as parametrized in Eq.~\geqn{eq:N}. This extra mixing can be factorized from the Hamiltonian $\mathcal H^{NP}$ and the oscillation amplitude $S^{NP}$, together with $U_{23}(\theta_a)$, which is the 2--3 mixing due to the atmospheric angle $\theta_a$, and the rephasing matrix $P_\delta \equiv \mbox{diag}(1,1,e^{i \delta_{CP}})$, \begin{subequations} \begin{eqnarray} \mathcal H^{NP} & = & [N^{NP} U_{23}(\theta_a) P_\delta] \mathcal H' [N^{NP} U_{23}(\theta_a) P_\delta]^\dagger \,, \\ S^{NP} & = & [N^{NP} U_{23}(\theta_a) P_\delta] S' [N^{NP} U_{23}(\theta_a) P_\delta]^\dagger \,. \end{eqnarray} \end{subequations} With fewer mixing parameters, it is much easier to first evaluate $S'$ with the transformed Hamiltonian $\mathcal H'$. The effect of the non-unitary mixing parameters in $N^{NP}$, the atmospheric angle $\theta_a$ and the Dirac CP phase $\delta_{CP}$ can then be retrieved in an analytical way (see \gapp{sec:decomposition} for more details). Here, we find that the key oscillation probability $P_{\mu e}$ for the $\nu_\mu \to \nu_e$ channel is given by, \begin{eqnarray} P^{NP}_{\mu e} & = & \alpha^2_{11} \left\{ \alpha^2_{22} \left[ c^2_a |S'_{12}|^2 + s^2_a |S'_{13}|^2 + 2 c_a s_a (\cos \delta_{CP} \mathbb R - \sin \delta_{CP} \mathbb I) (S'_{12} S'^*_{13}) \right] + |\alpha_{21}|^2 P_{ee} \right. \nonumber \\ &+& \left. 2 \alpha_{22} |\alpha_{21}| \left[ c_a \left( c_\phi \mathbb R - s_\phi \mathbb I \right) (S'_{11} S'^*_{12}) + s_a \left( c_{\phi + \delta_{CP}} \mathbb R - s_{\phi + \delta_{CP}} \mathbb I \right) (S'_{11} S'^*_{13}) \right] \right\} \,.
\label{eq:PmeNP} \end{eqnarray} The choice of this parametrization is extremely convenient to separate the neutrino oscillation probabilities into several terms, as we further elaborate in \gapp{sec:decomposition}. In this formalism, the transition probability $P^{NP}_{\mu e}$ relevant for the CP studies can be decomposed into several terms, $P^{NP}_{\mu e} = \sum_k f_k(\alpha_{ij}, \theta_a, \phi) P^{(k)}_{\mu e}(S')$. It contains six terms $P^{(2,3,7,8,9,10)}_{\mu e}$ involving the Dirac CP phases $\delta_{CP}$ and $\phi$ (see Table \ref{tab:Ps} in \gapp{sec:decomposition}). The standard phase $\delta_{CP}$ is modulated by $P^{(2,3)}_{\mu e}$, which are mainly controlled by the matrix elements $(\mathbb R, \mathbb I)(S'_{12} S'^*_{13})$, while the non-unitarity counterparts $P^{(7,8,9,10)}_{\mu e}$ involve the elements $(\mathbb R, \mathbb I)(S'_{11} S'^*_{12}, S'_{11} S'^*_{13})$. If $(\mathbb R, \mathbb I)(S'_{11} S'^*_{12}, S'_{11} S'^*_{13})$ were of the same size as $(\mathbb R, \mathbb I)(S'_{12} S'^*_{13})$, the effect of the non-unitary CP phase $\phi$ would be suppressed by the constraint $|\alpha_{21}| \lesssim 0.026$. Nevertheless, $S'_{11}$ has a much larger magnitude than $S'_{12}$ and $S'_{13}$, which becomes evident by calculating the amplitude matrix $S'$ in the basis in which the atmospheric angle $\theta_a$ and the Dirac CP phase are factorized out. Since the matter effects are small for the experiments under consideration, here we can illustrate the picture with the result in vacuum~\footnote{Although our results are obtained under the assumption that there is no matter effect, they also apply when the matter effect is not significant. See \gapp{sec:matter} for details.}, \begin{equation} S' = \mathbb I_{3 \times 3} - 2 i \sin \Phi_a e^{- i \Phi_a} \left\lgroup \begin{matrix} s^2_r & & c_r s_r \\ & 0 \\ c_r s_r & & c^2_r \end{matrix} \right\rgroup - 2 i \sin \Phi_s e^{- i \Phi_s} \left\lgroup \begin{matrix} c^2_r s^2_s & c_r c_s s_s &-c_r s_r s^2_s \\ c_r c_s s_s & c^2_s &-s_r c_s s_s \\ - c_r s_r s^2_s & -s_r c_s s_s & s^2_r s^2_s \end{matrix} \right\rgroup , \label{eq:S'} \end{equation} where $\mathbb I_{3 \times 3}$ is the $3 \times 3$ identity matrix and $\Phi_{a,s} \equiv \Delta m^2_{a,s} / 4 E_\nu$ denote the atmospheric and solar oscillation phases. One can see explicitly that the amplitude matrix $S'$ is symmetric in the absence of a matter potential as well as for symmetric matter profiles. \begin{figure}[t] \centering \includegraphics[width=8cm,angle=-90]{deCoeff_T2K.eps} \caption{The decomposed CP coefficients for the neutrino oscillation probability $P_{\mu e}$ for T2(H)K.} \label{fig:deCoeff-T2K} \end{figure} For CP measurements at accelerator experiments, the neutrino energy and baseline are usually configured around the first oscillation peak, $\Phi_a \approx \frac \pi 2$. Correspondingly, $\Phi_s \approx \frac \pi 2 \times \Delta m^2_s / \Delta m^2_a$ is small. Up to leading order, $S'_{11} \approx 1$, in comparison with $S'_{12} \approx - 2 i \sin \Phi_s e^{- i \Phi_s} c_r c_s s_s$ and $S'_{13} \approx - 2 i \sin \Phi_a e^{- i \Phi_a} c_r s_r$. The $S'_{12}$ element is suppressed by $\Delta m^2_s / \Delta m^2_a$ while $S'_{13}$ is suppressed by the reactor angle $\theta_r$. Consequently, the non-unitary elements $\mathbb I(S'_{11} S'^*_{12})$ and $(\mathbb R, \mathbb I)(S'_{11} S'^*_{13})$ are expected to be at least one order of magnitude larger than the unitary elements $(\mathbb R, \mathbb I)(S'_{12} S'^*_{13})$.
Note that $S'_{12}$ is mainly imaginary, which makes $\mathbb R(S'_{11} S'^*_{12})$ almost vanish. Among the remaining non-unitary terms, there is still a hierarchical structure. Since $S'_{12}$ is suppressed by $\Delta m^2_s/\Delta m^2_a$ while $S'_{13}$ is suppressed by $s_r$, the relative size is roughly $|S'_{12} / S'_{13}| \sim 1/5$. In short, there are five independent CP terms in $P_{\mu e}$, in full agreement with the result in \cite{Escrihuela:2015wra}. To give an intuitive picture, we plot in \gfig{fig:deCoeff-T2K} the six CP-related decomposition coefficients at T2(H)K \cite{TNT2K} for illustration. The relative size of the coefficients can then be measured by, \begin{equation} R_{a}\equiv\frac{2|\alpha_{21}|}{\alpha_{22}}\frac{\mathbb R (S'_{11}S'^*_{1a})+\mathbb I (S'_{11}S'^*_{1a})}{\mathbb R (S'_{12}S'^*_{13})+\mathbb I (S'_{12}S'^*_{13})}, \label{eq:Ratios} \end{equation} where $a=2,3$. We plot the ratio $R_{a}$ for $2|\alpha_{21}|/\alpha_{22}=5\%$ in \gfig{fig:Ratio-T2K}, where it is even clearer that $\mathbb I(S'_{11} S'^*_{12})$ and $(\mathbb R, \mathbb I)(S'_{11} S'^*_{13})$ are typically $\sim$ 10-20 times larger than $(\mathbb R, \mathbb I)(S'_{12} S'^*_{13})$, as expected. These considerations show that the standard and the non-unitary contributions can be of the same size. As a result, the non-unitary contribution can easily mimic the shape of the oscillation curve visible to the experimental setup. \begin{figure}[t] \centering \includegraphics[width=8cm]{Ratios.eps} \caption{$R_a$ ratio as given in Eq.~(\ref{eq:Ratios}) for the T2(H)K experimental setup, setting $2|\alpha_{21}|/\alpha_{22}=5\%$. The solid red line corresponds to $R_2$, while $R_3$ is given by the dashed blue line.} \label{fig:Ratio-T2K} \end{figure} \begin{figure}[thb] \centering \includegraphics[scale=0.6]{figure2.eps} \caption{Electron antineutrino appearance probability as a function of $L/E$ for three different assumptions: (i) black solid line: unitary case with $\delta_{CP}=0$, (ii) blue dashed line: unitary with $\delta_{CP}=3\pi/2$, (iii) red solid line: non-unitary case with $\delta_{CP}=0$, $|\alpha_{21}|=0.02$ and $\phi=0.1\pi$. } \label{fig:mimic} \end{figure} Another intuitive way to observe this is through the plot of the oscillation probability as a function of $L/E$ in \gfig{fig:mimic}. Notice how a non-zero value of $\phi$ can mimic the behaviour of $\delta_{CP}=3\pi/2$ (dashed blue line) even with $\delta_{CP}=0$ (solid red line). Later on, it will become clear that if the magnitude of the non-unitary CP effect $|\alpha_{21}|$ is as large as $5 \%$, the standard CP phase $\delta_{CP}$ will not be distinguishable from its non-unitary counterpart $\phi$, unless the experiment can measure neutrino oscillations over a wide range of $L/E$. This issue will be taken up and elaborated in \gsec{sec:fake}. It should be pointed out that although in the T2K experiment the matter effect is small, it is not completely negligible when considering the sensitivity to the CP phases. The effect of the non-unitary mixing and the matter potential on the electron neutrino appearance probability is shown in \gfig{fig:non-uni-matter}. This means that a CP analysis should take matter effects into account: in \gapp{sec:matter} we present a formalism to deal with matter effects in the context of non-unitary neutrino mixing. As a good approximation, one can assume an Earth profile with constant density $\rho_{\rm earth}=3 \,\rm{g}/\rm{cm}^3$, which we adopt throughout this paper.
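For orientation, the relative size of the matter term can be estimated from the standard charged-current potential (a rough estimate on our part, taking an electron fraction $Y_e \approx 0.5$ together with the constant density above):
\begin{equation}
V_{CC} = \sqrt{2}\, G_F n_e \approx 7.6\times10^{-14}~\mbox{eV} \left(\frac{Y_e\,\rho}{\rm g/cm^3}\right) \,, \qquad \frac{2 E_\nu V_{CC}}{\Delta m^2_a} \approx 0.05 \quad \mbox{for} \quad E_\nu \approx 0.6~\mbox{GeV} \,,
\end{equation}
i.e.\ a few-percent correction relative to the atmospheric term, which is why the matter effect is small but not entirely negligible for the CP analysis.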
\begin{figure}[t] \centering \includegraphics[scale=0.6]{matteroscillation.eps} \includegraphics[scale=0.6]{matteroscillation_asymetry.eps} \caption{Left: muon to electron neutrino appearance probability at a baseline of $295$ km. Right: the corresponding CP asymmetry between neutrino and anti-neutrino oscillations. We compare three assumptions: unitary mixing in vacuum (red), unitary mixing in matter (blue) and non-unitary mixing in matter with $|\alpha_{21}|=0.02$ and $\phi=3\pi/2$ (green). In all cases we take $\delta_{CP}=3\pi/2$. } \label{fig:non-uni-matter} \end{figure} \section{Faking the Dirac CP Phase with Non-Unitarity} \label{sec:fake} As depicted in Figs.~\ref{fig:deCoeff-T2K} and \ref{fig:Ratio-T2K}, the sizes of the amplitude matrix elements $\mathbb I(S'_{11} S'^*_{12})$ and $(\mathbb R, \mathbb I)(S'_{11} S'^*_{13})$ that contribute to the CP terms associated with unitarity violation are typically $\sim$ 10-20 times larger than their unitary counterparts $(\mathbb R, \mathbb I)(S'_{12} S'^*_{13})$. According to the prior constraint in Eq.~\geqn{eq:prior2}, the magnitude of the non-unitary CP term $|\alpha_{21}|$ is about $2.6\%$ at 90\% C.L. Consequently, after taking into account the extra factor of $2$ associated with $|\alpha_{21}|$ in \gtab{tab:Ps}, one finds that the non-unitary CP coefficients $P^{(8,9,10)}_{\mu e}$ can be as large as the unitary ones $P^{(2,3)}_{\mu e}$. Hence there is no difficulty for the non-unitary CP phase $\phi$ to fake the effects normally ascribed to the conventional CP phase $\delta_{CP}$, given the currently available prior constraint on non-unitarity. In order to study to what extent the standard CP phase $\delta_{CP}$ can be faked by the non-unitary CP phase $\phi$, we simulate, for illustration, the T2(H)K experiment, as shown in \gfig{fig:chi2-T2K}. The pseudo-data are simulated with the true value of $\delta_{CP}=3\pi/2$, under the assumption of unitary mixing, \begin{equation} \delta^{true}_{CP} = 3\pi/2 \,, \qquad \alpha^{true}_{11} = \alpha^{true}_{22} = 1 \,, \qquad |\alpha_{21}|^{true} = 0 \,. \label{eq:true} \end{equation} In other words, there is no unitarity violation in the simulated pseudo-data. We assume that the $7.8 \times 10^{21}\mbox{POT}$ flux of T2K \cite{T2K1409}, corresponding to 6 years of running, is equally split between the neutrino and anti-neutrino modes; the same configuration is assigned to T2HK in this section. \begin{figure}[t!] \centering \includegraphics[scale=0.31,angle=-90]{muDAR_dCP_error4_dCP270_independent_T2K} \includegraphics[scale=0.31,angle=-90]{muDAR_dCP_error4_dCP270_independent_T2HK} \caption{The marginalized $\chi^2(\delta_{CP})$ function at T2K and T2HK under the assumptions of unitary mixing ({\color{blue}{blue}}) and non-unitary mixing with ({\color{red}{red}}) or without (black) the prior constraint.} \label{fig:chi2-T2K} \end{figure} To extract the sensitivity on the leptonic Dirac CP phase $\delta_{CP}$, we fit the pseudo-data with the following $\chi^2$ function, \begin{equation} \chi^2 \equiv \chi^2_{stat} + \chi^2_{sys} + \chi^2_{prior} \,, \end{equation} where the three terms ($\chi^2_{stat}$, $\chi^2_{sys}$, $\chi^2_{prior}$) stand for the statistical, systematical, and prior contributions.
The statistical contribution $\chi^2_{stat}$ comes from the experimental data points, \begin{equation} \chi_{stat}^2 = \sum_i \left( \frac{N_i^{\rm pred} - N_i^{\rm data}}{\sqrt{N_i^{\rm data}}} \right)^2 \,, \label{eq:chi2-stat} \end{equation} with summation over the energy bins of a specific experiment. For the combined analysis of several experiments, the total $\chi^2_{stat}$ is a sum over their contributions. In the systematical term $\chi^2_{sys}$ we take into account the flux uncertainties. For T2(H)K, we assume a 5\% flux uncertainty for the neutrino and anti-neutrino modes independently, \begin{equation} \chi^2_{sys} = \left( \frac {f_\nu - 1}{0.05} \right)^2 + \left( \frac {f_{\bar \nu} - 1}{0.05} \right)^2 \,. \end{equation} Note that both the statistical $\chi^2_{stat}$ and systematical $\chi^2_{sys}$ parts need to be extended when adding extra experiments. In contrast, the prior knowledge is common to the different experimental setups. For the discussion that follows, it consists of two parts, \begin{equation} \chi^2_{prior} = \chi^2_{unitary} + \chi^2_{non-unitary} \,. \end{equation} The first term $\chi^2_{unitary}$ contains the current measurement of the three-neutrino oscillation parameters~\cite{Forero:2014bxa}, as summarized in Sec.~2.1 of \cite{TNT2K}, while the contribution $\chi^2_{non-unitary}$ accounts for the current constraint on the unitarity violating parameters in Eq.~\geqn{eq:prior2}. Note that the unitary prior contribution $\chi^2_{unitary}$ is always imposed, while $\chi^2_{non-unitary}$ is only considered when fitting the data under the non-unitarity assumption with the prior constraint. We then fit the data under different assumptions. For each value of the CP phase $\delta_{CP}$, the marginalized value of $\chi^2$ in \gfig{fig:chi2-T2K} is obtained by first fixing the fit value of $\delta_{CP}$ and then minimizing the $\chi^2$ function over the other oscillation parameters. Depending on the assumption, the parameter list includes the three mixing angles, the two mass squared differences, and the non-unitary parameters. The blue curves in \gfig{fig:chi2-T2K} are obtained by assuming standard unitary mixing, with minimization over the three mixing angles ($\theta_a$, $\theta_r$, $\theta_s$) and the two mass splittings ($\Delta m^2_a$, $\Delta m^2_s$). The result is the marginalized $\chi^2 (\delta_{CP})$ function from which we can read off the CP measurement sensitivity, $\chi^2(\delta_{CP}) = 1$ for $1 \sigma$. One can see that T2K can distinguish reasonably well a nonzero Dirac CP phase from zero, while T2HK can further enhance this sensitivity, under the unitarity assumption. We then turn on the non-unitarity parameters and $\chi^2_{non-unitary}$. As we can see, the situation changes completely once non-unitarity is introduced. The inclusion of the non-unitarity degrees of freedom ($\alpha_{11}$, $\alpha_{22}$, $|\alpha_{21}|$, and $\phi$) requires the marginalization over nine parameters. Given a nonzero fitting value $\delta^{fit}_{CP}$, one can find a counter-term from the non-unitarity terms $P^{(8,9,10)}_{\mu e}$ that cancels the CP effect arising from the standard terms $P^{(2,3)}_{\mu e}$, leading to better agreement with the pseudo-data. In other words, the effect of the CP phase $\delta_{CP}$ can be faked by its non-unitary counterpart $\phi$. The resulting $\chi^2(\delta_{CP})$ becomes nearly flat, as shown by the red curves in \gfig{fig:chi2-T2K}.
Under the assumption of non-unitary mixing, there is almost no CP sensitivity in either T2K or T2HK. \begin{figure}[t] \centering \includegraphics[scale=0.6]{number_plot2.eps} \caption{ Bi-event rate plot for T2K for standard three--neutrino mixing with varying $\delta_{CP}$ (black line), and non-unitary mixing with a fixed $\delta_{CP}$ value and varying $\phi$ (color lines). Dashed lines correspond to $\sin^2\theta_{a}=0.5$ while solid lines correspond to $\sin^2\theta_{a}=0.5\pm0.055$. } \label{fig:elpseT2K} \end{figure} Imposing the correlated prior constraint \geqn{eq:prior2} as $\chi^2_{non-unitary}$ slightly improves the situation, as shown by the black curves in \gfig{fig:chi2-T2K}. Nevertheless, the CP sensitivity is still much worse than in the standard case. The difference between $\delta^{true}_{CP} = -90^{\circ}$ and $\delta^{fit}_{CP} = 180^{\circ}$ reduces from $2\sigma$ to less than $1\sigma$. With or without the prior constraint, the CP sensitivity at T2(H)K is significantly reduced by the presence of non-unitary mixing. An intuitive plot to illustrate this fact is presented in \gfig{fig:elpseT2K} where we show the event rates for the neutrino and antineutrino appearance channels in T2K for two different assumptions: the standard three--neutrino case with varying $\delta_{CP}$ (black line), and the alternative non-unitary case with fixed $\delta_{CP}$ and varying $\phi$ (color lines). The variation of the atmospheric angle $\theta_a$ has also been considered in the non-unitary case. In particular, dashed lines in the plot correspond to maximal mixing, $\sin^2\theta_{a}=0.5$, while solid lines cover approximately the 1$\sigma$ allowed range, $\sin^2\theta_{a}=0.5\pm0.055$. A similar plot was presented in \cite{Miranda:2016wdr} for $L/E=500$~m/MeV, in order to understand the origin of the ambiguity in parameter space which is inherent to the problem. Now we show that, for the same baseline $L/E\approx 500$ m/MeV, the uncertainties in the atmospheric mixing angle spoil the good sensitivity to $\delta_{CP}$ found after the combination of the neutrino and antineutrino channels in Ref.~\cite{Miranda:2016wdr}. Moreover, one should keep in mind that, in a realistic case, the existence of flux uncertainties would change each of the ellipses of \gfig{fig:elpseT2K} into bands. The reason that the leptonic Dirac CP phase $\delta_{CP}$ can be faked by non-unitarity at T2(H)K is the choice of a narrow neutrino energy spectrum peaking around 550\,MeV and a baseline of 295\,km. With this choice, the oscillation phase $\Phi_a \approx \pi / 2$ is almost maximal and the $\cos \delta_{CP}$ term vanishes with its coefficient $\cos \Phi_a$. It is still easy for the CP phase $\phi$ associated with non-unitarity to fake the standard Dirac phase $\delta_{CP}$, even at the special point identified in \cite{Miranda:2016wdr}, where the degeneracies cancel out in the ideal case of a precisely known $\theta_a$ and a monochromatic energy spectrum. The faking of the standard Dirac CP phase comes from the interplay of various elements.
Around the maximal oscillation phase, $\Phi_a \approx \pi / 2$, the oscillation probability for neutrinos and anti-neutrinos can be approximated by, \begin{subequations} \begin{eqnarray} P_{\mu e} & \approx & 4 s^2_a c^2_r s^2_r \sin^2 \Phi_a + 2 |\alpha_{21}| \mathbb R(S'_{11} S'^*_{13}) \cos (\phi + \delta_{CP}) \nonumber \\ & - & \mathbb I(S'_{12} S'^*_{13}) \sin \delta_{CP} + 2 |\alpha_{21}| \mathbb I(S'_{11} S'^*_{12}) \sin \phi \,, \\ P_{\bar \mu \bar e} & \approx & 4 s^2_a c^2_r s^2_r \sin^2 \Phi_a + 2 |\alpha_{21}| \mathbb R(S'_{11} S'^*_{13}) \cos (\phi + \delta_{CP}) \nonumber \\ & + & \mathbb I(S'_{12} S'^*_{13}) \sin \delta_{CP} - 2 |\alpha_{21}| \mathbb I(S'_{11} S'^*_{12}) \sin \phi \,, \end{eqnarray} \end{subequations} where the first line is the same for both the neutrino and anti-neutrino modes, while the second receives a minus sign. To fit the current experimental best value $\delta^{true}_{CP} = -\pi/2$ with the opposite $\delta^{fit}_{CP} = \pi/2$, the major difference is introduced by the $\sin$ terms in the second line. The CP sensitivity is spoiled by freeing $\theta_a$ and $|\alpha_{21}|$, and it can be faked by varying $\phi$. This introduces a common correction via the $\cos (\phi + \delta_{CP})$ term for both the neutrino and anti-neutrino channels. The large uncertainty in the atmospheric angle, which can reach $10\%$ in $s^2_a$, helps to absorb this common correction. The remaining $\sin \phi$ and $\sin (\phi + \delta_{CP})$ terms can then fake the genuine CP term $\sin \delta_{CP}$. Although the coefficients of $\sin \phi$ and $\sin (\phi + \delta_{CP})$ are relatively small, they are not zero. As long as $\alpha_{21}$ is large enough, CP can be faked. This can explain the behavior seen in \gfig{fig:chi2-T2K} and \gfig{fig:elpseT2K}. \section{Probing CP violation with $\mu$DAR and Near Detector} \label{sec:muDAR} In order to fully resolve the degeneracy between the unitary and non-unitary CP phases, it is necessary to bring back the $\cos \delta_{CP}$ dependence by carefully choosing the energy spectrum and baseline configuration. A perfect candidate for achieving this is to use muon decay at rest ($\mu$DAR), which provides a wide spectral peak and a shorter baseline of around 15--23~km. The TNT2K experiment \cite{TNT2K} is proposed to supplement the existing Super-K detector and the future Hyper-K detector with a $\mu$DAR source. Since the accelerator neutrinos in T2(H)K have higher energy than those of the $\mu$DAR source, the two measurements can run simultaneously. Note that for T2K we use the current configuration as described in \gsec{sec:fake}, while for T2HK the $7.8 \times 10^{21} \mbox{POT}$ flux is assigned to the neutrino mode only. On the other hand, the $\mu$DAR source can contribute a flux of $1.1 \times 10^{25} \mbox{POT}$ \cite{TNT2K}. Notice that this experiment has backgrounds from atmospheric neutrinos, from elastic scattering off electrons, and from quasi-elastic scattering off heavy nuclei. In addition, the $\mu$DAR flux can have a 20\% uncertainty if there is no near detector. Note also that the sensitivity to break the degeneracy between $\delta_{CP}$ and $\pi - \delta_{CP}$ at T2(H)K, arising from the single $\sin \delta_{CP}$ dependence, can be improved because of the wide spectrum of $\mu$DAR, which has both $\cos \delta_{CP}$ and $\sin \delta_{CP}$ dependences as shown in \gfig{fig:deCoeff-muKam}.
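The origin of this particular degeneracy is elementary; since
\begin{equation}
\sin(\pi - \delta_{CP}) = \sin\delta_{CP} \,, \qquad \cos(\pi - \delta_{CP}) = -\cos\delta_{CP} \,,
\end{equation}
a measurement that effectively probes only the $\sin\delta_{CP}$ term cannot separate $\delta_{CP}$ from $\pi - \delta_{CP}$, while any additional $\cos\delta_{CP}$ dependence lifts the ambiguity.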
\begin{figure}[t] \centering \includegraphics[width=5.6cm,angle=-90]{deCoeff_muSK.eps} \includegraphics[width=5.6cm,angle=-90]{deCoeff_muHK.eps} \caption{The amplitude matrix elements $S'_{ij}$ that contribute to the decomposed CP coefficients for the probabilities of anti-neutrino oscillation at $\mu$SK and $\mu$HK.} \label{fig:deCoeff-muKam} \end{figure} For the $\mu$DAR flux, the spectrum peaks around 40-50 MeV. In this energy range, the decomposed coefficients $P^{(2)}_{\mu e, e \mu}$ for the $\cos \delta_{CP}$ dependence have a magnitude comparable with the $\sin \delta_{CP}$ term coefficients $P^{(3)}_{\mu e, e \mu}$. In contrast, for T2(H)K the coefficients $P^{(2)}_{\mu e, e \mu}$ vanish around the spectrum peak $\sim 550$\,MeV while $P^{(3)}_{\mu e, e \mu}$ have sizable magnitude, as shown in \gfig{fig:deCoeff-T2K}. The property of having both $\cos \delta_{CP}$ and $\sin \delta_{CP}$ dependences is exactly what is needed to also break the degeneracy between the unitary and non-unitary CP phases. As shown in \gfig{fig:chi2-TNT2K}, supplementing T2K with $\mu$SK can preserve the CP sensitivity at the T2K level even without imposing the prior constraint \geqn{eq:prior2}. With the prior constraint, the CP sensitivity can improve further, beyond that of T2K alone for unitary mixing. The same holds for the T2HK configuration. Nevertheless, the advantage of $\mu$DAR is still not fully utilized. \begin{figure}[t] \centering \includegraphics[scale=0.31,angle=270]{muDAR_dCP_error4_dCP270_independent_T2K_muSK.eps} \includegraphics[scale=0.31,angle=270]{muDAR_dCP_error4_dCP270_independent_T2HK_muHK.eps} \caption{The marginalized $\chi^2(\delta_{CP})$ function at TNT2K under the assumptions of unitarity ({\color{blue}{blue}}), non-unitary mixing with ({\color{red}{red}}) or without (black) the prior constraint.} \label{fig:chi2-TNT2K} \end{figure} An important difference between T2(H)K in \gfig{fig:chi2-T2K} and TNT2K in \gfig{fig:chi2-TNT2K} is the effect of adding the prior constraint. At T2(H)K, the prior constraint can only add some moderate improvement. On the other hand, its effect is maximized at TNT2K after including $\mu$Kam. We find that the CP sensitivity is significantly improved by the combination of $\mu$Kam and the prior constraint. Notice in \gfig{fig:elpseTNT2K} that the ambiguity of the ellipses is not improved by adding another experiment; nevertheless, one can distinguish the standard case from the non-unitary case by taking a closer look at the neutrino spectrum, which contains more information. \begin{figure}[t!] \centering \includegraphics[scale=0.6]{number_plot.eps} \caption{ Bi-event rate plot for TNT2K for standard three--neutrino mixing with varying $\delta_{CP}$ (black line), and non-unitary mixing with a fixed $\delta_{CP}$ value and varying $\phi$ (color lines). Dashed lines correspond to $\sin^2\theta_{a}=0.5$ while solid lines correspond to $\sin^2\theta_{a}=0.5\pm0.055$. } \label{fig:elpseTNT2K} \end{figure} Indeed, the advantage of $\mu$Kam is not fully exploited with the current prior constraint in \geqn{eq:prior2}. Since the non-unitary CP effect is modulated by $|\alpha_{21}|$, a more stringent constraint on $|\alpha_{21}|$ would effectively suppress the size of the faked CP violation. From the expression of $P^{NP}_{\mu e}$ in Eq.~\geqn{eq:PmeNP}, one sees that if the oscillation baseline is extremely short, it is dominated by the last term \begin{equation} P^{NP}_{\mu e} \approx \alpha^2_{11} |\alpha_{21}|^2 \,, \end{equation} which is a nonzero constant.
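For orientation, this zero-distance probability maps directly onto the effective short-baseline amplitude used in the prior constraint: from \geqn{eq:prior2},
\begin{equation}
P^{NP}_{\mu e}(L \to 0) \;\approx\; |\alpha_{21}|^2 \;=\; \frac{1}{2}\left[ \sin^2 (2 \theta_{\mu e}) \right]_{eff} \;\lesssim\; 7\times10^{-4} \quad \mbox{at 90\% C.L.} \,,
\end{equation}
so a detector placed essentially at the source measures $|\alpha_{21}|^2$ directly as a flat, energy-independent appearance rate.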
Such a ``zero--distance effect'' is a direct measure of the effective non--orthonormality of weak--basis neutrinos~\cite{valle:1987gv,nunokawa:1996tg}. Although $P^{NP}_{\mu e}$ is suppressed by $|\alpha_{21}|^2$, which is smaller than $6.6 \times 10^{-4}$ at 90\% C.L., a near detector with a very short baseline can still collect a sufficient number of events to provide information on this parameter. We propose a near detector $\mu$Near, with a 20\,ton scintillator detector and a 20\,m baseline to the $\mu$DAR source, to supplement the $\mu$Kam part of TNT2K. By selecting events with double coincidence, the scintillator can identify the oscillated electron anti-neutrinos. Most of the events come from two sources: the signal from $\mu^+$ decay and the background from $\mu^-$ decay. For both signal and background, the parent muons decay at rest and hence have well--defined spectra, as shown in the left panel of \gfig{fig:muNear}. For a background-signal flux ratio $\mu^-\mbox{DAR}/\mu^+\mbox{DAR} = 5 \times 10^{-4}$ \cite{TNT2K} and a non-unitary size $|\alpha_{21}| = 0.02$, the signal and background have roughly the same number of events, $N_{sig} = 1446$ and $N_{bkg} = 1234$. If the neutrino mixing is unitary, only the background is present. Based on this we can roughly estimate the sensitivity at $\mu$Near to be $\sqrt{N_{bkg}}/N_{sig} \approx 2.4\%$ for $|\alpha_{21}|^2 = (0.02)^2$. When converted to $|\alpha_{21}|$, the limit can be improved by a factor of $1/\sqrt{2.4\%} \approx 6.5$ with respect to the reference value of $0.02$, at roughly the $1\,\sigma$ level. In addition, the spectrum shape is quite different between the signal and the background. The signal peak appears around 50\,MeV where the background event rate is much smaller. This difference in energy spectrum can further enhance the sensitivity beyond the rough estimate based on the total event rate. The constraint on $|\alpha_{21}|$ can therefore be significantly improved beyond the current limit in \geqn{eq:prior2}. \begin{figure}[t] \centering \includegraphics[height=0.45\textwidth,width=5.5cm,angle=-90]{muNear_eventRate} \includegraphics[height=0.50\textwidth,width=5.5cm,angle=-90]{muNear_limit2} \caption{Event rates (left panel) and the sensitivity on $|\alpha_{21}|$ (right panel) at $\mu$Near as a function of the background rate and the detector size. For the sensitivity plot the solid contours are obtained with both a 20\% uncertainty in the $\mu$DAR flux normalization and a 50\% uncertainty in the background-signal flux ratio. In contrast, the dashed contours are obtained with only the 20\% uncertainty in the $\mu$DAR flux normalization while the background-signal flux ratio is kept fixed.} \label{fig:muNear} \end{figure} In the right panel of \gfig{fig:muNear} we show the sensitivity on $|\alpha_{21}|$ as a function of the background rate and the detector size from a simplified template fit. The result for a background rate of $5 \times 10^{-4}$ and a 20\,ton detector is of the same size as the rough estimate. The concrete value, $|\alpha_{21}| < 0.004$ at 1\,$\sigma$, is slightly larger due to marginalization. In \gfig{fig:muNear} we assumed systematic errors of 20\% for the $\mu$DAR flux normalization and 50\% for the background-signal flux ratio. The solid contours in the right panel are obtained with both systematic errors imposed, while the dashed ones include only the 20\% uncertainty in the flux normalization. The difference in the sensitivity on $|\alpha_{21}|$ only appears in the region of small detector size or small background rate.
For the 20\,ton detector and a background rate larger than $10^{-4}$, the difference is negligibly small. In the full simulation, we only implement the 20\% uncertainty in the flux normalization for simplicity. \begin{figure}[t] \centering \includegraphics[scale=0.31,angle=270]{muDAR_dCP_error4_dCP270_independent_T2K_muSK_muNear.eps} \includegraphics[scale=0.31,angle=270]{muDAR_dCP_error4_dCP270_independent_T2HK_muHK_muNear.eps} \caption{The marginalized $\chi^2(\delta_{CP})$ function at TNT2K + $\mu$Near under the assumptions of unitarity ({\color{blue}{blue}}), non-unitary mixing with (black) or without ({\color{red}{red}}) the prior constraint.} \label{fig:chi2-muNear} \end{figure} In \gfig{fig:chi2-muNear} we show the CP sensitivity at TNT2K plus $\mu$Near once a full simulation is performed. Combining all the information we can get from TNT2K, $\mu$Near, and the prior constraint on the non-unitary mixing parameters in \geqn{eq:prior2}, the CP sensitivity can match the full potential of TNT2K under the assumption of unitary mixing. Even without the prior constraint, the CP sensitivity at TNT2K plus $\mu$Near is very close to the full reach of TNT2K with unitary mixing. Imposing the prior constraint \geqn{eq:prior2} has little effect, since the constraint on $\alpha_{21}$ from the $\mu$Near detector is better by about one order of magnitude. This combination of CP measurements, TNT2K plus $\mu$Near, can determine the leptonic Dirac CP phase $\delta_{CP}$ unambiguously and hence provides an ultimate solution to the degeneracy between unitary and non-unitary CP violation parameters. \section{Conclusion} \label{sec:conclusion} Our interpretation of experimental data always relies on theoretical assumptions, and an unambiguous understanding requires distinguishing between alternative assumptions through careful experimental design. The degeneracy between unitary and non-unitary CP phases in neutrino mixing provides a perfect example. In this paper we have confirmed, in agreement with Ref.~\cite{Miranda:2016wdr}, that for values of $|\alpha_{21}|$ of the order of a few \%, one can have unitarity violating CP oscillation amplitudes of the same order as, or possibly larger than, the standard one associated with $\delta_{CP}$. We have illustrated how the CP sensitivity at accelerator neutrino experiments like T2(H)K is severely degraded in the presence of non-unitarity. Indeed, if neutrino mixing is non-unitary there is, in addition to the standard leptonic Dirac CP phase $\delta_{CP}$, an extra CP phase $\phi$ characterizing deviations from unitarity and affecting the neutrino appearance probability. The effect of the standard phase $\delta_{CP}$ can be easily faked by the non-unitary phase $\phi$ if only the $\sin \delta_{CP}$ dependence is probed, as in the T2(H)K configuration. Probing the interplay with the $\cos \delta_{CP}$ dependence can help to lift the degeneracy. A perfect solution comes from the TNT2K project with T2(H)K supplemented by a $\mu$DAR source. Thanks to the different energy scales of the accelerator and $\mu$DAR neutrino fluxes, the two measurements can proceed at the same time, using the Super-K and Hyper-K detectors simultaneously. In its original proposal, the goal was to get a better measurement of the Dirac CP phase $\delta_{CP}$ within the standard three-neutrino mixing benchmark. We find that it also has the potential of breaking the degeneracy between the standard and non-unitary CP phases. However, TNT2K can fully exploit its advantage only in combination with a near detector.
We propose using $\mu$Near, with only 20\,ton of scintillator and 20\,m of baseline, to monitor the size of the non-unitary CP violating term for the $\mu \to e$ transition, $|\alpha_{21}|$. Our simplified template fit shows that $\mu$Near, with an expected background-signal flux ratio in the $\mu$DAR source of $5 \times 10^{-4}$, can constrain $|\alpha_{21}|$ to be smaller than $4 \times 10^{-3}$ at $1\,\sigma$, which corresponds to almost one order of magnitude improvement with respect to the current model-independent bound obtained from NOMAD data. This estimate is stable against the large uncertainty in the background-signal flux ratio. When implemented in a full simulation, $\mu$Near can almost retrieve the CP sensitivity of TNT2K, providing an ultimate solution to the degeneracy between unitary and non-unitary mixing parameters. In short, non-unitary neutrino mixing is expected in a large class of seesaw schemes at LHC--accessible mass scales. This implies extra mixing parameters, and a new CP phase, that can fake the standard leptonic CP phase $\delta_{CP}$ present in the simplest three-neutrino paradigm. As a result, probing for CP violation in accelerator-type experiments can be misleading. We have considered T2(H)K as an example to illustrate the degeneracy between the ``standard'' and ``non-unitary'' CP phases. Despite the complete loss in its CP sensitivity we note that supplementing T2(H)K with a $\mu$DAR source can help breaking the CP degeneracy, by probing separately both $\cos \delta_{CP}$ and $\sin \delta_{CP}$ dependences in the wide energy spectrum of the $\mu$DAR flux. We have seen that the further addition of a near detector to the $\mu$DAR setup has the potential of removing the degeneracy rather well. \black \section{Acknowledgements} Work supported by Spanish grants FPA2014-58183-P, Multidark CSD2009-00064, SEV-2014-0398 (MINECO), PROMETEOII/2014/084 (Generalitat Valenciana). M.~T. is supported by a Ram\'{o}n y Cajal contract (MINECO). P. S. P. would like to thank the support of FAPESP funding grant 2014/05133-1, 2015/16809-9 and 2014/19164-6. SFG thanks Jarah Evslin for useful discussions. \begin{appendix} \section{Decomposition Formalism for Non-Unitary Mixing} \label{sec:decomposition} The parametrization in Eq.~\geqn{eq:N} isolates the effect of non-unitarity as a multiplicative matrix on the left-hand side of the unitary mixing matrix $U$. This choice is extremely convenient to separate the neutrino oscillation probabilities into several terms, using the decomposition formalism \cite{Ge:2013zua}. The latter has a huge benefit for the case of non-unitary mixing, characterized by the parameters $\alpha_{ij}$ in $N^{NP}$. Indeed it simplifies considerably the calculation of the oscillation amplitudes as we demonstrate below. The neutrino oscillation amplitude can always be evaluated as, \begin{equation} S^{n \times n} \equiv e^{- i t \mathcal H^{n \times n}} \,, \end{equation} no matter in which basis. It is convenient to first diagonalize the Hamiltonian, \begin{equation} \mathcal H^{n \times n} = U^{n \times n} \left\lgroup \begin{matrix} \sqrt{E^2 - m^2_1} \\ & \ddots \\ & & \sqrt{E^2 - M^2_n} \end{matrix} \right\rgroup (U^{n \times n})^\dagger \equiv U^{n \times n} \mathcal H^{n \times n}_D (U^{n \times n})^\dagger \,, \label{eq:Sp2} \end{equation} % and evaluate the oscillation in the mass eigenstate basis, \begin{equation} S^{n \times n} = U^{n \times n} S^{n \times n}_D (U^{n \times n})^\dagger \,. 
\end{equation} For neutrino oscillation at low energy, $E < M_{4, \cdots, n}$, the heavy states decay away, since the corresponding entries $\sqrt{E^2 - M^2_i}$ of the diagonalized Hamiltonian become imaginary. In other words, the oscillation amplitude matrix $S^{n \times n}_D \equiv e^{- i t \mathcal H^{n \times n}_D}$ in the mass eigenstate basis has non-trivial elements only in the $3 \times 3$ light block. The oscillation within the three light neutrinos can then be described by the effective amplitude matrix, \begin{equation} S^{NP} = N^{NP} S N^{NP \dagger} \,, \label{eq:SNP} \end{equation} where $S$ is the standard amplitude matrix corresponding to unitary mixing $U$. Note that the extra neutrinos are much heavier than the energy scale under discussion and hence decouple from the (low-energy) neutrino oscillations. Their low-energy effect is just a basis transformation which also applies to the oscillation amplitudes. The neutrino oscillation probability is given by the squared magnitude of the corresponding amplitude matrix element, $P^{NP}_{\alpha \beta} = |S^{NP}_{\beta \alpha}|^2$, \begin{subequations}\label{eq:probtotal} \begin{eqnarray} P^{NP}_{ee} & = & \alpha^4_{11} P_{ee} \,, \\ P^{NP}_{e \mu} & = & \alpha^2_{11} \left[ \alpha^2_{22} P_{e \mu} + 2 \alpha_{22} \mbox{Re} \left( \alpha_{21} S_{ee}^* S_{\mu e} \right) + |\alpha_{21}|^2 P_{ee} \right] \,, \\ P^{NP}_{\mu e} & = & \alpha^2_{11} \left[ \alpha^2_{22} P_{\mu e} + 2 \alpha_{22} \mbox{Re} \left( \alpha_{21}^* S_{ee} S_{e \mu }^* \right) + |\alpha_{21}|^2 P_{ee} \right] \,, \\\nonumber P^{NP}_{\mu \mu} & = & \alpha_{22}^4 P_{\mu\mu} + |\alpha_{21}|^2 \alpha_{22}^2 (P_{\mu e} + P_{e \mu}) + |\alpha_{21}|^4 P_{ee} \\ & + & \sum_{\{a_1,b_1\}\neq\{a_2,b_2\}}\rm{Re}[\alpha_{2a_1}\alpha_{2b_1}^*\alpha_{2a_2}^*\alpha_{2b_2}S_{a_1b_1}S_{a_2b_2}^*]. \end{eqnarray} \label{eq:prob-nonunitary} \end{subequations} % Here $P_{\alpha \beta}$ is the oscillation probability with unitary mixing, and the indices take the values ($a,b$)=($1,2$) for $\alpha_{ab}$ and, correspondingly, ($a,b$)=($e,\mu$) for $S_{ab}$. Note that the remaining five oscillation probabilities ($P^{NP}_{e \tau}$, $P^{NP}_{\tau e}$, $P^{NP}_{\mu \tau}$, $P^{NP}_{\tau \mu}$, $P^{NP}_{\tau \tau}$) cannot be derived from the four in \geqn{eq:prob-nonunitary} by unitarity conditions, since these do not hold in our case. Instead, they need to be calculated directly from the $S^{NP}$ elements in a similar way to the above four. In addition, the atmospheric mixing angle and the Dirac CP phase $\delta_{CP}$ can also be factorized out as transformations, \begin{equation} \mathcal H = [U_{23}(\theta_a) P_\delta] \mathcal H' [U_{23}(\theta_a) P_\delta]^\dagger \,, \qquad S = [U_{23}(\theta_a) P_\delta] S' [U_{23}(\theta_a) P_\delta]^\dagger \,, \label{eq:HS} \end{equation} % where $U_{23}(\theta_a)$ is the 2--3 rotation matrix and $P_\delta \equiv \mbox{diag}(1,1,e^{i \delta_{CP}})$ is a rephasing matrix. The primed quantities, $\mathcal H'$ and $S'$, are defined in the so-called ``propagation basis'' \cite{Akhmedov:1998xq,Yokomakura:2002av}. 
The connection between the non-unitary flavor basis and the ``propagation basis'' is $N^{NP} U_{23}(\theta_a) P_\delta$. Replacing the unitary oscillation amplitude $S$ in the flavor basis by $S'$ \cite{Ge:2013zua} in the ``propagation basis'', with $\theta_a$ and $\delta_{CP}$ rotated away, the non-unitary oscillation probabilities \geqn{eq:prob-nonunitary} become, \begin{subequations} \begin{eqnarray} P^{NP}_{ee} & = & \alpha^4_{11} P_{ee} \,, \\ P^{NP}_{e \mu} & = & \alpha^2_{11} \left\{ \alpha^2_{22} P_{e \mu} + 2 \alpha_{22} |\alpha_{21}| \left[ c_a \left( c_\phi \mathbb R + s_\phi \mathbb I \right) (S'_{11} S'^*_{21}) \right. \right. \nonumber \\ && \left. \left. \hspace{40mm} + s_a \left( c_{\phi + \delta_{CP}} \mathbb R + s_{\phi + \delta_{CP}} \mathbb I \right) (S'_{11} S'^*_{31}) \right] + |\alpha_{21}|^2 P_{ee} \right\} \,, \\ P^{NP}_{\mu e} & = & \alpha^2_{11} \left\{ \alpha^2_{22} P_{\mu e} + 2 \alpha_{22} |\alpha_{21}| \left[ c_a \left( c_\phi \mathbb R - s_\phi \mathbb I \right) (S'_{11} S'^*_{12}) \right. \right. \nonumber \\ && \left. \left. \hspace{40mm} + s_a \left( c_{\phi + \delta_{CP}} \mathbb R - s_{\phi + \delta_{CP}} \mathbb I \right) (S'_{11} S'^*_{13}) \right] + |\alpha_{21}|^2 P_{ee} \right\} \,, \label{eq:PNP-me} \\ P^{NP}_{\mu \mu} & = & \left| \alpha^2_{22} S_{\mu \mu} + \alpha_{22} \left( \alpha_{21} S_{e\mu} + \alpha^*_{21} S_{\mu e} \right) + |\alpha_{21}|^2 S_{ee} \right|^2 \,. \end{eqnarray} \label{eq:PNP-expanded} \end{subequations} % For convenience, we have denoted $(c_\phi, s_\phi) \equiv (\cos \phi, \sin \phi)$ and $(c_{\phi + \delta_{CP}}, s_{\phi +\delta_{CP}}) \equiv (\cos(\phi + \delta_{CP}), \sin(\phi + \delta_{CP}))$, where $\delta_{CP}$ and $\phi$ are the leptonic Dirac CP phase and the non-unitary phase associated with $\alpha_{21} \equiv |\alpha_{21}| e^{-i \phi}$, respectively. The operators $\mathbb R$ and $\mathbb I$ extract the real and imaginary parts, respectively, of the terms that follow them. The general expression \geqn{eq:PNP-expanded} reproduces the fully expanded form in \cite{Escrihuela:2015wra} up to the leading order in $\sin \theta_r \sim 0.15$ and $\Delta m^2_s / \Delta m^2_a \sim 3\%$. The oscillation probabilities $P^{NP}_{e \mu}$ and $P^{NP}_{\mu e}$ in \geqn{eq:PNP-expanded} are not just functions of their unitary counterparts $P_{e \mu}$ and $P_{\mu e}$, but they also contain non-unitary CP terms involving $\phi$. Therefore, the non-unitarity of the neutrino mixing matrix introduces extra decomposition coefficients in addition to those proposed in \cite{Ge:2013zua}, \begin{eqnarray} P^{NP}_{\alpha \beta} & \equiv & P^{(0)}_{\alpha \beta} + P^{(1)}_{\alpha \beta} x_{\rm a} + P^{(2)}_{\alpha \beta} \cos \delta'_{CP} + P^{(3)}_{\alpha \beta} \sin \delta'_{CP} + P^{(4)}_{\alpha \beta} x_{\rm a} \cos \delta'_{CP} + P^{(5)}_{\alpha \beta} x^2_{\rm a} + P^{(6)}_{\alpha \beta} \cos^2 \delta'_{CP} \qquad \nonumber \\ & + & P^{(7)}_{\alpha \beta} c_a c_\phi + P^{(8)}_{\alpha \beta} c_a s_\phi + P^{(9)}_{\alpha \beta} s_a c_{\phi + \delta_{CP}} + P^{(10)}_{\alpha \beta} s_a s_{\phi + \delta_{CP}} \,. \end{eqnarray} % Here, we have expanded the atmospheric angle $\theta_a$ around its maximal value, $c^2_a = (1+x_a)/2$, and rescaled the Dirac CP functions $(\cos \delta'_{CP}, \sin \delta'_{CP}) \equiv 2 c_a s_a (\cos \delta_{CP}, \sin \delta_{CP})$. The explicit forms of these decomposition coefficients are shown in \gtab{tab:Ps}. 
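Before turning to the explicit coefficients, we note that the structure of \geqn{eq:prob-nonunitary} is straightforward to cross-check numerically. The short sketch below is an illustration only: the unitary amplitude matrix $S$ is generated at random and the $\alpha_{ij}$ values are hypothetical, i.e., none of these numbers are taken from the analysis in this paper. It verifies that the decomposed expression for $P^{NP}_{\mu e}$ agrees with the direct evaluation $|S^{NP}_{e\mu}|^2$ obtained from \geqn{eq:SNP}.

\begin{verbatim}
import numpy as np

# Illustrative cross-check of the decomposed probabilities; all numbers are
# hypothetical and serve only to exercise the algebra.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S, _ = np.linalg.qr(A)                    # random unitary "amplitude matrix" S

a11, a22 = 0.999, 0.998                   # hypothetical diagonal alphas
a21 = 0.02 * np.exp(-1j * 0.7)            # hypothetical |alpha_21| e^{-i phi}
N = np.array([[a11, 0.0, 0.0],            # lower-triangular N^NP; the third row
              [a21, a22, 0.0],            # does not enter P_{mu e}, so it is
              [0.0, 0.0, 1.0]])           # set trivially here
S_NP = N @ S @ N.conj().T                 # effective amplitude matrix

# Direct evaluation, flavor order (e, mu, tau): P^NP_{mu e} = |S^NP_{e mu}|^2
P_mue_direct = abs(S_NP[0, 1])**2

# Decomposed form of P^NP_{mu e}
P_ee, P_mue = abs(S[0, 0])**2, abs(S[0, 1])**2
P_mue_decomp = a11**2 * (a22**2 * P_mue
                         + 2 * a22 * np.real(np.conj(a21) * S[0, 0] * np.conj(S[0, 1]))
                         + abs(a21)**2 * P_ee)
print(np.isclose(P_mue_direct, P_mue_decomp))   # True
\end{verbatim}

The same check can be repeated for the other channels, with or without matter effects, since only the amplitude matrix $S$ changes. 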
\begin{table}[t] \centering \begin{tabular}{c|cccc} & $P^{(k)}_{\rm ee}$ & $P^{(k)}_{\rm e\mu}$ & $P^{(k)}_{\mu \rm e}$ \\[1mm] \hline \hline (0) & $\alpha^4_{11} |S'_{11}|^2$ & $\alpha^2_{11} \left[ \frac {\alpha^2_{22}} 2 (1 - |S'_{11}|^2) + |\alpha_{21}|^2 |S'_{11}|^2 \right]$ & $\alpha^2_{11} \left[ \frac {\alpha^2_{22}} 2 (1 - |S'_{11}|^2) + |\alpha_{21}|^2 |S'_{11}|^2 \right]$ \\[1mm] (1) & 0 & $\frac {\alpha^2_{11} \alpha^2_{22}} 2 (|S'_{21}|^2 - |S'_{31}|^2)$ & $\frac {\alpha^2_{11} \alpha^2_{22}} 2 (|S'_{12}|^2 - |S'_{13}|^2)$ \\[1mm] (2) & 0 & $\alpha^2_{11} \alpha^2_{22}\mathbb R(S'_{21} S'^*_{31})$ & $\alpha^2_{11} \alpha^2_{22}\mathbb R(S'_{12} S'^*_{13})$ \\[1mm] (3) & 0 & $\alpha^2_{11} \alpha^2_{22}\mathbb I(S'_{21} S'^*_{31})$ & $-\alpha^2_{11} \alpha^2_{22}\mathbb I(S'_{12} S'^*_{13})$ \\[1mm] (4) & 0 & 0 & 0 \\[1mm] (5) & 0 & 0 & 0 \\[1mm] (6) & 0 & 0 & 0 \\ \hline (7) & 0 & $+ 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb R (S'_{11} S'^*_{21})$ & $+ 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb R (S'_{11} S'^*_{12})$ \\[1mm] (8) & 0 & $+ 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb I (S'_{11} S'^*_{21})$ & $- 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb I (S'_{11} S'^*_{12})$ \\[1mm] (9) & 0 & $+ 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb R (S'_{11} S'^*_{31})$ & $+ 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb R (S'_{11} S'^*_{13})$ \\[1mm] (10) & 0 & $+ 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb I (S'_{11} S'^*_{31})$ & $- 2 \alpha^2_{11} \alpha_{22} |\alpha_{21}| \mathbb I (S'_{11} S'^*_{13})$ \end{tabular} \caption{The decomposed coefficients $P^{(k)}_{ee}$, $P^{(k)}_{e \mu}$, and $P^{(k)}_{\mu e}$ as an extension to the results first derived in \cite{Ge:2013zua}. For symmetric matter potential profile, the amplitude matrix $S'$ is also symmetric.} \label{tab:Ps} \end{table} % For simplicity, we show just the three channels ($P^{NP}_{ee}$, $P^{NP}_{e \mu}$ and $P^{NP}_{\mu e}$) in \gtab{tab:Ps} to illustrate the idea. Ignoring matter effects (or if these can be approximated by a symmetric/constant potential), the amplitude matrix $S'$ is then symmetric, $S'_{ij}=S'_{ji}$. To obtain the anti-neutrino coefficients $\overline P^{NP}_{\alpha \beta}$, the CP phases ($\delta_{CP}$ and $\phi$) as well as the matter potential inside the $S'$ matrix elements should receive a minus sign. \section{Matter effect with non-unitary mixing} \label{sec:matter} The decomposition formalism presented in \gapp{sec:decomposition} is a powerful tool to obtain a complete formalism for neutrino oscillations. It factorizes the mixings efficiently in different bases and treats their effects independently. For example, the matter potential does not spoil the relations \geqn{eq:probtotal} that follow from the general parametrization \geqn{eq:N}. Although the previous results are obtained for vacuum oscillations, one can still use \geqn{eq:probtotal} for neutrino oscillation through matter, as long as $S_{ij}$ is replaced by the corresponding amplitude matrix in matter, $S^{matter}_{ij}$. In this appendix we will show how the presence of non-unitary neutrino mixing results in a rescaling of the standard matter potential. Our result applies generally for any number of heavy neutrinos \footnote{An expansion in the mass hierarchy parameter $\alpha \equiv \Delta m^2_s / \Delta m^2_a$ and the unitarity violation parameters up to first order can also be found in \cite{Li:2015oal}, where they are denoted as $s^2_{ij}$, for $i = 1,2,3$ and $j = 4,5,6$.}. 
In order to further develop the formalism established in \gapp{sec:decomposition} to introduce matter effects with non-unitary mixing, it is extremely useful to use the symmetrical parametrization method for unitary matrices. We start by recalling that its main ingredient consists in decomposing $U^{n \times n}$ in terms of products of effectively two--dimensional complex rotation matrices $\omega_{1j}$, in which each factor is characterized by both one rotation angle and one CP phase, see Eqs.(3.9)--(3.15) and (3.19)--(3.22) in~\cite{Schechter:1980gr}. The method is equivalent to the procedure of obtaining the current PDG form of the lepton mixing matrix and any generalization thereof. In the presence of $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$ singlet neutrinos, it can be used to describe the mixing matrix $U^{n \times n}$ as follows \begin{equation} U^{n \times n} = \left( \Pi^n_{i > j > 3} \omega_{ij} \right) \left( \Pi^n_{j = 4} \omega_{3j} \right) \left( \Pi^n_{j = 4} \omega_{2j} \right) \left( \Pi^n_{j = 4} \omega_{1j} \right) \left\lgroup \begin{matrix} \omega_{23} P_\delta \omega_{13} \omega_{12} & 0 \\ 0 & 1 \end{matrix} \right\rgroup \,, \end{equation} % in the same way as for its $3 \times 3$ counterpart $U$. With such parametrization for the extended mixing matrix, one can still resort to the ``propagation basis''. This can be achieved by dividing the full mixing matrix $U^{n \times n} \equiv \mathcal R' U'$, \begin{equation} \mathcal R' = U^{NP} \left\lgroup \begin{matrix} \omega_{23} P_\delta & 0 \\ 0 & 1 \end{matrix} \right\rgroup \,, \qquad U' = \left\lgroup \begin{matrix} \omega_{13} \omega_{12} & 0 \\ 0 & 1 \end{matrix} \right\rgroup \,. \label{eq:Tp} \end{equation} % The ``propagation basis'' is connected to the non-unitary flavor basis with the transformation matrix $\mathcal R'$ and the remaining mixing is $U'$. The original $n\times n$ Hamiltonian is given by \begin{equation} \mathcal H^{n \times n} \!\! = U^{n \times n} \!\!\! \left\lgroup \begin{matrix} \sqrt{E^2 - m^2_1} \\ & \ddots \\ & & \sqrt{E^2 - M^2_n} \end{matrix} \right\rgroup \!\!\! (U^{n \times n})^\dagger \! + \!\! \left\lgroup \begin{matrix} V_{cc} \\ & 0 \\ & & 0 \\ & & & \ddots \end{matrix} \right\rgroup \!\! + \! V_{nc} \!\! \left\lgroup \begin{matrix} 1 \\ & 1 \\ & & 1 \\ & & & \ddots \end{matrix} \right\rgroup \,, \label{eq:Sp2} \end{equation} % We denote the matter potential matrices as $\mathbb V \equiv \mathbb V_{cc} + \mathbb V_{nc}$ in latter discussions. For heavy mass eigenstates with $M_i > M_Z \gg E$, the oscillation will decay out very quickly since the oscillation phase $\sqrt{E^2 - M^2_n}$ is imaginary. For convenience, we separate the matrices into light and heavy blocks, \begin{equation} \mathcal H^{n \times n} = \mathcal R' \left[ U' \left\lgroup \begin{matrix} \sqrt{E^2 - \mathbb M^2_l} \\ & \sqrt{E^2 - \mathbb D^2_h} \end{matrix} \right\rgroup U'^\dagger + \mathcal R'^\dagger \left\lgroup \begin{matrix} \mathbb V \\ & 0 \end{matrix} \right\rgroup \mathcal R' \right] \mathcal R'^\dagger \label{eq:Sp3} \end{equation} % where $\sqrt{E^2 - \mathbb M^2_l}$ is the standard momentum matrix in the ``propagation basis'', with the solar and reactor angles $\theta_s$ and $\theta_r$ incorporated, while $\sqrt{E^2 - \mathbb D^2_h}$ is already diagonal. As long as $\mathbb V \ll \sqrt{E^2 - \mathbb D^2_h}$, the mixing between the light and heavy blocks inside the bracket is highly suppressed by a factor of $\mathbb V / \sqrt{E^2 - \mathbb D^2_h}$. 
For CP measurement experiments, where $\mathbb V \lesssim \Delta m^2_a/2 E \ll \sqrt{E^2 - \mathbb D^2_h}$, with $\Delta m^2_a \sim \mathcal O(0.01 \, \mbox{eV}^2)$, $10\,\mbox{MeV} \lesssim E \lesssim 1\,\mbox{GeV}$, and $\mathbb D^2_h > M_Z^2$, the induced mixing $\mathbb V/\sqrt{E^2 - \mathbb D^2_h} \lesssim 10^{-19}$ is negligibly small. In addition, the mixing term is further suppressed by the small non-unitary mixing contained in $\mathcal R'$. To a good approximation for low-energy neutrino oscillation experiments, the light and heavy blocks therefore decouple from each other. We have thus shown that the ``propagation basis'' \cite{Akhmedov:1998xq,Yokomakura:2002av} can still be established in the presence of non-unitary mixing. Note that $\mathcal R'$ is exactly the combination $N^{NP} U_{23}(\theta_a) P_\delta$ already used in \gapp{sec:decomposition} to relate the non-unitary flavor basis and the ``propagation basis'' through \geqn{eq:SNP} and \geqn{eq:HS}. In other words, as long as the masses of the heavy neutrinos are much larger than the oscillation energy and the matter potential, the same ``propagation basis'' can be generalized to non-unitary mixing. Since the light and heavy blocks effectively decouple from each other, the oscillation probability can be evaluated independently. For the light block, we can first evaluate the amplitude matrix $S' = e^{- i \mathcal H' t}$ in the ``propagation basis'' and transform back to the flavor basis with $\mathcal R'$ in the same way as in \geqn{eq:PNP-expanded}. The only change is a modified matter potential, \begin{equation} \widetilde{\mathbb M}^2_l = \mathbb M^2_l - 2 E \mathbb R'^\dagger \mathbb V \mathbb R' \,, \label{eq:M2l} \end{equation} % where $\mathbb R'$ is the light block of $\mathcal R'$. Here we have expanded the momentum of the light neutrinos in the relativistic limit. The potential matrix in the ``propagation basis'' is replaced by $\mathbb V \rightarrow \mathbb R'^\dagger \mathbb V \mathbb R'$. \end{appendix} \providecommand{\url}[1]{\texttt{#1}} \providecommand{\urlprefix}{URL } \providecommand{\eprint}[2][]{\url{#2}}
\section{Introduction} Exploration of the proper form and the physical meaning of momentum in curvilinear coordinates has attracted constant attention since the birth of quantum mechanics~\cite{bp,RH,1968,LNF,DC,1989,2002,dong,GK,homma,ikegami,OK,liu2015}. In this paper, we show that the canonical momenta $P_{\xi}$ associated with their conjugate canonical positions, or coordinates, $\xi$, are closely related to the mean curvatures of the surfaces $\xi=const$. Thus, the geometric momenta \cite{homma,ikegami,OK,liu2015,07,liu11,liu13-2,133,liu13-1,135,134,136,137,gem,WZ,iran,eprint} that are under extensive study and application are closely related to natural decompositions of the momentum operator in gaussian normal coordinates and, more generally, in curvilinear coordinates. In the next section II, we study a simple but illuminating example: how the momentum operators in $3D$ spherical polar coordinates ($r,\theta,\varphi$) are related to three mean curvature vectors. In section III, we present a theorem for the general case. In the final section IV, a brief conclusion is given. \section{An example: mean curvatures and spherical polar coordinates} The gradient operator in the $3D$ cartesian coordinate system, $\nabla_{cart}\equiv\mathbf{e}_{x}\partial_{x}+\mathbf{e}_{y}\partial_{y}+\mathbf{e}_{z}\partial_{z}$, can be expressed in the $3D$ spherical polar coordinates ($r,\theta,\varphi$) as, \begin{equation} \nabla_{sp}=\mathbf{e}_{r}\frac{\partial}{\partial r}+\mathbf{e}_{\theta}\frac{1}{r}\frac{\partial}{\partial\theta}+\mathbf{e}_{\varphi}\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}. \label{grad} \end{equation} The momentum operator can thus be written in the following way, \begin{equation} \mathbf{P}\equiv-i\hbar\nabla_{cart}=-i\hbar\nabla_{sp}=\left\{ \mathbf{e}_{r},P_{r}\right\} +\frac{1}{r}\left\{ \mathbf{e}_{\theta},P_{\theta}\right\} +\frac{1}{r\sin\theta}\left\{ \mathbf{e}_{\varphi},P_{\varphi}\right\} , \label{2} \end{equation} where $\{A,B\}\equiv(AB+BA)/2$ and, \begin{subequations} \begin{align} \left\{ \mathbf{e}_{r},P_{r}\right\} & =\mathbf{e}_{r}P_{r},\quad P_{r}=-i\hbar\left(\frac{\partial}{\partial r}+\frac{1}{r}\right),\\ \{\mathbf{e}_{\theta},P_{\theta}\} & =\mathbf{e}_{\theta}P_{\theta}+i\hbar\frac{\mathbf{e}_{r}}{2},\quad P_{\theta}=-i\hbar\left(\frac{\partial}{\partial\theta}+\frac{1}{2}\cot\theta\right),\\ \{\mathbf{e}_{\varphi},P_{\varphi}\} & =\mathbf{e}_{\varphi}P_{\varphi}+i\hbar\frac{1}{2}\left( \mathbf{e}_{r}\sin\theta+\mathbf{e}_{\theta}\cos\theta\right) ,\quad P_{\varphi}=-i\hbar\frac{\partial}{\partial\varphi}. \end{align} On the one hand, these spherical polar coordinates have three mutually orthogonal families of surfaces given by $r=const.$, $\theta=const.$ and $\varphi=const.$, respectively. They are, respectively, a spherical surface of radius $r$, a cone of polar angle $\theta$, and a flat plane along the azimuthal angle $\varphi$. These (curved) surfaces have three mean curvature vectors, respectively, \end{subequations} \begin{equation} \mathbf{M}_{r}=-\frac{\mathbf{e}_{r}}{r},\text{ }\mathbf{M}_{\theta}=-\frac{\mathbf{e}_{\theta}}{2r\tan\theta},\text{ }\mathbf{M}_{\varphi}=0,\text{ }(r\neq0). 
\end{equation} On the other hand, if we look closely into the canonical momenta multiplied by their vector coefficients, $\mathbf{e}_{r}$, $\mathbf{e}_{\theta}/r$ and $\mathbf{e}_{\varphi}/\left( r\sin\theta\right)$, respectively, we find, \begin{subequations} \begin{align} \mathbf{e}_{r}P_{r} & =-i\hbar\mathbf{e}_{r}\left(\frac{\partial}{\partial r}+\frac{1}{r}\right)=-i\hbar\left(\mathbf{e}_{r}\frac{\partial}{\partial r}-\mathbf{M}_{r}\right),\\ \frac{\mathbf{e}_{\theta}}{r}P_{\theta} & =-i\hbar\frac{\mathbf{e}_{\theta}}{r}\left(\frac{\partial}{\partial\theta}+\frac{1}{2}\cot\theta\right)=-i\hbar\left(\frac{\mathbf{e}_{\theta}}{r}\frac{\partial}{\partial\theta}-\mathbf{M}_{\theta}\right),\\ \frac{\mathbf{e}_{\varphi}}{r\sin\theta}P_{\varphi} & =-i\hbar\frac{\mathbf{e}_{\varphi}}{r\sin\theta}\frac{\partial}{\partial\varphi}=-i\hbar\left( \frac{\mathbf{e}_{\varphi}}{r\sin\theta}\frac{\partial}{\partial\varphi}-\mathbf{M}_{\varphi}\right) . \end{align} Now the mean curvature vectors exhibit themselves in the brackets, which result from making the derivatives $-i\hbar\partial_{\xi}$ ($\xi=r,\theta,\varphi$) Hermitian operators and multiplying them, respectively, by the vector coefficients $\mathbf{e}_{\xi}/H_{\xi}$, with $H_{\xi}$ denoting the Lam\'{e} coefficients of the orthogonal curvilinear coordinates, defined by $d\mathbf{x}\equiv dx\,\mathbf{e}_{x}+dy\,\mathbf{e}_{y}+dz\,\mathbf{e}_{z}=\sum H_{\xi}d\xi\,\mathbf{e}_{\xi}$. For the spherical polar coordinates ($r,\theta,\varphi$), the three Lam\'{e} coefficients are $H_{r}=1,$ $H_{\theta}=r,$ and $H_{\varphi}=r\sin\theta$. So, Eq. (\ref{2}) can be decomposed in the following three ways, \end{subequations} \begin{equation} \mathbf{P}=-i\hbar\nabla_{sp}=\mathbf{e}_{r}P_{r}+\mathbf{\Pi}_{r}=\frac{\mathbf{e}_{\theta}}{r}P_{\theta}+\mathbf{\Pi}_{\theta}=\frac{\mathbf{e}_{\varphi}}{r\sin\theta}P_{\varphi}+\mathbf{\Pi}_{\varphi}, \end{equation} where $\mathbf{\Pi}_{r}$, $\mathbf{\Pi}_{\theta}$ and $\mathbf{\Pi}_{\varphi}$ are the so-called geometric momenta \cite{GK,homma,ikegami,OK,liu2015,07,liu11,liu13-2,133,liu13-1,135,134} for the corresponding surfaces, though the last one is trivial since $\mathbf{M}_{\varphi}=0$, \begin{subequations} \begin{align} \mathbf{\Pi}_{r} & \equiv-i\hbar\left( \mathbf{e}_{\theta}\frac{1}{r}\frac{\partial}{\partial\theta}+\mathbf{e}_{\varphi}\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}+\mathbf{M}_{r}\right) ,\\ \mathbf{\Pi}_{\theta} & \equiv-i\hbar\left( \mathbf{e}_{r}\frac{\partial}{\partial r}+\mathbf{e}_{\varphi}\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}+\mathbf{M}_{\theta}\right) ,\\ \mathbf{\Pi}_{\varphi} & \equiv-i\hbar\left( \mathbf{e}_{r}\frac{\partial}{\partial r}+\mathbf{e}_{\theta}\frac{1}{r}\frac{\partial}{\partial\theta}+\mathbf{M}_{\varphi}\right) . \end{align} They are special cases of the general form of the geometric momentum \cite{134}, \end{subequations} \begin{equation} \mathbf{\Pi}\equiv-i\hbar\left( \mathbf{r}^{\xi}\partial_{\xi}+\mathbf{M}\right) ,\quad\text{or}\quad \mathbf{\Pi}\equiv-i\hbar\left( \mathbf{r}^{\xi}\partial_{\xi}+\frac{\mathbf{M}}{2}\right), \label{GM} \end{equation} where $\mathbf{M}=M\mathbf{n}$, with $\mathbf{n}$ denoting the unit normal vector of a surface and $M$ standing for the mean curvature. 
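The curvature pieces in these decompositions are precisely what render the canonical momenta Hermitian with respect to the curved-coordinate volume element. As a quick symbolic check (a sketch only: the test functions are arbitrary choices made so that the boundary terms vanish, and the use of SymPy is ours rather than part of the original derivation), one can verify that $P_{\theta}=-i\hbar(\partial_{\theta}+\frac{1}{2}\cot\theta)$ quoted above is symmetric under the $\sin\theta$ measure on $(0,\pi)$.

\begin{verbatim}
import sympy as sp

theta = sp.symbols('theta', real=True)
hbar = sp.symbols('hbar', positive=True)

# Hypothetical smooth test functions, chosen only so that boundary terms vanish.
psi = sp.sin(theta)**2
phi = sp.sin(theta)*sp.cos(theta)

# P_theta = -i*hbar*(d/dtheta + (1/2) cot(theta)), as in Section II.
P = lambda f: -sp.I*hbar*(sp.diff(f, theta) + sp.Rational(1, 2)*sp.cot(theta)*f)

lhs = sp.integrate(sp.conjugate(psi)*P(phi)*sp.sin(theta), (theta, 0, sp.pi))
rhs = sp.integrate(sp.conjugate(P(psi))*phi*sp.sin(theta), (theta, 0, sp.pi))
print(sp.simplify(lhs - rhs))   # 0: <psi|P_theta phi> = <P_theta psi|phi>
\end{verbatim}

The $\frac{1}{2}\cot\theta$ term cancels exactly the contribution of the $\sin\theta$ weight under integration by parts, which is the same role played by the mean-curvature term $\mathbf{M}_{\theta}$ in the decomposition above. 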
In the first equation of (\ref{GM}), the mean curvature $M$ is usually defined as the true average of the two principal curvatures and usually applies to 2D surfaces, whereas the second one uses another convention in which $M$ is defined as the sum of all principal curvatures. In the rest of this paper, we will use the latter convention. \section{A theorem for general cases} \emph{Theorem:}\ In the $\left( N+1\right) D$ Euclidean space $R^{N+1}$, we can define the usual Cartesian coordinates, whose corresponding momentum is $\mathbf{P}=-i\hbar\nabla_{cart}$ as usual; it can also be expressed in terms of the curvilinear coordinates ($\xi^{0},\xi^{\mu}$), ($\mu=1,2,\dots,N$). Assume that the curvilinear coordinates take the form of gaussian normal coordinates, i.e., that the metric satisfies $g_{00}>0$, with $g_{00}$ independent of $\xi^{0}$, and $g_{0\mu}=0$. Then there is a mean-curvature dependent decomposition of the momentum, \begin{equation} \mathbf{P}\equiv-i\hbar\nabla_{cart}=-i\hbar\left( \frac{\mathbf{n}}{\sqrt{g^{00}}}\frac{\partial}{\partial\xi^{0}}-\frac{\mathbf{M}_{0}}{2}\right) +\mathbf{\Pi}_{0}, \end{equation} where $-\mathbf{M}_{0}$ is the mean curvature vector, and $\mathbf{\Pi}_{0}$ defines the geometric momentum of the surface $\xi^{0}=const.$, whose unit normal vector is denoted by $\mathbf{n}$: $\mathbf{\Pi}_{0}=-i\hbar\left( \mathbf{r}^{\mu}\partial_{\mu}+\frac{\mathbf{M}_{0}}{2}\right).$ The proof is straightforward. The coordinate transformation from the cartesian coordinates $\mathbf{x}\equiv\left( x_{1},x_{2},x_{3},\dots,x_{N+1}\right)$ to the gaussian normal ones $(\xi^{0},\xi^{\mu})$, ($\mu=1,2,3,\dots,N$), is, \begin{equation} x_{i}=x_{i}(\xi^{0},\xi^{\mu})\text{, and }\xi^{0}=\xi^{0}(\mathbf{x}),\;\xi^{\mu}=\xi^{\mu}(\mathbf{x}). \end{equation} The line element $d\mathbf{x}\cdot d\mathbf{x}$ is $d\mathbf{x}\cdot d\mathbf{x}\equiv dx_{i}dx_{i}=g_{00}\left(d\xi^{0}\right)^{2}+g_{\mu\nu}d\xi^{\mu}d\xi^{\nu}$, and the determinant of the metric matrix $g_{\mu\nu}$ is then $g=\left\vert g_{\mu\nu}\right\vert$. The gradient operator in the gaussian normal coordinates is, \begin{equation} \nabla_{gn}=\frac{\mathbf{n}}{\sqrt{g^{00}}}\partial_{0}+\mathbf{r}^{\mu}\partial_{\mu}. \end{equation} This gradient operator contains no mean curvature. The mean-curvature dependence becomes evident in the quantum momentum, as follows. First, we assume $g^{00}=1$. The canonical momentum operators ($P_{0},P_{\mu}$) associated with the canonical positions $(\xi^{0},\xi^{\mu})$ are, respectively, given by, \begin{equation} -i\hbar\partial_{0}\rightarrow P_{0}=-i\hbar\frac{1}{g^{1/2}}\partial_{0}g^{1/2}\text{, }-i\hbar\partial_{\mu}\rightarrow P_{\mu}=-i\hbar\frac{1}{g^{1/2}}\partial_{\mu}g^{1/2}. \end{equation} Note that $\xi^{0}(\mathbf{x})=const.$ forms a surface whose mean curvature $M_{0}$ is simply \cite{ikegami} \begin{equation} \frac{1}{g^{1/2}}\partial_{0}g^{1/2}=-M_{0}\text{, and }\mathbf{M}_{0}=M_{0}\mathbf{n}. \end{equation} We have then, \begin{equation} \mathbf{P}=-i\hbar\left( \mathbf{n}\partial_{0}+\mathbf{r}^{\mu}\partial_{\mu}\right) =-i\hbar\left( \mathbf{n}\partial_{0}-\frac{\mathbf{M}_{0}}{2}+\mathbf{r}^{\mu}\partial_{\mu}+\frac{\mathbf{M}_{0}}{2}\right) =\mathbf{n}P_{0}+\mathbf{\Pi}_{0}. 
\label{fgm} \end{equation} Secondly, when $g^{00}$ takes any positive value, we can easily prove \begin{equation} -i\hbar\frac{\partial_{0}}{\sqrt{g^{00}}}\rightarrow P_{0}=-i\hbar\frac{1}{g^{1/2}}\frac{\partial_{0}}{\sqrt{g^{00}}}g^{1/2}=-i\hbar\left( \frac{\partial_{0}}{\sqrt{g^{00}}}-\frac{M_{0}}{2}\right) . \end{equation} The decomposition (\ref{fgm}) remains the same. To see clearly that $\mathbf{\Pi}_{0}$ really lies on the surface $\xi^{0}=const.$, we can verify the orthogonality relation $\mathbf{n}\cdot\mathbf{\Pi}_{0}+\mathbf{\Pi}_{0}\cdot\mathbf{n}=0$ \cite{134}. \textit{Q.E.D.} Thus, we understand why there are three mean curvature vectors associated with the $3D$ spherical polar coordinates. This is because any one coordinate is normal to the other two. Moreover, for an $ND$ surface in $R^{N+1}$, any point in the neighborhood of the surface can be specified by the gaussian normal coordinates $\mathbf{r}(\xi^{\mu})+\xi^{0}\mathbf{n}(\xi^{\mu})$, where $\mathbf{n}(\xi^{\mu})$ is the unit normal vector at the point $\xi^{\mu}$ of the surface, and we can define the geometric momentum on the surface \cite{liu13-2,134}. \section{Conclusions} Many have come across the fact that the canonical momenta in orthogonal curvilinear coordinates are closely related to the mean curvature vectors of properly defined curved surfaces, and that the mean curvature vectors are geometric invariants, unlike the Christoffel symbols, which take different values from one set of coordinates to another. We demonstrate that a decomposition of the momentum operator in Gaussian normal coordinates straightforwardly leads to a natural appearance of the mean curvature vectors. Once the canonical momentum along the normal is made Hermitian, the remaining part of the momentum lies on the surface; this is the geometric momentum, which has recently attracted much attention. \begin{acknowledgments} This work is financially supported by National Natural Science Foundation of China under Grant No. 11175063. \end{acknowledgments}
\section{Introduction} \noindent The 2016 Audio-Visual Emotion Challenge and Workshop (AVEC 2016) will be the sixth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, video, and physiological analysis of emotion and depression, with all participants competing under strictly the same conditions. The goal of the Challenge is to compare the relative merits of the approaches (audio, video, and/or physiological) to emotion recognition and depression severity estimation under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition for multimedia retrieval to a level where behaviomedical systems \cite{Valstar2014_ABU} are able to deal with large volumes of non-prototypical naturalistic behaviour in reaction to known stimuli, as this is exactly the type of data that diagnostic and in particular monitoring tools, as well as other applications, would have to face in the real world. AVEC 2016 will address emotion and depression recognition. The emotion recognition sub-challenge is a refined re-run of the AVEC 2015 challenge \cite{RingevalEtAl2015_FAR}, largely based on the same dataset. The depression severity estimation sub-challenge is based on a novel dataset of human-agent interactions, and sees the return of depression analysis, which was a huge success in the AVEC 2013 \cite{ValstarEtAl13_ACA} and 2014 \cite{Valstar14-A2T} challenges. \begin{itemize} \item \textbf{Depression Classification Sub-Challenge} (DCC): participants are required to classify whether a person is depressed or not, where the binary ground-truth is based on the severity of self-reported depression as indicated by the PHQ-8 score for every human-agent interaction. For the DCC, performance in the competition will be measured using the average \textbf{F1 score} over the two classes \textit{depressed} and \textit{not\_depressed}. Participants are encouraged to provide an estimate of the severity of depression, by calculating the root mean square error over all HCI experiment sessions between the predicted and ground-truth PHQ-8 scores. In addition, participants are also encouraged to report on overall accuracy, average precision, and average recall to further analyse their results in the paper accompanying their submission. \item \textbf{Multimodal Affect Recognition Sub-Challenge} (MASC): participants are required to perform fully continuous affect recognition of two affective dimensions: Arousal and Valence, where the level of affect has to be predicted for every moment of the recording. For the MASC, two regression problems need to be solved: prediction of the continuous dimensions \textsc{Valence} and \textsc{Arousal}. The MASC competition measure is the \textbf{Concordance Correlation Coefficient (CCC)}, which combines Pearson's correlation coefficient (CC) with the squared difference between the means of the two compared time series, as shown in Eq.~(\ref{eq:ccc}). \begin{equation}\label{eq:ccc} \rho_c=\frac{2\rho\sigma_x\sigma_y}{\sigma_x^2+\sigma_y^2+(\mu_x-\mu_y)^2} \end{equation} where $\rho$ is the Pearson correlation coefficient between two time series (e.\,g., prediction and gold-standard), $\sigma_{x}^2$ and $\sigma_{y}^2$ are the variances of the two time series, and $\mu_x$ and $\mu_y$ their mean values. 
Therefore, predictions that are well correlated with the gold standard but shifted in value are penalised in proportion to the deviation. \end{itemize} To be eligible to participate in the challenge, every entry has to be accompanied by a paper presenting the results and the methods that created them, which will undergo peer-review. Only contributions with a relevant accepted paper will be eligible for challenge participation. The organisers reserve the right to re-evaluate the findings, but will not participate in the Challenge themselves. \section{Depression Analysis Corpus}\label{s:depressionDatabase} \noindent The Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ) database is part of a larger corpus, the Distress Analysis Interview Corpus (DAIC) \cite{GratchEtAl2014_DAI}, that contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post-traumatic stress disorder. These interviews were collected as part of a larger effort to create a computer agent that interviews people and identifies verbal and nonverbal indicators of mental illness \cite{DeVaultEtAl2014_SVH}. Data collected include audio and video recordings and extensive questionnaire responses; this part of the corpus includes the Wizard-of-Oz interviews, conducted by an animated virtual interviewer called Ellie, controlled by a human interviewer in another room. Data has been transcribed and annotated for a variety of verbal and non-verbal features. Information on how to obtain shared data can be found in this location: http://dcapswoz.ict.usc.edu. Data is freely available for research purposes. \subsection{Depression Analysis Labels}\label{s:depressionLabels} \noindent The level of depression is labelled with a single value per recording using a standardised self-assessed subjective depression questionnaire, the PHQ-8 \cite{KroenkeEtAl2009_PMC}. This is similar to the PHQ-9 questionnaire, but with the suicidal ideation question removed for ethical reasons. The average depression severity on the training and development set of the challenge is $M = 6.67$ ($SD = 5.75$). The distribution of the depression severity scores based on the challenge training and development set is provided in Figure \ref{fig:DESC_hist}. A baseline classifier that constantly predicts the mean score of depression provides an $RMSE = 5.73$ and an $MAE = 4.74$. \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\columnwidth]{DESC_hist.pdf} \end{center} \caption{{\bf Histogram of depression severity scores for DESC challenge. Data of training and development set are provided here.}} \label{fig:DESC_hist} \end{figure} \subsection{Depression Analysis Baseline Features}\label{s:features} \noindent In the following sections we describe how the publicly available baseline feature sets are computed for either the audio or the video data. Participants can use these feature sets exclusively or in addition to their own features. For ethical reasons, no raw video is made available. 
\subsubsection{Video Features} Based on the \textit{OpenFace}~\cite{baltruvsaitisopenface} framework\footnote{\url{https://github.com/TadasBaltrusaitis/CLM-framework}}, we provide different types of video features: \begin{itemize} \item facial landmarks: 2D and 3D coordinates of 68 points on the face, estimated from video \item HOG (histogram of oriented gradients) features on the aligned 112x112 area of the face \item gaze direction estimate for both eyes \item head pose: 3D position and orientation of the head \end{itemize} In addition to that, we provide emotion and facial action unit continuous measures based on \textit{FACET} software\cite{littlewort2011computer}. Specifically, we provide the following measures: \begin{itemize} \item emotion: \{Anger, Contempt, Disgust, Joy, Fear, Neutral, Sadness, Surprise, Confusion, Frustration\} \item AUs: \{AU1, AU2, AU4, AU5, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU17, AU18, AU20, AU23, AU24, AU25, AU26, AU28, AU43\} \end{itemize} \subsubsection{Audio Features} For the audio features we utilized COVAREP(v1.3.2), a freely available open source Matlab and Octave toolbox for speech analyses \cite{degottex2014covarep}\footnote{\url{http://covarep.github.io/covarep/}}. The toolbox comprises well validated and tested feature extraction methods that aim to capture both voice quality as well as prosodic characteristics of the speaker. These methods have been successfully shown to be correlated with psychological distress and depression in particular \cite{scherer2014automatic,Scherer_etAl2015}. In particular, we extracted the following features: \begin{itemize} \item \textbf{Prosodic: }Fundamental frequency (F0) and voicing (VUV) \item \textbf{Voice Quality: }Normalized amplitude quotient (NAQ), Quasi open quotient (QOQ), the difference in amplitude of the first two harmonics of the differentiated glottal source spectrum (H1H2), parabolic spectral parameter (PSP), maxima dispersion quotient (MDQ), spectral tilt/slope of wavelet responses (peakSlope), and shape parameter of the Liljencrants-Fant model of the glottal pulse dynamics (Rd) \item \textbf{Spectral:} Mel cepstral coefficients (MCEP0-24), Harmonic Model and Phase Distortion mean (HMPDM0-24) and deviations (HMPDD0-12). \end{itemize} \noindent In addition to the feature set above, raw audio and transcripts of the interview are being provided, allowing the participants to compute additional features on their own. For more details on the shared features and the format of the files participants should also review the DAIC-WOZ documentation \footnote{\url{http://dcapswoz.ict.usc.edu/wwwutil_files/DAICWOZDepression_Documentation.pdf}}. \section{Emotion Analysis Corpus} The Remote Collaborative and Affective Interactions (RECOLA) database \cite{Ringeval13-ITR} was recorded to study socio-affective behaviours from multimodal data in the context of computer supported collaborative work \cite{Ringeval13-OTI}. Spontaneous and naturalistic interactions were collected during the resolution of a collaborative task that was performed in dyads and remotely through video conference. Multimodal signals, i.\,e., audio, video, electro-cardiogram (ECG) and electro-dermal activity (EDA), were synchronously recorded from 27 French-speaking subjects. Even though all subjects speak French fluently, they have different nationalities (i. e., French, Italian or German), which thus provide some diversity in the expression of emotion. 
Data is freely available for research purposes, information on how to obtain the RECOLA database can be found on this location: \url{http://diuf.unifr.ch/diva/recola}. \subsection{Emotion Analysis Labels} Regarding the annotation of the dataset, time-continuous ratings (40 ms binned frames) of emotional arousal and valence were created by six gender balanced French-speaking assistants for the first five minutes of all recordings, because participants discussed more about their strategy -- hence showing emotions -- at the beginning of their interaction. To assess inter-rater reliability, we computed the intra-class correlation coefficient (ICC(3,1)) \cite{Shrout79-ICU}, and Cronbach's $\alpha$~\cite{Cronbach51-CAA}; ratings are concatenated over all subjects. Additionally, we computed the root-mean-square error (RMSE), Pearson's CC and the CCC \cite{Li89-ACC}; values are averaged over the $C_2^6$ pairs of raters. Results indicate a very strong inter-rater reliability for both arousal and valence, cf. Table \ref{tab:ira}. A normalisation technique based on the Evaluator Weighted Estimator~\cite{Grimm05-EON}, is used prior to the computation of the gold-standard, i.\,e., the average of all ratings for each subject~\cite{Ringeval15-POA}. This technique has significantly ($p<0.001$ for CC) improved the inter-rater reliability for both arousal and valence; the Fisher Z-transform is used to perform statistical comparisons between CC in this study. The dataset was divided into speaker disjoint subsets for training, development (validation) and testing, by stratifying (balancing) on gender and mother tongue, cf.\ Table \ref{tab:recola_strat}. \begin{table}[t] \centering \caption{Inter-rater reliability on arousal and valence for the 6 raters and the 27 subjects of the RECOLA database; raw or normalised ratings \cite{Ringeval15-POA}.} \label{tab:ira} \begin{tabular}{c|c|c|c|c|c} \midrule &RMSE &CC & CCC &ICC &$\alpha$\\\midrule \multicolumn{6}{c}{\textit{Raw}}\\\midrule Arousal &.344 &.400 &.277 &.775 &.800\\ Valence &.218 &.446 &.370 &.811 &.802\\\midrule \multicolumn{6}{c}{\textit{Normalised}}\\\midrule Arousal &.263 &.496 &.431 &.827 &.856\\ Valence &.174 &.492 &.478 &.844 &.829\\ \end{tabular} \end{table} \begin{table}[!t] \caption{Partitioning of the \textsc{RECOLA} database into train, development, and test sets.} \label{tab:recola_strat} \begin{center} \begin{tabular}{l|r|r|r}\midrule $\#$ & train & dev & test\\\midrule female & 6 & 5 & 5\\ male & 3 & 4 & 4\\\midrule French & 6 & 7 & 7\\ Italian & 2 & 1 & 2\\ German & 1 & 1 & 0\\\midrule age $\mu$ ($\sigma$) & 21.2 (1.9) & 21.8 (2.5) & 21.2 (1.9)\\ \end{tabular} \end{center} \end{table} \subsection{Emotion Analysis Baseline Features} In the followings we describe how the baseline feature sets are computed for video, audio, and physiological data. \subsubsection{Video Features} Facial expressions play an important role in the communication of emotion~\cite{Ekman02-FAC}. Features are usually grouped in two types of facial descriptors: appearance and geometric based \cite{ValstarEtAl2015_FER}. For the video baseline features set, we computed both, using Local Gabor Binary Patterns from Three Orthogonal Planes (LGBP-TOP)~\cite{Almaev13-LGB} for appearance and facial landmarks~\cite{Xiong13-SDM} for geometric. The LGBP-TOP are computed by splitting the video into spatio-temporal video volumes. Each slice of the video volume extracted along 3 orthogonal planes ($x$-$y$, $x$-$t$ and $y$-$t$) is first convolved with a bank of 2D Gabor filters. 
The resulting Gabor pictures in the direction of $x$-$y$ plane are divided into 4x4 blocks. In the $x$-$t$ and $y$-$t$ directions they are divided into 4x1 blocks. The LBP operator is then applied to each of these resulting blocks followed by the concatenation of the resulting LBP histograms from all the blocks. A feature reduction is then performed by applying a Principal Component Analysis (PCA) from a low-rank (up to rank 500) approximation~\cite{Halko11-AAF}. We obtained 84 features representing 98\,\% of the variance. In order to extract geometric features, we tracked 49 facial landmarks with the Supervised Descent Method (SDM)~\cite{Xiong13-SDM} and aligned them with a mean shape from stable points (located on the eye corners and on the nose region). As features, we computed the difference between the coordinates of the aligned landmarks and those from the mean shape, and also between the aligned landmark locations in the previous and the current frame; this procedure provided 196 features in total. We then split the facial landmarks into groups according to three different regions: i) the left eye and left eyebrow, ii) the right eye and right eyebrow and iii) the mouth. For each of these groups, the Euclidean distances (L2-norm) and the angles (in radians) between the points are computed, providing 71 features. We also computed the Euclidean distance between the median of the stable landmarks and each aligned landmark in a video frame. In total the geometric set includes 316 features. Both appearance and geometric feature sets are interpolated by a piecewise cubic Hermite polynomial to cope with dropped frames. Finally, the arithmetic mean and the standard-deviation are computed on all features using a sliding window, which is shifted forward at a rate of 40\,ms. \subsubsection{Audio Features} In contrast to large scale feature sets, which have been successfully applied to many speech classification tasks~\cite{Schuller13-TI2, Schuller14-TI2}, smaller, expert-knowledge based feature sets have also shown high robustness for the modelling of emotion from speech~\cite{Ringeval14-ERI,Bone14-RUA}. Some recommendations for the definition of a minimalistic acoustic standard parameter set have been recently investigated, and have led to the Geneva Minimalistic Acoustic Parameter Set (\textsc{GeMAPS}), and to an extended version (\textsc{eGeMAPS})~\cite{Eyben15-TGM}, which is used here as baseline. The acoustic low-level descriptors (LLD) cover spectral, cepstral, prosodic and voice quality information and are extracted with the \textsc{openSMILE} toolkit~\cite{Eyben13-RDI} As the data in the RECOLA database contains long continuous recordings, we used overlapping fixed length segments, which are shifted forward at a rate of 40\,ms, to extract functionals; the arithmetic mean and the coefficient of variation are computed on all 42 LLD. To pitch and loudness the following functionals are additionally applied: percentiles 20, 50 and 80, the range of percentiles 20 -- 80 and the mean and standard deviation of the slope of rising/falling signal parts. Functionals applied to the pitch, jitter, shimmer, and all formant related LLDs, are applied to voiced regions only. Additionally, the average RMS energy is computed and 6 temporal features are included: the rate of loudness peaks per second, mean length and standard deviation of continuous voiced and unvoiced segments and the rate of voiced segments per second, approximating the pseudo syllable rate. 
Overall, the acoustic baseline features set contains 88 features. \subsubsection{Physiological Features} Physiological signals are known to be well correlated with emotion~\cite{Koelstra12-DAD,Knapp11-PSA}, despite not being directly perceptible the way audio-visual are. Although there are some controversies about peripheral physiology and emotion~\cite{Schachter12-CAP,Keltner10-E}, we believe that autonomic measures should be considered along with audio-visual data in the realm of affective computing, as they do not only provide complementary descriptions of affect, but can also be easily and continuously monitored with wearable sensors~\cite{Sano14-QAO,Picard14-AMA,Chen15-AAI}. \begin{table*} \begin{center} \caption{Baseline results for depression classification. Performance is measured in F1 score for \emph{depressed} and \emph{not depressed} classes as reported through the PHQ-8. In addition, precision and recall are provided. Values for class \emph{not depressed} are reported in brackets.} \vspace{2mm} \label{t:baseline_DCC} \begin{tabular}{l | l | c|c|c} \midrule Partition & Modality & F1 score & Precision & Recall \\\midrule Development & Audio& .462 (.682) & .316 (.938) & .857 (0.54) \\ Development & Video & .500 (.896) & .600 (.867) & .428 (.928) \\ Development & Ensemble & .500 (.896) & .600 (.867) & .428 (.928) \\ \midrule Test & Audio & .410 (.582) & .267 (.941) & .889 (.421) \\ Test & Video & .583 (.851) & .467 (.938) & .778 (.790) \\ Test & Ensemble & .583 (.857) & .467 (.938) & .778 (.790) \\ \end{tabular} \end{center} \end{table*} As baseline features, we extracted features from both ECG and EDA signals with overlapping (step of 40\,ms) windows. The ECG signal was firstly band-pass filtered ($[3-27]$ Hz) with a zero-delay 6th order Butterworth filter \cite{Ringeval15-POA}, and 19 features were then computed: the zero-crossing rate, the four first statistical moments, the normalised length density, the non-stationary index, the spectral entropy, slope, mean frequency plus 6 spectral coefficients, the power in low frequency (LF, 0.04-0.15\,Hz), high frequency (HF, 0.15-0.4\,Hz) and the LF/HF power ratio. Additionally, we extracted the heart rate (HR) and its measure of variability (HRV) from the filtered ECG signal \cite{Ringeval15-POA}. For each of those two descriptors, we computed the two first statistical moments, the arithmetic mean of rising and falling slope, and the percentage of rising values, which provided 10 features in total. EDA reflects a rapid, transient response called skin conductance response (SCR), as well as a slower, basal drift called skin conductance level (SCL) \cite{Dawson00-TES}. Both, SCL (0--0.5\,Hz) and SCR (0.5--1\,Hz) are estimated using a 3rd order Butterworth filter, 8 features are then computed for each of those three low-level descriptors: the four first statistical moments from the original time-series and its first order derivate w.r.t. time. \section{Challenge Baselines}\label{s:baseline} \noindent For transparency and reproducibility, we use standard and open-source algorithms for both sub-challenges. We describe below how the baseline system was defined and the results we obtained for each modality separately, as well as on the fusion of all modalities. \subsection{Depression} The challenge baseline for the depression classification sub-challenge is computed using the scikit-learn toolbox\footnote{\url{http://scikit-learn.org/}}. In particular, we fit a linear support vector machine with stochastic gradient descent, i.\,e. 
the loss is computed one sample at a time and the model is sequentially updated. We validated the model on the development set and conducted a grid search for optimal hyper-parameters on this set, for the audio data and the video data separately. Features of both modalities are taken from the provided challenge baseline features. Training and classification were performed on a frame-wise basis (i.\,e., at 100\,Hz for audio and 30\,Hz for video); temporal fusion was conducted through simple majority voting over all the frames within an entire screening interview. For both modalities we conducted a grid search for the following parameters: loss function $\in \{\mbox{logarithmic}, \mbox{hinge loss}\}$, regularization $\in \{\mbox{L1}, \mbox{L2}\}$, and $\alpha \in \{1e1, 1e0, \dots, 1e-5\}$. For the audio data the optimal identified hyper-parameters are loss function $= {\mbox{hinge loss}}$, regularization $= {\mbox{L1}}$, and $\alpha = {1e-3}$. For the video data the optimal identified hyper-parameters are loss function $= {\mbox{logarithmic}}$, regularization $= {\mbox{L1}}$, and $\alpha = {1e0}$. The ensemble of audio and video was computed through a simple binary fusion using a logical AND. The test performance was computed with a classifier trained using the optimal parameters found in the grid search. Since the positive outputs of the video modality are a subset of those of the audio modality, the ensemble classifier's performance is exactly the same as that of the video modality for both the development and test sets. Results are summarized in Table \ref{t:baseline_DCC}. \begin{table} \begin{center} \caption{Baseline results for depression severity estimation. Performance is measured in mean absolute error (MAE) and root mean square error (RMSE) between the predicted and reported PHQ-8 scores, averaged over all sequences.} \vspace{2mm} \label{t:baseline_DSC} \begin{tabular}{l | l | c|c } \midrule Partition & Modality & RMSE & MAE \\\midrule Development & Audio & 6.74 & 5.36 \\ Development & Video & 7.13 & 5.88 \\ Development & Audio-Video & 6.62 & 5.52 \\\midrule Test & Audio& 7.78 & 5.72 \\ Test & Video & 6.97 & 6.12 \\ Test & Audio-Video & 7.05 & 5.66 \\ \end{tabular} \end{center} \end{table} In addition to the classification baseline, we also computed a regression baseline using a random forest regressor. The only hyper-parameter in this experiment was the number of trees $\in \{10, 20, 50, 100, 200\}$ in the random forest. For both audio and video, the best performing random forest has 10 trees. Regression was performed on a frame-wise basis, as for classification, and temporal fusion over the interview was conducted by averaging the outputs over the entire screening interview. Fusion of audio and video modalities was performed by averaging the regression outputs of the unimodal random forest regressors. The performance for both root mean square error (RMSE) and mean absolute error (MAE) for the development and test sets is provided in Table \ref{t:baseline_DSC}. 
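For readers who want to reproduce the flavour of this pipeline, the following minimal sketch illustrates the frame-wise SGD classifier (with the optimal audio hyper-parameters reported above) and the majority voting over an interview. It is an illustration only: the array shapes and contents are placeholders rather than challenge data, loading of the feature files is not shown, and the exact spelling of the logarithmic loss differs across scikit-learn versions.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import SGDClassifier

# Placeholder frame-wise features and labels; in practice these would be the
# COVAREP (audio) frames pooled over the training interviews, with each frame
# carrying the interview-level binary PHQ-8 label.
X_train = np.random.randn(5000, 74)
y_train = np.random.randint(0, 2, 5000)

# Optimal audio hyper-parameters reported in the text: hinge loss, L1, alpha=1e-3.
clf = SGDClassifier(loss="hinge", penalty="l1", alpha=1e-3)
clf.fit(X_train, y_train)

def classify_interview(frames, clf):
    """Majority vote over frame-wise decisions within one screening interview."""
    votes = clf.predict(frames)
    return int(votes.mean() >= 0.5)

X_dev_interview = np.random.randn(800, 74)        # frames of one interview
print(classify_interview(X_dev_interview, clf))   # 0 = not depressed, 1 = depressed
\end{verbatim}

The audio--video ensemble described above then simply takes the logical AND of the two interview-level decisions. 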
\subsection{Affect} \begin{table}[t] \caption{Size of the window $W$ in seconds used to extract features on the different modalities, and delay $D$ in seconds applied to the gold-standard, according to the emotional dimension, i.\,e., arousal ($A$) and valence ($V$); the parameters were obtained by optimising the performance, measured as CCC, on the development partition.} \centering \begin{tabular}{l|l|l|l|l}\midrule &\multicolumn{2}{c|}{Arousal} &\multicolumn{2}{c}{Valence} \\\midrule% Modality &$W_A$ &$D_A$ &$W_V$ &$D_V$ \\\midrule Audio &4 &2.8 &6 &3.6 \\ Video-appearance &6 &2.8 &4 &2.4 \\ Video-geometric &4 &2.4 &8 &2.8 \\ ECG &4 &0.4 &10 &2.0 \\ HRHRV &8 &0.0 &8 &0.0 \\ EDA &8 &0.0 &10 &0.4 \\ SCL &4 &0.0 &14 &2.4 \\ SCR &4 &0.8 &14 &0.8 \\ \end{tabular} \label{tab:wsize} \end{table} \begin{table}[t!] \caption{Baseline results for affect recognition on the development (D) and test (T) partitions from audio, video (appearance and geometric), and physiological (ECG, HRHRV, EDA, SCL, and SCR) feature sets, and their late fusion (multimodal). Performance is measured with the Concordance Correlation Coefficient (CCC).} \centering \begin{tabular}{l|c|c} \midrule Modality &Arousal &Valence\\ \midrule D-Audio &.796 &.455 \\ D-Video-appearance &.483 &.474 \\ D-Video-geometric &.379 &.612\\ D-ECG &.271 &.153\\ D-HRHRV &.379 &.293\\ D-EDA &.073 &.194 \\ D-SCL &.068 &.166 \\ D-SCR &.073 &.085\\\midrule D-Multimodal &\textbf{.821} &\textbf{.683}\\\midrule T-Audio &.648 &.375 \\ T-Video-appearance &.343 &.486 \\ T-Video-geometric &.272 &.507\\ T-ECG &.158 &.121\\ T-HRHRV &.334 &.198\\ T-EDA &.075 &.228 \\ T-SCL &.066 &.216 \\ T-SCR &.065 &.145\\\midrule T-Multimodal &\textbf{.683} &\textbf{.639}\\ \end{tabular} \label{t:baseline_MASC} \end{table} \begin{figure*} \centering \includegraphics[width=1.6\columnwidth]{MASC_modalities_fusion.pdf} \caption{\label{f:MASC_fusion}Percentage contribution of each modality to the prediction of emotion; values are derived from the multimodal fusion model; V-APP: video appearance; V-GEO: video geometric; ECG: electrocardiogram; HRHRV: heart rate and heart rate variability; EDA: electrodermal activity; SCL: skin conductance level; SCR: skin conductance response.} \end{figure*} Mono-modal emotion recognition was first investigated separately for each modality. Baseline features were extracted as previously described, with a window size $W$ ranging from four to 14 seconds, and a step of two seconds. The window was centred, i.\,e., the first feature vector was assigned to the centre of the window ($W/2$) and duplicated for the previous frames; the same procedure was applied for the last frames. For video data, frames for which the face was not detected were ignored. For EDA, SCL, and SCR, the test data from subject \#7 were not used, due to an issue during the recording of this subject (the sensor was partially detached from the skin). Two different techniques were investigated to standardise the features: (i) online (standardisation parameters $\mu$ and $\sigma$ are computed on the training partition and used on all partitions), and (ii) speaker dependent ($\mu$ and $\sigma$ are computed and applied on the features of each subject). In order to compensate for the reaction time of the raters, a time delay $D$ is applied to the gold-standard by shifting its values back in time (the last value being duplicated), with a delay ranging from zero to eight seconds and a step of 400\,ms. 
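Since both the delay compensation and the model selection below are driven by the CCC of Eq.~(\ref{eq:ccc}), a compact reference implementation is easy to write down. The following sketch is illustrative only: the synthetic signals and the helper names are ours and are not part of the official baseline scripts. It computes the CCC and shifts the gold-standard back in time by $D$ while duplicating the last value, as described above.

\begin{verbatim}
import numpy as np

def ccc(x, y):
    """Concordance Correlation Coefficient, as defined in the text."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def shift_gold_standard(gold, delay_s, frame_s=0.04):
    """Shift annotations back in time by delay_s, duplicating the last value."""
    n = int(round(delay_s / frame_s))
    if n == 0:
        return gold.copy()
    return np.concatenate([gold[n:], np.repeat(gold[-1], n)])

# Toy illustration with synthetic signals (not challenge data).
t = np.arange(0, 300, 0.04)
pred = 0.8 * np.sin(t / 5.0) + 0.05               # model output tracking the signal
gold = np.sin((t - 2.8) / 5.0)                    # annotation lagging by 2.8 s
print(ccc(pred, shift_gold_standard(gold, 2.8)))  # larger than ccc(pred, gold)
\end{verbatim}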
As machine learning algorithm, we used a linear Support Vector Machine (SVM) to perform the regression task with the liblinear library \cite{Fan08-LAL}; the L2-regularised L2-loss dual solver was chosen (option \textsc{-s 12}) and a unit bias was added to the feature vector (option \textsc{-B 1}), while all other parameters were kept at their default values. The complexity of the SVM was optimised in the range $[10^{-5}, 10^{0}]$. In order to compensate for scaling and bias issues in the predictions, but also noise in the data, we used the same post-processing chain as employed in \cite{Trigeorgis16-AFE}. The window size $W$ and the time delay $D$ were optimised by a grid search with an early stopping strategy, i.\,e., evaluations were stopped if no improvement was observed over the best score after two iterations. Experiments were always performed for both standardisation strategies, i.\,e., online and per speaker. The best value of complexity, window size, time delay, and standardisation method were obtained by maximising the performance -- measured as CCC -- on the development partition with the model learned on the training partition. Table \ref{tab:wsize} lists the best parameters for $W$ and $D$, for each modality and emotional dimension, and shows that valence generally requires a longer window size (to extract features) and time delay (to compensate for reaction time) than arousal; $\bar{W}_A=5.3$, $\bar{W}_V=9.3$, $\bar{D}_A=1.2$, $\bar{D}_V=1.8$. Moreover, the results show that a separate processing of features related to ECG, i.\,e., HRHRV, and those related to EDA, i.\,e., SCL and SCR, is justified, as the best parameters obtained for those signals differ from the ones obtained on their original signal. Regarding the standardisation technique, the online approach worked best for audio on both dimensions, and for video data on valence, whereas standardisation of the features per subject worked best for all physiological features. Mono-modal performance is reported in Table \ref{t:baseline_MASC}. Results show that a significant improvement has been made for all modalities compared to the AVEC 2015 baseline \cite{RingevalEtAl2015_FAR}, except for the EDA features. In agreement with the state-of-the-art, audio features perform significantly better than any other modality on arousal, and video features on valence. Interestingly, emotion prediction from the HRHRV signal performs significantly better than with the original ECG signal, and it is ranked as the most relevant physiological descriptor for arousal, when taken alone. Multimodal emotion recognition is performed with three different late-fusion models, because frames might be missing for the video and EDA-related features: (i) audio-ECG, used when both video and EDA are missing; (ii) audio-ECG-EDA, used when video is missing; (iii) audio-ECG-EDA-video, used otherwise. In order to keep the complexity low, and to estimate the contribution of each modality in the fusion process, we built the fusion model by a simple linear regression of the predictions obtained on the development partition, using Weka 3.7 with default parameters \cite{Hall09-TWD}. The obtained predictions were then post-processed with the same approach as used for the mono-modal predictions.
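To make the evaluation and fusion machinery more concrete, the following Python sketch shows the CCC used as optimisation criterion together with a late fusion by ordinary least squares, a simple stand-in for the Weka linear regression, whose coefficients are normalised into per-modality contributions as formalised in the equations below; the prediction arrays are placeholders.
\begin{verbatim}
# Sketch of the CCC criterion and of the late fusion by linear regression of
# the mono-modal predictions; ordinary least squares is used here as a simple
# stand-in for the Weka linear regression, and preds_dev/preds_test are
# placeholder arrays of shape (n_frames, N) with N mono-modal predictors.
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between prediction x and gold y."""
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

def late_fusion(preds_dev, gold_dev, preds_test):
    """Fit Pred_m = eps_m + sum_i gamma_i * Pred_u(i) on the development set
    and return fused test predictions plus contributions C_i in percent."""
    A = np.hstack([preds_dev, np.ones((len(preds_dev), 1))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, gold_dev, rcond=None)
    gamma, eps = coef[:-1], coef[-1]
    contrib = 100.0 * np.abs(gamma) / np.abs(gamma).sum()
    return preds_test @ gamma + eps, contrib
\end{verbatim}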
The fusion model is given by
\begin{equation}\label{eq:lin_reg} Pred_{m} = \epsilon_{m} + \sum_{i=1}^{N} \gamma_{i} \, Pred_{u}(i), \end{equation} where $Pred_{u}(i)$ is the mono-modal prediction of modality $i$ from the $N$ available ones (ranging from two to eight), $\gamma_i$ and $\epsilon_m$ are regression coefficients estimated on the development partition, and $Pred_{m}$ is the fused prediction. Performance is reported in Table \ref{t:baseline_MASC}. Results show that the baseline for the AVEC 2016 MASC is highly competitive, with the performance obtained on valence for the test partition being slightly better than that of the top-performer of AVEC 2015 \cite{He15-MAD}. In order to depict the contribution of each modality in the prediction of emotion, we normalised the linear regression coefficients that were learned for the multimodal fusion model (iii) into a percentage: \begin{equation}\label{eq:cont} C_i = 100 \, \frac{|\gamma_{i}|}{\sum_{k=1}^{N} |\gamma_{k}|}, \end{equation} where $C_i$ is the contribution of modality $i$ in percent, and $\gamma_k$ are the regression coefficients of the multimodal fusion model; $N=8$. Results show that, even if the mono-modal performance can be low for a given modality and emotion, e.\,g., EDA for arousal or SCR for valence, cf. Table \ref{t:baseline_MASC}, all modalities contribute, to a certain extent, to the prediction of arousal and valence in the fusion scenario, cf. Figure \ref{f:MASC_fusion}. This is especially the case for the SCR features on arousal and the SCL features on valence, which did not perform well when used in isolation, but contribute substantially to the fusion model, where they even outperform the appearance features.
\section{Conclusion}\label{s:conclusion} \noindent We introduced AVEC 2016 -- the third combined open Audio/Visual Emotion and Depression recognition Challenge. It comprises two sub-challenges: the detection of the affective dimensions of arousal and valence, and the estimation of a self-reported level of depression. This manuscript describes AVEC 2016's challenge conditions, data, baseline features, and results. By intention, we opted to use open-source software and to ensure the highest possible transparency and realism for the baselines by refraining from feature space optimisation and from optimising on test data. This should improve the reproducibility of the baseline results.
\section*{Acknowledgements} The research leading to these results has received funding from the EC's 7th Framework Programme through the ERC Starting Grant No.\ 338164 (iHEARu), and the European Union's Horizon 2020 Programme through the Innovative Action No.\ 645094 (SEWA), and the Research Innovative Action No.\ 645378 (ARIA-VALUSPA), and No.\ 688835 (DE-ENIGMA).
\bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} It is well known that core and shell burning in massive stars typically drives convective overturn \citep{kippenhahn}. Although convective heat transport and mixing are inherently multi-dimensional phenomena, the dynamical, convective, Kelvin-Helmholtz, and nuclear time-scales are typically too disparate for modeling convection in three dimensions (3D) during most phases of stellar evolution. Spherically symmetric (1D) stellar evolution models therefore need to rely on mixing-length theory (MLT; \citealp{biermann_32,boehm_58}) or some generalization thereof \citep{kuhfuss_86,wuchterl_98,straka_08}. Such an effective 1D treatment of convection in stellar evolution is bound to remain indispensable even with the advent of modern, implicit hydrodynamics codes \citep{viallet_11,viallet_16,miczek_15} that permit multi-D simulations over a wider range of flow regimes and time-scales. The final stages of a massive star before its explosion as a supernova (SN) are among the notable exceptions for an evolutionary phase where the secular evolution time-scales are sufficiently short to remain within reach of multi-D simulations (see, e.g., \citealp{mocak_08,mocak_09,stancliffe_11,herwig_14} for other examples in the case of low-mass stars). There is also ample motivation for investigating these final stages in 3D. Aside from the implications of multi-D effects in convective shell burning for pulsar kicks \citep{burrows_96,goldreich_97,lai_00,fryer_04,murphy_04} and their possible connection to pre-SN outbursts \citep{smith_14}, they have recently garnered interest as a means for facilitating shock revival in the ensuing supernova \citep{couch_13,mueller_15a,couch_15}, which has been the primary impetus for this paper. While the idea that progenitor asphericities arising from convective motions with Mach numbers $\mathord{\sim} 0.1$ can aid shock revival by boosting turbulent motions in the post-shock regions appears plausible, it would be premature to claim that this new idea is a decisive component for the success of the neutrino-driven mechanism. Major questions about this so far undervalued ingredient remain unanswered; and in this paper we shall address some of them. To evaluate the role of pre-SN seed perturbations in the explosion mechanism, we obviously need multi-D simulations of shell burning up to the onset of core collapse. As shown by the parametric study of \citet{mueller_15a}, the typical Mach number and scale of the convective eddies at this stage determine whether the seed asphericities can effectively facilitate shock revival. None of the available multi-D models can reliably provide that information yet. While there is a large body of 2D and 3D simulations of earlier phases of shell burning \citep{arnett_94,bazan_94,bazan_98,asida_00,kuhlen_03,meakin_06,meakin_07,meakin_07_b,arnett_11,jones_16} , a first, exploratory attempt at extending a model of silicon shell burning up to collapse has only been made recently by \citet{couch_15}, albeit based on a number of problematic approximations. \citet{couch_15} not only assumed octant symmetry, which precludes the emergence of large-scale modes, but also artificially accelerated the contraction of the iron core due to deleptonization, which leads to a gross overestimation of the convective velocities in the silicon shell as we shall demonstrate in this paper. 
Moreover, convective silicon burning often (though not invariably) terminates minutes before collapse in stellar evolution models (see, e.g., Figures~22 and 23 in \citealt{chieffi_13} and Figure~16 in \citealt{sukhbold_14}), as it apparently also does in the 1D model of \citet{couch_15} calculated with the \textsc{MESA} code \citep{paxton_11,paxton_13}. Obviously, simulations covering the full solid angle ($4 \pi$) with a more physical treatment of the core contraction are required as a next step. Moreover, the efficiency of progenitor asphericities in triggering shock revival in supernova simulations varies considerably between different numerical studies. The models of \citet{couch_13,couch_14} are compatible with a small or moderate reduction of the critical luminosity \citep{burrows_93} for runaway shock expansion. \citet{couch_15} observe shock revival in their perturbed and non-perturbed model alike, i.e.\, the perturbations are not crucial for shock revival at all in their study (which appears somewhat at odds with their claims of a significant effect). On the other hand, a much stronger reduction of the critical luminosity of the order of tens of percent has been inferred by \citet{mueller_15a} for dipolar or quadrupolar perturbation patterns based on 2D models with multi-group neutrino transport. These claims may not be in conflict with each other, but could simply result from the different scale and geometry of the pre-collapse velocity/density perturbations, the different progenitor models, and the different treatment of neutrino heating and cooling in these works. A more quantitative theory about the impact of progenitor asphericities on shock revival that could provide a unified interpretation of these disparate findings is still lacking. In this paper, we attempt to make progress on both fronts. We present the first full-$4\pi$ 3D simulation of the last minutes of oxygen shell burning in an $18 M_\odot$ star. The model is followed up to collapse by appropriately contracting the outer boundary of the excised (non-convective) core as in the corresponding 1D stellar evolution model computed with the \textsc{Kepler} code \citep{weaver_78,heger_10}. By focusing on oxygen shell burning, we avoid the intricacies of deleptonization in the iron core and the silicon shell and the nuclear quasi-equilibrium during silicon burning, so that nuclear burning can be treated with an inexpensive $\alpha$-network. Our simulation covers the last $293.5 \, \mathrm{s}$ before collapse to keep the last three minutes ($\mathord{\sim} 9$ turnover time-scales) free of artificial transients. In our analysis of the simulation, we single out the properties of the convective flow that are immediately relevant for understanding pre-collapse asphericities in supernova progenitors and their role in the explosion mechanism, while a more extensive analysis of the flow properties based on a Reynolds decomposition (as in \citealt{arnett_09,murphy_11,viallet_13,mocak_14}) is left to a future paper. The key question that we set out to answer in this paper is simply: \emph{Can we characterize the multi-dimensional structure of supernova progenitors (and perhaps their role in the explosion mechanism) already based on 1D stellar evolution models?} We shall argue that this question can be answered in the affirmative, and demonstrate that the typical velocity and scale of the convective eddies comport with the predictions of mixing length theory (MLT) and linear stability analysis.
In preparation for future core-collapse simulations using multi-D progenitors, we develop a tentative theory for the effects of pre-collapse seed perturbations on shock revival that allows one to single out promising models for such simulations. Aside from some remarks on convective boundary mixing, we largely skirt the much more challenging question of whether deviations from MLT predictions have a long-term effect on the evolution of supernova progenitors during earlier phases. Our paper is structured as follows: In Section~\ref{sec:numerics}, we describe the numerical methods used for our 3D simulation of oxygen shell burning and briefly discuss the current version of the \textsc{Kepler} stellar evolution code and the $18 M_\odot$ supernova progenitor model that we consider. In Section~\ref{sec:results}, we present the results of our 3D simulation, compare them to the 1D stellar evolution model, and show that the key properties of the convective flow are nicely captured by analytic scaling laws. We point out that these scaling laws impose a number of requirements on 3D simulations of shell burning in Section~\ref{sec:requirements}. In Section~\ref{sec:theory} we formulate a simple estimate for the effect of the pre-collapse asphericities with a given typical convective velocity and eddy scale on shock revival. The broader implications of our findings and questions that need to be addressed by 3D stellar evolution models of supernova progenitors are summarized in Section~\ref{sec:summary}. Two appendices address different formulations of the Ledoux criterion (Appendix~\ref{app:ledoux}) and possible effects of resolution and stochasticity (Appendix~\ref{app:res}).
\begin{figure} \includegraphics[width=\linewidth]{f1-eps-converted-to.pdf} \caption{Top panel: Mass fractions $X_i$ of relevant $\alpha$-elements in the 1D progenitor model at the onset of collapse as a function of enclosed mass $m$. Bottom panel: Profiles of entropy $s$ and density $\rho$ as a function of $m$. Dashed vertical lines indicate the boundaries of the region simulated in 3D. \label{fig:composition}} \end{figure}
\section{Setup and Numerical Methods} \label{sec:numerics} \subsection{The \textsc{Kepler} Stellar Evolution Code} We simulate oxygen shell burning in a non-rotating $18 M_\odot$ solar metallicity star. This stellar model has been evolved to the onset of core collapse with an up-to-date version of the stellar evolution code \textsc{Kepler} \citep{weaver_78,woosley_02,heger_10}. A 19-species nuclear network \citep{weaver_78} is used at low temperatures (up to oxygen burning); at higher temperatures, we switch to a quasi-equilibrium (QSE) approach that provides an efficient and accurate means to treat silicon burning and the transition to a nuclear statistical equilibrium (NSE) network after silicon depletion. The mixing processes taken into account in this model include convective mixing according to MLT, thermohaline mixing according to \citet{heger_05}, and semiconvection according to \citet{langer_83}, but modified for a general equation of state as derived in \citet{heger_05}. All mixing is modeled as a diffusive process with appropriately determined diffusion coefficients. Since we will compare the predictions of MLT and the results of our 3D simulation in some detail, we elaborate further on the numerical implementation of MLT in fully convective (Ledoux-unstable) regions in \textsc{Kepler}, which has been outlined in a more compact form in previous papers \citep{woosley_88,woosley_04}.
For the implementation of semiconvection and thermohaline convection (which are not immediately relevant for this paper), we refer the reader to \citet{heger_00,heger_05}. MLT assumes that the relative density contrast $\delta \rho/\rho$ between convective updrafts/downdrafts and the spherically averaged background state is related to the deviation of the spherically averaged stratification from convective neutrality and hence to the Brunt-V\"ais\"al\"a frequency $\omega_\mathrm{BV}$. If the Ledoux criterion for convection is used (as in \textsc{Kepler}), one obtains \begin{equation} \label{eq:mlt1} \frac{\delta \rho}{\rho} = \Lambda_\mathrm{mix} \left( \frac{1}{\rho} \frac{\partial \rho}{\partial r} -\frac{1}{\rho c_s^2}\frac{\partial P}{\partial r}\right) = \frac{\Lambda_\mathrm{mix} \omega_\mathrm{BV}^2}{g}, \end{equation} for $\delta \rho/\rho$, where both entropy and composition gradients are implicitly taken into account (see Appendix~\ref{app:ledoux}). Here, $\rho$, $P$, and $c_s$ denote the spherically averaged density, pressure, and adiabatic sound speed, and $g$ denotes the local gravitational acceleration. $\omega_\mathrm{BV}$ is the Brunt-V\"ais\"al\"a frequency\footnote{Note the sign convention used in this paper: $\omega_\mathrm{BV}^2>0$ corresponds to convective instability.}, and $\Lambda_\mathrm{mix}$ is the mixing length, which is chosen as one pressure scale height $h_P$ so that we have \begin{equation} \Lambda_\mathrm{mix}=h_P =P \left(\frac{\partial P}{\partial r}\right)^{-1} =\frac{P}{\rho g}, \end{equation} under the assumption of hydrostatic equilibrium. The convective velocity in MLT can then be expressed in terms of $\omega_\mathrm{BV}$, $\Lambda_\mathrm{mix}$, $g$, and $\delta\rho/\rho$, and a dimensionless parameter $\alpha_1$ as \begin{eqnarray} \nonumber \label{eq:vconv} v_\mathrm{conv} &=& \alpha_1 \omega_\mathrm{BV} \Lambda_\mathrm{mix} =\alpha_1 \left(\frac{\delta \rho}{\rho} \frac{g}{\Lambda_\mathrm{mix}}\right)^{1/2} \Lambda_\mathrm{mix} \\ &=&\alpha_1 \left(g \Lambda_\mathrm{mix}\frac{\delta \rho}{\rho} \right)^{1/2}. \end{eqnarray} Note that different normalizations and default values for $\alpha_1$ are used in the literature. Wherever a direct calibration against observations (as for the solar convection zone, \citealp{christensen_96}) is not possible, physical arguments can only constrain $\alpha_1$ to within a factor of a few. Together with the temperature contrast $\delta T$ between the convective blobs and their surroundings, $v_\mathrm{conv}$ determines the convective energy flux $F_\mathrm{conv}$, \begin{eqnarray} \label{eq:mlt2} \nonumber F_\mathrm{conv} &=& \alpha_2 \rho c_P \, \delta T \, v_\mathrm{conv} = \alpha_1 \alpha_2 \rho c_P \, \delta T\, \Lambda_\mathrm{mix} \omega_\mathrm{BV} \\ \nonumber &=& -\alpha_1 \alpha_2 \rho c_P \left(\frac{\partial T}{\partial \ln \rho}\right)_P \frac{\delta \rho}{\rho} \Lambda_\mathrm{mix} \omega_\mathrm{BV} \\ &=& -\alpha_1 \alpha_2 \rho c_P \left(\frac{\partial T}{ \partial \ln \rho}\right)_P \frac{\Lambda_\mathrm{mix}^2 \omega_\mathrm{BV}^3}{g} , \end{eqnarray} where $c_P$ is the specific heat at constant pressure, and $\alpha_2$ is another dimensionless parameter. Note that the second and third line in Equation~(\ref{eq:mlt2}) implicitly assume that the contribution of composition gradients to the unstable gradient can be neglected inside a convective zone, which is a good approximation for advanced burning stages. 
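To give a feeling for the magnitudes involved, the short Python snippet below evaluates Equations~(\ref{eq:mlt1}), (\ref{eq:vconv}), and (\ref{eq:mlt2}) for round, order-of-magnitude numbers loosely representative of conditions near the base of an oxygen shell; all input values are illustrative assumptions and are not taken from the \textsc{Kepler} model.
\begin{verbatim}
# Order-of-magnitude evaluation of Eqs. (mlt1), (vconv) and (mlt2); all
# inputs are illustrative assumptions in cgs units, not values from the
# Kepler model.  alpha_1 = 1 and alpha_1*alpha_2 = 1/2 as adopted in the text.
import numpy as np

G, M_sun = 6.674e-8, 1.989e33
m, r = 1.75 * M_sun, 4.0e8      # assumed enclosed mass [g] and radius [cm]
rho, P = 4.0e5, 8.0e22          # assumed density [g/cm^3], pressure [erg/cm^3]
c_P = 2.0e8                     # assumed specific heat at constant P [erg/g/K]
dT_dlnrho_P = -2.0e9            # assumed (dT/d ln rho)_P [K]
drho_over_rho = 5.0e-3          # assumed density contrast delta rho / rho

g = G * m / r**2                             # local gravitational acceleration
h_P = P / (rho * g)                          # pressure scale height = Lambda_mix
omega_BV = np.sqrt(drho_over_rho * g / h_P)  # from eq. (mlt1)
v_conv = 1.0 * omega_BV * h_P                # eq. (vconv) with alpha_1 = 1
F_conv = (-0.5 * rho * c_P * dT_dlnrho_P
          * drho_over_rho * h_P * omega_BV)  # eq. (mlt2), alpha_1*alpha_2 = 1/2

print(f"h_P ~ {h_P:.1e} cm, omega_BV ~ {omega_BV:.2f} 1/s, "
      f"v_conv ~ {v_conv:.1e} cm/s, F_conv ~ {F_conv:.1e} erg/cm^2/s")
\end{verbatim}
For these illustrative numbers one obtains $h_P$ of roughly $1400 \, \mathrm{km}$ and $v_\mathrm{conv}$ of roughly $300 \, \mathrm{km} \, \mathrm{s}^{-1}$, i.e., a convective Mach number of a few per cent, which is broadly in line with the values discussed in Section~\ref{sec:results}.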
For compositional mixing, \textsc{Kepler} uses a time-dependent diffusion model \citep{eggleton_72,weaver_78,heger_00,heger_05} for the evolution of the mass fractions $X_i$, \begin{equation} \left(\frac{\partial \rho X_i}{\partial t}\right)_\mathrm{mix} = \frac{1}{r^2}\frac{\partial r^2 F_{X_i}}{\partial r} =\frac{1}{r^2} \frac{\partial }{\partial r} \left(r^2 \rho D \frac{\partial X_i}{\partial r} \right), \end{equation} where $F_{X_i}=\rho D \partial X_i/\partial r$ is the diffusive partial mass flux for species $i$, and the diffusion coefficient is given by \begin{equation} \label{eq:mlt_diff} D = \alpha_3 \Lambda_\mathrm{mix} v_\mathrm{conv} = \alpha_1 \alpha_3 \omega_\mathrm{BV} \Lambda_\mathrm{mix}^2, \end{equation} where we have introduced another dimensionless parameter $\alpha_3$. If we introduce the composition contrast $\delta X_i = \Lambda_\mathrm{mix} \partial X_i/\partial r$ between the bubbles and the background state, the symmetry to Equation~(\ref{eq:mlt2}) for the convective energy flux becomes manifest: \begin{equation} \label{eq:mlt3} F_{X_i} = \alpha_1 \alpha_3\, \rho \,\delta X_i\, \Lambda_\mathrm{mix} \omega_\mathrm{BV}. \end{equation} We note that only the products $\alpha_1 \alpha_2$ and $\alpha_1 \alpha_3$ enter the evolution equations, and we are therefore free to reshuffle an arbitrary factor between $\alpha_1$ and the other two coefficients. In \textsc{Kepler}, we choose $\alpha_1 \alpha_2=1/2$ and $\alpha_1 \alpha_3=1/6$, which is traditionally interpreted as the result of $\alpha_1=1/2$, $\alpha_2=1$ and $\alpha_3=1/3$, where the choice of $\alpha_3=1/3$ is motivated by the interpretation of convective mixing as a random-walk process in 3D with mean free path $\Lambda_\mathrm{mix}$ and an average \emph{total} velocity (including the non-radial velocity components) $v_\mathrm{conv}$. Setting $\alpha_3=\alpha_2/3$ arguably introduces an asymmetry in the equations, but we defer the discussion of its effect to Section~\ref{sec:mixing}. For extracting convective velocities from the \textsc{Kepler} model, we shall work with the alternative choice of $\alpha_1=1, \alpha_2=1/2, \alpha_3=1/6$, however, as this gives better agreement with the convective velocity field in our 3D simulation. This is equally justifiable; essentially this choice amounts to a larger correlation length for velocity perturbations and less perfect correlations between fluctuations in velocity and entropy/composition. For numerical reasons, $\omega_\mathrm{BV}$ is rescaled before computing the convective energy and partial mass fluxes according to Equations~(\ref{eq:mlt2}) and (\ref{eq:mlt3}), \begin{equation} \label{eq:rescaling} \omega_\mathrm{BV} \rightarrow \omega_\mathrm{BV} e^{-f /(3 \delta\rho/\rho)}, \end{equation} where $f$ is an adjustable parameter that is set to $f=0.01$ in our model. By rescaling $\omega_\mathrm{BV}$ convective mixing and energy transport are suppressed until a reasonably large superadiabatic gradient has been established. This procedure avoids convergence problems due to zones switching too frequently between convective stability and instability. The repercussions and limitations of this numerical approach will be discussed in Section~\ref{sec:comparison}, where we compare the 1D stellar evolution model to our 3D hydrodynamic simulation. \subsection{1D Supernova Progenitor Model} Entropy, density, and composition profiles of the 1D progenitor model at the onset of collapse are shown in Figure~\ref{fig:composition}. 
The progenitor has an extended convective oxygen shell of about $0.5 M_\odot$ with a broader convective carbon burning shell directly on top of it. The inner and outer boundaries of the oxygen shell are located at $3000 \, \mathrm{km}$ and $8000 \, \mathrm{km}$ at the beginning of our 3D simulation and contract considerably until collapse sets in. The entropy jump between the silicon and oxygen shell is relatively pronounced, so that no strong overshooting and/or entrainment at the inner convective boundary is expected because of the strong buoyancy barrier at the interface. The boundary between the oxygen and carbon shell is considerably ``softer'' with only a small jump of $0.5 \, k_\mathrm{B}/\mathrm{nucleon}$ in entropy. We note that the balance between energy generation by nuclear burning and neutrino cooling is broken during the final phase before collapse that we are considering here. This is due to the acceleration of shell burning induced by the contraction of the core on a time-scale too short for thermal adjustment by neutrino cooling. Different from earlier phases, it is therefore sufficient to follow shell convection in multi-D merely for several overturn time-scales to reach the correct quasi-steady state (instead of several Kelvin-Helmholtz time-scales for earlier phases to ensure thermal adjustment).
\subsection{3D Simulation} At a time of $293.5 \, \mathrm{s}$ before the onset of collapse, the stellar evolution model is mapped to the finite-volume hydrodynamics code \textsc{Prometheus} \citep{fryxell_89}, which is an implementation of the piecewise parabolic method of \citet{colella_84}. An axis-free overset ``Yin-Yang'' grid \citep{kageyama_04,wongwathanarat_10a}, implemented as in \citet{melson_15a} using MPI domain decomposition and an algorithm for conservative advection of scalars \citep{wongwathanarat_10a,melson_msc}, allows us to retain the advantages of spherical polar coordinates, which are best suited to the problem geometry, while avoiding excessive time-step constraints close to the grid axis. As in \textsc{Kepler}, nuclear burning is treated using a 19-species $\alpha$-network. The simulations are performed in the so-called implicit large eddy simulation (ILES) paradigm \citep{boris_new_1992,ILES_grinstein}, in which diffusive processes (viscosity, mass diffusion, thermal diffusion) are not explicitly included in the equations. Instead, one relies on the truncation errors of the underlying numerical scheme to mimic the effects of irreversible processes taking place at unresolved scales (truncation errors act as an ``implicit'' sub-grid scale model). Since there is no convective activity in the Fe core and the Si shell in the last stages before collapse in the \textsc{Kepler} model, we excise the innermost $1.68 M_\odot$ of the core and contract the inner boundary of the computational domain according to the trajectory of this mass shell in the \textsc{Kepler} run from an initial radius of $3000 \, \mathrm{km}$ to $1974 \, \mathrm{km}$ at the onset of collapse. At both the inner and outer boundary, we impose reflecting boundary conditions for the radial velocity, and use constant extrapolation for the non-radial velocity components. The density, pressure and internal energy are extrapolated into the ghost zones assuming hydrostatic equilibrium and constant entropy. Excising the core not only reduces the computer time requirements considerably, but also allows us to circumvent the complications of deleptonization and Si burning in the QSE regime.
The outer boundary is set to a mass coordinate of $4.07 M_\odot$ (corresponding to a radius of $50,000 \, \mathrm{km}$) so that the computational domain comprises the outer $0.08 M_\odot$ of the Si shell, the entire O and C shell, and a small part of the incompletely burnt He shell. On the other hand, using an inner boundary condition implies that we cannot address potential effects of shell convection on the core via wave excitation at the convective boundaries, such as the excitation of unstable g-modes \citep{goldreich_97} (whose growth is likely too slow to be significant; see \citealt{murphy_04}) or core spin-up due to angular momentum transport by internal gravity waves \citep{fuller_15}. We use a logarithmic radial grid with 400 zones, which implies a radial resolution of $\Delta r/r=0.7 \%$ at the beginning of the simulation. Equidistant spacing in $\log r$ is maintained throughout the simulation as the inner boundary is contracted. $56 \times 148$ angular zones are used on each patch of the Yin-Yang grid, which corresponds to an angular resolution of $2^\circ$. A limited resolution study based on two additional models with coarser meshes is presented in Appendix~\ref{app:res}.
\begin{figure*} \plottwo{f2a.png}{f2b.png} \plottwo{f2c.png}{f2d.png} \plottwo{f2e.png}{f2f.png} \caption{Slices showing the mass fraction $X_\mathrm{Si}$ of silicon (left column) and the radial velocity $v_r$ (right column) at times of $20 \, \mathrm{s}$, $151 \, \mathrm{s}$, and $210 \, \mathrm{s}$ after the beginning of the 3D simulation (top to bottom). $v_r$ is given in units of $\mathrm{km} \, \mathrm{s}^{-1}$. The boundary between the patches of the Yin-Yang grid is located in the right half of the panels at $45^\circ$ and $135^\circ$ from the vertical direction. Note that convection initially develops on small angular scales after mapping from the 1D stellar evolution model, as a strongly superadiabatic gradient builds up in the narrow region of strongest nuclear burning at the base of the oxygen shell (top row). Once convection is fully developed, large-scale overturn emerges. The position of the updrafts and downdrafts shifts freely across the boundaries between the grid patches. \label{fig:snap2d_a}} \end{figure*}
\begin{figure*} \plottwo{f3a.png}{f3b.png} \plottwo{f3c.png}{f3d.png} \plottwo{f3e.png}{f3f.png} \caption{Slices showing the mass fraction $X_\mathrm{Si}$ of silicon (left column) and the radial velocity $v_r$ (right column) at times of $270 \, \mathrm{s}$, $286 \, \mathrm{s}$, and $293.5 \, \mathrm{s}$ (onset of collapse) after the beginning of the 3D simulation (top to bottom). $v_r$ is given in units of $\mathrm{km} \, \mathrm{s}^{-1}$. Note that wave breaking at the outer boundary of the oxygen shell and the global asymmetry of convective motions become more conspicuous at late times. At the onset of collapse, a bipolar flow pattern emerges (bottom row). \label{fig:snap2d_b}} \end{figure*}
\begin{figure*} \plotone{f4.png} \caption{Volume rendering of the mass fraction of silicon at the end of the 3D simulation at $293.5 \, \mathrm{s}$ (onset of collapse) on one patch of the Yin-Yang grid, showing fuzzy silicon-rich updrafts of hot ashes (red) and silicon-poor downdrafts of fresh fuel. A global asymmetry in the updrafts is clearly visible. The inner boundary of the oxygen shell (cyan) is relatively ``hard'' due to the strong buoyancy jump between the silicon and oxygen shell and therefore remains almost spherical.
\label{fig:vol3d}} \end{figure*} \begin{figure} \includegraphics[width=\linewidth]{f5-eps-converted-to.pdf} \caption{Top: Volume-integrated net nuclear energy generation rate $\dot{Q}_\mathrm{nuc}$ in the oxygen shell in the 3D simulation (black) and in \textsc{Kepler} (red). Bottom: Kinetic energies $E_{\theta,\varphi}$ (black) and $E_r$ (blue) contained in fluctuating non-radial and radial motions in the 3D simulation; see Equations~(\ref{eq:ekinr},\ref{eq:ekinl}) for definitions. The MLT estimate of the volume-integrated kinetic energy $E_{r,\mathrm{1D}}$ in radial convective motions in the oxygen shell for the \textsc{Kepler} model (red) is computed by using Equation~(\ref{eq:vconv}) for the convective velocity assuming $\alpha_1=1$. \label{fig:qnuc_and_ekin}} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f6-eps-converted-to.pdf} \caption{Profiles of the turbulent Mach number $\sqrt{\langle \mathrm{Ma}_r^2 \rangle}$ of radial velocity fluctuations in the oxygen and carbon shell at different times during the 3D simulation. Note that there is a secular increase in the Mach number in the oxygen shell even after convection has reached a quasi-stationary state due to the contraction of the inner boundary. By contrast, the turbulent Mach number in the carbon shell merely increases because convection has not reached a quasi-stationary state in that shell. \label{fig:mach}} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f7-eps-converted-to.pdf} \caption{Top panel: Efficiency $\eta_\mathrm{conv}$ for the conversion of nuclear energy generation into convective kinetic energy as defined in Equation~(\ref{eq:eta_conv}) in the 3D run and the 1D \textsc{Kepler} model. In the 3D case, we compute $\eta_\mathrm{conv}$ both for the kinetic energy in radial motions (Equation~\ref{eq:ekinr}, black curve) and transverse motions (Equation~\ref{eq:ekinl}, blue); for the \textsc{Kepler} run (red), we use the energy contained in radial convective motions computed according to Equation~(\ref{eq:ek_in_mlt}). By default, we use the pressure scale height in Equation~(\ref{eq:eta_conv}) as the mixing or damping length $\Lambda_\mathrm{damp}$. The efficiency is much lower if $\Lambda_\mathrm{damp}$ is identified with the radial extent of the convective zone $\Lambda_\mathrm{conv}=r_+ - r_-$. Bottom panel: Comparison of the Brunt-V\"ais\"al\"a frequency $\omega_\mathrm{BV,max}$ at the base of the oxygen shell (black), the reciprocal of the convective turnover time $t_\mathrm{conv}$ (blue), and the logarithmic derivative $\ensuremath{\mathrm{d}} \ln \dot{Q}_\mathrm{nuc}/\ensuremath{\mathrm{d}} t$ of the volume-integrated nuclear energy generation rate (red). The freeze-out of convection (denoted by a dashed vertical line) occurs roughly when $\omega_\mathrm{BV,max}/(2\pi) \approx t_\mathrm{conv}^{-1} \approx \ensuremath{\mathrm{d}} \ln \dot{Q}_\mathrm{nuc}/\ensuremath{\mathrm{d}} t$. \label{fig:qconv}} \end{figure} \section{Simulation Results} \label{sec:results} In Figures~\ref{fig:snap2d_a} and \ref{fig:snap2d_b}, we show 2D slices depicting the evolution of the mass fraction $X_\mathrm{Si}$ of silicon and the radial velocity $v_r$ to provide a rough impression of the multi-D flow dynamics in our 3D simulation. Convective plumes initially develop on small angular scales in the inner part of the oxygen shell (where the burning rate is high). 
After about $100 \, \mathrm{s}$, we see fully developed convection with maximum plume velocities of $\mathord{\sim} 500 \, \mathrm{km} \, \mathrm{s}^{-1}$ that increase towards collapse, and large-scale modes dominate the flow. The latest snapshots at $286 \, \mathrm{s}$ and $293.5 \, \mathrm{s}$ suggest the emergence of a bipolar flow structure right before collapse. Large-scale structures are more clearly visible in the velocity field than in $X_\mathrm{Si}$. Indeed, the rising plumes enriched in silicon and the sinking plumes containing fresh fuel appear rather ``wispy'', an impression which is reinforced by the 3D volume rendering of $X_\mathrm{Si}$ at the onset of collapse in Figure~\ref{fig:vol3d}. Convection also develops in the overlying carbon shell. However, since the convective velocities in the carbon shell are lower, and since this shell extends out to a radius of $27,000 \, \mathrm{km}$, convection never reaches a quasi-steady state within the simulation time. We therefore do not address convection in the carbon shell in our analysis. As in earlier studies of mixing at convective boundaries \citep{meakin_07}, the interface between the carbon and oxygen layer proves unstable to the Holmb\"oe/Kelvin-Helmholtz instability\footnote{We do not attempt to classify the precise type of instability at play, since this is immaterial for our purpose.} with wave breaking leading to the entrainment of material from the carbon shell. The snapshots suggest that such entrainment events become more frequent and violent shortly before collapse. The convective velocities and eddy scales thus fall roughly into the regime where the parametric study of \citet{mueller_15a} suggests a significant impact of pre-collapse asphericities on shock revival (a convective Mach number of order $0.05$ or higher, corresponding to velocities of a few $100 \, \mathrm{km} \, \mathrm{s}^{-1}$, and dominant $\ell=1$ or $\ell=2$ modes).
\begin{figure} \includegraphics[width=\linewidth]{f8-eps-converted-to.pdf} \caption{Top: Outer and inner boundary radius $r_+$ and $r_-$ of the oxygen shell as functions of time. Bottom: Correlation length $\Lambda_\mathrm{corr}$ for the radial velocity computed at $r=4000 \, \mathrm{km}$ (black), pressure scale height $h_P$ at the base of the oxygen shell (red), and radial extent $\Lambda_\mathrm{conv}=r_+ - r_-$ of the oxygen shell (blue) as a function of time. \label{fig:correlation_length}} \end{figure}
\begin{figure} \includegraphics[width=\linewidth]{f9-eps-converted-to.pdf} \caption{Top: Total convective luminosity (including the kinetic energy flux) at $210 \, \mathrm{s}$ in the 3D simulation as a function of enclosed mass $m$. Bottom: Quantities determining the spherically averaged entropy production. The term $T^{-1} \ensuremath{\mathrm{d}} L_\mathrm{conv}/\ensuremath{\mathrm{d}} m$ stemming from the divergence of the total convective luminosity is shown in red, the entropy production due to the nuclear source term $\dot{\epsilon}_\mathrm{nuc}/T$ (neglecting terms in the chemical potential of the different nuclear species) is shown in blue, and the black curve denotes the sum of both terms. The curves are computed from averages over several time steps.
\label{fig:lconv}} \end{figure}
\begin{figure} \includegraphics[width=\linewidth]{f10-eps-converted-to.pdf} \caption{Top panel: Comparison of radial and transverse RMS velocity fluctuations $\delta v_r$ (black) and $\delta v_t$ (blue) in the 3D model at $210 \, \mathrm{s}$ to the convective velocity $v_\mathrm{conv}$ computed in the \textsc{Kepler} model using MLT (red), and to $\omega_\mathrm{BV} h_P$ (violet). Bottom panel: Comparison of the Brunt-V\"ais\"al\"a frequency computed from spherical averages of the pressure, density, and sound speed in the 3D run (black) and in the 1D \textsc{Kepler} model. Note that there is a formally stable region around the boundary between the oxygen and carbon shell in the 3D model due to the aspherical deformation of the shell boundary and the entrainment of material from the carbon shell, which increases the spherically averaged entropy in the outer parts of the oxygen shell. \label{fig:comp_vel}} \end{figure}
\begin{figure} \includegraphics[width=\linewidth]{f11-eps-converted-to.pdf} \caption{Profiles of the RMS-averaged turbulent Mach number $\langle \mathrm{Ma}_r^2 \rangle^{1/2}$ of radial velocity fluctuations at the onset of collapse in the 3D model (black) and the 1D \textsc{Kepler} model (red). Dashed lines denote the boundaries of the domain simulated in 3D in \textsc{Prometheus}. The turbulent Mach number in 1D and 3D agrees well in the bulk of the oxygen shell, but the acceleration of nuclear burning and the concomitant increase of the Brunt-V\"ais\"al\"a frequency artificially increases the convective velocities close to the base of the shell in \textsc{Kepler}, as MLT immediately translates the increase in $\omega_\mathrm{BV}$ into an increase in convective velocity. Note that high nuclear burning rates in individual zones inside the silicon core produce formally unstable zones in the \textsc{Kepler} run shortly before collapse, which do not affect the evolution of the model to any significant degree. \label{fig:mach_collapse}} \end{figure}
\subsection{Flow Dynamics for Quasi-Stationary Convection -- Quantitative Analysis and Comparison with MLT} \label{sec:comparison} To analyze the flow dynamics more quantitatively, we consider the volume-integrated net nuclear energy generation rate (including neutrino losses) in the oxygen shell, $\dot{Q}_\mathrm{nuc}$, the volume-integrated turbulent kinetic energies $E_r$ and $E_{\theta,\varphi}$ contained in the fluctuating radial and non-radial velocity components, and profiles of the root-mean-square (RMS) averaged turbulent Mach number $\langle \mathrm{Ma}_r^2\rangle^{1/2}$ of the radial velocity fluctuations in Figures \ref{fig:qnuc_and_ekin} and \ref{fig:mach}. $E_r$, $E_{\theta,\varphi}$, and $\langle \mathrm{Ma}_r^2\rangle^{1/2}$ are computed from the velocity field as follows: \begin{eqnarray} \label{eq:ekinr} E_r &=& \frac{1}{2}\int\limits_{r_- \leq r \leq r_+} \rho (v_r -\langle v_r\rangle )^2 \, \ensuremath{\mathrm{d}} V, \\ \label{eq:ekinl} E_{\theta,\varphi} &=& \frac{1}{2}\int\limits_{r_- \leq r \leq r_+} \rho (v_\theta^2+v_\varphi^2) \, \ensuremath{\mathrm{d}} V, \\ \langle \mathrm{Ma}_r^2\rangle^{1/2} &=& \left[\frac{\int \rho (v_r-\langle v_r\rangle)^2 \, \ensuremath{\mathrm{d}} \Omega}{\int \rho c_s^2 \, \ensuremath{\mathrm{d}} \Omega}\right]^{1/2}, \end{eqnarray} where the domain of integration in Equations~(\ref{eq:ekinr}) and (\ref{eq:ekinl}) extends from the inner boundary radius $r_-$ to the outer boundary radius $r_+$ of the oxygen shell.
Angled brackets denote mass-weighted spherical Favre averages for quantity $X$, \begin{equation} \langle X \rangle = \frac{\int \rho X\, \ensuremath{\mathrm{d}} \Omega}{\int \rho \, \ensuremath{\mathrm{d}} \Omega}. \end{equation} We note that one does not expect any mean flow in the non-radial directions in the absence of rotation; therefore only $v_\theta$ and $v_\varphi$ appear in Equation~(\ref{eq:ekinl}). In Figure~\ref{fig:qnuc_and_ekin}, we also show the results for $\dot{Q}_\mathrm{nuc}$ and the kinetic energy in convective motions from the 1D \textsc{Kepler} run for comparison. MLT only predicts the radial velocities of rising and sinking convective plumes, so we only compute the 1D analog to $E_r$, \begin{equation} \label{eq:ek_in_mlt} E_{r,\mathrm{1D}} = \frac{1}{2}\int\limits_{r_-}^{r_+} \rho v_\mathrm{conv}^2 \, \ensuremath{\mathrm{d}} V, \end{equation} where $v_\mathrm{conv}$ is calculated according to Equation~(\ref{eq:vconv}). The volume-integrated nuclear energy generation rate $\dot{Q}_\mathrm{nuc}$ increases by more than two orders of magnitude during the evolution towards collapse. Due to slight structural adjustments after the initial transient and slightly different mixing in the 3D model, $\dot{Q}_\mathrm{nuc}$ is roughly $30\ldots 50\%$ higher in 3D than in the \textsc{Kepler} model for most of the run (see discussion in Section~\ref{sec:mixing}), but still parallels the \textsc{Kepler} run quite nicely and perhaps as closely as can be expected given the extreme dependence of the local energy generation $\dot{\epsilon}_\mathrm{nuc} \propto T^{30}$ on the temperature $T$ during oxygen burning. The convective kinetic energy oscillates considerably during the first $120 \, \mathrm{s}$, but thereafter exhibits a smooth secular increase reflecting the acceleration of nuclear burning. Equipartition between the radial and non-radial kinetic energy in convective motions as suggested by \citet{arnett_09} does not hold exactly; instead, we observe $E_{\theta,\varphi}>E_r$ for most of the simulation, suggesting that there may not be a universal ratio between the non-radial and radial kinetic energy and that this ratio is instead somewhat dependent on the shell geometry (width-to-radius ratio, ratio of width and pressure scale height), which can vary across different burning shells, progenitors, and evolutionary phases. There may also be stochastic variations in the eddy geometry that the convective flow selects (see Appendix~\ref{app:res}). Anisotropic numerical dissipation might also account for different results in different numerical simulations. The turbulent Mach number in the oxygen shell (Figure~\ref{fig:mach}) also increases steadily from about $0.04 \ldots 0.05$ after the initial transient to $0.1$ at collapse. Again, there is reasonable agreement between the MLT prediction $E_{r,\mathrm{1D}}$ for the convective kinetic energy and $E_r$ in the 3D simulation (Figure~\ref{fig:qnuc_and_ekin}). $E_{r,\mathrm{1D}}$ and $E_r$ are in fact closer to each other than $E_{\theta,\varphi}$ and $E_r$ in 3D. Somewhat larger deviations arise immediately prior to collapse when convection is no longer fast enough to adjust to the acceleration of nuclear burning as we shall discuss in Section~\ref{sec:freezeout}. Except for the last few seconds, the kinetic energy in convection scales nicely with the nuclear energy generation rate both in 1D and 3D.
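In practice, these diagnostics reduce to straightforward weighted sums over the simulation grid; the following Python sketch indicates one way to evaluate Equations~(\ref{eq:ekinr})--(\ref{eq:ekinl}) and the turbulent Mach number on a spherical grid, with array shapes and names chosen purely for illustration rather than taken from \textsc{Prometheus}.
\begin{verbatim}
# Sketch of the diagnostics defined above on a spherical (r, theta, phi) grid;
# rho, v_r, v_t, v_p and c_s are assumed arrays of shape (nr, nth, nph), dV
# and dOmega the cell volumes and solid angles, and `shell` a boolean radial
# mask selecting r_- <= r <= r_+ (all placeholders).
import numpy as np

def favre_average(q, rho, dOmega):
    """Mass-weighted (Favre) spherical average, one value per radius."""
    return (rho * q * dOmega).sum(axis=(1, 2)) / (rho * dOmega).sum(axis=(1, 2))

def shell_kinetic_energies(rho, v_r, v_t, v_p, dV, shell, dOmega):
    """E_r and E_{theta,phi} integrated over the oxygen shell."""
    vr_mean = favre_average(v_r, rho, dOmega)[:, None, None]
    E_r = 0.5 * (rho * (v_r - vr_mean) ** 2 * dV)[shell].sum()
    E_t = 0.5 * (rho * (v_t ** 2 + v_p ** 2) * dV)[shell].sum()
    return E_r, E_t

def turbulent_mach_profile(rho, v_r, c_s, dOmega):
    """RMS turbulent Mach number of the radial velocity fluctuations."""
    vr_mean = favre_average(v_r, rho, dOmega)[:, None, None]
    num = (rho * (v_r - vr_mean) ** 2 * dOmega).sum(axis=(1, 2))
    den = (rho * c_s ** 2 * dOmega).sum(axis=(1, 2))
    return np.sqrt(num / den)
\end{verbatim}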
For the case of steady-state convection, where the convective luminosity $L_\mathrm{conv}$ and $\dot{Q}_\mathrm{nuc}$ balance each other, MLT implies $v_\mathrm{conv}^3 \sim \dot{Q}_\mathrm{nuc} \Lambda_\mathrm{mix} /M_\mathrm{conv}$, where $M_\mathrm{conv}$ is the mass contained in the convective shell \citep[note that only the form of the equations is slightly different in these references]{biermann_32,arnett_09}. In Figure~\ref{fig:qconv}, we show the efficiency factors $\eta_\mathrm{conv}$ for the conversion of nuclear energy generation into turbulent kinetic energy\footnote{Note that $\eta_\mathrm{conv}$ does not correspond to the ``convective efficiency'' as often used in stellar evolution, i.e.\ it is not the ratio of the convective luminosity to the radiative luminosity. } $E_\mathrm{turb}$, \begin{equation} \label{eq:eta_conv} \eta_\mathrm{conv} =\frac{E_\mathrm{turb}/M_\mathrm{conv}}{(\dot{Q}_\mathrm{nuc} \Lambda_\mathrm{mix}/ M_\mathrm{conv})^{2/3}}, \end{equation} for both the 3D model (using either the component $E_r$ or $E_{\theta,\varphi}$ for $E_\mathrm{turb}$) and the \textsc{Kepler} model (using $E_\mathrm{turb}=E_{r,\mathrm{1D}}$), with $\Lambda_\mathrm{mix}$ set to the pressure scale height at the inner boundary of the oxygen shell. Between $130 \, \mathrm{s}$ and $290 \, \mathrm{s}$, $\eta_\mathrm{conv}$ shows only small fluctuations around $0.35$ and $0.5$ for the kinetic energy in radial and non-radial convective motions in 3D. For the \textsc{Kepler} model, we find similar values around $\eta_\mathrm{conv}\approx 0.37$. The scaling law $v_\mathrm{conv}^3 \sim \dot{Q}_\mathrm{nuc} \Lambda_\mathrm{mix} /M_\mathrm{conv}$ can also be understood as resulting from a balance of buoyant driving (or, equivalently, kinetic energy generation by a heat engine) and turbulent dissipation (see, e.g.\, \citealt{arnett_09} and, in a different context, \citealt{mueller_15a}). In this picture, the scaling law emerges if the mixing length is identified with the damping length $\Lambda_\mathrm{damp}$. This identification ($\Lambda_\mathrm{damp}=\Lambda_\mathrm{mix}=h_P$), however, has been criticized on the grounds that $\Lambda_\mathrm{damp}$ should correspond to the largest eddy scale, which can be considerably larger than $h_P$ if low-$\ell$ modes dominate the flow and the updrafts and downdrafts traverse the entire convection zone, which is precisely the situation that is realized in our 3D model. The disparity of the pressure scale height and the eddy scale can be quantified more rigorously by considering the radial correlation length $\Lambda_\mathrm{corr}$ for fluctuations in the radial velocity, $v'_r=v_r-\langle v_r\rangle$. Following \citet{meakin_07} and \citet{viallet_13}, we compute the radial correlation length $\Lambda_\mathrm{corr}$ as the full width at half maximum of the correlation function $C(r, \delta r)$, \begin{equation} \label{eq:lcorr} C (r,\delta r) = \frac{\langle v_r' (r,\theta,\varphi) v_r'(r+\delta r,\theta,\varphi)\rangle} {\sqrt{\langle {v'}_r^2 (r,\theta,\varphi) \rangle \langle {v'}_r^2 (r+\delta r,\theta,\varphi) \rangle}}. \end{equation} The correlation function is computed at a radius of $r=4000 \, \mathrm{km}$ in the inner half of the oxygen shell. $\Lambda_\mathrm{corr}$ is shown in Figure~\ref{fig:correlation_length} and compared to the pressure scale height $\Lambda_\mathrm{mix}=h_P$ at the inner boundary of the oxygen shell and the extent $\Lambda_\mathrm{conv}$ of the convective region.
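A possible discrete implementation of this procedure is sketched below; for brevity, plain (unweighted) angular averages are used in place of the Favre averages, and the half-maximum crossing is located in a simplified way, so the snippet should be read as an illustration of Equation~(\ref{eq:lcorr}) rather than as the analysis code used here.
\begin{verbatim}
# Sketch of the radial correlation length of the radial-velocity fluctuations:
# C(r_i0, delta r) is evaluated against all radii and Lambda_corr is taken as
# the full width at half maximum; vr_prime is an assumed fluctuation array of
# shape (nr, nth, nph), r the radial coordinates in cm, and i0 the index of
# the reference radius (here ~4000 km).  Unweighted angular means are used
# for simplicity instead of Favre averages.
import numpy as np

def correlation_length(vr_prime, r, i0):
    num = (vr_prime[i0] * vr_prime).mean(axis=(1, 2))
    den = np.sqrt((vr_prime[i0] ** 2).mean()
                  * (vr_prime ** 2).mean(axis=(1, 2)))
    corr = num / den                    # C equals 1 at the reference radius i0
    above = np.where(corr >= 0.5)[0]    # radii above half maximum
    return r[above.max()] - r[above.min()]
\end{verbatim}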
Once convection is fully developed, we clearly have $\Lambda_\mathrm{corr}>\Lambda_\mathrm{mix}$ and $\Lambda_\mathrm{corr}\approx \Lambda_\mathrm{conv}/2$ (as expected for updrafts and downdrafts reaching over the entire zone). \citet{arnett_09} argued that the damping length should be of the order of the width $\Lambda_\mathrm{conv}$ of the convective zone under such circumstances. If we compute the efficiency factor $\eta_\mathrm{conv}$ based on $\Lambda_\mathrm{conv}$, \begin{equation} \eta_\mathrm{conv} =\frac{E_\mathrm{turb}/M_\mathrm{conv}}{(\dot{Q}_\mathrm{nuc} \Lambda_\mathrm{conv}/M_\mathrm{conv})^{2/3}}, \end{equation} we obtain suspiciously low values $\eta_\mathrm{conv}\lesssim 0.1$, however. This suggests that the effective damping length is set by the pressure scale height (or a multiple thereof) after all. One could opine that the energetics of the flow might still be described adequately by $\Lambda_\mathrm{damp}=\Lambda_\mathrm{corr}$, and that the efficiency factor $\eta_\mathrm{conv}$ merely happens to be relatively low. We argue, however, that there is a deeper reason for identifying $\Lambda_\mathrm{damp}$ with a multiple of the pressure scale height in the final phases of shell convection when neutrino cooling can no longer balance nuclear energy generation. The crucial point is that the average distance after which buoyant convective blobs have to return their excess enthalpy $h'$ to their surroundings cannot become arbitrarily large in a steady-state situation, and since enthalpy and velocity fluctuations $v'$ are correlated ($h'\sim v'^2$), this also limits the damping length. During the final stages, nuclear energy generation, convective transport, and turbulent dissipation must balance each other in such a way as to avoid both a secular build-up of an ever-growing unstable entropy/composition gradient (although the spherically averaged stratification always remains \emph{slightly} unstable) and a complete erasure of the superadiabatic gradient. Assuming that the Brunt-V\"ais\"al\"a frequency is primarily set by the gradient of the entropy $s$, this implies $\partial^2 s/\partial r \partial t \approx 0$, and hence roughly constant entropy generation, \begin{equation} \dot{s}=\frac{\partial s}{\partial t}\approx \mathrm{const.} \end{equation} throughout the convective region. In the late pre-collapse stages, we can relate $\dot{s}$ to the local nuclear energy generation rate $\dot{\epsilon}_\mathrm{nuc}$ and the derivative of the ``total'' convective luminosity $L_\mathrm{conv}$, \begin{equation} \label{eq:dot_s} \dot{s} \approx \frac{\dot{\epsilon}_\mathrm{nuc}}{T} + \frac{1}{T} \frac{\partial L_\mathrm{conv}}{\partial m} = \frac{\dot{\epsilon}_\mathrm{nuc}}{T} + \frac{1}{4 \pi r^2 \rho T} \frac{\partial L_\mathrm{conv}}{\partial r}. \end{equation} Here, $L_\mathrm{conv}$ denotes the net total energy flux resulting from fluctuations (denoted by primes) in the total energy density and velocity around their spherical Favre average, \begin{equation} L_\mathrm{conv} = r^2 \int \left[\rho e+P+\rho \frac{v^2}{2}\right]' v'_r \, \ensuremath{\mathrm{d}} \Omega, \end{equation} where $e$ is the specific internal energy, and $P$ the pressure. Note that in formulating Equation~(\ref{eq:dot_s}), we implicitly assumed that $\partial{L_\mathrm{conv}}/\partial m$ is equal to the rate of change of the Favre average of the internal energy $e$ (instead of the total energy density, which includes the contribution of the turbulent kinetic energy).
This assumption is justified for steady-state convection in the late pre-collapse phase because of moderate Mach numbers and the minor role of neutrino cooling. These two factors imply that the energy that is generated by nuclear reactions and distributed throughout the unstable region by convection mostly goes into internal energy (whereas our argument cannot be applied to earlier phases where neutrino cooling and nuclear energy generation balance each other). Figure~\ref{fig:lconv} shows the two terms contributing to $\dot{s}$ in Equation~(\ref{eq:dot_s}) based on Favre averages over a few time steps around $210 \, \mathrm{s}$, and demonstrates that $\dot{s}$ is indeed roughly constant throughout the convective region. Since strong nuclear burning is confined to a narrow layer at the bottom of the convective shell, we even have $T^{-1} \partial L_\mathrm{conv}/\partial m \approx \mathrm{const.}$ throughout a large part of the shell, and for a stratification with roughly $\rho \propto r^{-3}$ and $T \propto r^{-1}$, this leads to \begin{equation} \frac{\partial L_\mathrm{conv}}{\partial r} \propto r^2 \rho T \propto r^{-2}. \end{equation} For such an idealized case, one can directly compute that the energy transported by convective blobs from the lower boundary must be dissipated after an average distance of \begin{eqnarray} \nonumber \Lambda_\mathrm{damp} &=& -\frac{1}{L_\mathrm{conv}(r_-)-L_\mathrm{conv}(r_+)} \int\limits_{r_-}^{r_+} (r-r_-) \frac{\partial L_\mathrm{conv}}{\partial r}\ensuremath{\mathrm{d}} r \\ \label{eq:ldamp} &=&r_- \left(\frac{\xi \ln \xi}{\xi -1 }-1\right), \end{eqnarray} where $\xi=r_+/r_-$ is the ratio of the outer and inner boundary radius. Evidently, $\Lambda_\mathrm{damp}$ grows only moderately at large $\xi$, and is always smaller than $(r_+-r_-)/2$. It thus appears unlikely that large damping lengths $\Lambda_\mathrm{damp} \approx r_+-r_- \gg h_P$ can be realized in very extended convection zones in the final pre-collapse stage. This is decidedly different to earlier stages with strong neutrino cooling in the outer part of the convective zone, for which \citet{arnett_09} found high values of $\Lambda_\mathrm{damp} =0.85 (r_+-r_-)$. As outlined before, the different behavior is likely due to the specific physical conditions right before collapse; in the absence of strong cooling, the self-regulatory mechanism that we outlined above automatically ensures that $\Lambda_\mathrm{damp}$ cannot be considerably larger than the pressure scale height. Thus, the implicit identification of $\Lambda_\mathrm{damp}$ and $h_P$ (or a multiple thereof) in MLT is likely less critical for shell convection right before collapse than for earlier phases. However, it still remains to be determined whether the damping length can reach considerably higher values in deep convection zones with $\xi \gg 1$ during earlier stages when nuclear energy generation and neutrino cooling balance each other. Since neutrino cooling generally decreases with radius within a shell, it can still be argued that the convective luminosity must decay not too far away from the burning region. Thus, an analog to Equation~(\ref{eq:ldamp}) could still hold, and the damping length would only increase slowly with the width of the shell in the limit of large $\xi$. 
In that case, the difference between our simulation and the results of \citet{arnett_09} would merely be due to a different depth of the convective zone, which is much deeper in our model ($4\ldots 5$ pressure scale heights as opposed to $\mathord{\sim} 2$ pressure scale heights in \citealt{arnett_09}), so that we approach a ``saturation limit'' for the damping length and can more conveniently distinguish the damping length from the width of the convective zone, since the different length scales are sufficiently dissimilar. Radial profiles of the convective velocities also point to reasonable agreement between MLT and the 3D simulation. In the upper panel of Figure~\ref{fig:comp_vel}, we compare the convective velocities from \textsc{Kepler} to RMS averages of the fluctuations of the radial velocity ($\delta v_r$) and the transverse velocity component ($\delta v_t$) at $210 \, \mathrm{s}$, \begin{eqnarray} \delta v_r &=& \left( \frac{\int \rho (v_r-\langle v_r\rangle)^2 \, \ensuremath{\mathrm{d}} \Omega} {\int \rho \, \ensuremath{\mathrm{d}} \Omega} \right)^{1/2}, \\ \delta v_t &=& \left( \frac{\int \rho (v_\theta^2+v_\varphi^2) \, \ensuremath{\mathrm{d}} \Omega} {\int \rho \, \ensuremath{\mathrm{d}} \Omega} \right)^{1/2}. \end{eqnarray} We also compare these to the MLT estimate $v_\mathrm{conv}=\omega_\mathrm{BV} \Lambda_\mathrm{mix}$ computed from the Brunt-V\"ais\"al\"a frequency for the spherically averaged stratification of the 3D model. It is evident that the agreement, especially between $\delta v_r$ and the convective velocity in \textsc{Kepler}, is very good in the oxygen shell. In large parts of the shell, $v_\mathrm{conv}=\omega_\mathrm{BV} \Lambda_\mathrm{mix}$ is also in very good agreement with $\delta v_r$, which again demonstrates that the choice of the pressure scale height as the acceleration and damping length for convective blobs is a reasonable one. However, no reasonable comparison can be made in the outer part of the oxygen shell, where $\omega_\mathrm{BV}^2$ is formally negative. This is due to the strong aspherical deformation of the shell boundary and the entrainment of light, buoyant material from the carbon shell; the fact that the outer part of the oxygen shell is formally stable if $\omega_\mathrm{BV}$ is computed from spherical averages of the density and pressure is thus merely a boundary effect and has no bearing on the validity of MLT in the interior of the shell. The good agreement between the 3D simulation and the \textsc{Kepler} model may seem all the more astonishing considering the rescaling of the Brunt-V\"ais\"al\"a frequency according to Equation~(\ref{eq:rescaling}) for stability reasons. However, this procedure is justified by the fact that the convective luminosity automatically adjusts itself in such a way as to avoid a secular build-up of $\omega_\mathrm{BV}$ as discussed before. In a steady state, the convective luminosity in MLT in a shell will roughly balance the nuclear energy generation rate, $L_\mathrm{conv}=4 \pi r^2 F_\mathrm{conv}\sim \dot{Q}_\mathrm{nuc}$, regardless of whether $\omega_\mathrm{BV}$ is rescaled or not. If a rescaling factor is introduced in Equation~(\ref{eq:mlt2}), the result is simply that a larger $\omega_\mathrm{BV}$ is maintained under steady-state conditions to balance the rescaling factor. Except for pathological situations, the convective energy flux and the convective velocities are thus essentially unaffected by this procedure. The superadiabaticity of the stratification is changed, however.
For convection at low Mach number, it will be systematically overestimated. This trend is evident from the lower panel of Figure~\ref{fig:comp_vel}, which compares $\omega_\mathrm{BV}$ in \textsc{Kepler} and the 3D simulation. Since convection is not extremely subsonic in our case, the rescaling factor is only slightly smaller than unity, and the superadiabaticity in the 1D and 3D models remains quite similar. \subsection{Freeze-Out of Convection} \label{sec:freezeout} MLT in \textsc{Kepler} thus provides good estimates for the typical convective velocities in the final stages of oxygen shell burning as long as a steady-state balance between nuclear energy generation, convective energy transport, and turbulent dissipation is maintained. However, steady-state conditions are not maintained up to collapse. Figure~\ref{fig:qconv} shows that the growth of the turbulent kinetic energy can no longer keep pace with the acceleration of nuclear burning in the last few seconds before collapse, where $\eta_\mathrm{conv}$ drops dramatically. The time at which convection ``freezes out'' can be nicely determined by appealing to a time-scale argument: Freeze-out is expected once the nuclear energy generation rate (which sets the Brunt-V\"ais\"al\"a frequency and the convective velocity under steady-state conditions) changes significantly over a turnover time-scale. More quantitatively, the efficiency factor $\eta_\mathrm{conv}$ drops abruptly once the freeze-out condition \begin{equation} \label{eq:freezeout_omega} \frac{1}{\dot{Q}_\mathrm{nuc}}\frac{\ensuremath{\mathrm{d}} \dot{Q}_\mathrm{nuc}}{\ensuremath{\mathrm{d}} t}=\frac{\omega_\mathrm{BV,max}}{2\pi} \end{equation} is met as shown in the bottom panel of Figure~\ref{fig:qconv}. Equivalently, the freeze-out condition can be expressed in terms of the convective turnover time $t_\mathrm{conv}$, \begin{equation} t_\mathrm{conv}=\frac{\Lambda_\mathrm{conv}}{\bar{v}_\mathrm{conv}}, \end{equation} where $\bar{v}_\mathrm{conv}$ is an appropriate global average of the convective velocity, e.g., \begin{equation} \bar{v}_\mathrm{conv}=(2 E_{\mathrm{kin},r}/M_\mathrm{conv})^{1/2}. \end{equation} Using these definitions, we find that freeze-out occurs roughly when \begin{equation} \frac{1}{\dot{Q}_\mathrm{nuc}} \frac{\ensuremath{\mathrm{d}} \dot{Q}_\mathrm{nuc}}{\ensuremath{\mathrm{d}} t}= t_\mathrm{conv}^{-1}, \end{equation} which may be even more intuitive than Equation~(\ref{eq:freezeout_omega}). Somewhat astonishingly, the \textsc{Kepler} run shows a similar drop of $\eta_\mathrm{conv}$ in the last seconds, although MLT implicitly assumes steady-state conditions when estimating the density contrast and the convective velocity. \textsc{Kepler} still overestimates the volume-integrated turbulent kinetic energy somewhat after freeze-out (Figure~\ref{fig:qnuc_and_ekin}), but the discrepancy between the 1D and 3D models is not inordinate. The key to the relatively moderate differences can be found in profiles of the turbulent convective Mach number $v_\mathrm{conv}/c_s$ in \textsc{Kepler} and $\langle \mathrm{Ma}_r^2\rangle^{1/2}$ in 3D at the onset of collapse in Figure~\ref{fig:mach_collapse}. Evidently, MLT only overestimates the convective velocities in a narrow layer at the lower boundary of the oxygen shell, where the acceleration of nuclear burning greatly amplifies the superadiabaticity of the stratification (as quantified by $\omega_\mathrm{BV}$).
This \emph{immediately} increases $v_\mathrm{conv}$, whereas the convective velocity field adjusts only on a longer time-scale ($\mathord{\gtrsim} \omega_\mathrm{BV}^{-1}$) in 3D. However, even in the \textsc{Kepler} run, the convective velocities in the middle and outer region of the oxygen shell remain unaffected by the increase of $\omega_\mathrm{BV}$ close to the inner shell boundary. Different from the innermost region, where $\omega_\mathrm{BV}$ reacts instantaneously to the nuclear source term, $\omega_\mathrm{BV}$ (and hence the convective velocity in the outer region) responds to the accelerated burning on a diffusion time-scale, which is again of order $\omega_\mathrm{BV}^{-1}$. For a slightly different reason (insufficient time for convective diffusion vs.\, insufficient time for the growth of plumes), the \textsc{Kepler} run therefore exhibits a similar freeze-out of convection as the 3D model. We thus conclude that the volume-integrated turbulent kinetic energy and the average convective Mach number in 1D stellar evolution codes still provide a reasonable estimate for the state of convection \emph{even right at collapse}. The spatial distribution of the turbulent kinetic energy, on the other hand, appears more problematic; it will be somewhat overestimated in the shell source at collapse due to the instantaneous reaction of $\omega_\mathrm{BV}$ to the increasing burning rate. The rescaling of $\omega_\mathrm{BV}$ in \textsc{Kepler} according to Equation~(\ref{eq:rescaling}) can also affect the time of freeze-out at a minor level. For a convective Mach number of $\mathord{\sim} 0.1$, the rescaling procedure changes $\omega_\mathrm{BV}$ only by $\mathord{\sim} 30 \%$, and given the very rapid increase of $\ensuremath{\mathrm{d}} \ln \dot{Q}_\mathrm{nuc}/\ensuremath{\mathrm{d}} t$, this will not shift the time of freeze-out appreciably. \begin{figure} \includegraphics[width=\linewidth]{f12-eps-converted-to.pdf} \caption{Power $c_\ell^2$ in different multipoles $\ell$ for the decomposition of the radial velocity at $r=4000 \, \mathrm{km}$ into spherical harmonics $Y_{\ell m}$ in the 3D model at different times computed according to Equation~(\ref{eq:multipoles}). The dominant angular wave number shifts from $\ell=3\ldots 5$ to $\ell=2$ over the course of the simulation. The dashed line indicates a slope of $\ell^{-5/3}$, which is roughly expected for a Kolmogorov spectrum above the injection scale in wave number (i.e.\, at smaller spatial scales). \label{fig:vel_spectra}} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f13-eps-converted-to.pdf} \caption{Estimate of the typical angular wave number $\ell=\pi/2 \times (r_+ + r_-) /(r_+ - r_-)$ of convection in the linear regime from the inner and outer boundary radius of the oxygen shell according to Equation~(\ref{eq:dominant_l}) and \citet{foglizzo_06}. The rapid drop at the end of the simulation is evident and suggests that the emergence of a global $\ell=2$ mode is due to the rapid contraction of the iron and silicon core shortly before collapse. \label{fig:scale}} \end{figure} \subsection{Scale of Convective Eddies} The role of progenitor asphericities in the explosion mechanism depends not only on the magnitude of the convective velocities in the burning shells, but also on the \emph{angular} scale of the infalling eddies.
MLT does not make any strong assumptions about the eddy scale; it assumes a radial correlation length for entropy and velocity perturbations, but such a correlation length can in principle be realized with very different flow geometries. Empirically, simulations of buoyancy-driven convection in well-mixed shells are usually characterized by eddies of similar radial and angular extent $d$ that reach across the entire unstable zone \citep[e.g.][]{arnett_09}. The dominant modes are also typically close in scale to the most unstable modes in the linear regime \citep{chandrasekhar_61,foglizzo_06}, which have $d \sim r_+ - r_-$. This correspondence between the linear and non-linear regime has sometimes been justified by heuristic principles for the selection of the eddy scale based on maximum kinetic energy or maximum entropy production \citep{malkus_58,martyushev_06}. Expressing the balance of kinetic energy generation due to the growth of an instability with a scale-dependent growth rate $\omega (d)$ and turbulent dissipation for the dominant mode in a shell with mass $M$ yields \begin{equation} \dot{E}_\mathrm{kin} \sim \omega(d) E_\mathrm{kin} -v\frac{E_\mathrm{kin}}{d} \sim \omega (d) E_\mathrm{kin} -\frac{\sqrt{2}E_\mathrm{kin}^{3/2}}{d M^{1/2}} =0 \end{equation} for the change of the kinetic energy $E_\mathrm{kin}$ in a given mode. The dominant mode(s) in the non-linear regime will be the one(s) for which \begin{equation} E_\mathrm{kin} \propto M d^2 \omega(d)^2 \end{equation} is maximal, which actually suggests a bias towards slightly larger scales than in the linear regime. A superficial inspection of Figures~\ref{fig:snap2d_a} and \ref{fig:snap2d_b} already reveals that our 3D models conform to the typical picture with $d \sim r_+ - r_-$. More quantitatively, the dominance of large-scale modes is shown by a decomposition of the radial velocity in the inner half of the oxygen shell (at a radius of $4000 \, \mathrm{km}$) into spherical harmonics (for more sophisticated decompositions of the flow field see \citealt{fernandez_14,chatzopoulos_14}). In Figure~\ref{fig:vel_spectra}, we plot the total power $c_{\ell}^2$ for each multipole order $\ell$, \begin{equation} \label{eq:multipoles} c_{\ell}^2 = \sum_{m=-\ell}^\ell \left |\int Y^*_{\ell m} (\theta,\varphi) v_r (4000 \, \mathrm{km},\theta,\varphi) \, \ensuremath{\mathrm{d}} \Omega\right|^2, \end{equation} which shows a clear peak at low $\ell$ that slowly moves from $\ell=4$ down to $\ell=2$ over the course of the simulation. The tail at high $\ell$ above the typical eddy scale roughly exhibits an $\ell^{-5/3}$ slope as expected for a Kolmogorov-like turbulent cascade \citep{kolmogorov_41} because of the rough proportionality between $\ell$ and the wave number \citep{peebles_93}. The dominant eddy scale is consistent with the crude estimate that the dominant $\ell$ is given by the number of convective eddies of diameter $d= r_+ - r_-$ that can be fitted into one hemisphere of the convective shell \citep{foglizzo_06}, \begin{equation} \label{eq:dominant_l} \ell = \frac{\pi (r_+ + r_-)}{2 (r_+ - r_-)}. \end{equation} This estimate for the dominant multipole order is plotted in Figure~\ref{fig:scale}. It agrees well with spectra of the radial velocity, although it may not clearly predict the emergence of the dominant quadrupole at the end (which is compatible with our argument that the dominant angular scale for fully developed convection is slightly larger than in the linear regime). 
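For readers who wish to reproduce such a decomposition, the following Python sketch evaluates the power $c_\ell^2$ of Equation~(\ref{eq:multipoles}) for a radial-velocity map sampled on a regular latitude--longitude grid; the grid layout and the simple midpoint quadrature are illustrative assumptions and not the actual post-processing used for Figure~\ref{fig:vel_spectra}.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def multipole_power(v_r, theta, phi, lmax=12):
    """c_l^2 = sum_m | int Y*_lm v_r dOmega |^2 for a map v_r(theta, phi)."""
    # theta: polar angle in (0, pi), phi: azimuth in [0, 2 pi);
    # v_r has shape (len(theta), len(phi))
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    dOmega = np.sin(th) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    power = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Ylm = sph_harm(m, l, ph, th)   # SciPy convention: (m, l, azimuth, polar)
            c_lm = np.sum(np.conj(Ylm) * v_r * dOmega)
            power[l] += np.abs(c_lm) ** 2
    return power
\end{verbatim}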
The slowly changing geometry of the shell evidently accounts nicely for the secular trend towards modes of lower $\ell$. Figure~\ref{fig:correlation_length} reveals that both the contraction of the inner boundary of the shell by about one third in radius and a secular expansion of the (somewhat ill-defined) outer shell boundary contribute to this trend. The fast change of $\ell$ right before collapse is clearly due to the contraction, however, as the outer boundary radius \emph{decreases} again shortly before collapse. The expansion of the outer boundary is not seen in the \textsc{Kepler} model and is the result of entrainment of material from the carbon shell (see Section~\ref{sec:mixing} below). If the amount of entrainment is physical, this is another reason to suspect that estimates of the dominant angular scale based on stellar evolution models using Equation~(\ref{eq:dominant_l}) will slightly overestimate the dominant $\ell$. Considering uncertainties and progenitor variations in the shell structure, Equation~(\ref{eq:dominant_l}) nonetheless furnishes a reasonable zeroth-order estimate of the typical eddy scale. \begin{figure} \includegraphics[width=\linewidth]{f14-eps-converted-to.pdf} \caption{ Spherically averaged profiles of the entropy (violet curves, top panel) and the mass fractions of oxygen (black), silicon (red), and sulfur (blue) in the 3D run (solid curves) at $210 \, \mathrm{s}$ compared to profiles from the 1D \textsc{Kepler} model (dashed) at the same time. Note that the slope in the mass fractions is somewhat steeper in the \textsc{Kepler} model, which we ascribe to the use of an extra factor of $1/3$ in the diffusion equation for compositional mixing. \label{fig:comp_mix}} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f15-eps-converted-to.pdf} \caption{Mass $M_\mathrm{conv}$ contained in the convective oxygen shell in the 3D simulation as a function of time. The mass increases by about $0.05 M_\odot$ due to entrainment, which accelerates slightly towards the end of the simulation as a result of higher convective velocities and Mach numbers. Note that the small changes in the first $\mathord{\sim} 30 \, \mathrm{s}$ are simply due to the advection of the entropy discontinuities over the grid in the wake of hydrostatic adjustment, as a result of which cells can jump around the threshold entropies of $3.6 k_b / \mathrm{nucleon}$ and $5.2 k_b / \mathrm{nucleon}$ that we use to define the shell boundaries. ``Physical'' entrainment begins once the first convective plumes reach the boundary between the carbon and oxygen shell around $\mathord{\sim} 30 \, \mathrm{s}$ (denoted by a vertical line). \label{fig:mconv}} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f16-eps-converted-to.pdf} \caption{Turbulent mass flux $4 \pi r^2 \langle \rho' v'_r \rangle$ in the 3D model at a time of $210 \, \mathrm{s}$ as a function of enclosed mass $m$. Positive values around the outer boundary of the oxygen shell at $m \approx 2.3 M_\odot$ indicate entrainment of material from the carbon shell. The peak value of $1.4 \times 10^{-4} M_\odot \, \mathrm{s}^{-1}$ roughly corresponds to the average entrainment rate over the course of the simulation. 
\label{fig:turb_mflx}} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f17-eps-converted-to.pdf} \caption{Comparison of the measured entrainment rate $\dot{M}_\mathrm{conv}$ in the 3D simulation (black) and a fit based on Equation~(\ref{eq:entrainment}) (red) computed using $A=0.37$ and a global average for the convective velocity (see text for details). Overall, the time-dependent entrainment rate nicely follows Equation~(\ref{eq:entrainment}). Note that no data is shown later than $290 \, \mathrm{s}$, as the detection of the outer boundary of the oxygen shell becomes problematic due to increasingly violent boundary mixing shortly before collapse. As in Figure~\ref{fig:mconv}, the dashed vertical line indicates the time when convective plumes first encounter the outer boundary and physical entrainment begins. \label{fig:dotmconv}} \end{figure} \subsection{Comparison of Convective Mixing in 1D and 3D} \label{sec:mixing} Although the properties of the velocity field are more directly relevant for the potential effect of progenitor asphericities on supernova shock revival, some remarks about convective mixing in our 3D model are still in order. In Figure~\ref{fig:comp_mix}, we compare spherically averaged profiles of the entropy $s$ and the mass fractions of oxygen, silicon, and sulfur ($X_\mathrm{O}$, $X_\mathrm{Si}$, and $X_\mathrm{S}$) from the 3D model to the \textsc{Kepler} run at a time of $210 \, \mathrm{s}$. Although the treatment of convective mixing as a diffusive process in 1D has sometimes been criticized \citep{arnett_09}, the differences in the interior of the oxygen shell remain minute; the most conspicuous among them are the somewhat steeper gradients in the mass fractions in \textsc{Kepler}. These could potentially contribute (on a very modest level) to the lower total nuclear energy generation rate in \textsc{Kepler}, since the nuclear energy generation rate is roughly proportional to the square of the mass fraction $X_\mathrm{O}$ of oxygen in the burning region. Even if we account for spatial fluctuations in the composition by computing $\langle X_\mathrm{O}^2 \rangle$, the compositional differences do not appear to be sufficiently large to explain the different burning rates; temperature changes due to hydrostatic adjustment thus seem to be the major cause of the somewhat higher total nuclear energy generation rate in \textsc{Prometheus}. It is unclear whether the composition gradients are really an artifact of MLT; we find it equally plausible that they simply stem from the choices of different coefficients $\alpha_2$ and $\alpha_3$ for energy transport and compositional mixing in Equations~(\ref{eq:mlt2}) and (\ref{eq:mlt3}). The introduction of an additional factor of $1/3$ in Equation~(\ref{eq:mlt3}) is typically justified by interpreting turbulent mixing as a random walk process of convective blobs with a mean free path $\Lambda_\mathrm{mix}$ and a \emph{total} velocity $v_\mathrm{conv}$ with random orientation, which translates into a \emph{radial} correlation length $\Lambda_\mathrm{mix}/\sqrt{3}$ and an RMS-averaged radial velocity of $\langle v_r ^2\rangle^{1/2}=v_\mathrm{conv}/\sqrt{3}$. However, the mixing length and MLT velocity are implicitly identified with the radial correlation length and $\langle v_r ^2\rangle^{1/2}$ in Equation~(\ref{eq:mlt2}) already, so that the choice $\alpha_3=\alpha_2$ rather than $\alpha_3=\alpha_2/3$ is arguably more appropriate.
With such a (more parsimonious) choice of parameters, the composition gradients would be flattened considerably. Figure~\ref{fig:comp_mix} also shows evidence of boundary mixing (entrainment; \citealp{fernando_91,strang_01,meakin_07}) that is not captured in the \textsc{Kepler} run. The fact that the entropy and composition gradients are smeared out at the boundaries (especially at the outer boundary) is mostly due to the aspherical deformation of the shell interface by Kelvin-Helmholtz/Holmb\"oe waves; the shell boundary remains relatively well defined in the multi-D snapshots in Figures~\ref{fig:snap2d_a} and \ref{fig:snap2d_b}. However, the oxygen shell is clearly expanding in $m$ at the outer boundary. To capture the increase of the total mass $M_\mathrm{conv}$ in the convective oxygen shell, we integrate the mass in all zones with entropies between $3.6 k_b / \mathrm{nucleon}$ and $5.2 k_b / \mathrm{nucleon}$ (Figure~\ref{fig:mconv}). $M_\mathrm{conv}$ increases by about $0.05 M_\odot$ over the course of the simulation with some evidence for higher $\dot{M}_\mathrm{conv}$ towards the end, corresponding to an entrainment rate of $1.4 \times 10^{-4} M_\odot \, \mathrm{s}^{-1}$, which is also roughly the maximum value of the turbulent mass flux $4 \pi r^2 \langle \rho' v_r'\rangle$ that is reached in the formally stable region around the outer boundary (Figure~\ref{fig:turb_mflx}). Higher resolution is ultimately required to decide whether this entrainment rate is physical or partially due to numerical diffusion, which could lead to an overestimation of the amount of entrained mass in wave breaking events (see Appendix~\ref{app:res}). Our simulations are, however, consistent with semi-empirical entrainment laws found in the literature. Laboratory experiments and simulations \citep{fernando_91,strang_01,meakin_07} suggest \begin{equation} \label{eq:entrainment} \dot{M}_\mathrm{conv} =4 \pi r^2 \rho v_\mathrm{conv} A \, \mathrm{Ri}_\mathrm{B}^{-1}, \end{equation} for the entrainment rate in the relevant regime of the bulk Richardson number $\mathrm{Ri}_\mathrm{B}$ and a dimensionless proportionality constant $A$. $\mathrm{Ri}_\mathrm{B}$ is defined in terms of the density contrast $\delta \rho /\rho$ at the interface, the gravitational acceleration $g$, the typical convective velocity $v_\mathrm{conv}$, and the eddy scale $\Lambda$ as \begin{equation} \mathrm{Ri}_\mathrm{B}= \frac{\delta \rho}{\rho} \frac{g \Lambda}{v_\mathrm{conv}^2}. \end{equation} If we identify $\Lambda$ with the pressure scale height, this amounts to \begin{equation} \label{eq:rib} \mathrm{Ri}_\mathrm{B} = \frac{\delta \rho}{\rho} \frac{P}{\rho v_\mathrm{conv}^2}. \end{equation} In our case, we have $\delta \rho/\rho =0.1$, and with $v_\mathrm{conv}= 2.5 \times 10^7 \, \mathrm{cm} \, \mathrm{s}^{-1}$ (corresponding to the non-radial velocities near the boundary, which are relevant for the dynamics of interfacial Holmb\"oe/Kelvin-Helmholtz waves), we obtain $\mathrm{Ri}_\mathrm{B} =17$, indicating a very soft boundary. Together with an average convective velocity of $\mathord{\sim} 200 \, \mathrm{km} \, \mathrm{s}^{-1}$ and an average entrainment rate of $1.4 \times 10^{-4} M_\odot \, \mathrm{s}^{-1}$, this points to a low $A \sim 0.1$ in the entrainment law~(\ref{eq:entrainment}), although the ambiguities inherent in the definition of $\mathrm{Ri}_\mathrm{B}$ can easily shift this by an order of magnitude, which may account for the higher value $A\approx 1$ obtained by \citet{meakin_07}.
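To make the bookkeeping behind Equations~(\ref{eq:entrainment}) and (\ref{eq:rib}) explicit, a minimal Python sketch is given below; it only encodes the two formulas and must still be supplied with the boundary values of $r$, $\rho$, and $P$ from a simulation, which we do not reproduce here.
\begin{verbatim}
import numpy as np

def bulk_richardson(drho_rho, P, rho, v_conv):
    # Eq. (rib): Ri_B = (delta rho / rho) * P / (rho * v_conv^2),
    # with the eddy scale identified with the pressure scale height
    return drho_rho * P / (rho * v_conv**2)

def entrainment_rate(r, rho, v_conv, A, Ri_B):
    # Eq. (entrainment): dM/dt = 4 pi r^2 rho v_conv A / Ri_B  (g/s in CGS units)
    return 4.0 * np.pi * r**2 * rho * v_conv * A / Ri_B

# With the boundary values of r, rho, and P from the simulation, these relations
# reproduce the quoted Ri_B = 17 and the fitted proportionality constant A.
\end{verbatim}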
It is obvious that the calibration of the entrainment law is fraught with ambiguities: If we calibrate Equation~(\ref{eq:entrainment}) by using a global average for $v_\mathrm{conv}$, \begin{equation} v_\mathrm{conv}=\sqrt{\frac{2 E_{\mathrm{kin},r}}{M_\mathrm{conv}}}, \end{equation} and the initial values for $\delta\rho /\rho$ and the density $\rho$ at the outer boundary radius $r$ in Equations~(\ref{eq:entrainment}) and (\ref{eq:rib}), the time-dependent entrainment rate is well fitted by $A=0.37$ (Figure~\ref{fig:dotmconv}). If anything, relatively low values of $A$ merely demonstrate that entrainment in our 3D model is no more affected by numerical diffusion than in comparable simulations. Considering the low value of the bulk Richardson number and the small entropy jump of $\sim 0.5 k_b/\mathrm{nucleon}$, which should be conducive to entrainment effects, the dynamical impact of boundary mixing in our simulation is remarkably small, but its long-term effect warrants further investigation. \section{Requirements for 3D Pre-Supernova Simulations} \label{sec:requirements} If 3D simulations of shell burning in massive stars are to be used as input for core-collapse simulations, it is essential that the typical convective velocities and eddy scales are captured accurately. The analysis of our model in the preceding section provides guidelines about the approximations that can (or cannot) be justified in such simulations. The emergence of large-scale motions ($\ell=2$ modes) during the final phase of our model implies that pre-SN models \emph{generally need to cover the full solid angle} (which has been done previously for oxygen shell burning only by \citealt{kuhlen_03}, albeit for an earlier phase). However, for sufficiently narrow convective shells, simulations restricted to a wedge or octant may still cover the flow geometry accurately, notwithstanding that such symmetry assumptions remain questionable in the ensuing SN phase. Thus, for the pre-SN phase, the assumption of octant symmetry in \citet{couch_15} may be adequate for their model of silicon shell burning, which has $r_+/r_- \approx 2$ towards the end of the simulation. The eddies should then remain of a moderate scale with a preferred wave number of $\ell \approx \pi/2 \, (r_++r_-)/(r_+-r_-)=4.71$. An accurate treatment of nuclear burning is even more critical because of the scaling of convective velocities with $(\dot{Q}_\mathrm{nuc}/M_\mathrm{conv})^{1/3}$. Since the nuclear generation rates in the silicon and oxygen shell are sensitive to the contraction of the deleptonizing iron core, this not only applies to the burning shell in question itself, but also to the treatment applied for the iron core. If the contraction of the core is artificially accelerated as in \citet{couch_15}, this considerably reduces the nuclear time-scale in the outer shells as well. For example, $\mathord{\sim}0.2 M_\odot$ of intermediate-mass elements in the silicon shell are burned to iron group elements within $160 \, \mathrm{s}$ in the 3D model of \citet{couch_15}, i.e.\, silicon burning on average proceeds $6.25$ times faster than in the corresponding stellar evolution model, where this takes $1000 \, \mathrm{s}$. This suggests an artificial increase of the convective velocities by $84 \%$ in their 3D model.
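The arithmetic behind these two numbers is elementary and can be reproduced in a few lines (a sketch; the factor-of-$6.25$ speed-up and the shell aspect ratio are the values quoted above):
\begin{verbatim}
import numpy as np

# v_conv scales with (Qdot_nuc/M_conv)^(1/3): accelerating the burning by a
# factor f inflates the convective velocities by f^(1/3).
f = 1000.0 / 160.0   # quoted time-scale ratio (1000 s vs 160 s) for the accelerated model
print(f"burning {f:.2f}x faster -> v_conv inflated by {f**(1.0/3.0) - 1.0:.0%}")

# Preferred angular wave number from Equation (dominant_l) for r_+/r_- = 2
xi = 2.0
ell = np.pi / 2.0 * (xi + 1.0) / (xi - 1.0)
print(f"preferred ell ~ {ell:.2f}")
\end{verbatim}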
Approximations that affect the nuclear burning time-scale are also problematic because they change the ratio $\tau_\mathrm{conv}/\tau_\mathrm{nuc}\propto \tau_\mathrm{nuc}^{-2/3}$, which plays a crucial role in the freeze-out of convective motions shortly before the onset of collapse (see Section~\ref{sec:freezeout}). If the nuclear burning is artificially accelerated and continues until collapse, then the freeze-out will occur somewhat earlier, which may compensate for the overestimation of convective velocities discussed before. However, the simulation of \citet{couch_15} suggests that the opposite may also occur: In their 3D model, silicon burning slows down towards the end of their simulation as the shell almost runs out of fuel. In the corresponding 1D stellar evolution model, convection in the original silicon shell has already died down completely as can be seen from their Figure~2, which shows non-zero convective velocities only in regions with $Y_e=0.5$. While it is conceivable that convection subsides more gradually in 3D as the available fuel is nearly consumed -- probably over a few turnover time-scales -- increasing the ratio $\tau_\mathrm{conv}/\tau_\mathrm{nuc}$ by a factor of $\gtrsim 3$ evidently introduces the risk of artificially \emph{prolonging} convective activity in almost fully burned shells. Other worries about the feasibility of multi-D simulations of supernova progenitors include the problem of thermal adjustment after mapping from a 1D stellar evolution model as well as artificial boundary mixing. We have largely circumvented the problem of thermal adjustment in this study by focusing on the final stages. The somewhat higher nuclear burning rate in the 3D model (by up to $\mathord{\sim} 50\%$ compared to \textsc{Kepler}), which may be due to physical multi-D effects or transients after the mapping such as an adjustment to a new hydrostatic equilibrium, suggests that even for a setup where the problem of hydrostatic and thermal adjustment is rather benign, we still face uncertainties of the order of $15\%$ -- because of $v_\mathrm{conv} \propto (\dot{Q}_\mathrm{nuc}/M_\mathrm{conv})^{1/3}$ -- in the final convective velocity field at collapse. The slight expansion of the outer boundary of the oxygen shell, which may be the result of an adjustment effect or driven by (physical) entrainment, also deserves attention because it plays some role in fostering the emergence of an $\ell=2$ mode right before collapse. It appears less worrisome, however, since there are natural variations in shell geometry anyway, and since the emergence of the $\ell=2$ mode may still be primarily driven by the contraction of the inner shell boundary. There is no evidence for artificial boundary mixing at this stage, although further high-resolution tests remain desirable. \section{Effect of Convective Seed Perturbations on Supernova Shock Revival} \label{sec:theory} With typical convective Mach numbers of $\sim 0.1$ and a dominant $\ell=2$ mode at collapse, the progenitor asphericities fall in the regime where they may be able to affect shock revival in the ensuing core-collapse supernova, as has been established by the parameter study of \citet{mueller_15a}.
Considering that several recent works have shown that the conditions for shock revival in multi-D can be captured with good accuracy by surprisingly simple scaling laws \citep{mueller_15a,summa_16,janka_16,mueller_16} that generalize the concept of the critical luminosity \citep{burrows_93} to multi-D, it is reasonable to ask whether the effect of progenitor asphericities can also be predicted more quantitatively by simple analytic arguments. Given the good agreement between our 3D model and MLT, such a theory could help to better identify progenitors for which convective seed asphericities play a major role in the explosion before investing considerable computer time into multi-D simulations. The key ingredient to accomplish this consists in a first quantitative theory for the interaction of asymmetries in the supersonic infall region with the shock, which \citet{mueller_15a} only described qualitatively as ``forced shock deformation''. The starting point is the translation of initial radial velocity perturbations into density perturbations at the shock due to differential infall \citep{mueller_15a}, \begin{equation} \delta \rho_\mathrm{pre}/\rho_\mathrm{pre} \approx \mathrm{Ma}, \end{equation} which can also be understood more rigorously using linear perturbation theory \citep{goldreich_97,takahashi_14}. Note that we now designate the typical convective Mach number during convective shell burning simply as $\mathrm{Ma}$ to avoid cluttered notation. The perturbations in the transverse velocity components are amplified as $r^{-1}$ \citep{goldreich_97} and are roughly given by \begin{equation} \delta v_t \approx \mathrm{Ma}\, c_\mathrm{s,ini} (r_\mathrm{ini}/r_\mathrm{sh}), \end{equation} where $c_\mathrm{s,ini}$ and $r_\mathrm{ini}$ are the initial sound speed and radius of the shell before collapse and $r_\mathrm{sh}$ is the shock radius. Radial velocity perturbations only grow with $r^{-1/2}$ \citep{goldreich_97} and can therefore be neglected. \subsection{Generation of Turbulent Kinetic Energy by Infalling Perturbations} The interaction of the pre-shock perturbations with the shock can then be interpreted as an injection of additional turbulent kinetic energy into the post-shock region. While this problem has not yet been addressed in the context of spherical accretion onto a neutron star, the interaction of planar shocks with incident velocity and density perturbations has received some attention in fluid dynamics \citep{ribner_87,andreopoulos_00,wouchuk_09,huete_10,huete_12}. The perturbative techniques that allow a relatively rigorous treatment in the planar case cannot be replicated here, and we confine ourselves to a simple rule-of-thumb estimate for the generation of turbulent energy by the infalling perturbations: If we neglect the deformation of the shock initially, we can assume that transverse velocity perturbations $\delta v_\mathrm{t}$ and density fluctuations $\delta \rho /\rho$ compared to the spherically averaged flow are conserved across the shock as a first-order approximation. The anisotropy of the ram pressure will also induce pressure fluctuations $\delta P/P \sim \delta \rho /\rho$ downstream of the shock. In a more self-consistent solution, these pressure fluctuations would induce lateral flows and modify the shape of the shock, and larger vorticity perturbations would arise if the shock is asymmetric to begin with (which is important in the context of the SASI; see \citealt{foglizzo_07,guilet_12}).
As a crude first-order approximation, such an estimate is sufficient for our purpose; it is not incompatible with recent results about shocks traveling in inhomogeneous media \citep{huete_10,huete_12}. From the density and pressure perturbations $\delta \rho/\rho \sim \delta P/P \mathord{\sim} \mathrm{Ma}$ and transverse velocity perturbations $\delta v_t \sim \mathrm{Ma} \, c_\mathrm{s,ini} (r_\mathrm{ini}/r_\mathrm{sh})$ downstream of the shock, we can estimate fluxes of transverse kinetic energy ($F_\mathrm{t}$), acoustic energy ($F_\mathrm{ac}$), and an injection rate of kinetic energy due to the work done by buoyancy during the advection of the accreted material down to the gain radius $r_\mathrm{g}$. $F_\mathrm{t}$ is roughly given by, \begin{eqnarray} F_\mathrm{t} &=& \frac{\dot{M}}{2} \delta v_\mathrm{t}^2 = \frac{\dot{M}}{2} \mathrm{Ma}^2 c_\mathrm{s,ini}^2 \left(\frac{r_\mathrm{ini}}{r_\mathrm{sh}}\right)^2 \\ \nonumber &\approx& \mathrm{Ma}^2 \frac{GM \dot{M}}{6r_\mathrm{ini}} \left(\frac{2r_\mathrm{ini}^2}{3r_\mathrm{g} r_\mathrm{sh}}\right) = \mathrm{Ma}^2 \frac{GM \dot{M}}{9r_\mathrm{g}} \left(\frac{r_\mathrm{ini}}{r_\mathrm{sh}}\right), \end{eqnarray} where we approximated the initial sound speed as $c_\mathrm{s,ini}^2 \approx GM/(3r_\mathrm{ini})$, which is a good approximation for the shells outside the iron core. Note that we use a typical ratio $r_\mathrm{sh}/r_\mathrm{g}=3/2$ during the pre-explosion phase to express $F_\mathrm{t}$ in terms of the gravitational potential of the gain radius; the reason for this will become apparent when we compare the injection rate of turbulent kinetic energy at the shock to the contribution from neutrino heating. Following \citet{landau_fluid}, the acoustic energy flux can be estimated by assuming that the velocity fluctuations in acoustic waves are roughly $\delta v \sim c_s \delta P/P$ (where $c_s$ is the sound speed behind the shock and $\delta P\approx \mathrm{Ma}\, P$). The post-shock pressure $P$ can be determined from the jump conditions, \begin{equation} P=\rho_\mathrm{pre} \frac{\beta-1}{\beta} v_\mathrm{pre}^2 =\rho \frac{\beta-1}{\beta^2}\frac{GM}{r_\mathrm{sh}}, \end{equation} where $\rho_\mathrm{pre}$ and $\rho$ are the pre- and post-shock density, $\beta \approx 7$ is the compression ratio in the shock, and $v_\mathrm{pre}$ is the pre-shock velocity, which we approximate as $v_\mathrm{pre} =\sqrt{GM/r_\mathrm{sh}}$. The acoustic energy flux is thus, \begin{eqnarray} \nonumber F_\mathrm{ac} &=&4 \pi r_\mathrm{sh}^2\delta P\, \delta v =4 \pi r_\mathrm{sh}^2 \frac{\delta P^2 c_s}{P} =4 \pi \rho |v_r| r_\mathrm{sh}^2 \frac{\delta P^2 c_s}{\rho P |v_r|} \\ \nonumber &=& 4 \pi \rho |v_r| r_\mathrm{sh}^2 \frac{\mathrm{Ma}^2 c_s P}{\rho |v_r|} = \dot{M}\, \mathrm{Ma}^2 \frac{\beta-1}{\beta^2} \frac{GM}{r_\mathrm{sh}} \frac{\beta}{\sqrt{3}} \\ &\approx& 0.49\mathrm{Ma}^2 \frac{G M \dot{M}}{r_\mathrm{g}}. \end{eqnarray} Here, $|v_r|=v_\mathrm{pre}/\beta$ is the spherical average of the post-shock velocity, and $c_s^2\approx GM/(3 r_\mathrm{sh})$ has been used following \citet{mueller_15a}.
Finally, the gravitational potential energy corresponding to density fluctuations $\delta \rho$ will be converted into kinetic energy by buoyancy forces at a rate of\footnote{If we assume that the density perturbations in the post-shock region adjust on a dynamical time-scale as pressure equilibrium between over- and underdensities is established, then this estimate might be lower, but pressure adjustment itself would involve the generation of lateral flows and hence generate turbulent kinetic energy, so that our estimate is probably not too far off.} \begin{eqnarray} \nonumber F_\mathrm{pot} &=& \frac{\dot{M}\delta \rho}{\rho} \left(\frac{GM}{r_\mathrm{sh}}-\frac{GM}{r_\mathrm{g}}\right) = \mathrm{Ma}\, \dot{M} \left(\frac{GM}{r_\mathrm{g}}-\frac{GM}{r_\mathrm{sh}}\right) \\ &\approx& \mathrm{Ma} \frac{G M \dot{M}}{3r_\mathrm{g}}. \end{eqnarray} Especially for moderate Mach numbers, $F_\mathrm{pot}$ is clearly the dominating term, as the fluxes of acoustic and transverse kinetic energy scale with $\mathrm{Ma}^2$. In the absence of infalling perturbations, \citet{mueller_15a} established a semi-empirical scaling law that relates the transverse kinetic energy $E_\mathrm{kin,t}$ stored in the post-shock region to the volume-integrated neutrino heating rate $\dot{Q}_\nu$, the mass in the gain region, $M_\mathrm{g}$, and the shock and gain radius, \begin{equation} \label{eq:etrans} \frac{E_\mathrm{kin,t}}{M_\mathrm{g}} \approx \frac{1}{2} \left[\frac{(r_\mathrm{sh}-r_\mathrm{g}) \dot{Q}_\nu}{M_\mathrm{g}}\right]^{2/3}. \end{equation} At least for convection-dominated models, this scaling law can be understood as the result of a balance between kinetic energy generation by buoyancy and turbulent dissipation with a dissipation length $\Lambda=r_\mathrm{sh}-r_\mathrm{g}$ (cf.\, also \citealt{murphy_12}). Assuming a local dissipation rate of $v^3/\Lambda=(2E_\mathrm{kin,t}/M_\mathrm{g})^{3/2}/\Lambda$, this leads to \begin{equation} \label{eq:balance} \dot{E}_{\mathrm{kin,t}} = \dot{Q}_\nu-\frac{1}{\Lambda}\left(\frac{2E_\mathrm{kin,t}}{M_\mathrm{g}} \right)^{3/2} M_\mathrm{g}=0, \end{equation} from which Equation~(\ref{eq:etrans}) immediately follows. In the presence of infalling perturbations, it is natural to add another source term to Equation~(\ref{eq:balance}), \begin{equation} \dot{Q}_\nu +F_\mathrm{pot} -\frac{1}{\Lambda}\left(\frac{2E_\mathrm{kin,t}}{M_\mathrm{g}} \right)^{3/2} M_\mathrm{g}=0. \end{equation} To keep the calculation tractable, we only include the dominant contribution $F_\mathrm{pot}$ arising from infalling perturbations and discard $F_\mathrm{ac}$ and $F_\mathrm{t}$. However, this obviously poses the question about the appropriate choice for $\Lambda$, which can now no longer be assumed to be simply given by $r_\mathrm{sh}-r_\mathrm{g}$. To get some guidance, we can consider the limit in which neutrino heating is negligible; here the appropriate choice for $\Lambda$ is clearly given by the scale of the infalling perturbations, i.e.\ $\Lambda \approx \pi r_\mathrm{sh}/\ell$ in terms of their typical angular wave number $\ell$. Hence we find, \begin{equation} \label{eq:etrans_no_nu} \frac{E_\mathrm{kin,t}}{M_\mathrm{g}} = \frac{1}{2} \left[\frac{\pi r_\mathrm{sh} F_\mathrm{pot}}{\ell M_\mathrm{g}}\right]^{2/3}, \end{equation} in this limit.
The general case can be accommodated by simply interpolating between the two limits, \begin{equation} \label{eq:etrans_new} \frac{E_\mathrm{kin,t}}{M_\mathrm{g}} = \frac{1}{2} \left[ \frac{(r_\mathrm{sh}-r_\mathrm{g}) \dot{Q}_\nu}{M_\mathrm{g}} + \frac{\pi r_\mathrm{sh} F_\mathrm{pot}}{\ell M_\mathrm{g}}\right]^{2/3}. \end{equation} We emphasize that a different dissipation length enters in both terms: In the limit of neutrino-driven convection with small seed perturbations, the dissipation length is given by the width of the gain layer, whereas the dissipation length $\pi r_\mathrm{sh}/\ell$ can be considerably larger for ``forced'' convection/shock deformation due to infalling perturbations with small $\ell$. For deriving the modification of the critical luminosity, it will be convenient to express $E_\mathrm{kin,t}$ in terms of its value in the limit of small seed perturbations (Equation~\ref{eq:etrans}) and a correction term $\psi$, \begin{equation} \frac{E_\mathrm{kin,t}}{M_\mathrm{g}} = \frac{1}{2} \left[ \frac{(r_\mathrm{sh}-r_\mathrm{g}) \dot{Q}_\nu}{M_\mathrm{g}} \right]^{2/3} (1+\psi)^{2/3}, \end{equation} where $\psi$ is defined as \begin{equation} \psi= \frac{\pi r_\mathrm{sh} F_\mathrm{pot}/(\ell M_\mathrm{g})} {(r_\mathrm{sh}-r_\mathrm{g}) \dot{Q}_\nu/M_\mathrm{g}} = \frac{\pi r_\mathrm{sh} F_\mathrm{pot}}{\ell (r_\mathrm{sh}-r_g) \dot{Q}_\nu}. \end{equation} Different from the case of negligible seed perturbations, it is hard to validate Equation~(\ref{eq:etrans_new}) in simulations. In the 2D study of \citet{mueller_15a}, the amplitudes of the infalling perturbations change significantly over relatively short time-scales, and the phase during which they have a significant impact on the turbulent kinetic energy in the post-shock region but have not yet triggered shock revival was therefore too short to detect any deviations from Equation~(\ref{eq:etrans}), especially since the turbulent kinetic energy fluctuates considerably around its saturation value in 2D. \subsection{Effect on the Heating Conditions and the Critical Luminosity} Conceptually, the steps from Equation~(\ref{eq:etrans_new}) to a modified critical luminosity are no different from the original idea of \citet{mueller_15a}, i.e.\, one can assume that the average shock radius can be obtained by rescaling the shock radius $r_\mathrm{sh,1D}$ for the stationary 1D accretion problem with a correction factor that depends on the average RMS Mach number $\langle \mathrm{Ma}_\mathrm{gain}^2\rangle$ in the gain region (Equation~42 in \citealt{mueller_15a}), \begin{equation} r_\mathrm{sh} \approx r_\mathrm{sh,1D} \left(1+\frac{4}{3} \langle \mathrm{Ma}_\mathrm{gain}^2\rangle \right)^{2/3}, \end{equation} which then leads to a similar correction factor for the critical values for the neutrino luminosity and mean energy $L_\nu$ and $E_\nu$ (Equation 41 in \citealt{mueller_15a}). In the presence of strong seed perturbations, we can express $\langle \mathrm{Ma}_\mathrm{gain}^2 \rangle$ at the onset of an explosive runaway in terms of its value $\langle \mathrm{Ma}_\nu^2\rangle$ at shock revival in the case of small seed perturbations and a correction factor $(1+\psi)^{2/3}$ as in Equation~(\ref{eq:etrans_new}), \begin{equation} \label{eq:lcrit} L_\nu E_\nu^2 \propto (\dot M M)^{3/5} r_\mathrm{g}^{-2/5} \left[1+\frac{4}{3} \langle \mathrm{Ma}_\nu^2\rangle (1+\psi)^{2/3} \right]^{-3/5}. 
\end{equation} Equation~(\ref{eq:lcrit}) obviously hinges on the proper calibration (and validation) of Equation~(\ref{eq:etrans_new}), which needs to be provided by future core-collapse supernova simulations. Nonetheless, it already allows some crude estimates. The ratio of the critical luminosity with strong seed perturbations, $(L_\nu E_\nu^2)_\mathrm{pert}$, to the critical luminosity value $(L_\nu E_\nu^2)_\mathrm{3D}$ in multi-D for small seed perturbations is found to be \begin{eqnarray} \nonumber \frac{(L_\nu E_\nu^2)_\mathrm{pert}} {(L_\nu E_\nu^2)_\mathrm{3D}} &=& \left(\frac{1+4/3 \langle\mathrm{Ma}_\nu^2\rangle (1+\psi)^{2/3}}{1+4/3 \langle\mathrm{Ma}_\nu^2\rangle}\right)^{-3/5} \\ &\approx& 1-\frac{8 \langle \mathrm{Ma}_\nu^2 \rangle \psi}{15 \big(1+4/3 \langle\mathrm{Ma}_\nu^2\rangle \big)}, \end{eqnarray} where we linearized in $\psi$. In order not to rely on an increasingly long chain of uncertain estimates, it is advisable to use the known multi-D effects without strong seed perturbations as a yardstick; they bring about a reduction of the critical luminosity by about $25 \%$ compared to 1D \citep{murphy_08b,hanke_12,couch_12b,dolence_13,mueller_15a}. This reduction is obtained by setting $\mathrm{Ma}_\nu^2=0.4649$ at the onset of runaway shock expansion, which is also the value derived by \citet{mueller_15a} based on analytic arguments. Using this value, we estimate a reduction of the critical luminosity by $\mathord{\sim}0.15 \psi$ relative to the critical luminosity in multi-D without perturbations, which remains only a very rough indicator for the importance of perturbations in shock revival barring any further calibration and a precise definition of how and where $\mathrm{Ma}$ is to be measured. It is instructive to express $\psi$ in terms of the heating efficiency $\eta_\mathrm{heat}$, which is defined as the ratio of the volume-integrated neutrino heating rate and the sum of the electron neutrino and antineutrino luminosities $L_{\nu_e}$ and $L_{\bar{\nu}_e}$, \begin{equation} \eta_\mathrm{heat}=\frac{\dot{Q}_\nu}{L_{\nu_e}+L_{\bar{\nu}_e}}, \end{equation} and the accretion efficiency $\eta_\mathrm{acc}$, \begin{equation} \eta_\mathrm{acc}=\frac{L_{\nu_e}+L_{\bar{\nu}_e}}{GM \dot{M}/r_\mathrm{g}}. \end{equation} We then obtain \begin{equation} \label{eq:psi} \psi= \frac{\pi r_\mathrm{sh} \mathrm{Ma}}{3\ell (r_\mathrm{sh}-r_\mathrm{g}) \eta_\mathrm{acc} \eta_\mathrm{heat}} \approx \frac{\pi \mathrm{Ma}}{\ell \eta_\mathrm{acc} \eta_\mathrm{heat}}. \end{equation} Using Equation~(\ref{eq:psi}), we can verify that the estimated reduction of the critical luminosity due to seed perturbations by $\mathord{\sim} 0.15 \psi$ is in the ballpark: If we take $\mathrm{Ma}$ to be half the maximum value of the Mach number in the infalling shells in the models of \citet{mueller_15a} and work with reasonable average values of $\eta_\mathrm{acc}=2$ and $\eta_\mathrm{heat}=0.05$, we obtain a reduction of 11\% for their model p2La0.25 ($\mathrm{Ma}=0.045$), 24\% for p2La1 ($\mathrm{Ma}=0.1$), and 36\% for p2La2 ($\mathrm{Ma}=0.15$), which agrees surprisingly well with their inferred reduction of the critical luminosity (Figure~12 in their paper). It also explains why their models with $\ell=4$ require twice the convective Mach numbers in the oxygen shell to explode at the same time as their corresponding $\ell=2$ models. For the models of \citet{couch_13,couch_14} with $\eta_\mathrm{heat} \approx 0.1$ and $\ell=4$, our estimate would suggest a reduction in critical luminosity by 6\%.
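The quoted reductions for the perturbed models of \citet{mueller_15a} follow from a few lines of arithmetic; the Python sketch below uses the linearized prefactor $8\langle\mathrm{Ma}_\nu^2\rangle/[15(1+\tfrac{4}{3}\langle\mathrm{Ma}_\nu^2\rangle)]\approx 0.15$ and the parameter choices stated above ($\ell=2$, $\eta_\mathrm{acc}=2$, $\eta_\mathrm{heat}=0.05$), which are assumptions of this illustration rather than calibrated values.
\begin{verbatim}
import numpy as np

Ma_nu2 = 0.4649   # <Ma_nu^2> at shock revival without strong seed perturbations
prefac = 8.0 * Ma_nu2 / (15.0 * (1.0 + 4.0 / 3.0 * Ma_nu2))   # ~ 0.15

def lcrit_reduction(Ma, ell, eta_acc, eta_heat):
    # psi from Equation (psi); linearized reduction ~ prefac * psi
    psi = np.pi * Ma / (ell * eta_acc * eta_heat)
    return prefac * psi

for name, Ma in [("p2La0.25", 0.045), ("p2La1", 0.10), ("p2La2", 0.15)]:
    print(f"{name}: {lcrit_reduction(Ma, 2, 2.0, 0.05):.0%}")
\end{verbatim}
The corresponding estimate for the models of \citet{couch_13,couch_14} only requires exchanging $\ell$, $\eta_\mathrm{heat}$, and $\mathrm{Ma}$ accordingly.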
This prediction cannot be compared quantitatively to the results of \citet{couch_13,couch_14} since an analysis of the effect on the critical luminosity in the vein of \citet{mueller_15a} and \citet{summa_16} would require additional data (e.g., trajectories of the gain radius). Qualitatively, such a moderate reduction of the critical luminosity seems consistent with their results: The effect of infalling perturbations corresponds to a change in the critical heating factor\footnote{The change in the critical heating factor is related to but not necessarily identical to the change in the generalized critical luminosity as introduced by \citet{mueller_15a} and \citet{summa_16}, which also depends, e.g., on the relative change of the gain radius and the specific binding energy in the gain region.} (by which they multiply the critical luminosity to compute the neutrino heating terms) by only $2 \ldots 3\%$, and their inferred reduction of the ``critical heating efficiency'' by $\mathord{\sim} 10\%$ due to infalling perturbations is much smaller than the reduction of the critical heating efficiency by a factor of $\mathord{\sim} 2$ in 3D compared to 1D. For the simulations of \citet{couch_15}, for which we estimate the convective Mach number in the silicon shell as roughly $0.02$, the expected reduction in the critical luminosity (again for $\eta_\mathrm{heat} \approx 0.1$ and a dominant $\ell=4$ mode) is roughly $1\%$, which is consistent with the development of an explosion in both the perturbed and the unperturbed model. For our $18 M_\odot$ progenitor model with a typical convective Mach number $\mathrm{Ma} \approx 0.1$ in the middle of the oxygen shell, we expect a much more sizable reduction of the critical luminosity by $12\ldots 24\%$ if we assume $\eta_\mathrm{acc}=2$, $\ell=2$ and $\eta_\mathrm{heat} =0.05 \ldots 0.1$, although this crude estimate still needs to be borne out by a follow-up core-collapse supernova simulation. Because the importance of the infalling perturbations relative to the contribution of neutrino heating to non-radial instabilities is determined by $\eta_\mathrm{acc}$ and $\eta_\mathrm{heat}$, reasonably accurate multi-group transport is obviously required; the inaccuracy of leakage-based models like \citet{couch_13} and \citet{couch_15} that has been pointed out by \citet{janka_16} evidently does not permit anything more than a proof of principle. \section{Summary and Conclusions} \label{sec:summary} In this paper, we presented the first 3D simulation of the last minutes of oxygen shell burning outside a contracting iron and silicon core in a massive star (ZAMS mass $18 M_\odot$) up to the onset of collapse. Our simulation was conducted using a 19-species $\alpha$-network as in the stellar evolution code \textsc{Kepler} \citep{weaver_78} and an axis-free, overset Yin-Yang grid \citep{kageyama_04,wongwathanarat_10a} to cover the full solid angle and allow for the emergence of large-scale flow patterns. To circumvent the problem of core deleptonization and nuclear quasi-equilibrium in the silicon shell without degrading the accuracy of the simulation by serious modifications of the core evolution, a large part of the silicon core was excised and replaced by a contracting inner boundary with a trajectory determined from the corresponding \textsc{Kepler} run.
The model was evolved over almost 5~minutes, leaving ample time for transients to die down and roughly 3~minutes or 9~turnover time-scales of steady-state convection for a sufficiently trustworthy analysis of the final phase before collapse. For the simulated progenitor, an $18 M_\odot$ star of solar metallicity with an extended oxygen shell, our 3D simulation shows the acceleration of convection from typical Mach numbers of $\mathord{\sim} 0.05$ to $\mathord{\sim} 0.1$ at collapse due to the increasing burning rate at the base of the shell. The contraction of the core also leads to the emergence of larger scales in the flow, which is initially dominated by $\ell=3$ and $\ell=4$ modes before a pronounced quadrupolar ($\ell=2$) mode develops shortly before collapse. As a result of a small buoyancy jump between the oxygen and carbon shell, the oxygen shell grows from $0.51 M_\odot$ to $0.56 M_\odot$ due to the entrainment of material from the overlying carbon shell over the course of the simulation, which appears compatible with empirical scaling laws for the entrainment rate at convective boundaries \citep{fernando_91,strang_01,meakin_07}. The comparison with the corresponding \textsc{Kepler} model shows that -- aside from entrainment at the boundaries -- convection is well described by mixing length theory (MLT) in the final stage before collapse in the model studied here. MLT at least captures the bulk properties of the convective flow that matter for the subsequent collapse phase quite accurately: If properly ``gauged'', the convective velocities predicted by MLT in \textsc{Kepler} agree well with the 3D simulation, and the time-dependent implementation of MLT even does a reasonable job right before collapse when the nuclear energy generation rate changes significantly within a turnover time-scale, which results in a ``freeze-out'' of convection. The good agreement with MLT is also reflected by the fact that the kinetic energy in convective motions obeys a scaling law of the expected form \citep{biermann_32,arnett_09}. The kinetic energy $E_{\mathrm{kin},r}$ in radial convective motions can be described to good accuracy in terms of the average nuclear energy generation rate per unit mass $\dot{q}_\mathrm{nuc}$, the pressure scale height $h_P$ at the base of the shell, and the mass $M_\mathrm{conv}$ in the shell as, \begin{equation} \label{eq:ekinr_scaling} E_{\mathrm{kin},r} \approx 0.35 M_\mathrm{conv} (\dot{q}_\mathrm{nuc} h_P)^{2/3}, \end{equation} and the convective velocities are not too far from \begin{equation} \delta v_r \approx \omega_\mathrm{BV} h_P, \end{equation} where $\omega_\mathrm{BV}$ is the Brunt-V\"ais\"al\"a frequency and $h_P$ is the \emph{local} value of the pressure scale height. Our results are consistent with the assumption that convective blobs are accelerated only over roughly one pressure scale height, and there appears to be no need to replace the pressure scale height in Equation~(\ref{eq:ekinr_scaling}) with the extent of the convective zone as the arguments of \citet{arnett_09} suggest. We surmise that this may be a specific feature of the final phases of shell convection before collapse that requires the dissipation of turbulent energy within a limited distance: Since neutrino cooling no longer balances nuclear energy generation, the convective flow will adjust so as to maintain a constant rate of entropy generation throughout the shell to avoid a secular build-up or decline of the unstable gradient.
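As an illustration of how Equation~(\ref{eq:ekinr_scaling}) is meant to be used -- with deliberately hypothetical round-number inputs, since the actual $\dot{q}_\mathrm{nuc}$ and $h_P$ profiles are not reproduced here -- one obtains convective velocities of the right order of magnitude:
\begin{verbatim}
import numpy as np

def ekin_radial(M_conv, qdot_nuc, h_P):
    # Equation (ekinr_scaling): E_kin,r ~ 0.35 * M_conv * (qdot_nuc * h_P)^(2/3)
    return 0.35 * M_conv * (qdot_nuc * h_P) ** (2.0 / 3.0)

# Hypothetical placeholder values in CGS units, not the simulation data
M_conv = 0.5 * 1.989e33     # g
qdot_nuc = 5.0e12           # erg / g / s
h_P = 5.0e8                 # cm
E_kin = ekin_radial(M_conv, qdot_nuc, h_P)
v_rms = np.sqrt(2.0 * E_kin / M_conv)
print(f"E_kin,r ~ {E_kin:.1e} erg,  v_rms ~ {v_rms / 1.0e5:.0f} km/s")
\end{verbatim}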
During earlier phases with appreciable neutrino cooling in the outer regions of convective shells, Equation~(\ref{eq:ekinr_scaling}) may no longer be adequate. Similarly, the dominant scale of the convective eddies agrees well with estimates based on linear perturbation theory \citep{chandrasekhar_61,foglizzo_06}. In terms of the radii $r_-$ and $r_+$ of the inner and outer shell boundary, the dominant angular wave number $\ell$ is roughly \begin{equation} \label{eq:typical_l} \ell \approx \frac{\pi (r_+ + r_-)}{2 (r_+ - r_-)}. \end{equation} Our findings already allow some conclusions about one of the primary questions that has driven the quest for 3D supernova progenitors, i.e.\, whether progenitor asphericities can play a beneficial role for shock revival. We suggest that Equations~(\ref{eq:ekinr_scaling}) and (\ref{eq:typical_l}) can be used to formulate an estimate for the importance of convective seed perturbations for shock revival in the ensuing supernova \citep{couch_13,mueller_15a,couch_15}. To this end, these two equations (or alternatively, the convective velocities obtained via MLT in a stellar evolution code) need to be evaluated at the time of freeze-out of convection to obtain the typical convective Mach number $\mathrm{Ma}$ and angular wave number $\ell$ at collapse. The time of freeze-out can be determined by equating the typical time-scale for changes in the volume-integrated burning rate $\dot{Q}_\mathrm{nuc}$ with the turnover time-scale $t_\mathrm{conv}$, which results in the condition \begin{equation} \frac{\ensuremath{\mathrm{d}} \ln \dot{Q}_\mathrm{nuc}}{\ensuremath{\mathrm{d}} t}=\frac{\omega_\mathrm{BV,max}}{2\pi}, \end{equation} or, \begin{equation} \frac{\ensuremath{\mathrm{d}} \ln \dot{Q}_\mathrm{nuc}}{\ensuremath{\mathrm{d}} t}=t_\mathrm{conv}^{-1}. \end{equation} Relying on an estimate for the extra turbulent energy generated in the post-shock region in the supernova core by the infall of seed perturbations, and using the reduction of the energy-weighted critical neutrino luminosity $\mathcal{L}_\mathrm{crit}$ for explosion by $\mathord{\sim} 25 \%$ in multi-D \citep{murphy_08b,hanke_12,mueller_15a} as a yardstick, one finds that strong seed perturbations should reduce $\mathcal{L}_\mathrm{crit}$ further by \begin{equation} \label{eq:dlcrit} \frac{\Delta \mathcal{L}_\mathrm{crit}}{\mathcal{L}_\mathrm{crit}} \approx 0.47 \frac{\mathrm{Ma}}{\ell \eta_\mathrm{acc} \eta_\mathrm{heat}}, \end{equation} relative to the control value in multi-D simulations without strong seed perturbations. Here $\eta_\mathrm{acc}$ and $\eta_\mathrm{heat}$ are the accretion and heating efficiency in the supernova core, $\mathrm{Ma}$ is the typical convective Mach number in the infalling shell at the onset of collapse, and $\mathcal{L}_\mathrm{crit}=(L_\nu E_\nu^2)_\mathrm{crit}$ includes the proper weighting of the neutrino luminosity $L_\nu$ with the square of the neutrino mean energy $E_\nu^2$ \citep{janka_12,mueller_15a}. This estimate appears to be roughly in line with recent multi-D studies of shock revival with the help of strong seed perturbations and nicely accounts for the range of effect sizes from \citet{couch_15} (no qualitative change in shock revival) to \citet{mueller_15a} (reduction of $\mathcal{L}_\mathrm{crit}$ by tens of percent for $\ell=2$ modes with sufficiently strong perturbations). For our 3D progenitor model, we expect a reduction of the critical luminosity by $12 \ldots 24 \%$. 
Considering these numbers, the prospects for a significant and supportive role of progenitor asphericities in the supernova explosion mechanism seem auspicious. Yet caution is still in order. Because the relative importance of seed perturbations is determined by the ratio $\mathrm{Ma}/(\eta_\mathrm{acc} \eta_\mathrm{heat})$, a reliable judgment needs to be based both on a self-consistent treatment of convective burning in multi-D before collapse (which determines $\mathrm{Ma}$) \emph{and} accurate multi-group neutrino transport after bounce (which determines $\eta_\mathrm{acc}$ and $\eta_\mathrm{heat}$). Again, first-principle models of supernovae face a curious coincidence: As one typically finds $\mathrm{Ma} \sim \eta_\mathrm{heat}$, progenitor asphericities are just large enough to play a significant role in the explosion mechanism, but not large enough to provide a clear-cut solution for the problem of shock revival. The danger of a simplified neutrino treatment has already been emphasized repeatedly in the literature (see \citealt{janka_16} for a recent summary), but pitfalls also abound in simulations of convective burning: For example, the recipes employed by \citet{couch_15} can be shown to considerably affect the convective Mach number at collapse by appealing to scaling laws from MLT and time-scale considerations. Our method of excising the core seems to be a viable avenue towards obtaining 3D initial conditions in the oxygen shell (which is the innermost active convective shell in many progenitor models) without introducing inordinate artifacts due to initial transients or artificial changes to the nuclear burning. Nonetheless, the model presented here is only another step towards a better understanding of the multi-D structure of supernova progenitors. In particular, the effects of resolution and stochasticity on the convective flow need to be studied in greater depth, though a first restricted resolution study (Appendix~\ref{app:res}) suggests that the predicted convective velocities are already accurate to within $10 \%$ or less. Future simulations will also need to address progenitor variations in the shell geometry, shell configuration, and the burning rate; in fact, the $18 M_\odot$ model was deliberately chosen as an optimistic case with strong oxygen burning at the base of a very extended convective shell, and may not be representative of the generic situation (if there is any). Moreover, massive stars with active convective silicon shells at collapse also need to be explored even if they form just a subclass of all supernova progenitors. Treating this phase adequately to avoid the artifacts introduced by an approach like that of \citet{couch_15} is bound to prove a harder challenge due to the complications of nuclear quasi-equilibrium. Finally, the long-term effects of entrainment and other phenomena that cannot be captured by MLT need to be examined: If such effects play a major role in the evolution of supernova progenitors, capturing them with the help of exploratory 3D models and improved recipes for 1D stellar evolution in the spirit of the 321D approach \citep{arnett_2015} will be much more challenging than 3D simulations of the immediate pre-collapse stage, where the problems of extreme time-scale ratios (e.g.\ of the thermal adjustment and turnover time-scale), numerical diffusion, and energy/entropy conservation errors are relatively benign.
It is by no means certain that supernova progenitor models will look fundamentally different once this is accomplished; but there is little doubt that groundbreaking discoveries will be made along the way. \acknowledgements We thank T.~Foglizzo, E.~M\"uller, and S.~Woosley for useful discussions and T.~Melson for support and discussions concerning the Yin-Yang grid. We acknowledge support by the Australian Research Council through a Discovery Early Career Researcher Award DE150101145 (BM) and an ARC Future Fellowship FT120100363 (AH), by the Deutsche Forschungsgemeinschaft through the Excellence Cluster Universe EXC 153 (TJ) and by the European Research Council through grant ERC-AdG No.~341157-COCO2CASA (MV, TJ). This research was undertaken with the assistance of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government and was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia. This material is based upon work supported by the National Science Foundation under Grant No.~PHY-1430152 (JINA Center for the Evolution of the Elements).
\section{Introduction} The entropy change of a system can be considered to come from two origins, i.e., $dS=dS_{\mathrm{e}}+dS_{\mathrm{i}}$ \cite{de_groot_non-equilibrium_1962,nicolis_self-organization_1977,reichl_modern_2009,kondepudi_modern_2014}, where $dS_{\mathrm{e}}$ comes from the exchange with external sources and can be either positive or negative, while $dS_{\mathrm{i}}$ is the entropy change due to the irreversible processes. The 2nd law is then simply stated as $dS_{\mathrm{i}}/dt\ge0$, which means the entropy produced by the irreversible processes always increases, and $R_{\mathrm{ep}}:=dS_{\mathrm{i}}/dt$ is called the \emph{entropy production rate} (EPr). When the system is in contact with a thermal bath at temperature $T$, we have $dS_{\mathrm{e}}=\text{\dj}Q/T$ (hereafter referred to as the \emph{thermal entropy}), where $\text{\dj}Q$ is the heat flowing into the system. If we have multiple independent thermal baths with different temperatures $T_{\alpha}$ (Fig.\,\ref{fig-mBaths}), the EPr becomes \begin{equation} \frac{dS_{\mathrm{i}}}{dt}=\frac{dS}{dt}-\sum_{\alpha}\frac{1}{T_{\alpha}}\frac{dQ_{\alpha}}{dt}:=R_{\mathrm{ep}},\label{eq:R-EPr} \end{equation} where $\text{\dj}Q_{\alpha}$ is the heat coming from bath-$\alpha$ \cite{de_groot_non-equilibrium_1962,kondepudi_modern_2014}. Further, when an open quantum system is weakly coupled to multiple thermal baths, its dynamics can usually be described by the following Lindblad (GKSL) equation \cite{gorini_completely_1976,lindblad_generators_1976}, \begin{equation} \dot{\rho}=i[\rho,\hat{H}_{S}]+\sum_{\alpha}{\cal L}_{\alpha}[\rho], \end{equation} where ${\cal L}_{\alpha}[\rho]$ describes the dissipation due to bath-$\alpha$. Utilizing $\dot{S}[\rho]=-\mathrm{tr}[\dot{\rho}\ln\rho]$ and $\dot{Q}_{\alpha}=\mathrm{tr}\big[\hat{H}_{S}\cdot{\cal L}_{\alpha}[\rho]\big]$, the EPr (\ref{eq:R-EPr}) can be rewritten as the following Spohn formula (denoted as $R_{\mathrm{Sp}}$ hereafter) \cite{spohn_entropy_1978,spohn_irreversible_1978,alicki_quantum_1979,boukobza_three-level_2007,kosloff_quantum_2013, *kosloff_quantum_2016,cai_entropy_2014} \begin{equation} R_{\mathrm{ep}}=\sum_{\alpha}\mathrm{tr}\big[(\ln\rho_{\mathrm{ss}}^{(\alpha)}-\ln\rho)\cdot{\cal L}_{\alpha}[\rho]\big]:=R_{\mathrm{Sp}}.\label{eq:Spohn} \end{equation} Here we call $\rho_{\mathrm{ss}}^{(\alpha)}=Z_{\alpha}^{-1}\exp[-\hat{H}_{S}/T_{\alpha}]$ the \emph{partial steady state} associated with bath-$\alpha$, satisfying ${\cal L}_{\alpha}[\rho_{\mathrm{ss}}^{(\alpha)}]=0$. It can be proved that $R_{\mathrm{Sp}}\ge0$, which means the irreversible entropy production keeps increasing (see the proof in Appendix \ref{apx:Proof} or Ref.\,\cite{spohn_entropy_1978,spohn_irreversible_1978}). However, in the above discussion, the thermal entropy $dS_{\mathrm{e}}=\text{\dj}Q/T$ only applies to canonical thermal baths. If the bath is in some non-canonical state containing quantum coherence or squeezing \cite{scully_extracting_2003,rosnagel_nanoscale_2014,manzano_entropy_2016}, the temperature is not well defined, so it is no longer proper to use $\text{\dj}Q/T$ for $dS_{\mathrm{e}}$ \cite{gardas_thermodynamic_2015}, and the relations $R_{\mathrm{ep}}=R_{\mathrm{Sp}}$ and $R_{\mathrm{ep}}\ge0$ no longer hold either. Therefore, for such non-thermal baths, the conventional thermodynamic description of the EPr does not apply.
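Before turning to such non-thermal baths, it is instructive to check Eqs.~(\ref{eq:R-EPr}) and (\ref{eq:Spohn}) numerically in the thermal case. The following minimal Python/NumPy sketch evaluates both rates for a two-level system weakly coupled to two thermal baths; the level splitting, coupling rates, temperatures and the state $\rho$ are arbitrary illustrative assumptions, and the standard thermal-bath Lindblad dissipator is used as a stand-in for a microscopic derivation. The two rates agree up to numerical error and are non-negative, as expected.
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

w = 1.0                                   # level splitting (hbar = k_B = 1)
H = np.diag([0.0, w])                     # basis {|g>, |e>}
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_- = |g><e|
sp = sm.conj().T

def D(A, rho):
    """Lindblad dissipator D[A]rho = A rho A+ - (1/2){A+A, rho}."""
    return A @ rho @ A.conj().T - 0.5 * (A.conj().T @ A @ rho
                                         + rho @ A.conj().T @ A)

def L(rho, gamma, T):
    """Dissipator of a thermal bath at temperature T (occupation n)."""
    n = 1.0 / (np.exp(w / T) - 1.0)
    return gamma * ((n + 1.0) * D(sm, rho) + n * D(sp, rho))

def gibbs(T):
    p = np.exp(-np.diag(H) / T)
    return np.diag(p / p.sum())            # partial steady state of bath T

baths = [(0.1, 0.5), (0.2, 2.0)]            # (gamma_alpha, T_alpha), assumed
rho = np.array([[0.6, 0.2], [0.2, 0.4]])    # a valid state with coherence

# Spohn formula, Eq. (3)
R_sp = sum(np.trace((logm(gibbs(T)) - logm(rho)) @ L(rho, g, T)).real
           for g, T in baths)

# Conventional EPr, Eq. (1); the unitary part of d(rho)/dt does not
# contribute to dS/dt, so only the dissipators enter here.
drho = sum(L(rho, g, T) for g, T in baths)
dS = -np.trace(drho @ logm(rho)).real
R_ep = dS - sum(np.trace(H @ L(rho, g, T)).real / T for g, T in baths)

print(R_sp, R_ep)    # equal up to numerical error, and >= 0
\end{verbatim}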
It is believed that corrections in the form of work \cite{quan_quantum_2005,quan_quantum-classical_2006,gelbwaser-klimovsky_heat-machine_2014} or excess heat \cite{gardas_thermodynamic_2015} should be taken into account for such baths. \begin{figure} \includegraphics[width=0.38\columnwidth]{fig_mBaths} \caption{(Color online) Schematic of an open quantum system ($S$) interacting with its environment composed of multiple baths ($B_{\alpha}$). The baths are independent of each other, and do not have to be canonical thermal states.} \label{fig-mBaths} \end{figure} Here we replace the thermal entropy term $-\dot{Q}_{\alpha}/T_{\alpha}$ by the changing rate of the von Neumann entropy of bath-$\alpha$, $\dot{S}_{B\alpha}=-\mathrm{tr}[\dot{\rho}_{B\alpha}\ln\rho_{B\alpha}]$. Further, we assume the baths are independent of each other, which leads to $\sum_{\alpha}\dot{S}_{B\alpha}=\dot{S}_{B}$. This generalization then becomes \begin{equation} R_{{\cal I}}=\frac{dS_{S}}{dt}+\frac{dS_{B}}{dt}=\frac{d}{dt}(S_{S}+S_{B}-S_{SB})=\frac{d}{dt}{\cal I}_{SB}.\label{eq:R_EP_ISB} \end{equation} Here $\dot{S}_{SB}=0$ since the total system $S+B$ evolves unitarily \footnote{$\dot{S}[\rho_{SB}]=-\mathrm{tr}[\dot{\rho}_{SB}\ln\rho_{SB}]=-i\,\mathrm{tr}[\rho_{SB}\hat{{\cal H}}\cdot\ln\rho_{SB}-\hat{{\cal H}}\rho_{SB}\cdot\ln\rho_{SB}]=0$, where $\hat{{\cal H}}$ is the Hamiltonian of the total $S+B$ system.}, and ${\cal I}_{SB}:=S_{S}+S_{B}-S_{SB}$ is just the mutual information between the system and its environment, which measures their correlation \cite{nielsen_quantum_2000,esposito_entropy_2010,pucci_entropy_2013,parrondo_thermodynamics_2015,alipour_correlations_2016,strasberg_quantum_2017}. Therefore, we call $R_{{\cal I}}$ the \emph{mutual information production rate} (MIPr). $R_{{\cal I}}$ has a clear physical meaning: a positive $R_{{\cal I}}$ indicates that the correlation between the system and its environment is increasing. In the following, we show that this MIPr (\ref{eq:R_EP_ISB}) indeed has a close connection with the previous EPr (\ref{eq:R-EPr}). When the baths of the open system are thermal ones, we can prove that this MIPr exactly reduces to the conventional thermodynamic description of the EPr in the weak coupling limit, namely, $R_{{\cal I}}=R_{\mathrm{ep}}$. That means, for thermal baths, the conventional entropy production can be equivalently interpreted as the mutual information production, and the 2nd law statement $(R_{{\cal I}}=)R_{\mathrm{ep}}\ge0$ can also be understood as stating that the system-bath correlation always keeps increasing. Further, we will study an example of a single boson mode in contact with multiple squeezed thermal baths. In this case, the conventional EPr does not apply. We calculate the MIPr in the weak coupling limit and under the Markovian approximation, and we find that it exactly equals the Spohn formula for non-thermal baths; thus we can prove $R_{{\cal I}}\ge0$, which means the monotonic increase of the system-bath correlation also holds in this squeezed-bath example. \section{Mutual information production in thermal baths} We first consider a system coupled to several thermal baths. In this case, the initial state of bath-$\alpha$ is $\rho_{B\alpha}(0)={\cal Z}_{\alpha}^{-1}\exp[-\hat{H}_{B\alpha}/T_{\alpha}]$.
Assuming $\rho_{B\alpha}(t)$ does not change too much during evolution \cite{breuer_theory_2002,scully_quantum_1997,li_non-markovianity_2016}, we have $\ln\rho_{B\alpha}(t)=\ln[\rho_{B\alpha}(0)+\delta\rho_{t}]\simeq\ln[\rho_{B\alpha}(0)]+o(\delta\rho_{t})$, thus the entropy change of bath-$\alpha$ is \begin{align} \dot{S}_{B\alpha} & =-\mathrm{tr}[\dot{\rho}_{B\alpha}(t)\ln\rho_{B\alpha}(t)]\simeq-\mathrm{tr}[\dot{\rho}_{B\alpha}(t)\cdot\ln\frac{e^{-\frac{\hat{H}_{B\alpha}}{T_{\alpha}}}}{{\cal Z}_{\alpha}}]\nonumber \\ & =\frac{1}{T_{\alpha}}\frac{d}{dt}\langle\hat{H}_{B\alpha}\rangle\simeq-\frac{\dot{Q}_{\alpha}}{T_{\alpha}}.\label{eq:S_B-th} \end{align} Here $-\frac{d}{dt}\langle\hat{H}_{B\alpha}\rangle$ is the energy loss of bath-$\alpha$, while $\dot{Q}_{\alpha}$ is the energy gain of the system from bath-$\alpha$, and they equal to each other in weak coupling limit. Assuming the baths are independent from each other, $\rho_{B}(t)\simeq\prod_{\alpha}\rho_{B\alpha}(t)$, the MIPr becomes \begin{equation} R_{{\cal I}}=\dot{S}_{S}+\sum_{\alpha}\dot{S}_{B\alpha}=\dot{S}_{S}-\sum_{\alpha}\frac{\dot{Q}_{\alpha}}{T_{\alpha}}=R_{\mathrm{ep}}. \end{equation} Therefore, for thermal baths, the MIPr (\ref{eq:R_EP_ISB}) equals to the conventional thermodynamic description of the EPr (\ref{eq:R-EPr}). Thus, the 2nd law statement $R_{\mathrm{ep}}\ge0$ is equivalent as $R_{{\cal I}}\ge0$, which means the mutual information between the system and its environment keeps increasing monotonically. This can be understood as an equivalent statement for the entropy production when the baths are canonical thermal ones. We notice that this equivalence was also shown in the ``correlation entropy'' approach \cite{esposito_entropy_2010,pucci_entropy_2013,strasberg_quantum_2017}. \section{Mutual information production in Squeezed baths} Now we study an example of a single boson mode interacting with multiple squeezed thermal baths \cite{rosnagel_nanoscale_2014,manzano_entropy_2016,kosloff_quantum_2016}. In this case, the thermal entropy $dS_{\mathrm{e}}=\text{\dj}Q/T$ cannot be used, and neither does the EPr\,(\ref{eq:R-EPr}). Here we calculate the MIPr (\ref{eq:R_EP_ISB}), and we will prove it just equals to the Spohn formula for non-thermal baths, and thus could still keep non-negative, $R_{{\cal I}}\ge0$. \subsection{Master equation and Spohn formula} The Hamiltonian of the single boson mode and the bosonic bath are $\hat{H}_{S}=\Omega\,\hat{a}^{\dagger}\hat{a}$, $\hat{H}_{B}=\sum_{\alpha}\hat{H}_{B\alpha}$ and $\hat{H}_{B\alpha}=\sum_{k}\omega_{\alpha k}\,\hat{b}_{\alpha k}^{\dagger}\hat{b}_{\alpha k}$, and they interact through $\hat{V}_{SB}=\sum_{\alpha}\hat{a}^{\dagger}\hat{B}_{\alpha}+\hat{a}\hat{B}_{\alpha}^{\dagger}$. Here $\hat{B}_{\alpha}=\sum g_{\alpha k}\hat{b}_{\alpha k}$ is the operator of bath-$\alpha$, and the initial states of the baths are squeezed thermal ones (hereafter all the density matrices are written in the interaction picture), \begin{gather} \rho_{B\alpha}^{0}=\frac{1}{{\cal Z}_{\alpha}}e^{-\beta_{\alpha}\,{\cal S}_{\alpha}\hat{H}_{B\alpha}{\cal S}_{\alpha}^{\dagger}},\quad\beta_{\alpha}:=T_{\alpha}^{-1},\label{eq:bath-Sq}\\ {\cal S}_{\alpha}:=\prod_{k}\exp[\frac{1}{2}\lambda_{\alpha k}^{*}\hat{b}_{\alpha k}^{2}-\mathbf{h.c.}],\quad\lambda_{\alpha k}=r_{\alpha k}e^{-i\theta_{\alpha k}}.\nonumber \end{gather} Here ${\cal S}_{\alpha}$ is the squeezing operator for the boson modes in bath-$\alpha$. 
With Born-Markovian approximation, we obtain a master equation $\dot{\rho}=\sum_{\alpha}{\cal L}_{\alpha}[\rho]$ for the open system alone \cite{breuer_theory_2002,walls_quantum_2008}, where \begin{align*} {\cal L}_{\alpha}[\rho] & =\frac{\gamma_{\alpha}}{2}\Big[\tilde{\mathfrak{n}}_{\alpha}\big(2\hat{a}^{\dagger}\rho\hat{a}-\{\hat{a}\hat{a}^{\dagger},\rho\}\big)+(\tilde{\mathfrak{n}}_{\alpha}+1)\big(2\hat{a}\rho\hat{a}^{\dagger}-\{\hat{a}^{\dagger}\hat{a},\rho\}\big)\\ & -\tilde{\mathfrak{u}}_{\alpha}\big(2\hat{a}^{\dagger}\rho\hat{a}^{\dagger}-\{(\hat{a}^{\dagger})^{2},\rho\}\big)-\tilde{\mathfrak{u}}_{\alpha}^{*}\big(2\hat{a}\rho\hat{a}-\{\hat{a}^{2},\rho\}\big)\Big]. \end{align*} The coupling spectrums of the squeezed bath-$\alpha$ are $J_{\alpha}(\omega):=2\pi\sum_{k}|g_{\alpha k}|^{2}\delta(\omega-\omega_{\alpha k})$ and $K_{\alpha}(\omega):=2\pi\sum_{k}g_{\alpha k}^{2}\delta(\omega-\omega_{\alpha k})$. Without loss of generality, we omit the phase of $g_{\alpha k}$ and thus $K_{\alpha}(\omega)=K_{\alpha}^{*}(\omega)=J_{\alpha}(\omega)$. Here we denote $\gamma_{\alpha}:=J_{\alpha}(\Omega)=K_{\alpha}(\Omega)$, and the parameters $\tilde{\mathfrak{n}}_{\alpha}:=\tilde{\mathsf{n}}_{\alpha}(\Omega)$, $\tilde{\mathfrak{u}}_{\alpha}:=\tilde{\mathsf{u}}_{\alpha}(\Omega)$ are calculated from $\tilde{\mathsf{n}}_{\alpha}(\omega_{k}):=\mathrm{tr}[\rho_{B\alpha}^{0}\hat{b}_{\alpha k}^{\dagger}\hat{b}_{\alpha k}]$, $\tilde{\mathsf{u}}_{\alpha}(\omega_{k}):=-\mathrm{tr}[\rho_{B\alpha}^{0}\hat{b}_{\alpha k}^{2}]$ (see Appendix \ref{apx:squeeze}). The master equation gives \begin{gather} \frac{d}{dt}\langle\tilde{a}(t)\rangle=-\sum_{\alpha}\frac{\gamma_{\alpha}}{2}\langle\tilde{a}\rangle,\quad\frac{d}{dt}\langle\tilde{a}^{2}\rangle=-\sum_{\alpha}\gamma_{\alpha}[\langle\tilde{a}^{2}\rangle-\tilde{\mathfrak{u}}_{\alpha}],\nonumber \\ \frac{d}{dt}\langle\tilde{a}^{\dagger}\tilde{a}\rangle=-\sum_{\alpha}\gamma_{\alpha}[\langle\tilde{n}_{a}\rangle-\tilde{\mathfrak{n}}_{\alpha}].\label{eq:dynamics} \end{gather} Here we denote $\hat{n}_{a}:=\hat{a}^{\dagger}\hat{a}$, and $\langle\tilde{o}(t)\rangle:=\mathrm{tr}[\rho\hat{o}(t)]$ gives variables in the rotating frame \footnote{Here $\rho$ is in the interaction picture, but $\hat{o}$ is in the Schr{\"o}dinger picture, thus we have { $\langle\hat{a}(t)\rangle=\langle\tilde{a}(t)\rangle e^{-i\Omega t}$}. Here {$\langle\hat{o}(t)\rangle$} stands for observable expectations which are independent of pictures, and {$\langle\tilde{o}(t)\rangle$} are variables in the rotating frame, thus in Eq.\,(\ref{eq:dynamics}), the dependence of the system frequency $\Omega$ is cancelled.}. The partial steady states $\rho_{\mathrm{ss}}^{(\alpha)}$, which satisfies ${\cal L}_{\alpha}[\rho_{\mathrm{ss}}^{(\alpha)}]=0$, are now squeezed thermal states, \begin{gather} \rho_{\mathrm{ss}}^{(\alpha)}=\frac{1}{Z_{\alpha}}\exp[-\beta_{\alpha}\Omega\cdot\mathsf{S}_{\alpha}\hat{a}^{\dagger}\hat{a}\mathsf{S}_{\alpha}^{\dagger}],\\ \mathsf{S}_{\alpha}:=\exp[-(\frac{1}{2}\zeta_{\alpha}^{*}\hat{a}^{2}-\mathbf{h.c.})],\:\zeta_{\alpha}=\lambda_{\alpha k}\big|_{\omega_{k}=\Omega}:=r_{\alpha}e^{i\theta_{\alpha}}.\nonumber \end{gather} Here $\mathsf{S}_{\alpha}$ is a squeezing operator for the system. 
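Because the moment equations (\ref{eq:dynamics}) are linear with constant coefficients, they can be integrated in closed form, which offers a quick consistency check. The short Python sketch below does this for two squeezed baths; the parameters $\gamma_\alpha$, $\tilde{\mathfrak{n}}_\alpha$ and $\tilde{\mathfrak{u}}_\alpha$ are illustrative assumed numbers (chosen to respect $|\tilde{\mathfrak{u}}_\alpha|\le\sqrt{\tilde{\mathfrak{n}}_\alpha(\tilde{\mathfrak{n}}_\alpha+1)}$ so that they could come from a physical state), and the moments relax towards the $\gamma_\alpha$-weighted bath values.
\begin{verbatim}
import numpy as np

# Illustrative bath parameters (assumed values, not derived from a
# specific squeezing strength or temperature):
gamma = np.array([0.1, 0.3])                      # coupling rates gamma_alpha
n_b   = np.array([1.5, 0.4])                      # \tilde{n}_alpha
u_b   = np.array([0.8 * np.exp(0.3j), 0.2 + 0j])  # \tilde{u}_alpha

Gamma = gamma.sum()
n_ss  = (gamma * n_b).sum() / Gamma               # stationary <n_a>
u_ss  = (gamma * u_b).sum() / Gamma               # stationary <a^2>

def moments(t, a0=1.0 + 0.5j, a2_0=0.0, n0=0.0):
    """Closed-form solution of the rotating-frame moment equations."""
    a  = a0 * np.exp(-0.5 * Gamma * t)
    a2 = u_ss + (a2_0 - u_ss) * np.exp(-Gamma * t)
    n  = n_ss + (n0 - n_ss) * np.exp(-Gamma * t)
    return a, a2, n

for t in (0.0, 2.0, 20.0):
    a, a2, n = moments(t)
    print(f"t={t:5.1f}", np.round(a, 3), np.round(a2, 3), np.round(n, 3))
\end{verbatim}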
Although the baths are not thermal ones, we can still write down the Spohn formula $R_{\mathrm{Sp}}=\sum_{\alpha}R_{\mathrm{Sp}}^{(\alpha)}$, where \begin{align} R_{\mathrm{Sp}}^{(\alpha)} & :=\mathrm{tr}\big[(\ln\rho_{\mathrm{ss}}^{(\alpha)}-\ln\rho)\cdot{\cal L}_{\alpha}[\rho]\big]\nonumber \\ & :=\chi_{\alpha}-\mathrm{tr}\big[\ln\rho\cdot{\cal L}_{\alpha}[\rho]\big]\label{eq:R_S-X} \end{align} and we can prove $R_{\mathrm{Sp}}^{(\alpha)}\ge0$ and $R_{\mathrm{Sp}}\ge0$ hold also in this non-thermal case (Appendix \ref{apx:Proof}). However, since the above Spohn formula $R_{\mathrm{Sp}}$ for non-thermal baths no more comes from the thermodynamic EPr\,(\ref{eq:R-EPr}), thus its physical meaning is unclear now. In the thermal case, the 1st term in $R_{\mathrm{Sp}}^{(\alpha)}$, $\chi_{\alpha}:=\mathrm{tr}\big[\ln\rho_{\mathrm{ss}}^{(\alpha)}\cdot{\cal L}_{\alpha}[\rho]\big]$, gives the changing rate of the thermal entropy ($\chi_{\alpha}=-\dot{Q}_{\alpha}/T_{\alpha}$). But for the squeezed case, it becomes \begin{multline} \chi_{\alpha}=\frac{\Omega}{T_{\alpha}}\cdot\gamma_{\alpha}\Big(\cosh2r_{\alpha}\cdot[\langle\tilde{n}_{a}(t)\rangle-\tilde{\mathfrak{n}}_{\alpha}]\\ -\frac{1}{2}\sinh2r_{\alpha}[e^{-i\theta_{\alpha}}(\langle\tilde{a}^{2}(t)\rangle-\tilde{\mathfrak{u}}_{\alpha})+\mathbf{h.c.}]\Big).\label{eq:sigma_B} \end{multline} It is difficult to tell the physical meaning of this quantity. In the following, we will show that indeed Eq.\,(\ref{eq:sigma_B}) is just the changing rate of the von Neumann entropy of bath-$\alpha$, i.e., $\chi_{\alpha}=\dot{S}_{B\alpha}$, and then Eq.\,(\ref{eq:R_S-X}) directly leads to \begin{equation} R_{\mathrm{Sp}}=\sum_{\alpha}R_{\mathrm{Sp}}^{(\alpha)}=\dot{S}_{S}+\sum_{\alpha}\dot{S}_{B\alpha}=R_{{\cal I}}. \end{equation} \subsection{Bath entropy dynamics} Now we are going to calculate the entropy changing rate $\dot{S}_{B\alpha}$ of bath-$\alpha$ directly. To do this, we adopt the same trick as the thermal case. Assuming the squeezed baths do not change too much (interaction picture), the entropy of the bath evolves as \begin{align} \frac{d}{dt}S[\rho_{B\alpha}(t)]\simeq- & \mathrm{tr}[\dot{\rho}_{B\alpha}(t)\cdot\ln\frac{\exp[-\beta_{\alpha}\,{\cal S}_{\alpha}\hat{H}_{B\alpha}{\cal S}_{\alpha}^{\dagger}]}{{\cal Z}_{\alpha}}]\nonumber \\ =\frac{d}{dt}\sum_{k}\frac{\omega_{\alpha k}}{T_{\alpha}}\Big( & \cosh2r_{\alpha k}\langle\tilde{b}_{\alpha k}^{\dagger}(t)\tilde{b}_{\alpha k}(t)\rangle\nonumber \\ +\frac{1}{2}\sinh & 2r_{\alpha k}[\langle\tilde{b}_{\alpha k}^{2}(t)\rangle e^{-i\theta_{\alpha k}}+\mathbf{h.c.}]\Big).\label{eq:Sq-evolu} \end{align} Thus, the calculation of the bath entropy is now reduced as calculating the time derivative of the expectations of the bath operators like $\langle\tilde{b}_{\alpha k}^{\dagger}(t)\tilde{b}_{\alpha k}(t)\rangle$ and $\langle\tilde{b}_{\alpha k}^{2}(t)\rangle$. 
This can be done with the help of the Heisenberg equations, $\dot{\hat{b}}_{\alpha k}=-i\omega_{\alpha k}\hat{b}_{\alpha k}-ig_{\alpha k}^{*}\hat{a}$, and $\dot{\hat{a}}=-i\Omega\hat{a}-i\sum_{\alpha}g_{\alpha k}\hat{b}_{\alpha k}$, which lead to the quantum Langevin equation \cite{gardiner_quantum_2004,walls_quantum_2008,li_probing_2014} \begin{equation} \frac{d}{dt}\hat{a}=-i\Omega\hat{a}-\frac{1}{2}\Gamma\hat{a}-\hat{{\cal E}}(t).\label{eq:b(t)a(t)} \end{equation} Here $\Gamma:=\sum_{\alpha}\gamma_{\alpha}$ is the total decay rate, and $\gamma_{\alpha}$ are the same as those in the master equation; $\hat{{\cal E}}(t):=\sum_{\alpha}\hat{\xi}_{\alpha}(t)$ is the random force, and $\hat{\xi}_{\alpha}(t):=i\sum_{k}g_{\alpha k}\hat{b}_{\alpha k}(0)e^{-i\omega_{\alpha k}t}$ is the contribution from bath-$\alpha$. Thus $\hat{a}(t)$ and $\hat{b}_{\alpha k}(t)$ evolve as \begin{gather} \hat{a}(t)=\hat{a}(0)e^{-i\Omega t-\frac{\Gamma}{2}t}-\int_{0}^{t}ds\,e^{-i\Omega(t-s)-\frac{\Gamma}{2}(t-s)}\hat{{\cal E}}(s),\label{eq:a(t)}\\ \hat{b}_{\alpha k}(t)=\hat{b}_{\alpha k}(0)e^{-i\omega_{\alpha k}t}-ig_{\alpha k}^{*}\int_{0}^{t}ds\,e^{-i\omega_{\alpha k}(t-s)}\hat{a}(s).\nonumber \end{gather} To further calculate the bath entropy change, now we are going to show the following two relations hold in the weak coupling limit and Markovian approximation: \begin{align} \frac{d}{dt}\sum_{k}\mathfrak{f}_{k}\langle\tilde{b}_{\alpha k}^{\dagger}\tilde{b}_{\alpha k}\rangle & \simeq\mathfrak{f}(\Omega)\cdot\gamma_{\alpha}[\langle\tilde{n}_{a}\rangle-\tilde{\mathfrak{n}}_{\alpha}],\label{eq:F-k}\\ \frac{d}{dt}\sum_{k}\mathfrak{h}_{k}\langle\tilde{b}_{\alpha k}^{2}\rangle+\mathbf{h.c.} & \simeq-\mathfrak{h}(\Omega)\cdot\gamma_{\alpha}[\langle\tilde{a}^{2}\rangle-\tilde{\mathfrak{u}}_{\alpha}]+\mathbf{h.c.},\nonumber \end{align} where $\mathfrak{f}_{k}$ and $\mathfrak{h}_{k}$ are arbitrary coefficients depending on $k$. If we set $\mathfrak{f}_{k}=\frac{\omega_{\alpha k}}{T_{\alpha}}\cosh2r_{\alpha k}$, $\mathfrak{h}_{k}=\frac{\omega_{\alpha k}}{2T_{\alpha}}\sinh2r_{\alpha k}e^{-i\theta_{\alpha k}}$, and sum up the above two equations, then the left side simply gives $\dot{S}_{B\alpha}$ {[}Eq.\,(\ref{eq:Sq-evolu}){]}; At the same time, the right side is just equal to $\chi_{\alpha}$ {[}Eq.\,(\ref{eq:sigma_B}){]}. Thus we can prove $\chi_{\alpha}=\dot{S}_{B\alpha}$, namely, the term $\chi_{\alpha}=\mathrm{tr}\big[\ln\rho_{\mathrm{ss}}^{(\alpha)}\cdot{\cal L}_{\alpha}[\rho]\big]$ in the Spohn formula is just the changing rate of the von Neumann entropy of bath-$\alpha$. Besides, if we set $\mathfrak{f}_{k}=\omega_{\alpha k}$ and $\mathfrak{h}_{k}=0$, the above relations lead to $\frac{d}{dt}\langle\hat{H}_{B\alpha}\rangle=\Omega\cdot\gamma_{\alpha}[\langle\tilde{n}_{a}\rangle-\tilde{\mathfrak{n}}_{\alpha}]=-\dot{Q}_{\alpha}$, which means the energy loss of bath-$\alpha$ is equal to the energy gain of the system from bath-$\alpha$ {[}as we utilized in the discussion below Eq.\,(\ref{eq:S_B-th}){]}. 
The calculation of Eq.\,(\ref{eq:F-k}) goes as follows \begin{align} & \frac{d}{dt}\sum_{k}\mathfrak{f}_{k}\langle\tilde{b}_{\alpha k}^{\dagger}\tilde{b}_{\alpha k}\rangle=\sum_{k}\mathfrak{f}_{k}\cdot ig_{\alpha k}\langle\hat{a}^{\dagger}\hat{b}_{\alpha k}\rangle+\mathbf{h.c.}\nonumber \\ = & \sum_{k}\mathfrak{f}_{k}\cdot\Big[ig_{\alpha k}\langle\hat{a}^{\dagger}(t)\hat{b}_{\alpha k}(0)\rangle e^{-i\omega_{\alpha k}t}\nonumber \\ & +|g_{\alpha k}|^{2}\int_{0}^{t}ds\,e^{-i\omega_{\alpha k}(t-s)}\langle\hat{a}^{\dagger}(t)\hat{a}(s)\rangle\Big]+\mathbf{h.c.}\label{eq:Part1} \end{align} The 1st term in the bracket can be further calculated by substituting $\hat{a}(t)$ {[}Eq.\,(\ref{eq:a(t)}){]}, \begin{align} & \sum_{k}\mathfrak{f}_{k}\cdot ig_{\alpha k}\langle\hat{a}^{\dagger}(t)\hat{b}_{\alpha k}(0)\rangle e^{-i\omega_{\alpha k}t}+\mathbf{h.c.}\\ = & -\sum_{k}\mathfrak{f}_{k}|g_{\alpha k}|^{2}\int_{0}^{t}ds\,e^{[i(\Omega-\omega_{k})-\frac{\Gamma}{2}](t-s)}\langle\hat{b}_{\alpha k}^{\dagger}(0)\hat{b}_{\alpha k}(0)\rangle+\mathbf{h.c.}\nonumber \\ = & -\int_{0}^{t}ds\big[\int_{0}^{\infty}\frac{d\omega}{2\pi}e^{[i(\Omega-\omega)-\frac{\Gamma}{2}](t-s)}J_{\alpha}(\omega)\mathfrak{f}(\omega)\tilde{\mathsf{n}}_{\alpha}(\omega)\big]+\mathbf{h.c.}\nonumber \end{align} Assuming the frequency integral in the bracket gives a fast-decaying function of $(t-s)$, we extend the time integral to $t\rightarrow\infty$ (Markovian approximation), and that gives \begin{align} & -\int_{0}^{\infty}\frac{d\omega}{2\pi}[\int_{0}^{\infty}ds\,e^{i(\Omega-\omega)s-\frac{1}{2}\Gamma s}]J_{\alpha}(\omega)\mathfrak{f}(\omega)\tilde{\mathsf{n}}_{\alpha}(\omega)+\mathbf{h.c.}\nonumber \\ = & -\int_{0}^{\infty}\frac{d\omega}{2\pi}\,J_{\alpha}(\omega)\mathfrak{f}(\omega)\tilde{\mathsf{n}}_{\alpha}(\omega)\cdot\frac{\Gamma}{(\frac{\Gamma}{2})^{2}+(\omega-\Omega)^{2}}\nonumber \\ \simeq & -\mathfrak{f}(\Omega)\cdot\gamma_{\alpha}\tilde{\mathfrak{n}}_{\alpha}.\label{eq:part1-1} \end{align} The last line holds in the weak coupling limit $\Gamma\ll\Omega$ because the Lorentzian function in the integral approaches $2\pi\delta(\omega-\Omega)$. To calculate the 2nd term of Eq.\,(\ref{eq:Part1}), we should notice $\langle\hat{a}^{\dagger}(t)\hat{a}(s)\rangle=\langle\tilde{a}^{\dagger}(s)\tilde{a}(s)\rangle e^{(i\Omega-\frac{\Gamma}{2})(t-s)}$ holds for $t\ge s$ (quantum regression theorem \cite{breuer_theory_2002,gardiner_quantum_2004}). Here $\langle\tilde{o}_{1}(t)\tilde{o}_{2}(s)\rangle$ is a correlation function in the rotating frame, defined by $\langle\tilde{o}_{1}(t)\tilde{o}_{2}(s)\rangle=\mathrm{tr}[\hat{o}_{1}\,{\cal E}_{t-s}\hat{o}_{2}\,{\cal E}_{s}\rho(0)]$ for $t\ge s$ \cite{breuer_theory_2002}, where $\hat{o}_{1,2}$ are operators in Schr\"odinger picture, and ${\cal E}_{t}$ is the evolution operator solved from the above master equation in interaction picture, and $\rho(t)={\cal E}_{t-s}\rho(s)$. Similarly, $\langle\hat{o}_{1}(t)\hat{o}_{2}(s)\rangle$ are correlation functions in the non-rotating frame. 
Thus the 2nd term of Eq.\,(\ref{eq:Part1}) gives \begin{align} & \sum_{k}\mathfrak{f}_{k}\cdot|g_{\alpha k}|^{2}\int_{0}^{t}ds\,e^{-i\omega_{\alpha k}(t-s)}\langle\hat{a}^{\dagger}(t)\hat{a}(s)\rangle+\mathbf{h.c.}\nonumber \\ \simeq & \int_{0}^{\infty}\frac{d\omega}{2\pi}\,\mathfrak{f}(\omega)J_{\alpha}(\omega)\cdot\langle\tilde{n}_{a}(t)\rangle\int_{0}^{\infty}ds\,e^{i(\Omega-\omega)s-\frac{\Gamma}{2}s}+\mathbf{h.c.}\nonumber \\ = & \langle\tilde{n}_{a}(t)\rangle\cdot\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\mathfrak{f}(\omega)J_{\alpha}(\omega)\cdot\frac{\Gamma}{(\frac{\Gamma}{2})^{2}+(\omega-\Omega)^{2}}\nonumber \\ \simeq & \gamma_{\alpha}\cdot\mathfrak{f}(\Omega)\langle\tilde{n}_{a}(t)\rangle.\label{eq:part1-2} \end{align} Again we adopted the same Markovian approximation as before, and $\langle\tilde{n}_{a}(s)\rangle$ is taken out of the time integral directly. Therefore, summing up Eqs.\,(\ref{eq:part1-1}, \ref{eq:part1-2}), we obtain the 1st relation in Eq.\,(\ref{eq:F-k}). The 2nd relation can be obtained in a similar way (see Appendix \ref{apx:squeeze}). Then, by setting proper coefficients $\mathfrak{f}_{k}$ and $\mathfrak{h}_{k}$ in Eq.\,(\ref{eq:F-k}), we can prove $\chi_{\alpha}=\dot{S}_{B\alpha}$, and further $R_{{\cal I}}=R_{\mathrm{Sp}}$. Since we can prove the Spohn formula $R_{\mathrm{Sp}}\ge0$, the MIPr $R_{{\cal I}}$ also remains non-negative, which means the system-bath mutual information, i.e., their correlation, still keeps increasing monotonically in this non-thermal case. \section{Summary} In this paper, we study the production of the mutual information between the system and its environment. We find that this MIPr\,(\ref{eq:R_EP_ISB}) has a close connection with the conventional thermodynamic description of the EPr\,(\ref{eq:R-EPr}): when the baths of the open system are canonical thermal ones, this MIPr exactly reduces to the previous EPr. Therefore, the 2nd law statement $R_{\mathrm{ep}}\ge0$ can be equivalently understood as saying the system-bath correlation always keeps increasing. Besides, we also study an example of a single boson mode in contact with multiple squeezed thermal baths. In this case, the temperatures of the baths are not well defined and the previous EPr does not apply. We proved that the MIPr is still positive, which means the monotonic increase of the system-bath correlation also holds in this case. It would certainly be worthwhile to study the MIPr in more non-thermal systems. We remark that the proof for the positivity of the MIPr and the Spohn formula relies on the fact that the dynamics of the system can be well described by a Markovian master equation of the Lindblad (GKSL) form. If this is not fulfilled \cite{pucci_entropy_2013,sharma_landauer_2015,li_non-markovianity_2016,lampo_lindblad_2016}, the positivity cannot be guaranteed. Our study indicates that it is the system-bath correlation that keeps increasing monotonically although the total $S+B$ system evolves unitarily. This idea is also consistent with some other fundamental studies on thermodynamics, such as the local relaxation hypothesis \cite{cramer_exact_2008, *eisert_quantum_2015}, entanglement-based thermodynamics \cite{popescu_entanglement_2006, *goldstein_canonical_2006}, and the mutual-information understanding of black hole radiation \cite{zhang_hidden_2009, *zhang_entropy_2011}. \emph{Acknowledgement} \textendash{} The author appreciates helpful discussions with G. Agarwal, H. Dong, M. B. Kim, T. Peng, M. O. Scully, A. Svidzinsky, D.
Wang at Texas A\&M University, and C. P. Sun at the Beijing Computational Science Research Center. This study is supported by the Office of Naval Research (Award No.~N00014-16-1-3054) and the Robert A. Welch Foundation (Grant No.~A-1261).
\section{Introduction} The quality of a software system can be described using different attributes such as reliability, maintainability etc. There are also various approaches to improve the quality of software with differing emphasis on these attributes. Constructive methods comprise one group that tries to improve the overall development process in order to prevent the introduction of faults. However, the prevalent approach is still to use analytical methods, also called de\-fect-de\-tec\-tion tech\-nique s, to find and remove faults. The main representatives of this approach are tests and reviews. An often cited estimate \cite{myers79} relates 50\% of the overall development costs to testing. Jones \cite{jones91} still assigns 30--40\% to quality assurance and defect removal. Hence, de\-fect-de\-tec\-tion tech\-nique s are a promising field for cost optimisations. However, to be able to optimise the usage of de\-fect-de\-tec\-tion tech\-nique s, we need a suitable economical model first. There are some approaches that model software quality costs but mostly on a high level of abstraction. The effects of individual faults and the effectiveness of different de\-fect-de\-tec\-tion tech\-nique s regarding these faults are not taken into account. Also in \cite{ntafos01} it is discussed that ``cost is clearly a central factor in any realistic comparison but it is hard to measure, data are not easy to obtain, and little has been done to deal with it.'' Rai et al.\ identify in \cite{rai98} mathematical models of the economics of software quality assurance as an important research area. ``A better understanding of the costs and benefits of SQA and improvements to existing quantitative models should be useful to decision-makers.'' \subsection{Problem} The underlying question is how we can optimally use de\-fect-de\-tec\-tion tech\-nique s to improve the quality of software. In particular, we investigate in this paper how the economical relationships of de\-fect-de\-tec\-tion tech\-nique s and quality can be modelled and the importance of the factors in terms of the influence on the output and especially its variance. \subsection{Contribution} We propose an analytical model of the economics of de\-fect-de\-tec\-tion tech\-nique s incorporating different types of defect costs, the difficulty of finding a fault of different techniques and the probability of failure for a fault. This allows an evaluation of different techniques and gives a better understanding of the relationships. The used input factors are prioritised to simplify the model and to identify the factors that are most beneficial to be further investigated. Furthermore, a model based on defect types is derived to allow a simpler application on real world projects. This model could be used to predict optimal usage of de\-fect-de\-tec\-tion tech\-nique s in the future based on old project data. \subsection{Outline} We start by describing software quality costs in general and our understanding of the various cost factors in Sec.~\ref{sec:costs}. Sec.~\ref{sec:ideal} proposes an analytical model of the economics of de\-fect-de\-tec\-tion tech\-nique s that contains the costs associated with each fault. This model is subject to a sensitivity analysis based on the data from an older study in Sec.~\ref{sec:sensitivity}. For the practical application of the model a simplified version based on defect classes is derived in Sec.~\ref{sec:practical}. 
Sec.~\ref{sec:related} gives related work and final conclusions can be found in Sec.~\ref{sec:conclusions}. \section{Software quality costs} \label{sec:costs} \emph{Quality costs} are the costs associated with preventing, finding, and correcting defective work. Based on experience from the manufacturing area \cite{juran98,feigenbaum05} similar quality cost models have been developed explicitly for software \cite{knox93,slaughter98, Krasner1998}. These costs are divided into \emph{conformance} and \emph{nonconformance} costs, also called \emph{control costs} and \emph{failure of control costs}. The former comprises all costs that need to be spent to build the software in a way that it conforms to its quality requirements. This can be further broken down to \emph{prevention} and \emph{appraisal} costs. Prevention costs are for example developer training, tool costs, or quality audits, i.\,e.~costs for means to prevent the injection of faults. The appraisal costs are caused by the usage of various types of tests and reviews. The \emph{nonconformance} costs come into play when the software does not conform to the quality requirements. These costs are divided into \emph{internal failure} costs and \emph{external failure} costs. The former contains costs caused by failures that occur during development, the latter describes costs that result from failures at the client. A graphical overview is given in Fig.~\ref{fig:costs_overview}. Because of the distinction between prevention, appraisal, and failure costs this is often called \emph{PAF} model. \begin{figure}[h] \centering \includegraphics[width=8cm]{costs} \caption{Overview over the costs related to quality} \label{fig:costs_overview} \end{figure} We add further detail to the PAF model by introducing the main types of concrete costs that are important for defect-detection techniques. Note that there are more types that could be included, for example, maintenance costs. However, we concentrate on a more reliability-oriented view. The appraisal costs are detailed to the \emph{setup} and \emph{execution} costs. The former constituting all initial costs for buying test tools, configuring the test environment, and so on. The latter means all the costs that are connected to actual test executions or review meetings, mainly personnel costs. On the nonconformance side, we have \emph{fault removal} costs that can be attributed to the internal failure costs as well as the external failure costs. This is because if we found a fault and want to remove it, it would always result in costs no matter whether caused in an internal or external failure. Actually, there does not have to be a failure at all. Considering code inspections, faults are found and removed that have never caused a failure during testing. It is also a good example that the removal costs can be quite different regarding different techniques. When a test identifies a failure, there needs to be considerable effort spent to find the corresponding fault. During an inspection, faults are found directly. Fault removal costs also contain the costs for necessary re-testing and re-inspections. External failures also cause \emph{effect} costs. These are all further costs with the failure apart from the removal costs. For example, \emph{compensation} costs could be part of the effect costs, if the failure caused some kind of damage at the customer site. 
We might also include further costs such as loss of sales because of bad reputation in the effect costs but do not consider it explicitly because its out of scope of this paper. \section{Analytical Model} \label{sec:ideal} We describe a general, analytical model of de\-fect-de\-tec\-tion tech\-nique s in the following. It is general with respect to the various types of techniques it is able to analyse. We mainly analyse different types of testing which essentially detect failures and static analysis techniques that reveal faults in the code or other documents. A model that incorporates all important factors for these differing techniques needs to use the universal unit of money, i.e., units such as euro or dollar. We first describe the model and its assumptions in general, and then give equations for each component of the model for a single technique and for the combination of several techniques. \subsection{General} In this section, we concentrate on an ideal model of quality economics in the sense that we do not consider the practical use of the model but want to mirror the actual relationships as faithfully as possible. \subsubsection{Components} We divide the model in three main components: \begin{itemize} \item Direct costs $d_A$ \item Future costs $t_A$ \item Revenues / saved costs $r_A$ \end{itemize} The direct costs are characterised by containing only costs that can be directly measured during the execution of the technique. The future costs and revenues are both concerned with the (potential) costs in the field but can be distinguished because the future costs contain the costs that are really incurred whereas the revenues are comprised of saved costs. \subsubsection{Assumptions} The main assumptions in the model are: \begin{itemize} \item Found faults are perfectly removed. \item The amount or duration of a technique can be freely varied. \end{itemize} The first assumption is often used in software reliability modelling to simplify the stochastic models. It states that each fault detected is instantly removed without introducing new faults. Although this is often not true in real defect removal, it is largely independent of the used de\-fect-de\-tec\-tion tech\-nique\ and the newly introduced faults can be handled like initial faults which introduces only a small blurring. The second assumption is needed because we have a notion of time effort in the model to express for how long and with how many people a technique is used. This notion of time can be freely varied although for real de\-fect-de\-tec\-tion tech\-nique s this might not always make sense, especially when considering inspections or static analysis tools where a certain basic effort or none at all has to be spent. Still, even for those techniques, the effort can be varied by changing the speed of reading, for example. \subsubsection{Difficulty} We adapt the general notion of the difficulty of a technique $A$ to find a specific fault $i$ from \cite{littlewood:tse00} denoted by $\theta_A (i)$ as a basic quantity for our model. In essence, it is the probability that $A$ does not detect $i$. Furthermore, we denote the length of a technique application with $t_A$. With length we do not mean calendar time but effort measured in staff-days, for example. In the following equations we are often interested in the case when a fault is detected at least once by a technique. From the above we can conclude that the probability that $A$ detects $i$ is $1-\theta_A (i)$. 
However, as stated above, we have a concept of timing and effort for a technique that has to be incorporated in the difficulty. Hence, with $t_A$ denoting the effort spent for $A$, the probability that $i$ is at least detected once is $1 - \theta_A (i,t_A)$. \subsubsection{Defect Propagation} A further aspect to consider is that the defects occurring during development are not independent. There are various dependencies that could be considered but most importantly there is dependency in terms of propagation. Defects from earlier phases propagate to later phases and over process steps. We actual do not consider the phases to be the important factor here but the document types. In every development process there are different types of documents, or artifacts, that are created. Usually, those are requirements documents, design documents, code, and test specifications. Then one defect in one of these documents can lead to none, one, or more defects in later derived documents. A schematic overview is given in Fig.~\ref{fig:defect_propagation}. \begin{figure}[h] \begin{center} \includegraphics[width=.4\textwidth]{defect_propagation} \caption{How defects propagate over documents} \label{fig:defect_propagation} \end{center} \end{figure} We see that a requirements defect can lead to several defects in design documents as well as test specifications. The design defects can again propagate to the code and to (glass-box) test specifications. For each document type $k$ we have the set of defects $I_k$ and hence the total set of defects $I$ is $I = \bigcup{I_k}$. Furthermore, for each defect, we also look at its predecessor defects $R_i$. For the model this has the effect that a defect can only be found by a technique if neither the defect itself nor one of its predecessors was detected by an earlier used technique. \subsection{Equations} \label{sec:equations_ideal} We give an equation for each of the three components with respect to single de\-fect-de\-tec\-tion tech\-nique s first and later for a combination of techniques. \subsubsection{Direct Costs} The direct costs are those costs that can be directly measured from the application of a de\-fect-de\-tec\-tion tech\-nique . They are dependent on the length $t$ of the application. Fig.~\ref{fig:components} shows systematically the components of the direct costs. \begin{figure}[h] \begin{center} \includegraphics{components} \end{center} \caption{The components of the direct costs \label{fig:components}} \end{figure} From this we can derive the following equation containing the three cost types for a technique. \begin{equation} \label{eq:direct} d_A = u_A + e_A(t) + \sum_{i}{ (1 - \theta_A(i,t)) v_A(i)}, \end{equation} where $u_A$ are the setup costs, $e_A(t)$ the execution costs, and $v_A(i)$ the fault removal costs specific to that technique. Hence, we have for a technique its fixed setup costs, execution costs depending on the length of the technique and for each fault in the software removal costs if the technique is able to find it. \subsubsection{Future Costs} In case we were not able to find defects, these will result in costs in the future. We divide these costs into the two parts fault removal costs in the field $v_F(i)$ and failure effect costs $f_F(i)$. The latter contain all support and compensation costs as well as annoyed customers as far as possible. 
\begin{equation} \label{eq:future} t_A = \sum_i{\pi_i \theta_A(i,t) (v_F(i) + f_F(i))}, \end{equation} where $\pi_i = P$(fault $i$ is activated by randomly selected input and is detected and fixed) \cite{littlewood:tse00}. Hence, it describes the probability that the defect leads to a failure in the field. \subsubsection{Revenues} We do not only have costs with de\-fect-de\-tec\-tion tech\-nique s but also revenues. These revenues are essentially saved future costs. With every fault that we find in-house we avoid higher costs in the future. Therefore, we have the same cost categories but look at the faults that we find instead of the ones we are not able to detect. \begin{equation} \label{eq:saved} r_A = \sum_i{\pi_i (1 - \theta_A(i,t))(v_F(i) + f_F(i))} \end{equation} \subsubsection{Combination} Typically, more than one technique is used to find defects. The intuition behind that is that they find (partly) different defects. These dependencies are often ignored when the efficiency of de\-fect-de\-tec\-tion tech\-nique s is analysed. Nevertheless, this has a huge influence on the economics and efficiency. In our view, the notion of diversity of techniques from Littlewood et al.~\cite{littlewood:tse00} is very useful in this context. The covariance of the difficulty functions of faults describes the similarity of the effectiveness regarding fault finding. We already use the difficulty functions in the present model and therefore are able to express the diversity implicitly. For the direct costs it means that we sum over all different applications of de\-fect-de\-tec\-tion tech\-nique s. We define that $X$ is the ordered set of the applied de\-fect-de\-tec\-tion tech\-nique s. In each application we use Eq.~\ref{eq:direct} with the extension that we not only take the probability that the technique finds the fault into account but also that the ones before have not detected it. Here also the defect propagation needs to be considered, i.e., that not only the defect itself has not been detected but also its predecessors $R_i$. \begin{equation} \begin{split} \label{eq:direct_total} d_X = \sum_{x \in X}{\biggl[ u_x + e_x(t_x)} + \sum_i{\Bigl( (1 - \theta_x(i,t_x))}\\ \phantom{d_X = } \prod_{y < x}{\theta_y(i,t_y)} \prod_{j \in R_i}{\theta_y(j,t_y)}\Bigr) v_x(i) \biggr] \end{split} \end{equation} The total future costs are simply the costs of each fault with the probability that it occurs and all techniques failed in detecting it and its predecessors. \begin{equation} \begin{split} \label{eq:future_total} t_X = \sum_i{\biggl[ \pi_i \prod_{x \in X}{\theta_x(i,t_x)}}\\ \phantom{t_X} \prod_{y < x}{\prod_{j \in R_i}{\theta_y(j,t_y)}} (v_F(i) + f_F(i)) \biggr] \end{split} \end{equation} The equation for the revenues uses again a sum over all technique applications. In this case we look at the faults that occur, that are detected by a technique and neither itself nor its predecessors have been detected by the earlier applied techniques. \begin{equation} \begin{split} \label{eq:revenues_total} r_X = \sum_{x \in X}{\sum_i{\biggl[ \Bigl( \pi_i (1 - \theta_x(i,t_x)) \prod_{y < x}{\theta_y(i,t_y)}}}\\ \prod_{j \in R_i}{\theta_y(j,t_y)} \Bigr) \bigl( v_F(i) + f_F(i) \bigr) \biggr] \end{split} \end{equation} \subsubsection{ROI} One interesting metric based on these values is the return on investment (ROI) of the de\-fect-de\-tec\-tion tech\-nique s. If we look at the total ROI we have to use Eqns.~\ref{eq:direct_total}, \ref{eq:future_total}, and \ref{eq:revenues_total} for the calculation. 
\begin{equation} \mbox{ROI} = \frac{r_X - d_X - t_X}{d_X + t_X} \end{equation} This metric is suitable for a single post-evaluation of the quality assurance of a project. However, it alone cannot give an answer whether the effort was cost-optimal. \subsection{Forms of the Difficulty Functions} \label{sec:func_forms} The notion of \emph{difficulty} of the defect detection is a very central one in the described model. As mentioned this notion is based on an idea from \cite{littlewood:tse00}. However, the original difficulty functions have no concept of time or effort spent but only of one usage or two usages and so on. To be able to analyse and optimise the spent effort for each technique, we need to introduce that additional dimension in the difficulty functions, i.e., the functional form depending on the spent effort. Actually, the equations given for the model above already contain that dimension but it is not further elaborated. This gap is closed in the following. Firstly, we do not have sufficient data to give an empirically founded basis for the forms of the difficulty functions. Nevertheless, we can formulate hypotheses to identify the most probable distributions for different defects. Secondly, keep in mind that a difficulty function is defined for a specific de\-fect-de\-tec\-tion tech\-nique\ detecting a specific defect. That means that each defect can have distinct distribution for each possible technique. \subsubsection{Exponential Function} The function that most obviously models the process under investigation is an exponential function. The intuition is that with more effort spent the difficulty decreases, i.e., the probability of detecting that defect increases. However, with increasing effort the rate of difficulty reduction slows down. The defect detection gets more and more complicated when the ``obvious'' cases all have been tried. For this we use a function similar to the density function of an exponential distribution: \begin{equation} \theta(i,t) = \left\{ \begin{array}{ll} \lambda e^{-\lambda t} & \mbox{if $t>0$} \\ 1 & \mbox{otherwise} \end{array} \right. , \end{equation} with $\lambda$ being a parameter that is determined from empirical data from the technique and the defect. It is the inverse of the mean value of the empirically measured difficulty. \subsubsection{Linear and Constant Function} The linear difficulty function models the intuition that there is a steady decrease in difficulty. A review might be an example that employs such a behaviour. The more I read, the higher the possibility that I detect that specific defect. The function can be formulated as follows: \begin{equation} \label{eq:linear} \theta_A(\tau_i, t) = m t + 1, \end{equation} where $m$ is the (negative) slope of the straight line. The constant function constitutes a special case of the linear form of the difficulty functions. In this case the spent effort does not matter because the difficulty of detecting the defect is always the same. The intuitive explanation for this functional form is best explained using the example of a static analysis tool. These tools often use bug patterns specific for a language and thereby identify code sections that are critical. When searching for a specific bug pattern it is of no importance how much effort is spent but if the tool is not able to detect a specific pattern -- or only in seldom cases -- the probability of detection does not change. 
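To make the structure of the model tangible, the following small Python sketch instantiates the single-technique Eqs.~(\ref{eq:direct}), (\ref{eq:future}) and (\ref{eq:saved}) together with the ROI metric, using the exponential difficulty function introduced above; all cost figures, mean difficulties and failure probabilities are invented purely for illustration and carry no empirical meaning.
\begin{verbatim}
import math

faults = [
    # assumed per-fault data: mean difficulty (1/lambda), in-house removal
    # cost v, field removal cost v_F, field effect cost f_F, and the
    # probability pi that the fault leads to a failure in the field
    dict(mean_diff=5.0,  v=100.0, v_f=1000.0, f_f=5000.0, pi=0.3),
    dict(mean_diff=20.0, v=150.0, v_f=2000.0, f_f=8000.0, pi=0.1),
    dict(mean_diff=2.0,  v=80.0,  v_f=500.0,  f_f=1000.0, pi=0.6),
]

def theta(fault, t):
    """Exponential difficulty: theta(i,t) = lambda * exp(-lambda t)."""
    lam = 1.0 / fault["mean_diff"]
    return lam * math.exp(-lam * t) if t > 0 else 1.0

def direct_cost(t, setup=500.0, rate=40.0):
    """Eq. (direct): d_A = u_A + e_A(t) + sum_i (1 - theta(i,t)) v_A(i)."""
    return setup + rate * t + sum((1.0 - theta(f, t)) * f["v"] for f in faults)

def future_cost(t):
    """Eq. (future): t_A = sum_i pi_i theta(i,t) (v_F(i) + f_F(i))."""
    return sum(f["pi"] * theta(f, t) * (f["v_f"] + f["f_f"]) for f in faults)

def revenue(t):
    """Eq. (saved): r_A = sum_i pi_i (1 - theta(i,t)) (v_F(i) + f_F(i))."""
    return sum(f["pi"] * (1.0 - theta(f, t)) * (f["v_f"] + f["f_f"])
               for f in faults)

for t in (1, 5, 10, 20):
    d, fut, r = direct_cost(t), future_cost(t), revenue(t)
    roi = (r - d - fut) / (d + fut)
    print(f"t={t:2d}  d_A={d:7.0f}  t_A={fut:7.0f}  r_A={r:7.0f}  ROI={roi:5.2f}")
\end{verbatim}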
The constant distribution can also be used to model that a specific technique $A$ cannot detect a specific defect $i$ at all, by specifying that $\theta_A(i,t) = 1$ for all $t$. \subsubsection{Sigmoid Function} For our purposes it is sufficient to see the sigmoid function as a variation of the exponential function. Its graph has an \emph{S}-like shape with a single inflection point. In this special case we actually use a complementary sigmoid function to get a mirrored \emph{S}. In contrast to the exponential function, the sigmoid function models the intuition that in the beginning it is hard to detect a specific defect and the difficulty decreases only slowly. However, once a certain amount of effort has been spent, the rate increases and the chance of detecting the defect grows significantly until we reach a point of saturation -- similar to the exponential function -- where additional effort does not have a large impact. This distribution is also backed by the so-called S-curve of software testing \cite{kan02}. That S-curve aims in a slightly different direction but also shows that early in testing only a limited number of failures are revealed, then the detection rate increases until a plateau of saturation is reached. \subsection{Discussion} \label{sec:discussion} The model so far is not suited for a practical application in a company as the quantities used are not easy to measure. In practice, we are probably unable to obtain values of $\theta$ for each fault and de\-fect-de\-tec\-tion tech\-nique. Also, the fixed and distinct order of techniques is not completely realistic, as some techniques may be used in parallel or only on some parts of the software. However, in a more theoretical setting we can already use the model for important tasks including sensitivity analysis to identify important input factors. Another application can be to analyse which techniques influence which parts of the model. For instance, the automatic derivation of test cases from explicit behaviour models (model-based testing) is a relatively new technique for defect detection. This technique can be analysed and compared with traditional, hand-crafted test techniques based on our model. Two of the factors are obviously affected by model-based testing: (1) The setup costs are considerably higher than in hand-crafted tests because not only the normal test environment has to be set up but also a formal (and preferably executable) behaviour model has to be developed. (2) In contrast, the execution cost per test case is substantially smaller because the generation can be automated to some extent and the model can be used as an oracle to identify failures. Further influences on factors like the difficulty functions are not that obvious but need to be analysed. This example shows that the model can help to structure the comparison and analysis of defect-detection techniques. \section{Sensitivity Analysis} \label{sec:sensitivity} Every newly proposed mathematical model should be subject to various analyses. Apart from the appropriateness of the model to the modelled reality and the validity of estimates and predictions, the dependence of the output on the input parameters is of interest. The quantification of this dependence is called \emph{sensitivity analysis}. Local sensitivity analysis usually computes the derivative of the model response with respect to the model input parameters.
More generally applicable is global sensitivity analysis that apportions the variation in the output variables to the variation of the input parameters. We base the following description of the global sensitivity analysis we use mainly on \cite{saltelli04}. \subsection{Settings and Methods} Sensitivity analysis is the study of how the uncertainty in the output of a model can be apportioned to different sources of uncertainty in the model input. Still, what do we gain by that knowledge? There are various questions that can be answered by sensitivity analysis. As pointed out in \cite{Saltelli2004} it is important to specify its purpose beforehand. In our context two settings are of most interest: (1) factors priorisation and (2) factors fixing. \subsubsection{Factors Priorisation} The most important factor is the one that would lead to the greatest reduction in the variance of the output if fixed to its true value. Likewise the second most important factor can be defined and so on. The ideal use for the Setting FP is for the prioritisation of research and this is one of the most common uses of sensitivity analysis in general. Under the hypothesis that all uncertain factors are susceptible to determination, at the same cost per factor, Setting FP allows the identification of the factor that is most deserving of better experimental measurement in order to reduce the target output uncertainty the most. In our context, that means that we can determine the factors that are most rewarding to measure most precisely. \subsubsection{Factors Fixing} This setting is similar to factors priorisation but still has a slightly different flavour. Now, we do not aim to prioritise research in the factors but we want to simplify the model. For this we try to find the factors that can be fixed without reducing the output variance significantly. For our purposes this means that we can fix the input factor at any value in its uncertainty range without changing the outcome significantly. \subsubsection{FAST} There are various available methods for global sensitivity analysis. The \emph{Fourier amplitude sensitivity test (FAST)} is a commonly used approach that is based on Fourier developments of the output functions. It also allows an ANOVA-like decomposition of the model output variance. In contrast to correlation or regression coefficients, it is not dependent on the goodness of fit of the regression model. The results give a quantification of the influences of the parameters, not only a qualitative ranking as the Morris method, for example. With the latest developments of this method, it is able not only to compute the first-order effects of each input parameter but also the higher-order and total effects. The first order effect is the influence of a single input parameter on the output variance, whereas the total effects also capture the interaction between input parameters. This is also important for the different settings as the first-order effects are used for the factors priorisation setting, the total-order effects for the factors fixing setting. \subsubsection{SimLab} We use the sensitivity analysis tool \emph{SimLab} \cite{Simlab} for the analysis. Inside the tool we need to define all needed input parameters and their distributions of their uncertainty ranges. For this, different stochastic distributions are available. The tool then generates the samples needed for the analysis. 
This sample data can then be read from a file into the model -- in our case a Java program -- that is expected to write its output into a file with a specified format. This file is read again by SimLab and the first-order and total-order indexes are computed from the output. \subsection{Input Factors and Data} We describe the analysed scenario, factors and data needed for the sensitivity analysis in the following. The distributions are derived from the survey \cite{wagner:tumi06,Wagner2006}. We base the analysis on an example software system with 1000 LOC and with 10--15 faults. The reason for the small number of faults is the increase in complexity of the analysis for higher numbers of faults. \subsubsection{Techniques} We have to base the sensitivity analysis on common or \emph{average} distributions of the input factors. This also implies that we use a representative set of de\-fect-de\-tec\-tion tech\-niques in the analysis. We choose seven commonly used de\-fect-de\-tec\-tion tech\-niques and encode them with numbers: requirements inspection (0), design inspection (1), static analysis (2), code inspection (3), (structural) unit test (4), integration test (5), and (functional) system test (6). As indicated, we assume that unit testing is a structural (glass-box) technique, system testing is functional (black-box), and integration testing is both. The usage of those seven techniques, however, does not imply that all of them are used in each sample, as we allow the effort $t$ to be zero. \subsubsection{Additional Factors} To express the defect propagation concept of the model we added the additional factor $\rho$ as the number of predecessors. The factor $c$ represents the defect class, i.e., the type of artifact the defect is contained in. This is important for the decision whether a certain technique is capable of finding that specific defect at all. The factor $\phi$ encodes the form of difficulty function that is used for a specific fault and a specific technique. We include all the forms presented above in Sec.~\ref{sec:func_forms}. The sequence $s$ of techniques determines the order of execution of the techniques. We allow several different sequences including nonsense orders in which system testing is done first and requirements inspection last. Finally, the average labour cost per hour $l$ is added because it is not explicitly included in the model equations from Sec.~\ref{sec:equations_ideal}. Note that we excluded the effect costs from the sensitivity analysis because we do not have sufficient data to give a probability distribution. \subsection{Results and Observations} This section summarises the results of applying the FAST method to the data from the example above and discusses observations. The analysed output factor is the return-on-investment (ROI). \subsubsection{Abstract Grouping} We first take an abstract view on the input factors and group them without analysing the input factors for different techniques separately. Hence, we only have 11 input factors that are ordered with respect to their first and total order indexes in Tab.~\ref{tab:ideal_abstract}. The first order indexes are shown on the left, the total order indexes on the right.
\begin{table}[htbp] \caption{The first and total order indexes of the abstract grouping} \begin{center} \begin{tabular}{|l|r|l|r|} \hline Factor & First order & Factor & Total order \\ \hline $c$ & 0.4698 & $c$ & 0.8962 \\ \hline $t$ & 0.1204 & $v_f$ & 0.4473 \\ \hline $\bar{\theta}$ & 0.0699 & $\bar{\theta}$ & 0.4255 \\ \hline $v_f$ & 0.0541 & $u$ & 0.3916 \\ \hline $\phi$ & 0.0365 & $t$ & 0.3859 \\ \hline $u$ & 0.0297 & $\phi$ & 0.2888 \\ \hline $v$ & 0.0264 & $\rho$ & 0.2711 \\ \hline $\rho$ & 0.0256 & $v$ & 0.2546 \\ \hline $\pi$ & 0.0158 & $\pi$ & 0.2068 \\ \hline $s$ & 0.0083 & $s$ & 0.1825 \\ \hline $l$ & 0.0010 & $l$ & 0.1489 \\ \hline \end{tabular} \end{center} \label{tab:ideal_abstract} \end{table} The first order indexes are used for the factors prioritisation setting. We see that the type of document or artifact the defects are contained in is the most rewarding factor to investigate in more detail. One reason might be that we use a uniform distribution, because we do not have more information on the distribution of defects over document types. However, this seems to be an important piece of information. The factor that ranks second highest is the spent effort. This confirms the intuition that the effort has strong effects on the output and hence needs to be optimised. Also, the average difficulty of finding a defect with a technique and the costs of removing a defect in the field are worth investigating further. Interestingly, the labour costs, the sequence of technique application, and the failure probability in the field do not contribute strongly to the variance in the output. Hence, these factors should not be the focus of further research. For the factors fixing setting, the ordering of the input factors is quite similar. Again, the failure probability in the field, the sequence of technique application, and the labour costs can be fixed without significantly changing the output variance. The factors that definitely cannot be fixed are again the document types, the removal costs in the field, and the average difficulty values. The setup costs rank higher with these indexes and hence should not be fixed. \subsubsection{Detailed Grouping} After the abstract grouping, we form smaller groups and differentiate between the factors with regard to different defect-detection techniques. The first and total order indexes are shown in Tab.~\ref{tab:ideal_detailed}, again with the first order indexes on the left and the total order indexes on the right.
\begin{table}[htbp] \caption{The first and total order indexes of the detailed grouping} \begin{center} \begin{tabular}{|l|r|l|r|} \hline Factor & First order & Factor & Total order \\ \hline $c$ & 0.2740 & $c$ & 0.7750 \\ \hline $t_1$ & 0.0601 & $\phi_4$ & 0.3634 \\ \hline $\pi$ & 0.0528 & $t_1$ & 0.3332 \\ \hline $\phi_4$ & 0.0492 & $\pi$ & 0.3200 \\ \hline $\phi_1$ & 0.0391 & $v_f$ & 0.2821 \\ \hline $v_3$ & 0.0313 & $v_3$ & 0.2802 \\ \hline $\phi_0$ & 0.0279 & $\phi_1$ & 0.2728 \\ \hline $\rho$ & 0.0278 & $\rho$ & 0.2706 \\ \hline $\phi_2$ & 0.0269 & $v_1$ & 0.2574 \\ \hline $v_f$ & 0.0252 & $s$ & 0.2524 \\ \hline $\phi_6$ & 0.0222 & $\bar{\theta}_5$ & 0.2493 \\ \hline $v_0$ & 0.0219 & $\bar{\theta}_0$ & 0.2312 \\ \hline $\phi_3$ & 0.0216 & $\bar{\theta}_3$ & 0.2300 \\ \hline $\bar{\theta}_6$ & 0.0214 & $\phi_6$ & 0.2287 \\ \hline $v_5$ & 0.0212 & $\phi_2$ & 0.2240 \\ \hline $\bar{\theta}_0$ & 0.0209 & $\bar{\theta}_1$ & 0.2077 \\ \hline $s$ & 0.0208 & $v_5$ & 0.2039 \\ \hline $\bar{\theta}_1$ & 0.0203 & $\phi_0$ & 0.1966 \\ \hline $v_1$ & 0.0203 & $u_3$ & 0.1913 \\ \hline $\bar{\theta}_4$ & 0.0197 & $v_0$ & 0.1907 \\ \hline $\phi_5$ & 0.0194 & $\bar{\theta}_6$ & 0.1894 \\ \hline $\bar{\theta}_5$ & 0.0186 & $\phi_5$ & 0.1892 \\ \hline $t_2$ & 0.0185 & $\phi_3$ & 0.1854 \\ \hline $\bar{\theta}_3$ & 0.0181 & $v_4$ & 0.1807 \\ \hline $v_6$ & 0.0142 & $t_5$ & 0.1719 \\ \hline $v_2$ & 0.0139 & $v_6$ & 0.1709 \\ \hline $v_4$ & 0.0120 & $\bar{\theta}_4$ & 0.1707 \\ \hline $\bar{\theta}_2$ & 0.0109 & $v_2$ & 0.1633 \\ \hline $t_6$ & 0.0089 & $t_6$ & 0.1619 \\ \hline $t_4$ & 0.0058 & $u_5$ & 0.1451 \\ \hline $t_3$ & 0.0051 & $t_4$ & 0.1409 \\ \hline $t_5$ & 0.0034 & $u_4$ & 0.1404 \\ \hline $u_5$ & 0.0018 & $t_2$ & 0.1378 \\ \hline $u_0$ & 0.0013 & $t_0$ & 0.1268 \\ \hline $u_4$ & 0.0010 & $l$ & 0.1222 \\ \hline $t_0$ & 0.0009 & $\bar{\theta}_2$ & 0.1125 \\ \hline $u_3$ & 0.0007 & $u_6$ & 0.1122 \\ \hline $u_6$ & 0.0007 & $u_0$ & 0.1085 \\ \hline $u_1$ & 0.0005 & $t_3$ & 0.1053 \\ \hline $u_2$ & 0.0005 & $u_1$ & 0.1034 \\ \hline $l$ & 0.0002 & $u_2$ & 0.0996 \\ \hline \end{tabular} \end{center} \label{tab:ideal_detailed} \end{table} The main observations from the abstract grouping for the factors prioritisation setting are still valid. The type of artifact the defect is contained in still ranks highest and has the most value in reducing the variance. However, in this detailed view, the failure probability in the field ranks higher. This implies that this factor should not be neglected. We also see that for some techniques the form of the difficulty function has a strong influence and that the setup costs of most techniques rank low. Similar observations can be made for the factors fixing setting and the total order indexes. The main observations are similar to those in the abstract grouping. Again, the failure probability in the field ranks higher. Hence, this factor cannot be fixed without changing the output variance significantly. A further observation is that some of the setup costs can be set to a fixed value, which reduces the measurement effort. \subsection{Discussion and Consequences} From the observations above, we can conclude that the labour costs, the sequence of technique application, and the removal costs of most techniques are not an important part of the model, and that the variation in effort does not have strong effects on the output, i.e., the ROI in our case. On the other hand, the type of artifact or document the defect is contained in, the difficulty of defect detection, and the removal costs in the field have the strongest influences.
This has several implications: (1) we need more empirical research on the distribution of defects over different document types and on the removal costs of defects in the field, to improve the model and confirm the importance of these factors; (2) we still need more empirical studies on the effectiveness of different techniques, as this factor can largely reduce the output variance; (3) the labour costs do not have to be determined in detail, and it does not seem to be relevant to reduce those costs; (4) further studies on the sequence and the removal costs are not necessary. \section{Practical Application} \label{sec:practical} As we discussed above, the theoretical model can be used for analyses but is too detailed for a practical application. The main goal is, however, to optimise the usage of defect-detection techniques, which requires applicability in practice. Hence, we need to simplify the model to reduce the number of needed quantities. \subsection{General} For the simplification of the model, we use the following additional assumptions: \begin{itemize} \item Faults can be categorised into useful defect types. \item Defect types have specific distributions regarding their detection difficulty, removal costs, and failure probability. \item The linear functional form of the difficulty approximates all other functional forms sufficiently well. \end{itemize} We define $\tau_i$ to be the defect type of fault $i$. It is determined using the defect type distribution of older projects. In this way, we do not have to look at individual faults but can analyse and measure defect types, for which the determination of the quantities is significantly easier. In the practical model we assume that the defects can be grouped into ``useful'' classes or defect types. For reformulating the equations it is sufficient to consider the affiliation of a defect to a type, but for using the model in practice we need to elaborate further on the nature of defect types and how to measure them. For our economics model we consider the defect classification approaches from IBM \cite{kan02} and HP \cite{grady92} as most suitable, because they have proven to be usable in real projects and have a categorisation that is coarse-grained enough to make sensible statements about each category. We also drop the concept of defect propagation: it was shown not to have a high priority in the analyses above, but it introduces significant complexity into the model. Hence, the practical model can be simplified notably. \subsection{Equations} Similar to Sec.~\ref{sec:equations_ideal}, where we defined the basic equations of the ideal model, we formulate the equations for the practical model using the assumptions from above. \subsubsection{Single Economics} We start with the direct costs of a defect-detection technique. Now we do not consider the ideal quantities but use average values for the cost factors. We denote this with a bar over the cost name. \begin{equation} \label{eq:direct_practical} d_A = \bar{u}_A + \bar{e}_A(t) + \sum_{i}{ (1 - \theta_A(\tau_i,t)) \bar{v}_A(\tau_i)}, \end{equation} where $\bar{u}_A$ is the average setup cost for technique $A$, $\bar{e}_A(t)$ is the average execution cost for $A$ with length $t$, and $\bar{v}_A(\tau_i)$ is the average removal cost for defect type $\tau_i$. Apart from using average values, the main difference is that we consider defect types in the difficulty functions. The same applies to the revenues.
\begin{equation} \label{eq:saved_practical} r_A = \sum_i{\pi_{\tau_i} (1 - \theta_A(\tau_i,t))(\bar{v}_F(\tau_i) + \bar{f}_F(\tau_i))}, \end{equation} where $\bar{f}_F(\tau_i)$ is the average effect cost of a fault of type $\tau_i$. Finally, the future costs can be formulated accordingly. \begin{equation} \label{eq:future_practical} t_A = \sum_i{\pi_{\tau_i} \theta_A(\tau_i,t) (\bar{v}_F(\tau_i) + \bar{f}_F(\tau_i))}. \end{equation} With the additional assumptions, we can also formulate a unique form of the difficulty functions: \begin{equation} \theta_A(\tau_i, t_A) = m\, t_A + 1, \end{equation} where $m$ is the (negative) slope of the straight line. If a technique is not able to detect a certain type, we set $m = 0$. \subsubsection{Combined Economics} Similarly, the extension to more than one technique can be made. \begin{equation} \begin{split} \label{eq:practical_direct_combined} d_X = \sum_{x \in X}{\biggl[ \bar{u}_x + \bar{e}_x(t_x)} + \sum_i{ (1 - \theta_x(\tau_i,t_x))}\\ \prod_{y < x}{\Bigl( \theta_y(\tau_i,t_y)} \Bigr) \bar{v}_x(\tau_i) \biggr] \end{split} \end{equation} \begin{equation} t_X = \sum_i{\pi_{\tau_i} \prod_{x \in X}{\Bigl (\theta_x(\tau_i,t_x) \Bigr) \Bigl( \bar{v}_F(\tau_i) + \bar{f}_F(\tau_i) \Bigr) }} \end{equation} \begin{equation} \begin{split} \label{eq:practical_revenues_combined} r_X = \sum_{x \in X}{\sum_i{\pi_{\tau_i} (1 - \theta_x(\tau_i,t_x))}} \\ \prod_{y < x}{\Bigl( \theta_y(\tau_i,t_y)} \Bigr) \Bigl( \bar{v}_F(\tau_i) + \bar{f}_F(\tau_i) \Bigr) \end{split} \end{equation} \subsection{Sensitivity Analysis} Similar to the analyses in Sec.~\ref{sec:sensitivity}, we determined the first and total order indexes of the practical model, again with data from \cite{wagner:tumi06,Wagner2006}. The results are shown in Tab.~\ref{tab:practical}, with the first order indexes on the left and the total order indexes on the right. We have to note that we only looked at defects in the code because we have no empirical data on defect types in other kinds of documents. Furthermore, we introduced the factor $\alpha$ that denotes the fraction of defects of a specific defect type. \begin{table}[htbp] \caption{The first and total order indexes from the practical model} \begin{center} \begin{tabular}{|l|r|l|r|} \hline Factor & First order & Factor & Total order \\ \hline $t$ & 0.1196 & $t$ & 0.8855 \\ \hline $\pi$ & 0.1138 & $v_f$ & 0.8670 \\ \hline $\bar{\theta}$ & 0.1097 & $s$ & 0.7881 \\ \hline $\alpha$ & 0.0975 & $\bar{\theta}$ & 0.7857 \\ \hline $v_f$ & 0.0694 & $l$ & 0.7772 \\ \hline $l$ & 0.0634 & $\alpha$ & 0.6676 \\ \hline $s$ & 0.0592 & $\pi$ & 0.6200 \\ \hline $u$ & 0.0476 & $u$ & 0.4902 \\ \hline $v$ & 0.0018 & $v$ & 0.0958 \\ \hline \end{tabular} \end{center} \label{tab:practical} \end{table} We see that the effort for the techniques ranks highest in both settings. The failure probability again ranks high in the factors prioritisation setting. Hence, this factor should be investigated in more detail. Similarly to the ideal model, the setup and removal costs of the techniques do not contribute strongly to the output variance. In the factors fixing setting, we see that the setup and removal costs can be fixed without changing the variance significantly. This implies that we can use coarse-grained values here. Also, the failure probability can be taken from literature values. More emphasis, however, should be put on the effort, the removal costs in the field, and the sequence of technique application, of which the last is surprising, as this factor ranked rather low for the ideal model.
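To make the practical-model equations concrete before turning to the optimisation, the following small executable sketch evaluates the single-technique economics of Eqs.~\ref{eq:direct_practical}--\ref{eq:future_practical} with linear difficulty functions. All defect types, cost values, slopes, and the clipping of the difficulty to the interval $[0,1]$ are our own illustrative choices and do not stem from the survey data.
\begin{verbatim}
# Illustrative evaluation of the practical single-technique economics
# (direct costs d_A, revenues r_A, future costs); all numbers invented.
defect_types = {
    # type: (pi: failure prob., v_field: removal cost in the field,
    #        f_field: effect cost in the field, v_A: removal cost by A,
    #        m: slope of the linear difficulty function of technique A)
    "logic":     (0.3, 2000.0, 8000.0, 150.0, -0.008),
    "interface": (0.2, 1500.0, 4000.0, 100.0, -0.004),
    "data":      (0.1, 1000.0, 2000.0,  80.0,  0.0),  # m = 0: not detectable
}
faults = ["logic", "logic", "interface", "data"]  # defect types of the faults

def difficulty(m, t):
    """Linear difficulty theta(tau, t) = m*t + 1, clipped to [0, 1]."""
    return min(1.0, max(0.0, m * t + 1.0))

def single_technique_economics(t, setup=500.0, exec_cost_per_hour=80.0):
    d = setup + exec_cost_per_hour * t            # u_A + e_A(t)
    r = 0.0
    fut = 0.0
    for tau in faults:
        pi, v_f, f_f, v_a, m = defect_types[tau]
        theta = difficulty(m, t)
        d += (1.0 - theta) * v_a                  # removal cost of found defects
        r += pi * (1.0 - theta) * (v_f + f_f)     # saved field costs
        fut += pi * theta * (v_f + f_f)           # remaining field costs
    return d, r, fut

d, r, fut = single_technique_economics(t=40.0)
# One common ROI definition: (revenues - direct costs) / direct costs.
print(f"direct={d:.0f} revenues={r:.0f} future={fut:.0f} ROI={(r - d) / d:.2f}")
\end{verbatim}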
\subsection{Optimisation} \label{sec:optimisation} For the optimisation, only two of the three components of the model are important, because the future costs and the revenues are dependent on each other. There is a specific number of faults that have associated costs when they occur in the field. These costs are divided into the two parts that are associated with the revenues and the future costs, respectively. The total always stays the same; only the sizes of the parts vary depending on the defect-detection techniques used. Therefore, we use only the direct costs and the revenues for optimisation and consider the future costs to be dependent on the revenues. Hence, the optimisation problem can be stated as: maximise $r_X - d_X$. By using Eq.~\ref{eq:practical_direct_combined} and Eq.~\ref{eq:practical_revenues_combined} we get the following expression to be maximised. \begin{equation} \begin{split} \sum_x{\biggl[ - \bar{u}_x - \bar{e}_x(t_x) + \sum_i{ (1 - \theta_x(\tau_i,t_x)) }} \\ \prod_{y < x}{(\theta_y(\tau_i,t_y))} \bigl(\pi_{\tau_i} \bar{v}_F(\tau_i) + \pi_{\tau_i} \bar{f}_F(\tau_i) - \bar{v}_x(\tau_i) \bigr) \biggr] \end{split} \end{equation} This expression shows in a very concise way the important factors in the economics of defect-detection techniques. For each technique there are the fixed setup costs and the execution costs that depend on the effort. Then, for each fault in the software (and over all fault classes), we use the probability that the technique is able to find the fault and that no other technique has found it before to calculate the expected values of the other costs. The revenues are the removal and effect costs in the field, weighted by the failure probability, because they are only relevant if the fault leads to a failure. Finally, we have to subtract the removal cost for the fault with that technique, which is typically much smaller than in the field. For the optimisation, we typically also have some restrictions that have to be taken into account, for example a maximum effort $t_{\textit{max}}$ with $\sum_x{t_x} \leq t_{\textit{max}}$, either a fixed length or none ($t_A \in \{0,100\}$), or some fixed orderings of techniques. The latter is typically true for different forms of testing, as system tests always come later in the development than unit tests. Having defined the optimisation problem and the specific restrictions, we can use standard algorithms for solving it. It is a hard problem because it involves multi-dimensional optimisation over a permutation space, i.e., not only the length of technique usage can be varied but also the sequence of the defect-detection techniques. \subsection{Applications} \label{sec:applications} In this section, we describe two possibilities for how the practical model can be used. We can use the model in experiments as well as during normal software development. As discussed in Sec.~\ref{sec:discussion}, we can use the ideal model to explain the effects of techniques on the economics. The practical model is suited to measuring important aspects of defect-detection techniques in software engineering experiments by finding difficulty functions of certain techniques in certain domains. In software projects, the practical model can also help to optimise the future quality assurance by using the information from old projects. \subsubsection{In-House} The main idea is to predict the future economics based on the data from finished projects.
The approach should then contain the following parts: \begin{itemize} \item Classify the found faults. \item Record which technique found which fault. \item Record which faults were found in the field. \item Estimate the failure probability and costs for each fault. \end{itemize} From this data, we can estimate the needed quantities. This estimation process can have different forms. The failure probability can be estimated either by expert opinion or from field data if it was a field failure. The cost data can be partly taken from effort measurements during development and from the field. Then we can try to answer the two questions: what is the optimal length of a technique, and what is the optimal combination? However, note that the results are in all cases dependent on the problem class and domain, because these have a huge influence on the costs. \subsubsection{Domain-Specific} A second application could be to try to generalise the results of the model to a complete domain, either from field studies or from experiments. There are probably specific defect types in specific domains for which we might be able to collect data that is not only valid inside one company but for the whole domain. In this way, data from other companies could be used for optimisation purposes. \section{Related Work} \label{sec:related} Our own previous work on the quality economics of defect-detection techniques forms the basis of this model. We formulated some simple relationships of cost factors and how these could be used in evaluating and comparing different techniques in \cite{wagner:sew05}. This was refined in \cite{wagner:wosq05}, where additional means to predict future costs are incorporated. Some first results of the current model and sensitivity analysis can be found in \cite{wagner:wosq06}. The available related work can generally be classified into two categories: (1) theoretical models of the effectiveness and efficiency of either test techniques or inspections and (2) economics-oriented, abstract models for quality assurance in general. The first type of model is able to incorporate interesting technical details but is typically restricted to a specific type of technique, and economic considerations are often not taken into account. The second type of model typically comes from more management-oriented researchers who consider economic constraints and are able to analyse different types of defect detection but often treat the technical details in a very abstract way. Pham describes in \cite{pham00} various flavours of a software cost model for the purpose of deciding when to stop testing. It is a representative of models that are based on reliability models. The main problem with such models is that they are only able to analyse system testing and no other defect-detection techniques, and that the differences between different test techniques cannot be considered. Holzmann describes in \cite{holzmann01} his understanding of the economics of software verification. He describes some trends and hypotheses that are similar to ours, and we can support most of the ideas, although they first need empirical justification. Kusumoto et al.\ describe in \cite{kusumoto92,kusumoto93} a metric for cost effectiveness mainly aimed at software reviews. They introduce the concept of virtual software test costs, which denote the testing costs that would have been needed if no reviews had been done. This implies that we always want a certain level of quality.
A model similar to the Kusumoto model but with an additional concept of defect propagation was proposed by Freimut et al.\ in \cite{freimut05}. The economics of the inspection process are investigated in \cite{biffl01}. This work also uses defect classes and severity classes to determine the specific costs. However, it identifies only the smaller removal costs as the benefit of an inspection. An example of theoretical models of software testing is the work of Morasca and Serra-Capizzano \cite{morasca04}. They concentrate on technical details such as the different failure rates. That paper also contains a detailed review of similar models. Ntafos describes some considerations on the cost of software failure in \cite{ntafos98}. The difficulties of collecting appropriate data are shown, but the model itself is described only on an abstract level. In \cite{krishnan96,slaughter98} a metric called \emph{return on software quality (RO\-SQ)} is defined. It is intended to financially justify investments in quality improvement. The underpinnings of this metric are similar to the analytical model defined in this paper, although there are significant differences. Firstly, it aims mainly at measuring the effects of process improvements, i.\,e.\ constructive quality assurance, whereas we concentrate on analytical quality assurance. Secondly, the calculations are based mainly on the average defect content in the software and do not consider the important question of whether the faults lead to failures. In \cite{knox93} the model of software quality costs is related to the Capability Maturity Model (CMM) \cite{paulk95}. The emphasis is hence on the prevention costs and on how the improvement in terms of CMM levels helps in preventing failures. Galin extends in \cite{galin04a,galin04b} the software quality costs with managerial aspects, but the extensions are not relevant in the context of defect-detection techniques. Guidelines for applying a quality cost model in a business environment in general are given in \cite{Kaner1996}. Mandeville also describes in \cite{Mandeville1990} software quality costs, a general methodology for cost collection, and how specific data from these costs can be used in communication with management. Humphrey presents in \cite{humphrey95} his understanding of software quality economics. The defined cost metrics do not represent monetary values but only fractions of the total development time. Furthermore, the effort for testing is classified as failure cost instead of appraisal cost. Collofello and Woodfield propose in \cite{collofello89} a metric for cost efficiency but do not incorporate failure probabilities or difficulties. Based on the general model for software cost estimation COCOMO, the COQUALMO model was specifically developed for quality costs in \cite{chulani99}. This model is different in that it aims at estimating the costs beforehand and that it uses only coarse-grained categories of defect-detection techniques. In our work, we want to analyse the differences between techniques in more detail. Boehm et al.\ also present in \cite{boehm04} the iDAVE model that uses COCOMO II and COQUALMO. This model allows a thorough analysis of the ROI of dependability. The main difference is again the granularity. Only an average cost saving per defect is considered. We believe that analysing costs per defect type can improve estimates and predictions.
Building on iDAVE, Huang and Boehm propose a value-based approach for determining how much quality assurance is enough in \cite{huang05}. In some respects, that work is also more coarse-grained than ours because it considers only the defect levels from COQUALMO. However, it contains an interesting component that deals with time-to-market costs, which are currently missing from our model. A model somewhat similar to COQUALMO in terms of the description of the defect introduction and removal process is described in \cite{jalote03}. However, it offers means to optimise the resource allocation. The only measure used for defect-detection techniques is the defect removal efficiency. \section{Conclusions} \label{sec:conclusions} We finally summarise our work and its main contributions and give some directions for future work. \subsection{Summary} We propose an analytical model of quality economics with a strong focus on defect-detection techniques. This focus is necessary to be more detailed than comparable approaches. In this way, we incorporate different cost types that are essential for evaluating defect-detection techniques and also a notion of reliability, or the probability of failure. The latter is very important because, in terms of reliability, it matters which faults are found. This distinguishes the model from more abstract approaches. On the other hand, there are models derived from software reliability modelling. These models are typically simpler but can only be used for techniques to which reliability models can be applied, i.\,e.\ mainly system tests. We aim to incorporate all types of defect-detection techniques. One of the main contributions is also the research prioritisation. We find that it is most rewarding to further investigate the distribution of defects over document types, the removal costs in the field, and the difficulty, especially the functional form with respect to varying effort. None of these has been subject to extensive empirical work. The main weakness of our model is that the ideal model is not usable in real software projects. Hence, we derived a practical model that is based on defect types. This gives us a greater data basis for each type. The problem here is that it is not totally clear whether this structuring into defect types is really able to give useful distributions of the removal costs, removal difficulty, and failure probability. Furthermore, it strongly depends on how ``well'' these types are defined, and we currently have no requirements on the classes. \subsection{Future Work} As future work, we consider working on support for the estimation of the needed quantities of the practical model, especially the number of faults $\bar{I}$, and also on the probability of failure of the defects, as these are important factors. An application of the model to a real project, and thereby an analysis of the predictive validity of the model, is one of the next major steps. The optimisation must be worked out in more detail, and effective tool support is essential to make the model applicable in practice. Finally, an incorporation of time to market might be beneficial because there are important costs associated with time overruns that need to be considered. In some markets this may be even more important than all the other factors contained in the model. \section{Acknowledgments} We are grateful to Sandro Morasca for detailed comments on the model and to Bev Littlewood for help in understanding their diversity model.
This research was supported by the \emph{Deutsche Forschungsgemeinschaft (DFG)} within the project \emph{InTime}. \bibliographystyle{abbrv}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} \label{sec-introduct} Neutrinos interacting with plasmas play key roles in many astrophysical situations, including supernova explosions. They are generally produced in highly energetic events in the cores of massive stars and can have a significant impact on the cooling of white dwarfs and neutron stars \cite{adams1963, winget2004}. Although the interaction between neutrinos and matter is weak, in the gamma-ray bursts of a supernova explosion the energy emitted by neutrinos can be very high (almost $99 \%$ of the gravitational binding energy of the collapsing star) and the intensity can exceed $10^{28}$ W cm$^{-2}$. Furthermore, in the first few seconds of the explosion, the neutrino burst that originates from the core of the supernova is a source of free energy that can drive collective oscillations and instabilities, which may lead to the revival of a stalled supernova shock \cite{bingham1994, bingham1996}. \par In nuclear reactions (such as those inside the Sun, as well as in laboratories), neutrinos can appear in three different types, namely, \textit{electron}, \textit{muon} and \textit{tau}. These three types are typically termed flavors. In a series of experiments, it has been established that neutrinos can oscillate from one flavor state to another; accordingly, this phenomenon is referred to as neutrino flavor oscillation. The interaction of neutrinos with plasmas initiates a resonant coupling between different flavor states, which is known as the Mikheyev-Smirnov-Wolfenstein (MSW) effect \cite{bethe1986, mikheev1986,wolfenstein1978}. Such interactions not only reshape the neutrino flavor oscillations, but also generate an induced neutrino charge which gives rise to collective plasma oscillations with a significant enhancement of the collision cross sections. In this context, several authors have studied neutrino-plasma interactions considering neutrino flavor oscillations; see, e.g., \cite{bingham2004, mendonca2014,haas2013,mendonca2013}. To mention a few: in Ref. \cite{haas2013}, it has been shown that the two-flavor neutrino-plasma oscillation equations admit an exact analytic solution for arbitrarily chosen electron neutrino populations. A hydrodynamic model has been introduced by Mendon{\c{c}}a and Haas \cite{mendonca2013} to study the plasma and neutrino flavor oscillations in turbulent plasmas. \par On the other hand, the neutrino-plasma coupling in magnetized plasmas can lead to different types of hydrodynamic instabilities, which may influence the neutrino beam transport by modifying the properties of the background medium. Several studies have focused on the physics of collective neutrino-plasma interactions in different astrophysical situations \cite{bingham1996, serbeto1999, serbeto2002}. Also, the parametric instabilities associated with an intense neutrino flux and collective plasma oscillations have been studied by Bingham \textit{et al.} \cite{bingham1994}. Furthermore, the generation of neutrino-beam-driven wakefields \cite{aserbeto2002}, the neutrino streaming instability \cite{silva2000, silva1999,silva2006}, and neutrino Landau damping \cite{silva1999} have been studied in different contexts. The latter effect can be relevant to the cooling process of strongly turbulent plasmas. In other investigations, it has been shown that neutrinos can contribute to the generation of both inhomogeneities and magnetic fields in the early universe \cite{shukla1998, shukla2003}.
\par Recently, Haas \textit{et al.} \cite{haas2016} proposed a neutrino MHD (NMHD) model in magnetoplasmas by considering the neutrino-plasma interactions as well as the coupling between MHD waves and neutrino fluids. This model was studied for the propagation of magnetosonic waves in a specific geometry, i.e., when the propagation direction is perpendicular to the external magnetic field. However, the theory was later extended to an arbitrary direction of propagation \cite{haas2017}. Motivated by these works, the influence of intense neutrino beams on the hydrodynamic Jeans instability has been studied by Prajapati in a magnetized quantum plasma \cite{prajapati2017}. It turns out that the NMHD model has become very useful for establishing connections between various astrophysical phenomena and neutrino-plasma coupling processes in magnetized media. \par In this work, our aim is to advance the previous theory of NMHD waves \cite{haas2017} by considering (in addition to the neutrino beam effects) the influence of two-flavor (electron- and muon-neutrino) oscillations on the neutrino-beam-driven MHD waves and instabilities. We show that the two-flavor oscillations not only resonantly interact with the oblique magnetosonic wave, but can also contribute significantly to the growth rate of the instability. \par The paper is organized as follows: In Sec. \ref{sec-model}, we describe the NMHD model, which is coupled to the dynamics of two neutrino flavors, namely, the electron neutrino and the muon neutrino. Using a perturbation analysis, a general linear dispersion relation is derived in Sec. \ref{sec-linear} to show the coupling of MHD waves with the resonant neutrino beam and the resonant neutrino flavor oscillations. The instability growth rates for both the fast and slow magnetosonic waves are obtained and analyzed numerically in Sec. \ref{sec-instab}. Finally, Sec. \ref{sec-conclu} is left for concluding remarks. \section{Physical Model} \label{sec-model} We consider a homogeneous magnetized system composed of electrons and ions, as well as the neutrino beams of electron neutrinos and muon neutrinos. We also assume that the fluid descriptions for both the plasma electrons and ions and the neutrino beams are valid on length scales of the order of the electron skin depth and time scales of the order of the ion gyroperiod. In the NMHD description, the continuity and momentum equations for the MHD fluids read \cite{haas2017} \begin{equation} \frac{\partial \rho_m}{\partial t} + \nabla \cdot (\rho_m {\bf{U}})=0, \label{continuity-eq} \end{equation} \begin{equation} \frac{\partial {\bf{U}}} {\partial t} +{\bf{U}} \cdot \nabla{\bf{U}}= -V_s^2 \frac{\nabla \rho_m}{\rho_m} + \frac{(\nabla \times {\bf{B}})\times {\bf{B}}}{\mu_0 \rho_m}+\frac{F_\nu}{m_i}, \label{momentum-eq} \end{equation} where $\rho_m= m_en_e+m_in_i\approx nm_i$ (with $n\approx n_e\approx n_i$) is the mass density, ${\bf U}=(m_en_e{\bf u}_e+m_in_i{\bf u}_i)/(m_en_e+m_in_i)$ is the plasma velocity, $\mu_0$ is the permeability of free space, $V_s=\sqrt{k_BT_e/m_i}$ is the ion-acoustic velocity, and $F_{\nu}$ is the neutrino-plasma (electroweak) interaction force. Here, $m_{e(i)}$ denotes the electron (ion) mass, $n_{e(i)}$ the electron (ion) number density, $\mathbf{u}_{e(i)}$ the electron (ion) fluid velocity, and $k_B$ the Boltzmann constant.
In addition, the equation for the magnetic flux modified by the electroweak force is given by \begin{equation} \frac{\partial {\bf B}}{\partial t} = \nabla \times \left( {\bf U} \times {\bf B} -\frac{F_\nu}{e}\right), \label{B-eq} \end{equation} where $F_\nu=\sqrt{2} G_F ({\bf E}_\nu + {\bf U} \times {\bf B}_\nu)$ with $G_F$ denoting the Fermi coupling constant and $E_{\nu}~(B_{\nu})$ the neutrino electric (magnetic) field, given by, \begin{eqnarray} &&{\bf E}_\nu= -\nabla N_e -\frac{1}{c^2} \frac{\partial}{\partial t} (N_e {\bf v}_e),\\ &&{\bf B}_\nu= \frac{1}{c^2} \nabla \times (N_e {\bf v}_e). \end{eqnarray} \par For a coherent neutrino beam with an energy ${\cal E}_0$, the continuity equations for electron neutrino (with number density $N_e$, velocity $\mathbf{v}_e$) and muon neutrino (with number density $N_\mu$, velocity $\mathbf{v}_\mu$), respectively, are \cite{mendonca2014,haas2019} \begin{equation} \frac{\partial N_e}{\partial t} + \nabla \cdot (N_e {\bf v}_e) =\frac{1}{2} N \Omega_0 P_2, \label{electron-neutrino-continuity-eq} \end{equation} \begin{equation} \frac{\partial N_\mu}{\partial t} + \nabla \cdot (N_\mu {\bf v}_\mu) =-\frac{1}{2} N \Omega_0 P_2, \label{muon-neutrino-continuity-eq} \end{equation} where $P_2$ corresponds to the neutrino coherence in the flavor polarization vector ${\bf P}=(P_1, P_2, P_3)$, $N=N_e+N_\mu$ is the total neutrino fluid density, and $\Omega_0=\omega_0 \sin{(2\theta_0)}$. Here, $\omega_0= \delta{m^2} c^4/2 \hbar {\cal E}_0$ with $\delta{m^2}$ denoting the squared neutrino mass difference, $c$ the speed of light in vacuum, $\hbar$ the reduced Planck's constant, and $\theta_0$ the neutrino oscillation mixing angle. While the left-hand sides of Eqs. \eqref{electron-neutrino-continuity-eq} and \eqref{muon-neutrino-continuity-eq} involve the convective terms due to the flows of neutrinos into plasmas, the terms on the right-hand sides appear due to neutrino flavor oscillations along with the rates of changes of the electron- and muon-neutrino fluid densities. We also require the global neutrino fluid densities to be conserved, i.e., \begin{equation} \frac{d}{dt} \int (N_e+N_\mu) d^3 {\bf r}= - \int \nabla \cdot (N_e {\bf v}_e + N_\mu {\bf v}_\mu)d^3 {\bf r}=0. \label{neutrino-preserve-eq} \end{equation} Next, the electron neutrino and muon neutrino equations of motion are \begin{equation} \frac{\partial {\bf p}_e}{\partial t} + {\bf v}_e \cdot \nabla {\bf p}_e = -\frac{\sqrt{2} G_F}{m_i} \nabla \rho_m, \label{electro-neutrino-force-eq} \end{equation} \begin{equation} \frac{\partial {\bf p}_\mu}{\partial t} + {\bf v}_\mu \cdot \nabla {\bf p}_\mu =0, \label{muon-neutrino-force-eq} \end{equation} where ${\bf p}_e ={\cal{E}}_e {\bf v}_e/c^2$ and ${\bf p}_\mu ={\cal{E}}_\mu {\bf v}_\mu/c^2$ are the momenta of electron and muon neutrinos with ${\cal{E}}_{e,\mu}= \left(p_{e,\mu}^2c^2+m^2_{e,\mu}c^4\right)^{1/2}$ denoting the electron- and muon-neutrino energies, $m_{e,\mu}$ the electron (muon) neutrino mass, and $v_{e(\mu)}$ the electron (muon) neutrino velocity. 
\par To complete the description of the neutrino-plasma interactions, we require the time evolution equations of the components of the flavor polarization vector ${\bf P}=(P_1, P_2, P_3)$, given by \cite{mendonca2014}, \begin{equation} \frac{d P_1}{d t}=-\Omega(n_e)P_2, \label{P1-eq} \end{equation} \begin{equation} \frac{d P_2}{d t}=\Omega(n_e)P_1 -\Omega_0 P_3, \label{P2-eq} \end{equation} \begin{equation} \frac{d P_3}{d t}=\Omega_0 P_2, \label{P3-eq} \end{equation} where $\Omega(n_e)=\omega_0[\cos (2\theta_0) -\sqrt{2} G_F n_e/(\hbar \omega_0)]$. The total time derivatives appearing in Eqs. \eqref{P1-eq}-\eqref{P3-eq} should, in general, be different. However, for a mono-energetic neutrino beam, the velocity of each neutrino flavor can be assumed to be identical, so that $\mathbf{v}_e=\mathbf{v}_\mu=\mathbf{v}$. One can then consider the total time derivative as $d/dt\equiv \partial_t+\mathbf{v}\cdot\nabla$. Since we are interested in linear waves, the convective parts will be less important and can thus be disregarded in the analysis in Sec. \ref{sec-linear}. \section{Linear waves} \label{sec-linear} In order to obtain a general dispersion relation for NMHD waves, we Fourier analyze the system of Eqs. \eqref{continuity-eq}-\eqref{P3-eq} about the following equilibrium state: \begin{equation} \begin{split} &{\bf U}=0,~N_e=N_{e0},~N_\mu=N_{\mu 0},\\ &{\bf v}_e= {\bf v}_\mu ={\bf v}_0, ~N_0=N_{e0}+N_{\mu 0},\\ &P_1=\frac{\Omega_0}{\Omega_\nu},~ P_2=0,~ P_3=\frac{\Omega(n_0)}{\Omega_\nu}=\frac{N_{e0}-N_{\mu 0}}{N_0}, \end{split} \end{equation} where $\Omega_\nu =\sqrt{\Omega^2(n_0)+\Omega_0^2}$ is the eigenfrequency of the two-flavor neutrino oscillations and $n_0$ is the background number density of electrons and ions. \par Next, assuming MHD perturbations in the form of plane waves $\sim \exp [i({\bf k} \cdot {\bf r} -\omega t)]$ with wave vector $\mathbf{k}$ and wave frequency $\omega$, we obtain from Eqs. \eqref{continuity-eq} to \eqref{B-eq} the following expression for the perturbed velocity. \begin{eqnarray} &&\omega^2 \delta{\bf U} =(V_s^2+V_A^2)({\bf k} \cdot \delta{\bf U}) {\bf k} + ({\bf k} \cdot {\bf V}_A) \left\lbrace ({\bf k} \cdot {\bf V}_A) \delta{\bf U} \right. \notag\\ && \left. - (\delta{\bf U} \cdot {\bf V}_A) {\bf k} -({\bf k} \cdot \delta{\bf U}) {\bf V}_A \right\rbrace + \frac{\sqrt{2} G_F}{m_i c^2} \omega \left[(c^2 {\bf k} -\omega {\bf v}_0) \delta N_{e} \right. \notag\\ && \left. -\omega N_{e0} \delta{\bf v}_{e}\right],\label{dispersion} \end{eqnarray} where $\delta{f}$ denotes the perturbation of a physical quantity $f$. Also, from Eq. \eqref{electro-neutrino-force-eq}, we have \begin{multline} (\omega- {\bf k} \cdot {\bf v}_0)\, \delta {\bf p}_{e} = \frac{{\cal{E}}_0}{c^2}(\omega- {\bf k} \cdot {\bf v}_0) \left[\delta{\bf v}_{e} +\left(1-\frac{v_0^2}{c^2} \right)^{-1} \frac{{\bf v}_0 \cdot \delta{\bf v}_{e} }{c^2} {\bf v}_0\right] \\ = \sqrt{2} G_F \frac{{\bf k}}{m_i} \delta \rho_{m}. \end{multline} So, for nonrelativistic fluid flow with $v_0\ll c$, we get \begin{multline} \delta {\bf v}_{e} = \frac{\sqrt{2} G_F}{{\cal{E}}_0 (\omega-{\bf k} \cdot {\bf v}_0)} \left[ \frac{c^2 {\bf k} \delta \rho_{m}}{m_i} \right.\\ \left.-\left( \frac{{\bf k} \cdot {\bf v}_0\, \delta \rho_{m}}{m_i} -\frac{n_0 \omega}{c^2} {\bf v}_0 \cdot \delta {\bf U}\right) \right]. \label{ve-eq} \end{multline} Using the perturbed form of Eq. \eqref{continuity-eq}, we can rewrite Eq.
\eqref{ve-eq} as \begin{equation} \delta{\bf v}_{e} = \frac{\sqrt{2} G_F}{{\cal{E}}_0 (\omega-{\bf k} \cdot {\bf v}_0)} \frac{c^2 \rho_{m0}}{m_i \omega} ({\bf k} \cdot \delta{\bf U}) {\bf k}. \label{ve-final-eq} \end{equation} \par Next, from Eqs. \eqref{P1-eq}-\eqref{P3-eq} one obtains \begin{equation} \delta P_2= -i \frac{\sqrt{2} \Omega_0 \omega G_F}{(\omega^2-\Omega_\nu^2) m_i \hbar \Omega_\nu} \delta\rho_{m}, \label{derived-P2-eq} \end{equation} and using this expression for $\delta P_2$, we obtain from Eq. \eqref{electron-neutrino-continuity-eq} the following equation for the perturbed density. \begin{eqnarray} && \delta N_{e}=N_{e0} \frac{\sqrt{2}G_F c^2 \rho_{m0}}{{\cal{E}}_0 (\omega- {\bf k} \cdot {\bf v}_0)^2 m_i \omega} k^2 ({\bf k} \cdot \delta{\bf U}) \notag\\ && +\frac{\sqrt{2} G_F \Omega_0^2 N_0 \rho_{m0}}{2 m_i \hbar \Omega_\nu (\omega- {\bf k} \cdot {\bf v}_0)(\omega^2-\Omega_\nu^2)} ({\bf k} \cdot \delta {\bf U}). \label{derived-Ne2-eq} \end{eqnarray} \par The expressions for the perturbed density and velocity of the electron neutrinos [Eqs. \eqref{ve-final-eq} and \eqref{derived-Ne2-eq}], together with the continuity equation \eqref{electron-neutrino-continuity-eq}, can then be applied to the expression for the perturbed neutrino fluid force to show that its magnitude is enhanced for $\omega\approx {\bf k} \cdot {\bf v}_0$ and/or $\omega\approx \Omega_\nu$. It follows that the MHD waves can have resonant-like interactions with the streaming neutrino beam and the neutrino flavor oscillations, for which energy exchange can take place, leading to MHD instability. Although the resonant contribution of the neutrino beam is known from Ref. \cite{haas2017}, we consider both resonances in order to study the relative influence of the neutrino flavor oscillations on the MHD instability. Finally, from Eqs. \eqref{dispersion}, \eqref{ve-final-eq}, and \eqref{derived-Ne2-eq}, we obtain the following dispersion relation. \begin{eqnarray} && \omega^2 \delta {\bf U}= \left\lbrace V_s^2+ V_A^2+V_N^2 \frac{c^2 k^2 -\omega^2}{(\omega- {\bf k} \cdot {\bf v}_0)^2} \right\rbrace ({\bf k} \cdot \delta {\bf U}) {\bf k} \notag\\ && +({\bf k} \cdot {\bf V}_A)\left\lbrace ({\bf k} \cdot {\bf V}_A) \delta {\bf U} -(\delta {\bf U} \cdot {\bf V}_A) {\bf k} - ({\bf k} \cdot \delta {\bf U}) {\bf V}_A\right\rbrace \notag\\ && +V_\text{osc}^2 \frac{\Omega_0^2 \omega {\cal{E}}_0(c^2k^2 - \omega ({\bf k} \cdot {\bf v}_0))}{2c^2k^2 \hbar \Omega_\nu(\omega-{\bf k} \cdot {\bf v}_0)(\omega^2-\Omega_\nu^2)} {\bf k}\, ({\bf k} \cdot \delta {\bf U}). \label{dispersion-relation} \end{eqnarray} Here, ${V}_A= { B}_0/(\mu_0 \rho_{m0})^{1/2}$ is the Alfv{\'e}n velocity, $V_N = \left[2 G_F^2 \rho_{m0} N_{e0}/(m_i^2 {\cal{E}}_0)\right]^{1/2}$ is the velocity associated with the electron-neutrino beam, and $V_\text{osc} =\left[2 G_F^2 \rho_{m0} N_0/(m_i^2 {\cal{E}}_0)\right]^{1/2}$ is that associated with both the electron- and muon-neutrino flavor oscillations. Thus, the terms proportional to $V_N^2$ and $V_\text{osc}^2$ in Eq. \eqref{dispersion-relation} appear due to the neutrino beam effect (electron neutrinos) and the two-flavor (both electron-neutrino and muon-neutrino) oscillations, respectively. As noted before and as is clear from Eq.
\eqref{dispersion-relation}, in addition to the phase-velocity resonance (at the neutrino beam velocity, i.e., $\omega\approx {\bf k} \cdot {\bf v}_0$), a resonance can also occur at the frequency of the two-flavor oscillations (i.e., $\omega\approx\Omega_{\nu}$) due to the coupling between the MHD waves and the two neutrino flavor oscillations. Furthermore, disregarding the contribution of the neutrino flavor oscillations in Eq. \eqref{dispersion-relation}, one recovers the same dispersion relation as in Ref. \cite{haas2017}. Thus, the dispersion equation \eqref{dispersion-relation} generalizes the previous theory with the effects of neutrino flavor oscillations. \par We note that the adiabatic sound speed is also modified by the effects of the neutrino flavor oscillations. Thus, defining $\tilde{V}_s^2$ by \begin{eqnarray} &&\tilde{V}_s^2(\omega,{\bf k}) =V_s^2 + V_N^2 \frac{c^2 k^2 -\omega^2}{(\omega- {\bf k} \cdot {\bf v}_0)^2} \notag\\ && +V_\text{osc}^2 \frac{\Omega_0^2 \omega {\cal{E}}_0(c^2k^2 - \omega ({\bf k} \cdot {\bf v}_0))}{2c^2k^2 \hbar \Omega_\nu(\omega-{\bf k} \cdot {\bf v}_0)(\omega^2-\Omega_\nu^2)}, \label{VS-eq} \end{eqnarray} Eq. \eqref{dispersion-relation} can be recast as \begin{eqnarray} &&\omega^2 \delta {\bf U}= (V_A^2+\tilde{V}_s^2) ({\bf k} \cdot \delta {\bf U}) {\bf k} +({\bf k} \cdot {\bf V}_A)\left\lbrace ({\bf k} \cdot {\bf V}_A) \delta {\bf U} \right. \notag\\ && \left. -(\delta {\bf U} \cdot {\bf V}_A) {\bf k} - ({\bf k} \cdot \delta {\bf U}) {\bf V}_A\right\rbrace. \label{dispersion-reduced} \end{eqnarray} \par In what follows, we consider wave propagation at an arbitrary angle $\theta$ with respect to the constant magnetic field ${\bf B}_0=B_0\hat{z}$ and assume, without loss of generality, that the wave vector ${\bf k}$ lies in the $xz$-plane. Thus, equating the coefficient determinant of the homogeneous system \eqref{dispersion-reduced} for the components of $\delta {\bf U}$ to zero, we obtain the following linear dispersion relation for the coupling of MHD waves with the neutrino beam and the neutrino flavor oscillations. \begin{eqnarray} && (\omega^2-k^2 V_A^2 \cos^2\theta)\left[\omega^4 -k^2(V_A^2+\tilde{V}_s^2 )\omega^2 \right. \notag\\ && \left. +k^4 V_A^2 \tilde{V}_s^2 \cos^2\theta\right]=0. \label{final-dispersion-relation} \end{eqnarray} From Eq. \eqref{final-dispersion-relation}, it is evident that the first factor, when equated to zero, gives the dispersion relation for oblique Alfv{\'e}n waves, i.e., $\omega=k V_A \cos\theta$. This wave mode is influenced neither by the neutrino beam nor by the neutrino flavor oscillations. So, we focus on the second factor, which gives the following dispersion relation. \begin{equation} \omega^4 -k^2(V_A^2+\tilde{V}_s^2 )\omega^2+k^4 V_A^2 \tilde{V}_s^2 \cos^2\theta=0. \label{eq-disp} \end{equation} Equation \eqref{eq-disp} reveals the coupling of the oblique magnetosonic waves with the neutrino beam and the neutrino two-flavor oscillations. Note that for wave propagation perpendicular to the magnetic field $(\theta=\pi/2)$, the typical magnetosonic mode is recovered \cite{haas2016}, which is, however, modified by the influence of the neutrino flavor oscillations mediated through the term proportional to $V_\text{osc}^2$. Furthermore, due to the smallness of the Fermi constant $G_F$, and hence of $V_N^2$ and $V_\text{osc}^2$, the contributions from the neutrino beam and the neutrino flavor oscillations are typically small. So, they can be considered as small perturbations to the squared acoustic speed $V_s^2$.
Physically, these perturbations, as they develop in the resonant interactions of MHD waves with the streaming neutrino beam and the neutrino flavor oscillations, may lead to instabilities due to the energy gained from the neutrinos that are radiated during the core collapse of massive stars in supernova explosions. In Sec. \ref{sec-instab}, we investigate the qualitative features of these instabilities in detail. \section{Instabilities} \label{sec-instab} To study the instabilities of oblique magnetosonic waves, we rewrite the dispersion equation \eqref{eq-disp} as \begin{eqnarray} &&\omega^4 -k^2(V_A^2+ V_s^2)\omega^2 +k^4V_A^2 V_s^2 \cos^2\theta \notag\\ &&= V_N^2 k^2 \frac{\left(c^2 k^2 -\left({\bf k} \cdot {\bf v}_0\right)^2\right)\left(\omega^2 -k^2V_A^2 \cos^2\theta\right)}{\left(\omega- {\bf k} \cdot {\bf v}_0\right)^2}\notag\\ && +V_\text{osc}^2 \frac{\Omega_0^2 \omega {\cal{E}}_0\left(c^2k^2 - \omega \left({\bf k} \cdot {\bf v}_0\right)\right)\left(\omega^2 -k^2V_A^2 \cos^2\theta\right)}{2c^2k^2 \hbar \Omega_\nu\left(\omega-{\bf k} \cdot {\bf v}_0\right)\left(\omega^2-\Omega_\nu^2\right)}. \label{neutrino-dispersion} \end{eqnarray} Since the influences of the streaming neutrino beam and the flavor oscillations on the instability growth rates are of our prime interest, we assume \begin{equation} \omega=\tilde{\Omega} +\delta \omega,~~\text{with}~~|\delta \omega|\ll\tilde{\Omega}, \label{omega-eq} \end{equation} together with the double resonance condition \begin{equation} \omega=\Omega_\nu \approx\tilde{\Omega}={\bf k}\cdot {\bf v}_0, \label{eq-reso} \end{equation} where $\tilde{\Omega}$ is a solution of the following dispersion equation (in the absence of the neutrino effects) \begin{equation} \omega^4 -k^2(V_A^2+ V_s^2)\omega^2 +k^4V_A^2 V_s^2 \cos^2\theta=0. \label{omega-new-eq} \end{equation} From Eq. \eqref{omega-new-eq}, the frequencies of the fast (with the suffix $+$) and slow (with the suffix $-$) classical magnetosonic modes can be obtained as \begin{equation} \omega= \tilde{\Omega}_{\pm} =k V_{\pm}, \label{eq-Omegapm} \end{equation} where $V_\pm$ are the corresponding phase velocities, given by \begin{equation} V_\pm= \left[ \frac{1}{2}\left( V_A^2+V_s^2 \pm \sqrt{(V_A^2-V_s^2)^2 +4V_A^2 V_s^2 \sin^2 \theta}\right) \right] ^{1/2}. \label{V-eq} \end{equation} Thus, from Eqs. \eqref{neutrino-dispersion} to \eqref{eq-reso} and using the fact that $V_\pm\ll c$ for non-relativistic fluid flow, we obtain \begin{eqnarray} &&(\delta \omega)^3 \approx \pm\left[ \frac{V_N^2 c^2 k^3 (V_\pm^2 -V_A^2 \cos^2 \theta)}{2 V_\pm \sqrt{(V_A^2-V_s^2)^2 +4V_A^2 V_s^2 \sin^2 \theta}} \right. \notag\\ && \left. +\frac{G_F^2 \rho_{m0} N_0 \Omega_0^2(V_\pm^2 -V_A^2 \cos^2 \theta)}{4 V_\pm^2 \hbar m_i^2 \sqrt{(V_A^2-V_s^2)^2 +4V_A^2 V_s^2 \sin^2 \theta}}\right].
\label{delta-omega-eq} \end{eqnarray} The instability growth rate $\gamma= \Im(\delta \omega)>0$ is then obtained as \begin{equation} \gamma\equiv \gamma_\pm=\left[\left(\gamma^{\pm}_{\nu}\right)^3+\left(\gamma^{\pm}_\text{osc}\right)^3\right]^{1/3}, \end{equation} where we have defined the dimensionless parameter $\Delta=V_N^2/c^2$, and the expressions for $\gamma^{\pm}_{\nu}$ and $\gamma^{\pm}_\text{osc}$, respectively, are \begin{multline} \gamma^{\pm}_{\nu}=\frac{\sqrt{3} k}{2^{4/3}} \left[ \frac{\Delta c^4 |V_\pm^2-V_A^2 \cos^2\theta |}{ V_\pm \sqrt{(V_A^2-V_s^2)^2 +4V_A^2 V_s^2 \sin^2 \theta}}\right] ^{1/3}, \\ \gamma^{\pm}_\text{osc}=\frac{\sqrt{3} }{2^{4/3}}\left[\frac{G_F^2 \rho_{m0} N_0 \Omega_0^2|V_\pm^2-V_A^2 \cos^2\theta |}{2 V_\pm^2 \hbar m_i^2 \sqrt{(V_A^2-V_s^2)^2 +4V_A^2 V_s^2 \sin^2 \theta}}\right]^{1/3}. \label{eq-gamma} \end{multline} Here, we again note that while the quantity $\gamma^{\pm}_{\nu}$ is associated with the interactions of MHD waves with the streaming (with velocity $\mathbf{v}_0$) neutrino beam, the quantity $\gamma^{\pm}_\text{osc}$ appears due to the coupling of MHD waves with the neutrino two-flavor oscillations (with frequency $\Omega_\nu$). In the absence of the latter, one recovers exactly the same result as in Ref. \cite{haas2017}. Furthermore, the conditions for weak perturbations due to the neutrino beam and the neutrino flavor oscillations can be validated, i.e., $\gamma^{\pm}_{\nu}/\Omega_\nu\ll1$ and $\gamma^{\pm}_\text{osc}/\Omega_\nu\ll1$, since the terms in the square brackets in Eq. \eqref{eq-gamma} can be made less than unity after an appropriate normalization. \par The relative influence of the neutrino flavor oscillations on the growth rate of the instabilities can be quantified by the ratio \begin{equation} \frac{\gamma^{\pm}_\text{osc}}{\gamma^{\pm}_{\nu}}=\frac{1}{2^{4/3}k}\left[\frac{ \left(\delta{m^2}c^3\right)^2 \sin^2(2\theta_0)}{\hbar^3{\cal E}_0 V_\pm }\right]^{1/3}\left(\frac{N_0}{N_{e0}}\right)^{1/3}. \label{eq-ratio} \end{equation} From Eq. \eqref{eq-ratio}, it is to be noted that while the quantity $\gamma^{\pm}_{\nu}$ explicitly depends on the wave number $k$, $\gamma^{\pm}_\text{osc}$ is independent of $k$, which means that the growth rate ratio is inversely proportional to $k$. Thus, it follows that in regimes of sufficiently large wave numbers (provided that the wavelength is not too small) of magnetosonic perturbations, the neutrino beam contribution to the MHD instability can be larger than that of the neutrino flavor oscillations. In contrast, the two-flavor oscillations can dominate over the neutrino beam-plasma interactions if the initial muon-neutrino beam density $(N_{\mu 0})$ is much higher than that of the electron neutrinos $(N_{e0})$, or if the streaming neutrino energy ${\cal E}_0$ is relatively low. Furthermore, depending on the angle of propagation $\theta$, a relatively low magnetic field strength and/or low thermal energies of the MHD fluids can enhance the neutrino flavor oscillation correction. \begin{figure*}[ht] \centering \includegraphics[height=2.8in,width=6.5in]{growth1ab} \includegraphics[height=2.8in,width=6.5in]{growth1cd} \caption{ Instability growth rates of the fast (with $+$) and slow (with $-$) oblique magnetosonic waves are shown against the propagation angle $\theta$ for two different magnetic field strengths: (i) $B_0=5\times10^6$ T [Subplots (a) and (b)] and (ii) $B_0=2\times10^7$ T [Subplots (c) and (d)].
The solid and dashed lines correspond to the growth rates when only the neutrino beam effect is present and when both the neutrino beam and the neutrino flavor oscillation effects are present, respectively. The value $B_0=2\times10^7$ T is the critical value at which the growth rates exhibit (almost) the opposite trend to those in the upper subplots. } \label{fig1} \end{figure*} \par In order to examine the qualitative features, we numerically study the growth rates of the instability for both the fast and slow magnetosonic waves, as well as the relative influence of the flavor oscillations. To this end, we consider parameters that are relevant for the type-II core-collapse supernova SN1987A \cite{haas2017}. In such scenarios, we can expect a fluid flow of $10^{58}$ neutrinos of all flavors and a streaming energy ${\cal E}_0\sim10$--$15$ MeV. There may also exist a strong magnetic field $B_0\sim 10^6$--$10^8$ T and high neutrino beam densities, $N_0\sim10^{34}$--$10^{37}$ m$^{-3}$. We, however, consider $n_0=10^{34}$ m$^{-3}$, $N_{e0}=10^{37}$ m$^{-3}$, and two different values of each of $N_0$ and $B_0$, namely $N_0=5\times10^{37}$ m$^{-3}$, $10^{38}$ m$^{-3}$ and $B_0=5 \times 10^6$ T, $2 \times 10^7$ T. Furthermore, $T_e=0.1$ MeV, $k=10^2$ m$^{-1}$, $\delta{m^2} c^4=3\times 10^{-6}~\text{eV}^2$, $\sin (2\theta_0)=10^{-1}$, ${\cal{E}}_0=10$ MeV, and $G_F=1.45\times10^{-62}$ J m$^3$. With these parameters, the non-relativistic conditions $V_A/c\ll1$ and $V_s/c\ll1$, as well as the simplifying assumption \cite{haas2016} $ck/\omega_{pe}\ll\omega_{pe}/\omega_{ce}$, where $\omega_{pe}$ and $\omega_{ce}$ are, respectively, the electron plasma and electron cyclotron frequencies, are also satisfied for the present model. \par Figure \ref{fig1} displays the growth rates of both the fast and slow magnetosonic modes when only the influence of the neutrino beam is present and when both the neutrino beam and the neutrino flavor oscillations are present. These growth rates are also plotted for two different magnetic field strengths. It is found that in the regime of a relatively low magnetic field [Subplots (a) and (b)], the contribution of the two-flavor oscillations to the growth rate becomes significant. It enhances the growth rates of both the fast and slow magnetosonic modes. In this case, the growth rate for the fast mode exhibits an inverted bell-shaped curve, while that for the slow mode displays a symmetric double hump even in the absence of the flavor oscillations. Consequently, the growth rate of the fast mode reaches its minimum at $\theta=\pi/2$ and its maximum at $\theta=0$ and $\theta=\pi$, whereas that for the slow mode has two cut-offs at $\theta=0$ and $\pi$, and maxima at $\theta=\pi/4$ and $3\pi/4$. \par From Fig. \ref{fig1}, the growth rate can be estimated as $\gamma_{+}/\Omega_{0}\sim10^{-3}$ for $\Omega_0\sim10^4$ rad/s. Since $\Omega_{\nu+}\sim10^8$ rad/s for a small value of $k\sim10^2$, we have $\gamma_{+}/\Omega_{\nu+}\sim10^{-7}\ll1$, i.e., the weak-beam and weak-flavor-oscillation assumptions hold for the fast mode in the entire regime of $\theta$. For the slow mode, a similar estimate applies, except at $\theta=\pi/2$ where $\Omega_{\nu-}=0$ and $\gamma_{-}$ is not defined. Thus, in contrast to the fast magnetosonic mode, which is likely to be more unstable for parallel and anti-parallel propagation, the slow mode becomes unstable for propagation at angles $\theta=\pi/4$ and $3\pi/4$.
Such distinctive features of the instability growth rates of the magnetosonic modes have not been reported in the earlier work \cite{haas2017}. \par On the other hand, when the magnetic field strength is relatively high, features similar to those in Ref. \cite{haas2017} (but in contrast to the previous case with a low magnetic field) are noticed. The growth rate for the fast mode is found to be slightly increased by the influence of the flavor oscillations. This increase is, however, pronounced except for parallel and anti-parallel propagation. The weak-perturbation assumptions, as stated before, still hold for this mode, and it displays a stronger instability for propagation nearly perpendicular to the magnetic field. The effects of the flavor oscillations on the instability growth rates are rather significant for the slow magnetosonic mode over the entire domain of the propagation angle, except at $\theta=\pi/2$ where $\Omega_{\nu-}=0$ and $\gamma_{-}$ is not defined. However, close to this value of $\theta$, the growth rate for the slow mode tends to reach its minimum value, implying the stability of the slow magnetosonic mode there. The instabilities are rather stronger for parallel and anti-parallel propagation. From Fig. \ref{fig1}, it is also noted that $\gamma_{\pm}\sim10$ s$^{-1}$ (i.e., $1/\gamma_{\pm}\sim10^{-1}$ s) and that $\gamma_{\pm}$ becomes higher (or $1/\gamma_{\pm}$ becomes lower) under the influence of the flavor oscillations. This means that the MHD instability develops faster in the presence of the flavor oscillations than in their absence. Also, this typical instability time is shorter than the characteristic time scale of a supernova explosion, $\sim1$--$100$ s. Thus, the neutrino flavor oscillations have a remarkable impact on the instability of MHD waves in neutrino-beam-driven magnetoplasmas. \par The relative influence of the neutrino flavor oscillations can be qualitatively analyzed in two different density regimes and with variations of the wave number. It is evident from Fig. \ref{fig2} that, as one approaches the domain of higher wave numbers, the relative influence of the flavor oscillations on the instability growth rate becomes less important. However, it can be significant for both the slow and fast magnetosonic modes in higher density regimes, provided the neutrino density is limited to below a critical value, since at still higher densities the relativistic degeneracy of electrons comes into the picture \cite{haas2019}, which is not considered in the present model. \begin{figure*}[ht] \centering \includegraphics[height=2.8in,width=6.5in]{growth2} \caption{The relative influences of the neutrino two-flavor oscillations $(\gamma^{\pm}_\text{osc})$ and the neutrino streaming beam $(\gamma^{\pm}_{\nu})$ on the growth rates of the instabilities of oblique magnetosonic waves are shown against the wave number for two different values of the total neutrino density $N_0$, as indicated in the legend. The fixed parameter values are $B_0=5\times10^6$ T and $\theta=\pi/4$; the others are as in the text. } \label{fig2} \end{figure*} \section{Conclusion} \label{sec-conclu} We have studied the influences of neutrino two-flavor oscillations on the propagation of MHD waves and on the instabilities that are driven by streaming neutrino beams. Special emphasis is given to analyzing the characteristics of fast and slow magnetosonic waves propagating in an arbitrary direction with respect to the static magnetic field.
In this way, the previous theory of neutrino MHD waves \cite{haas2017} is generalized with the inclusion of neutrino flavor oscillations in the wave dynamics. Using the neutrino MHD equations and assuming weak neutrino beam-plasma interactions, as well as weak coupling between MHD waves and two-flavor oscillations, a general dispersion relation for MHD waves is derived which accounts for the contributions from the resonant-like interactions of MHD waves with both the streaming neutrino beam and the flavor oscillations. The growth rates of the instabilities, generated by the energy exchange between the neutrinos and the wave, are obtained and analyzed with parameters relevant for type-II supernova explosions due to the core collapse of massive stars. It is found that the relative influence of the flavor oscillations can be significant in the regimes of high neutrino beam densities and/or relatively low magnetic fields, provided the wavelength is moderate or long. Here, one must restrict the neutrino beam density to some limit; otherwise, in high density regimes, where relativistic degeneracy effects come into the picture, one must deal with a relativistic NMHD model, which is, however, a project for our future work. Furthermore, in the regimes of two different magnetic fields, the growth rate profiles exhibit almost opposite characters, implying that the instabilities can be strong enough not only for parallel and perpendicular propagation of waves but also for other directions of propagation with $\theta=\pi/4$ and $\theta=3\pi/4$. \par To conclude, since the growth rate of instability becomes higher (or its inverse becomes lower, $\lesssim10^{-1}$ s) due to the effects of the neutrino flavor oscillations, the MHD instability can occur within a shorter time than in the absence of the flavor oscillations. Consequently, this instability should be fast enough to provoke the neutrino radiation and two-flavor neutrino mixing in core-collapse supernovae, since the typical time scale for a supernova explosion is $\sim1$--$100$ s. Although the present model is restricted to two-flavor neutrino oscillations, the extension to three flavor states is also important; it is, however, left for future studies. \section*{Acknowledgments} D. Chatterjee acknowledges support from the Science and Engineering Research Board (SERB) for a national postdoctoral fellowship (NPDF) with sanction order No. PDF/2020/002209 dated December 31, 2020. \bibliographystyle{apsrev4-1}
\section{Introduction} Axel Thue, in \cite{Thue1909}, showed that the equation \begin{align}\label{eq:generalThueEq} F(x,y) = h \end{align} has only finitely many integer pair solutions when $F(x,y) \in \mathbb{Z}[x,y]$ is an irreducible (over $\mathbb{Z}$) homogeneous polynomial of degree $n \geq 3$ and $h \in \mathbb{Z}$ (for the purposes of this paper, only integer pair solutions will be considered). A polynomial which is homogeneous in two variables is called a \emph{binary form} and the equation \eqref{eq:generalThueEq} is called \emph{Thue's equation}. As a consequence, the equation \begin{align}\label{eq:thueInequality} |F(x,y)| \leq h \end{align} also has finitely many solutions. The equation \eqref{eq:thueInequality} is called \emph{Thue's Inequality}. Note that we are using all of the stated hypotheses here: if $F(x,y)$ has a linear factor for instance (say $rx - sy$ divides $F(x,y)$ for some relatively prime $r,s \in \mathbb{Z}$), then there are infinitely many integer pair solutions to $F(x,y) = 0$---namely, the integral multiples of the pair $(s,r)$---and hence, there are infinitely many integer pair solutions to \eqref{eq:thueInequality}. If $n = \deg(F) \leq 2$, then the family of Pell equations shows that there can be infinitely many solutions to \eqref{eq:thueInequality} even when $F(x,y)$ is irreducible. Thue's conclusion does hold if every irreducible factor of $F(x,y)$ has degree at least three, but we assume that $F(x,y)$ is irreducible for simplicity. Several natural questions arise, such as \begin{enumerate}[\hspace{0.75cm}1.] \item How many solutions are there to \eqref{eq:thueInequality}? \item How large are solutions to \eqref{eq:thueInequality}? \item On which features of $F$ and $h$ do the solutions to \eqref{eq:thueInequality} depend? \end{enumerate} This paper largely handles the first question, though of course the second and third questions are related. In particular, the number of nonzero summands of $F(x,y)$ significantly impacts the number of solutions to \eqref{eq:thueInequality}.\footnote{The rough reasoning for this is as follows: a solution $(p,q)$ to \eqref{eq:thueInequality} corresponds to a good rational approximation $p/q$ to a root of $f(X) := F(X,1)$. The only roots of $f(X)$ which ought to allow good rational approximations are the real roots of $f(X)$, and it is the number of nonzero summands of $f(X)$ that controls the number of real roots of $f(X)$, as seen in Lemma 1 of \cite{Schmidt1987}.} \begin{definition} If a polynomial has exactly two nonzero summands, it is called a \emph{binomial}; if it has exactly three nonzero summands, it is called a \emph{trinomial}; and if it has exactly four nonzero summands, it is called a \emph{tetranomial}. \end{definition} Since the number of nonzero summands plays such an important role, authors such as Bennett \cite{Bennett2001}, Evertse \cite{Evertse1982}, Grundman and Wisniewski \cite{Grundman2013}, Hyyr\"o \cite{Hyyroe1964}, Mueller \cite{Mueller1987}, Mueller and Schmidt \cite{Mueller1986}, and Thomas \cite{Thomas2000} have examined binomial, trinomial, and tetranomial Thue equations in hopes of getting a better handle on how the number of nonzero summands of $F(x,y)$ affects the number of solutions to Thue equations.\\ In this paper, we focus on improving explicit bounds for the number of solutions to the Thue equation \begin{align}\label{eq:thueEq} |F(x,y)| = 1 \end{align} in the particular case that $F(x,y)$ is a trinomial.
In this setting, Thomas, in \cite{Thomas2000}, showed that there are no more than $2v(n)w(n) + 8$ distinct integer pair solutions to $|F(x,y)| = 1$ when $F(x,y) \in \mathbb{Z}[x,y]$ is a trinomial irreducible binary form of degree $n \geq 3$ and $w(n)$ and $v(n)$ are piecewise defined as follows: \begin{align*} v(n) = \begin{cases} 3 & \text{if } n \text{ is odd}\\ 4 & \text{if } n \text{ is even} \end{cases} \end{align*} and \begin{table}[h!] \centering \begin{tabular}{| c || c | c | c | c | c | c | c | c | c |}\hline $n$ & 5\tablefootnote{There is an error in the proof of Lemma 4.1 in \cite{Thomas2000}: it is claimed that $\frac{b^t - 1}{b - 1} < b^t$ which is not the case for the choice of $b = 1.5$ when $n = 5$. Tracing this error through to its conclusion, the author believes that this is not correctable.} & 6 & 7 & 8 & 9 & 10--11 & 12--16 & 17--37 & $\geq 38$\\\hline $w(n)$ & $27^\dag$ & 16 & 13 & 11 & 9 & 8 & 7 & 6 & 5\\\hline \end{tabular} \end{table} We are able to improve the bounds that Thomas provides and we have the following theorem. \begin{theorem}\label{totalTrinomialSolutions} Let $F(x,y) = h_nx^n + h_kx^ky^{n - k} + h_0y^n$ where $h_n,h_k,h_0,n,k \in \mathbb{Z}$ with $0 < k < n$. Suppose that $F(x,y)$ is irreducible over $\mathbb{Z}[x,y]$ and $n \geq 6$. Then there are at most $2v(n)z(n) + 8$ distinct integer pair solutions to the equation $|F(x,y)| = 1$ where $v(n) = 3$ if $n$ is odd, $v(n) = 4$ if $n$ is even, and $z(n)$ is defined by the following table. \begin{center} \begin{tabular}{| c || c | c | c | c | c | c | c | c | c | c |}\hline $n$ & $6$ & $7$ & $8$ & $9$ & $10$--$11$ & $12$--$16$ & $17$--$38$ & $39$--$218$ & $\geq 219$\\\hline $z(n)$ & $15$ & $12$ & $11$ & $9$ & $8$ & $7$ & $6$ & $5$ & $4$\\\hline \end{tabular} \end{center} \end{theorem} This result is primarily derived from an improvement in efficiency to a counting technique associated to what is known as the gap principle (see Lemma \ref{GPImprovement}). \section{Counting With Gaps} The main technical accomplishment of this paper is the following version of a counting technique often used in conjunction with the ``gap principle.'' \begin{lemma} \label{GPImprovement} Suppose that $L, M, T, p,y_0,\dots,y_\ell \in \mathbb{R}_{>0}$ satisfy the following conditions: \begin{enumerate} \item $L \leq y_0 \leq \dots \leq y_\ell \leq M$ \item $p > 2$ \item $L^{p-2} > T$ \item $y_{i+1} \geq \inv{T}y_i^{p-1}$ for each $0 \leq i < \ell$ \end{enumerate} Then \[\ell \leq \frac{\log\left[\frac{\log(MT^{-1/(p-2)})}{\log\left(LT^{-1/(p-2)}\right)}\right]}{\log(p-1)}.\] \end{lemma} This lemma is comparable to Lemma 1 in \cite{Saradha2017}. However, by fixing any $L> 0$, $p > 2$, $\ell \in \mathbb{Z}_{>0}$, and $0 < T < L^{p-2}$, then setting $y_0 = L$, $y_i = T^{-1}y_{i-1}^{p-1}$, and $M = y_\ell$, one can see that this upper bound is sharp where the upper bound in Lemma 1 of \cite{Saradha2017} is not. 
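As a quick numerical illustration of this sharpness claim (an illustrative sketch only, not part of the formal argument), one can iterate the equality case of condition 4 and compare $\ell$ with the stated bound:
\begin{verbatim}
import math

# Equality case of condition 4: y_{i+1} = y_i^(p-1)/T, starting from y_0 = L.
L, p, T, ell = 2.0, 4.0, 1.5, 5      # any L > 0, p > 2, 0 < T < L^(p-2)
y = L
for _ in range(ell):
    y = y ** (p - 1) / T
M = y

bound = math.log(math.log(M * T ** (-1 / (p - 2))) /
                 math.log(L * T ** (-1 / (p - 2)))) / math.log(p - 1)
print(ell, bound)                    # both equal 5 (up to rounding error)
\end{verbatim}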
\begin{proof} By induction, we have \begin{align*} M &\geq y_\ell \geq \frac{y_{\ell - 1}^{p-1}}{T} \geq \frac{\left(\frac{y_{\ell-2}^{p-1}}{T}\right)^{p-1}}{T} = \frac{y_{\ell - 2}^{(p-1)^2}}{T \cdot T^{p-1}} \geq \cdots\\ &\cdots\geq \frac{y_0^{(p-1)^\ell}}{T^{\sum_{j=0}^{\ell-1} (p-1)^j}} = \frac{y_0^{(p-1)^\ell}}{T^{\frac{(p-1)^\ell - 1}{p-2}}} \geq \frac{L^{(p-1)^\ell}}{T^{\frac{(p-1)^\ell - 1}{p-2}}} \end{align*} and we multiply both sides of \[M \geq \frac{L^{(p-1)^\ell}}{T^{\frac{(p-1)^\ell - 1}{p-2}}}\] by $T^{-1/(p-2)}$ to get \[MT^{-1/(p-2)} \geq \left(LT^{-1/(p-2)}\right)^{(p-1)^\ell}.\] Taking a log on both sides (and using the fact that $LT^{-1/(p-2)} > 1$) yields \[\frac{\log\left(MT^{-1/(p-2)}\right)}{\log\left(LT^{-1/(p-2)}\right)} \geq (p-1)^\ell\] and taking logs again and using the fact that $p > 2$ yields the desired inequality. \end{proof} \section{The Trinomial Thue Equation} In this section, we follow Thomas in \cite{Thomas2000} for much of our reasoning. However, we use different notation: the parameters which Thomas calls $u$ and $v$, we call $a$ and $b$ (this aligns with similar notation used in \cite{Akhtari2020} for instance, and also makes clear the difference between these parameters---whose choice will depend on $n$---and the values $u_n$ to be defined later and $v(n)$ defined in Theorem \ref{totalTrinomialSolutions}). The parameters which Thomas calls $b$ and $b_0$, we will call $d$ and $d_0$ due to their relation to the degree of $F(x,y)$ and also to avoid conflict with the newly named $b$.\\ Throughout the remainder of this section, suppose that $F(x,y) = h_nx^n + h_kx^ky^{n-k} + h_0y^n$ where $h_n,h_k,h_0,n,k \in \mathbb{Z}$, $0 < k < n$, and $n \geq 6$. Suppose further that $F(x,y)$ is irreducible over $\mathbb{Z}[x,y]$. Let $H = \max(|h_n|, |h_k|, |h_0|)$ be the na\"ive height of $F(x,y)$. Any time we refer to a ``solution,'' we specifically mean a solution to equation \eqref{eq:thueEq} in $\mathbb{Z}^2$.\\ We will not give a sophisticated bound on the number of solutions $(p,q)$ with $|pq| \leq 1$ and we will consider $(p,q)$ and $(-p,-q)$ to be equivalent solutions, spurring the following definition. \begin{definition} A pair $(p,q) \in \mathbb{Z}^2$ is called \emph{regular} if $p \neq 0$, $q > 0$, and $|p| \neq q$. \end{definition} If there are $r$ regular solutions to \eqref{eq:thueEq}, then there will be at most $2r + 8$ distinct solutions since for every solution $(p,q)$ with $|pq| > 1$, either $(p,q)$ or $(-p,-q)$ is regular and there are at most 8 solutions with $|pq| \leq 1$. From this fact and Theorem \ref{trinomialGeneralCount} below, Theorem \ref{totalTrinomialSolutions} will follow. \begin{theorem}\label{trinomialGeneralCount} Equation \eqref{eq:thueEq} has at most $v(n)z(n)$ regular solutions where $v(n)$ and $z(n)$ are defined in Theorem \ref{totalTrinomialSolutions}. \end{theorem} More specifically, let $f(x) := F(x,1)$ and set $R_F$ to be the number of real roots of $f$. We also wish to include certain critical points, so we make the following definition: \begin{definition} A critical point $\tau \in \mathbb{R}$ of $g(x) \in \mathbb{R}[x]$ is \emph{proper} if there exists a neighborhood $U$ of $\tau$ for which $g''(x)g(x) > 0$ for all $x \in U \setminus\{\tau\}$. \end{definition} Now let $C_F$ to be the number of proper critical points of $f(x)$. Setting $N_F$ to be the number of regular solutions to \eqref{eq:thueEq}, we will show the following theorem. 
\begin{theorem}\label{trinomialExceptionalCount} Let $F(x,y)$ be a trinomial of degree $n \geq 6$. Then \[N_F \leq z(n)R_F + \ell(n)C_F\] where $\ell(n)$ is defined by the following table. \begin{table}[h!] \centering \begin{tabular}{| c || c | c | c |}\hline $n$ & $6$--$7$ & $8$ & $\geq 9$\\\hline $\ell(n)$ & $4$ & $3$ & $2$\\\hline \end{tabular} \end{table} \end{theorem} We first show that Theorem \ref{trinomialExceptionalCount} implies Theorem \ref{trinomialGeneralCount}. Since $\ell(n)$ is less than $z(n)$, we have that $z(n)R_F + \ell(n)C_F \leq z(n)(R_F + C_F)$, and one can check with calculus that $R_F + C_F \leq v(n)$, so we get Theorem \ref{trinomialGeneralCount}.\\ To prove Theorem \ref{trinomialExceptionalCount}, we need some additional setup. \begin{definition} For a polynomial $g(x) \in \mathbb{R}[x]$, an \emph{exceptional point} of $g$ is either a real root or a proper critical point of $g(x)$. \end{definition} Let $\mathcal{E}(f)$ be the set of exceptional points, $\tau_1 < \tau_2 < \dots < \tau_c$, of $f$. Note that there exist improper critical points $\eta_1 < \eta_2 < \dots < \eta_{c-1}$ so that $\tau_1 < \eta_1 < \tau_2 < \eta_2 < \dots < \eta_{c-1} < \tau_c$. Setting $\eta_0 = -\infty$ and $\eta_c = +\infty$, we can define $J_1 = (-\infty,\eta_1)$ and $J_i = [\eta_{i-1},\eta_i)$ for $2 \leq i \leq c$. \begin{definition} A real number $\rho$ \emph{belongs to} $\tau_i$ (and $\tau_i$ \emph{belongs to} $\rho$) if $\rho \in J_i$. \end{definition} Observe that a regular pair $(p,q) \in \mathbb{Z}^2$ with $q \neq 0$ satisfying \eqref{eq:thueEq} corresponds to a rational number $\frac{p}{q}$ satisfying \[\left|f\left(\frac{p}{q}\right)\right| = \frac{1}{q^n}.\] Moreover, because $|F(p,q)| = 1$ and $F(x,y) \in \mathbb{Z}[x,y]$, $p$ and $q$ must be relatively prime, so this correspondence is one to one. Hence, rather than counting solutions to \eqref{eq:thueEq}, we instead count rational solutions to \[\left|f\left(\frac{x}{y}\right)\right| = \frac{1}{y^n}.\] Thomas, in \cite{Thomas2000}, shows that the number of regular solutions $(p,q)$ of \eqref{eq:thueEq} for which there exists a critical point of $f(x)$, $\tau$, so that $\frac{p}{q}$ belongs to $\tau$ is no larger than $\ell(n)$ (see the completion of the proof of Thomas' Theorem 2.2, given after the statement of Theorem 7.1). So it only remains to show the following. \begin{lemma} The number of regular solutions, $(p,q)$, of \eqref{eq:thueEq} for which $\frac{p}{q}$ belongs to a real root of $f$ is no larger than $z(n)$. \end{lemma} By Theorem 2.2 in \cite{Thomas2000}, it suffices to show Lemma \ref{trinomialRealRootCount} for real roots of $f$ which are greater than 1. Then by Lemma 2.4 of \cite{Thomas2000}, we conclude that any regular $(p,q)$ for which $\frac{p}{q}$ belongs to an exceptional point greater than 1 has $p > q \geq 1$, and so we may assume that $p > q \geq 1$. Defining \[p_0(n) := \begin{cases} 3 & \text{if } 6 \leq n \leq 8 \\ 2 & \text{if } n \geq 9 \end{cases},\] we note that any regular solution $(p,q)$ with $p > q \geq 1$ must satisfy \begin{align} \label{eq:specialSolutionDef} p \geq p_0(n) \end{align} except possibly for $(2,1)$ when $n \leq 8$. \begin{definition} A solution $(p,q)$ to equation \eqref{eq:thueEq} with $p > q \geq 1$ and $p \geq p_0(n)$ is called \emph{special}. \end{definition} Since at most one solution is not special in the case that $6 \leq n \leq 8$, it suffices to show the following lemma, which will be our final reduction.
\begin{lemma}\label{trinomialRealRootCount} Let $\alpha > 1$ be a real root of $f(x)$. Then the number of special solutions $(p,q)$ of \eqref{eq:thueEq} for which $\frac{p}{q}$ belongs to $\alpha$ is no greater than $z(n) - 1$ if $6 \leq n \leq 8$ and no greater than $z(n)$ if $n \geq 9$. \end{lemma} To prove Lemma \ref{trinomialRealRootCount}, we split solutions into two cases: small and large. For $F(x,y)$ of degree $n$ and na\"ive height $H$, we choose a constant $Y_F = H^{\chi_n}\cdot e^{\pi_n}$ (for some values $\chi_n$ and $\pi_n$ to be specified later, but which depend only on $n$) and make the following definition. \begin{definition} A special solution $(p,q)$ to \eqref{eq:thueEq} is \emph{small} if $q \leq Y_F$ and is \emph{large} otherwise. \end{definition} \subsection{Small Special Solutions} One of Thomas' main achievements in \cite{Thomas2000} is the following theorem (numbered 4.1 in \cite{Thomas2000}): \begin{theorem}\label{qInc} Suppose that $F(x,y) \in \mathbb{Z}[x,y]$ is an irreducible (over $\mathbb{Z}$) trinomial binary form of degree $n \geq 5$ and na\"ive height $H$. Let $(p,q)$ and $(p',q')$ be special solutions to \eqref{eq:thueEq} which belong to a real root and suppose $q' > q$. Then \begin{align} q' > \frac{H^{d/n}p^{n^*-d}q^d}{K_d(n)} \end{align} where $n^* := \frac{n-2}{2}$, $d$ is chosen to be any real number satisfying $0 \leq d \leq n^*$, and \[K_d(n) := m_n(r_n(1+u_n))^d\] where \begin{align*} m_n = 2\sqrt{\frac{2n}{(n-1)(n-2)}}, \qquad r_n = (2.032)^{1/n}, \qquad u_n = \sqrt{\frac{2}{(n-2)p_0^n}}. \end{align*} \end{theorem} This approximation result will be helpful in proving the following proposition: \begin{proposition} Let $\alpha > 1$ be a real root of $f(x)$. There are no more than \begin{align} T:= \left\lfloor\max\left(\frac{\log\left(\frac{\chi_n}{\frac{d_0(d-1)+d}{n(d-1)}} + 1\right)}{\log d}, \frac{\log\left(\frac{\pi_n}{\log K_d(n)^{-\frac{1}{d-1}}Q_1} + 1\right)}{\log d}\right)\right\rfloor + 2\label{eq:TDef} \end{align} small special solutions $(p,q)$ where $p/q$ belongs to $\alpha$. \end{proposition} \begin{proof} If there are fewer than 2 small special solutions $(p,q)$ where $p/q$ belongs to $\alpha$, then we are done. Otherwise, suppose that there are exactly $t + 2$ small special solutions $(p,q)$ where $p/q$ belongs to $\alpha$ and $t \geq 0$. Label those $t + 2$ solutions as $(p_0,q_0),\ldots,(p_{t+1},q_{t+1})$ ordered so that \[1 \leq q_0 < q_1 <\ldots<q_{t+1} \leq Y_F\] (the strict inequality follows from the fact that the $\frac{p_i}{q_i}$ are principal convergents to $\alpha$ by Corollary 3.2 in \cite{Thomas2000}).\\ Choose numbers $d_0,d \in \mathbb{R}_{>0}$ and make the following definitions: \begin{align*} c_0 &:= n^*-d_0, & K_0 &:= K_{d_0}(n), & Q_1 &:= \frac{p_0(n)^{c_0}}{K_0}. \end{align*} In particular, choose $d$ and $d_0$ so that \begin{align} &0 \leq d_0 \leq n^*-1.4,\nonumber\\ &1 < d \leq n^*,\nonumber\\ &Q_1^{d-1} > \max(1,K_d(n)).\label{eq:Q1size} \end{align} In the proof of Proposition \ref{TZForLargeN} and in the computations in Section \ref{paramsForSmallN}, we show by example that choosing such $d$ and $d_0$ is possible.\\ First, observe that by Theorem \ref{qInc} applied to $q_1 > q_0 \geq 1$ (and using the observation that $p_0 \geq p_0(n)$), we get $q_1 > H^{d_0/n}Q_1$.\\ Here is where we depart from Thomas' method. We now aim to apply Lemma \ref{GPImprovement} to $H^{d_0/n}Q_1 < q_1 < q_2 < \dots < q_{t+1} \leq Y_F$.
In the notation of Lemma \ref{GPImprovement}, we have $L = H^{d_0/n}Q_1$, $M = Y_F$, $p = d+1$, and $T = \frac{K_d(n)}{H^{d/n}}$ (technically, $T$ should have a factor of $1/p^{n^*-d}$, but the result of Lemma \ref{GPImprovement} still holds if $T$ is replaced by something larger and moreover, we will choose $d = n^*$ later, rendering the difference moot). To apply the conclusion of Lemma \ref{GPImprovement}, we need to check that $p > 2$ (trivial based on the fact that $d$ is chosen to be greater than 1) and we need to check that $L^{p-2} > T$. But this occurs if and only if \[\left(H^{d_0/n}Q_1\right)^{d-1} > \frac{K_d(n)}{H^{d/n}},\] i.e. $H^{(d_0(d-1) + d)/n}Q_1^{d-1} > K_d(n)$, which is guaranteed by \eqref{eq:Q1size}.\\ Now applying Lemma \ref{GPImprovement} and using the fact that $t$ is an integer yields \begin{align*} t &\leq \Bigg\lfloor\frac{\log\left[\frac{\log\left(Y_F\left(\frac{K_d(n)}{H^{d/n}}\right)^{-\frac{1}{d-1}}\right)}{\log\left(H^{d_0/n}Q_1\left(\frac{K_d(n)}{H^{d/n}}\right)^{-\frac{1}{d-1}}\right)} \right]}{\log d}\Bigg\rfloor\\ &= \bigg\lfloor\frac{\log\left[\frac{\log\left(Y_FK_d(n)^{-\frac{1}{d-1}}H^{\frac{d}{n(d-1)}}\right)}{\log\left(K_d(n)^{-\frac{1}{d-1}}H^{\frac{d_0}{n}+\frac{d}{n(d-1)}}Q_1\right)}\right]}{\log d}\bigg\rfloor\\ &\leq \bigg\lfloor\frac{\log\left[\frac{\log(Y_F)}{\log\left(K_d(n)^{-\frac{1}{d-1}}H^{\frac{d_0}{n}+\frac{d}{n(d-1)}}Q_1\right)} + 1\right]}{\log d}\bigg\rfloor. \end{align*} where in the last step, we use the fact that $Q_1 > 1$. Now using the definition $Y_F = H^{\chi_n} \cdot e^{\pi_n}$, we have \begin{align*} t &\leq \Bigg\lfloor\frac{\log\left[\frac{\chi_n\log H + \pi_n}{\frac{d_0(d-1)+d}{n(d-1)}\log H + \log\left( K_d(n)^{-\frac{1}{d-1}}Q_1\right)} + 1\right]}{\log d}\Bigg\rfloor\\ &\leq \left\lfloor\max\left(\frac{\log\left(\frac{\chi_n}{\frac{d_0(d-1)+d}{n(d-1)}} + 1\right)}{\log d}, \frac{\log\left(\frac{\pi_n}{\log K_d(n)^{-\frac{1}{d-1}}Q_1} + 1\right)}{\log d}\right)\right\rfloor\\ &= T - 2 \end{align*} Therefore, the number of small special solutions $(p,q)$ for which $p/q$ belongs to $\alpha$ is $t + 2 \leq T$. \end{proof} \subsection{Large Special Solutions} Here we follow Thomas in \cite{Thomas2000} as he follows Bombieri-Schmidt in \cite{Bombieri1987}. If we choose numbers $a$ and $b$ satisfying \begin{align}\label{eq:UVReq} 0 < a < b < 1 - \sqrt{2 \cdot \frac{n + a^2}{n^2}} \end{align} then we can define \begin{align*} L &= \frac{\sqrt{2(n + a^2)}}{1 - b} & D &= \frac{L}{n - L} & A &= \frac{1}{a^2} & E &= \frac{1}{2(b^2 - a^2)}. \end{align*} Now we choose \begin{align} \chi_n &= D(A + 1) + 1\label{eq:chiDef}\\ \pi_n &= (D(4 + A) + 2)\log(2) + \frac{(D + 1)\log(n)}{2} + \frac{nAD}{2}.\label{eq:piDef} \end{align} With these choices of $\pi_n$ and $\chi_n$, we aim to apply Lemma 2 of \cite{Bombieri1987} and conclude the following: \begin{proposition}\label{largeSpecialSolutionBound} Suppose $\alpha > 1$ is a real root of $f(x)$. If $\chi_n \geq 2$ and $\pi_n \geq 5\log(2) + 2\log(n)$, then there are at most \begin{align}\label{eq:ZDef} Z :=\left\lfloor\frac{\log E + 2\log(n) - \log(L - 2)}{\log(n-1)}\right\rfloor + 2 \end{align} large special solutions belonging to $\alpha$. \end{proposition} The proof of this proposition relies on the two following lemmas: \begin{lemma}\label{largeYF} $Y_F$ as defined here is greater than or equal to $Y_0$ as defined in \cite{Bombieri1987}. 
\end{lemma} Lemma \ref{largeYF} ensures that any large solution in the sense of this paper is a large solution in the sense of Bombieri and Schmidt. \begin{lemma}\label{belongsToImpliesClosest} Suppose that $\chi_n \geq 2$ and $\pi_n \geq 5\log(2) + 2\log(n)$. If $\alpha > 1$ is a real root of $f(x)$ and $(p,q)$ is a large special solution of \eqref{eq:thueEq} so that $p/q$ belongs to $\alpha$, then $\alpha$ is the closest (complex) root of $f(x)$ to $p/q$. \end{lemma} Given an algebraic $\beta$, Lemma 2 of \cite{Bombieri1987} only counts the number of rational numbers which are nearest to $\beta$ out of all of the conjugates of $\beta$ and which form good approximations of $\beta$. If there were a real root $\alpha > 1$ of $f(x)$ and a large special solution $(p,q)$ of \eqref{eq:thueEq} for which $p/q$ belonged to $\alpha$ yet there was a root $\beta$ of $f(x)$ with $\beta$ closer to $p/q$ than $\alpha$, Lemma 2 of \cite{Bombieri1987} would not count $p/q$. However, Lemma \ref{belongsToImpliesClosest} confirms that this is not the case.\\ We first prove these two lemmas: \begin{proof}[Proof of Lemma \ref{largeYF}] $Y_0$ depends on the Mahler measure $M(F)$ rather than the height $H(F)$. These are related (for trinomials $F(x,y)$) by $M(F) \leq 3^{1/2}H(F)$, which follows from the fact that $M(F) \leq \ell_2(F)$ (see Lemma 1.6.7 in \cite{BombieriEnrico2001Hidg}). Now, using Thomas' notation, we have that \[Y_0 := \left(2C\right)^{\frac{1}{n-\lambda}}\left(4e^{A_1}\right)^{\frac{\lambda}{n - \lambda}}\] where \begin{align*} C &= (2n^{1/2} M(F))^n\\ t &= \sqrt{\frac{2}{n + a^2}}\\ A_1 &= \frac{t^2}{2 - nt^2}\left(\log M(F) + \frac{n}{2}\right)\\ \lambda &= \frac{2}{t(1 - b)}. \end{align*} Some of our other constants regularly appear in the estimation which shows $Y_0 < Y_F$ and we list them here for simplicity: \begin{align*} &A = \frac{1}{a^2} = \frac{2}{2(n + a^2) - 2n} = \frac{2}{n + a^2}\cdot\frac{1}{2-\frac{2n}{n + a^2}} = \frac{t^2}{2 - nt^2}\\ &L = \frac{\sqrt{2(n + a^2)}}{1-b} = \frac{2}{1 - b} \sqrt{\frac{n + a^2}{2}} = \frac{2}{t(1 - b)} = \lambda \\ &D = \frac{L}{n - L} = \frac{\lambda}{n - \lambda} \end{align*} Note also that this implies that $D + 1 = \frac{n}{n - \lambda}$.\\ Before making the final estimate, we take a moment to observe that \[\left(\sqrt{3}\right)^{\frac{n}{n-\lambda} + AD} < 2^{\frac{n-1}{n-\lambda}}.\] This estimate is tedious, but not difficult. 
One can show that \[\left(\sqrt{3}\right)^{\frac{n}{n-\lambda} + AD} < 2^{\frac{n-1}{n-\lambda}}\] occurs if and only if \[2 < \left(\frac{2}{\sqrt{3}}\right)^{n + A\lambda}.\] Estimating $A$ from below by \[A > \frac{1}{(1 - \sqrt{2(n+1)/n^2})^2}\] and estimating $\lambda$ from below by $\lambda > \sqrt{2n}$ gives that \[2 < \left(\frac{2}{\sqrt{3}}\right)^{n + A\lambda}\] is implied by \[2 < \left(\frac{2}{\sqrt{3}}\right)^{n + \frac{n^2\sqrt{2n}}{n^2-2n\sqrt{2n + 2} + 2n + 2}}.\] Upon observing that \[n + \frac{n^2\sqrt{2n}}{n^2-2n\sqrt{2n + 2} + 2n + 2} \geq 20\] when $n \geq 6$ for instance, one can now see that \[2 < \left(\frac{2}{\sqrt{3}}\right)^{n + \frac{n^2\sqrt{2n}}{n^2-2n\sqrt{2n + 2} + 2n + 2}}\] and as a result, we must have \[\left(\sqrt{3}\right)^{\frac{n}{n-\lambda} + AD} < 2^{\frac{n-1}{n-\lambda}}.\] We can now conclude \begin{align*} Y_0 &= (2C)^{\frac{1}{n-\lambda}}(4e^{A_1})^{\frac{\lambda}{n - \lambda}}\\ &= (2(2n^{1/2}M(F))^n)^{\frac{1}{n - \lambda}}(4e^{A(\log M(F) + \frac{n}{2})})^{\frac{\lambda}{n - \lambda}}\\ &= 2^{\frac{1 + n + 2\lambda}{n-\lambda}}\cdot n^{\frac{n}{2(n - \lambda)}}\cdot M(F)^{\frac{n}{n - \lambda}}e^{AD(\log M(F) + \frac{n}{2})}\\ &= 2^{\frac{1 + n + 2\lambda}{n-\lambda}}\cdot n^{\frac{n}{2(n - \lambda)}}\cdot M(F)^{\frac{n}{n - \lambda} + AD}\cdot e^{\frac{ADn}{2}}\\ &\leq 2^{\frac{1 + n + 2\lambda}{n-\lambda}}\cdot n^{\frac{n}{2(n - \lambda)}}\cdot (\sqrt{3}H(F))^{\frac{n}{n - \lambda} + AD}\cdot e^{\frac{ADn}{2}}. \end{align*} Next we use the fact that $\left(\sqrt{3}\right)^{\frac{n}{n-\lambda} + AD} < 2^{\frac{n-1}{n-\lambda}}$ to find that \begin{align*} Y_0 &< 2^{\frac{1 + n + 2\lambda}{n-\lambda} + \frac{n - 1}{n - \lambda} + AD}\cdot n^{\frac{n}{2(n - \lambda)}}\cdot H(F)^{\frac{n}{n - \lambda} + AD}\cdot e^{\frac{ADn}{2}}\\ &= 2^{\frac{2n + 2\lambda}{n - \lambda} + AD} \cdot n^{\frac{D + 1}{2}} \cdot H(F)^{1 + D + AD} \cdot e^{\frac{ADn}{2}}\\ &= H(F)^{\chi_n} \cdot \exp\left(\left(\frac{2n + 2\lambda}{n - \lambda} + AD\right)\log(2) + \frac{D + 1}{2}\log n + \frac{ADn}{2}\right)\\ &= H(F)^{\chi_n} \cdot \exp\left(\left(\frac{4\lambda + 2(n - \lambda)}{n - \lambda} + AD\right)\log(2) + \frac{D + 1}{2}\log n + \frac{ADn}{2}\right)\\ &= H(F)^{\chi_n} \cdot \exp\left(\left(4D + 2 + AD\right)\log(2) + \frac{D + 1}{2}\log n + \frac{ADn}{2}\right)\\ &= H(F)^{\chi_n} \cdot e^{\pi_n}\\ &= Y_F. \end{align*} \end{proof} \begin{proof}[Proof of Lemma \ref{belongsToImpliesClosest}] Since $\frac{p}{q}$ is a large special solution, we have \begin{align*} p > q \geq Y_F &= H^{\chi_n}e^{\pi_n} \end{align*} Since $\frac{p}{q}$ belongs to $\alpha$, Thomas' Corollary 3.1 in \cite{Thomas2000} indicates that \begin{align*} \left|\frac{p}{q} - \alpha\right| &< \frac{1}{p^{n^*}q}\\ &< \frac{1}{Y_F^{n/2}} \end{align*} Suppose, by contradiction, that there exists $\beta \in \mathbb{C}$ with $f(\beta) = 0$ and $\left|\frac{p}{q} - \beta\right| < \left|\frac{p}{q} - \alpha\right|$. 
Then by the triangle equality, we find that \begin{align} \left|\alpha - \beta\right| &\leq \left|\frac{p}{q} - \beta\right| + \left|\frac{p}{q} - \alpha\right|\nonumber\\ &< \frac{2}{Y_F^{n/2}}\label{eq:distBetweenRoots} \end{align} Since $\alpha$ and $\beta$ are distinct roots of $f$, Theorem 4 in \cite{Rump1979} indicates that \begin{align} |\alpha - \beta| > \frac{1}{2n^{n/2 + 2} (4H)^n} = \frac{1}{2^{2n + 1}n^{n/2 + 2}H^n}\label{eq:RumpRootSep} \end{align} Combining \eqref{eq:distBetweenRoots} and \eqref{eq:RumpRootSep}, we find that \[\frac{1}{2^{2n + 1}n^{n/2 + 2}H^n} < \frac{2}{Y_F^{n/2}}\] and rearranging yields \[\frac{Y_F^{n/2}}{2^{2n + 2}n^{n/2 + 2}H^n} < 1.\] From here, we can use the fact that $Y_F = H^{\chi_n}e^{\pi_n}$ to find \begin{align*} 1 &> \frac{H^{n(\chi_n/2 - 1)}e^{n\pi_n/2}}{2^{2n + 2}n^{n/2 + 2}}\\ &\geq \frac{e^{n\pi_n/2}}{2^{2n + 2}n^{n/2 + 2}} \end{align*} where the last inequality follows because $\chi_n \geq 2$. After rearranging, this implies that \begin{align*} \pi_n &< \frac{(4n + 4)\log(2) + (n + 4)\log(n)}{n}\\ &\leq 5\log(2) + 2\log(n) \end{align*} where the last inequality follows from the fact that $n \geq 6$. However, the last inequality contradicts our hypothesis that $\pi_n \geq 5\log(2) + 2\log(n)$, so no such $\beta$ can exist and the closest root of $f(x)$ to $p/q$ is $\alpha$. \end{proof} Finally, we prove proposition \ref{largeSpecialSolutionBound}. \begin{proof}[Proof of Proposition \ref{largeSpecialSolutionBound}.] Let $\alpha > 1$ be a real root of $f(x)$. By lemma \ref{largeYF}, every large special solution $(p,q)$ so that $p/q$ belongs to $\alpha$ is large in the sense of \cite{Bombieri1987}. Moreover, by lemma \ref{belongsToImpliesClosest}, any large special solution $(p,q)$ so that $p/q$ belongs to $\alpha$ has \[\left|\frac{p}{q} - \alpha\right| = \min_{f(\beta) = 0} \left|\frac{p}{q} - \beta\right|.\] Hence, every large special solution $(p,q)$ so that $p/q$ belongs to $\alpha$ is large (in the sense of \cite{Bombieri1987}) and is nearest to $\alpha$ among all the roots of $f(X)$. Lemma 2 of \cite{Bombieri1987} indicates that there are no more than \[Z = \left\lfloor\frac{\log E + 2\log(n) - \log(L - 2)}{\log(n-1)}\right\rfloor + 2\] large solutions $(p,q)$ so that $p/q$ is nearest to $\alpha$ among all the roots of $f(X)$ and so we conclude that there are no more than $Z$ large special solutions $(p,q)$ with $p/q$ belonging to $\alpha$. \end{proof} \subsection{Choosing Parameters for Large Degrees} Begin by assuming $n \geq 507$. We handle all smaller instances of $n$ computationally. \begin{proposition}\label{TZForLargeN} For $n \geq 507,$ we can take $d_0 = \frac{n^*}{2}$, $d = n^*$, $a = \frac{1}{4}$, $C = 7/6$, $c = \frac{8}{9C^2 - 1}$, $b = 1 - \frac{\sqrt{2n + \frac{1}{8}}}{\frac{cn^2}{n-1} + 2}$ and obtain $T = 2$ and $Z = 2$. \end{proposition} Observe first that these are the smallest possible values of $T$ and $Z$. \begin{proof} To show this, we first must show that these choices of $d_0, d, a,$ and $b$ are valid.\\ Certainly $0 \leq d_0 \leq n^* - 1.4$ and $1 < d \leq n^*$. All that remains to show for $d_0$ and $d$ is \eqref{eq:Q1size}. We have \begin{align*} Q_1^{d - 1} &= \left(\frac{p_0^{c_0}}{K_0}\right)^{d-1}\\ &\geq \left(\frac{2^{n^*/2}}{K_{n^*/2}(n)}\right)^{\frac{n^*}{2} - 1}. 
\end{align*} But observe that \begin{align*} K_{n^*/2}(n) &= 2\sqrt{\frac{2n}{(n-1)(n-2)}}\left(2.032^{1/n}\left(1 + \sqrt{\frac{2}{(n-2)p_0^n}}\right)\right)^{n^*/2}\\ &\leq 2 \cdot 2.032\left(1 + \frac{1}{\sqrt{(n-2)2^{n-1}}}\right)^{\frac{n-2}{4}}\\ &\leq 5\left(1 + \frac{1}{n-2}\right)^{\frac{n-2}{4}}\\ &\leq 5e^{1/4} \end{align*} so $Q_1^{d-1}$ is certainly greater than 1. Similar reasoning shows that $K_d(n) \leq 5e^{1/2}$, so it is certainly also the case that \[K_d(n) \leq 5e^{1/2} < \left(\frac{2^{n^*/2}}{5e^{1/4}}\right)^{\frac{n^*}{2} - 1} \leq \left(\frac{2^{n^*/2}}{K_{n^*/2}(n)}\right)^{\frac{n^*}{2} - 1} = Q_1^{b-1}.\] Hence, our choices of $d$ and $d_0$ are valid.\\ Next, we wish to check that our choices for $a$ and $b$ are valid. To check $0 < a < b$, note that \begin{align} b &= 1 - \frac{\sqrt{2n + \frac{1}{8}}}{\frac{cn^2}{n-1} + 2} \geq 1 - \frac{2\sqrt{n}}{\frac{cn^2}{n-1}} = 1 - \frac{2(n-1)\sqrt{n}}{cn^2} \geq 1 - \frac{2}{c\sqrt{n}}\nonumber\\ &\geq 1 - \frac{2}{c\sqrt{507}}> \frac{1}{4} = a.\label{vLowerBound} \end{align} To check that $b < 1 - \frac{\sqrt{2n + 2a^2}}{n}$, it suffices to show that $n > \frac{cn^2}{n-1} + 2$. But this occurs if and only if $(1-c)n^2 - 3n + 2 > 0$, i.e. $n > \frac{3 + \sqrt{9 - 8(1-c)}}{2(1-c)} \approx 9.66$, which we certainly have.\\ To show that $T = 2$, we claim that we have the following two inequalities (and from equation \eqref{eq:TDef}, it will follow that $T = 2$): \begin{align} \frac{\chi_n}{\frac{d_0(d-1)+d}{n(d-1)}} + 1 < d \label{eq:ChiReq}\\ \frac{\pi_n}{\log K_d(n)^{-\frac{1}{d-1}}Q_1} + 1 < d \label{eq:PiReq} \end{align} We first show \eqref{eq:ChiReq}. Substituting $d_0 = \frac{n^*}{2}$ and $d = n^*$, observe that \eqref{eq:ChiReq} is equivalent to \[\chi_n < \frac{\left(\frac{n-2}{4}\right)\left(\frac{n-4}{2} - 1\right) + \frac{n-2}{2}}{n} = \frac{n-2}{8} = \frac{n^*}{4}.\] Keeping an eye on the definition of $\chi_n$ given in equation \eqref{eq:chiDef}, we have that \begin{align*} A &= 16\\ C &= \frac{7}{6}\\ c &= \frac{8}{9C^2 - 1} = \frac{32}{45}\\ b &= 1 - \frac{\sqrt{2n + \frac{1}{8}}}{\left(\frac{cn^2}{n - 1} + 2\right)} \end{align*} All of these together yield \begin{align}\label{eq:LDef} L = \frac{cn^2}{n-1} + 2 = \frac{32}{45}\left(n + 1 + \frac{1}{n - 1}\right) + 2 \end{align} and it is now easy to check that \[\frac{32}{45}n \leq L \leq \frac{32}{45}n + 3.\] From here we have \begin{align*} D &= \frac{L}{n - L} \leq \frac{\frac{32}{45}n + 3}{n - \left(\frac{32}{45}n + 3\right)} = \frac{\frac{32}{45}n + 3}{\frac{13}{45}n - 3} = \frac{32}{13} + \frac{6075}{13(13n - 135)} \leq 2.54 \\ D &\geq \frac{\frac{32}{45n}}{n - \frac{32}{45}n} = \frac{32}{13} \approx 2.46 \end{align*} when we use the fact that $n \geq 507$. To convert these into estimates on $\chi_n$, we have \begin{align} \chi_n &= 17D + 1 \leq \frac{94853}{2152} \leq 44.08\label{eq:chiNUpperBound}\\ \chi_n &= 17D + 1 \geq \frac{557}{13} \geq 42.8.\label{eq:chiNLowerBound} \end{align} Since $n \geq 507$, we now have $\chi_n \leq 44.08 < \frac{n-2}{8}$ which confirms equation \eqref{eq:ChiReq}.\\ Equation \eqref{eq:PiReq} is more complicated to handle. Observe that by equation \eqref{eq:piDef}, we have \begin{align}\label{eq:piNUpperBound} \pi_n &= (D(4 + A) + 2)\log 2 + \frac{(D + 1)\log n}{2} + \frac{ADn}{2}\nonumber\\ &\leq 36.6 + 1.77\log n + 20.28n\nonumber\\ &\leq 37 + 21n. 
\end{align} For reference later, we will also note \begin{align}\label{eq:piNLowerBound} \pi_n &\geq 35.5 + 1.7\log n + 19.6n\nonumber\\ &\geq 46 + 19n \end{align} It will additionally be helpful for us to have an estimate on $K_d(n)$. We have \begin{align*} K_d(n) &= 2\sqrt{\frac{2n}{(n-1)(n-2)}}\left(2.032^{1/n}\left(1+\sqrt{\frac{2}{(n-2)p_0^n}}\right)\right)^d\\ &\leq 4\sqrt{\frac{3n-3}{(n-1)(n-2)}}\left(1+\sqrt{\frac{2n-4}{(n-2)p_0^n}}\right)^{d}\\ &\leq 4\sqrt{\frac{3}{n-2}}\left(1+\sqrt{\frac{2}{p_0^n}}\right)^{n^*}. \end{align*} Now, since $p_0 \geq 2$, we have \begin{align}\label{eq:KbUpperBound} K_d(n) &\leq 4\sqrt{\frac{3}{n-2}}\left(1+\sqrt{\frac{2}{2^n}}\right)^{\frac{n-2}{2}} \leq 4\sqrt{\frac{3}{n-2}}\left(1+\frac{1}{2^{\frac{n-2}{2}}}\right)^{\frac{n-2}{2}}\nonumber\\ &\leq 4\sqrt{\frac{3}{n-2}}\left(1 + \frac{1}{\left(\frac{n-2}{2}\right)}\right)^{\frac{n-2}{2}} \leq 4e\sqrt{\frac{3}{n-2}} \end{align} and similar reasoning yields \begin{align}\label{eq:KbzeroUpperBound} K_{d_0}(n) \leq 4\sqrt{\frac{3}{n-2}}\left(1 + \frac{1}{2^{n^*}}\right)^{n^*/2}. \end{align} Combining the upper bounds in equations \eqref{eq:piNUpperBound}, \eqref{eq:KbUpperBound}, and \eqref{eq:KbzeroUpperBound} with the fact that for $n \geq 270$, \[\pi_n < 37+21n < \frac{\log(1.9)}{8}(n-2)(n-4)\] yields \begin{align*} (d-1)\log\left[K_d(n)^{-\frac{1}{d-1}}Q_1\right] &= \log\left[K_d(n)^{-1}Q_1^{d-1}\right]\\ &\geq \log\left[\frac{1}{4} \cdot \inv{e} \cdot \left(\frac{n-2}{3}\right)^{1/2} \cdot \left(\frac{p_0^{c_0}}{K_0}\right)^{n^*-1}\right]\\ &\geq \log\left[\frac{1}{4} \cdot \inv{e} \cdot \left(\frac{n-2}{3}\right)^{1/2}\left(\frac{2^{n^*/2}}{K_{d_0}(n)}\right)^{n^*-1}\right]\\ &\geq \log\left[\frac{1}{4} \cdot \inv{e} \cdot \left(\frac{n-2}{3}\right)^{1/2}\left(\frac{2^{n^*/2}}{\left(1+2^{-n^*}\right)^{n^*/2}}\right)^{\frac{n-4}{2}}\left(\frac{1}{4}\sqrt{\frac{n-2}{3}}\right)^{\frac{n-4}{2}}\right]\\ &\geq \log\left[\left(\frac{1}{4}\right)^{\frac{n-2}{2}}\left(\frac{n-2}{3}\right)^{\frac{n-2}{4}}\inv{e}\left(\frac{2}{1+2^{-n^*}}\right)^{\left(\frac{n-2}{4}\right)\left(\frac{n-4}{2}\right)}\right]\\ &\geq \log\left[\left(\frac{n-2}{48}\right)^{\frac{n-2}{4}}\inv{e}\cdot 1.9^{\frac{(n-2)(n-4)}{8}}\right]\\ &\geq \frac{\log1.9}{8}(n-2)(n-4) + \frac{n-2}{4}\log\left(\frac{n-2}{48}\right) - 1\\ &\geq \frac{\log(1.9)}{8}(n-2)(n-4)\\ &> \pi_n \end{align*} which now implies that equation \eqref{eq:PiReq} is satisfied. Hence, we conclude that $T = 2$.\\ Finally, we check that $Z = 2$. In order to use Proposition \ref{largeSpecialSolutionBound}, we must verify that $\chi_n \geq 2$ and $\pi_n \geq 5\log(2) + 2\log(n)$. However, these quickly follow from \eqref{eq:chiNLowerBound} and \eqref{eq:piNLowerBound}. \\ As before in equation \eqref{vLowerBound}, we have $b \geq 1 - \frac{2}{c\sqrt{507}} > 0.87509$, so \begin{align*} E = \frac{1}{2(b^2 - a^2)} < 0.711 < c \end{align*} and so (also using \eqref{eq:LDef}) \begin{align*} \frac{\log E + 2\log(n) - \log(L - 2)}{\log(n-1)} < \frac{\log\left(\frac{cn^2}{L-2}\right)}{\log(n-1)} = 1. \end{align*} Therefore, by \eqref{eq:ZDef}, we note that $Z = 2$. \end{proof} Note that Proposition \ref{TZForLargeN} proves Lemma \ref{trinomialRealRootCount} for $n \geq 507$. 
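The closed-form quantities appearing above can also be checked numerically. The following sketch (illustrative only; it takes $p_0(n) = 2$ and simply substitutes the parameter choices of Proposition \ref{TZForLargeN} into equations \eqref{eq:TDef} and \eqref{eq:ZDef}) returns $T = Z = 2$ for $n \geq 507$:
\begin{verbatim}
import math

def T_and_Z(n, p0=2.0):
    # Evaluate T (eq. TDef) and Z (eq. ZDef) for the parameter choices
    # of Proposition TZForLargeN; numerical sketch only.
    nstar = (n - 2) / 2
    d0, d, a = nstar / 2, nstar, 0.25
    c = 8 / (9 * (7 / 6) ** 2 - 1)                  # = 32/45
    b = 1 - math.sqrt(2 * n + 1 / 8) / (c * n ** 2 / (n - 1) + 2)

    def K(dd):                                      # K_dd(n)
        m_n = 2 * math.sqrt(2 * n / ((n - 1) * (n - 2)))
        r_n = 2.032 ** (1 / n)
        u_n = math.sqrt(2 / ((n - 2) * p0 ** n))
        return m_n * (r_n * (1 + u_n)) ** dd

    Q1 = p0 ** (nstar - d0) / K(d0)
    L = math.sqrt(2 * (n + a ** 2)) / (1 - b)
    D, A, E = L / (n - L), 1 / a ** 2, 1 / (2 * (b ** 2 - a ** 2))
    chi = D * (A + 1) + 1
    pi_n = ((D * (4 + A) + 2) * math.log(2)
            + (D + 1) * math.log(n) / 2 + n * A * D / 2)

    t1 = math.log(chi / ((d0 * (d - 1) + d) / (n * (d - 1))) + 1) / math.log(d)
    t2 = math.log(pi_n / (math.log(Q1) - math.log(K(d)) / (d - 1)) + 1) / math.log(d)
    T = math.floor(max(t1, t2)) + 2
    Z = math.floor((math.log(E) + 2 * math.log(n)
                    - math.log(L - 2)) / math.log(n - 1)) + 2
    return T, Z

print(T_and_Z(507))   # expected: (2, 2)
\end{verbatim}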
\subsection{Choosing Parameters for Small Degrees}\label{paramsForSmallN} For $n \leq 506$, we make parameter choices listed in the Jupyter notebook listed here:\\ \url{https://pages.uoregon.edu/gknapp4/files/trinomial_computations.ipynb}.\\ One can check (and the code checks this automatically) that the parameter choices satisfy \eqref{eq:Q1size}, \eqref{eq:UVReq} along with the necessary bounds on $\pi_n$ and $\chi_n$ in order to use proposition \ref{largeSpecialSolutionBound}, and yield the $T$ and $Z$ values giving $z(n) = T + Z + 1$ when $6 \leq n \leq 8$ and $z(n) = T + Z$ for $n \geq 9$. Therefore, these computations verify Lemma \ref{trinomialRealRootCount} for $n \leq 506$, which concludes our investigation.\\ In brief, the code picks a value of $n$, sets $d = n^*$, brute force loops over a number of valid values for the parameters $d_0,a,b$, computes the corresponding $T$ and $Z$ values defined in equations \eqref{eq:TDef} and \eqref{eq:ZDef}, and records the values of $d_0,a,$ and $b$ which minimize $T+Z$. The following table contains a summary of some of the more interesting data points listed in the Jupyter notebook.\\ \begin{longtable}{*7{|c|}} \hline $n$ & $d_0$ & $d$ & $a$ & $b$ & $T$ & $Z$ \\ \hline \endhead 6 & 0 & 2 & 0.18 & 0.29 & 10 & 4 \\ 7 & 0.55 & 2.5 & 0.2 & 0.28 & 7 & 4 \\ 8 & 0.992 & 3 & 0.16 & 0.41 & 7 & 3 \\ 9 & 0.882 & 3.5 & 0.17 & 0.4 & 6 & 3 \\ 10 & 1.196 & 4 & 0.23 & 0.41 & 5 & 3 \\ 11 & 1.705 & 4.5 & 0.14 & 0.37 & 5 & 3 \\ 12 & 2.088 & 5 & 0.27 & 0.41 & 4 & 3 \\ 13 & 2.255 & 5.5 & 0.2 & 0.37 & 4 & 3 \\ 14 & 2.53 & 6 & 0.16 & 0.35 & 4 & 3 \\ 15 & 3.009 & 6.5 & 0.13 & 0.34 & 4 & 3 \\ 16 & 3.136 & 7 & 0.11 & 0.32 & 4 & 3 \\ 17 & 3.965 & 7.5 & 0.33 & 0.43 & 3 & 3 \\ 18 & 4.29 & 8 & 0.27 & 0.39 & 3 & 3 \\ $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ 37 & 7.728 & 17.5 & 0.08 & 0.25 & 3 & 3 \\ 38 & 9.794 & 18 & 0.07 & 0.25 & 3 & 3 \\ 39 & 11.97 & 18.5 & 0.43 & 0.47 & 2 & 3 \\ 40 & 12.144 & 19 & 0.41 & 0.46 & 2 & 3 \\ $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ 218 & 57.564 & 108 & 0.03 & 0.87 & 3 & 2 \\ 219 & 68.2227 & 108.5 & 0.399258 & 0.883258 & 2 & 2 \\ 220 & 68.6488 & 109 & 0.408477 & 0.884477 & 2 & 2 \\ $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ 506 & 218.022 & 252 & 0.517076 & 0.927076 & 2 & 2 \\ 507 & 218.457 & 252.5 & 0.517138 & 0.927138 & 2 & 2 \\ \hline \end{longtable} \section{Acknowledgments} I would like to thank Shabnam Akhtari for her guidance on this project and paper, Ben Young for his suggestions on the computations and feedback on my code, and Cathy Hsu for her mentorship and general research advice. This research was supported by NSF grant number 2001281. \newpage \bibliographystyle{plain}
\section{Introduction} \input{chapters/introduction.tex} \section{Related work} \input{chapters/related_work.tex} \section{Problem} \input{chapters/hidden_representation.tex} \section{Method: potential energy loss} \input{chapters/genralization_error.tex} \section{Empirical study} \input{chapters/empirical_study.tex} \section{Conclusion} \input{chapters/conclusion.tex} \subsection{Experiment Settings} We describe some experiment settings below. \textbf{Implementation.} All experiments are implemented using the PyTorch library\footnote{https://pytorch.org} and are executed on a server with NVIDIA RTX3090 GPUs. \textbf{Network structures.} For the MNIST task, the network structure is [Linear(784, 128), LeakyReLU, Linear(128, 32), Tanh, Linear(32, 10)]. For the Fashion-MNIST task, the network structure is [Conv(1, 32, kernel\_size=5), LeakyReLU, MaxPool(2), Conv(32, 64, kernel\_size=3, padding=1), LeakyReLU, MaxPool(2), Linear(2304, 128), Tanh, Linear(128, 10)]. For CIFAR-10, we use the ResNet-20 structure described in the original paper~\cite{hekaiming2016resnet}. \textbf{Layer to split.} The split position is a crucial factor in split learning. However, the effect of the split position varies across model architectures. Hence, in all experiments, we split the model at its last dense layer, consistent with our analysis. The last dense layer is the top model, and all the remaining layers form the bottom model. \textbf{Training strategy.} For all split training tasks, we use the Adam optimizer with default parameters. For the attacker's fine-tuning tasks, since Adam performs poorly when the sample size is extremely small, we use the SGD optimizer with a learning rate of 0.01 and momentum of 0.9. The stopping criterion for fine-tuning tasks is that the training error falls below 0.01 or the number of epochs reaches 1,000. For training with PELoss and DcorLoss, we train the model for 100 epochs and save the best model on the validation set. We save the best model during the 90th to 100th epoch to ensure that the loss is optimized sufficiently. The attacker's accuracy is obtained by fine-tuning the saved (bottom) model. For vanilla split training, we use an early stopping strategy of 20 epochs. We run all experiments more than 5 times and report the mean and standard error of the results. \textbf{Data augmentation.} To eliminate unnecessary variables, no data augmentation is used in our experiments, including image flipping and random cropping. Hence, the reported model performance may be lower than that of studies using data augmentation by default. \subsection{Model performance vs attacker's accuracy} We report the test accuracy and the attacker's accuracy obtained by both methods, with $\alpha$ varying from $0.5$ to $16$, and the number of leaked labels varying from $1$ to $32$, in \Cref{fig:acc_vs}. For both methods, the upper-right end of the curve corresponds to the smallest $\alpha$ ($=0.5$), and the lower-left end corresponds to the largest $\alpha$ ($=16$); $\alpha$ doubles each time. We also plot the test accuracy (green line) and the attacker's accuracy (red line) in the vanilla split learning case, and the perfect protection area (where the attacker's accuracy is lower than the accuracy of training from scratch using leaked labels). For both methods, we observe the trend that increasing $\alpha$ decreases the attacker's accuracy, but the test accuracy is also lowered. Also, larger numbers of leaked labels greatly increase the attacker's accuracy for both methods.
Although both methods protect privacy at the cost of damaging the model performance, it is obvious that our proposed PELoss is superior to DcorLoss. Our proposed PELoss has the following advantages compared with DcorLoss: \begin{itemize} \item The curve of PELoss lies consistently to the lower-right of the DcorLoss curve. In other words, at the same test accuracy level, PELoss has a significantly lower attacker's accuracy than DcorLoss, indicating that PELoss has a better privacy-preserving ability. \item The curve of PELoss is smoother than the curve of DcorLoss. While the DcorLoss curve may fluctuate randomly as $\alpha$ changes, the PELoss curve moves lower as $\alpha$ increases, i.e., PELoss is more responsive to the change of $\alpha$. Hence, it is easier to balance privacy and model performance using PELoss. \item When $\alpha$ is large enough, e.g., $\alpha=16$, DcorLoss may make the model training diverge, while training with PELoss is more stable. \end{itemize} \begin{figure*}[ht] \centering \begin{subfigure}[h]{0.3\linewidth} \includegraphics[width=1\linewidth]{res/mnist-converge.pdf} \caption{MNIST.} \end{subfigure} \begin{subfigure}[h]{0.3\linewidth} \includegraphics[width=1\linewidth]{res/fashion-converge.pdf} \caption{FASHION.} \end{subfigure} \begin{subfigure}[h]{0.3\linewidth} \includegraphics[width=1\linewidth]{res/cifar-converge.pdf} \caption{CIFAR.} \end{subfigure} \caption{Convergence speed.} \label{fig:training-curve} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{res/tsne-all.png} \caption{t-SNE of the bottom model output.} \label{fig:tsne} \end{figure*} \subsection{Convergence speed} We show the convergence speed of our method in \Cref{fig:training-curve} with $\alpha \in \{0.5, 2, 8\}$, and compare it with vanilla split training. We can see that with PELoss added, the model convergence is slightly slowed down as $\alpha$ increases. However, the slowdown is nearly negligible, and we believe such a minor slowdown is acceptable in practical applications. \subsection{Visualization} To directly show the effect of the potential energy loss, we visualize the bottom model output using t-SNE in \Cref{fig:tsne}. We can see that without the potential energy loss, the bottom outputs of different classes are clustered with a relatively large margin. As $\alpha$ increases, the clusters become broader and even overlap with each other. We observe that the overlapping in the CIFAR-10 dataset is more obvious, possibly because the bottom model is more powerful and can distort the output distribution more strongly. \subsection{Generalization error from sampling error} Recall that our goal is to train a split model $(M_b, M_t)$ such that the bottom model $M_b$ is hard to fine-tune with a small number of training samples. Decreasing the attacker's fine-tuning advantage is equivalent to increasing $\mathop{\mathbb E}\limits_{X'_k, Y'_k} R[(M_b, M_t^{(M_b(X'_k), Y'_k)})]$. Since $M_b$ is fixed, this can be rewritten as $\mathop{\mathbb E}\limits_{H'_k=M_b(X'_k), Y'_k} R[ M_t^{(H'_k, Y'_k)}]$. Hence, our target becomes finding an $M_b$ whose output distribution $M_b(X)$ satisfies the following: the generalization error is large when the sample size is small, and the generalization error is small when the sample size is sufficiently large. To simplify our target, we assume that the bottom model $M_b$ is powerful enough to output any desired distribution. Then the problem becomes finding a data distribution that introduces a large generalization error for the learner (when the sample size is small).
By intuition, such data distribution should be kind of `complicated' so that a small-sized sample can not represent the true distribution. However, there are also some restrictions on the output distribution so that the classification is feasible. For example, suppose the top model is a linear classifier, then the positive and negative samples should be separated by a hyperplane. We use a simplified example to get some insights into the relationship between the data distribution and generalization error. Assume that all data points are distributed on the $d$-dimensional hypersphere $\{\mathbf x: \sum\limits_{i=1}^d x_i^2 = 1\}$. Let the hypothesis set be $ \mathcal H = \{h: h(\mathbf x) = I[\mathbf w \cdot \mathbf x > t], ||\mathbf w||_2 = 1\} $, where $I[\cdot]$ is the indicator function, and $t$ is a constant threshold. We make the following assumptions: \begin{itemize} \item The probability density of positive samples only depends on $\mathbf x\cdot \mathbf w$, i.e., it is isotropic in any directions orthogonal to $\mathbf w$. \item Given a set of positive samples $S = \{\mathbf x_1, ..., \mathbf x_n\}$, the learning algorithm simply outputs the normalized mean of these samples as the parameter of learned hypothesis, i.e., $f^{(S)}(\mathbf x) = I\left[\sum\limits_{i=1}^n \mathbf x_i \cdot \mathbf x \big / \big\lVert\sum\limits_{i=1}^n \mathbf x_i\big\rVert_2 > t \right]$. \end{itemize} Without loss of generality, let the target hypothesis be $f(x) = I[\mathbf e_1\cdot \mathbf x > t]$, where $\mathbf e_1 = [1,0, ..., 0]$ is the unit vector along the first axis. Now we want to estimate the generalization error when the learned parameter $\mathbf w = \sum\limits_{i=1}^n \mathbf x_i \big / \big\lVert\sum\limits_{i=1}^n \mathbf x_i\big\rVert_2$ slightly differs from the true parameter $\mathbf e_1$. Since the distribution is isotropic except in the direction of $\mathbf e_1$, we may assume that $\mathbf w$ lies on the plane expanded by the first two axis, i.e., $\mathbf w = \mathbf e_1 \cos \epsilon + \mathbf e_2 \sin \epsilon$, where $\epsilon$ is a small angle between $\mathbf w$ and $\mathbf e_1$. The generalization error is \begin{equation} \begin{split} R[\mathbf w] & = \mathop{\mathbb E}\limits_{\mathbf x \sim \mathcal S} I[x_1 > t] \cdot I[x_1 \cos \epsilon + x_2 \sin \epsilon \le t] \\ & = \int_{\substack{x_1 > t \\ x_1 \cos \epsilon + x_2 \sin \epsilon \le t \\ x_1^2 + ... + x_d^2 = 1}}p(x_1, x_2, ...,x_d)dV \\ & \le \int_{\substack{x_1^2 + ... + x_d^2 = 1 \\t < x_1 \le \frac{t}{\cos\epsilon} + \sqrt{1 - t^2}\tan\epsilon}}p(x_1, x_2, ...,x_d)dV \\ & \approx \int_{x_1 = t}^{t + \epsilon \sqrt{1-t^2}} p_1(x_1) dx_1 \approx \epsilon p_1(t) \sqrt{1-t^2}, \end{split} \label{eq:error-estimation} \end{equation} where $p_1$ is the marginal density function of $x_1$. From \eqref{eq:error-estimation} we can see that with $\epsilon$ fixed, the generalization bound is approximately proportional to the probability mass of the data points falling near the boundary of the target region. In the above analysis, the estimation error $\epsilon$ is fixed. We now explore the relationship between the data distribution and the distribution of $\epsilon$. Notice that for any random variable $X$, if $X_1, ..., X_m$ are $m$ independent samples, we have {\small$\mathbb E \left[ \left(\dfrac{1}{m}\sum_{i=1}^m X_i - \mathbb E[X] \right)^2 \right] = \dfrac{1}{m} \mathbb E\left[\left(X -\mathbb EX\right)^2 \right]$}. 
In other words, if the random variable is likely to fall far from the mean of its distribution, the sample mean tends to have a larger error. Although $\epsilon$ is not exactly the error of the sample mean in our case, it is also reasonable to assume $\mathbb E[\epsilon^2]\propto \mathbb E\left[(X - \mathbb EX)^2\right]$. To make (the magnitude of) $\epsilon$ larger, the data points should be as far from their mean as possible. Interestingly, in our case, this is also equivalent to letting the data points lie near the boundary. We show a simple illustration of the above analysis in \Cref{fig:estimation-error}. The target hypothesis is the solid circle, and the learned hypothesis is the dashed circle. In the left figure, the dark blue points are three samples picked from all data points. Based on those samples, the learner makes an estimate represented by the dashed circle. As we can see in the right figure, although the estimate seems to be close to the target hypothesis, there are still many misclassified data points (colored red). In other words, the generalization error is large. In summary, pushing data points to the boundary of the decision region will increase the generalization error for the following two reasons: \begin{itemize} \item The sampling error tends to be larger. \item A small error in the decision region will result in a large generalization error. \end{itemize} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{res/estimation-error.png} \caption{A small error in the decision region caused by a small sample size (left) leads to a large generalization error (right). } \label{fig:estimation-error} \end{figure} \subsection{Potential energy loss} When electrostatic equilibrium is established, any net charge resides on the surface of the conductor~\cite[ch. 2]{griffiths2005introduction}. This is partly caused by Coulomb's law, which tells us that \textit{like charges} (electric charges of the same sign) repel each other and \textit{opposite charges} attract each other. Inspired by this, we can view the data points of the same class as like charges that exert repulsive forces on each other. As a result, those data points will tend to move away from each other and be pushed to the boundary of the decision region. Coulomb's law is stated as follows: \begin{equation} \label{eq:coulomb} \mathbf F = k \dfrac{q_1q_2(\mathbf r_1 - \mathbf r_2)}{|\mathbf r_1 - \mathbf r_2|^3}, \end{equation} where $k$ is a constant, $q_1, q_2$ are the signed magnitudes of the charges, $\mathbf r_1, \mathbf r_2$ are their positions, and $\mathbf F$ is the repulsive force. Since we assume that all data points belonging to the same class carry charges of the same sign and magnitude, and ignoring the constant term, \eqref{eq:coulomb} becomes $\mathbf F = \dfrac{\mathbf r_1 - \mathbf r_2}{|\mathbf r_1 - \mathbf r_2|^3}$. Notice that the repulsive force is the negative gradient of the electric potential energy, i.e., $\mathbf F = -\nabla_{\mathbf r_1} \dfrac{1}{|\mathbf r_1 - \mathbf r_2|}$, so minimizing the potential energy by gradient descent drives the points apart along the repulsive force. Based on this, we define the \textit{potential energy loss} (PELoss) as \begin{equation} \label{eq:loss-pe} L_\text{pe} = \sum\limits_{c\in\mathcal C}\sum\limits_{\mathbf h \in H_c}\sum\limits_{\mathbf h' \in H_c, \mathbf h' \ne \mathbf h} \dfrac{1}{|\mathbf h - \mathbf h'|}, \end{equation} where $\mathcal C$ is the label set, and $H_c$ is the set of bottom model outputs of the samples with label $c$.
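As a concrete illustration, the loss in \eqref{eq:loss-pe} can be computed on a mini-batch as in the following minimal PyTorch-style sketch (the function and variable names here are ours and are not taken from any released implementation):
\begin{verbatim}
import torch

def potential_energy_loss(h, y, eps=1e-8):
    # Sum of pairwise inverse distances between same-class bottom outputs.
    # h: (batch, d) tensor of bottom-model outputs; y: (batch,) integer labels.
    loss = h.new_zeros(())
    for c in y.unique():
        hc = h[y == c]                    # outputs belonging to class c
        if hc.size(0) < 2:
            continue
        dist = torch.cdist(hc, hc)        # pairwise Euclidean distances
        mask = ~torch.eye(hc.size(0), dtype=torch.bool, device=h.device)
        loss = loss + (1.0 / (dist[mask] + eps)).sum()
    return loss
\end{verbatim}
The combined objective introduced below simply adds this term to the task loss with a weight $\alpha$.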
By adding $L_\text{pe}$ to the loss function, during the training of the split model, the bottom model outputs of the same class are pushed away from each other and move towards the boundary of the decision region of the top model. \textbf{Adding Layer Normalization.} However, if we do not put any restrictions on $\mathbf h$, i.e., $\mathbf h \in \mathbb R^d$, the repulsive force may simply push data points far from the origin instead of towards the boundary of the decision region. To overcome this, we simply enforce layer normalization~\cite{leiba2016layernorm} on $\mathbf h$, which restricts $\mathbf h$ to the $d$-sphere of radius $\sqrt{d}$, i.e., $||\mathbf h||^2=d$. Note that because the data points are on the $d$-sphere, the repulsive force should also act along the $d$-sphere. For two points $\mathbf h, \mathbf h'$ on a $d$-sphere, there are two geodesics between them. Their lengths are $\arccos \langle \mathbf h,\mathbf h' \rangle$ and $2\pi - \arccos \langle \mathbf h,\mathbf h' \rangle$, so there are also two repulsive forces. When one point is exactly opposite to another, i.e., their distance is maximal, the two repulsive forces cancel out since the two geodesics have the same length. This is preferable since we do not want any repulsive force when two data points are already at the largest possible distance. Accordingly, the potential energy loss defined in \eqref{eq:loss-pe} should be changed to \begin{equation} \small L_\text{pe} = \sum\limits_{c\in\mathcal C}\sum\limits_{\substack{\mathbf h, \mathbf h' \in H_c \\ \mathbf h' \ne \mathbf h}} \dfrac{1}{\arccos \langle\mathbf h, \mathbf h'\rangle} - \dfrac{1}{2\pi - \arccos \langle\mathbf h, \mathbf h'\rangle}. \end{equation} The combined loss for split training is $L' = L + \alpha L_\text{pe}$, where $L$ is the original loss function (i.e., the cross-entropy loss) and $\alpha$ is a coefficient that controls the intensity of the repulsive force. \subsection{Relationship with distance correlation} \citet{Vepakomma2020nopeek} uses distance correlation as a loss to decorrelate the bottom model output and the input features. Instead, we consider the case where the distance correlation loss is applied to the bottom model output and the label. The distance correlation loss on a batch is \begin{equation} \label{eq:dcor-loss-0} L_\text{dcor} = \sum_{i, j=1}^n {d_{i, j}}{d'_{i, j}} \big/ \sqrt{\sum_{i, j=1}^n d^2_{i, j}\sum_{i, j=1}^n d'^2_{i, j}}, \end{equation} where $d_{i, j}$ is the doubly-centered distance between the $i$-th sample's output and the $j$-th sample's output, and $d'_{i,j}$ is the doubly-centered distance between the $i$-th label and the $j$-th label. Assume the labels are one-hot encoded; then same-class samples have the same label. If the $i$-th sample and the $j$-th sample belong to the same class, then $d'_{i,j} = 0$. Hence, \eqref{eq:dcor-loss-0} (ignoring the denominator) is converted to \begin{equation} \small \sum_{\substack{c, c'\in \mathcal C \\ c\ne c'}}\sum_{\substack{\mathbf h \in H_c \\ \mathbf h' \in H_{c'}}} k\left(|\mathbf h - \mathbf h'| - \overline{|\mathbf h - \cdot|} - \overline{|\cdot - \mathbf h'|} + \overline{|\cdot - \cdot|}\right), \end{equation} where $\mathcal C$ is the set of labels, $k$ is some constant, $\overline{|\mathbf h - \cdot|}$ is the average distance from $\mathbf h$ to other points within the batch, and $\overline{|\cdot - \cdot|}$ is the average distance between all pairs of points within the batch.
We can see that minimizing the distance correlation is similar to minimizing the distance between samples of different classes. As our method is to maximize distances between same-class outputs, minimizing distance correlation has a similar effect. However, when there are multiple classes, minimizing the distance of one output to all other classes' outputs may lead to unpredictable behaviors. For example, a certain data point is `attracted' by all samples of other classes. If those other classes lie in different directions from the data point, the combined attraction could be noisy. \subsection{Leakage from bottom model output} The output of the bottom model is the hidden representation of a certain layer in the neural network. The hidden representations of neural networks are widely studied \cite{rauber2017visualize_hidden, pezzotti2018deepeyes, cantareira2020hidden}. Through visualization and other techniques, those studies show that the neural network gradually learns to make the hidden representations of different classes well separated and those of the same class close to each other. Although this `separation ability' seems to be essential for neural networks and may be the reason why they perform well on various tasks, it also brings security hazards to split learning. With only a few labeled samples and the bottom model, the attacker can fine-tune the model and obtain quite good performance. The attacker may also use the fine-tuned model to predict the labels of all the training samples with high accuracy. Hence, both the entire model and the labels of the training set could be stolen by the attacker. \subsection{Threat model} We assume that the attacker has access to the trained bottom model $M_b$, along with a few labeled samples $X'_k, Y'_k$ which include $k$ samples for each class. The attacker also knows the architecture of the top model $M_t$, and performs the fine-tuning attack by training $M_t$ on $X_k'$ and $Y_k'$ with the pre-trained $M_b$ fixed. Notably, we do not consider the privacy leakage of the training procedure, during which the attacker may infer the labels from the backward gradients. To prevent leakage from backward gradients, existing approaches like cryptographic methods and perturbation on the gradients can be used. Readers can refer to \Cref{sec:RW-PPSL} for details. \subsection{Problem formulation} In order to reduce the aforementioned privacy leakage while maintaining the model performance, our purpose is to train a split model $M = (M_b, M_t)$ such that $M_b$ is hard to fine-tune, while $M$ still has high performance. To be specific, we define the fine-tuning advantage as follows: \begin{equation} \label{eq:fine_tune-adv} \small \begin{split} & A_m(M_b) = \\ & \max\left\{\mathop{\mathbb E}\limits_{X'_k, Y'_k}\left(R[M^{(X'_k, Y'_k)}] - R[(M_b, M_t^{(M_b(X'_k), Y'_k)})]\right), 0\right\}, \end{split} \end{equation} where $X'_k, Y'_k$ are the leaked labeled data with $k$ instances independently drawn from each class, $M_b$ is the given bottom model trained on the entire training set, $M_t^{(M_b(X'_k), Y_k')}$ is the top model trained on $M_b$'s output $M_b(X'_k)$ and the labels $Y'_k$, $M^{(X'_k, Y'_k)}$ is the entire model trained on $X'_k, Y'_k$ from scratch, and $R[\cdot]$ denotes the generalization error. If fine-tuning $M_t$ results in worse performance than training the whole model from scratch, the fine-tuning advantage is 0.
We denote this situation as \textit{perfect protection}, since the attacker can just use the leaked labeled samples to train a better model from scratch, without using the bottom model at all. By construction, the fine-tuning advantage is always non-negative. We want to train a split model $(M_b, M_t)$ such that: \begin{itemize} \item the performance gap between the trained split model and the split model obtained by vanilla split training, $R[(M_b, M_t)] - R[(M_b^*, M_t^*)]$, is as small as possible. \item the fine-tuning advantage under a small number of leaked labeled samples is as small as possible. \end{itemize} \subsection{Privacy concerns of split learning} Many studies have demonstrated the privacy concerns of split learning. Most of them focus on the privacy of input features. \citet{abuadbba2020split_cnn} shows that applying split learning to CNN models can be dangerous since the intermediate output is highly correlated with the input. \citet{pasquini2021inference_attack, luo2021inference_attack} propose feature inference attacks on split learning under certain conditions. As for the privacy of the label data, \citet{li2021split_learning_label} investigated the label leakage brought by the backward gradients from the top model and proposed a solution based on a random perturbation method. \citet{fu2022vertical_federated_label} pointed out that besides the gradients, the trained bottom model can be easily fine-tuned with just a few labeled samples, hence the labels of the training data can be inferred. \subsection{Privacy protection for split learning} \label{sec:RW-PPSL} One straightforward way to protect privacy for split learning is to use cryptographic privacy-preserving machine learning systems, e.g., \cite{mohassel2017secureml, rathee2020cryptflow2, huangzhicong2022cheetah}. Those methods utilize cryptographic building blocks to realize secure training and inference of machine learning models. However, cryptographic methods incur heavy computation and communication costs and are therefore not practical in many scenarios. There are also hybrid methods like \cite{fufangcheng2022blindFL, chenchaochao2020codesign} that only use cryptographic algorithms in some parts of the split model to balance privacy and efficiency. As for non-cryptographic methods, \citet{Vepakomma2020nopeek} adds a distance correlation~\cite{szekeley2007dcor} loss to decorrelate the output of the bottom model and the input features. \citet{li2021split_learning_label} protects the label information from backward gradients via perturbation. \subsection{Data-dependent generalization error} Most studies on the data-dependent generalization error are based on Rademacher and Gaussian complexities~\cite{koltchinskii2002empirical, kontorovich2014knn_generalization, leiyunwen2019data-dependent}, or the mutual information between the data and the algorithm output~\cite{negrea2019information_generalization, pensia2018generalization_iterative, russo2020overfit}. Although these studies relate the generalization bound to the data, they usually use the training data to estimate values such as complexities or information metrics, instead of directly using the data distribution. Recently, \citet{jinpengzhan2020generalization} derived generalization bounds directly from the data distribution, by proposing the so-called cover complexity, which is a metric to measure the `complexity' of the multi-class data distribution, and is computed from the distances between same-class data points and different-class data points.
It is somewhat related to our work since our method makes the data distribution more `complicated' by pushing the data points to the decision boundary of their class.
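To make the threat model and the fine-tuning advantage of the previous sections concrete, the following is a minimal PyTorch-style sketch of the fine-tuning attack; the optimizer, the number of epochs and the module interfaces are illustrative assumptions rather than the exact attack configuration used in our experiments.
\begin{verbatim}
import copy
import torch

def fine_tuning_attack(M_b, top_template, X_k, Y_k, epochs=50, lr=1e-3):
    # The attacker only queries the frozen bottom model and trains a fresh top model.
    M_b.eval()
    with torch.no_grad():
        H = M_b(X_k)                       # bottom-model outputs of the leaked samples
    M_t = copy.deepcopy(top_template)      # attacker knows only the top architecture
    opt = torch.optim.Adam(M_t.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(M_t(H), Y_k).backward()
        opt.step()
    return lambda x: M_t(M_b(x))           # stolen end-to-end predictor
\end{verbatim}
Estimating the fine-tuning advantage in \eqref{eq:fine_tune-adv} then amounts to comparing the generalization error of this predictor with that of a model of the same architecture trained on $X'_k, Y'_k$ from scratch.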
\section{Introduction} Understanding the behavior of traffic participants is essential for autonomous driving systems, especially for safe navigation in crowded and complex traffic scenarios \cite{lefevre2014survey}. In these scenarios, accurate trajectory prediction can not only help autonomous vehicles make informed decisions and safely plan their future motions \cite{li2020interaction}, but also help surveillance systems detect abnormal behaviors . For short-term prediction, using purely physical models or traditional statistical models can obtain acceptable performance since the behavior is unlikely to be largely affected by external factors in a short period (i.e., less than a second). However, these approaches are not sufficient for more complicated long-term prediction. Given a series of historical observations, there may be a variety of plausible future motions due to different human intentions or mutual influence between agents \cite{zhao2019multi,li2020evolvegraph}. The inherent uncertainty in forecasting the future makes long-term trajectory prediction a challenging task. Besides, the prediction system is also expected to discriminate traversable areas delineated by the road boundaries and the right of way in accordance with the traffic rules \cite{hong2019rules}. Therefore, it is necessary to design effective modules to model the interaction between interactive agents as well as to exploit the environmental context information. Many recent works attempt to model the interaction between traffic participants by constructing a scene graph and utilizing graph neural networks to extract relational features \cite{yan2018spatial}. However, previous methods did not explicitly utilize the information in frequency domain. Inspired by \cite{cao2020spectral}, for time series signals whose primary information content lies in localized singularities, frequency domain can provide a much more compact representation than the original time domain. In this work, we propose to additionally exploit the patterns extracted in frequency domain and design a SpecTGNN unit that employs a new graph convolution method based on spectral decomposition to capture the information from different frequency components. The proposed method can simultaneously process temporal correlations and spatial structures. More specifically, we take advantage of the graph Fourier transform (GFT) and temporal convolution to model multi-agent trajectories in frequency domain. The underlying intuition is that the spectral representation obtained by GFT may present patterns that can be utilized more effectively by graph convolution operations. After obtaining the intermediate output of the SpecTGNN unit, we also apply a multi-head spatio-temporal attention to figure out relative importance of the information of each agent at each time step, which can reduce the effect of error propagation in long-term prediction \cite{2020GMAN,li2019conditional}. Moreover, most existing methods use the state information of agents to build the scene graph while the environment information only serves as additional features, which might not be fully exploited by the model. To address this issue, we construct an environment graph in addition to the agent graph with the image features extracted by a convolutional neural network (CNN) followed by a fully connected (FC) network. 
The features extracted from context images serve as a representation of abstract relationship between the agents, since the environment may also influence the interactive behaviors. These features are used to construct the Laplacian matrix of the environment graph, enabling the spectral graph convolution on the environment information. To the best of our knowledge, we are the first to utilize spatial correlations and temporal dependency simultaneously in frequency domain to handle the prediction task. The main contributions are summarized as follows: \begin{itemize} \item We propose a Spectral Temporal Graph Neural Network (SpecTGNN) for multi-agent trajectory prediction. SpecTGNN integrates the advantages of sequence modeling with convolutional networks and feature extraction in frequency domain via graph Fourier transform and graph convolution operations in a unified framework. \item We propose a SpecTGNN unit which consists of two blocks particularly used to extract information on the agent graph and environment graph, respectively. The SpecTGNN unit is designed to extract trajectory patterns and capturing dynamic frequency correlations. \item We validate SpecTGNN on two prediction benchmark datasets, which achieves state-of-the-art performance. \end{itemize} \section{Related Work} \subsection{Behavior and Trajectory Prediction} Traditionally, trajectory prediction problems are widely studied in statistics, signal processing, and systems engineering. However, these traditional methods heavily rely on prior knowledge which may not be available in the data. Therefore, more advanced data-driven methods such as deep learning have been extensively explored for solving trajectory prediction problems. In recent years, many approaches have been introduced to model the trajectory information, such as sequence modeling based on recurrent neural networks (RNNs) \cite{lee2017desire,fernando2018soft+,ma2019wasserstein}. In order to model the mutual influence between agents, researchers proposed various techniques to aggregate the information, such as social pooling~\cite{2016Social}, attention mechanisms~\cite{fernando2018soft+}, and graph message passing~\cite{mohamed2020social}. Mixture models are also employed to encourage the multi-modality and enable different outcomes in trajectory prediction \cite{cui2019multimodal,zyner2019naturalistic,zhan2018towards}. Besides, deep generative models are utilized to enhance the performance of distribution learning to generate better prediction hypotheses \cite{sadeghian2019sophie,zhao2019multi,bhattacharyya2019conditional,li2019interaction}. Unlike most existing works that only utilize the spatial and temporal information in time domain, we propose to also leverage the patterns extracted in frequency domain. \subsection{Graph Neural Networks} Graph neural networks (GNN) have achieved outstanding performance on different types of tasks in various domains. Many variants of the model architecture and message passing rules have been proposed, such as GCN \cite{kipf2016semi}, Spectral GCN \cite{bruna2013spectral}, Spatial GCN \cite{bruna2013spectral}, ChebNet~\cite{Defferrard2016Convolutional}, GraphSAGE~\cite{hamilton2017inductive,ma2020reinforcement} and GAT~\cite{velivckovic2017graph}. 
In recent years, researchers have attempted to leverage GNNs to incorporate relational inductive biases into learning-based models to solve various real-world tasks such as traffic flow forecasting and trajectory prediction \cite{cao2020spectral,zhao2020multivariate, li2017diffusion,yu2017spatio,zhang2019stochastic,mohamed2020social,choi2020shared}. Li et al. proposed a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow \cite{li2017diffusion}. Yu et al. integrated graph convolution and gated temporal convolution through spatio-temporal convolutional blocks for traffic prediction \cite{yu2017spatio}. Besides, Zhang et al. used a social graph with hierarchical LSTMs to address the complexity of real-world human social behaviors and the uncertainty of future motion \cite{zhang2019stochastic}. Mohamed et al. presented a kernel function to embed the social interaction between pedestrians in the adjacency matrix \cite{mohamed2020social}. For a more comprehensive literature review on graph neural networks, please refer to recent surveys~\cite{wu2020comprehensive, zhou2018graph, zhang2020deep}. In this work, we take advantage of graph convolution to extract patterns in the frequency domain to model the agents' interactions and the environment. \section{Problem Formulation} The goal of this work is to predict the future trajectories of multiple interactive agents based on their historical states and the context information. Without loss of generality, we assume that $N$ agents are navigating within the observation area. The number of involved agents can be flexible in different situations. The involved agents include vehicles, pedestrians and cyclists. The history horizon and the forecasting horizon are denoted as $T_h$ and $T_f$, respectively. We denote the whole trajectories of $N$ agents as: \begin{equation} \small \begin{split} \bm{T}_{1:T} = \{\tau^i_{1:T}| \tau^{i}_t = (x_t^i,y_t^i), T = T_h+T_f,i=1,...,N\}, \end{split} \end{equation} where $(x_t^i,y_t^i)$ is the position of agent $i$ at time $t$. The coordinates can be either in the world space or the image pixel space. The goal of prediction is to approximate the conditional distribution of future trajectories given the history observations. \section{Method: SpecTGNN} We provide a brief overview of the proposed method. First, we build an agent graph $G^a$ to model the interactive behavior of agents, and an environment graph $G^e$ to encode the context information. Next, we present a novel SpecTGNN unit which consists of two types of blocks that operate on $G^a$ and $G^e$, respectively. The two blocks share the same structure and operations but learn different parameters. They apply spectral temporal graph convolution to the agent state information and environment information to capture the spatial correlations and temporal dependency. Finally, the intermediate representation in the frequency domain obtained by graph convolution can be used to predict the future trajectory with a multi-head spatio-temporal attention mechanism and a set of stacked temporal convolutions with residual connections. \begin{figure*} \centering \includegraphics[width=\textwidth]{Frigure_struce_1001.pdf} \caption{The overall architecture of SpecTGNN. The whole framework consists of three modules: spectral temporal graph convolution (SpecTGNN) unit, spatio-temporal attention mechanism (STAtt) and temporal convolutional neural network (TCNN).
More specifically, each SpecTGNN unit has two types of SpecTGNN blocks: 1) the A-SpecTGNN block to model the agent information on the agent graph; and 2) the E-SpecTGNN block to model the environment information on the environment graph. The SpecTGNN unit extracts the spectral and temporal state information of multiple interactive agents and environment information via a spectral graph convolution and a temporal convolution. The key operation of the STAtt module is to dynamically assign attention weights to different node (agent) attributes at different time steps. The TCNN module further aggregates the attention weights and node attributes to generate future trajectories. } \vspace{-0.5cm} \label{fig:network} \end{figure*} \subsection{Spectral Temporal Graph Convolutional Unit} We introduce a novel spectral temporal graph convolutional (SpecTGNN) unit consisting of two types of blocks, which operate on an agent graph $G^a$ and an environment graph $G^e$, respectively. As shown in Fig. \ref{fig:network}, the two SpecTGNN blocks share the same model structure but are applied to the different topologies of $G^a$ and $G^e$. Multiple layers of SpecTGNN units with different learnable parameters can be stacked to enhance the model's ability to understand the history information. \subsubsection{Agent Modeling} In order to capture the behavior patterns of a group of interactive agents, we use a graph structure to represent the relationship between different entities, and embed the agent information into node attributes. The nodes (agents) are fully connected to each other by a set of weighted edges. More formally, given the observed trajectories $\bm{T}_{1:T_h}$, we build a spatio-temporal graph $G^{a}_t=(V^a_t,E^a_t)$ with $N$ nodes to represent the information of agent interactions at time $t$. $E^{a}_t$ is a set of edges representing the connectivity between the agents in the form of a weight matrix. We have $E^{a}_t=\{w_{ij,t}^{a}|i,j=1,...,N\}$, which depends on the relative distances between agents in $G^a_t$. If there is no edge between a certain pair of nodes, the corresponding weight is set to 0. More formally, the weight matrix $E^{a}_t$ is defined as: \begin{align} \begin{split} w_{ij,t}^{a}= \left \{ \begin{array}{ll} 1 / ||\tau^i_t - \tau^j_t||_2, & i\neq j,\\ 0, & \text{otherwise}. \end{array} \right. \end{split} \end{align} \subsubsection{Environment Modeling} We also use a graph structure $G^e=(V^e,E^e)$ to represent the environment information. The Laplacian matrix of $G^e$ is built based on the features extracted from the context image $I$. We choose to utilize the VGG backbone~\cite{simonyan2015very} to extract and encode image features due to its effectiveness in capturing image information. After that, we apply a fully connected layer after the last convolutional layer to yield the edge weight matrix $E^{e}=\{w_{ij}^{e}|i,j=1,...,N\} \in \mathbb{R}^{N \times N}$, where the output feature of the last layer maps to each agent in the environment topology. \subsubsection{Graph Fourier Transform (GFT) and Inverse Graph Fourier Transform (IGFT)} Before introducing the spectral graph convolution, we describe the graph Fourier transform, which allows the convolution operation on graph structures to be carried out in the spectral domain. We introduce the graph convolution operator $*\mathcal{G}$ from the perspective of graph signal processing. Note that there are two types of normalized graph Laplacian: the agent Laplacian $L^a$ and the environment Laplacian $L^e$. To simplify notation, we use $*$ to denote either of the two types.
We use the normalized graph Laplacian $L^*$ to calculate the Fourier basis $U^*\in \mathbb{R}^{N\times N}$: \begin{equation} L^* = I_N - {D^*}^{-\frac{1}{2}}E^*{D^*}^{-\frac{1}{2}} = U^*\Lambda^* {U^*}^\top, \end{equation} where $I_N$ is an identity matrix, $D^*\in \mathbb{R}^{N\times N}$ is the diagonal degree matrix with $D^*_{ii}=\sum_j E^*_{ij}$, $U^*$ is the eigenvector matrix of the normalized graph Laplacian $L^*$, whose columns are sorted by the eigenvalues in descending order, and $\Lambda^*$ is the diagonal matrix of eigenvalues with $\Lambda^*_{ii}=\lambda_i$. The columns of $U^*$ form an orthonormal basis, i.e., ${U^*}^\top U^*=I$. The input of the spatio-temporal graph consists of the spatial states at $T_h$ time steps, which can be stored as a tensor $V^{*} \in \mathbb{R}^{T_h \times N\times c_{\text{in}}}$ representing the node information of the agent graph or the environment graph, where $N$ is the number of agents and $c_\text{in}$ is the attribute dimension. The graph Fourier transform $\mathcal{F}^*$ and the inverse transform ${\mathcal{F}^*}^{-1}$ of the input $V^{*}$ in the spectral domain can be defined as: \begin{equation} \begin{aligned} \mathcal{F}^*(V^{*}) ={U^*}^\top V^{*}, \quad {\mathcal{F}^*}^{-1}(\hat{V}^{*}) = U^* \hat{V}^{*}. \end{aligned} \end{equation} As defined above, the graph Fourier transform projects the graph signal input $V^{*}$ into the eigenspace spanned by the eigenvectors (columns of $U^*$) of the matrix $L^*$. \subsubsection{Spectral Graph Convolution (SGConv)} To simplify notation, we denote the kernel of the SGConv operation as $g_\theta$. For a 2D graph signal $v \in \mathbb{R}^{T_h \times N}$, the graph convolution in the frequency domain can be written as: \begin{equation} y^*=g_\theta*\mathcal{G}(v) = g_\theta(\Lambda^*){U^*}^\top v. \end{equation} Without loss of generality, we extend the convolution operator $*\mathcal{G}$ to multi-dimensional tensors. For a 3D graph signal $V^{*}\in \mathbb{R}^{T_h\times N \times c_\text{in}}$ with $c_\text{in}$ channels, the 3D graph convolution can be defined as: \begin{equation} Y^*_{SG}=g_{\Theta^*} U^{*\top} V^{*} = \sum\limits_{k=1}^{c^*_\text{out}}\sum\limits_{j=1}^{c^*_\text{in}} \sum\limits_{i=1}^{N}\Theta ^*_{kij}\Lambda^* U^{*\top} V^{*}, \end{equation} where the graph convolution kernel on 3D variables is denoted as $\Theta^* \in \mathbb{R}^{T_h\times N \times c_{\text{in}}^* \times c^*_\text{out}}$. \subsubsection{Temporal Gated Convolution (TGConv)} The agent graph $G^a$ and environment graph $G^e$ are spatio-temporal graphs. We propose a temporal gated convolutional network in the frequency domain, which has the advantages of a simple structure and fast training. In addition, the information at different time steps in the time domain has no temporal dependence after it is transformed into the frequency domain; thus, TGConv can learn long-interval patterns in sequences effectively. In the graph Fourier space, the model can capture the global information of the history trajectory by mapping the eigenvectors. Specifically, we use a 1D convolution to capture the features of temporal sequences.
Given the input $\mathcal{F}^*(V^{*})$, the convolution kernel $\mathcal{K}^*\in \mathbb{R}^{1 \times L \times c_{\text{in}}^*\times c_{\text{out}}^*}$ defines the mapping from the input $\mathcal{F}^*(V^{*})$ to the output of the temporal gated convolution, where $L$ is the kernel size of $\mathcal{K}^*$, and $c_{\text{in}}^*$ and $c_{\text{out}}^*$ denote the numbers of input and output channels, respectively. The formal representation of TGConv can be written as: \begin{equation} Y_{TG}^* = \text{TGConv}^*_\mathcal{K}(\mathcal{F}^*(V^{*})). \end{equation} \begin{figure}[!tbp] \centering \includegraphics[width=6.cm]{Frigure_Spass.pdf} \caption{An illustrative diagram of the SpecTGNN block.} \vspace{-0.3cm} \label{fig:network_block} \end{figure} \subsubsection{Spectral Temporal Graph Convolution} As shown in Fig.~\ref{fig:network_block}, the graph convolution directly processes the graph data and extracts the node attributes based on the history trajectory and the spatial information in the neighborhood. We combine the results of SGConv and TGConv in the frequency domain and then pass the embedding through the IGFT to yield the embedding vector: \begin{align} & Y^*=\mathcal{F^*}^{-1}(Y_{SG}^*+Y_{TG}^*), \nonumber\\ &Y =Y^a + Y^e, Y \in \mathbb{R}^{T_h \times N \times c_\text{out}}. \end{align} \subsection{Spatio-Temporal Attention Mechanism} \begin{figure}[!tbp] \centering \includegraphics[width=0.9\columnwidth]{Figure_attention.pdf} \caption{An illustrative diagram of the spatio-temporal attention mechanism: $H^{(l)}$ denotes the $l$-th channel state of each input, where $l \in [1,c_\text{in}]$. We calculate the spatial attention scores by utilizing the complete agent information in different channels, and the temporal attention scores by using the information of each agent at the history time steps. } \vspace{-0.3cm} \label{fig:network_attention} \end{figure} We design a spatio-temporal attention mechanism to reduce the error propagation effect between different prediction time steps over a long time horizon. As shown in Fig.~\ref{fig:network_attention}, we apply the spatial attention scores to the spatial interactions of all agents and the temporal attention scores to different time steps to extract spatio-temporal features. More specifically, we use multi-head attention to calculate the attention scores on the spectral embedding given by the SpecTGNN unit. Formally, the multi-head attention mechanism can be written as: \begin{align} & \text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V}) = \text{Softmax}(\frac{\mathbf{QK}^T}{\sqrt{d_k}})\mathbf{V}, \nonumber\\ & Y_{ST} = \text{MultiHead}(\mathbf{Q},\mathbf{K},\mathbf{V}) = \text{Concat}(\text{head}_i)\mathbf{W}^H, \nonumber \\ & \text{head}_i = \text{Attention}(Y\mathbf{W}_i^Q,Y\mathbf{W}_i^K,Y\mathbf{W}_i^V), \end{align} where $d_k$ is the dimension of each head, and $\mathbf{Q},\mathbf{K}$ are the query and key, which are calculated by linear projections with parameters $\mathbf{W}^Q, \mathbf{W}^K \in \mathbb{R}^{c_\text{out}\times d_k}$, respectively. $\mathbf{V}$ is calculated from the spectral embedding ($Y$) given by the final SpecTGNN unit with $\mathbf{W}^V \in \mathbb{R}^{c_\text{out}\times d_\text{out}}$. $\mathbf{W}^H \in \mathbb{R}^{(N_h\times d_\text{out})\times (N_h\times d_\text{out})}$ projects the concatenation of the $N_h$ head outputs (each in $\mathbb{R}^{d_\text{out}}$) to the output space $\mathbb{R}^{N_h \times d_\text{out}}$.
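To make the data flow of the SpecTGNN unit described above concrete, we provide a simplified NumPy sketch below. The scalar spectral filter (one coefficient per graph frequency, playing the role of $g_\theta(\Lambda^*)$), the single shared temporal kernel, and the use of one agent graph for all time steps are simplifications of the learnable kernels $\Theta^*$ and $\mathcal{K}^*$ used in the actual model.
\begin{verbatim}
import numpy as np

def agent_graph(tau):
    # tau: (N, 2) agent positions; w_ij = 1/||tau_i - tau_j||_2, zero on the diagonal.
    dist = np.linalg.norm(tau[:, None, :] - tau[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    return 1.0 / (dist + 1e-12)

def fourier_basis(W):
    # Normalized Laplacian L = I - D^{-1/2} W D^{-1/2} and its eigendecomposition.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    lam, U = np.linalg.eigh(L)          # columns of U form the graph Fourier basis
    return lam, U

def spec_tgnn_block(V, U, theta, kernel):
    # V: (T_h, N, c) node signals, theta: (N,) spectral filter, kernel: (L,) temporal filter.
    V_hat = np.einsum('ij,tjc->tic', U.T, V)          # graph Fourier transform
    Y_sg = theta[None, :, None] * V_hat               # spectral graph convolution (simplified)
    Y_tg = np.zeros_like(V_hat)                       # temporal convolution in the spectral domain
    for j in range(V.shape[1]):
        for c in range(V.shape[2]):
            Y_tg[:, j, c] = np.convolve(V_hat[:, j, c], kernel, mode='same')
    return np.einsum('ij,tjc->tic', U, Y_sg + Y_tg)   # inverse graph Fourier transform
\end{verbatim}
A full SpecTGNN unit runs one such block on the agent graph and one on the environment graph and adds the two outputs, as in the equations above.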
\begin{table*}[htbp] \centering \caption{$\text{minADE}_{20}$ / $\text{minFDE}_{20}$ (pixels) comparisons on the SDD dataset.} \vspace{-0.2cm} \setlength{\tabcolsep}{1.5mm}{ \resizebox{\textwidth}{!}{ \begin{tabular}{l|cccc|ccccc} \toprule \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Baseline Methods}&\multicolumn{5}{c}{Proposed Method} \\\cmidrule(r){2-10} & CF-VAE~\cite{bhattacharyya2019conditional} & P2TIRL~\cite{deo2020trajectory} & SimAug~\cite{2020SimAug} & PECNet~\cite{mangalam2020not} &\textbf{Base} &\textbf{+TGConv} & \textbf{+Image}&\textbf{+STAtt} &\textbf{SpecTGNN}\\ \midrule minADE$_{20}$ & 12.60 & 12.58 & 10.27 & 9.96 &9.14 & 8.29 & 8.46&8.65&\textbf{8.21} \\ minFDE$_{20}$ &22.30 & 22.07 & 19.71 & 15.88 &13.90&13.60 & 12.57 & 13.36 & \textbf{12.41} \\ \bottomrule \end{tabular}}} \label{scores_SDD} \end{table*} \begin{table*}[htbp] \centering \caption{$\text{minFDE}_{20}$ (meters) comparisons on the nuScenes (vehicle) dataset.} \vspace{-0.2cm} \setlength{\tabcolsep}{1.5mm}{ \resizebox{\textwidth}{!}{ \begin{tabular}{l|cccccc|ccc} \toprule \multirow{2}{*}{Time} & \multicolumn{6}{c|}{Baseline Methods}&\multicolumn{3}{c}{Proposed Method} \\\cmidrule(r){2-10} & CVM & CSP~\cite{deo2018multi} & CAR-Net~\cite{sadeghian2018car}& SpAGNN~\cite{casas2020spagnn} & STGAT \cite{huang2019stgat} & S-STGCNN~\cite{mohamed2020social}& \textbf{+Image} & \textbf{+STAtt} &\textbf{SpecTGNN} \\ \midrule 1.0s & 0.32 &0.46 & 0.38 &0.36 & 0.30 &0.35 &0.29 & 0.31 &\textbf{0.28} \\ 2.0s & 0.89 & -- & -- &-- & 0.78 &0.81 &0.77&0.74&\textbf{0.72} \\ 3.0s & 1.70 &1.50 & 1.35 &1.23& 1.26 &1.23& 1.21 &1.20&\textbf{1.19}\\ 4.0s & 2.73 & -- &-- &-- & 2.09 &2.15 &1.96&1.92&\textbf{1.87}\\ \bottomrule \end{tabular}}} \label{scores_nu} \vspace{-0.5cm} \end{table*} \subsection{Temporal Convolution Neural Network (TCNN)} TCNN operates directly on the feature representations from the final SpecTGNN unit $Y$ and STAtt $Y_{ST}$ and expands them as a necessity for prediction~\cite{mohamed2020social}. Excepting for the first TCNN layer, the remaining TCNNs are composed of CNNs with different kernel sizes in a series of residual connections. \subsection{Loss function} We assume that the ground truth of predicted trajectory $\tau_t = (x_t, y_t)$ follows a bivariate Gaussian distribution denoted as $\tau_t \sim \mathcal{N}(\bm{\mu}_t,\bm{\sigma}_t)$, where $\bm{\mu}_t$, $\bm{\sigma}_t$ are the mean and variance of the distribution. Our loss function contains two parts: a likelihood based loss and an distance based loss. The former is defined as \begin{equation} L_{\text{prob}} = -\sum_{t=T_h+1}^{T_f+T_h}\log(\mathbb{P}(\tau_t|\hat{\bm{\mu}}_t,\hat{\bm{\sigma}}_t)), \end{equation} where $\hat{\bm{\mu}}_t,\hat{\bm{\sigma}}_t$ are the predicted parameters of the Gaussian distribution. This loss term aims to maximize the log-likelihood of the ground truth. The latter is defined as \begin{equation} L_{\text{dist}}=\frac{1}{T_f}\sum_{t=T_h+1}^{T_f+T_h}(\tau_t-\hat{\tau}_t)^2, \end{equation} which aims to minimize the $L_2$ distance between the predicted trajectory and the ground truth. The complete loss function is a linear combination of the two parts, which can be written as $L_{\text{total}} = L_{\text{prob}} + \lambda L_{\text{dist}}$, where $\lambda$ is a hyperparameter. In this paper, we set $\lambda=1$ for all the experiments on both benchmark datasets. 
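As an illustration, a PyTorch-style sketch of the combined training loss is given below; for brevity it uses a diagonal-covariance Gaussian and takes the predicted mean as the point prediction in the distance term, whereas the model itself outputs the parameters of a bivariate Gaussian.
\begin{verbatim}
import torch

def trajectory_loss(mu, sigma, tau, lam=1.0, eps=1e-6):
    # mu, sigma, tau: (T_f, 2) predicted mean, predicted std, and ground truth.
    nll = (0.5 * ((tau - mu) / (sigma + eps)) ** 2
           + torch.log(sigma + eps)).sum()        # L_prob up to an additive constant
    l2 = ((tau - mu) ** 2).sum(dim=-1).mean()     # L_dist averaged over T_f
    return nll + lam * l2
\end{verbatim}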
\section{Experiments} \subsection{Datasets} \textbf{Stanford Drone Dataset (SDD)}: SDD is a large-scale dataset collected in urban scenes on a university campus, which contains images, videos and trajectory annotations of various types of agents such as pedestrians, bicycles and vehicles. This dataset includes complex scenarios involving various types of human interactions. We adopted the dataset provided in the TrajNet benchmark \cite{becker2018evaluation} to generate our training, validation and test sets. We predicted the future 4.8s (12 frames) based on 3.2s (8 frames) of history information. \textbf{nuScenes Dataset}: The nuScenes dataset is a public large-scale dataset for autonomous driving, which consists of 1,000 diverse and challenging traffic scenes. Both the trajectory and map information are provided. Each scene has a length of 20 seconds with a frame rate of 2 Hz. We split the training, validation and test sets with a ratio of 60\%, 20\% and 20\%. \subsection{Baselines and Evaluation Metrics} We compare SpecTGNN with several state-of-the-art baselines on both datasets, including CF-VAE~\cite{bhattacharyya2019conditional}, P2TIRL~\cite{deo2020trajectory}, SimAug~\cite{2020SimAug}, PECNet~\cite{mangalam2020not}, CSP~\cite{deo2018multi}, CAR-Net~\cite{sadeghian2018car}, SpAGNN~\cite{casas2020spagnn}, STGAT \cite{huang2019stgat}, and Social-STGCNN (S-STGCNN)~\cite{mohamed2020social}. We employ the widely used evaluation metrics for trajectory prediction tasks: minimum average displacement error (minADE) and minimum final displacement error (minFDE). More specifically, minADE$_{20}$ is the minimum average $l_2$ distance between the 20 prediction hypotheses and the ground truth over all the involved entities within the prediction horizon. minFDE$_{20}$ is the minimum $l_2$ distance between the 20 prediction hypotheses and the ground truth at the last time step. More formally, we have \begin{equation} \small \begin{aligned} & \text{minADE}_{20} = \frac{\sum\limits_{n=1}^{N}\min\limits_{k}\sum\limits_{t=T_h+1}^{T_h+T_f}||\hat{\tau}_{t,k}^n - \tau_{t}^n||_2}{N \times T_f}, \ k\in\{1,...,20\},\\ & \text{minFDE}_{20} = \frac{\sum\limits_{n=1}^{N}\min\limits_{k}||\hat{\tau}_{T_h+T_f,k}^n - \tau_{T_h+T_f}^n||_2}{N}, \ k\in\{1,...,20\}. \end{aligned} \end{equation} \subsection{Implementation Details} SpecTGNN consists of three main components: a SpecTGNN unit, a multi-head spatio-temporal attention mechanism, and a temporal convolution. For the SpecTGNN unit, we construct two types of graphs based on the association of agents and the original image. The number of nodes in those graphs is equal to the number of agents in the observed area. We set a training batch size of 64 on the SDD dataset and 128 on the nuScenes dataset. We train the model on an NVIDIA Tesla T4 GPU for 250 epochs using stochastic gradient descent with an initial learning rate of 0.01. For SGConv, the kernel size of the graph convolution is equal to the history horizon. The kernel size of each TGConv and temporal convolution is set to 3, and the kernel size used in VGG to extract the image features is $3\times 3$. The number of input channels of the SpecTGNN unit is 2 and the number of output channels is set to 5. The optimal configuration, determined by validation, consists of two layers of SpecTGNN units, two heads of STAtt and a set of temporal convolutions with five residual connections.
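For reference, the two metrics defined above reduce to a few lines of NumPy for a single agent; averaging over agents gives the reported scores. The function name is ours.
\begin{verbatim}
import numpy as np

def min_ade_fde(preds, gt):
    # preds: (K, T_f, 2) prediction hypotheses, gt: (T_f, 2) ground truth.
    d = np.linalg.norm(preds - gt[None], axis=-1)   # (K, T_f) pointwise errors
    return d.mean(axis=1).min(), d[:, -1].min()     # minADE_K, minFDE_K
\end{verbatim}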
\vspace{-0.1cm} \subsection{Quantitative Analysis} For the SDD dataset, we compared SpecTGNN with the baselines in terms of minADE$_{20}$ and minFDE$_{20}$ in the pixel space. As shown in Table~\ref{scores_SDD}, SpecTGNN achieves the best results on both metrics. More specifically, our model achieves a performance improvement of 17.6\% on minADE$_{20}$ and 21.9\% on minFDE$_{20}$ compared to the previous state-of-the-art method. Our model takes advantage of the spectral-temporal graph convolution and can jointly identify structural and sequential patterns in the frequency domain, which can represent the correlation between agents more effectively than previous spatio-temporal graph convolutional networks. Compared with S-STGCNN, which employs spatial and temporal convolutions separately, the results show that our design achieves better performance. For the nuScenes dataset, we compared SpecTGNN with the baselines that have publicly available code and conducted all the experiments under the same settings. We predicted the trajectory for the future 4s based on the previous 4s of observations. As shown in Table~\ref{scores_nu}, our model outperforms the baselines by a large margin, especially for long-term prediction. Compared with the previous state-of-the-art baselines STGAT~\cite{huang2019stgat} and S-STGCNN~\cite{mohamed2020social}, which only leverage the spatial and temporal information, the proposed model consistently performs better, indicating that the information in the frequency domain can further improve conventional graph neural networks. SpecTGNN reduces minFDE$_{20}$ at the final 4.0s by 8.1\% on the nuScenes dataset compared with the SOTA baseline STGAT. \subsection{Ablation Analysis} To better understand the significance of the components in SpecTGNN, we conducted ablation experiments on the SDD dataset with four variants and on the nuScenes dataset with three variants. The results are summarized in the right parts of Table \ref{scores_SDD} and Table \ref{scores_nu}, which show that all the components are effective and indispensable. We elaborate on each variant below: \begin{itemize} \item \textbf{Base}: The base SpecTGNN only contains three modules: ``Agent Modeling'', ``A-SpecTGNN Block'' and ``TCNN''. Note that in order to illustrate the effectiveness of the proposed sequence operations in the frequency domain, the TGConv module in the A-SpecTGNN block is removed from the complete model. \item \textbf{+TGConv}: The TGConv operation is added to the base model inside the SpecTGNN block. \item \textbf{+STAtt}: The STAtt module is added to the base model, which can help strengthen the recognition of dependence between different prediction time steps over a long-term horizon. \item \textbf{+Image}: This setting additionally equips the base model with image information of the environment. \end{itemize} Comparing \textbf{+TGConv} with the \textbf{Base} setting, we can see that \textbf{+TGConv} leads to a performance improvement of about 9.0\% on minADE$_{20}$, since the temporal dependency captured by TGConv provides important clues for time-series prediction. Compared with the \textbf{Base} model, the \textbf{+STAtt} setting reduces minADE$_{20}$ by 5.4\% and minFDE$_{20}$ by 3.9\% on the SDD dataset. Results show that \textbf{+Image} reduces minADE$_{20}$ by 11.2\% and minFDE$_{20}$ by 9.6\% on the SDD dataset compared with the \textbf{Base} model. The complete model \textbf{SpecTGNN} performs best in all experiments.
\subsection{Analysis on the number of samples $K$} \begin{figure} \centering \includegraphics[width=\columnwidth,height=3.5cm]{K.pdf} \caption{Effect of $K$: the performance of minADE and minFDE against the number of samples used for evaluation.} \vspace{-0.4cm} \label{K} \end{figure} For the SDD dataset, we used $K = 20$ samples to evaluate the prediction accuracy in terms of minADE and minFDE, which is a widely used amount of hypotheses. In addition, we compare the results with different $K$ values. It can be seen from Fig. \ref{K} that as $K$ increases, both minADE$_K$ and minFDE$_K$ present a consistent decreasing trend. We can tell that our method can achieve comparable performance to the previous state-of-the-art baseline with much fewer samples. When fixing the number of samples, our method outperforms all baselines, which further supports our hypothesis that the modeling of structural and temporal relationship in frequency domain significantly reduces the prediction error. \subsection{Qualitative Analysis} \begin{figure}[!tbp] \centering \subfigure[]{ \includegraphics[width=3.5cm, height=3.5cm]{deathCircletest_temp2.pdf} \label{label_for_cross_ref_1} } \subfigure[]{ \includegraphics[width=3.5cm, height=3.5cm]{deathCircletest_temp3.pdf} \label{label_for_cross_ref_3} } \subfigure[]{ \includegraphics[width=3.5cm, height=3.5cm]{877_ready.pdf} \label{label_for_cross_ref_2} } \subfigure[]{ \includegraphics[width=3.5cm, height=3.5cm]{739_ready.pdf} \label{label_for_cross_ref_4} } \caption{Qualitative results of the SDD dataset (a)(b) and nuScenes dataset (c)(d). The black dots represent the best predicted trajectory of each agent from 20 prediction hypotheses. The blue lines and red lines represent historical observation and ground truth, respectively.} \label{fig.1} \vspace{-0.5cm} \end{figure} We provide qualitative results of typical test cases in various traffic scenarios in Fig.~\ref{fig.1}. The results show that our method is able to accurately predict future trajectories of the agents in the scene. More specifically, as shown in Fig.~\ref{fig.1}(a), our model can handle roundabout scenes with different types of agent behaviors, including turning, stopping and going straight. In Fig.~\ref{fig.1}(b), SpecTGNN can generate reasonable and accurate prediction hypotheses that are located in the feasible and traversable areas of pedestrians. Fig.~\ref{fig.1}(c) shows that SpecTGNN is able to handle the opposite directions and the significant change in direction of movement. In Fig.~\ref{fig.1}(d), both the turning and lane keeping behavior can be accurately forecasted by our model. \vspace{-0.1cm} \section{Conclusion} \vspace{-0.1cm} In this paper, we propose a novel trajectory prediction model (SpecTGNN), which considers the interaction between agents and the influence of environment information simultaneously. The proposed SpecTGNN unit realizes sequence modeling and spatial graph convolution modeling of the trajectory jointly in frequency domain. In addition, the STAtt module can further enhance the exploitation of long-term temporal dependency of the embedded trajectory data. The effectiveness of SpecTGNN is validated by pedestrian and vehicle trajectory prediction tasks on two benchmark datasets. Experimental results show that our method achieves state-of-the-art prediction performance compared with the baselines. In future work, we will explore more effective ways to build the graph structure and stricter distinctions will be made between pedestrians and vehicles. 
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Due to the pioneering work of Cook, satisfiability of Boolean circuits is among the most celebrated problems in computer science. Although the problem itself is \textsf{NP}-complete\xspace, it becomes solvable in \textsf{PTIME}\xspace when restricted to circuits of special kinds, like monotone circuits or circuits with linear gates only. Here, by a linear gate we mean \textsf{XOR}\xspace of unbounded fan-in. Such a gate simply checks the parity of the sum of inputs. This has been generalized to the gates $\pcmod{m}{A}$ that check if the sum of inputs, taken modulo $m$, belongs to the set $A \subseteq \set{0,\ldots,m-1}$. Note here that traditionally only the sets $A=\set{0}$ (or dually, only $A=\set{1,\dots,m-1}$) are allowed. We will, however, always consider generalized modular gates, i.e. $\pcmod{m}{A}$ with arbitrary $A$ and multiple wires between gates (including input gates). These gates are to be used to build modular circuits of bounded depth. More precisely, for depth $h$ and modulus $m$, by $\cch{h}{m}$ we mean the class of circuits built of gates $\pcmod{m}{A}$, possibly with different $A$ for each gate. In Section \ref{sec:cchm} we also discuss modular circuits with possibly different moduli on different levels. Thus a $\cc{m_1;\ldots;m_h}$-circuit admits only gates of the type $\pcmod{m_i}{A}$ on the $i$-th level. Our results start with the following full characterization of the parameters $h$ and $m$ for which satisfiability of $\cch{h}{m}$-circuits ($\cch{h}{m}$-SAT for short) is in \textsf{PTIME}\xspace. In what follows, for a positive integer $m$, by $\omega(m)$ we denote the number of different prime factors of $m$. \begin{thm} \label{poly} Let $h$ and $m$ be positive integers. Then under the assumption of ETH\xspace the problem of satisfiability for $\cch{h}{m}$-circuits is in \textsf{PTIME}\xspace iff $h=1$ or $\omega(m)\leq 1$. \end{thm} Our nonpolynomial lower bounds are based on the construction of relatively small $\cch{2}{m}$-circuits computing $\textsf{AND}\xspace_n$, i.e. of size $2^{O(\sqrt[\omega]{n}\log n)}$, where $\omega=\omega(m)$. This construction improves the one of Barrington, Beigel and Rudich \cite{BarOR}, where $3$ levels were used. From the papers \cite{BST90,ST06} we know that for a prime $p$, $\cc{m;p^k}$-circuits expressing \textsf{AND}\xspace need at least $2^{\Omega(n)}$ gates. However, the general case of $\cch{2}{m}$ has remained open. It is also worth noticing here that the expressive power of modular circuits with 2 levels is very sensitive to the sets $A$ used in $\pcmod{m}{A}$. Indeed, in \cite{caussinus} Caussinus shows the very same lower bound $2^{\Omega(n)}$ for \textsf{AND}\xspace if on the second level only the set $A=\set{1,\ldots,m-1}$ is allowed. \medskip Next we show that not only the width of the modulus $m$, i.e. $\omega(m)$, but also the circuit depth may substantially contribute to reducing the size of $\cch{h}{m}$-circuits realizing \textsf{AND}\xspace. Surprisingly, also the number $\varpi(m)$ of large prime factors of $m$ plays some role. By a large prime divisor of $m$ we mean one that is at least $\omega(m)$. \begin{thm} \label{thm:omegabar} For $h\geq 3$ and a positive integer $m$ with $\omega=\omega(m)\geq 2$ and $\varpi = \varpi(m)$ there are $\cch{h}{m}$-circuits of size $2^{O(n^{1/((\omega-1)(h-2)+\varpi)}\log n)}$, computing $n$-ary \textsf{AND}\xspace.
\end{thm} Although the only known lower bound for the size of modular circuits computing \textsf{AND}\xspace is slightly better than linear (see \cite{LowBounds}), Barrington, Straubing and Thérien \cite{BST90} conjectured that it has to be exponential. In fact, after the paper \cite{BarOR}, the bound $2^{\Omega(n^\delta)}$ with some $\delta>0$ is a popular belief. In contrast to this conjecture there are constructions \cite{hansen08,hansen-koucky09} of (quasi)polynomial size probabilistic modular circuits computing \textsf{AND}\xspace. The construction in \cite{hansen08} is of quasipolynomial size and uses $\polylog(n)$ random bits. The one from \cite{hansen-koucky09} fixes the depth to be constant (but a substantial one), reduces the size to be polynomial and cuts down the number of random bits to $O(\log n)$. Our techniques applied in the proof of Theorem \ref{thm:omegabar} have proved to be useful also in this probabilistic setting. First, while keeping only $O(\log n)$ random bits, we reduce the depth of the circuits realizing \textsf{AND}\xspace to be only $2$. Moreover our construction is more transparent, as it makes no use of expanders or universal hashing functions. \begin{thm} \label{thm:random-and} For the modulus $m$ with $\omega(m)\geq 2$ the $n$-ary \textsf{AND}\xspace functions can be realized by $\cch{2}{m}$-circuits of polynomial size with $O(\log n)$ random bits. In fact the realization is done by $\cc{p,q}$-circuits, where $p,q$ are different primes. \end{thm} Again, the mentioned lower bound $2^{\Omega(n)}$ for the size of deterministic $\cc{p,q}$-circuits computing \textsf{AND}\xspace, blocks any derandomization here. Thus to confirm the suggestion made in \cite{hansen-koucky09} that \textsf{AND}\xspace can be computed by small ${\rm CC}^0$ circuits one would need to increase the depth. \medskip We also show how superpolynomial lower bounds for sizes of modular circuits computing \textsf{AND}\xspace would give rise to subexponential algorithms checking satisfiability (or equivalence) of such circuits. In fact this connection, as well as the converse one, (due to their technicality) is presented only in Section \ref{sec:algo}, in particular in Theorem \ref{thm:algo}. As a result of these considerations we obtain (see Theorem \ref{thm:cc-vs-ac}) an upper bound for satisfiability of $CC^0$-circuits that is asymptotically lower than the one ETH\xspace permits for $AC^0$-circuits. \medskip Our methods proved themselves to be powerful enough to be applied in some other contexts. In particular in Theorem \ref{thm:dm} we give a characterization of dihedral groups $\m D_{2k+1}$, i.e. groups of symmetries of regular polygon with odd number of sides, for which the problem of solving equation is tractable. In fact for odd $m$ we show that this happens only if $\omega(m)=1$, or ETH\xspace fails. This result partially fills the small gap that remains, after the paper \cite{ikkw}, in characterizing finite groups with polynomial time algorithms for solving equations. Another, in fact pretty similar, application of our methods is done for satisfiability of multivalued circuits, as defined in \cite{ik:lics18}. \section{Shallow or narrow may apply} \label{may-apply} In this section we analyze the expressive power of $\cch{h}{m}$-circuits with $h=1$ or $\omega(m)=1$. We start with stating that in such realm $\cch{h}{m}$-circuits can compute \textsf{AND}\xspace only of bounded arity. 
Although one can find proofs of very similar statements in the literature (e.g \cite{BST90,bt}), we have decided to sketch the proof in Section \ref{sec:easy}. \begin{prp} \label{shallow-narrow-and} For positive integers $m,h,k$ and a prime $p$, the arity of \textsf{AND}\xspace \ computable by \begin{itemize} \item $\cch{1}{m}$-circuits is bounded by $m-1$, \item $\cch{h}{p^k}$-circuits is bounded by a constant depending only on $h$ and $m=p^k$. \hfill\hfill$\Box$ \end{itemize} \end{prp} From the above bound we can infer one implication in Theorem \ref{poly}. To do this an easy observation is required, that we can simulate $\cch{h}{m}$ circuit with some inputs fixed to be constant, just by slightly modifying the structure of the circuit, without inflating its size. For this reason, in the following, we simply allow some inputs to be constant \begin{cor} \label{cor-shallow-narrow} Satisfiability of $\cch{h}{m}$-circuits is in \textsf{PTIME}\xspace whenever $h=1$ or $\omega(m)=1$. \end{cor} \begin{proof} The only property of $\cch{1}{m}$ and $\cch{h}{p^k}$ circuits we are going to use is that they have a bound, say $c$, for the arity of \textsf{AND}\xspace they can express. This bound allows us to reduce our search for an $n$-tuple $a$ in the large set $\set{0,1}^n$ satisfying the circuit $\Gamma$ to a smaller subset of size at most $n^c$ containing only tuples with at most $c$ ones. Indeed, let $a$ be a satisfying tuple with minimal possible $\card{\pre a 1}$. By this minimality we know that $\Gamma$ with all the inputs with indices outside $\pre a 1$ set to $0$ behaves like the $\card{\pre a 1}$-ary \textsf{AND}\xspace. Thus $\card{\pre a 1} \leq c$, so that the tuple $a$ has at most $c$ ones, as claimed. \end{proof} The function $\textsf{AND}\xspace_n$ is an example of a nonconstant extremely unbalanced boolean function, i.e, one value is taken exactly once. Our next goal is to show that shallow or narrow modular circuits can compute functions with rather balanced piles, i.e. preimages $\pre f 0$ and $\pre f 1$. Formally the balance of the $n$-ary boolean function $f$ is defined to be \[ \bal{f} = 1- \frac{\card{\card{\pre f 0}-\card{\pre f 1}}}{2^n}. \] Constant functions have balance $0$. The functions $\textsf{AND}\xspace_n$ and $\textsf{OR}\xspace_n$ have the smallest possible non-zero balance $2^{1-n}$. In fact each $n$-ary function $f$ with one element smaller pile, denoted later $\stack f$, has balance $2^{1-n}$ and is to be called a spike. An obvious calculation shows that $\card{\stack f} = 2^{n-1}\bal f$. \begin{rmk} \label{rmk-reducing-stack} Each nonconstant $n$-ary boolean function $f$ can be turned to be $(n-k)$-ary spike by fixing its $k \leq \log\card{\stack f}$ variables to be constant from $\set{0,1}$. \end{rmk} \begin{proof} To fix the notation we use the symbol $\subst{f}{x_i}{c}$ for the function obtained from $f$ by fixing the variable $x_i$ to be $c\in\set{0,1}$. Now, as long as $\card{\stack f}\geq 2$ we iteratively reduce the size of $\stack f$ at least twice, by fixing the value of one of the variables without making $f$ constant. Thus we start with picking $a^0,a^1 \in \stack f = \pre f b$ and a coordinate $i$ so that $a^0_i=0$ and $a^1_i=1$. Obviously $\card{\pre f b} = \card{\pre{\subst{f}{x_i}{0}}{b}}+\card{\pre{\subst{f}{x_i}{1}}{b}}$ and we pick $c\in\set{0,1}$ so that $\card{\pre{\subst{f}{x_i}{c}}{b}}\leq\card{\pre{\subst{f}{x_i}{1-c}}{b}}$. Thus we have $\card{\stack{\subst{f}{x_i}{c}}} \leq \card{\stack{f}}/2$, as required. 
To see that $\subst{f}{x_i}{c}$ is not constant, note that $a^c$, with its $i$-th coordinate removed, belongs to $\stack{\subst{f}{x_i}{c}}$, so that $1 \leq \card{\stack{\subst{f}{x_i}{c}}} \leq \card{\stack{f}}/2 <2^n/2=2^{n-1}$. \end{proof} We note here that all spikes (of the same arity) are interdefinable. Indeed, if $\spike{\o a}{\varepsilon}$ denotes the spike which takes the value $\varepsilon\in\set{0,1}$ only on the tuple $\o a \in \set{0,1}^n$, then \begin{itemize} \item $\spike{\o a}{1-\varepsilon}(\o x)=1-\spike{\o a}{\varepsilon}(\o x)$, \item $\spike{\o b}{\varepsilon}(x_1,\ldots,x_n)=\spike{\o a}{\varepsilon}(x_1+a_1-b_1,\ldots,x_n+a_n-b_n)$. \end{itemize} This interdefinability can be realized by modifying only the sets $A$ in the gates $\pcmod{m}{A}$ on the last and/or first level, so that the sizes of the corresponding circuits remain unchanged. Now, if the arity of spikes computable by $\cch{h}{m}$-circuits is bounded, as in Proposition \ref{shallow-narrow-and}, we use Remark \ref{rmk-reducing-stack} to turn the circuit into one computing a spike of arity at least $n-\log\card{\stack f} = 1 - \log\bal f$. This gives the following lower bound on the balance of functions computable by $\cch{h}{m}$-circuits, independently of their arity and size. \begin{cor} \label{cor:bal1} For $h=1$ or $\omega(m)=1$ the balance of non-constant functions computable by $\cch{h}{m}$-circuits is at least $2^{1-c}$, where $c$ bounds the arity of $\cch{h}{m}$-computable conjunctions. \end{cor} From Corollary \ref{cor:bal1} we immediately get the following. \begin{cor} \label{cor:lan-bal1} Let $L\subseteq\set{0,1}^*$ be a language recognizable by $\cch{h}{m}$-circuits, where $h=1$ or $\omega(m)=1$. Then the number of words in $L$ of length $n$ is either $0$ or rather large, i.e. at least $2^{n-c}$, where $c$ bounds the arity of $\cch{h}{m}$-computable conjunctions. \end{cor} \section{Deep or wide need not apply} \label{sec:cc2} In this section we will show the converse to Corollary \ref{cor-shallow-narrow}, but under the assumption of the Exponential Time Hypothesis\xspace. As we have already noted, this is done by constructing conjunctions of subexponential size. The next Proposition formulates this fact in more detail. \begin{prp} \label{prp:force-narrow} For a positive integer $m$ with exactly $\omega$ different prime divisors we have: \begin{enumerate} \item\label{force-narrow-cnf} for each 3-CNF-SAT formula $\Phi$ with $\cll$ clauses there is a $\cch{2}{m}$-circuit of size at most $2^{O(\sqrt[\omega]{\cll}\log \cll)}$ representing $\Phi$, \item \label{force-narrow-and} in particular, unbounded fan-in \textsf{AND}\xspace\ can be computed by $\cch{2}{m}$-circuits of size $2^{O(\sqrt[\omega]{n}\log n)}$, where $n$ is the number of variables (input gates). \end{enumerate} The above bounds on the circuit size also bound the time needed to obtain the circuits. \end{prp} Combining this Proposition with the Exponential Time Hypothesis\xspace (and the Sparsification Lemma) we immediately get the following Corollary. \begin{cor} \label{eth-nonpoly} If $h\geq 2$ and $\omega(m)\geq 2$ then satisfiability for $\cch{h}{m}$-circuits is not in \textsf{PTIME}\xspace, unless ETH\xspace fails.
\end{cor} As we have mentioned in the Introduction, our proof of Proposition \ref{prp:force-narrow} is modelled after the idea of Barrington, Beigel and Rudich \cite{BarOR}, where $\textsf{AND}\xspace_n$ was shown to be computable by modular circuits of the same subexponential size as described in Proposition \ref{prp:force-narrow}(\ref{force-narrow-and}), but on $3$ levels. To squeeze this down to $2$ levels we need the concept of $\zex{p,q}$-expressions and the circuits realizing them. In the papers \cite{ikk:mfcs18, ikk:lics20} we studied the action of the group $\mathbb{Z}_p$ on the group $\mathbb{Z}_q$ via the function $\mathsf{b} : \mathbb{Z}_p \longrightarrow \mathbb{Z}_q$ defined by $\mathsf{b}(0)=0$ and $\mathsf{b}(x)=1$ for all other $x\in \mathbb{Z}_p$. With the help of this action we define a $\zex{p,q}$-expression to be an $n$-ary expression over the variables $\o x =(x_1,\ldots, x_n)$: \[ t(\o x) = \sum_{\mytop{\beta\in Z_p^n}{c\in Z_p}} \alpha_{\beta,c}\cdot \mathsf{b}\left(\sum_{i=1}^n \beta_i x_i +c\right), \] where the $\alpha_{\beta,c} \in Z_q$ while $\beta=(\beta_1,\ldots,\beta_n)\in Z_p^n$ and $c \in Z_p$, the outer sum and the multiplications by the $\alpha_{\beta,c}$'s are taken modulo $q$, while the inner sum and the multiplications by the $\beta_i$'s are taken modulo $p$. Obviously, the $\zex{p,q}$-expression $t(\o x)$ is determined by the sequence $\left\langle \alpha_{\beta,c} : {\beta\in Z_p^n},{c\in Z_p}\right\rangle$ of coefficients from $\mathbb{Z}_q$. This sequence may have the exponential size $p^{n+1}$. However, only the nonzero $\alpha_{\beta,c}$'s contribute to the length of $t(\o x)$ and consequently to the size of a circuit that models $t(\o x)$. In fact, if $t(\o x)$ always returns boolean values on boolean inputs $\o x$, then $t(\o x)$ may be realized by a circuit, called $\Gamma(t)$, of size $1+\card{L(t)}$, where $L(t)= \set{(\beta,c) \in Z_p^n \times Z_p : \alpha_{\beta,c} \neq 0}$. Indeed, each subexpression $\mathsf{b}\left(\sum_{i=1}^n \beta_i x_i +c\right)$ can be realized by a single $\textsf{MOD}\xspace_{p}^{\mathbb{Z}_{p}-\set{-c}}$ gate (denoted $\Gamma_{\beta,c}(t)$), and the outputs of all the $\Gamma_{\beta,c}(t)$ (with $(\beta,c)$ ranging over $L(t)$) are then combined by the $\textsf{MOD}\xspace_q$-like gate. For this reason, by the size of the circuit $\Gamma(t)$, as well as of the $\zex{p,q}$-expression $t(\o x)$, we simply mean $1+\card{L(t)}$. In our further considerations we will also use the bunch $\Theta(t) = \set{\Gamma_{\beta,c}}_{(\beta,c)\in L(t)}$ of the above gates with $\card{L(t)}$ outputs. These outputs are going to be treated as a single bundle (without ordering, but with the output of each $\Gamma_{\beta,c}(t)$ copied the corresponding number of times, i.e. $\alpha_{\beta,c}$ times), as they will always be used as inputs to other $\textsf{MOD}\xspace$-gates, so that they will be summed up first. To keep track of the modulus used to sum up this bundle we will say that the bundle is of type $q$ and that the bunch $\Theta(t)$ is of type $\btyp{p,q}$. \bigskip The importance of the $\zex{p,q}$-expressions lies in the next Fact, which was originally shown in \cite{ikk:mfcs18} as Lemma 3.1 (but with $\mathsf{b}(x)$ replaced by $\h\mathsf{b}(x)= 1-\mathsf{b}(1-x)$). \begin{fact} \label{trzy-jeden} With two different primes $p,q$ we can represent every $n$-ary function $g: Z_p^n \longrightarrow Z_q$ by a $\zex{p,q}$-expression of length and size bounded by $2^{O(n)}$.
\hfill\hfill$\Box$ \end{fact} The next fact is borrowed from \cite{BarOR}, but we include its more transparent proof in Section \ref{sec:easy}. \begin{fact} \label{bar-poly} Let $p$ be a prime and $k\geq 1$ be an integer. Then there is a polynomial $w(\o x) \in \mathbb{Z}_p[\o x]$ of degree at most $p^k-1$, such that for $\o x \in \set{0,1}^n$ we have \[ w(\o x) = \left\{ \begin{array}{ll} 0, &\mbox{if \ $\card{\pre {\o x}{0}} \equiv 0$ modulo $p^k$,}\\ 1, &\mbox{else.} \end{array} \right. \] \end{fact} With the help of Facts \ref{trzy-jeden} and \ref{bar-poly} we can represent 3-CNF formulas by a relatively short $\zex{p,q}$-expressions in the following sense: \begin{lm} \label{lm:pseudo-and} Let $p,q$ be two different primes and $\nu \geq 1$ be an integer. Then for each 3-CNF-SAT formula $\Phi(\o x)$ with $n$ variables $\o x = (x_1,\ldots, x_n)$ and $\cll$ clauses there is a $\zex{p,q}$-expression $t^\Phi_{p,q}(\o x)$ of size at most $2^{O(q^\nu \cdot \log \cll)}$ such that for all $\o a \in \set{0,1}^n$ we have \[ t^\Phi_{p,q}(\o a) = \left\{ \begin{array}{ll} 0, &\mbox{if the number of unsatisfied (by $\o a$) clauses in $\Phi$ is divisible by $q^\nu$}\\ 1, &\mbox{else.} \end{array} \right. \] \end{lm} \begin{proof} To fix our notation let $\Phi(\o x) = \bigwedge_{i=1}^\cll C_i$ be a 3-CNF formula with the clauses $C_i= C_i(z^1_i,z^2_i,z^3_i)$. Fact \ref{bar-poly} supplies us with an $\cll$-ary polynomial $w(c_1,\ldots, c_\cll) \in GF(q)[\o c]$ of degree at most $q^\nu-1$. We want to feed up the polynomial $w$ by substituting $C_i(z^1_i,z^2_i,z^3_i)$ for the variable $c_i$ to get a total function $w^* : Z_p^n \longrightarrow Z_q$. In order to do that we first extend each clause $C_i$ to be a total function $Z_p^3 \longrightarrow Z_q$ (instead of $\set{0,1}^3 \longrightarrow \set{0,1}$) by putting arbitrary values on the set $Z_p^3 - \set{0,1}^3$. Now the function \[ w^*(\o z) = w\left(C_1(z^1_1,z^2_1,z^3_1),\ldots,C_\cll(z^1_\cll,z^2_\cll,z^3_\cll)\right) \] behaves on the boolean values exactly as we need, i.e. for $\o a \in \set{0,1}^n$ we have \[ w^*(\o a) = \left\{ \begin{array}{ll} 0, &\mbox{if the number of unsatisfied (by $\o a$) clauses in $\Phi$ is divisible by $q^\nu$}\\ 1, &\mbox{else.} \end{array} \right. \] All we need is to turn $w^*$ into a relatively short $\zex{p,q}$-expression. Instead of applying Fact \ref{trzy-jeden} directly to $w^*$ we will do it for each its monomial separately. Note that the monomials of $w$, after our substitution of the $C_i$'s for $c_i$'s have the form \[ C_{i_1}(z^1_{i_1},z^2_{i_1},z^3_{i_1}) \cdot \ldots \cdot C_{i_s}(z^1_{i_s},z^2_{i_s},z^3_{i_s}) \mbox{ \ \ \ with \ \ \ } s<q^\nu, \] so that there are at most $3q^\nu$ variables involved into each such ``monomial''. Because of that, Fact \ref{trzy-jeden} allows us to represent each summand in $w^*$ by a $\zex{p,q}$-expression of size $O(2^{cq^\nu})$. Since $\cll^{q^\nu}$ bounds the number of monomials of degree at most $q^\nu-1$ it also bounds the number of summands in $w^*$, so that we end up with the bound $O\left( 2^{cq^\nu} \cdot \cll^{q^\nu} \right)\leq 2^{ O\left({q^\nu \log \cll} \right)}$ for our $\zex{p,q}$-expression representing $w^*$. \end{proof} Our Claim shows also that for $p,q,\nu$ as above we also have a relatively short $\zex{p,q}$-expression $t_{p,q}(x_1,\ldots,x_n)$ that behaves almost like an $n$-ary \textsf{AND}\xspace. 
That is, its size is at most $2^{O({q^\nu \cdot \log n})}$ and for all $\o a \in \set{0,1}^n$ we have \begin{eqnarray} \label{pq-exp} t_{p,q}(\o a)&=& \left\{ \begin{array}{ll} 0, &\mbox{if the number of zeros among the $a_i$'s is divisible by $q^\nu$},\\ 1, &\mbox{else.} \end{array} \right. \end{eqnarray} We will also use the symbols $\Gamma_{p,q},\Gamma^\Phi_{p,q}$ to denote the circuits $\Gamma(t_{p,q}), \Gamma(t^\Phi_{p,q})$ computing the $\zex{p,q}$-expressions $t_{p,q}$ and $t^\Phi_{p,q}$, respectively. Also the symbols $\Theta_{p,q},\Theta^\Phi_{p,q}$ will be used to denote the bunch of initial $\textsf{MOD}\xspace_p$-gates in the circuits $\Gamma_{p,q},\Gamma^\Phi_{p,q}$. \medskip Now we are ready to prove Proposition \ref{prp:force-narrow}. \begin{proof} To start we let $m=p_1^{\alpha_1}\cdot\ldots p_\omega}%{r^{\alpha_\omega}%{r}$ be the prime decomposition of $m$. Each of the groups $\mathbb{Z}_{p_j}$'s can be identified with a subgroup of $\mathbb{Z}_m$ generated by $\frac{m}{p_j}$, by simply sending $z$ to $\frac{m}{p_j} \cdot z$. After such identification we know that the sum $\sum_{j=1}^{\omega}%{r} \frac{m}{p_j} \cdot \mathbb{Z}_{p_j}$ is in fact a direct sum, so that each element of this sum has a unique decomposition. To construct a $\cch{2}{m}$ circuit computing the 3-CNF formula $\Phi$ with $\cll$ clauses we first fix integers $\nu_1,\ldots,\nu_\omega}%{r$ to satisfy $p_j^{\nu_j-1} \leq \sqrt[\omega}%{r]{\cll} < p_j^{\nu_j}$. Also, for convenience we identify the index $0$ with $\omega}%{r$ so that we can refer to the indices of the primes $p_j$'s cyclically. Now, for each $j=1,\ldots,\omega}%{r$, \ Lemma \ref{lm:pseudo-and} supplies us with a $\zex{p_{j-1},p_{j}}$-expression $t^\Phi_j(\o x)$ of the length at most $O(2^{c_j\cdot \sqrt[\omega}%{r]{\cll} \cdot \log \cll})$ so that for $\o a \in \set{0,1}^n$ we have \[ t^\Phi_j(\o a) = \left\{ \begin{array}{ll} 0, &\mbox{if the number of unsatisfied (by $\o a$) clauses in $\Phi$ is divisible by $p_j^{\nu_j}$}\\ 1, &\mbox{else.} \end{array} \right. \] Our identification of the direct sum $\bigoplus_{j=1}^\omega}%{r \frac{m}{p_j} \cdot \mathbb{Z}_{p_j}$ with a subgroup of $\mathbb{Z}_m$ allows us to sum up (modulo $m$) all the $t^\Phi_j$ to get \begin{equation} \label{tfi} T^\Phi(\o a) = \sum_{j=1}^{\omega}%{r} \frac{m}{p_j} \cdot t^\Phi_j(\o a). \end{equation} We argue now that for $\o a \in \set{0,1}^n$ \[ \mbox{$T^\Phi(\o a) = 0$ \ \ iff \ \ $\Phi$ is satisfied by $\o a$}. \] Indeed, to see the `if' direction note that the number $\cll_0$ of unsatisfied (by $\o a$) clauses is zero so that Lemma \ref{lm:pseudo-and} gives that each of the $t^\Phi_j(\o a)$'s, and consequently the sum $T^\Phi(\o a)$, is zero. Conversely, if $\o a$ does not satisfy $\Phi$ then $1\leq \cll_0$, which together with $\ell_0 \leq \cll < p_1^{\nu_1}\cdot\ldots\cdot p_\omega}%{r^{\nu_\omega}%{r}$ gives that at least one of the $p_j^{\nu_j}$'s does not divide $\cll_0$. Thus for this $j$ the summand $\frac{m}{p_j} \cdot t^\Phi_j(\o a)$ is non-zero and -- by the unique decomposition -- the entire sum $T^\Phi(\o a) \neq 0$. Now, the circuit required in Proposition \ref{prp:force-narrow}\eqref{force-narrow-cnf} is not supposed to calculate separately each of the $\Gamma(t^\Phi_j)$'s by summing up the subexpressions $\mathsf{b}\left(\sum_{i=1}^n \beta_i x_i +c\right)$ of $t^\Phi_j$. 
Instead, each such subexpression is calculated by the gate $\Gamma_{\beta,c}(t^\Phi_j)$ and then sent to $\textsf{MOD}\xspace_m^{\set{0}}$-gate $\left(\frac{m}{p_j}\cdot\alpha_{\beta,c}\right)$-times. Due to the properties of $T^\Phi$, this last gate, after collecting all the bundles $\Theta(t^\Phi_j)$'s, calculates the boolean value of $\Phi(\o a)$. Moreover the entire circuit consists of $1+\sum_{j=1}^r \card{L(t_j)}$ gates: \begin{itemize} \item the final gate $\textsf{MOD}\xspace_m^{\set{0}}$, \item the gates $\Gamma_{\beta,c}(t^\Phi_j)$ of the form $\textsf{MOD}\xspace_{p_j}^{\mathbb{Z}_{p_j}-\set{-c}}$, one for each $(\beta, c) \in L(t_j)$. \end{itemize} From Lemma \ref{lm:pseudo-and} we know that the sizes of the $t_j$'s (and therefore of the $\Theta(t^\Phi_j)$'s) can be uniformly bounded by $O(2^{c\sqrt[\omega}%{r]{\cll}\log \cll})$. Thus this also bounds the size of the circuit. \end{proof} Note here that in our construction of subexponential size $\cch{2}{m}$-circuit computing \textsf{AND}\xspace the final gate is $\textsf{MOD}\xspace_m^{\set{0}}$. This contrasts the result of Caussinus \cite{caussinus} where the lower bound $2^{\Omega(n)}$ is shown if the final gate is $\textsf{MOD}\xspace_m^{\set{1,\ldots,m-1}}$. \section{Making the circuits smaller} \label{sec:cchm} We start with observing that composing $\cch{2}{m}$ circuits by using 2 separate groups of 2 levels we can keep the size $2^{O(k^{1/\omega}%{r}\log k)}$ to compute $\textsf{AND}\xspace_{k^2}$. Indeed we can simply feed the $k$ inputs of the last two levels computing $\textsf{AND}\xspace_k$ by the outputs of $k$-ary independent conjunctions built on the 2 starting levels. Repeating this recursively $\lfloor h/2 \rfloor$-many times on $h$ levels we get the following Proposition. \begin{prp} \label{prp:r-and-h-pol} For $h\geq 2$ and a positive integer $m$ with $\omega}%{r=\omega(m)\geq 2$ there are $\cch{h}{m}$-circuits of size $2^{O({n}^{1/(\omega}%{r\lfloor h/2 \rfloor)}\log n)})$, computing $n$-ary \textsf{AND}\xspace. \end{prp} Our next step is to use both the depth $h$ of the circuit and the width $\omega(m)$ of the modulus to make our $\cch{h}{m}$-circuits for \textsf{AND}\xspace much smaller. But before doing that we warm up with the following easy observation. The idea of its proof has been already explored in \cite{ikk:lics20,weiss:icalp20,ikkw}. \begin{prp} \label{prp:single-primes} For $h\geq 2$ and a sequence of alternating primes $p_1 \neq p_2 \neq p_3 \neq \ldots \neq p_h$ there are $\cc{p_1;\ldots;p_h}$-circuits of size $2^{O({n}^{1/(h-1)})}$, computing $n$-ary \textsf{AND}\xspace. \end{prp} \begin{proof} Obviously we may assume that $n=k^{h-1}$ for some $k$. For each $j=1,\ldots,h-1$ Fact \ref{trzy-jeden} supplies us with a $k$-ary $\mathbb{Z}[p_j,p_{j+1}]$-expression $C_j$ of size $2^{O(k)}$ that on $\o a\in\set{0,1}^k\subseteq \mathbb{Z}_{p_j}^k$ behaves as $\textsf{AND}\xspace_k$. On the starting level of our circuit we group $n=k^{h-1}$ inputs into $n/k$ groups of $k$ inputs each. Then each group is passed through the bunch $\Theta(C_1)$ so that we end up with $n/k$ bundles $B_i$. Note that if $B_i$ was passed through $\pcmod{p_2}{\set{1}}$ gate we would get the conjunction of $k$ inputs of $B_i$. Instead we again group the bundles $B_1,\ldots,B_{n/k}$ into $n/k^2$ groups with $k$ bundles each and pass each such a group through the bunch $\Theta(C_2)$. 
Again, the sum of each of the $n/k^2$ resulting bundle (modulo $p_3$) coincide with \textsf{AND}\xspace of ${k^2}$ on the initial inputs that fall into that bundle. After repeating this $h-1$ times we end up with a single bundle of type $p_{h}$. At this point we actually use $\pcmod{p_h}{\set{1}}$ gate to sum this bundle up and get \textsf{AND}\xspace of all the inputs. It should be clear that the size of the entire circuit is bounded by $2^{O(k)} = 2^{O({n}^{1/(h-1)})}$. \end{proof} Now we are in a position to prove Theorem \ref{thm:omegabar}. However we will start with its slightly weaker version. \begin{prp} \label{prp:omega-omegabar} For $h\geq 3$ and a positive integer $m$ with $\omega}%{r=\omega(m)\geq 2$ and $\varpi}%{s = \varpi(m)$ there are $\cch{h}{m}$-circuits of size $2^{O({n}^{1/({(\er-1)}(h-2)+{(\es-1)})}\log n)}$, computing $n$-ary \textsf{AND}\xspace. \end{prp} \begin{proof} As in the proof of Proposition \ref{prp:force-narrow} we start with the prime decomposition $m=p_1^{\alpha_1}\cdot\ldots p_\omega}%{r^{\alpha_\omega}%{r}$ and assume that $p_1 > \ldots > p_\varpi}%{s \geq \omega}%{r >p_{\varpi}%{s+1}>\ldots > p_\omega}%{r$. Moreover, without loss of generality we assume that $n=k^{{(\er-1)}(h-2)+{(\es-1)}}$ for some integer $k$ and put $k_\omega}%{r={(\er-1)} k^{\omega}%{r-1}$ and $k_\varpi}%{s={(\er-1)} k^{\varpi}%{s-1}$. Finally we pick integers \[ \begin{array}{lcccll} \nu_1,\ldots,\nu_\omega}%{r & \mbox{satisfying} &p_j^{\nu_j-1} \leq k_\omega}%{r^{1/{(\er-1)}} < p_j^{\nu_j},% &\mbox{so that} & \prod_{j\neq i} p_j^{\nu_j} > k_\omega}%{r, &\mbox{for $i=1,\dots,\omega}%{r$,} \\ \o\nu_1,\ldots,\o\nu_\varpi}%{s & \mbox{satisfying} &p_j^{\o\nu_j-1} \leq k_\varpi}%{s^{1/{(\es-1)}}< p_j^{\o\nu_j},% &\mbox{so that} & \prod_{j\neq i} p_j^{\o\nu_j} > k_\varpi}%{s, &\mbox{for $i=1,\dots,\varpi}%{s$.} \end{array} \] Also for two different prime divisors $p,q$ of $m$ we modify $k_\omega}%{r$-ary and $k_\varpi}%{s$-ary $\zex{p,q}$-expressions of the form $t_{pq}$ that satisfy (\ref{pq-exp}) to $t'_{pq} =1-t_{pq}$ with the arity that later should be clear from the context. Note here that, except their arities, the $t_{p_ip_j}$'s depend not only on the primes $p_i, p_j$ but also on the integers $\nu_j$ (or $\o\nu_j$, whatever applies). By $\Gamma'_{pq}$ and $\Theta'_{pq}$ we denote the circuit $\Gamma(t'_{pq})$ and the bunch $\Theta(t'_{pq})$ of type $[p,q]$, respectively. Note that for fixed $p_i$ and $z_1,\ldots,z_{k_\omega}%{r}\in \set{0,1}$ we have \begin{equation} \label{eqn:and} \textsf{AND}\xspace\set{t'_{p_ip_j}(z_1,\ldots,z_{k_\omega}%{r}) : j\neq i} = \textsf{AND}\xspace\set{z_1,\ldots,z_{k_\omega}%{r}}. \end{equation} Indeed, Lemma \ref{lm:pseudo-and} assure us that the left hand side in the above display is $1$ if and only if for all $j\neq i$ the number of zeros among the $z$'s is divisible by $p_j^{\nu_j}$. This in turn means that the number of zeros among the $z$'s is divisible by $\prod_{j\neq i} p_j^{\nu_j} > \prod_{j\neq i} k_\omega}%{r^{1/{(\er-1)}} = k_\omega}%{r$. But there are only $k_\omega}%{r$ places for such zeros so that there are no zeros among the $z$'s at all. \medskip Now for each ${h'}=0,1,2,\ldots, h-2$ we recursively built a circuit $\nabla_{h'}$ of depth ${h'}$ \begin{enumerate} \item[(i)] with $n$ inputs $x_1,\ldots, x_n$, (repeated $\omega}%{r{(\er-1)}$ times by $\nabla_0$) \item[(ii)] and with $b_{h'} = \omega}%{r{(\er-1)}\cdot n/k^{{(\er-1)}{h'}} = \omega}%{r{(\er-1)} \cdot k^{{(\er-1)}(h-2-{h'})+{(\es-1)}}$ bundles of outputs. 
\end{enumerate} For ${h'} >0$ each bundle mentioned in (ii) is the result of some bunch of the form $\Theta_{pq}$. Thus each bundle has one of the types $p_1,\ldots,p_\omega}%{r$ and all the bundles are evenly divided into these types so that \begin{enumerate} \item[(iii)] there are $b_{h'}/\omega}%{r = {(\er-1)} \cdot k^{{(\er-1)}(h-2-{h'})+{(\er-1)}}$ bundles of each type. \end{enumerate} Moreover enlarging $\nabla_{h'}$ to $\nabla_{{h'}+1}$ we will keep the following properties: \begin{enumerate} \item[(iv)] summing up (modulo $q$) a bundle $B$ of type $q$ (to get $s_B(\o x)$) only the boolean values $0$ or $1$ may appear, \item[(v)] the conjunction of all $b_{h'}$ values $s_B(\o x)$ (i.e. with $B$ ranging over all bundles produced by $\nabla_{h'}$) coincides with $\textsf{AND}\xspace(x_1,\ldots,x_n)$. \end{enumerate} We start with artificially adding level $0$ just to multiply variables so that it does not contribute to the depth of our circuits. In fact this starting circuit $\nabla_0$ (of depth $0$) takes $n$ inputs $x_1,\ldots,x_n$ and makes $b_0=\omega}%{r{(\er-1)} \cdot n$ bundles, each of which consisting of one typed variable, i.e. each variable $x_i$ is repeated $\omega}%{r-1$ times in each type. It should be (more than) obvious that (i)-(v) hold. Now to go from $\nabla_{h'}$ to $\nabla_{{h'}+1}$ we first group $\frac{b_{h'}}{\omega}%{r}$ bundles of a given type, say $p$, into $\frac{b_{h'}}{\omega}%{r k_\omega}%{r}$ groups of size $k_\omega}%{r$ (i.e. each such a group consists of $k_\omega}%{r$ bundles of type $p$). Next, all $k_\omega}%{r$ bundles in one group are passed through $\omega}%{r-1$ bunches $\Theta'_{pq}$, one for each $q\neq p$, to produce $\omega}%{r-1$ bundles, again one for each type $q\neq p$. Thus $b_{h'}$ bundles (that go to the gates on level ${h'}+1$) are replaced by $b_{{h'}+1}={(\er-1)}\frac{b_{h'}}{k_\omega}%{r}=\frac{b_{h'}}{k^{\omega}%{r-1}}$ new bundles, as required in (i)-(iii). To pass the $k_\omega}%{r$-element group $B_1,\ldots,B_{k_\omega}%{r}$ of bundles through the bunch $\Theta'_{pq}$ of gates we inflate each single input (say the $s$-th one) of $\Theta'_{pq}$ into the number of outputs in $B_s$ so that in fact $\Theta'_{pq}$ is fed by $s_{B_1},\ldots,s_{B_{k_\omega}%{r}}$. To see (iv), say for a bundle $B$ of type $q$, note that $s_B(\o x) = t'_{pq}(s_{B_1}(\o x),\ldots,s_{B_{k_\omega}%{r}}(\o x))$, where $B_1,\ldots,B_{k_\omega}%{r}$ form the $k_\omega}%{r$ element group of bundles (of type $p$) that were passed through $\Theta'_{pq}$. Since $t'_{pq}$ returns boolean values on boolean arguments, we get (iv). To prove (v) let $C_1,\ldots,C_{\omega}%{r-1}$ be the bundles resulting from passing the $k_\omega}%{r$-element group $B_1,\ldots,B_{k_\omega}%{r}$ of bundles of type $p$ through $\omega}%{r-1$ bunches $\Theta'_{pq}$ (with $q\neq p$). If $C_s$ is of type $q$ then $s_{C_s}(\o x) =t'_{pq}(s_{B_1}(\o x),\ldots,s_{B_{k_\omega}%{r}}(\o x))$ and consequently \[ \textsf{AND}\xspace(s_{C_1}(\o x),\ldots,s_{C_{\omega}%{r-1}}(\o x)) = \textsf{AND}\xspace\set{t'_{pq}(s_{B_1}(\o x),\ldots,s_{B_{k_\omega}%{r}}(\o x)) : q\neq p}. \] Due to the equation (\ref{eqn:and}) the last conjunction is equal to $\textsf{AND}\xspace(s_{B_1}(\o x),\ldots,s_{B_{k_\omega}%{r}}(\o x))$. Thus the two conjunctions of all the sums of the form $s_B(\o x)$: one before processing the bundles through a given level and the other one after processing them, are equal. This shows (v). 
After arriving at the level $h-2$, our circuit $\nabla_{h-2}$ produces $b_{h-2}= \omega}%{r{(\er-1)}\cdot k^{\varpi}%{s-1}=\omega}%{r k_\varpi}%{s$ bundles, i.e. $k_\varpi}%{s$ bundles in each type. Now we put all these $k_\varpi}%{s$ bundles of one type, say $p$, into one group and proceed this group through $\varpi}%{s-1$ bunches $\Theta'_{pq}$ with $q$ ranging over some $\varpi}%{s-1$ element subset $Q_p\subseteq \set {q\neq p : q\geq \omega}%{r}$. Again, as in the proof of invariant (v), we argue that $\textsf{AND}\xspace(s_{C_1}(\o x),\ldots,s_{C_{\varpi}%{s-1}}(\o x)) = \textsf{AND}\xspace(s_{B_1}(\o x),\ldots,s_{B_{k_\varpi}%{s}}(\o x))$ where $C_1,\ldots,C_{\varpi}%{s-1}$ are the bundles resulting from passing $k_\varpi}%{s$-element group $B_1,\ldots,B_{k_\varpi}%{s}$ of bundles of type $p$ through the bunches $\Theta'_{pq}$ with $q\in Q_p$. The output of the $(h-1)$-th level consists of $\omega}%{r{(\es-1)}$ bundles, as each of the $\omega}%{r$ groups is passed through $\varpi}%{s-1$ bunches $\Theta'_{pq}$ with large primes $q$. On the other hand for a fixed large $q$ at most $\omega}%{r-1$ primes $p\neq q$ may contribute to the bunches $\Theta'_{pq}$ that are actually used on level $h-1$. To distinguish those primes we put ${Z}_j=\set{i : \Theta'_{p_i p_j} \mbox{\ is used on level \ } h-1}$ for $j\leq \varpi}%{s$. Note that $\set{1,\ldots,\varpi}%{s}-\set{j} \subseteq {Z}_j \subseteq \set{1,\ldots,\omega}%{r}-\set{j}$, i.e. in particular $\card{{Z}_j} \leq \omega}%{r-1 < p_j$. In this notation we enumerate all the $\card{{Z}_{1}}+\ldots+\card{{Z}_\varpi}%{s}$ bundles resulting from level $h-1$ by \( C_{1}^1,\ldots,C_{1}^{\card{{Z}_{1}}}, C_{2}^1,\ldots,C_{2}^{\card{{Z}_{2}}},\ldots\ldots, C_{\varpi}%{s}^1,\ldots,C_{\varpi}%{s}^{\card{{Z}_{\varpi}%{s}}}. \) Denoting $s_{C^i_j}(\o x)$ simply by $s^i_j(\o x)$ we now express the property (v) as \begin{equation} \label{eqn:s-and} \textsf{AND}\xspace(x_1,\ldots,x_n)=\textsf{AND}\xspace\set{s^i_j(\o x) : j\leq\varpi}%{s \mbox{ and } i\in{Z}_j}. \end{equation} Now, at the very last level we put all $\card{{Z}_{1}}+\ldots+\card{{Z}_\varpi}%{s}$ bundles, with $C^i_j$ being repeated $\frac{m}{p_j}$ times, into the gate $\textsf{MOD}\xspace_{m}^\set{\sigma}$, where $\sigma =\sum_{j\leq\varpi}%{s} \frac{m}{p_j}\cdot\card{{Z}_j} \mod{m}$. This gate computes (modulo $m$) the sum \[ S(\o x) = \sum_{j\leq\varpi}%{s} \sum_{i\in {Z}_j} \frac{m}{p_j}\cdot s^i_j(\o x) \] and turns it to $1$ if $S(\o x)=\sigma$ and to $0$ otherwise. Thus, due to (\ref{eqn:s-and}), we are left with showing that $S(\o x)=\sigma$ iff $s^i_j(\o x)=1$ for all $j\leq\varpi}%{s$ and $i\in {Z}_j$. Obviously if all the $s^i_j(\o x)$'s are $1$ then the sum $S(\o x)$ is $\sigma$. Conversely, as in the proof of Proposition \ref{prp:force-narrow}, we first identify the direct sum $\bigoplus_{j=1}^\varpi}%{s \frac{m}{p_j} \cdot \mathbb{Z}_{p_j}$ with a subgroup of $\mathbb{Z}_m$. Then the assumption that $\sigma= S(\o x) = \sum_{j\leq\varpi}%{s} \frac{m}{p_j}\cdot\sum_{i\in {Z}_j} s^i_j(\o x)$ together with the fact that $0\leq \sum_{i\in {Z}_j} s^i_j(\o x) \leq \card{{Z}_j} \leq \omega}%{r-1 <p_j$ gives, by the unique decomposition in the direct sum, that for each $j\leq\varpi}%{s$ we have $\sum_{i\in {Z}_j} s^i_j(\o x)=\card{{Z}_j} \mod p_j$. But now $s^i_j(\o x)\in\set{0,1}$ and $\card{{Z}_j}<p_j$ yield that all the $s^i_j(\o x)$'s are $1$. \medskip It remains to calculate the size of the entire circuit. 
Each of the first $h-2$ levels has $2^{O(p_i^{\nu_i} \log k_\omega}%{r)}$ gates in each bunch of type $p_i$. There are at most $O(n)$ bunches of each type. Using $p_i^{\nu_i} \leq p_i k_\omega}%{r^{1/{(\er-1)}}\in O(k)$ we bound the size of each bunch by $2^{O(k\log k)}$. The same holds on the level $h-1$. Summing up we bound the size of entire circuit by $2^{O(k\log n)} \leq 2^{O(n^{1/({(\er-1)}(h-2)+{(\es-1)})} \log n)}$. \end{proof} \bigskip Now we are ready to show Theorem \ref{thm:omegabar} that, in comparison to Proposition \ref{prp:omega-omegabar}, increases the degree of the root just by one. \begin{proof} Our circuits here are based on those from the proof of Proposition \ref{prp:omega-omegabar} by modifying only two levels: $\nabla_0$ and $\nabla_1$. This time we start with assuming that $n=k^{{(\er-1)}(h-2)+\varpi}%{s}$ for some integer $k$. Also, additionally to the $\nu_j$'s and the $\o\nu_j$'s (exactly as in the proof of Proposition \ref{prp:omega-omegabar}) we pick $\nu^0_j$ to satisfy $p_j^{\nu^0_j-1} \leq k < p_j^{\nu^0_j}$ for all the $j$'s. The starting circuit $\nabla_0$ takes $n$ inputs $x_1,\ldots,x_n$ and makes $b_0= \omega}%{r\cdot n$ bundles, each of which consisting of one typed variable, so that there are exactly $n$ bundles in each type. To proceed these bundles through the gates of $\nabla_1$ we will group $n$ bundles in each type into groups of size $k_0=k^\omega}%{r$, but in a synchronized way. By this synchronization we mean that first the set $\set{1,\ldots,n}$ is split into $n/k_0$ groups $G_i$ of size $k_0$ and then in each type, say $p$, we form a group of bundles $G^p_i = \set{x_j : j\in G_i}$. Next, each such group $G^p_i$ is passed though all the $\Theta'_{pq}$'s (with $q\neq p$) to get the bundles $\Theta'_{pq}(G^p_i)$. As previously we want to have that $\textsf{AND}\xspace(x_1,\ldots,x_n)$ coincides with the conjunction of all the $s_B(\o x)$'s with $B$ ranging over all the bundles produced by $\nabla_1$, i.e. that: \[ \textsf{AND}\xspace(x_1,\ldots,x_n)=\textsf{AND}\xspace\set{s_{\Theta_{pq}(G^p_i)}(\o x) : \ p\neq q, \ i=1,\ldots,n/k_0}. \] We get this by observing that $\textsf{AND}\xspace(G^p_i)$ can be replaced by the conjunction of $s_B(\o x)$ for (at least $\omega}%{r$) bundles $B$ of all $\omega}%{r$ different types $p_1,\ldots,p_\omega}%{r$. This however is witnessed by \[ \textsf{AND}\xspace(G^p_i)=\textsf{AND}\xspace\left(\set{s_{\Theta_{pq}(G^p_i)}(\o x) : \ q\neq p}\cup \set{s_{\Theta_{qp}(G^q_i)}(\o x) : \ q\neq p}\right), \] due to the fact that for a fixed $i$ our synchronization spans the sets $G^p_i$ and $G^q_i$ on the very same variables. In this process $\nabla_1$ replaces each group of $k_0=k^\omega}%{r$ bundles by $\omega}%{r-1$ new bundles. This means that $\nabla_1$ produces $b_1= {(\er-1)}\cdot\frac{b_0}{k^\omega}%{r}={(\er-1)}\omega}%{r\cdot k^{{(\er-1)}(h-3)+{(\es-1)}}$ bundles, which is exactly the number of bundles produced by $\nabla_1$ in the proof of Proposition \ref{prp:omega-omegabar}. This allows us to put these bundles into the consecutive levels of the circuit described in that proof. As previously our choice of $k_0, k_\omega}%{r, k_\varpi}%{s$ (for determining the sizes of the groups of bundles) yields that, on each level, the sizes of the bunches used in our circuit are bounded by $2^{O(k\log k)}$. 
Combining this with the fact that on each level at most $O(n)$ bunches are used and with $n=k^{{(\er-1)}(h-2)+\varpi}%{s}$ we get that our circuit has the size bounded by $2^{O({n}^{1/({(\er-1)}(h-2)+\varpi}%{s)}\log n)}$. \end{proof} \section{Probabilistic circuits} \label{sec:cc2-prob} In this section we prove Theorem \ref{thm:random-and}, i.e. we construct polynomial size $\cc{p;q}$-circuits $\Gamma_n$ computing $\textsf{AND}\xspace_n$ with the help of $\rbit=6+\log n$ additional random bits. This means that $\Gamma_n$ has $n+\rbit$ inputs and for each $n$-tuple $\o a \in \set{0,1}^n$ for at least $\frac{2}{3}$ possible tuples $\o b \in \set{0,1}^\rbit$ we have $\Gamma_n(\o a, \o b) = \textsf{AND}\xspace_n(\o a)$. These circuits will be based on $O(\rbit)$-ary special $\zex{p,q}$-expressions so that we can control their size to be polynomial in $n$, i.e. $2^{O(\rbit)}$. To start our construction define $\Lambda$ to be the set of all tuples $\lambda=\left(\lambda_{\o c,j}\right)^{\o c \in \set{0,1}^\rbit}_{j=1,\ldots,p^*\rbit}$ of length $2^\rbit p^*\rbit$, where $p^* =\lceil\log_{\frac{p}{p-1}}2\rceil$ and each $\lambda_{\o c,j}$ is an $GF(p)$-affine combination of the $x_i$'s satisfying $\lambda_{\o c,j}(1,\ldots,1)=1$. Define $\mathsf{b}'(z)=1-\mathsf{b}(z)$ so that for $\lambda \in \Lambda$ we put \[ t_\lambda(\o x,\o b) = \sum_{\o c \in\set{0,1}^\rbit} \prod_{i=1}^{\rbit} \mathsf{b}'(b_i-c_i) \cdot \prod_{j=1}^{p^*\rbit} \mathsf{b}(\lambda_{\o c,j}(\o x)), \] to show that \begin{itemize} \item each $t_\lambda(\o x,\o b)$ can be turned into $\zex{p,q}$-expression with $2^{O(\rbit)}$ summands (corresponding to the number of gates in the circuits realizing this expression), \item for at least one $\lambda\in \Lambda$ the expression $t_\lambda(\o x,\o b)$ calculates $\textsf{AND}\xspace_n(\o x)$ for at least $\frac{2}{3}$ of the $\o b$'s in $\set{0,1}^\rbit$. \end{itemize} For the first item note that each summand in $t_\lambda(\o x,\o b)$ can be obtained by an appropriate substitution in a $(\rbit+p^*\rbit)$-ary function \( Z_p^\rbit \times Z_p^{p^*\rbit} \ni (\o u, \o z) \mapsto \mathsf{b}'(u_1)\cdot\ldots\cdot\mathsf{b}'(u_\rbit)\cdot\mathsf{b}(z_1)\cdot\ldots\cdot\mathsf{b}(z_{p^*\rbit}) \in Z_q. \) By Fact \ref{trzy-jeden} such function can be represented by a $\zex{p,q}$-expression with $O(p^{(p^*+1)\rbit})$ summands. Now, summing up (modulo $q$) over the $\o c$'s we end up with a $\zex{p,q}$-expression with $O(2^\rbit p^{(p^*+1)\rbit})=2^{O(\rbit)}= \poly(n)$ summands. Before showing the second item note that for fixed $\o b \in \set{0,1}^\rbit$ the expression $t_\lambda(\o x,\o b)$ reduces to only one summand, namely $\prod_{j=1}^{p^*\rbit} \mathsf{b}(\lambda_{\o b,j}(\o x))$. Now, for a fixed $\o a \in \set{0,1}^n$ and $\o b \in \set{0,1}^\rbit$ the random variable $X_{\o a,\o b}$ checks for a particular tuple $(\lambda_{\o b,j})_{j=1,\ldots,p^*\rbit}$ if the value $\prod_{j=1}^{p^*\rbit} \mathsf{b}(\lambda_{\o b,j}(\o a))$ coincides with $\textsf{AND}\xspace_n(\o a)$. Thus the sum $X_{\o a} = \sum_{\o b \in\set{0,1}^\rbit} X_{\o a,\o b}$, defined now on entire $\Lambda$, simply counts the number of the $\o b$'s for which $t_\lambda(\o a,\o b) = \textsf{AND}\xspace_n(\o a)$. We conclude our argument with showing that $\pr{\bigwedge_{\o a \in \set{0,1}^n} X_{\o a}\geq \frac{2}{3}\cdot 2^\rbit}\neq 0$. 
Note that for fixed $\o a \neq \o 1$ and randomly chosen $\lambda_{\o c,j}$ we have $\pr{\lambda_{\o c,j}(\o a)\neq 0} =\frac{p-1}{p}$ so that $\pr{X_{\o a,\o b}=0}= \left(\frac{p-1}{p}\right)^{p^*\rbit}=2^{-\rbit}$ and $E(X_{\o a,\o b})=1-2^{-\rbit}$. Consequently $E(X_{\o a})=2^\rbit(1-2^{-\rbit})=2^\rbit -1$. Fixing $\delta$ so that $(1-\delta)E(X_{\o a})=\frac{2}{3}\cdot 2^\rbit$ we apply Chernoff's inequality for the lower tail to get $\pr{X_{\o a}\leq \frac{2}{3}\cdot 2^\rbit} \leq \exp\left(-\frac{E(X_{\o a})\cdot\delta^2}{2}\right) \leq \exp\left(-\frac{64n-1}{32}\right) <2^{-n}$. Consequently probability of the fact that no $\lambda \in \Lambda$ leads to $t_\lambda$ with desired property is bounded by $\pr{\bigvee_{\o a \in \set{0,1}^n} X_{\o a}\leq \frac{2}{3}2^\rbit} <2^n\cdot2^{-n}=1$, as required. \section{Algorithms} \label{sec:algo} In Sections \ref{sec:cc2} and \ref{sec:cchm} we have seen how to construct subexponential conjunctions and how it helps to encode 3-CNF SAT in satisfiability of modular circuits. Obviously better upper bounds for the size of circuits realizing \textsf{AND}\xspace \ (and consequently 3-CNF formulas) give rise to higher complexity of $\cch{h}{m}$-SAT. In particular a polynomial upper bound for the size of \textsf{AND}\xspace would show \textsf{NP}-complete\xspace{ness} of $\cch{h}{m}$-SAT. Although, in Section \ref{sec:cc2-prob} we have shown that \textsf{AND}\xspace can be realized by a probabilistic $\cch{h}{m}$-circuits of polynomial size (provided $h,\omega(m)\geq 2$), we strongly believe that this cannot be done without those random bits. In this section we analyze how the lower (superpolynomial) bound for the size of circuits realizing \textsf{AND}\xspace \ can be used to (subexponentially) bound the complexity of $\cch{h}{m}$-SAT from above. To this end for fixed depth $h$ and modulus $m$ by $\gam_{h,m}(n)$ we denote the size of the smallest possible $\cch{h}{m}$-circuit computing $\textsf{AND}\xspace_n$. Note first that (according to Proposition \ref{shallow-narrow-and}) if $h=1$ or $\omega(m)=1$ the values $\gam_{h,m}(n)$ are defined only for finitely many first integers $n$. However, independently of $h$ and $m$, Fact \ref{trzy-jeden} ensures us that $\gam_{h,m}$ is at most exponentially large and therefore computable in \textsf{2-EXPTIME}\xspace. In our considerations we need much better bound for the time needed to compute $\gam_{h,m}(n)$. Note that the functions bounding sizes of the circuit constructed in Propositions \ref{prp:force-narrow}(\ref{force-narrow-and}), \ref{prp:single-primes}, \ref{prp:r-and-h-pol}, \ref{prp:omega-omegabar} and Theorem \ref{thm:omegabar} are of the form $2^{O(n^{1/\delta}\log n)}$ and can be computed in \textsf{PTIME}\xspace. Although we cannot guarantee that $\gam_{h,m}$ is \textsf{PTIME}\xspace-computable, it would be enough for us to bound it from below by such a function (which is still close enough to $\gam_{h,m}$). Now we provide two algorithms for satisfiability of $\cch{h}{m}$-circuits, a deterministic one and a slightly faster randomized one with running times depending on the growth rate of $\gam_{h,m}$, or rather it inverse. For a partial increasing function $f: \mathbb{N} \longrightarrow \mathbb{N}$ by $\pre f k$ we mean the largest $n$ with $f(n)\leq k$. \begin{thm} \label{thm:algo} Suppose that $\gam_{h,m}$ has \textsf{PTIME}\xspace-computable increasing lower bound $f$. 
Then there are two algorithms for checking if an $n$-ary $\cch{h}{m}$-circuit is satisfiable: \begin{itemize} \item a deterministic one with the running time $O\left(\poly\card{\Gamma} + 2^{\pree f {\card{\Gamma}}\cdot\log n} \cdot \card{\Gamma}\right)$, \item a randomized one with the running time $O\left(\poly\card{\Gamma} + 2^{\pree f {\card{\Gamma}}} \cdot \card{\Gamma}\right)$. \end{itemize} \end{thm} \begin{proof} Our deterministic algorithm is based on a brute-force search for a satisfying tuple in a relatively small set $S$ of size $n^{\pree f {\card{\Gamma}}}$ consisting of all the tuples $a\in \set{0,1}^n$ with at most $\pree f {\card{\Gamma}}$ ones. To determine this set we first need to know the value $\pree f {\card{\Gamma}}$. But since $f$ is \textsf{PTIME}\xspace-computable this can be done in $\poly\card{\Gamma}$ steps. This together with checking whether $S$ contains a satisfying tuple takes $\poly\card{\Gamma} + O\left(\card{\Gamma} \cdot n^{\pree f {\card{\Gamma}}} \right)$ steps, as claimed. \medskip Since $\pree \gam_{h,m} {} \leq \pree f {}$ we are left with showing that if $\Gamma$ is satisfiable then it can be satisfied by a tuple $a\in \set{0,1}^n$ with $\card{\pre a 1} \leq \pree \gam_{h,m} {\card{\Gamma}}$. Suppose then that $a$ is a non-zero satisfying tuple with the minimal number of ones. By this minimality we know that $\Gamma$ with all the inputs with indices outside $\pre a 1$ set to $0$ behaves like the $\card{\pre a 1}$-ary \textsf{AND}\xspace. Thus $\gam_{h,m} \card{\pre a 1} \leq \card{\Gamma}$ so that the tuple $a$ has at most $\pree \gam_{h,m} {\card{\Gamma}}$ ones. \medskip On the other hand our second algorithm, the probabilistic one, is based on randomly choosing sufficiently many inputs so that the probability of having a satisfying one among them exceeds $1/2$, if there is any such satisfying tuple at all. We claim that $2^{\pree \gam_{h,m} {\card{\Gamma}}}$ samples suffices. Indeed, if $\Gamma$ is constant then any single sample witnesses its (un)satisfiability. Remark \ref{rmk-reducing-stack} allows us to modify a nonconstant circuit $\Gamma$ to get $(n-k)$-ary spike circuit $\Gamma'$ for some $k\leq \log\card{\stack{\Gamma}}$, so that $\card{\Gamma'}\geq \gam_{h,m}(n-k)$. Consequently $\card{\Gamma}\geq \card{\Gamma'}\geq \gam_{h,m}(n-\log\card{\stack{\Gamma}})$, which together with $\card{\pre \Gamma 1} \geq \card{\stack{\Gamma}}$ gives ${\card{\pre \Gamma 1}}/{2^n} \geq 2^{-\pree \gam_{h,m} {\card{\Gamma}}}$. This simply means that we will find a tuple from $\pre \Gamma 1$ among $2^{\pree \gam_{h,m} {\card{\Gamma}}}$ samples. But again, to calculate how long we need to sample we increase $2^{\pree \gam_{h,m} {\card{\Gamma}}}$ to $2^{\pree f {\card{\Gamma}}}$ and use the fact that $f$ is \textsf{PTIME}\xspace-computable. \end{proof} From our proof of Theorem \ref{thm:algo} we get the following generalization of Corollary \ref{cor:bal1}. \begin{cor} \label{cor:bal2} The balance of a $\cch{h}{m}$-circuit $\Gamma$ is at least $2^{1-\pree \gam_{h,m} {\card{\Gamma}}}$. \end{cor} Observe here, that like in Corollary \ref{cor:lan-bal1}, we can use the function $\gam_{h,m}$ to bound from below the number of words (of a given length) in a language recognizable by polynomial size $\cch{h}{m}$-circuits. In particular the suspected lower bound for $\gam_{h,m}$ of the form $2^{\Omega(n^\delta)}$ translates into the bound $2^{n-O(\log^{1/\delta} n)}$. 
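\medskip
For the reader who prefers pseudocode, the two procedures behind Theorem \ref{thm:algo} can be summarized as follows. This is only an illustrative Python sketch (not taken from the construction itself): the circuit is treated as a black-box predicate \texttt{gamma} on $\set{0,1}^n$, and the threshold \texttt{K}, standing for $\pree f {\card{\Gamma}}$, is assumed to have been computed beforehand from the \textsf{PTIME}\xspace-computable lower bound $f$.
\begin{verbatim}
from itertools import combinations
import random

def det_sat(gamma, n, K):
    # Deterministic search: try every 0/1 tuple with at most K ones.
    # By the minimality argument in the proof of Theorem thm:algo, a
    # satisfiable circuit has a satisfying tuple of weight at most K.
    for k in range(K + 1):
        for ones in combinations(range(n), k):
            a = [0] * n
            for i in ones:
                a[i] = 1
            if gamma(tuple(a)):
                return tuple(a)
    return None                      # unsatisfiable

def ran_sam(gamma, n, K):
    # RanSam: sample 2^K uniformly random inputs; errors are one-sided.
    for _ in range(2 ** K):
        a = tuple(random.randint(0, 1) for _ in range(n))
        if gamma(a):
            return a
    return None                      # declared unsatisfiable (may err)
\end{verbatim}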
Although our random sampling algorithm RanSam, described in the proof of Theorem \ref{thm:algo}, is rather simple, the proof itself shows that a bigger lower bound for $\gam_{h,m}$ allows us to reduce the number of samples in RanSam. Below we show that this connection is two-sided.
\begin{prp} \label{prp:ransam} If RanSam works (with probability at least $1/2$) with at most $2^{f\card{\Gamma}}$ samples for some increasing computable function $f$, then $\pre f n \leq \gam_{h,m}(n+1)$. \end{prp}
\begin{proof} We run RanSam on the circuit $\textsf{AND}\xspace_n$ with $2^{f\gam_{h,m}(n)}$ samples, so that the expected number of satisfying tuples found is $2^{f\gam_{h,m}(n)}/2^n$. This procedure, however, has to find, with probability at least $1/2$, the unique satisfying tuple. Thus Markov's inequality yields $2^{f\gam_{h,m}(n)}/2^n \geq 1/2$, so that $\pre f {n-1} \leq \gam_{h,m}(n)$. \end{proof}
Combining Theorem \ref{thm:algo} and Proposition \ref{prp:ransam} we get that the suspected lower bound $2^{\Omega(n^\delta)} \leq \gam_{h,m}$ is equivalent to the upper bound $2^{O(\log^{1/\delta}\card{\Gamma})}$ for the running time of RanSam. In fact, any superpolynomial lower bound for $\gam_{h,m}$ translates into a substantially subexponential (i.e. at most $2^{\card{\Gamma}^{o(1)}}$) number of samples, and vice versa. As of now only slightly superlinear lower bounds $\Omega(n\cdot \varepsilon(n))$ for $\gam_{h,m}$ are known, where $\varepsilon(n)$ is an extremely slowly growing function (see \cite{LowBounds}). Although the functions $\varepsilon(n)$ depend on $h$ and $m$, a careful inspection of their description shows that the inverse of $n\cdot \varepsilon(n)$ is always bounded by $O(n/\varepsilon(n))$. This, together with Theorem \ref{thm:algo}, shows the following result.
\begin{thm} \label{thm:cc-vs-ac} Satisfiability of $\cch{h}{m}$-circuits $\Gamma$ is solvable in probabilistic $2^{O(\card{\Gamma}/\varepsilon(\card{\Gamma}))}$ time. \end{thm}
This Theorem stands in stark contrast to the lower bound $2^{\Omega(\card{\Gamma})}$ (provided by the randomized version of ETH \cite{rETH09}) for probabilistic algorithms for satisfiability of $AC^0$-circuits.
\medskip
We conclude this section by arguing that (under an additional assumption about effective coding of 3-CNF formulas by modular circuits) the running time of RanSam is hard to beat. Our heuristic assumption simply says that there is a \textsf{PTIME}\xspace algorithm that turns 3-CNF formulas $\Phi$ with $\cll$ clauses into $\cch{h}{m}$-circuits of size bounded by $O(\gam_{h,m}(c\cll))$. Assuming also that $\gam_{h,m}$ (or some of its $\Theta$-equivalents) is \textsf{PTIME}\xspace-computable, we know that RanSam runs with $2^{O(\pree \gam_{h,m} {\card{\Gamma}})}$ samples. On the other hand ETH\xspace, applied to the circuit $\Gamma$ produced from a 3-CNF formula by the algorithm supplied by our heuristic assumption, gives an integer $d>0$ such that $\cch{h}{m}$-SAT cannot be solved in time $O(2^{\frac{1}{d}\pree \gam_{h,m} {\card{\Gamma}}})$. Thus the best imaginable algorithm solving $\cch{h}{m}$-SAT has its running time bounded by a polynomial applied to the running time of RanSam.
\section{Concluding remarks and applications}
In view of the results in Section \ref{sec:cchm}, in particular the spectacular role played by $\varpi(m)$, as well as the ease of increasing the degree of the root by $1$, it seems hard to state reasonable conjectures for the asymptotic behaviour of the $\gam_{h,m}$'s.
As of now, for $h=2$ the degree of the root (occurring in the exponent) is at least $\omega(m)$ (Proposition \ref{prp:force-narrow}), and for $h\geq 3$ it is at least $(\omega-1)(h-2)+\varpi$ (Theorem \ref{thm:omegabar}). However, for the `majority' of potential moduli $m$ we know that $\varpi(m)$ is pretty close to $\omega(m)$, so that this degree is almost $(\omega-1)(h-1)+1$ (and coincides with $\omega(m)$ whenever $h=2$). Due to the fact that the prime factorization (i.e. the number $\omega(m)$) may contribute fully to this degree while the depth $h$ contributes the factor $h-1$, it seems natural to suspect that the bound for $\log\gam_{h,m}(n)$ could be of the form $n^{1/(\omega(h-1))}\log n$.
Another remark we want to make here concerns the difference between the circuits of the form $\cch{h}{p;q;p;q;\ldots}$ (with $p\neq q$) and $\cch{h}{p\cdot q}$. In the latter case we have $\varpi=\omega=2$, so that the bound for the considered degree is $h$, while Proposition \ref{prp:single-primes} gives degree $h-1$ in the former case. Moreover, it seems that there is no room for improving this $h-1$ in this case. This difference in the locations of primes on different levels is even more striking for $\cch{2}{p;m}$ and $\cch{2}{m;p}$, whenever $m$ has $r\geq 2$ prime divisors other than $p$. In the first case we can actually argue, as in the proof of Proposition \ref{prp:force-narrow}, to get the upper bound $2^{n^{1/r}\log n}$ for $\gam_{h,m}$, while \cite{BST90,ST06} give a $2^{\Omega(n)}$ lower bound in the case of $\cch{2}{m;p}$.
\medskip
The technique we have developed for proving Proposition \ref{prp:force-narrow} can be used to determine (modulo ETH\xspace) the complexity of solving equations over the dihedral groups $\m D_{2k+1}$, i.e. the groups of symmetries of regular polygons with an odd number of sides. Some of the variables in these equations are already pre-evaluated (as otherwise every equation has a trivial solution with all the variables set to the neutral element of the group). This is equivalent to considering polynomials (instead of terms) over groups. The decision version of this problem for the group $\m G$ is denoted by \textsc{PolSat}$(\m G)$. Analogously, by $\poleqv{G}$ we mean the problem of deciding whether two polynomials over $\m G$ define the same function. Note here that from the paper \cite{goldman-russell} of Goldmann and Russell we know that \textsc{PolSat} is \textsf{NP}-complete\xspace for nonsolvable groups and in \textsf{PTIME}\xspace for nilpotent groups. Moreover, the paper \cite{ikkw} partially fills this gap by showing that (modulo ETH\xspace) \textsc{PolSat}$(\m G)$ is not in \textsf{PTIME}\xspace unless $\m G$ has Fitting length at most $2$, i.e. $\m G$ is a wreath product of two nilpotent groups. This paper refutes a long-standing belief that \textsc{PolSat} for all solvable groups is in \textsf{PTIME}\xspace. The conjecture was based on many examples of groups that are in fact 2-nilpotent. The very recent paper of Földvári and Horváth \cite{foldvari-horvath} summarizes most of these examples by showing that \textsc{PolSat}$(\m G)$ is in \textsf{PTIME}\xspace whenever $\m G$ is a semidirect product of a $p$-group and an abelian group. Note here that the dihedral groups $\m D_{p^k}$, with prime $p$, fall into this realm.
On the other hand, our characterization below dismisses such a speculation about tractability of \textsc{PolSat} for groups of Fitting length 2 (unless ETH\xspace fails).
\begin{thm} \label{thm:dm} If ETH\xspace holds then for each odd integer $m\geq 3$ the problem \textsc{PolSat}$(\m D_m)$ is in \textsf{PTIME}\xspace iff $\omega(m)=1$. \end{thm}
\begin{proof} Recall that the dihedral group $\m D_m$ is generated by two elements, a rotation $\rho$ (with angle $2\pi/m$) and a reflection $\sigma$ satisfying $\rho^m=1, \sigma^2=1$ and $\sigma\rho=\rho^{-1}\sigma$. This means that $\m D_m$ has $2m$ elements: $m$ rotations $\rho^0, \rho^1, \rho^2,\ldots, \rho^{m-1}$ and $m$ reflections $\sigma, \sigma\rho, \sigma\rho^2,\ldots, \sigma\rho^{m-1}$. If $\omega(m)=1$ then we have already noted that \cite{foldvari-horvath} puts \textsc{PolSat}$(\m D_{m})$ into \textsf{PTIME}\xspace. Now suppose $m=p_1^{\alpha_1}\cdot\ldots\cdot p_{\omega}^{\alpha_{\omega}}$, where the $p_j$'s are pairwise different odd primes and $\omega\geq 2$. Since the rotations form a cyclic group isomorphic to $\mathbb{Z}_m$, for each $j=1,\ldots,\omega$ there is a rotation, say $\rho_j$, generating a cyclic subgroup of order $p_j$. Define unary polynomials (with $j=1,\ldots,\omega$) by putting $\mathsf{e}(x) = \sigma(\sigma x^m)^m, \mathsf{e}_j(x) = x^{2m/p_j}$ and $\mathsf{b}_j(x) = (\rho_j \mathsf{e}(x) \rho_j^{-1} \mathsf{e}(x)^{-1})^{\frac{m+1}{2}}$ and observe that the range of $\mathsf{e}$ is $\set{1,\sigma}$, i.e. a group isomorphic to $\mathbb{Z}_2$, while $\mathsf{e}_j$ maps the group $\m D_m$ onto its cyclic subgroup $\set{1,\rho_j,\rho_j^2,\ldots,\rho_j^{p_j-1}}$ isomorphic to $\mathbb{Z}_{p_j}$. Moreover the polynomial $\mathsf{b}_j$ maps the group $\set{1,\sigma}$ onto $\set{1,\rho_j} \subseteq \set{1,\rho_j,\rho_j^2,\ldots,\rho_j^{p_j-1}}$ and therefore $\mathsf{b}_j$ can be used to build $\mathbb{Z}[2,p_j]$-expressions as polynomials of $\m D_m$. Now we adapt the proof of Proposition \ref{prp:force-narrow}(1) to our setting. For a 3-CNF formula $\Phi$ we borrow the $\mathbb{Z}[2,p_j]$-expressions by putting $t^\Phi_j=t^\Phi_{2,p_j}$ to build the polynomial $T^\Phi$, but we modify the original definition (\ref{tfi}) to read \[ T^\Phi(x_1,\ldots,x_n) = \sum_{j=1}^{\omega} \ t^\Phi_j(x_1,\ldots,x_n) \] where the sum is computed in the direct sum $\bigoplus_{j=1}^{\omega} \mathbb{Z}_{p_j}$ identified with a subgroup of the group $\mathbb{Z}_m$ of all rotations. Now we simply transform the 3-CNF formula $\Phi$ into the equation $T^\Phi(\o x)=0$, with $0$ being the neutral element of both $\mathbb{Z}_m$ and $\m D_m$. To see that $\Phi$ is satisfiable iff the corresponding equation has a solution in $\m D_m$, we simply go back and forth between the boolean values and the elements of $\m D_m$ by identifying the rotations, i.e. the elements of $\mathsf{e}^{-1}(1)$, with the boolean value true and the reflections, i.e. the elements of $\mathsf{e}^{-1}(\sigma)$, with the boolean value false. Obviously, as previously, the length of $T^\Phi$ is bounded by $2^{O(\sqrt[\omega]{\cll}\log \cll)}$ where $\cll$ is the number of clauses in $\Phi$. Thus ETH\xspace yields that \textsc{PolSat}$(\m D_m)$ cannot be in \textsf{PTIME}\xspace.
\end{proof}
An analysis of the complexity of \textsc{PolSat} over all dihedral groups $\m D_m$ is deferred to our paper \cite{ikk:dihedral}. In particular, our method used in Theorem \ref{thm:dm} is applied there to a more subtle situation in which $m$ is even but has at least two different odd prime divisors. Another feature of the dihedral groups $\m D_m$ is that $\poleqv{D_m}$ is in \textsf{PTIME}\xspace for all $m$, see \cite{burris-lawrence}. Thus Theorem \ref{thm:dm} provides the first examples of finite groups with tractable \poleqv{} and intractable \textsc{PolSat} (modulo ETH\xspace). Note here that every group with tractable \textsc{PolSat} has tractable \poleqv{}, as to decide whether two polynomials $t,s$ are equal we simply check that none of the $\card{G}-1$ equations of the form $ts^{-1}=a$ (with $a$ ranging over $G-\set{1}$) has a solution.
\medskip
Almost the same argument can be used in the setting of multivalued circuit satisfiability \csat{} and circuit equivalence \ceqv{}, as defined in \cite{ik:lics18}. Such multivalued circuits are built over a fixed finite algebra $\m A$, so that the gates here simply compute the basic operations of the algebra. The paper \cite{ik:lics18} initiated a systematic project of characterizing finite algebras $\m A$ with $\csat{\m A}$ in \textsf{PTIME}\xspace and provided a partial characterization for algebras from congruence modular varieties. However, a gap somewhat similar to the one for \textsc{PolSat} over groups was left open, namely the unresolved complexity of \csat{} and \ceqv{} for nilpotent but not supernilpotent algebras. In the paper \cite{ikk:lics20} we constructed algebras $\dpp{p_1,\ldots,p_h}$ built over the alternating chain of primes $p_1 \neq p_2 \neq p_3 \neq \ldots \neq p_h$ with \csat{} and \ceqv{} outside \textsf{PTIME}\xspace, provided $h\geq 3$ and ETH\xspace holds. Later the paper \cite{komp:mfcs21} developed these methods to actually force nilpotent algebras with \csat{} or \ceqv{} in \textsf{PTIME}\xspace to be wreath products of two supernilpotent algebras. On the other hand, \cite{ikk:mfcs18} provides examples of such wreath products (that are actually 2-nilpotent) with \csat{} and \ceqv{} in \textsf{PTIME}\xspace. Although \ceqv{} for all 2-nilpotent algebras has been confirmed \cite{kkk} to be in \textsf{PTIME}\xspace, an analogue for \csat{} is blocked by the following example, the proof of which simply repeats the argument for Theorem \ref{thm:dm}.
\begin{ex} \label{ex:dqqq} \csat{} for the following $2$-nilpotent algebras is outside \textsf{PTIME}\xspace, modulo ETH\xspace: \\ For any sequence $p_0,p_1,p_2,\ldots,p_\omega$ of pairwise different primes the algebra $\dpp{p_0;p_1\cdot\ldots\cdot p_\omega}$ is the group $\m Z_{p_0}\times \m Z_{p_1} \times \ldots \times \m Z_{p_\omega}$ endowed with $2\omega+1$ unary operations \[ \begin{array}{ccll} \mathsf{e}_j(x_0,x_1,\ldots,x_\omega) &=& (0,\ldots,0,x_j,0,\ldots,0), &\mbox{ for $j=0,1,\ldots,\omega$,}\\ \mathsf{b}_j(x_0,x_1,\ldots,x_\omega) &=& (0,\ldots,0,b^*_j(x_{0}),0,\ldots,0), &\mbox{ for $j=1,\ldots,\omega$,} \end{array} \] where $b^*_j : \m Z_{p_0} \longrightarrow \m Z_{p_{j}}$ is the function given by $b^*_j(0)=0$ and $b^*_j(a)=1$ otherwise.
\hfill\hfill$\Box$ \end{ex} \section{Easy stuff} \label{sec:easy} \subsubsection*{Proof of Proposition \ref{shallow-narrow-and}} To warm up, note that for any modulus $m$ and $n\geq m$ each sequence $\alpha_1,\ldots,\alpha_n$ of integers contains a nonempty subsequence $\alpha_{i_1},\ldots,\alpha_{i_k}$ that modulo $m$ sums up to $0$. Indeed, either there is $0$ among the sums $\alpha_1, \alpha_1+\alpha_2, \ldots, \alpha_1+\ldots+\alpha_n$ or at least two of them are equal, making their difference , i.e. a shorter nonempty sum, to be $0$. Now, the only (say $n$-ary) gate $\textsf{MOD}\xspace_m^{A}$ in the $\cch{1}{m}$-circuit (that takes $\alpha_i$ times the input $x_i$), after checking if $\sum_{i=1}^n \alpha_i x_i$ belongs to $A$, returns the very same value on the constant sequence $x_1=\ldots=x_n=1$ and its modification obtained by switching $x_{i_1},\ldots,x_{i_k}$ to $0$. This destroys the possibility for $\textsf{MOD}\xspace_m^{A}$ with $n \geq m$ inputs to serve as $\textsf{AND}\xspace_n$. \medskip For $\cch{h}{p^k}$-circuits we induct on $h$ to show how from a particular circuit $\Gamma$ pass to a polynomial $w_\Gamma(\o x)$ over $GF(p)$ so that this polynomial: \begin{itemize} \item computes the circuit $\Gamma$, \item is presented in its sparse representation, \item contains monomials of degree $d^h$ for some constant $d$ depending on $p^k$ only \end{itemize} Having done that, we pick a monomial in $w_\Gamma(\o x)$ of minimal degree and evaluating all the $x_i$'s occurring in this monomial by $1$ and the other $x_i$'s by $0$, we know that $w_\Gamma(\o x)\neq 0$. However $n>d^h$ ensures us that at least one of the $x_i$'s is $0$, contrary to the fact that $\Gamma$ is supposed to compute $\textsf{AND}\xspace_n$. To start our induction for $h=1$ we refer to the paper \cite{bt} of Beigel and Tarui, where Lemma 2.1 supplies us with a polynomial $r_0(x_1,\ldots,x_n)$ over $GF(p)$ that on the boolean values of the $x_i$'s behaves as the gate $\textsf{MOD}\xspace_{p^k}^{\set{0}}$. In fact in the proof of that Lemma it is shown that the polynomial \[ r_0(x_1,\ldots,x_n) = \prod_{j=1}^{k-1} \left( 1 - \left( \sum_{I\subseteq\set{1,\ldots,n}, \card{I}=p^j} \quad \prod_{i\in I} x_i \right)^{p-1} \right) \] does the job. It is easy to see that the degree $d_0$ of $r_0(x_1,\ldots,x_n)$ is bounded by $p^{k+1}$, independently of $n$. Since on the boolean values of the $x_i$'s the polynomial $r_0(\o x)$ can be represented as \[ r_0(x_1,\ldots,x_n) = \prod_{j=1}^{k-1} \left( 1 - \binom{\sum_{i=1}^n x_i}{p^j}^{p-1} \right) \] we get that $r_c(x_1,\ldots,x_n)=r_0(x_1-c,x_2,\ldots,x_n)$ computes the gate $\textsf{MOD}\xspace_{p^k}^{\set{c}}$. Consequently $r_A(\o x) = 1 - \prod_{c\in A} \left(1-r_c(\o x)\right)$ computes $\textsf{MOD}\xspace_{p^k}^{A}$. The degree of the polynomial $r(\o x)$ is bounded by $d = \card{A}\cdot d_0 \leq p^{2k+1}$. \medskip Now assume that a $\cch{h}{p^k}$-circuit $\Gamma$ composes on the final level $\cch{h-1}{p^k}$-circuits $\Gamma_1,\ldots,\Gamma_m$ by the gate $\textsf{MOD}\xspace_{p^k}^{A}$. To get the required polynomial $w_\Gamma(\o x)$ we simply plug into $m$-ary $r_A(y_1,\ldots,y_m)$ the polynomials $w_{\Gamma_1},\dots,w_{\Gamma_m}$. Obviously the degree of $w_\Gamma(\o x)$ is bounded by the maximal degree of the $w_{\Gamma_j}$'s (i.e. by $d^{h-1}$) multiplied by the degree of $r_A(\o y)$ (i.e. by $d$) which gives the required bound of $d^h$. 
\hfill\hfill$\Box$
\subsubsection*{Proof of Fact \ref{bar-poly}}
For an $n$-tuple of variables $\o x = (x_1,\ldots, x_n)$ we define \[ v_j(\o x) = \sum_{1\leq i_1 < i_2 < \ldots <i_j \leq n} x_{i_1}\ldots x_{i_j} \] to be the sum of all multilinear monomials of degree $j$ over the variables $\o x$. In particular $v_0(\o x)=1$. We will concentrate on their behaviour only for the boolean values $0,1$, so we let $v'_j : \set{0,1}^n \longrightarrow \mathbb{Z}_p$ be the appropriate restriction of $v_j$. First observe that $v'_0,v'_1,\ldots,v'_n$ are linearly independent members of the vector space $\mathbb{Z}_p^{2^n}$. Indeed, if $\sum_{j=0}^n \alpha_j v'_j = 0$ then evaluating at $\o x = (0,\ldots,0)$ we get $\alpha_0 =0$. Moreover, inducting on $j$, we evaluate on $\o x\in \set{0,1}^n$ with $1$ occurring exactly $j$ times to get $\alpha_j=0$. Now, fix $n \geq m = p^k$ and concentrate on the $m$-dimensional subspace $V_m$ of the $2^n$-dimensional space $\mathbb{Z}_p^{2^n}$ spanned by $v'_0,\ldots,v'_{m-1}$. One can easily see that each $v'_j$, and therefore each $v\in V_m$, is fully symmetric, i.e. $v(x_1,\ldots, x_n) = v(x_{\sigma 1},\ldots, x_{\sigma n})$ for all permutations $\sigma$. This symmetry allows us to define $v[i]$ to be $v(1,\ldots,1,0,\ldots,0)$ with exactly $i$ ones. Slightly more effort is required to show that each $v'_j$ (and therefore each $v\in V_m$) is $m$-periodic, i.e. $v[i+m]=v[i]$. This reduces to showing that $\binom{i+p^k}{j} =\binom{i}{j}$ modulo $p$ for $j=0,1,\ldots,p^k-1$, or in other words, that the prefixes of length $p^k$ of the $i$-th and $(i+p^k)$-th rows of Pascal's triangle coincide modulo $p$. However, due to the fact that an entry of a row of Pascal's triangle depends only on two entries of the previous row, we are left with noticing that the mentioned coincidence holds for $i=0$, or in other words that \[ \binom{p^k}{j} \stackrel{p}{\equiv} \left\{ \begin{array}{ll} 1, &\mbox{for $j=0$},\\ 0, &\mbox{for $j=1,\ldots,p^k-1$}. \end{array} \right. \] To see that $V_m$ actually consists of all fully symmetric $m$-periodic functions $\set{0,1}^n \longrightarrow \mathbb{Z}_p$, first note that each such function can be obtained as a linear combination (over $\mathbb{Z}_p$) of $w_0,w_1,\ldots,w_{m-1}$, where \[ w_j[i] = \left\{ \begin{array}{ll} 1, &\mbox{for $i \equiv j \mod p^k$},\\ 0, &\mbox{else}. \end{array} \right. \] This shows that the vector space of all fully symmetric $m$-periodic functions has dimension at most $m$, so that it has to coincide with $V_m$. This observation allows us to represent the function $w' : \set{0,1}^n \longrightarrow \mathbb{Z}_p$ that behaves as $w$ in the statement of the Fact as a linear combination of the $v'_j$'s. This can be used to represent $w$ itself as the very same linear combination of the $v_j$'s (with $j=0,1,\ldots,m-1$), showing that the degree of $w$ can be kept below $p^k$. \hfill\hfill$\Box$
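\medskip
\noindent As a quick sanity check of the congruence displayed above, and of the resulting $p^k$-periodicity of the row prefixes of Pascal's triangle, one may run the following small Python snippet; it is of course not part of the argument, only an illustration for small parameters.
\begin{verbatim}
from math import comb

for p in (2, 3, 5):
    for k in (1, 2, 3):
        m = p ** k
        # binom(p^k, 0) = 1 and binom(p^k, j) = 0 mod p for 0 < j < p^k
        assert comb(m, 0) % p == 1
        assert all(comb(m, j) % p == 0 for j in range(1, m))
        # prefixes of length p^k of rows i and i + p^k coincide mod p
        for i in range(2 * m):
            assert all(comb(i + m, j) % p == comb(i, j) % p
                       for j in range(m))
print("checked")
\end{verbatim}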
\section{Introduction} In this paper we study the $\mu$-$\lambda$ family of equations \begin{align} &m_t(t,\theta) + u(t,\theta) m_{\theta}(t,\theta) + \lambda u_{\theta}(t,\theta) m(t,\theta) = 0,\label{mubgeneral} \\ &m(t,\theta) = \sigma(t) - u_{\theta\theta}(t,\theta), \qquad \sigma(t) = \int_{S^1} u(t,\theta)\,d\theta \label{momentumdef} \\ &u(0,\theta) = u_0(\theta), \qquad t\ge 0, \;\theta\in S^1 = \mathbb{R}/\mathbb{Z}.\label{mubICs} \end{align} Here $u(t,\theta)$ is a velocity field on the circle, and $m(t,\theta)$ defined by \eqref{mubICs} is called its \emph{momentum} or \emph{vorticity}. The two special cases we care about the most are: \begin{itemize} \item $\lambda=2$, the $\mu$-Camassa-Holm (or sometimes $\mu$-Hunter-Saxton) equation, and \item $\lambda=3$, the $\mu$-Degasperis-Procesi equation. \end{itemize} Our interest is in whether solutions exist for all time $t\ge 0$, or if they break down at some $T>0$, given an initial condition $u_0$. We will work with solutions $u(t,\cdot)\in C^2(S^1)$, assuming that $u_0\in C^2$ and $m_0\in C^0$. Integrating \eqref{mubgeneral} over $\theta\in S^1$ gives, after an integration by parts, the fact that $\sigma'(t)=0$, so that for the remainder of the paper we will denote \begin{equation}\label{sigmadef} \sigma = \int_0^1 u_0(\theta)\,d\theta. \end{equation} If $u_0$ is such that $\sigma=0$ in equation \eqref{sigmadef}, then the breakdown picture is mostly understood by work of Sarria-Saxton~\cite{SS1,SS2}, who showed that if $\lambda\in [-1,1]$ then all solutions of \eqref{mubgeneral}--\eqref{mubICs} are global in time; if $1<\lambda\le 5$, then there exist $u_0$ such that solutions break down with $u_{\theta}(t,\theta_*)$ approaching negative infinity for some $\theta_*\in S^1$; and for all other values of $\lambda$, there is an initial condition such that breakdown happens everywhere. For $\lambda=2$ with $\sigma = 0$, the equation becomes the Hunter-Saxton equation~\cite{HunterSaxton}, and its explicit solution together with the geometric interpretation in terms of spherical geodesics were given by Lenells~\cite{Lenells1}. In particular all solutions break down in finite time with $u_{\theta}\to -\infty$ on a discrete set. If $\lambda=3$ with $\sigma = 0$, the equation \eqref{mubgeneral} is the second derivative of the inviscid Burgers' equation $u_t + uu_{\theta}=0$, for which all solutions break down in finite time as pointed out in Lenells-Misio{\l}ek-Ti{\u g}lay~\cite{LMT}. We will review these computations in Section \ref{background}. When $\sigma\ne 0$ the situation is more complicated: for some smooth $u_0$ the solution may break down, while for other smooth $u_0$ the solution exists globally. Here we settle the question of precisely which initial conditions lead to breakdown for the two simplest and most important special cases $\lambda=2$ and $\lambda=3$. This theorem is inspired by the result of McKean~\cite{mckeanbreakdown}, who proved the same for the Camassa-Holm equation, which is \eqref{mubgeneral} but with \eqref{momentumdef} replaced by $m = u - u_{\theta\theta}$. Our proof is inspired by that one, and the simplified version given in \cite{JNZ}, but we introduce a new central-force model which describes the equation more geometrically, and for which the conserved angular momentum is precisely the momentum $m$ defined by \eqref{momentumdef}. \begin{theorem}\label{mainthm} Suppose the initial velocity $u_0$ is $C^2$ on $S^1$, and let $m_0(\theta) = \sigma - u_0''(\theta)$ be the initial momentum. 
Assume that either $\lambda=2$ or $\lambda=3$. Then the solution $u$ of \eqref{mubgeneral}--\eqref{mubICs} exists and remains in $C^2$ for all time if and only if $m_0$ never changes sign on $S^1$. If $m_0$ does change sign, then $u_{\theta}(t,\theta_*)$ approaches negative infinity in finite time at some $\theta_*\in S^1$ where $m_0$ changes from positive to negative. \end{theorem} The fact that $m_0\ge 0$ or $m_0\le 0$ everywhere implies global existence is well-known: if $\lambda=2$ it was proven in the original paper of Khesin-Lenells-Misio{\l}ek~\cite{KLM} which introduced the $\mu$CH equation, and if $\lambda=3$ it was proven in the original paper of Lenells-Misio{\l}ek-Ti{\u g}lay~\cite{LMT} which introduced the $\mu$DP equation. We give a different proof which makes a bit more clear geometrically why this works. On the other hand, while there are several results on sufficient conditions for breakdown of either the $\mu$CH or $\mu$DP equations (see e.g., \cite{FLQ} and \cite{GLZ}), they do not capture all cases. The similarity of Theorem \ref{mainthm} to the result of McKean suggests that a general principle applies: those equations which have the form \eqref{mubgeneral} for some function $m$, given as a pseudodifferential operator in terms of $u$, should have breakdown behavior which depends on the sign of the initial momentum $m_0$. It seems likely that with a bit more work, one can apply the technique here to similar families of PDEs to obtain the complete breakdown picture. The special cases $\lambda=2$ and $\lambda=3$ in \eqref{mubgeneral}--\eqref{mubICs} are especially interesting because they are both completely integrable, with bihamiltonian structure generating infinitely many conservation laws: see \cite{KLM} and \cite{LMT} respectively. Aside from the conservation of average velocity \eqref{sigmadef}, which is true regardless of $\lambda$, we have for $\lambda=2$ that $\int_{S^1} u_{\theta}(t,\theta)^2 \, d\theta$ is constant, and for $\lambda=3$ that $\int_{S^1} u(t,\theta)^2\,d\theta$ is constant. We will not need any of the other conservation laws, which in general are not coercive. However one can use the complete integrability to obtain the global existence result, as shown in McKean~\cite{mckeanglobal} for the Camassa-Holm equation and sketched in Ti{\u g}lay~\cite{Tiglay} for the $\mu$-Camassa-Holm equation. The author thanks Martin Bauer, Boris Khesin, Alice Le Brigant, Jae Min Lee, Stephen Marsland, Gerard Misio{\l}ek, Cristina Stoica, Vladimir {\u S}ver{\'a}k, and Pearce Washabaugh for very valuable discussions, as well as all the organizers and participants of the BIRS workshop 18w5151 and the Math in the Black Forest workshop for listening to early versions of this work. The work was done while the author was partially supported by Simons Foundation Collaboration Grant \#318969. \section{Background}\label{background} Equation \eqref{mubgeneral}, for a general $m= L(u)$ defined by a pseudodifferential operator $L$ in terms of $u$, is a generalization of the Euler-Arnold equation. For $\lambda=2$ it is exactly the Euler-Arnold equation: it describes the evolution of geodesics under a right-invariant Riemannian metric on the diffeomorphism group $\Diff(S^1)$ of the circle, where the metric is given at the identity by \begin{equation}\label{L2norm} \langle u, u\rangle_{\id} = \int_{S^1} u L u \, d\theta. 
\end{equation} If $L$ is positive-definite, this defines a Riemannian metric, and the actual geodesic is found by solving the flow equation \begin{equation}\label{flowequation} \eta_t(t,\theta) = u\big(t,\eta(t,\theta)\big), \qquad \eta(0,\theta) = \theta. \end{equation} Paired with \eqref{mubgeneral}, this is a second-order differential equation for $\eta$; the decoupling is an expression of Noether's theorem due to the right-invariance. The Camassa-Holm equation with $m=u-u_{\theta\theta}$ is the best-known example in one dimension; in higher dimensions one gets the Euler equations of ideal fluid mechanics and a variety of other equations of continuum mechanics. See surveys in \cite{AK1999}, \cite{KW}, \cite{KLMP2013} for other examples. When $\lambda=2$ and $L$ is nonnegative but not strictly positive, the equation may describe geodesics on quotient spaces of $\Diff(S^1)$, modulo a quotient group generated by the kernel of $L$; see Khesin-Misio{\l}ek~\cite{KMhomogen} for the requirement. Examples include the Euler-Weil-Petersson equation~\cite{gaybalmazratiu} and the Hunter-Saxton equation. For other values of $\lambda$, the quadratic form \eqref{L2norm} is not necessarily conserved, and if not then the equation \eqref{mubgeneral} does not represent the equation for geodesics in a Riemannian metric. However it can still be interpreted as a geodesic for a non-Riemannian connection; see \cite{KLM} and \cite{EschernonRiem} for details on this construction in the present cases, and \cite{tiglayvizman} for the general situation. A well-known example is the Okamoto-Sakajo-Wunsch equation~\cite{OSW}, where $m = Hu_{\theta}$ in terms of the Hilbert transform $H$ (if $\lambda=-1$ it becomes the well-known De Gregorio equation~\cite{degregorio}) which are considered the simplest one-dimensional models for vorticity growth in the 3D Euler equation. We will return to this family at the end of the paper. On the other hand if $m = -u_{\theta\theta}$ then \eqref{mubgeneral} is the generalized Proudman-Johnson equation, studied in \cite{SS1,SS2}, which is related to self-similar solutions of the Euler equations of fluids. What all these equations have in common is the conservation of vorticity property, which we describe as follows. Observe that by the chain rule and the definition \eqref{flowequation}, we have $$ \frac{\partial}{\partial t} m\big(t, \eta(t,\theta)\big) = m_t\big(t, \eta(t,\theta)\big) + u\big(t,\eta(t,\theta)\big) m_{\theta}\big(t,\eta(t,\theta)\big).$$ Furthermore differentiating \eqref{flowequation} in $\theta$ yields \begin{equation}\label{flowderiv} \eta_{t\theta}(t,\theta) = u_{\theta}\big(t,\eta(t,\theta)\big) \, \eta_{\theta}(t,\theta). \end{equation} Using both in \eqref{mubgeneral} shows that $$ \frac{\partial}{\partial t}\Big( \eta_{\theta}(t,\theta)^{\lambda} m\big(t,\eta(t,\theta)\big) \Big) = 0,$$ which shows that the vorticity $m$ is transported via \begin{equation}\label{vorticitytransport} \eta_{\theta}(t,\theta)^{\lambda} m\big(t,\eta(t,\theta)\big) = m_0(\theta). \end{equation} This is a consequence only of \eqref{mubgeneral}, and is true regardless of whether $m$ is related to $u$ by \eqref{momentumdef} or not. As long as $\eta$ remains a diffeomorphism of the circle, we will have $\eta_{\theta}>0$, so that the sign of $m$ is preserved: for each $\theta$, the transported vorticity $m\big(t,\eta(t,\theta)\big)$ along the Lagrangian path $\eta(t,\theta)$ is positive if and only if the initial vorticity $m_0(\theta)$ is positive. 
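For the reader's convenience, we spell out the computation behind \eqref{vorticitytransport}: by \eqref{flowequation} and \eqref{flowderiv},
\begin{align*}
\frac{\partial}{\partial t}\Big( \eta_{\theta}(t,\theta)^{\lambda}\, m\big(t,\eta(t,\theta)\big)\Big)
&= \lambda \eta_{\theta}^{\lambda-1}\eta_{t\theta}\, m\big(t,\eta(t,\theta)\big) + \eta_{\theta}^{\lambda}\Big( m_t\big(t,\eta(t,\theta)\big) + u\big(t,\eta(t,\theta)\big)\, m_{\theta}\big(t,\eta(t,\theta)\big)\Big) \\
&= \eta_{\theta}^{\lambda}\,\big( m_t + u\, m_{\theta} + \lambda u_{\theta}\, m\big)\big(t,\eta(t,\theta)\big) = 0
\end{align*}
by \eqref{mubgeneral}, and integrating in time from $0$ to $t$ gives \eqref{vorticitytransport}.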
Equation \eqref{vorticitytransport} can be inverted to solve for $u\big(t,\eta(t,\theta)\big)$ in terms of $\eta_{\theta}$ and $m_0$, and from there we may obtain a first-order equation for $\eta$ using \eqref{flowequation}. We will not take this approach directly. Instead we study the second order system \eqref{mubgeneral}--\eqref{mubICs}, \eqref{flowequation} by an approximate linearization. That is, we differentiate \eqref{flowderiv} in time to get a second order equation for $\eta_{\theta}$, then change variables to simplify it. We will elaborate on the differential geometric meaning of this at the end of the paper. Now let us specify that $m = \sigma - u_{\theta\theta}$ with the definition \eqref{momentumdef}. Then \eqref{mubgeneral} becomes \begin{equation}\label{explicitmub} \sigma'(t) - u_{t\theta\theta}(t,\theta) - u(t,\theta) u_{\theta\theta\theta}(t,\theta) + \lambda \sigma(t) u_{\theta}(t,\theta) - \lambda u_{\theta}(t,\theta) u_{\theta\theta}(t,\theta) = 0. \end{equation} Integrate this over $\theta\in S^1$: all terms integrate to zero by periodicity, and we obtain $\sigma'(t)=0$, as mentioned in the Introduction. Now integrate what remains in \eqref{explicitmub}, and we obtain \begin{equation}\label{integrated} u_{t\theta} + u u_{\theta\theta} + \frac{\lambda-1}{2} u_{\theta}^2 - \lambda \sigma u = I(t), \end{equation} for some function $I(t)$. Integrating both sides over the entire circle shows that \begin{equation}\label{Iformula} I(t) = \frac{\lambda-3}{2} E(t) - \lambda \sigma^2, \qquad E(t) = \int_{S^1} u_{\theta}(t,\theta)^2 \, d\theta. \end{equation} It is this form of the equation, which makes sense for $u(t,\cdot) \in C^2(S^1)$, that we will view as fundamental. We will see that the kinetic energy term $E(t)$ defined by \eqref{Iformula} controls the global behavior of solutions. In general it is a bit hard to estimate: differentiating and using \eqref{integrated} gives \begin{align*} E'(t) &= 2 \int_{S^1} u_{\theta}u_{t\theta} \, d\theta \\ &= 2I(t) \int_{S^1} u_{\theta} \, d\theta + 2\lambda \sigma \int_{S^1} u u_{\theta} \, d\theta - 2\int_{S^1} u u_{\theta}u_{\theta\theta} \,d\theta - (\lambda-1) \int_{S^1} u_{\theta}^3 \, d\theta \\ &= -(\lambda-2) \int_{S^1} u_{\theta}^3 \, d\theta \end{align*} after noticing the first two terms vanish and the third term can be integrated by parts to combine with the fourth term. In particular $E(t)$ is constant when $\lambda=2$, and doesn't matter when $\lambda=3$ because of the coefficient in \eqref{Iformula}. This is precisely the reason why our technique will work well in those two cases, and the lack of a bound on $E(t)$ is the reason we cannot yet prove Theorem \ref{mainthm} for other values of $\lambda$. (As will be clearer later, a polynomial growth bound for $E(t)$ in $t$ would be sufficient to prove Theorem \ref{mainthm}, but the obvious successive-differentiation manipulations seem to yield at best exponential growth.) As is typical with equations of Euler-Arnold type (as first noticed by Ebin-Marsden~\cite{EM}; see also \cite{CKintertial} and \cite{MisClassical}), the equation is best-behaved in terms of the flow $\eta$, i.e., using the Lagrangian description.
To see this here, differentiate \eqref{flowderiv} with respect to $t$ to get $$ \eta_{tt\theta}(t,\theta) = \Big( u_{t\theta}\big(t,\eta(t,\theta)\big) + u_{\theta\theta}\big(t,\eta(t,\theta)\big) u\big(t,\eta(t,\theta)\big) + u_{\theta}\big(t,\eta(t,\theta)\big)^2\Big) \eta_{\theta}(t,\theta).$$ Using this, equations \eqref{integrated}--\eqref{Iformula}, after composing with $\eta$ and using \eqref{flowequation} and \eqref{flowderiv}, become \begin{equation}\label{densityeqn} \eta_{tt\theta} = - \frac{\lambda-3}{2} \, \frac{\eta_{t\theta}^2}{\eta_{\theta}} + \Big[ \lambda \sigma \big( \eta_t(t,\theta)-\sigma\big) + \frac{\lambda-3}{2} E(t)\Big] \eta_{\theta}(t,\theta). \end{equation} We are going to view this as an equation for $\eta_{\theta}$, in spite of the fact that $(\eta_t-\sigma)$ must be determined nonlocally from $\eta_{\theta}$ and its time derivative; this is an unavoidable complication. Now the term in square brackets is relatively easy to control (at least if $\lambda=2$ or $\lambda=3$), while the first term on the right side of \eqref{densityeqn} is of higher order and more likely to become singular. The trick is thus to change variables to eliminate it, and end up with an equation that is nearly linear. We do this via the substitution $x = \eta_{\theta}^{(\lambda-1)/2}$ to get an equation of the form $x_{tt}(t,\theta) = F(x,x_t; t,\theta) x(t,\theta)$, where $F$ depends in a nonlocal (but mild) way on $x$ and $x_t$. \begin{theorem}\label{transformthm} For a parameter $\lambda\ne 1$, define $\gamma = \frac{2}{\lambda-1}$. Set \begin{equation}\label{xydef} x(t,\theta) = \eta_{\theta}(t,\theta)^{1/\gamma} \qquad\text{and}\qquad y(t,\theta) = -\gamma x_{\theta}(t,\theta) + \sigma x(t,\theta) \int_0^t x(\tau,\theta)^{\gamma}\,d\tau. \end{equation} Then the equation \eqref{densityeqn} is equivalent to the pair of equations \begin{align} \frac{\partial^2 x}{\partial t^2}(t,\theta) = F(t,\theta) x(t,\theta) \label{xeq}\\ \frac{\partial^2 y}{\partial t^2}(t,\theta) = F(t,\theta) y(t,\theta), \label{yeq} \end{align} where \begin{equation}\label{Fdef} F(t,\theta) = \frac{\lambda(\lambda-1)\sigma}{2}\, G(t,\theta) + \frac{(\lambda-1)(\lambda-3)}{4} E(t), \end{equation} where $E(t)$ defined by \eqref{Iformula} becomes \begin{equation}\label{Edef} E(t) = \gamma^2 \int_0^1 x(t,\phi)^{\gamma-2} x_t(t,\phi)^2 \, d\phi \end{equation} and $G(t,\theta):= \eta_t(t,\theta)-\sigma$ is given by \begin{equation}\label{Gdef} G(t,\theta) = \gamma\int_0^{\theta} x(t,\phi)^{\gamma-1} x_t(t,\phi)\,d\phi - \gamma\int_0^1 x(t,\phi)^{\gamma} \int_0^{\phi} x(t,\psi)^{\gamma-1} x_t(t,\psi)\,d\psi\,d\phi. \end{equation} The initial conditions for these equations are given by \begin{alignat}{3} x(0,\theta) &= 1, &\qquad x_t(0,\theta) &= \tfrac{1}{\gamma} u_0'(\theta) \label{xIC}\\ y(0,\theta) &= 0, &\qquad y_t(0,\theta) &= \sigma - u_0''(\theta) = m_0(\theta) \label{yIC} \end{alignat} \end{theorem} \begin{proof} This is a straightforward computation from \eqref{densityeqn}: the transformation $\eta_{\theta} = x^{\gamma}$ gives $$ \eta_{tt\theta} + \frac{\lambda-3}{2} \,\frac{\eta_{t\theta}^2}{\eta_{\theta}} = \gamma x^{\gamma-1} x_{tt} + \gamma\left( \frac{\gamma(\lambda-1)}{2} - 1\right) x^{\gamma-2} x_t^2,$$ so that $\gamma = \frac{2}{\lambda-1}$ eliminates the quadratic term $x_t^2$ from the equation. We then obtain \begin{equation}\label{xequationstep} x_{tt}(t,\theta) = \frac{\lambda-1}{2} \Big[ \lambda\sigma \big( \eta_t(t,\theta) - \sigma\big) + \frac{\lambda-3}{2} \, E(t)\Big] x(t,\theta).
\end{equation} The formula for $G(t,\theta)$ is determined from the fact that we know \begin{equation}\label{Gderivative} G_{\theta}(t,\theta) = \eta_{t\theta}(t,\theta) = \gamma x(t,\theta)^{\gamma-1} x_t(t,\theta) \end{equation} as well as the fact that \begin{equation}\label{Gintegral} \int_{S^1} G(t,\theta) \eta_{\theta}(t,\theta)\,d\theta = 0, \end{equation} and these two conditions clearly uniquely determine $G$. The condition \eqref{Gintegral} comes from the change of variables formula and \eqref{sigmadef}: we have \begin{align*} 0 &= \int_{S^1} \big[ u(t,\phi)-\sigma\big] \, d\phi = \int_{S^1} \Big[ u\big(t,\eta(t,\theta)\big) - \sigma\Big] \eta_{\theta}(t,\theta) \, d\theta \\ &= \int_{S^1} \Big[ \eta_t(t,\theta) - \sigma\Big] \eta_{\theta}(t,\theta)\,d\theta. \end{align*} We can easily compute that $G$ defined by formula \eqref{Gdef} satisfies both requirements, using the fact that $\int_{S^1} x(t,\theta)^{\gamma}\,d\theta = 1$, and so \eqref{xequationstep} becomes \eqref{xeq}. To prove \eqref{yeq}, we differentiate the formula \eqref{xydef} defining $y(t,\theta)$ twice with respect to time and obtain $$ y_{tt}(t,\theta) = -\gamma x_{tt\theta}(t,\theta) + \sigma x_{tt}(t,\theta) \int_0^t x(\tau,\theta)^{\gamma}\, d\tau + (\gamma+2) \sigma x(t,\theta)^{\gamma} x_t(t,\theta).$$ Now insert the equation $x_{tt} = F x$ to get $$ y_{tt}(t,\theta) = F(t,\theta) y(t,\theta) -\gamma F_{\theta}(t,\theta) x(t,\theta) + (\gamma+2) \sigma x(t,\theta)^{\gamma} x_t(t,\theta).$$ The last two terms in this equation cancel out using \eqref{Fdef} and \eqref{Gderivative}, which produces \eqref{yeq}. The initial conditions come from the fact that $\eta(0,\theta) = \theta$ so that $\eta_{\theta}(0,\theta)\equiv 1$, which gives the conditions for $x(0,\theta)$ and $y(0,\theta)$. Differentiating the formula \eqref{xydef} with respect to $t$ and using \eqref{flowderiv} gives $\gamma x_t(0,\theta) = u_0'(\theta)$, along with $y_t(0,\theta) = -\gamma x_{t\theta}(0,\theta) + \sigma$, which is exactly the initial momentum $m_0(\theta) = \sigma - u_0''(\theta)$. \end{proof} The forcing term $F(t,\theta)$ defined by \eqref{Fdef} appearing in \eqref{xeq}--\eqref{yeq} depends on the solution $x$ and $x_t$ (or if we like on $y$ and $y_t$, since we can in principle reconstruct $x$ from $y$ if desired). As such we properly view \eqref{xeq} as an ODE on a Banach space. Fortunately the dependence of $F$ on $x$ and $x_t$ is relatively simple, and even if $x$ has only limited smoothness---for example $x(t,\cdot)$ and $x_t(t,\cdot)\in C^k(S^1)$ for integer $k\ge 0$---the function $F(t,\cdot)$ will be in $C^{k+1}(S^1)$. More importantly, the map $\Psi := (x,x_t)\mapsto F$ from $C^k\times C^k\to C^{k+1}$ is actually $C^{\infty}$ as a map of Banach spaces as long as $x$ remains positive (which is only needed for the power function to be smooth). Hence equation \eqref{xeq} describes a $C^{\infty}$ ODE on the space of functions $x$ satisfying \begin{equation}\label{intxpower} x\in C^k(S^1), \qquad \int_{S^1} x(\theta)^{\gamma} \, d\theta = 1, \qquad x(\theta)>0 \quad \forall \theta\in S^1, \end{equation} where the integral condition comes from periodicity of $\eta$ and the fact that $x^{\gamma} = \eta_{\theta}$.
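For orientation, in the two cases covered by Theorem \ref{mainthm} the change of variables \eqref{xydef} takes the following concrete form. For $\lambda=2$ we have $\gamma=2$, so that
$$ x(t,\theta) = \sqrt{\eta_{\theta}(t,\theta)}, \qquad y(t,\theta) = -2 x_{\theta}(t,\theta) + \sigma x(t,\theta)\int_0^t x(\tau,\theta)^2\,d\tau,$$
while for $\lambda=3$ we have $\gamma=1$, so that
$$ x(t,\theta) = \eta_{\theta}(t,\theta), \qquad y(t,\theta) = -x_{\theta}(t,\theta) + \sigma x(t,\theta)\int_0^t x(\tau,\theta)\,d\tau.$$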
If $\gamma = \frac{2}{\lambda-1}$ happens to be an integer, as it does for $\lambda=2$ and $\lambda=3$, we get smoothness even for functions $x$ that may be zero or negative at some points, and this allows us to extend the ODE to the larger space $$ x\in C^k(S^1), \qquad \int_{S^1} x(\theta)^{\gamma}\,d\theta = 1.$$ As we are interested in the breakdown of the equation when $\eta_{\theta}\to 0$, allowing $x$ to approach zero (and even continue to go negative) gives us global solutions in the new coordinate, which translate into weak solutions when we invert to get $\eta_{\theta}$, and from this $\eta$ and $u$. To better understand this, let us now recall the analysis of the equations when $\sigma=0$ and $\lambda=2$ or $\lambda=3$, when everything can be done explicitly. The easiest case is $\lambda=3$, when we get $\gamma = 1$ and the trivial equations $$ x_{tt}(t,\theta) = y_{tt}(t,\theta) = 0.$$ The solutions are $x(t,\theta) = 1 + t u_0'(\theta)$ and $y(t,\theta) = -t u_0''(\theta)$. These obviously exist for all time, and $x$ remains positive for $t<T = \frac{1}{-\inf_{\theta\in S^1} u_0'(\theta)}$; hence also $\eta_{\theta}=x$ remains positive here. For larger $t$, the function $x(t,\theta)$ becomes negative, which means that $\eta(t,\theta)$ is not invertible as a function of $\theta$: it maps multiple values of $\theta$ to the same point. This leads to our inability to invert the formula $\eta_t(t,\theta) = u\big(t,\eta(t,\theta)\big)$ to find $u$, which is the shock phenomenon: the solution $u$ is not even continuous. Note however that $\eta(t,\theta) = \theta + t u_0(\theta)$ exists and remains as spatially smooth as $u_0$ for all time, another illustration of the fact that things are better in Lagrangian coordinates. The more interesting case is $\sigma=0$ and $\lambda=2$. We compute using \eqref{xydef}, \eqref{Fdef}, and \eqref{Edef} that since $\gamma=2$, we have $$ E'(t) = 8 \int_{S^1} x_t(t,\theta) x_{tt}(t,\theta) \, d\theta = -2E(t) \int_{S^1} x_t(t,\theta) x(t,\theta) \, d\theta = -E(t) \, \frac{d}{dt} \int_{S^1} x(t,\theta)^2 \, d\theta = 0,$$ since here $\int_{S^1} x(t,\theta)^2 \, d\theta = \int_{S^1} \eta_{\theta}(t,\theta)\,d\theta = 1$ for all $t$. Equations \eqref{xeq}--\eqref{yeq} thus reduce to $$ x_{tt}(t,\theta) = -K^2 x(t,\theta), \qquad y_{tt}(t,\theta) = -K^2 y(t,\theta), \quad \text{where } K^2 = \frac{1}{4} \int_{S^1} u_0'(\theta)^2 \, d\theta.$$ The solutions with initial conditions \eqref{xIC}--\eqref{yIC} are $$ x(t,\theta) = \cos{Kt} + \tfrac{u_0'(\theta)}{2K} \sin{Kt}, \qquad y(t,\theta) = -\tfrac{u_0''(\theta)}{K} \sin{Kt}.$$ We can easily see that $x$ remains positive for $$t<T = \frac{1}{K}\, \arctan{\left( -\frac{2K}{\inf u_0'(\theta)}\right)},$$ which is positive since $\inf u_0'(\theta)<0$, and becomes negative beyond that. However since $\eta_{\theta}(t,\theta) = x(t,\theta)^2$ in this case, we will find for typical initial data that $\eta_{\theta}(t,\theta)$ is positive for all $\theta$ except a discrete set of points (depending on $t$), which means $\eta$ will be a homeomorphism even if it is not a diffeomorphism. This allows us to define $u$ as a continuous function, although its derivative $u_{\theta}$ will approach negative infinity wherever $x(t,\theta)=0$ by \eqref{flowderiv}. We see that breakdown is very different already between $\lambda=2$ and $\lambda=3$. In Figure \ref{b2breakdownfig} we demonstrate what this looks like for a simple solution of the Hunter-Saxton equation with $\lambda=2$ and $\sigma=0$.
\begin{figure}[!ht] \begin{center} \includegraphics[scale=0.35]{HSsolar1.eps} \qquad \includegraphics[scale=0.35]{HSstraight1.eps} \\ \includegraphics[scale=0.35]{HSsolar2.eps} \qquad \includegraphics[scale=0.35]{HSstraight2.eps} \\ \includegraphics[scale=0.35]{HSsolar3.eps} \qquad \includegraphics[scale=0.35]{HSstraight3.eps} \\ \includegraphics[scale=0.35]{HSsolar4.eps} \qquad \includegraphics[scale=0.35]{HSstraight4.eps} \\ \includegraphics[scale=0.35]{HSsolar5.eps} \qquad \includegraphics[scale=0.35]{HSstraight5.eps} \\ \end{center} \caption{Here we show both the solar model on the left and the solution $x(t,\theta)=\sqrt{\eta_{\theta}(t,\theta)}$ on the right for the Hunter-Saxton equation, with initial condition $u_0(\theta) = \alpha \sin{(2\pi \theta)}$ for $\alpha = \tfrac{2}{\pi} \arctan(\tfrac{1}{\sqrt{2}})$, with a breakdown time of $t=1$. In the solar model particles emerge from $(1,0)$ with velocity $\langle \tfrac{1}{2}u_0'(\theta),\omega_0(\theta)\rangle$ and approach the vertical wall $x=0$. On the right $x$ and $x_{\theta}$ have simultaneously reached zero, and the classical solution $u(t,\theta)$ breaks down. However the solution continues in the $(x,y)$ variables. Points colored red have positive angular momentum, while those in blue have negative angular momentum: the first breakdown occurs at the transition.}\label{b2breakdownfig} \end{figure} For each fixed $\theta$, the functions $x(t,\theta)$ and $y(t,\theta)$ form the components of a central-force system, which implies that the angular momentum is conserved. This conserved quantity is precisely the transported vorticity, so that the conservation law \eqref{vorticitytransport} is encoded here automatically. This fact is what ensures that when the vorticity is always positive or always negative, classical solutions will be global; see Theorem \ref{globalposmom}. The intuition is that the $(x,y)$ system is attracted or repulsed by a central force, analogously to the sun's gravity, and singularities correspond to the particle reaching the sun in finite time. As in our solar system, this can only happen if the particle dives directly into it, and any nonzero angular momentum prevents this. A very singular force may still lead to finite-time collapse, but in our situations the force is bounded on finite time intervals. \begin{corollary}\label{angularmomentumcor} The angular momentum of the system \eqref{xeq}--\eqref{yeq} is conserved, and given by the formula \begin{equation}\label{angularmomentum} x(t,\theta)y_t(t,\theta) - y(t,\theta) x_t(t,\theta) = \eta_{\theta}(t,\theta)^{\lambda} m\big(t,\eta(t,\theta)\big) = m_0(\theta). 
\end{equation} \end{corollary} \begin{proof} The fact that angular momentum is conserved for central force systems is well-known: it follows from $$ \frac{\partial}{\partial t} (xy_t - yx_t) = x y_{tt} - y x_{tt} = x(Fy) - y(Fx) = 0.$$ Equation \eqref{xydef} implies that $$ \frac{\partial}{\partial t} \left( \frac{y(t,\theta)}{x(t,\theta)}\right) = -\gamma \, \frac{\partial}{\partial t} \left(\frac{x_{\theta}(t,\theta)}{x(t,\theta)}\right) + \sigma x(t,\theta)^{\gamma},$$ so that \begin{align*} x(t,\theta)y_t(t,\theta) - y(t,\theta)x_t(t,\theta) &= -\gamma \, x(t,\theta)^2 \, \frac{\partial^2}{\partial t\partial \theta} \left(\ln{\big(x(t,\theta)\big)}\right) + \sigma x(t,\theta)^{\gamma+2} \\ &= -\eta_{\theta}(t,\theta)^{\lambda-1} \, \frac{\partial^2}{\partial t\partial \theta} \left(\ln{\big(\eta_{\theta}(t,\theta)\big)}\right) +\sigma x(t,\theta)^{\lambda\gamma} \\ &= -\eta_{\theta}(t,\theta)^{\lambda-1} \, \frac{\partial}{\partial \theta} \left(u_{\theta}\big(t,\eta(t,\theta)\big)\right) +\sigma \eta_{\theta}^{\lambda}(t,\theta) \\ &= \eta_{\theta}(t,\theta)^{\lambda} \Big( \sigma - u_{\theta\theta}\big(t,\eta(t,\theta)\big)\Big). \end{align*} At time $t=0$ this is $m_0(\theta)$, and since we have already shown this quantity is constant in time, it equals $m_0(\theta)$ for all $t$. \end{proof} Because the transformation to Lagrangian coordinates eliminates the loss of derivatives (essentially just being able to combine terms like $m_t + u m_{\theta}$ into $\frac{\partial}{\partial t} m\circ\eta$), we get a smooth ODE on the space of functions $(x,y)$. We want to work in the simplest space for which all the functions make sense, so we will require that $u_0$ be $C^2$ in order to have the momentum be continuous. We then expect $u(t,\cdot)$ to be in $C^2$ for short time, which by the flow equation \eqref{flowequation} should imply that $\eta$ is also spatially in $C^2$; hence $x(t,\cdot)$ would be in $C^1$ and $y(t,\cdot)$ would be in $C^0$. Working in these spaces, we thus get existence of solutions using Picard iteration. The following was proved for the case $\lambda=2$ by Deng-Chen~\cite{DC}, following the technique of Lee~\cite{Lee} for the Camassa-Holm equation. The proof for other values of $\lambda$ is similar, and just involves showing that $F$ defined by \eqref{Fdef} is smooth as a function of $x$ and $x_t$. \begin{theorem}\label{localexistence} Consider the situation in Theorem \ref{transformthm}. The equation \eqref{xeq} is a second-order smooth ODE on the manifold $$ \mathcal{S}^1_{\gamma} = \left\{ x\in C^1(S^1) \, \big\vert\, x(\theta)>0 \;\forall \, \theta\in S^1, \; \int_{S^1} x(\theta)^{\gamma} \, d\theta = 1 \right\}.$$ As such, for each initial condition $x(0)\equiv 1$ and $\frac{dx}{dt}(0) = \tfrac{1}{\gamma} u_0'(\theta)$ with $u_0\in C^2(S^1)$, there is a $T>0$ and a solution $x\colon [0,T) \to C^1(S^1)$ of equation \eqref{xeq}. \end{theorem} \begin{proof} The main point is to write it as a first-order system with $v:=x_t$, viewing $E$, $F$, and $G$ as functions not of $(t,\theta)$ but of $(x,v)$.
That is, we write $F$ given by \eqref{Fdef} as $$ F(x,v) = \frac{\lambda(\lambda-1) \sigma}{2}\, G(x,v) + \frac{(\lambda-1)(\lambda-3)}{4} \, E(x,v),$$ where $G\colon C^1(S^1)\times C^1(S^1) \to C^1(S^1)$ from equation \eqref{Gdef} and $E\colon C^1(S^1)\times C^1(S^1)\to \mathbb{R}_+$ from \eqref{Edef} are given by $$ G(x,v)(\theta) = \gamma\int_0^{\theta} x(\phi)^{\gamma-1} v(\phi) \, d\phi - \gamma\int_0^1 x(\phi)^{\gamma} \int_0^{\phi} x(\psi)^{\gamma-1} v(\psi) \, d\psi \, d\phi$$ and $$ E(x,v) = \gamma^2 \int_0^1 x(\phi)^{\gamma-2} v(\phi)^2 \, d\phi.$$ As long as $x$ remains strictly positive, $E$ and $G$ are smooth functions of $(x,v)$. For example, the derivative of $E$ is $$ DE_{(x,v)}(p,q) = \gamma^2(\gamma-2) \int_0^1 x(\phi)^{\gamma-3} p(\phi) v(\phi)^2 \, d\phi + 2 \gamma^2 \int_0^1 x(\phi)^{\gamma-2} v(\phi) q(\phi) \, d\phi,$$ which depends continuously on the $C^1$ functions $(x,v,p,q)$, and further derivatives can be computed the same way. Similarly the derivative of $G$ can be computed, and for any $C^1$ functions $(x,v,p,q)$, the derivative map $DG$ will also be a $C^1$ function (actually $C^2$ since $G$ is smoothing, but we don't need that). The only thing that remains is to check that the set defined by the integral constraints $$\int_0^1 x(\theta)^{\gamma} \,d\theta=1, \qquad \int_0^1 x(\theta)^{\gamma-1} v(\theta)\,d\theta = 0$$ is a submanifold of $C^1_+(S^1)\times C^1(S^1)$, where $C^1_+(S^1)$ denotes the $C^1$ functions on $S^1$ with strictly positive image; this is easy by the usual implicit function theorem for Banach spaces. Then we verify that the differential equation preserves these constraints, which is straightforward, and shows that our smooth vector field actually restricts to a vector field on the submanifold. For details about the implicit function theorem and vector fields on Banach manifolds, see for example Lang~\cite{Lang} or Abraham-Marsden-Ratiu~\cite{AMR}. \end{proof} The local existence proof works for any value of $\lambda$, but for global existence we only have a proof in case $\lambda=2$, because that is the case where we know conservation laws to get global bounds on solutions. Even when $\lambda=3$ we cannot quite prove global existence since the conservation law only applies when $\eta$ is a diffeomorphism. Note that here we mean global existence in the $(x,y)$ variables, but since we will allow $x$ to go negative, it may no longer represent a diffeomorphism (so we certainly do not have global existence in the original $(\eta,u)$ variables). But this will demonstrate that for example $x$ and $y$ cannot approach infinity. In case $\lambda=2$ proofs were given in Deng-Chen~\cite{DC} and in Ti{\u g}lay~\cite{Tiglay2}, so we will only prove the case $\lambda=3$. The essential thing here is the formula \eqref{integrated}, which for $\lambda=3$ becomes \begin{equation}\label{b3eulerform} u_t + u u_{\theta} = 3\sigma Q,\qquad Q = \partial_{\theta}^{-1}(u-\sigma), \end{equation} with the constant of integration in $Q$ chosen so that it has mean zero, since the left side must integrate to zero. The conservation law \begin{equation}\label{l2conserved} \frac{d}{dt} \int_{S^1} u(t,\theta)^2 \, d\theta = 0 \end{equation} proved in \cite{LMT} is one of the infinite family of conservation laws for $\lambda=3$, and although it is not very strong, it is enough to get a bound on $Q$, which allows us to control the growth of $u$ pointwise, at least as long as $\eta$ remains a diffeomorphism and for a (possibly small) time beyond. This strategy comes from \cite{FLQ}.
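For the reader's convenience, here is the quick derivation of \eqref{b3eulerform}: when $\lambda=3$ the energy term drops out of \eqref{Iformula}, so $I(t)=-3\sigma^2$, and \eqref{integrated} becomes
$$ \partial_{\theta}\big( u_t + u u_{\theta}\big) = u_{t\theta} + u u_{\theta\theta} + u_{\theta}^2 = 3\sigma\big( u - \sigma\big).$$
Taking the mean-zero antiderivative of the right side in $\theta$ yields \eqref{b3eulerform}.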
\begin{theorem}\label{globalthm} In case $\lambda=2$ or $\lambda=3$, the equation \eqref{xeq} has a solution $x\in C^{\infty}\big([0,T), C^1(S^1)\big)$ for any $u_0\in C^2(S^1)$, which extends as long as $\eta_{\theta}(t,\theta)>0$ for all $t\in [0,T)$ and $\theta\in S^1$. Consequently \eqref{yeq} has a solution $y\in C^{\infty}\big([0,T), C^0(S^1)\big)$ for any $u_0\in C^2(S^1)$ for the same $T$. \end{theorem} \begin{proof} In the case $\lambda=3$, the transformation \eqref{xydef} simplifies to just $x(t,\theta) = \eta_{\theta}(t,\theta)$. The easiest way to proceed is to show that $\eta$ itself satisfies a differential equation for which the right side is bounded. Equation \eqref{xeq} becomes \begin{equation}\label{etaderivativeb3} \eta_{tt\theta}(t,\theta) = 3\sigma \big(\eta_t(t,\theta) - \sigma\big) \eta_{\theta}(t,\theta), \end{equation} and integrating once more in space gives $$ \eta_{tt}(t,\theta) = 3\sigma P(t,\theta),$$ where $P$ is essentially a pressure function, defined uniquely by the conditions $$ P_{\theta}(t,\theta) = (\eta_t(t,\theta) - \sigma)\eta_{\theta}(t,\theta), \qquad \int_{S^1} P(t,\theta) \eta_{\theta}(t,\theta)\,d\theta = 0,$$ or (suppressing time dependence) explicitly in terms of $\eta$ and $V:=\eta_t$ by $$ P(\eta, V)(\theta) = \int_0^{\theta} \big[V(\psi)-\sigma\big] [\eta(\psi)-\eta(0)] \eta'(\psi) \, d\psi - \int_{\theta}^1 \big[ V(\psi)-\sigma\big] \big[ \eta(1)-\eta(\psi)] \eta'(\psi) \,d\psi.$$ For periodic $\eta\in C^2(S^1)$, this defines a periodic $C^2$ function $P$ which depends smoothly on $(\eta,V)$, since it involves only products and continuous integral operators, and furthermore because there is no composition by $\eta$, this still makes sense even if $\eta$ stops being a homeomorphism. The $L^2$ conservation law \eqref{l2conserved}, together with the conservation of the mean from \eqref{sigmadef}, implies that $\int_{S^1} (u-\sigma)^2 \, d\theta$ is constant in time, and in Lagrangian form this becomes \begin{equation}\label{L2conservegeneta} \int_{S^1} \big[V(t,\theta)-\sigma\big]^2 \eta_{\theta}(t,\theta) \, d\theta = \int_{S^1} \big[u_0(\theta)-\sigma\big]^2 \, d\theta, \end{equation} which again makes sense even if $\eta_{\theta}$ is not positive. As long as $\eta_{\theta}$ remains positive (so that $0\le \eta(\psi)-\eta(0)\le 1$ and $0\le \eta(1)-\eta(\psi)\le 1$), we obtain the bound \begin{align*} \sup_{\theta\in S^1} \lvert P(\eta,V)(t,\theta)\rvert &\le \int_{S^1} \big\lvert V(t,\theta)-\sigma\big\rvert \, \lvert \eta_{\theta}(t,\theta)\rvert \, d\theta \\ &\le \sqrt{\int_{S^1} \lvert V(t,\theta)-\sigma\rvert^2 \eta_{\theta}(t,\theta) \, d\theta } \, \sqrt{\int_{S^1} \eta_{\theta}(t,\theta) \, d\theta} = \sqrt{\int_{S^1} \big[u_0(\theta)-\sigma\big]^2 \, d\theta}, \end{align*} using \eqref{L2conservegeneta} and the fact that $\eta$ is periodic. Hence as long as $\eta_{\theta}$ remains positive, we have that $P(\eta,V)$ is bounded in the $C^0$ norm uniformly in time. Meanwhile since $P_{\theta} = (V-\sigma) \eta_{\theta}$, we use the fact that $\eta_{tt}$ is uniformly bounded in time to conclude that $V=\eta_t$ grows at most linearly in time (again as long as $\eta_{\theta}$ remains positive), which implies that $\eta_{\theta}$ satisfies an estimate of the form $$ \lVert \eta_{tt\theta}\rVert_{C^0} \le \big(\lVert u_0\rVert_{C^0} + Kt\big) \lVert \eta_{\theta}\rVert_{C^0}.$$ In particular the right side of the differential equation is bounded on all finite time intervals in the space of $C^1$ diffeomorphisms $\eta$.
Thus by the usual theory of ODEs in Banach spaces, e.g., Proposition 4.1.22 in \cite{AMR}, the solution can be continued for $\eta\in C^1$ as long as $\eta_{\theta}$ remains positive. The fact that we also have a solution $y\in C^0$ is now straightforward, since $y$ satisfies the linear ODE \eqref{yeq} with known coefficients in terms of the function $x$. \end{proof} This theorem establishes that the only thing that can go wrong with the global solutions of equation \eqref{densityeqn} in the cases $\lambda=2$ and $\lambda=3$ is that $\eta_{\theta}$ approaches zero. Significantly, the equation for $\lambda=3$ in the form \eqref{etaderivativeb3} depends only on $\eta$ as a function on $S^1$ of some smoothness, but \emph{not} on the fact that $\eta$ is a diffeomorphism. Hence the local existence result for the ODE holds even when $\eta_{\theta}$ reaches zero, and we get existence for some (possibly small) time beyond that. The difficulty is that without a global bound on the $L^2$ energy, we cannot extend this for all time, although it seems likely that better estimates may be available which would produce global existence in the space of $C^2$ self-maps of $S^1$ that are not necessarily diffeomorphisms. Again we note that in the case $\sigma=0$ the breakdown is completely understood: when $\lambda=3$, the function $\eta$ ceases even to be a homeomorphism as $\eta_{\theta}$ becomes negative, while if $\lambda=2$ the fact that $\eta_{\theta}=x^2$ means that $\eta_{\theta}\ge 0$ always, so that typically $\eta$ will remain a homeomorphism. Since $u = \eta_t \circ\eta^{-1}$, this is the difference between the solution $u$ having shocks where it must cease being continuous, as opposed to steepening where $u$ remains continuous but its slope may approach infinity due to equation \eqref{flowderiv}. For other values of $\lambda$ things may be much worse: Sarria and Saxton~\cite{SS1} showed that for $\lambda>5$ or $\lambda<-1$, there are solutions for which $\eta_{\theta}$ approaches either zero or infinity, everywhere at the breakdown time. The reason here is that for $\lambda=2$ or $\lambda=3$, the terms in the forcing function $F$ defined by \eqref{Fdef} are well-controlled in time, while in general there are no good estimates for the growth. This property will be crucial for what comes next, so we record it here. \begin{lemma}\label{forcegrowth} For $\lambda=2$ or $\lambda=3$, the forcing function $F$ given by \eqref{Fdef} satisfies a bound $$ \sup_{\theta\in S^1} \lvert F(t,\theta)\rvert \le \begin{cases} K^2 & \lambda=2 \\ K^2 + C t & \lambda=3 \end{cases}, $$ for all time $t\in [0,T)$ as determined by Theorem \ref{globalthm}, for some constants $K$ and $C$ depending on the initial data $u_0$. \end{lemma} \begin{proof} In the case $\lambda=3$, we have already established this in the proof of Theorem \ref{globalthm}, since there $$ F(t,\theta) = 3 \sigma G(t,\theta),$$ and $G=(\eta_t-\sigma)$ grows at most linearly in time because $\eta_{tt}$ is bounded. 
In the case $\lambda=2$, the forcing function is given by $$ F(t,\theta) = \sigma(\eta_t-\sigma) - \tfrac{1}{4} E(t),$$ and $E(t)$ is constant in time for $\lambda=2$, and given by $$ E(t) = E(0) = \int_{S^1} u_0'(\theta)^2 \, d\theta.$$ This implies that $\int_{S^1} x_t^2 \, d\theta$ is constant in time, and we thus get a uniform bound for $(\eta_t-\sigma)$ by the Poincar\'e inequality, since $$ \sup_{\theta\in S^1} \lvert \eta_t-\sigma\rvert \le \int_{S^1} \lvert \eta_{t\theta}\rvert \, d\theta = 2 \int_{S^1} \lvert x x_t\rvert \,d\theta \le \int_{S^1} x^2 \, d\theta + \int_{S^1} x_t^2 \, d\theta,$$ and the right side is constant in time. \end{proof} One might hope that a polynomial-in-time bound like this is true for other values of $\lambda$; if it were, the technique of the breakdown proof we will give later would also show the same breakdown phenomenon for all values of $\lambda$. Ultimately the only thing we need is that the forcing function grows like a polynomial in time, because it will eventually be dominated by the exponential decay we get in general from the equation whenever $\lambda>1$. If we could establish any kind of polynomial estimate for the energy $E(t)$ given by \eqref{Edef} for other values of $\lambda$, we would obtain the same breakdown result here proved for $\lambda=2$ and $\lambda=3$. However, the fact, shown by Sarria-Saxton~\cite{SS1}, that the basic breakdown mechanism changes when $\lambda>5$ makes clear that this could only be hoped for if $\lambda\in (1,5)$. The main tools we use to establish breakdown are the following simple results, which apply to any ODE with a fairly general forcing function (and thus will apply here for the individual particles $x(t,\theta),y(t,\theta)$ for each individual $\theta\in S^1$). The first lemma gives an upper bound for the solution in terms of the forcing function, while the second establishes that solutions will eventually reach zero if their velocity is sufficiently negative. Our philosophy is that although the forcing function depends implicitly and nonlocally on the solution for all values of $\theta$, each individual particle feels a force $F(t)$ that is some given function of time, bounded on finite time intervals, and thus we can treat it as essentially an external force. \begin{lemma}\label{upperboundlem} Suppose $\phi$ is nowhere zero and satisfies the second-order ODE $$ \phi''(t) = F(t) \phi(t)$$ on some interval $[0,T)$, where $T$ may be infinite, and assume $F(t)\le f(t)^2$ for some nonnegative differentiable increasing function $f$. Then there is a $C$ such that \begin{equation}\label{Rbound} \frac{\phi'(t)}{\phi(t)} \le C+f(t) \end{equation} for all $t\in [0,T)$. \end{lemma} \begin{proof} Define $R(t) = \phi'(t)/\phi(t)$. Then $R$ satisfies the Riccati inequality \begin{equation}\label{Rinequality} R'(t) = F(t) - R(t)^2 \le f(t)^2 - R(t)^2. \end{equation} If $R(t)$ is ever larger than $f(t)$, it must decrease; thus if $f(0)<R(0)$ then $R(t)<R(0)$ for all time until $R(t)$ possibly crosses $f(t)$. If $R(t)$ is smaller than $f(t)$, then the difference $Q(t) = f(t)-R(t)$ satisfies $$ Q'(t) \ge f'(t) + R(t)^2 - f(t)^2 \ge Q(t)^2 - 2f(t) Q(t) \ge -2f(t) Q(t),$$ so in particular if $Q$ is ever positive, it will always be positive. This shows that $R(t)\le f(t)$ for all time if it is true for any time. Combining shows that $$R(t)\le \max\{R(0), f(t)\}\le C+f(t),$$ which is equivalent to \eqref{Rbound}.
\end{proof} \begin{lemma}\label{breakdownlemma} Suppose \begin{equation}\label{myODE} \phi''(t) = F(t) \phi(t) \end{equation} for some continuous function $F$ on a maximal time interval $[0,T)$. If $\phi(t_0)>0$ and $\phi'(t_0)/\phi(t_0)$ is sufficiently negative, then $\phi(t_*)=0$ for some $t_*\in (t_0,T)$. \end{lemma} \begin{proof} Let $g$ denote the solution of \eqref{myODE} satisfying $$ g(t_0) = 1, \qquad g'(t_0)=0.$$ Then the general solution of \eqref{myODE} is given by \begin{equation}\label{myODEsoln} \phi(t) = \phi(t_0) g(t)\Big( 1 + C \int_{t_0}^t \frac{d\tau}{g(\tau)^2} \Big), \qquad C = \frac{\phi'(t_0)}{\phi(t_0)} \end{equation} as can easily be verified by direct substitution. (This is just reduction of order.) If $g(t)$ reaches zero in finite time, then by the Sturm comparison theorem, $\phi(t)$ must also reach zero whenever $\phi'(t_0)/\phi(t_0)\le 0$. If $g(t)$ is always positive, then $\phi(t)$ given by \eqref{myODEsoln} will turn negative for some $t$ as long as $$ C < -1/\int_{t_0}^T \frac{d\tau}{g(\tau)^2}.$$ \end{proof} The next result tells us about the effect of nonzero angular momentum. It is familiar from basic celestial mechanics: even for a not-too-singular force directed toward the origin, a particle will not reach the origin if there is nonzero angular momentum, while a particle with zero angular momentum will reach the origin in finite time. In our context this will give a lower bound on the radial coordinate $r=\sqrt{x^2+y^2}$, which gives global existence in Theorem \ref{globalposmom} if the angular momentum is never zero. \begin{lemma}\label{angmomboundlem} Suppose $(x,y)$ is a planar system satisfying the ODE \begin{equation}\label{generalsolar} \ddot{x}(t) = F(t) x(t), \qquad \ddot{y}(t) = F(t) y(t), \end{equation} where $F$ is continuous and bounded on $[0,T]$. Let \begin{equation}\label{angmom} \omega_0 = x(0) \dot{y}(0) - y(0) \dot{x}(0) \qquad \text{and}\qquad r(t)^2 = x(t)^2 + y(t)^2. \end{equation} Then if $\omega_0$ is nonzero, $r(t)$ cannot reach zero on $[0,T]$. \end{lemma} \begin{proof} Conservation of angular momentum ensures that $$ x\dot{y} - y\dot{x} = \omega_0,$$ so that $$ \dot{x}^2 + \dot{y}^2 = \frac{(x\dot{x}+y\dot{y})^2 + (x\dot{y} - y\dot{x})^2}{x^2+y^2} = \dot{r}^2 + \frac{\omega_0^2}{r^2}.$$ We then obtain $$ \frac{d}{dt} \left( \dot{r}^2 + \frac{\omega_0^2}{r^2}\right) = 2 \big(\dot{x} \ddot{x} + \dot{y} \ddot{y}\big) = 2 F(t)( x\dot{x} + y\dot{y}) = 2F(t) r(t) \dot{r}(t).$$ Observe that $r(t)$ can only be made small if it is decreasing on some interval, so to get an upper bound on this energy we define $$ \overline{F} = \max\{ -\inf_{0\le t\le T} F(t), 0\}.$$ Then $-F(t) \le \overline{F}$ for all $t\in [0,T]$ and $\overline{F}\ge 0$, and integrating over $[t_1,t_2]$ assuming that $\dot{r}(t)\le 0$ on $[t_1,t_2]$ gives \begin{align*} \dot{r}(t_2)^2 + \frac{\omega_0^2}{r(t_2)^2} &= \dot{r}(t_1)^2 + \frac{\omega_0^2}{r(t_1)^2} + 2 \int_{t_1}^{t_2} F(t) r(t) \dot{r}(t) \, dt \\ &\le \dot{r}(t_1)^2 + \frac{\omega_0^2}{r(t_1)^2} + \overline{F} \big( r(t_1)^2 - r(t_2)^2\big) \le \dot{r}(t_1)^2 + \frac{\omega_0^2}{r(t_1)^2} + \overline{F} r(t_1)^2. \end{align*} In particular we obtain $$ r(t_2) \ge \frac{\lvert \omega_0\rvert r(t_1)}{\sqrt{r(t_1)^2 \dot{r}(t_1)^2 + \omega_0^2 + \overline{F} r(t_1)^4} },$$ so that $r(t_2)$ is positive since $\overline{F}$ is finite by assumption.
There can only be finitely many such intervals where $r$ can decrease on $[0,T]$ since $r$ can only decrease when either $x$ or $y$ is decreasing, and a linear differential equation with bounded force coefficient can only have a discrete set of turning points in a compact interval. \end{proof} \begin{remark}\label{reallybadforce} Of course, if we allow the forcing function to be something like $F(t) = -\frac{k^2}{(1-t)^2}$, then the particle can reach zero in finite time. The change of time variable $s=-\ln{(1-t)}$ in this case turns each equation in the system \eqref{generalsolar} into $$ \frac{d^2x}{ds^2} + \frac{dx}{ds} + k^2 x = 0,$$ which will have infinitely many oscillations up to $t=1$ if and only if $k>\tfrac{1}{2}$. Thus if $k>\tfrac{1}{2}$ the system will spiral around the origin infinitely many times until reaching the origin at $t=1$. For bounded $F(t)$, things are substantially simpler, but note that we only have reasonable bounds on $F(t)$ in special cases (in particular $\lambda=2$ and $\lambda=3$ in the present context). \end{remark} One further observation simplifies our considerations, namely the reflection symmetry of the equation \eqref{mubgeneral}--\eqref{mubICs}. Note that since $m(t,\theta) = \sigma - u_{\theta\theta}(t,\theta)$, and $u_{\theta\theta}$ must change sign if $u$ is not constant, the condition that $m$ changes sign has somewhat different consequences for the convexity of $u$ depending on whether $\sigma$ is positive or negative. However these differences are illusory, and the following proposition shows that if $\sigma\ne 0$, we can assume $\sigma>0$ without loss of generality. This proposition is well-known and appears in many places, e.g., in \cite{FLQ}. \begin{proposition}\label{sigmasign} If $v(t,\theta) := -u(t,1-\theta)$, with $u$ satisfying \eqref{mubgeneral}--\eqref{mubICs}, then $v$ satisfies the equation $$ n_t + v n_{\theta} + \lambda v_{\theta} n = 0, \qquad n = \mu(v) - v_{\theta\theta}.$$ Hence any result that applies with $\sigma=\mu(u)>0$ also applies to $v$ for $\sigma=\mu(v)<0$. \end{proposition} \begin{proof} Clearly if $\zeta$ denotes the reflection map $\zeta(\theta) = 1-\theta$ on the circle, then $v:=-u\circ\zeta$ satisfies $v_t = -u_t\circ\zeta$ and $v_{\theta} = u_{\theta}\circ\zeta$. Thus we get $$(\mu-\partial_{\theta}^2)v = -(\mu-\partial_{\theta}^2)u\circ\zeta,$$ so that if $n = \mu(v) - v_{\theta\theta}$, we have $ n = -m\circ\zeta$. This now implies $n_t = -m_t\circ\zeta$ and $n_{\theta} = m_{\theta}\circ\zeta$. Thus composing \eqref{mubgeneral} with $\zeta$ gives \begin{align*} 0&= m_t\circ\zeta + (u\circ\zeta) \, (m_{\theta}\circ\zeta) + \lambda (u_{\theta}\circ\zeta)\, (m\circ\zeta) \\ &= -n_t - v n_{\theta} - \lambda v_{\theta} n = -\big( n_t + v n_{\theta} + \lambda v_{\theta} n\big). \end{align*} This implies that $(v,n)$ satisfies the same system as $(u,m)$ in \eqref{mubgeneral}--\eqref{mubICs}. However since $\mu(v) = -\mu(u)$, anything we may prove assuming $\mu(u)>0$ will equally apply to $v$ when $\mu(v)<0$. \end{proof} In light of Proposition \ref{sigmasign}, we will always assume that $\sigma>0$ without loss of generality. \section{Proofs of main theorems} First we show that if the momentum is everywhere positive or everywhere negative, then the solution of equations \eqref{mubgeneral}--\eqref{mubICs} exists globally and gives a diffeomorphism. This result is already contained in the original papers \cite{KLM} and \cite{LMT}, based on analytic inequalities (and generalized for any value of $\lambda$ in \cite{tiglayvizman}), but our perspective here is different.
By Proposition \ref{sigmasign}, we may assume without loss of generality that the initial momentum is strictly positive. \begin{theorem}[Theorem \ref{mainthm}, ``if'' case]\label{globalposmom} If $\lambda=2$ or $\lambda=3$, and if $m_0(\theta) = \sigma - u_0''(\theta)$, with $\sigma = \mu(u_0)$, is positive for all $\theta\in S^1$, then the solution of \eqref{mubgeneral}--\eqref{mubICs} exists for all time, and the flow $\eta$ given by \eqref{flowequation} remains a $C^2$ diffeomorphism of the circle for all time. \end{theorem} \begin{proof} By the definitions \eqref{xydef} of $x$ and $y$, the first time $x$ approaches zero, we must simultaneously have $y$ approaching zero, since $$y = -\gamma x_{\theta} + \sigma x \int_0^t x(\tau)^{\gamma}\,d\tau.$$ Because $x$ is positive everywhere until it approaches zero, its minimum is also approaching zero, so that $x_{\theta}$ is approaching zero at the same time; meanwhile the second term in $y$ approaches zero since $x$ remains bounded and the integral is multiplied by $x$. Hence the only way $\eta_{\theta} = x^{\gamma}$ can ever reach zero is if both $x$ and $y$ approach zero simultaneously, which requires that $r$ reach zero. Theorem \ref{globalthm} shows that for $\lambda=2$ or $\lambda=3$, the only way the solution can break down is if $\eta_{\theta}$ reaches zero at some finite time $T$, and when this happens we still have at least local existence in $(x,y)$ coordinates beyond this $T$. If there were such a $T$, then define $r\colon [0,T] \to C^0(S^1)$ by the formula $r(t,\theta) = \sqrt{x(t,\theta)^2 + y(t,\theta)^2}$. By Lemma \ref{angmomboundlem}, since $m_0$ is positive, $r$ cannot reach zero on $[0,T]$, and we get a contradiction. \end{proof} Now we consider what happens when the sign of the momentum changes. By Proposition \ref{sigmasign}, we may assume without loss of generality that $\sigma>0$. In this case, the assumption that momentum changes sign means that $\sigma - u_0''(\theta) < 0$ for some values of $\theta\in S^1$, because it would always be true that $\sigma - u_0''(\theta)>0$ for some values of $\theta\in S^1$ (for example at a point where $u_0$ has a local maximum). The important condition here is that $u_0''(\theta)> \sigma$ somewhere, which in particular implies that $u_0$ is convex on some interval. This leads to a convexity result on the function $x$, and it is on this that all our breakdown results depend. Our strategy will be as follows: we choose points $a<b<c<d$ such that $m_0(\theta)< 0$ on $(a,d)$; then we establish that \begin{itemize} \item $x(t,c)$ has an upper bound independent of $t$ in Lemma \ref{step1lemma}; \item $x(t,b)/x(t,c)$ decays like $e^{-Mt}$ for some $M>0$ in Lemma \ref{step2lemma}; \item and thus $x_t(t,\theta)/x(t,\theta)$ can be made as negative as we like for some $\theta\in [a,b]$ using Lemma \ref{step3lemma}, \end{itemize} and from this we use Lemma \ref{breakdownlemma} to show that $x$ must reach zero in finite time. None of the choices of these points actually matter, although optimizing the choice could lead to a better estimate for the breakdown time. All that matters is that $a$ and $d$ are chosen so that $m_0(\theta)< 0$ on $(a,d)$, which we will assume from now on. Essentially all three lemmas rely on the same basic conservation-of-momentum equation \begin{equation}\label{basicconsmom} \frac{\partial}{\partial t} \left( \frac{y(t,\theta)}{x(t,\theta)}\right) = \frac{m_0(\theta)}{x(t,\theta)^2}, \end{equation} which is a direct consequence of the equation \eqref{angularmomentum}.
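Indeed, dividing the conserved quantity $x y_t - y x_t = m_0$ from \eqref{angularmomentum} by $x^2$ gives
$$ \frac{\partial}{\partial t}\left( \frac{y(t,\theta)}{x(t,\theta)}\right) = \frac{x(t,\theta) y_t(t,\theta) - y(t,\theta) x_t(t,\theta)}{x(t,\theta)^2} = \frac{m_0(\theta)}{x(t,\theta)^2},$$
valid for as long as $x(t,\theta)\ne 0$.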
We apply it in three different ways: integrating in time for Lemma \ref{step1lemma}, integrating in both time and space for Lemma \ref{step2lemma}, and integrating in space only for Lemma \ref{step3lemma}. The first two lemmas are basically the same as arguments in the original paper of McKean~\cite{mckeanbreakdown}, while the third is a new argument. See Figure \ref{figure2} for the heuristic in a simple case. \begin{figure}[!ht] \includegraphics[scale=0.25]{abcdxplot.eps} \; \includegraphics[scale=0.25]{abcdyplot.eps} \; \includegraphics[scale=0.25]{abcdvelplot.eps} \caption{The plots of $x$, $y$, and $x_t/x$ in the Hunter-Saxton case $\lambda=2$, with $u_0(\theta) = 0.1\sin(2\pi\theta) + 0.04\cos(4\pi \theta)$ at $t=1.4$, shortly before breakdown. Note that $x$ is increasing on $(a,d)$, and $y$ is negative everywhere there, and that $x_t/x$ is most negative at $\theta=a$. In this case $y_t/y$ is constant, so we have not plotted it.}\label{figure2} \end{figure} \begin{lemma}\label{step1lemma} Suppose $\gamma>0$ and $\sigma>0$, and that $x$ and $y$ satisfy \eqref{xydef} and \eqref{xIC}--\eqref{yIC}, and thus \eqref{basicconsmom}. If $m_0(\theta)\le 0$ on the interval $[a,d]$, then for any time $t$, the function $x(t,\theta)$ is increasing in $\theta$ for $\theta\in [a,d]$. As a consequence, we have for any $c\in[a,d]$ and any $t\ge 0$ that \begin{equation}\label{upperboundonx} x(t,c)\le (d-c)^{-1/\gamma}. \end{equation} \end{lemma} \begin{proof} Integrate \eqref{basicconsmom} in time to get \begin{equation}\label{basicconsmominttime} \frac{y(t,\theta)}{x(t,\theta)} = \frac{y(0,\theta)}{x(0,\theta)} + m_0(\theta) \int_0^t \frac{d\tau}{x(\tau,\theta)^2} = -\lvert m_0(\theta)\rvert \int_0^t \frac{d\tau}{x(\tau,\theta)^2}, \end{equation} for all $\theta\in [a,d]$, since $y(0,\theta)=0$ everywhere and $m_0$ is nonpositive by assumption. By the definition \eqref{xydef} of $x$ and $y$, we have \begin{equation}\label{integratedlemma1} -\gamma \, \frac{x_{\theta}(t,\theta)}{x(t,\theta)} + \sigma \int_0^t x(\tau,\theta)^{\gamma}\,d\tau = -\lvert m_0(\theta)\rvert \int_0^t \frac{d\tau}{x(\tau,\theta)^2}, \end{equation} and since $\sigma>0$ and $\gamma>0$ by assumption, we conclude that $x_{\theta}/x>0$, so that $x$ is strictly increasing as long as it remains positive. The inequality \eqref{upperboundonx} comes from the fact that $\int_{S^1} \eta_{\theta} \,d\theta =1$, which implies as in \eqref{intxpower} that $$ \int_{S^1} x(t,\theta)^{\gamma} \,d\theta = 1.$$ In particular since $x$ is increasing for $\theta\in [c,d]$, we have $$ (d-c) x(t,c)^{\gamma} \le \int_c^d x(t,\theta)^{\gamma}\,d\theta \le \int_{S^1} x(t,\theta)^{\gamma}\,d\theta = 1,$$ which implies \eqref{upperboundonx}. \end{proof} The next step is to integrate equation \eqref{basicconsmominttime} over $\theta\in [b,c]$, which gives a bound on the logarithm of $x$. This implies exponential decay in time of $x(t,b)$. \begin{lemma}\label{step2lemma} Consider all the same hypotheses as in Lemma \ref{step1lemma} on an interval $[a,d]$. Then for any $b,c$ with $a<b<c<d$, the function $x$ satisfies \begin{equation}\label{exponentialdecaybound} x(t,b) \le x(t,c) e^{-Mt}, \qquad \text{where } M = A\sigma^{\frac{2}{\gamma+2}} \int_b^c \lvert m_0(\theta)\rvert^{\frac{\gamma}{\gamma+2}}\,d\theta, \end{equation} and $A$ is a constant depending only on $\gamma$. 
\end{lemma} \begin{proof} We begin with \eqref{integratedlemma1}, in the form \begin{equation}\label{logxderiv} \frac{x_{\theta}(t,\theta)}{x(t,\theta)} = \int_0^t \frac{1}{\gamma}\left( \sigma x(\tau,\theta)^{\gamma} + \frac{\lvert m_0(\theta)\rvert}{x(\tau,\theta)^2}\right)\,d\tau. \end{equation} Elementary calculus shows that the function $$ x\mapsto \frac{1}{\gamma}\left( \sigma x^{\gamma} + \frac{\lvert m_0\rvert}{x^2}\right)$$ is minimized among positive $x$ for $x = \left( \frac{2\lvert m_0\rvert}{\sigma \gamma}\right)^{\frac{1}{\gamma+2}}$, and the minimum value is $$ A \lvert m_0\rvert^{\frac{\gamma}{\gamma+2}} \sigma^{\frac{2}{\gamma+2}}, \quad\text{for}\quad A = \left(\frac{2}{\gamma}\right)^{\frac{\gamma}{\gamma+2}} \left( \frac{1}{\gamma}+\frac{1}{2}\right).$$ In particular since this bound is independent of time, equation \eqref{logxderiv} implies $$ \frac{\partial}{\partial \theta} \ln{x(t,\theta)} \ge A t \sigma^{\frac{2}{\gamma+2}} \lvert m_0(\theta)\rvert^{\frac{\gamma}{\gamma+2}}.$$ Integrating from $\theta=b$ to $\theta=c$ gives $$ \ln{x(t,c)} - \ln{x(t,b)} \ge Mt,$$ and exponentiation gives \eqref{exponentialdecaybound}. \end{proof} The last step is to use the conservation of angular momentum formula \eqref{angularmomentum} $$ xy_t - y x_t = m_0$$ directly. Dividing through by $xy$ gives \begin{equation}\label{xyratios} \frac{x_t}{x} = \frac{y_t}{y} - \frac{m_0}{xy}. \end{equation} Now by Lemma \ref{upperboundlem}, since both $x$ and $y$ satisfy the same ODE with a bounded forcing function, the quantity $y_t/y$ is bounded above by a constant plus the square root of any increasing upper bound for the forcing function. Meanwhile since $y$ is negative if and only if $m_0$ is, the other term can be made as large and negative as we want when $x$ and $y$ are both small. \begin{lemma}\label{step3lemma} Consider the same hypotheses as in Lemmas \ref{step1lemma} and \ref{step2lemma}. Then \begin{equation}\label{intervalabbound} \int_a^b \frac{x_t(t,\theta)}{x(t,\theta)}\,d\theta \le \int_a^b \frac{y_t(t,\theta)}{y(t,\theta)}\,d\theta - \frac{N}{x(t,b)^2}, \qquad \text{where } N = \frac{2}{\gamma} \left(\int_a^b \sqrt{\lvert m_0(\theta)\rvert}\,d\theta\right)^2. \end{equation} \end{lemma} \begin{proof} Integrating equation \eqref{xyratios} for $\theta\in [a,b]$, we just need to establish a lower bound for the positive quantity \begin{equation}\label{Jdef} J := \int_a^b \frac{m_0(\theta)\,d\theta}{x(t,\theta)y(t,\theta)}. \end{equation} Since $m_0$ and $y$ are both negative simultaneously on $(a,b)$, the Cauchy-Schwarz inequality implies that \begin{equation}\label{cauchyschwarz} \left(\int_a^b \sqrt{\lvert m_0(\theta)\rvert}\,d\theta \right)^2 \le \int_a^b \frac{\lvert m_0(\theta)\rvert \,d\theta}{x(t,\theta) \lvert y(t,\theta)\rvert} \int_a^b x(t,\theta) \lvert y(t,\theta)\rvert \, d\theta. \end{equation} Now by formula \eqref{xydef}, and using the fact that $\lvert y\rvert = -y$ on $[a,d]$, we get \begin{align*} \int_a^b x(t,\theta) \lvert y(t,\theta)\rvert \, d\theta &= \gamma \int_a^b x(t,\theta) x_{\theta}(t,\theta) \,d\theta - \sigma \int_a^b x(t,\theta)^2 \int_0^t x(\tau,\theta)^{\gamma} \,d\tau\,d\theta \\ &\le \tfrac{\gamma}{2} \big( x(t,b)^2 - x(t,a)^2\big) \le \tfrac{\gamma}{2} x(t,b)^2. \end{align*} Now plug this inequality into \eqref{cauchyschwarz} to get that $J$ given by \eqref{Jdef} satisfies $$ J \ge \frac{2}{\gamma x(t,b)^2} \left( \int_a^b \sqrt{\lvert m_0(\theta)\rvert} \,d\theta\right)^2.$$ This then yields \eqref{intervalabbound}.
\end{proof} Combining Lemmas \ref{step1lemma}--\ref{step3lemma}, we can now prove the second half of Theorem \ref{mainthm}. Everything here would in fact work for any value of $\lambda>1$, not just $\lambda=2$ or $\lambda=3$, except for the fact that we need a subexponential upper bound for the forcing function in order to use Lemma \ref{upperboundlem}. \begin{theorem}[Theorem \ref{mainthm}, ``only if'' case]\label{breakdownthm} Suppose $\sigma>0$ and that $\lambda=2$ or $\lambda=3$. If the sign of $m_0 = \sigma - u_0''$ changes on the circle, then $C^2$ solutions of \eqref{mubgeneral}--\eqref{mubICs} must break down in finite time, as the Lagrangian flow given by \eqref{flowequation} ceases to be a diffeomorphism. \end{theorem} \begin{proof} Choose any subdivision $a<b<c<d$ such that $m_0$ is negative on $(a,d)$, and such that $m_0(a)=0$. Lemma \ref{step1lemma} implies that $$ x(t,c) \le (d-c)^{-1/\gamma}.$$ Lemma \ref{step2lemma} then implies that $$ x(t,b) \le x(t,c) e^{-Mt} \le (d-c)^{-1/\gamma} e^{-Mt},$$ where $M>0$ is given by equation \eqref{exponentialdecaybound}. Applying Lemma \ref{step3lemma} then gives $$ \int_a^b \frac{x_t(t,\theta)}{x(t,\theta)}\,d\theta \le \int_a^b \frac{y_t(t,\theta)}{y(t,\theta)}\,d\theta - N (d-c)^{2/\gamma} e^{2Mt},$$ where $N>0$ is given by \eqref{intervalabbound}. Since $y$ satisfies the equation $y_{tt}(t,\theta)= F(t,\theta) y(t,\theta)$ by Theorem \ref{transformthm}, the quantity $y_t/y$ is bounded above by an estimate of the form \begin{equation}\label{ytratio} \frac{y_t(t,\theta)}{y(t,\theta)} \le C(\theta) + f(t,\theta), \end{equation} where $f(t,\theta)$ is any positive increasing function satisfying $F(t,\theta)\le f(t,\theta)^2$ for all $t$ and $\theta$, as in Lemma \ref{upperboundlem}. If $\lambda=2$ or $\lambda=3$, we can use Lemma \ref{forcegrowth} to see that $f(t,\theta)$ grows at most polynomially in time, for each value of $\theta$, and this implies by Lemma \ref{upperboundlem} that $y_t(t,\theta)/y(t,\theta)$ grows at most polynomially in time. Integrating over the interval $\theta\in [a,b]$ still gives polynomial growth in time, and this implies that our estimate takes the form $$ \int_a^b \frac{x_t(t,\theta)}{x(t,\theta)} \,d\theta \le P(t) - N (d-c)^{2/\gamma} e^{2Mt}, $$ where $P(t)$ is a function growing at most like a power of $t$. Since the exponential term eventually dominates, we see that we can make the integral $$ \int_a^b \frac{x_t(t,\theta)}{x(t,\theta)} \,d\theta$$ as negative as we want, which also implies that for some $\theta\in [a,b]$, the quantity $x_t(t,\theta)/x(t,\theta)$ can be made as negative as desired. For such $\theta$, Lemma \ref{breakdownlemma} implies that $x(t,\theta)$ must reach zero in finite time. Of course, since $x(t,\cdot)$ is increasing on $[a,d]$, the smallest value of $x(t,\cdot)$ must occur at $\theta=a$, where the sign of $m_0$ changes from positive to negative. \end{proof} \section{Outlook} The general principle that $m_0>0$ or $m_0<0$ everywhere implies global existence of classical solutions of \eqref{mubgeneral} is established in Ti{\u g}lay-Vizman~\cite{tiglayvizman} as long as the definition of $m$ in terms of $u$ that replaces \eqref{momentumdef} involves at least two derivatives of $u$. In many situations of interest, the momentum $m$ has mean zero for every $u$, and so it is impossible for $m_0$ to have a constant sign; thus we would expect all classical solutions to break down in finite time.
As an example we return to the Okamoto-Sakajo-Wunsch equation~\cite{OSW}, given by \eqref{mubgeneral} where $m = Hu_{\theta}$, for which $m$ integrates to zero, and it is impossible to have $m_0$ positive or negative everywhere. (On the real line the situation is different, but our periodic context forecloses such possibilities.) The following construction was presented in \cite{BKP} in the case $\lambda=2$, but most things work the same way for any value of $\lambda$. Breakdown for all solutions in the case $\lambda=2$ was given in \cite{washabaughpreston}, while breakdown for all positive $\lambda$ with $u_0$ odd was given by Castro-Cord\'oba~\cite{CC}. For $\lambda>0$, all solutions break down in finite time, while for $\lambda<0$ the solution is much more complicated and unknown in general (particularly in the most important case $\lambda=-1$, the De Gregorio equation). For the state of the art on global existence and breakdown for such equations, see Chen~\cite{chen} for the periodic case, Elgindi-Jeong~\cite{elgindijeong} for the nonperiodic case, and references in both. As in \cite{BKP}, we start with $$ m_t + u m_{\theta} + \lambda u_{\theta} m = 0, \qquad m = Hu_{\theta},$$ and applying the Hilbert transform gives $$ u_{t\theta} + u u_{\theta\theta} - \frac{\lambda}{2} (m^2 -u_{\theta}^2) = -F, \qquad F = -uu_{\theta\theta} - H(uHu_{\theta\theta}),$$ using the product identity. For any $u$, the function $F$ is positive at every point, as shown in \cite{BKP}. In Lagrangian form using \eqref{flowequation}, \eqref{flowderiv}, and \eqref{vorticitytransport}, this becomes $$ \frac{\partial}{\partial t}\left( \frac{\eta_{t\theta}}{\eta_{\theta}}\right) + \frac{\lambda}{2} \left( \frac{\eta_{t\theta}}{\eta_{\theta}}\right)^2 = \frac{\lambda}{2} \frac{m_0^2}{\eta_{\theta}^{2\lambda}} - F(t,\eta).$$ The transformation $\rho = \eta_{\theta}^{\lambda/2}$ turns this into the Ermakov-Pinney equation \begin{equation}\label{OSWradial} \rho_{tt} = \frac{\lambda^2}{4} \, \frac{m_0^2}{\rho^3} - \frac{\lambda}{2} \, F \rho. \end{equation} Here $\rho$ is the radial variable of a system $(x,y)$ satisfying $$ x_{tt} = -\tfrac{\lambda}{2} (F\circ\eta) x, \qquad y_{tt} = -\tfrac{\lambda}{2} (F\circ\eta) y,$$ with constant angular momentum $m_0$. This formulation makes it obvious that if $\lambda>0$, the force is attracting, and zero angular momentum with $y(0,\theta)=0$ and $x_t(0,\theta)<0$ implies $\rho(t,\theta)$ reaches zero in finite time. Hence $\eta_{\theta}$ does as well. (There is always such a $\theta\in S^1$ by the Hopf Lemma; see \cite{washabaughpreston}.) If $\lambda<0$, the effective force in the solar model becomes repulsive, although the singular condition is still $\rho\to 0$; that is, $\eta_{\theta}\to \infty$ rather than $\eta_{\theta}\to 0$. (This corresponds to $u_{\theta}$ approaching positive infinity rather than negative infinity.) It is still possible that the particle can approach the origin, but it needs to have both zero angular momentum and a sufficiently negative velocity pointing toward the origin to counteract the repulsive force. We give a simple example of a bound that is straightforward in the solar model. Consider the case $\lambda=-1$ corresponding to the De Gregorio equation. Then equation \eqref{OSWradial} takes the form $$ \rho_{tt} = \frac{m_0^2}{4\rho^3} + \frac{1}{2} \, F\rho,$$ and positivity of $F$ makes clear that there is at most one time interval on which $\rho$ can decrease since $\rho_{tt}$ is strictly positive for $\rho>0$. 
As such, the derivative of the kinetic energy satisfies $$ \frac{d}{dt} \Big( \rho_t^2 + \frac{m_0^2}{4\rho^2}\Big) = F(t) \rho \rho_t \le 0$$ whenever $\rho_t\le 0$, and thus if $\rho$ is decreasing on a time interval $[t_1,t_2]$, then we have $$ \rho_t(t_2,\theta)^2 + \frac{m_0(\theta)^2}{4\rho(t_2,\theta)^2} \le \rho_t(t_1,\theta)^2 + \frac{m_0(\theta)^2}{4\rho(t_1,\theta)^2}.$$ In particular we conclude that $$ \rho(t_2,\theta)^2 \ge \frac{m_0(\theta)^2 \rho(t_1,\theta)^2}{4 \rho(t_1,\theta)^2 \rho_t(t_1,\theta)^2 + m_0(\theta)^2}$$ if $\rho$ is decreasing on $[t_1,t_2]$. For example if $\rho$ decreases on $[0,T]$ then we have \begin{equation}\label{degregorioradialbound} \rho(T,\theta)^2 \ge \frac{m_0(\theta)^2}{m_0(\theta)^2 + u_0'(\theta)^2}, \end{equation} using the fact that $\rho(0,\theta)=1$ and $\rho_t(0,\theta) = -\frac{1}{2} u_0'(\theta)$. Since for $\lambda=-1$ we have $\eta_{\theta} = \rho^{-2}$, equation \eqref{degregorioradialbound} then translates into $$ \eta_{\theta}(t,\theta) \le \frac{m_0(\theta)^2 + u_0'(\theta)^2}{m_0(\theta)^2}.$$ Obviously this is only useful when $m_0(\theta)\ne 0$, and by definition of our momentum operator $m = Hu_{\theta}$, there will certainly be points where $m_0=0$. However such estimates could be useful for estimating the forcing function $F$, which depends nonlocally on our variables. We leave further analysis for future research, but the point is that the general framework here relates a family of Euler-Arnold-type PDEs to a well-understood central force system, which makes some phenomena regarding breakdown or global existence easier to understand intuitively. The reason this approach works is that the equations are ``nearly'' linear in terms of the variable $\eta_{\theta}$. Of course the coefficients of this equation depend on $\eta_{\theta}$, and a transformation may eliminate some of this dependence (e.g., quadratic terms like $\eta_{t\theta}^2/\eta_{\theta}^2$ can be eliminated by a power transformation). This frequently works because $\eta$ satisfies some kind of geodesic equation of the form $\eta_{tt} + \Gamma(\eta; \eta_t,\eta_t) = 0$ for some Christoffel map $\Gamma$ which is bilinear and symmetric in the last two variables but typically depends in a complicated way on the first. Differentiating this with respect to any parameter leads to the Jacobi equation for the variation. In infinite dimensions the spatial variable $\theta$ itself can always be treated as this variational parameter, so that $\eta_{\theta}$ always satisfies the Jacobi equation. The coefficients and covariant derivative here depend on $\eta$ (and thus indirectly on $\eta_{\theta}$), so we cannot view this as a true linear equation, but if the curvature is bounded or well-understood, this equation may be easy to analyze. These are the situations we have studied here. The fact that equation \eqref{mubgeneral} applies to many situations of continuum mechanics suggests that this technique may produce new insights that are not obvious from direct PDE methods.
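As a purely illustrative aside, and not as part of the analysis above, the central force picture can also be probed numerically. The sketch below (assuming NumPy and SciPy, with a constant stand-in for the forcing $F$ and hypothetical parameter values) integrates \eqref{OSWradial} and reproduces the two behaviours just discussed: for $\lambda>0$ with zero angular momentum and inward initial velocity the radial variable reaches zero in finite time, while for $\lambda=-1$ the computed minimum of $\rho$ respects the lower bound \eqref{degregorioradialbound}.
\begin{verbatim}
# Toy numerical check of the Ermakov-Pinney equation (OSWradial) with a
# constant stand-in for the forcing F (in the paper F depends nonlocally
# on the solution, so this is only a cartoon).  Parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def integrate_radial(lam, m0, F, rho0, drho0, t_max=20.0):
    # rho'' = (lam^2/4) m0^2 / rho^3 - (lam/2) F rho
    def rhs(t, y):
        rho, drho = y
        return [drho, 0.25 * lam**2 * m0**2 / rho**3 - 0.5 * lam * F * rho]
    def hit_zero(t, y):          # stop if rho (numerically) reaches zero
        return y[0] - 1e-8
    hit_zero.terminal = True
    return solve_ivp(rhs, (0.0, t_max), [rho0, drho0], events=hit_zero,
                     max_step=1e-3)

# lambda > 0, zero angular momentum, inward initial velocity: rho hits zero.
sol = integrate_radial(lam=2.0, m0=0.0, F=1.0, rho0=1.0, drho0=-0.5)
print("lambda = 2: rho reaches zero near t =", sol.t_events[0])

# lambda = -1 (De Gregorio case): check the bound (degregorioradialbound)
# with rho(0) = 1 and rho_t(0) = -u0p/2.
m0, u0p = 0.5, 1.0
sol = integrate_radial(lam=-1.0, m0=m0, F=1.0, rho0=1.0, drho0=-0.5 * u0p)
print("lambda = -1: min rho^2 =", np.min(sol.y[0])**2,
      ">= bound =", m0**2 / (m0**2 + u0p**2))
\end{verbatim}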
\section{Introduction} In this paper, we consider a complete probability space $\left(\Omega, {\cal F}, P\right)$, on which we suppose given a standard Brownian motion $W$ (which could be $d$-dimensional). Throughout the paper, $\mathbb F:=({\cal F}_t)_{t\geq0}$ denotes the complete and right-continuous filtration generated by $W$. Besides this stochastic basis $(\Omega, {\cal F}, \mathbb F, P)$, we consider an arbitrary random time $\tau$ with values in $[0,+\infty)$ that might not be an $\mathbb F$-stopping time, and the data-triplet $(f, S, \xi)$. Here $f(t,y,z)$ is a random functional (the driver rate), $\xi$ is an ${\cal F}_{\tau}$-random variable\footnote{Here ${\cal{F}}_{\tau}$ is the $\sigma$-algebra that is generated by $\{X_{\tau}:\ X\quad\mbox{is}\quad {\mathbb F}\mbox{-optional}\}$}, and $S$ is a RCLL $\mathbb F$-adapted process with values in $[-\infty, +\infty)$. Thus, our main goal is to study the following RBSDE: \begin{eqnarray}\label{RBSDE1} \begin{cases} dY_{t}=-f(t,Y_{t},Z_{t})d(t\wedge\tau)-d(K_{t\wedge\tau}+M_{t\wedge\tau})+Z_{t}dW_{t}^{\tau},\quad {Y}_{\tau}=\xi,\\ Y\geq S\quad\mbox{on}\quad[\![ 0,\tau[\![,\quad\mbox{and}\quad E\left[\displaystyle\int_{0}^{\tau}(Y_{t-}-S_{t-})dK_{t}\right]=0. \end{cases} \end{eqnarray} This RBSDE generalizes the works of \cite{Touzi,Popier} to the case where $\tau$ is not a stopping time. \\ This study concentrates on addressing the following points: \begin{enumerate} \item What are the conditions (the weakest possible) on the data-triplet $(f, S, \xi)$, without further assumption on $\tau$, that guarantee the existence and uniqueness of the solution to this RBSDE? \item How can (\ref{RBSDE1}) be {\bf explicitly} connected to an RBSDE in $\mathbb F$? We want to explicitly determine the relationship between the two data-triplets and between the solutions of the two RBSDEs. \item How can we estimate --in norm-- the solution $(Y^{\mathbb G},Z^{\mathbb G}, M^{\mathbb{G}},K^{\mathbb{G}})$ in terms of the data-triplet? What are the adequate norms and adequate spaces for both the solution and the data-triplet? \end{enumerate} \subsection{What does the literature say about this RBSDE?} It is well known (see \cite{Touzi} for a similar discussion) that a BSDE (Backward Stochastic Differential Equation) is an RBSDE with $S\equiv -\infty$ and $K\equiv 0$. To the best of our knowledge, BSDEs were introduced in \cite{Bismut} with $f$ being linear in the variables $(y,z)$ and $\tau=T$ being a fixed positive constant. However, it was only after the seminal paper of Pardoux and Peng \cite{Pardoux4} that this class of BSDEs, with $\tau$ a fixed positive constant, received tremendous attention and was investigated deeply and intensively in many directions. These studies were highly motivated by applications in probabilistic numerical methods and/or the probabilistic representation for semilinear PDEs, stochastic control, stochastic game theory, theoretical economics and mathematical finance. A large part of this literature focuses on weakening the Lipschitz property of the coefficient $f$ with respect to the $y$-variable, allowing $\mathbb F$ to be more general, and/or weakening the assumptions on the barrier process $S$. Only very recently was the novel notion of second order BSDEs introduced in \cite{Cheridito1}, and extended in \cite{SonerRTouziZhang} afterwards, due to its vital role in treating fully nonlinear PDEs. \\ The first BSDE (or RBSDE) with a random horizon appeared in \cite{Peng91}, where $\tau$ is an $\mathbb F$-stopping time.
The author describes how the solution to this class of BSDEs with an unbounded random terminal time $\tau$, that is an $\mathbb F$-stopping time, is related to semilinear elliptic PDEs. It is important to mention that in the case of a constant horizon $T$, the solutions of BSDEs are connected to viscosity solutions of a system of semilinear parabolic PDEs; see \cite{PardouxPradeillesRao} and the references therein for details. Afterwards, this family of BSDEs has been extended in various directions in \cite{BriandConfortola, BriandHu,Darling,Popier,Royer}, and the references therein, to cite a few. For the case of second order BSDEs under a random terminal time that is an $\mathbb F$-stopping time, we refer the reader to the very recent work \cite{Touzi}. \\ Herein, we address (\ref{RBSDE1}) by letting $\tau$ be an arbitrary random time, and we address the main problems mentioned above. This case is a natural extension of the existing literature on RBSDEs with random terminal time, and is highly motivated by the two areas of credit risk theory and life insurance (the life market). In the credit risk framework, $\tau$ represents the default time of a firm, while in life insurance it models the death time of an insured, where the mortality and longevity risks are real challenges for both academia and the insurance industry. To the best of our knowledge, all the existing literature treating this class of RBSDEs imposes very strong assumption(s) on $\tau$. The most frequent among these, see \cite{Kharroubi} and the references therein, is the requirement that $W^{\tau}$ remain a martingale under the enlarged filtration (this case is also known in the literature as the immersion assumption). \\ \subsection{Main challenges and our achievements} On one hand, the RBSDE (\ref{RBSDE1}) is a {\it natural} extension of the existing literature on RBSDEs with random horizon (see \cite{Touzi} and the references therein, to cite a few) to the case where $\tau$ is an arbitrary random time. On the other hand, our hedging and pricing studies in \cite{Choulli5} for some classes of informational markets lead to this form of RBSDEs and BSDEs, where the main source of uncertainty is $W^{\tau}$ and the driver $f$ is Lipschitz. \\ The difficulties in addressing (\ref{RBSDE1}) are numerous and challenging. Among these, on the one hand, we mention that $W^{\tau}$ is {\it not} a $\mathbb G$-martingale when $\tau$ is general. This explains why all the literature on BSDEs under a random horizon, to the best of our knowledge, assumes the immersion assumption on $\tau$, which says that any $\mathbb F$-martingale stopped at $\tau$ remains a $\mathbb G$-martingale. On the other hand, the Burkholder-Davis-Gundy inequalities for martingales, which are vital in the analysis of BSDEs and RBSDEs, fail for martingales stopped at $\tau$ when $\tau$ is not a pseudo-stopping time with respect to $\mathbb F$. We refer the reader to \cite{Nik2008, Nik2005} for this fact and for the notion of pseudo-stopping times, which is very close to that of immersion. By virtue of the Doob-Meyer decomposition for $W^{\tau}$ under $\mathbb G$, one can think of using the transformation ${\cal T}(W)=W^{\tau}-G_{-}^{-1}I_{\Rbrack0,\tau]\!]}\bigcdot \langle W, m\rangle $, which is a $\mathbb G$-local martingale. However, this definitely alters the driver $f(t,y,z)$ of the RBSDE.
Precisely, the process $ZG_{-}^{-1}\bigcdot \langle W, m\rangle=:\int_0^{\tau\wedge\cdot}\beta^{(m)}_sZ_sG_{s-}^{-1} ds$ will be transferred to the driver, and this will perturb the Lipschitz conditions and other features, as the process $\beta^{(m)}$ might not be ``regular" nor ``smooth" enough. Hence, this approach does not solve the problem; it makes it very complicated and would lead to assumptions on $\tau$. Inspired by \cite{Touzi} and \cite{Bouchard}, we address (\ref{RBSDE1}) in two steps. In the first step, we consider the case of a bounded horizon and we stop at $T\wedge \tau$ for some $T\in (0,+\infty)$ instead of $\tau$. For this bounded random horizon case, thanks to some results of \cite{Choulli1,Choulli2}, we answer fully and in detail the aforementioned main problems, and beyond. The second step consists of relaxing the boundedness condition on the random horizon by letting, in a suitable sense, $T$ go to infinity. This raises additional serious challenges.\\ Our achievements are numerous, at both the methodological and the conceptual level. In fact, besides answering all the aforementioned problems, we prove the following general fact: For any random time $\tau$ having a positive Az\'ema supermartingale, there exists a positive and bounded decreasing process --hereafter called the discount factor and denoted by ${\widetilde{\cal E}}$-- that is crucial in defining the spaces and norms for both the solution of the RBSDE and the data-triplet. This discount factor is also vital in bridging the RBSDE (\ref{RBSDE1}) with its counterpart RBSDE under $\mathbb F$, and it connects their solutions in a very explicit manner. On the methodological side, we elaborate our a priori estimates using a different method from the existing ones in the literature. Indeed, we directly establish inequalities without distinguishing the cases on $p$, and this is due to some stronger and deeper martingale inequalities of \cite{Choulli4} that we slightly generalize. Furthermore, our method is robust with respect to the nature of the filtration $\mathbb F$, and hence our analysis can be extended to settings with jumps without serious difficulties. Some of these extensions can be found in \cite{Alsheyab}, while herein we restrict ourselves to the Brownian filtration $\mathbb F$ for the sake of keeping the setting accessible to a broad audience, and to avoid overshadowing the main ideas with technicalities related to the general setting.\\ This paper has seven sections including the current one. The second section defines the mathematical model and its preliminaries, such as the norms used for the RBSDEs and some vital results on the enlargement of the filtration $\mathbb F$ with $\tau$ and on martingales for the enlarged filtration. The third section addresses the optimal stopping problem and the Snell envelop under stopping at $\tau$. This is vital, as the Snell envelop is intimately related to linear RBSDEs. The fourth and fifth sections are devoted to the linear RBSDEs, depending on whether we stop the RBSDE at $\tau\wedge{T}$ for some fixed planning horizon $T\in (0,+\infty)$ or we stop at $\tau$. The sixth and seventh sections deal with the general RBSDE (\ref{RBSDE1}), and here again we distinguish the cases where we stop at $\tau\wedge{T}$ or at $\tau$. The paper ends with appendices where we recall some crucial results and/or prove our technical lemmas.
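Before turning to the precise mathematical setting, and purely to fix ideas, the following small simulation sketch (assuming NumPy; all numerical choices are hypothetical) illustrates the basic objects attached to a random time: a time $\tau$ built from an $\mathbb F$-adapted intensity via the classical Cox construction, its survival process $G_t=P(\tau>t|{\cal F}_t)$, and the indicator process $D$ introduced in the next section. We stress that the Cox construction satisfies the immersion property, so it is precisely the simple situation that the present paper goes beyond; the sketch is only meant to make these objects concrete.
\begin{verbatim}
# Cox-construction toy example (illustration only):
# tau = inf{t : Lambda_t >= Theta} with Theta ~ Exp(1) independent of F,
# so that G_t = P(tau > t | F_t) = exp(-Lambda_t) on each path.
# This simple case satisfies immersion, unlike the random times studied here.
import numpy as np

rng = np.random.default_rng(1)
n, T = 10_000, 5.0
dt = T / n
t = np.linspace(0.0, T, n + 1)

dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))      # one Brownian path
intensity = np.exp(W)                           # a nonnegative F-adapted intensity (toy choice)
Lambda = np.concatenate(([0.0], np.cumsum(intensity[:-1] * dt)))

G = np.exp(-Lambda)                             # Azema supermartingale on this path
Theta = rng.exponential(1.0)                    # independent exponential threshold
crossed = np.nonzero(Lambda >= Theta)[0]
tau = t[crossed[0]] if crossed.size else np.inf # the random time tau
D = (t >= tau).astype(float)                    # indicator process D_t = 1_{tau <= t}

print("tau =", tau, "   G_T =", G[-1])
\end{verbatim}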
\section{The mathematical setting and notation} This section defines the notation, the financial and mathematical concepts that the paper addresses or uses, the mathematical model that we focus on, and some useful existing results. Throughout the paper, we consider the complete probability space $\left(\Omega, {\cal F}, P\right)$. By ${\mathbb H}$ we denote an arbitrary filtration that satisfies the usual conditions of completeness and right continuity. For any process $X$, the $\mathbb H$-optional projection and the $\mathbb H$-predictable projection, when they exist, will be denoted by $^{o,\mathbb H}X$ and $^{p,\mathbb H}X$ respectively. The set ${\cal M}(\mathbb H, Q)$ (respectively ${\cal M}^{p}(\mathbb H, Q)$ for $p\in (1,+\infty)$) denotes the set of all $\mathbb H$-martingales (respectively $p$-integrable martingales) under $Q$, while ${\cal A}(\mathbb H, Q)$ denotes the set of all $\mathbb H$-optional processes that are right-continuous with left-limits (RCLL for short) with integrable variation under $Q$. When $Q=P$, we simply omit the probability for the sake of simplicity of notation. For an $\mathbb H$-semimartingale $X$, by $L(X,\mathbb H)$ we denote the set of $\mathbb H$-predictable processes that are $X$-integrable in the semimartingale sense. For $\varphi\in L(X,\mathbb H)$, the resulting integral of $\varphi$ with respect to $X$ is denoted by $\varphi\bigcdot X$. For an $\mathbb H$-local martingale $M$, we denote by $L^1_{loc}(M,\mathbb H)$ the set of $\mathbb H$-predictable processes $\varphi$ that are $M$-integrable and for which the resulting integral $\varphi\bigcdot M$ is an $\mathbb H$-local martingale. If ${\cal C}(\mathbb H)$ is a set of processes that are adapted to $\mathbb H$, then ${\cal C}_{loc}(\mathbb H)$ is the set of processes, $X$, for which there exists a sequence of $\mathbb H$-stopping times, $(T_n)_{n\geq 1}$, that increases to infinity such that $X^{T_n}$ belongs to ${\cal C}(\mathbb H)$, for each $n\geq 1$. The $\mathbb H$-dual optional projection and the $\mathbb H$-dual predictable projection of a process $V$ with finite variation, when they exist, will be denoted by $V^{o,\mathbb H}$ and $V^{p,\mathbb H}$ respectively. For any real-valued $\mathbb H$-semimartingale, $L$, we denote by ${\cal E}(L)$ the Dol\'eans-Dade (stochastic) exponential. It is the unique solution to the stochastic differential equation $dX=X_{-}dL,\ X_0=1,$ and is given by \begin{eqnarray}\label{DDequation} {\cal E}_t(L)=\exp\left(L_t-L_0-{1\over{2}}\langle L^c\rangle_t\right)\prod_{0<s\leq t}(1+\Delta L_s)e^{-\Delta L_s}.\end{eqnarray} Throughout the paper, on $\left(\Omega, {\cal F}, P\right)$, we consider a standard Brownian motion $W=(W_t)_{t\geq 0}$, and its natural filtration $\mathbb F:=({\cal F}_t)_{t\geq 0}$ that satisfies the usual conditions of right continuity and completeness. On $\Omega\times [0,+\infty)$, we consider the $\mathbb F$-optional $\sigma$-field denoted by ${\cal O}(\mathbb F)$ and the $\mathbb F$-progressive $\sigma$-field denoted by $\mbox{Prog}(\mathbb F)$ (i.e., a process $X$ is said to be $\mathbb F$-progressive if $X$, as a map on $\Omega\times [0,t]$, is ${\cal F}_t\otimes{\cal B}([0,t])$-measurable, for any $t\in (0,+\infty)$, where ${\cal B}([0,t])$ is the Borel $\sigma$-field on $[0,t]$).
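For a continuous semimartingale $L$ the jump product in (\ref{DDequation}) is empty, so that ${\cal E}(L)_t=\exp\left(L_t-L_0-\tfrac{1}{2}\langle L\rangle_t\right)$. As a small illustrative sanity check (assuming NumPy; the step size and the volatility value are hypothetical), the sketch below compares this closed form with an Euler discretization of $dX=X_{-}dL$, $X_0=1$, for $L=\sigma W$.
\begin{verbatim}
# Sanity check of the Doleans-Dade exponential (DDequation) for L = sigma*W:
# here E(L)_t = exp(sigma W_t - sigma^2 t / 2), and it should agree with an
# Euler scheme for dX = X dL, X_0 = 1.  Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, T, sigma = 50_000, 1.0, 0.7
dt = T / n
t = np.linspace(0.0, T, n + 1)
L = sigma * np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

closed_form = np.exp(L - 0.5 * sigma**2 * t)    # E(L)_t for continuous L

X = np.empty(n + 1)                              # Euler scheme for dX = X dL
X[0] = 1.0
for k in range(n):
    X[k + 1] = X[k] * (1.0 + (L[k + 1] - L[k]))

print("max |Euler - closed form| =", np.max(np.abs(X - closed_form)))
\end{verbatim}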
\subsection{RBSDEs: Definition, spaces and norms} Throughout this subsection we suppose given a complete filtered probability space $\left(\Omega, {\cal F}, \mathbb H=({\cal H}_t)_{t\geq 0}, Q\right)$, where $\mathbb H\supseteq{\mathbb F}$ and $Q$ is any probability measure absolutely continuous with respect to $P$. The following definition of RBSDEs is borrowed from \cite[Definition 2.1]{Briandetal}. \begin{definition}\label{Definition-RBSDE} Let $\sigma$ be an $\mathbb H$-stopping time, and let $(f^{\mathbb H},S^{\mathbb H},\xi^{\mathbb H})$ be a triplet such that $f^{\mathbb H}$ is a $\mbox{Prog}(\mathbb H)\otimes{\cal B}(\mathbb R)\otimes{\cal B}(\mathbb R)$-measurable functional, $S^{\mathbb H}$ is a RCLL and $\mathbb H$-adapted process, and $\xi^{\mathbb H}$ is an ${\cal H}_{\sigma}$-measurable random variable. Then an $(\mathbb H, Q)$-solution to the following RBSDE \begin{eqnarray}\label{RBSDE4definition} \begin{cases} dY_t=-f^{\mathbb H}(t,Y_t,Z_t)I_{\{t\leq\sigma\}}dt+Z_t dW_{t\wedge\sigma}-dM_t-dK_t,\quad Y_{\sigma}=\xi^{\mathbb H},\\ \displaystyle{Y}\geq S^{\mathbb H}\ \mbox{on}\ \Lbrack0,\sigma[\![,\quad \int_0^{\sigma}(Y_{u-}-S_{u-}^{\mathbb H})dK_u=0\quad P\mbox{-a.s.}.\end{cases} \end{eqnarray} is any quadruplet $(Y^{\mathbb H}, Z^{\mathbb H},M^{\mathbb H},K^{\mathbb H})$ satisfying (\ref{RBSDE4definition}) such that $M^{\mathbb H}\in {\cal M}_{0,loc}(Q,\mathbb H)$, $K^{\mathbb H}$ is a RCLL, nondecreasing and $\mathbb H$-predictable process, and \begin{eqnarray}\label{Condition1} \int_0^{\sigma}\left( (Z_t^{\mathbb H})^2+\vert{f^{\mathbb H}}(t,Y_t^{\mathbb H},Z_t^{\mathbb H})\vert\right) dt<+\infty\quad Q\mbox{-a.s.} \end{eqnarray} When $Q=P$ we will simply call the quadruplet an $\mathbb H$-solution, while the filtration is also omitted when there is no risk of confusion. \end{definition} In this paper, we are interested in solutions that satisfy suitable integrability properties. To this end, we recall the following spaces and norms that will be used throughout the paper. We denote by $\mathbb{L}^{p}(Q)$ the space of $\mathcal{F}$-measurable random variables $\xi'$ such that \begin{equation*} \parallel \xi' \parallel_{\mathbb{L}^{p}(Q)}^{p}:=E^{Q}\left[|\xi' | ^{p}\right ]<\infty . \end{equation*} $\mathbb{D}_{\sigma}(Q,p)$ is the space of RCLL and ${\cal F}\otimes{\cal B}(\mathbb R^+)$-measurable processes, $Y$, such that $Y=Y^{\sigma}$ and \begin{equation*} \Vert{Y }\Vert_{\mathbb{D}_{\sigma}(Q,p)}^{p}:=E^{Q}\left[\sup_{0\leq {t}\leq\sigma}\vert{Y_t}\vert^p\right ]<\infty. \end{equation*} Here ${\cal B}(\mathbb R^+)$ is the Borel $\sigma$-field of $\mathbb R^+$. $\mathbb{S}_{\sigma}(Q,p)$ is the space of $\mbox{Prog}(\mathbb H)$-measurable processes $Z$ such that $Z=Z^{\sigma}$ and \begin{equation*} \Vert{Z}\Vert_{\mathbb{S}_{\sigma}(Q,p)}^{p}:=E^{Q}\left[\left (\int_{0}^{\sigma}\vert{ Z_t}\vert ^{2}dt\right )^{{p}/{2}}\right ]<\infty . \end{equation*} For any $M\in {\cal M}_{loc}(Q,\mathbb H)$, we define its $p$-norm by \begin{equation*} \Vert {M} \Vert_{{\cal M}^p(Q)}^{p}:=E^{Q}\left[[M, M]_{\infty} ^{p/2}\right ],\end{equation*} and the $p$-norm of any $K\in {\cal A}_{loc}(Q,\mathbb H)$ is given by \begin{eqnarray*} \Vert{K}\Vert_{{\cal{A}}(Q,p)}^p:=E^Q\left[\left(\mbox{Var}_{\infty}(K)\right)^p\right].\end{eqnarray*} Herein and throughout the paper, Var$(K)$ denotes the total variation process of $K$, and ${\cal A}^p(Q,\mathbb H)$ is the set of $K\in {\cal A}_{loc}(Q,\mathbb H)$ such that $ \Vert{K}\Vert_{{\cal{A}}(Q,p)}<+\infty$. \begin{definition}\label{RBSDE4Lp} Let $p\in (1,+\infty)$.
An $L^p(Q, \mathbb H)$-solution for (\ref{RBSDE4definition}) is a $(Q,\mathbb H)$-solution $(Y, Z,M,K)$ that belongs to $ \mathbb{D}_{\sigma}(Q,p)\otimes \mathbb{S}_{\sigma}(Q,p)\otimes{\cal M}^p(Q,\mathbb H)\otimes{\cal A}^p(Q,\mathbb H).$\end{definition} \subsection{The random horizon and the progressive enlargement of $\mathbb F$} In addition to this initial model $\left(\Omega, {\cal F}, \mathbb F,P\right)$, we consider an arbitrary random time, $\tau$, that might not be an $\mathbb F$-stopping time. This random time is parametrized through $\mathbb F$ by the pair $(G, \widetilde{G})$, called the survival probabilities or Az\'ema supermartingales, given by \begin{eqnarray}\label{GGtilde} G_t :=^{o,\mathbb F}(I_{\Rbrack0,\tau]\!]})_t= P(\tau > t | {\cal F}_t) \ \mbox{ and } \ \widetilde{G}_t :=^{o,\mathbb F}(I_{\Rbrack0,\tau[\![})_t= P(\tau \ge t | {\cal F}_t).\end{eqnarray} Furthermore, the following process \begin{equation} \label{processm} m := G + D^{o,\mathbb F}, \end{equation} is a BMO $\mathbb F$-martingale and plays an important role in the analysis of the enlargement of filtration. The flow of information that incorporates both $\tau$ and $\mathbb{F}$ is defined using the pair $(D,\mathbb G)$ given by \begin{equation}\label{processD} D:=I_{]\!]\tau,+\infty]\!]},\ \mathbb G:=({\cal G}_t)_{t\geq 0},\ {\cal G}_t:={\cal G}^0_{t+}\ \mbox{with} \ {\cal G}_t^0:={\cal F}_t\vee\sigma\left(D_s,\ s\leq t\right). \end{equation} Thanks to \cite[Theorem 3]{ACJ} and \cite[Theorem 2.3 and Theorem 2.11]{ChoulliDavelooseVanmaele}, we recall the following result. \begin{theorem}\label{Toperator} The following assertions hold.\\ {\rm{(a)}} For any $M\in{\cal M}_{loc}(\mathbb F)$, the process \begin{equation} \label{processMhat} {\cal T}(M) := M^\tau -{\widetilde{G}}^{-1} I_{[\![ 0,\tau[\![} \bigcdot [M,m] + I_{[\![ 0,\tau[\![} \bigcdot \Big(\sum \Delta M I_{\{\widetilde G=0<G_{-}\}}\Big)^{p,\mathbb F}\end{equation} is a $\mathbb G$-local martingale.\\ {\rm{(b)}} The process \begin{equation} \label{processNG} N^{\mathbb G}:=D - \widetilde{G}^{-1} I_{[\![ 0,\tau[\![} \bigcdot D^{o,\mathbb F} \end{equation} is a $\mathbb G$-martingale with integrable variation. Moreover, $H\bigcdot N^{\mathbb G}$ is a $\mathbb G$-local martingale with locally integrable variation for any $H$ belonging to \begin{equation} \label{SpaceLNG} {\mathcal{I}}^o_{loc}(N^{\mathbb G},\mathbb G) := \Big\{K\in \mathcal{O}(\mathbb F)\ \ \big|\quad \vert{K}\vert G{\widetilde G}^{-1} I_{\{\widetilde{G}>0\}}\bigcdot D\in{\cal A}_{loc}(\mathbb G)\Big\}. \end{equation} \end{theorem} For any $q\in [1,+\infty)$ and a $\sigma$-algebra ${\cal H}$ on $\Omega\times [0,+\infty)$, we define \begin{equation}\label{L1(PandD)Local} L^q\left({\cal H}, P\otimes dD\right):=\left\{ X\ {\cal H}\mbox{-measurable}:\quad \mathbb E[\vert X_{\tau}\vert^q I_{\{\tau<+\infty\}}]<+\infty\right\}.\end{equation} \begin{lemma} \label{G-projection}For any nonnegative or integrable process $X$, we always have \begin{equation}\label{converting}E\left[X_{t}|\mathcal{G}_{t}\right]I_{\{t\ <\tau\}}={E\left[X_{t}I_{\{t\ <\tau\}}|\mathcal{F}_{t}\right]}G_t^{-1}I_{\{t\ <\tau\}}. \end{equation} \end{lemma} Throughout the paper, we make the following assumption: \begin{eqnarray}\label{Assumption4Tau} G>0\quad (\mbox{i.e., $G$ is a positive process) and}\quad 0<\tau<+\infty\quad P\mbox{-a.s.}. \end{eqnarray} Now, we recall \cite[Proposition 4.3]{Choulli1}, which will be useful throughout the paper.
\begin{proposition}Suppose that $G>0$ and consider the process \begin{equation}\label{Ztilde} \widetilde{Z}:=1/{\cal E}(G_{-}^{-1}\bigcdot m). \end{equation} Then the following assertions hold.\\ {\rm{(a)}} The process $\widetilde{Z}^{\tau}$ is a $\mathbb G$-martingale, and for any $T\in (0,+\infty)$, $\widetilde{Q}_T$ given by \begin{equation}\label{Qtilde} \frac{d{\widetilde{Q}_T}}{dP}:=\widetilde{Z}_{T\wedge\tau} \end{equation} is a well-defined probability measure on ${\cal G}_{\tau\wedge T}$.\\ {\rm{(b)}} For any $M\in {\cal M}_{loc}(\mathbb F)$, we have $M^{T\wedge \tau}\in {\cal M}_{loc}(\mathbb G, \widetilde{Q})$. In particular, $W^{T\wedge\tau}$ is a Brownian motion for $(\widetilde{Q}, \mathbb G)$, for any $T\in (0,+\infty)$. \end{proposition} \begin{remark} In general, the $\mathbb G$-martingale $\widetilde{Z}^{\tau}$ might not be uniformly integrable, and hence in general $\widetilde{Q}$ might not be extended to $(0,+\infty]$. For these facts, we refer the reader to \cite[Proposition 4.3]{Choulli1} for details, where conditions for $\widetilde{Z}^{\tau}$ to be uniformly integrable are fully singled out when $G>0$. \end{remark} \section{The Snell envelop under random horizon} Throughout the paper, $\mathcal{J}_{\sigma_1}^{\sigma_2}(\mathbb{H})$ denotes the set of all $\mathbb{H}$-stopping times with values in $[\![ \sigma_1,\sigma_2]\!]$, for any two $\mathbb H$-stopping times $\sigma_1$ and $\sigma_2$ such that $\sigma_1\leq\sigma_2$. \begin{proposition}\label{PropositionG2F}Suppose (\ref{Assumption4Tau}) holds, and let $X^{\mathbb G}$ be a $\mathbb G$-optional process such that $(X^{\mathbb G})^{\tau}=X^{\mathbb G}$. Then there exists a unique pair $(X^{\mathbb F}, k^{(pr)})$ of processes such that $X^{\mathbb F}$ is $\mathbb F$-optional and $k^{(pr)}$ is $\mathbb F$-progressive, and \begin{eqnarray}\label{EqualityG2F} X^{\mathbb G}=X^{\mathbb F} I_{\Lbrack0,\tau[\![}+k^{(pr)}\bigcdot D. \end{eqnarray} Furthermore, the following assertions hold.\\ {\rm{(a)}} $X^{\mathbb G}$ is RCLL if and only if $X^{\mathbb F}$ is RCLL.\\ {\rm{(b)}} $X^{\mathbb G}$ is a $\mathbb G$-semimartingale if and only if $X^{\mathbb F}$ is an $\mathbb F$-semimartingale, and \begin{eqnarray}\label{Decompo4XG} (X^{\mathbb G})^{\tau}=(X^{\mathbb F})^{\tau}+(k^{(pr)}-X^{\mathbb F})\bigcdot D. \end{eqnarray} {\rm{(c)}} $E\left[\sup_{t\geq 0} \vert X^{\mathbb G}_{t}\vert\right]<+\infty$ if and only if \begin{eqnarray} k^{(pr)}\in L^1\left(\widetilde\Omega, {\rm{Prog}}(\mathbb F), P\otimes D\right)\quad\mbox{and}\quad E\left[\int_0^{+\infty} \sup_{0\leq s< t}\vert X^{\mathbb F}_s\vert dD^{o,\mathbb F}_t\right]<+\infty.\end{eqnarray} \end{proposition} \begin{proof} Consider a $\mathbb G$-optional process $X^{\mathbb G}$. Then thanks to \cite[Lemma B.1]{Aksamit} (see also \cite[Lemma 4.4]{Jeulin1980}), there exists a pair $(X^{\mathbb F}, k^{(pr)})$ such that $X^{\mathbb F}$ is $\mathbb F$-optional, $k^{(pr)}$ is $\mbox{Prog}(\mathbb F)$-measurable, and \begin{eqnarray*} X^{\mathbb G}I_{\Lbrack0,\tau[\![}=X^{\mathbb F}I_{\Lbrack0,\tau[\![},\quad\mbox{and}\quad X^{\mathbb G}_{\tau}=k^{(pr)}_{\tau}. \end{eqnarray*} Furthermore, this pair is unique due to $G>0$. Thus, the condition $X^{\mathbb G}=(X^{\mathbb G})^{\tau}$ allows us to derive \begin{eqnarray*} X^{\mathbb G}=X^{\mathbb G}I_{\Lbrack0,\tau[\![}+X^{\mathbb G}_{\tau} I_{[\![\tau,+\infty[\![}=X^{\mathbb F}I_{\Lbrack0,\tau[\![}+k^{(pr)}\bigcdot D, \end{eqnarray*} and the equality (\ref{EqualityG2F}) is proved.
\\ a) Thanks to (\ref{EqualityG2F}) and the fact that $k^{(pr)}\bigcdot D$ is a RCLL process, we deduce that $X^{\mathbb G}$ is a RCLL process if and only if $X^{\mathbb F} I_{\Lbrack0,\tau[\![}$ is a RCLL process. Remark that, due to $G>0$ and \cite{DellacherieMeyer80}, this latter fact is equivalent to $X^{\mathbb F}$ being RCLL. This ends the proof of assertion (a).\\ b) It is clear that $k^{(pr)}\bigcdot D$ is a RCLL $\mathbb G$-semimartingale, and hence $X^{\mathbb G}$ is a RCLL $\mathbb G$-semimartingale if and only if $X^{\mathbb F}I_{\Lbrack0,\tau[\![}$ is a RCLL $\mathbb G$-semimartingale. By stopping, there is no loss of generality in assuming $X^{\mathbb G}$ is bounded, which leads to the boundedness of $X^{\mathbb F}$, see \cite[Lemma B.1]{Aksamit} or \cite[Lemma 4.4 (b), page 63]{Jeulin1980}. Thus, thanks to \cite[Th\'eor\`eme 47, p. 119 and Th\'eor\`eme 59, p. 268]{DellacherieMeyer80}, which implies that the optional projection of a bounded RCLL $\mathbb G$-semimartingale is a RCLL $\mathbb F$-semimartingale, we deduce that $ X^{\mathbb F}G=^{o,\mathbb F}\left( X^{\mathbb F}I_{\Lbrack0,\tau[\![}\right)$ is a RCLL $\mathbb F$-semimartingale. This, together with the condition $G>0$ and the fact that $G$ is an $\mathbb F$-semimartingale, yields that $ X^{\mathbb F}$ is an $\mathbb F$-semimartingale. Furthermore, it is clear that when $X^{\mathbb F}$ is an $\mathbb F$-semimartingale, we have \begin{eqnarray}\label{equalityG2Fbis} X^{\mathbb F}I_{\Lbrack0,\tau[\![}=( X^{\mathbb F})^{\tau}- X^{\mathbb F}\bigcdot D\quad\mbox{is a $\mathbb G$-semimartingale},\end{eqnarray} and (\ref{Decompo4XG}) follows from this equality and (\ref{EqualityG2F}). \\ c) Here, we prove assertion (c). To this end, we use (\ref{EqualityG2F}) and notice that \begin{eqnarray*} {{I}\over{2}} \leq \sup_{t\geq 0} \vert X^{\mathbb G}_{t}\vert=\max\left(\sup_{ 0\leq{t}<\tau} \vert X^{\mathbb F}_{t}\vert,\vert{k}^{(pr)}_{\tau}\vert\right)\leq I,\quad I:=\int_0^{\infty} \left(\sup_{ 0\leq{u}<t} \vert X^{\mathbb F}_{u}\vert+\vert{k}^{(pr)}_t\vert\right)dD_t. \end{eqnarray*} Hence, these inequalities imply that $E\left[\sup_{t\geq 0} \vert X^{\mathbb G}_{t}\vert\right]<+\infty$ iff $E\left[\int_0^{\infty} \vert{k}^{(pr)}_t\vert{d}D_t\right]<+\infty$ and \begin{eqnarray*} E\left[\int_0^{\infty} \sup_{ 0\leq{u}<t} \vert X^{\mathbb F}_{u}\vert{d}D_t\right]=E\left[\int_0^{\infty} \sup_{ 0\leq{u}<t} \vert{X}^{\mathbb F}_{u}\vert{d}D_t^{o,\mathbb F}\right]<+\infty. \end{eqnarray*} This latter equality is due to $\sup_{ 0\leq{u}<t} \vert{X}^{\mathbb F}_{u}\vert$ being $\mathbb F$-optional. This ends the proof of the proposition. \end{proof} \begin{lemma}\label{stoppingTimeLemma} Let $\sigma_{1}$ and $\sigma_{2}$ be two $\mathbb{F}$-stopping times such that $\sigma_{1}\leq\sigma_{2}$ $P$-a.s. Then, for any $\mathbb{G}$-stopping time, $\sigma^{\mathbb{G}}$, satisfying \begin{equation}\label{sigmaG} \sigma_{1}\wedge\tau\leq\sigma^{\mathbb{G}}\leq\sigma_{2}\wedge\tau \hspace{5mm} P\mbox{-a.s.}, \end{equation} there exists an $\mathbb{F}$-stopping time $\sigma^{\mathbb{F}}$ such that \begin{equation}\label{sigmaF} \sigma_{1}\leq\sigma^{\mathbb{F}}\leq\sigma_{2}\quad \mbox{ and }\quad \sigma^{\mathbb{F}}\wedge\tau=\sigma^{\mathbb{G}} \hspace{5mm} P\mbox{-a.s.} \end{equation} \end{lemma} The following is the main result of this section, where we write, in different manners, the Snell envelop of a process under $\mathbb G$ as a sum of a transformation of an $\mathbb F$-Snell envelop and $\mathbb G$-martingales.
\begin{theorem}\label{SnellEvelopG2F}Suppose $G>0$, and let $X^{\mathbb G}$ be a RCLL and $\mathbb G$-adapted process such that $(X^{\mathbb G})^{\tau}=X^{\mathbb G}$. Then consider the unique pair of processes $(X^{\mathbb F}, k^{(pr)})$ associated to $X^{\mathbb G}$, and $k^{(op)}$ the $\mathbb F$-optional projection of $k^{(pr)}$ with respect to the measure $P\otimes D$. Then the following assertions hold.\\ {\rm{(a)}} If either $X^{\mathbb G}$ is nonnegative or $E\left[\sup_{t\geq 0} \max(X^{\mathbb G}_{t},0)\right]<+\infty$, then the $(\mathbb G,P)$-Snell envelop of $X^{\mathbb G}$, denoted ${\cal S}(X^{\mathbb G};\mathbb G,P)$, is given by \begin{eqnarray}\label{Snell4(G,P)} {\cal S}(X^{\mathbb G};\mathbb G,P)&&={{{\cal S}(X^{\mathbb F}G+k^{(op)}\bigcdot D^{o,\mathbb F}; \mathbb F, P)}\over{G}}I_{\Lbrack0,\tau[\![}+(k^{(pr)}-k^{(op)})\bigcdot D+{{(k^{(op)}\bigcdot D^{o,\mathbb F})_{-}}\over{G_{-}^2}}\bigcdot {\cal T}(m)\nonumber\\ &&+\left(k^{(op)}+{{k^{(op)}\bigcdot D^{o,\mathbb F}}\over{G}}\right)\bigcdot N^{\mathbb G}, \end{eqnarray} where ${\cal S}(X^{\mathbb F}G+k^{(op)}\bigcdot D^{o,\mathbb F}; \mathbb F, P)$ is the $(\mathbb F, P)$-Snell envelop of $X^{\mathbb F}G+k^{(op)}\bigcdot D^{o,\mathbb F}$.\\ {\rm{(b)}} Let T$\in (0,+\infty)$ and $\widetilde{Q}$ be given in (\ref{Qtilde}). If either $X^{\mathbb G}\geq 0$ or $E^{\widetilde{Q}}\left[\sup_{T\geq t\geq 0} \max(X^{\mathbb G}_{t},0)\vert\right]<+\infty$, then the $(\mathbb G, \widetilde{Q})$-Snell envelop of $X^{\mathbb G}$, denoted ${\cal S}(X^{\mathbb G};\mathbb G,\widetilde{Q})$, is given on $[0,T]$ by \begin{eqnarray}\label{Snell4(G,Qtilde)} {\cal S}(X^{\mathbb G};\mathbb G,\widetilde{Q})={{{\cal S}(X^{\mathbb F}{\widetilde{\cal E}}-k^{(op)}\bigcdot {\widetilde{\cal E}}; \mathbb F, P)}\over{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+(k^{(pr)}-k^{(op)})\bigcdot D^T+\left(k^{(op)}-{{k^{(op)}\bigcdot \widetilde{\cal E}}\over{\widetilde{\cal E}}}\right)\bigcdot (N^{\mathbb G})^T. \end{eqnarray} \end{theorem} \begin{proof} Let $\theta\in {\cal T}_{t\wedge\tau}^{\tau}(\mathbb G)$, then thanks to Lemma \ref{stoppingTimeLemma} there exists $\sigma\in {\cal T}_t^{\infty}(\mathbb F)$ such that $\theta=\sigma\wedge\tau$. Then notice that \begin{eqnarray}\label{equa100} X_{\theta}^{\mathbb G}&&=X_{\sigma\wedge\tau}^{\mathbb G}I_{\{\sigma<\tau\}}+ h^{(pr)}_{\tau}I_{\{\sigma\geq\tau\}}=X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_0^{\sigma} h^{(pr)}_s dD_s \nonumber\\ &&=X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+ ({{h^{(op)}}\over{\widetilde G}} \bigcdot D^{o,\mathbb F})_{\sigma\wedge\tau}+ (h^{(op)} \bigcdot N^{\mathbb G})_{\theta}+(h^{(pr)}- h^{(op)})\bigcdot D_{\theta} . 
\end{eqnarray} Furthermore, it is clear that both processes $ h^{(op)} \bigcdot N^{\mathbb G}$ and $ (h^{(pr)}-h^{(op)})\bigcdot D$ are $\mathbb G$-martingale, and hence by combining these remarks with Lemma \ref{G-projection}-(a) and taking conditional expectation with respect to ${\cal G}_t$ on both sides of the above equality, we derive \begin{eqnarray*} &&Y_t(\theta)\\ &&:= E\left[ X_{\theta}^{\mathbb G}\big|{\cal G}_t\right]= E\left[ X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_0^{\sigma\wedge\tau} {{h^{(op)}_s}\over{{\widetilde G}_t}} dD^{o,\mathbb F}_s \big|{\cal G}_t\right]+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=E\left[ X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_{t\wedge\tau}^{\sigma\wedge\tau} {{h^{(op)}_s}\over{{\widetilde G}_t}} dD^{o,\mathbb F}_s \bigg|{\cal G}_t\right]+({{h^{(op)}_s}\over{{\widetilde G}_t}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=E\left[ X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_{t\wedge\tau}^{\sigma\wedge\tau} {{h^{(op)}_s}\over{{\widetilde G}_t}} dD^{o,\mathbb F}_s \bigg|{\cal F}_t\right]{{I_{\{\tau>t\}}}\over{G_t}}+({{h^{(op)}_s}\over{{\widetilde G}_t}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=E\left[ G_{\sigma}X_{\sigma}^{\mathbb F} +\int_t^{\sigma}h^{(op)}_s dD^{o,\mathbb F}_s \bigg|{\cal F}_t\right]{{I_{\{\tau>t\}}}\over{G_t}}+({{h^{(op)}_s}\over{{\widetilde G}_t}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=:{{X^{\mathbb F}_t(\sigma)}\over{G_t}} I_{\{t<\tau\}}-{{(h^{(op)}\bigcdot D^{o,\mathbb F})_t}\over{G}} I_{\{t<\tau\}}+({{h^{(op)}_s}\over{{\widetilde G}_t}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \end{eqnarray*} Thus, by taking the essential supremum over all $\theta\in {\cal T}_{t\wedge\tau,\tau}(\mathbb G)$, we deduce that \begin{eqnarray}\label{mainequality1} {\cal S}(X^{\mathbb G};\mathbb G,P)&&={{{\cal S}(X^{\mathbb F}G+h^{(op)}\bigcdot D^{o,\mathbb F}; \mathbb F, P)}\over{G}}I_{\Lbrack0,\tau[\![}-{{(h^{(op)}\bigcdot D^{o,\mathbb F})}\over{G}} I_{\Lbrack0,\tau[\![}\nonumber\\ &&+({{h^{(op)}_s}\over{{\widetilde G}_t}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t . \end{eqnarray} Furthermore, put $V:=h^{(op)}\bigcdot D^{o,\mathbb F}$ and by remarking that $d(1/G^{\tau})=(G\widetilde{G})^{-1}I_{\Rbrack0,\tau]\!]} dD^{o,\mathbb F}-G_{-}^{-2}d{\cal T}(m)$ and using It\^o, we derive \begin{eqnarray*} d\left({{V^{\tau}}\over{G^{\tau}}}\right)=V_{-}d(1/G^{\tau})+G^{-1}h^{(op)}I_{\Rbrack0,\tau]\!]} dD^{o,\mathbb F}={{V}\over{G{\widetilde{G}}}}I_{\Rbrack0,\tau]\!]} dD^{o,\mathbb F}-{{V_{-}}\over{G_{-}^2}}d {\cal T}(m)+ {{h^{(op)}}\over{\widetilde{G}}}I_{\Rbrack0,\tau]\!]} dD^{o,\mathbb F}. \end{eqnarray*} Thus, (\ref{Snell4(G,P)}) follows immediately from combining this equality with (\ref{mainequality1}) and the easy fact that \begin{eqnarray}\label{X-Fsemimartinagle} XI_{\Lbrack0,\tau[\![}=X^{\tau}-X\bigcdot D,\quad \mbox{for any}\quad \mathbb F\mbox{-semimartingale}\ X.\end{eqnarray} This ends the proof of assertion (a).\\ 2) Here, we fix $T\in (0,+\infty)$ and let $\theta\in {\cal T}_{t\wedge\tau}^{T\wedge\tau}(\mathbb G)$ and $\sigma\in {\cal T}_t^T(\mathbb F)$ such that $\theta=\sigma\wedge\tau$. 
Then, similarly as in part 1), by taking conditional expectation under $\widetilde{Q}$ and using the fact that the two processes both processes $ h^{(op)} \bigcdot N^{\mathbb G}$ and $ (h^{(pr)}-h^{(op)})\bigcdot D$ are remain $\mathbb G$-martingale under $\widetilde{Q}$, we write \begin{eqnarray*} &&{\widetilde Y}_t(\theta):=E^{\widetilde Q} \left[ X_{\theta}^{\mathbb G}\big|{\cal G}_t\right]\\ &&=E^{\widetilde Q}\left[ X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_t^{\sigma\wedge\tau} {{h^{(op)}_s}\over{{\widetilde G}_s}} dD^{o,\mathbb F}_s \big|{\cal G}_t\right]+({{h^{(op)}}\over{{\widetilde G}}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=E\left[ {{\widetilde{Z}_{\sigma}}\over{\widetilde{Z}_t}}X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_t^{\sigma\wedge\tau} {{h^{(op)}_s\widetilde{Z}_s}\over{{\widetilde G}_s\widetilde{Z}_t}} dD^{o,\mathbb F}_s \big|{\cal G}_t\right]+({{h^{(op)}}\over{{\widetilde G}}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=E\left[ \widetilde{Z}_{\sigma}X_{\sigma}^{\mathbb F}I_{\{\sigma<\tau\}}+\int_t^{\sigma\wedge\tau} {{h^{(op)}_s}\over{{\widetilde G}_s}}dV^{\mathbb F}_s \big|{\cal F}_t\right]{{I_{\{\tau>t\}}}\over{\widetilde{Z}_tG_t}}+({{h^{(op)}}\over{{\widetilde G}}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=E\left[ \widetilde{\cal E}_{\sigma}X_{\sigma}^{\mathbb F}+\int_t^{\sigma} h^{(op)}_sdV^{\mathbb F}_s \big|{\cal F}_t\right]{{I_{\{\tau>t\}}}\over{\widetilde{\cal E}_t}}+({{h^{(op)}}\over{{\widetilde G}}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ &&=:{{X^{\mathbb F}_t(\sigma)}\over{\widetilde{\cal E}_t}}I_{\{\tau>t\}}-{{ (h^{(op)}\bigcdot V^{\mathbb F})_t}\over{\widetilde{\cal E}_t}}I_{\{\tau>t\}}+({{h^{(op)}}\over{{\widetilde G}}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t \\ \end{eqnarray*} By taking essential supremum over all $\theta$, we get \begin{eqnarray}\label{mainequality2} {\cal S}(X^{\mathbb G};\mathbb G,\widetilde{Q})&&={{{\cal S}(X^{\mathbb F}{\widetilde{\cal E}}+h^{(op)}\bigcdot D^{o,\mathbb F} ;\mathbb F, P)}\over{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}-{{ (h^{(op)}\bigcdot V^{\mathbb F})_t}\over{\widetilde{\cal E}_t}}I_{\{\tau>t\}}\nonumber\\ &&+({{h^{(op)}_s}\over{{\widetilde G}_t}}\bigcdot D^{o,\mathbb F})_{t\wedge\tau}+(h^{(op)} \bigcdot N^{\mathbb G})_t+(h^{(pr)}- h^{(op)})\bigcdot D_t . \end{eqnarray} Similar arguments, as in part 1) after equation (\ref{mainequality1}) applied to $V= h^{(op)}\bigcdot V^{\mathbb F}:= - h^{(op)}\bigcdot {\cal E}$, leads to $$-(h^{(op)}\bigcdot V^{\mathbb F}){\widetilde{\cal E}}^{-1}I_{\Lbrack0,\tau[\![}+ (h^{(op)}_s{\widetilde G}^{-1}\bigcdot D^{o,\mathbb F})^{\tau}=(h^{(op)}\bigcdot V^{\mathbb F}){\widetilde{\cal E}}^{-1}\bigcdot N^{\mathbb G}.$$ Thus, (\ref{Snell4(G,Qtilde)}) follows from combining this fact with (\ref{mainequality2}), and the proof of assertion (b) is completed. 
This ends the proof of the theorem.\end{proof} \section{The case of linear RBSDEs with bounded horizon}\label{LinearboundedSection} In this section, we start with a given triplet $\left(f, S, \xi\right)$, called the data-triplet, where $f$ is an $\mathbb F$-progressively measurable process representing the driver of the BSDE, $S$ is a RCLL $\mathbb F$-adapted process that models the barrier of the RBSDE, and $\xi$ is an ${\cal F}_{T\wedge\tau}$-measurable random variable, the terminal condition, such that $\xi\geq S_{\tau\wedge T}$. Therefore, by virtue of Proposition \ref{PropositionG2F}, there exists an $\mathbb{F}$-optional process $h$ such that \begin{equation}\label{sh} \xi=h_{T\wedge\tau},\quad P\mbox{-a.s.} \end{equation} Hence, the $\mathbb G$-data-triplet $\left(f, S, \xi\right)$ is equivalent to the $\mathbb F$-data-triplet $\left(f, S, h\right)$. In this section, our aim is to address the following RBSDE under $\mathbb G$, given by \begin{eqnarray}\label{RBSDEG} \begin{cases} dY=-f(t)d(t\wedge\tau)-d(K+M)+ZdW^{\tau},\quad Y_{\tau}=Y_{T}=\xi,\\ Y_{t}\geq S_{t};\quad 0\leq t< T\wedge\tau,\quad \displaystyle\int_{0}^{T\wedge\tau}(Y_{t-}-S_{t-})dK_{t}=0,\quad P\mbox{-a.s.} \end{cases}\end{eqnarray} This section is divided into two subsections. The first subsection establishes estimate inequalities for the solution of the RBSDE (when it exists), while the second subsection addresses the existence and uniqueness of the solution and the $\mathbb F$-RBSDE counterpart of (\ref{RBSDEG}). \subsection{Various norm-estimates for the solutions} This subsection elaborates estimates for the solution of the RBSDE (\ref{RBSDEG}). To this end, we start by elaborating some useful intermediate results, which we summarize in two lemmas. \begin{lemma}\label{Lemma4.11} The following assertions hold.\\ {\rm{(a)}} For any $T\in (0,+\infty)$, $m^{T\wedge\tau}$ is a BMO $(\widetilde Q, \mathbb G)$-martingale. Furthermore, we have \begin{eqnarray}\label{BMO(m)} E^{\widetilde Q}\left[[m,m]_{T\wedge\tau}-[m,m]_{t\wedge\tau-}\big|{\cal G}_t\right]\leq \Vert m\Vert_{BMO(P)},\quad P\mbox{-a.s.}. \end{eqnarray} {\rm{(b)}} For any $t\in (0,T]$, we have \begin{eqnarray}\label{domination1} E^{\widetilde{Q}}\left[D^{o,\mathbb{F}}_{T\wedge\tau} -D^{o,\mathbb{F}}_{(t\wedge\tau)-}\big|{\cal G}_{t}\right]\leq G_{t-}I_{\{t<\tau\}}\leq 1,\quad P\mbox{-a.s.}.\end{eqnarray} {\rm{(c)}} For any $a\in(0,+\infty)$, it always holds that \begin{eqnarray}\label{estimation1} \max(a,1) {\widetilde G}^{-1}\bigcdot D^{o,\mathbb F}-\widetilde {V}^{(a)}\quad \mbox{is nondecreasing and}\quad E\left[\int_{t\wedge\tau}^{T\wedge\tau}{\widetilde G}^{-1}_s dD^{o,\mathbb F}_s\big| {\cal G}_t\right]\leq 1 ,\quad P\mbox{-a.s.},\end{eqnarray} where $\widetilde {V}^{(a)}$ is the process defined by \begin{eqnarray}\label{Vepsilon} \widetilde {V}^{(a)}:={{a}\over{{\widetilde G}}}\bigcdot D^{o,\mathbb F}+\sum \left(-{{a\Delta D^{o,\mathbb F}}\over{\widetilde G}}+1-\left(1-{{\Delta D^{o,\mathbb F}}\over{\widetilde G}}\right)^a\right). \end{eqnarray} \end{lemma} The proof of this lemma is relegated to Appendix \ref{Appendix4Proofs}. The following lemma connects, under some assumptions, the solution to (\ref{RBSDEG}) --when it exists-- to a Snell envelop. \begin{lemma}\label{Solution2SnellEnvelop} Let $p\in [1,+\infty)$, and suppose that the triplet $(f, S, \xi)$ satisfies \begin{eqnarray}\label{MainAssumption} E^{\widetilde{Q}}\left[\vert\xi\vert^p+\left(\int_0^{T\wedge\tau}\vert f(s)\vert ds\right)^p+\sup_{0\leq u\leq\tau\wedge T}(S_u^+)^p\right]<+\infty.
\end{eqnarray} If $(Y^{\mathbb G}, Z^{\mathbb G}, M^{\mathbb G}, K^{\mathbb G})$ is a solution to (\ref{RBSDEG}), then \begin{eqnarray}\label{RBSDE2Snell} Y^{\mathbb G}_t=\rm{ess}\sup_{\theta\in \mathcal{J}_{t\wedge\tau}^{T\wedge\tau}(\mathbb{G})}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\theta}f(s)ds + S_{\theta}1_{\{\theta <T\wedge\tau\}}+\xi 1_{\{\theta=T\wedge \tau\}}\ \Big|\ \mathcal{G}_{t}\right],\quad 0\leq t\leq T. \end{eqnarray} \end{lemma} The proof of this lemma is relegated to Appendix \ref{Appendix4Proofs}, while herein we elaborate our first estimate. \begin{theorem}\label{EstimatesUnderQtilde} For any $p\in (1,+\infty)$ there exists a positive constant $C$ that depends on $p$ only such that if ($Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}}, M^{\mathbb{G}}$) is a solution to (\ref{RBSDEG}), then \begin{align}\label{estimate100} &E^{\widetilde{Q}}\left[\sup_{0\leq t\leq T\wedge\tau}\vert{Y}^{{\mathbb{G}}}_{t}\vert^p+\left(\int_{0}^{T\wedge\tau}\vert{Z}^{{\mathbb{G}}}_{s}\vert^{2}ds+[ M^{\mathbb{G}}, M^{\mathbb{G}}]_{T\wedge\tau}\right)^{p/2}+(K^{{\mathbb{G}}}_{T\wedge\tau})^p\right]\nonumber\\ &\leq{C} E^{\widetilde{Q}}\left[\vert\xi\vert^p+\left(\int_{0}^{T\wedge\tau}\vert{f}(s)\vert{d}s\right)^p+\sup_{0\leq t\leq T\wedge\tau}(S^{+}_{t})^p\right]. \end{align} \end{theorem} \begin{proof} This proof is divided into four parts, where we control and estimate, in a way or another, the four terms in the left-hand-side of (\ref{estimate100}).\\ {\bf Part 1.} Thanks to (\ref{RBSDE2Snell}), we conclude that $Y^{\mathbb G}$ satisfies \begin{equation*} Y_{t}^{\mathbb{G}}=\underset{\upsilon\in \mathcal{J}_{t\wedge\tau,T\wedge\tau}(\mathbb{G})}{\rm{ess}\sup}\hspace{2mm}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\upsilon\wedge\tau}f(s)ds + S_{\upsilon}1_{\{\upsilon\ <T\wedge\tau\}}+\xi 1_{\{\upsilon =T\wedge \tau\}}\ \Big|\ \mathcal{G}_{t}\right]. \end{equation*} Therefore, by taking $\upsilon=T\wedge\tau\in \mathcal{J}_{t\wedge\tau}^{T\wedge\tau}(\mathbb G)$ and using $\xi\geq -\xi^-$ and $\int_{t\wedge\tau}^{\upsilon\wedge\tau}f(s)ds \geq -\int_{0}^{T\wedge\tau}(f(s))^-ds $, we deduce that \begin{eqnarray*} E^{\widetilde{Q}}\left[-\int_0^{T\wedge\tau} (f(s))^- ds-\xi^-\ \Big|\ \mathcal{G}_{t}\right] \leq Y_{t}^{\mathbb{G}}\leq E^{\widetilde{Q}}\left[\int_0^{T\wedge\tau} (f(s))^+ ds + \sup_{0\leq u\leq\tau\wedge T} S_u^+ +\xi^+ \ \Big|\ \mathcal{G}_{t}\right]. \end{eqnarray*} This clearly leads to \begin{eqnarray}\label{Domination4YG} \vert Y_{t}^{\mathbb{G}}\vert\leq {\widetilde M}_t:=E^{\widetilde{Q}}\left[\int_0^{T\wedge\tau}\vert f(s)\vert ds + \sup_{0\leq u\leq\tau\wedge T} S_u^+ +\vert\xi\vert\ \Big|\ \mathcal{G}_{t}\right]. \end{eqnarray} Hence, by applying Doob's inequality to $\widetilde M$ under $(\widetilde Q, \mathbb G)$, denoting $C_{DB}$ the Doob's constant, and using $(\sum_{i=1}^n \vert x_i\vert)^p\leq n^{p-1}\sum_{i=1}^n\vert x_i\vert^p$, we derive \begin{align}\label{yesyes} E^{\widetilde{Q}}\left[\sup_{0\leq t\leq T\wedge\tau}\vert Y^{{\mathbb{G}}}_t\vert^p\right]\leq 3^{p-1}C_{DB} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}\vert f(s)\vert ds\right)^p +\underset{0\leq s \leq T\wedge\tau}{\sup}(S^{+}_{s})^p+\vert\xi \vert^p\right]. 
\end{align} {\bf Part 2.} By combining $ K_{T\wedge\tau}^{\mathbb{G}}=Y^{\mathbb{G}}_{0}-\xi +\int_{0}^{T\wedge\tau}f(t)dt- M^{\mathbb{G}}_{T\wedge\tau}+ \int_{0}^{T\wedge\tau}Z^{\mathbb{G}}_{s}dW_{t}^{\tau}$, (\ref{yesyes}), $(\sum_{i=1}^n x_i )^p\leq n^{p-1}\sum_{i=1}^n x_i^p$, and the BDG inequalities for the $(\widetilde{Q}, \mathbb G)$-martingale $-M^{\mathbb{G}}+ Z^{\mathbb{G}}\bigcdot {W}^{\tau}$, we get \begin{align}\label{Control4KG} &E^{\widetilde{Q}} \left[(K_{T\wedge\tau}^{\mathbb{G}})^p\right]\nonumber\\ &\leq 4^{p-1}E^{\widetilde{Q}}\left[\vert{Y}^{\mathbb{G}}_{0}\vert^p+\vert\xi\vert^p + \left(\int_{0}^{T\wedge\tau}\vert f(t)\vert dt\right)^p+ C_{BDG}\left([M^{\mathbb{G}}, M^{\mathbb{G}}]_{T\wedge \tau}+ \int_{0}^{T\wedge\tau}\vert{Z}^{\mathbb{G}}_{s}\vert^2 dt\right)^{p/2}\right]\nonumber\\ &\leq 4^{p-1}(3^{p-1}+1) E^{\widetilde{Q}}\left[\vert\xi \vert^p+\left(\int_{0}^{T\wedge\tau}\vert f(s)\vert ds\right)^p +\underset{0\leq s \leq T\wedge\tau}{\sup}(S^{+}_{s})^p\right]\nonumber\\ &+ 4^{p-1}C_{BDG}E^{\widetilde{Q}}\left[\left([M^{\mathbb{G}}, M^{\mathbb{G}}]_{T\wedge \tau}+ \int_{0}^{T\wedge\tau}\vert{Z}^{\mathbb{G}}_{s}\vert^2 dt\right)^{p/2}\right]. \end{align} {\bf Part 3.} A combination of It\^o and (\ref{RBSDEG}) implies that \begin{align} d(Y^{\mathbb G})^2&=2Y_{-}^{{\mathbb{G}}}dY^{\mathbb G}+d[ Y^{\mathbb G},Y^{\mathbb G}]\nonumber\\ &=-2Y_{s-}^{\mathbb G}f(s)d(s\wedge\tau)-2Y_{-}^{\mathbb G}dK^{\mathbb G}+2Y_{-}^{\mathbb G} Z^{\mathbb G}dW^{\tau}-2Y_{-}^{\mathbb G}dM^{\mathbb G}\nonumber\\ &+d[M^{\mathbb G},M^{\mathbb G}]+d[K^{\mathbb G},K^{\mathbb G}]+(Z^{\mathbb G})^2d(s\wedge\tau)+2d[{K}^{\mathbb G},{M}^{\mathbb G}].\label{Ito1}\end{align} As the three processes $[ M^{\mathbb{G}}, K^{\mathbb{G}}]$, $Y_{-}^{\mathbb{G}}Z^{\mathbb{G}}\bigcdot W^{\tau}$ and $Y_{-}^{\mathbb{G}}\bigcdot M^{\mathbb{G}}$ are $\widetilde Q$-local martingales, then there exists a sequence of $\mathbb G$-stopping times $(T_n)_n$ that increases to infinity such that when these processes are stopped at each $T_n$ they become true martingale. 
Thus, by using Young's inequality when it is convenient, we get \begin{align} 2\sum\vert\Delta{K}^{\mathbb G}\Delta{M}^{\mathbb G}\vert&\leq 2\sqrt{\sum(\Delta{K}^{\mathbb G})^2}\sqrt{\sum(\Delta{M}^{\mathbb G})^2}\leq \epsilon [M^{\mathbb G},M^{\mathbb G}]+\epsilon^{-1}[K^{\mathbb G}, K^{\mathbb G}]\nonumber\\ &\leq \epsilon [M^{\mathbb G},M^{\mathbb G}]+\epsilon^{-1}\sup_{0\leq s\leq T\wedge\tau}E^{\widetilde{Q}}[\sup_{0\leq t\leq T\wedge\tau}\vert Y^{{\mathbb{G}}}_t\vert\ \big|{\cal G}_s]K^{\mathbb G}\nonumber\\ &\leq \epsilon [M^{\mathbb G},M^{\mathbb G}]+\epsilon^{-3}\sup_{0\leq s\leq T\wedge\tau}E^{\widetilde{Q}}[\sup_{0\leq t\leq T\wedge\tau}\vert Y^{{\mathbb{G}}}_t\vert\ \big|{\cal G}_s]^2+\epsilon(K^{\mathbb G})^2,\end{align} and, for each $n$ (writing $\tau_n:=T_n$) and any $\epsilon\in(0,1)$, we derive \begin{align} &(1-\epsilon)[M^{\mathbb G},M^{\mathbb G}]_{\tau\wedge\tau_n}+\int_0^{\tau\wedge\tau_n}(Z^{\mathbb G}_s)^2ds\nonumber\\ &\leq (2+\epsilon^{-1})\sup_{0\leq s\leq\tau\wedge\tau_n}\vert Y^{\mathbb G}_s\vert^2+\left(\int_0^{\tau\wedge\tau_n}\vert f(s)\vert ds\right)^2\nonumber\\ &+\epsilon^{-3}\sup_{0\leq s\leq T\wedge\tau}E^{\widetilde{Q}}[\sup_{0\leq t\leq T\wedge\tau}\vert Y^{{\mathbb{G}}}_t\vert\ \big|{\cal G}_s]^2+2\epsilon({K}^{\mathbb G}_{\tau\wedge\tau_n})^2+2\sup_{0\leq s\leq\tau\wedge\tau_n}\vert (Y_{-}^{\mathbb G} \bigcdot (Z^{\mathbb G}\bigcdot {W}^{\tau}+{M}^{\mathbb G}))_s\vert.\label{Ito2} \end{align} On the other hand, by applying Lemma \ref{Lemma4.8FromChoulliThesis} to $Z^{\mathbb G}\bigcdot {W}^{\tau}+{M}^{\mathbb G}$ and $a=b=p$, we get \begin{align*} &E^{\widetilde{Q}}\left[\sup_{0\leq s\leq\tau\wedge\tau_n}\vert (Y_{-}^{\mathbb G} \bigcdot (Z^{\mathbb G}\bigcdot {W}^{\tau}+{M}^{\mathbb G}))_s\vert^{p/2}\right]\\ &\leq C_p E^{\widetilde{Q}}\left[\sup_{0\leq s\leq\tau\wedge\tau_n}\vert Y^{\mathbb G} \vert^p\right]^{1/2}E^{\widetilde{Q}}\left[\left(\int_0^{\tau\wedge\tau_n} (Z^{\mathbb G}_s)^2ds+[M^{\mathbb G},M^{\mathbb G}]_{\tau\wedge\tau_n}\right)^{p/2}\right]^{1/2}\\ &\leq {{C_p^2}\over{\epsilon}} E^{\widetilde{Q}}\left[\sup_{0\leq s\leq\tau\wedge\tau_n}\vert Y^{\mathbb G} \vert^p\right]+\epsilon E^{\widetilde{Q}}\left[\left(\int_0^{\tau\wedge\tau_n} (Z^{\mathbb G}_s)^2ds+[M^{\mathbb G},M^{\mathbb G}]_{\tau\wedge\tau_n}\right)^{p/2}\right].\end{align*} Thus, by combining this latter inequality (which follows from Young's inequality) with (\ref{Ito2}), we put \begin{eqnarray}\label{C3} C_3:= (2+\epsilon^{-1})^{p/2}+2^{p/2} C_p^2{\epsilon}^{-1}+\epsilon^{-3p/2}C_{DB},\end{eqnarray} and we derive \begin{align*}\label{Ito3} &((1-\epsilon)^{p/2}-(10)^{p/2}\epsilon)E^{\widetilde{Q}}\left[\left(\int_0^{\tau\wedge\tau_n} (Z^{\mathbb G}_s)^2ds+[M^{\mathbb G},M^{\mathbb G}]_{\tau\wedge\tau_n}\right)^{p/2}\right]\nonumber\\ &\leq 5^{p/2}E^{\widetilde{Q}}\left[C_3 \sup_{0\leq s\leq\tau\wedge\tau_n}\vert Y^{\mathbb G} \vert^p+\left(\int_0^{\tau\wedge\tau_n}\vert f(s)\vert ds\right)^p+(2\epsilon)^{p/2}({K}^{\mathbb G}_{\tau\wedge\tau_n})^p\right].
\end{align*} Therefore, by inserting (\ref{yesyes}) and (\ref{Control4KG}) in the above inequality, we obtain \begin{align*} &\left((1-\epsilon)^{p/2}-(10)^{p/2}\epsilon-(20\epsilon)^{p/2}{{C_{BDG}}\over{4}}\right)E^{\widetilde{Q}}\left[\left(\int_0^{\tau\wedge\tau_n} (Z^{\mathbb G}_s)^2ds+[M^{\mathbb G},M^{\mathbb G}]_{\tau\wedge\tau_n}\right)^{p/2}\right]\nonumber\\ &\leq C_4 E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}\vert f(s)\vert ds\right)^p +\underset{0\leq s \leq T\wedge\tau}{\sup}(S^{+}_{s})^p+\vert\xi \vert^p\right],\end{align*} where $$ C_4:=5^{p/2}( 3^{p-1}C_3C_{DB}+1)+(20\epsilon)^{p/2}{{3C_{DB}+1}\over{4}}. $$ Hence, it is enough to choose $\epsilon>0$ very small such that $C_{\epsilon}:=(1-\epsilon)^{p/2}-(10)^{p/2}\epsilon-(20\epsilon)^{p/2}{{C_{BDG}}\over{4}}>0$, and combine the above inequality with (\ref{yesyes}), the proof of (\ref{estimate100}) follows immediately with the constant equal to $C=C_4C_{\epsilon}^{-1}+3^{p-1}C_{DB} $ that depends on $p$ only. This ends the proof of the theorem. \end{proof} We end this subsection, by elaborating norm-estimate for the difference of solutions as follows. \begin{theorem}\label{EstimatesUnderQtilde1} Suppose that ($Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i}, M^{\mathbb{G},i}$) is a solution to the RBSDE (\ref{RBSDEG}) that correspond to $(f^{(i)}, S^{(i)}, \xi^{(i)})$, for each $i=1,2$. Then for any $p>1$, there exist positive $C_1$ and $C_2$ that depend on $p$ only such that \begin{eqnarray}\label{estimate1001} &&E^{\widetilde{Q}}\left[\sup_{0\leq t\leq{T}\wedge\tau}\vert\delta Y^{{\mathbb{G}}}_{t}\vert^p\right]+E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}\vert\delta Z^{{\mathbb{G}}}_{s}\vert^{2}ds+[\delta M^{\mathbb{G}}, \delta M^{\mathbb{G}}]_{T\wedge\tau}\right)^{p/2}\right]\nonumber\\ &&\leq{C_1}E^{\widetilde{Q}}\left[\vert\delta\xi\vert^p+\left(\int_{0}^{T\wedge\tau}\vert \delta f(s)\vert ds\right)^p +\sup_{0\leq s \leq T\wedge\tau}\vert\delta S_{s}\vert^p\right]\nonumber\\ &&+C_2\Vert\sup_{0\leq t\leq T\wedge\tau}\vert\delta S_{t}\vert\Vert_{L^p(\widetilde{Q})}^{p/2}\sqrt{\sum_{i=1}^2E^{\widetilde{Q}}\left[\vert\xi^{(i)}\vert^p+\left(\int_{0}^{T\wedge\tau}\vert{f}^{(i)}(s)\vert ds\right)^p +\sup_{0\leq s \leq T\wedge\tau}((S_s^{(i)})^+)^p\right]},\end{eqnarray} where $ \delta Y^{\mathbb{G}},\delta Z^{\mathbb{G}},\delta K^{\mathbb{G}},\delta M^{\mathbb{G}},\delta f,\delta \xi,$ and $\delta S$ are given by \begin{eqnarray*} &&\delta Y^{\mathbb{G}}:=Y^{\mathbb{G},1}-Y^{\mathbb{G},2},\quad \delta Z^{{\mathbb{G}}}:=Z^{\mathbb{G},1}-Z^{\mathbb{G},2},\quad \delta M^{{\mathbb{G}}}:=M^{\mathbb{G},1}-M^{\mathbb{G},2}, \quad \delta K^{{\mathbb{G}}}:=K^{\mathbb{G},1}-K^{\mathbb{G},2},\\ && \delta f:=f^{(1)}-f^{(2)},\quad \delta\xi:=\xi^{(1)}-\xi^{(2)},\quad \delta S:=S^{(1)}-S^{(2)}, \end{eqnarray*} \end{theorem} \begin{proof} This proof is achieved in two parts, where we control in norm the first and the second terms of the left-hand-side of (\ref{estimate1001}). 
\\ {\bf {Part 1.}} Thanks to (\ref{RBSDE2Snell}), we conclude that \begin{align*} Y^{\mathbb{G},1}-Y^{\mathbb{G},2}&\leq \underset{\upsilon\in \mathcal{J}_{t\wedge\tau,T\wedge\tau}(\mathbb{G})}{\rm{ess}\sup}\hspace{2mm}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\upsilon\wedge\tau} \delta f(s)ds + \delta S_{\upsilon}1_{\{\upsilon\ <T\wedge\tau\}}+ \delta \xi 1_{\{\upsilon =T\wedge \tau\}}\ \Big|\ \mathcal{G}_{t}\right]\\ &\leq E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{T\wedge\tau} \vert \delta f(s)\vert ds + \underset{t\wedge\tau\leq s\leq T\wedge\tau}{\sup}\vert\delta S_{s}\vert+ \vert \delta \xi \vert \Big|\ \mathcal{G}_{t}\right] \end{align*} and \begin{align*} Y^{\mathbb{G},2}-Y^{\mathbb{G},1}&\leq \underset{\upsilon\in \mathcal{J}_{t\wedge\tau,T\wedge\tau}(\mathbb{G})}{\rm{ess}\sup}\hspace{2mm}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\upsilon\wedge\tau} -\delta f(s)ds - \delta S_{\upsilon}1_{\{\upsilon\ <T\wedge\tau\}} - \delta \xi 1_{\{\upsilon =T\wedge \tau\}}\ \Big|\ \mathcal{G}_{t}\right]\\ &\leq E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{T\wedge\tau} \vert \delta f(s)\vert ds + \underset{t\wedge\tau\leq s\leq T\wedge\tau}{\sup}\vert\delta S_{s}\vert+ \vert \delta \xi \vert \Big|\ \mathcal{G}_{t}\right]. \end{align*} Therefore, these two inequalities yield \begin{equation*} \vert\delta Y_{t}^{\mathbb{G}}\vert\leq {\widetilde M}_t:=E^{\widetilde{Q}}\left[\int_{0}^{T\wedge\tau} \vert \delta f(s)\vert ds + \underset{0\leq s\leq T\wedge\tau}{\sup}\vert\delta S_{s}\vert+ \vert \delta \xi \vert \Big|\ \mathcal{G}_{t}\right]. \end{equation*} By applying Doob's inequality to $\widetilde M$ under $(\widetilde Q, \mathbb G)$ and using $(\sum_{i=1}^n \vert x_i\vert)^p\leq n^{p-1}\sum_{i=1}^n\vert x_i\vert^p$, we get \begin{align}\label{yesyes1} E^{\widetilde{Q}}\left[\sup_{0\leq t\leq T\wedge\tau}\vert \delta Y^{{\mathbb{G}}}_t\vert^p\right]\leq 3^{p-1}C_{DB} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}\vert\delta f(s)\vert ds\right)^p +\sup_{0\leq s \leq T\wedge\tau}\vert\delta S_{s}\vert^p+\vert\delta\xi \vert^p\right], \end{align} where $C_{DB}$ is the universal Doob's constant that depends on $p$ only.\\ {\bf Part 2.} Here we focus on $\displaystyle\int_{0}^{ \cdot}(\delta Z_{s}^{\mathbb{G}})^{2}ds+[\delta M^{\mathbb{G}},\delta M^{\mathbb{G}}]$. Thus, we apply It\^o's formula to $(\delta{Y}^{\mathbb G})^2$ and get \begin{align}\label{Ito000} &d [\delta{M}^{\mathbb G},\delta{M}^{\mathbb G}]+(\delta{Z}^{\mathbb G})^2d(\cdot\wedge\tau)+d [\delta{K}^{\mathbb G},\delta{K}^{\mathbb G}]\nonumber\\ &=d(\delta{Y}^{\mathbb G})^2+2\delta Y_{-}^{\mathbb{G}}\delta {f}d(\cdot\wedge\tau)+2\delta Y_{-}^{\mathbb{G}}d\delta{K}^{\mathbb{G}}-2\Delta\delta{K}^{\mathbb G}d\delta{M}^{\mathbb G}+2\delta{Y}_{-}^{\mathbb G}d(\delta{Z}^{\mathbb G}\bigcdot {W}^{\tau}-\delta{M}^{\mathbb G}).
\end{align} Thus, we use this equality and mimic the first step of Part 3 in the proof of Theorem \ref{EstimatesUnderQtilde}, and derive \begin{align} &{\cal Q}^{\mathbb G}:= [\delta{M}^{\mathbb G},\delta{M}^{\mathbb G}]+\int_0^{\cdot} (\delta{Z}^{\mathbb G}_s)^2d(s\wedge\tau)\nonumber\\ &\leq \sup_{0\leq{t}\leq\cdot}(\delta{Y}^{\mathbb G}_t)^2+2\int_{0}^{\cdot}\delta Y_{s-}^{\mathbb{G}}\delta f(s)d{s}+2\delta Y_{-}^{\mathbb{G}}\bigcdot \delta{K}^{\mathbb{G}}-2[\delta{K}^{\mathbb G},\delta{M}^{\mathbb G}]+\overbrace{2\sup_{0\leq{t}\leq\cdot}\vert (\delta{Y}_{-}^{\mathbb G}\bigcdot (\delta{Z}^{\mathbb G}\bigcdot {W}^{\tau}-\delta{M}^{\mathbb G}))_t\vert}^{=:\Gamma^{\mathbb G}}\nonumber\\ &\leq 2\sup_{0\leq{t}\leq\cdot}(\delta{Y}^{\mathbb G}_t)^2+\left(\int_{0}^{\cdot}\vert\delta f(s)\vert{d}{s}\right)^2+2\delta S_{-}\bigcdot \delta{K}^{\mathbb G}-2\Delta(\delta{K}^{\mathbb G})\bigcdot \delta{M}^{\mathbb G}+ \Gamma^{\mathbb G}.\label{Ito10} \end{align} As $M^{\mathbb{G}}$ and $\delta Z^{\mathbb{G}}\bigcdot W^{\tau}-M^{\mathbb{G}}$ are $(\widetilde Q,\mathbb G)$-local martingales and $^{p,\mathbb G}(\Delta(\delta Y_{-}^{\mathbb{G}}))=-\Delta(\delta{K}^{\mathbb G})$, we consider a sequence $(T_n)_n$ of $\mathbb G$-stopping times that increases to infinity such that, when stopped at each $T_n$, these processes become true martingales, and by applying Lemma \ref{Lemma4.8FromChoulliThesis} and using Young's inequality afterwards, we obtain \begin{align*} & E^{\widetilde{Q}}\left[(\Gamma^{\mathbb G}_{\tau\wedge{T_n}})^{p/2}\right]\leq C^p\epsilon^{-1}\Vert\sup_{0\leq s \leq T\wedge\tau} \vert\delta Y_{s}^{\mathbb{G}}\vert\Vert_{L^p(\widetilde{Q})}^p+\epsilon {E}^{\widetilde{Q}}\left[\left(\int_{0}^{ T_n\wedge\tau}(\delta Z_{s}^{\mathbb{G}})^{2}ds+ [\delta M^{\mathbb{G}},\delta M^{\mathbb{G}}]_{T_n\wedge\tau}\right)^{p/2}\right],\\ &\mbox{and}\\ &E^{\widetilde{Q}}\left[\sup_{0\leq{t}\leq\tau\wedge{T_n}}\vert-2[\delta{K}^{\mathbb G},\delta{M}^{\mathbb G}]_t\vert^{p/2}\right]\leq C^p\epsilon^{-1}\Vert\sup_{0\leq s \leq T\wedge\tau} \vert\delta Y_{s}^{\mathbb{G}}\vert\Vert_{L^p(\widetilde{Q})}^p+\epsilon {E}^{\widetilde{Q}}\left[[\delta M^{\mathbb{G}},\delta M^{\mathbb{G}}]_{T_n\wedge\tau}^{p/2}\right]. \end{align*} Therefore, by taking expectation on both sides of (\ref{Ito10}) and inserting the two inequalities above in the resulting inequality afterwards, we get \begin{align} &(1-2{\epsilon}5^{p/2})E^{\widetilde{Q}}\left[ ({\cal Q}^{\mathbb G}_{\tau\wedge{T_n}})^{p/2}\right]\leq 2(10C)^{p/2}\epsilon^{-1}\Vert\sup_{0\leq s \leq T\wedge\tau} \vert\delta Y_{s}^{\mathbb{G}}\vert\Vert_{L^p(\widetilde{Q})}^p\nonumber\\ &+5^{p/2}\Vert\int_{0}^{\tau\wedge{T_n}}\vert\delta f(s)\vert{d}{s}\Vert_{L^p(\widetilde{Q})}^p+5^{p/2}\sqrt{\Vert\sup_{0\leq{t}\leq{\tau\wedge{T_n}}}\vert\delta{S}_t\vert\Vert_{L^p(\widetilde{Q})}^p\Vert\mbox{Var}_{\tau\wedge{T_n}}(\delta{K}^{\mathbb G})\Vert_{L^p(\widetilde{Q})}^p}.\label{Control4QG} \end{align} Furthermore, remark that $\mbox{Var}_{\tau\wedge{T_n}}(\delta{K}^{\mathbb G})\leq {K}^{\mathbb G,1}_{\tau\wedge{T_n}}+{K}^{\mathbb G,2}_{\tau\wedge{T_n}}$.
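For the reader's convenience, we record the elementary bound behind this last domination; it only uses the fact that $K^{\mathbb G,1}$ and $K^{\mathbb G,2}$ are nondecreasing and null at zero, so that, for any $t\geq 0$,
\begin{eqnarray*}
\mbox{Var}_{t}(\delta{K}^{\mathbb G})=\mbox{Var}_{t}(K^{\mathbb G,1}-K^{\mathbb G,2})\leq \mbox{Var}_{t}(K^{\mathbb G,1})+\mbox{Var}_{t}(K^{\mathbb G,2})={K}^{\mathbb G,1}_{t}+{K}^{\mathbb G,2}_{t}.
\end{eqnarray*}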
Thus, by inserting this latter inequality in (\ref{Control4QG}) and applying Theorem \ref{EstimatesUnderQtilde} to each ${K}^{\mathbb G, i}$, $i=1,2$, and using Fatou's lemma afterwards, we get \begin{align*} &(1-2{\epsilon}5^{p/2})E^{\widetilde{Q}}\left[ ({\cal Q}^{\mathbb G}_{T\wedge\tau})^{p/2}\right]\leq 2(10C)^{p/2}\epsilon^{-1}\Vert\delta Y^{\mathbb{G}}\Vert_{{\mathbb{D}}_{ T\wedge\tau}(\widetilde{Q},p)}^p+5^{p/2}\Vert\int_{0}^{\tau\wedge{T}}\vert\delta f(s)\vert{d}{s}\Vert_{L^p(\widetilde{Q})}^p\nonumber\\ &+5^{p/2}\sqrt{C} \Vert\delta S\Vert_{{\mathbb{D}}_{ T\wedge\tau}(\widetilde{Q},p)}^{p/2}\sqrt{\sum_{i=1}^2\left\{\Vert\xi^{(i)}\Vert_{L^p(\widetilde{Q})}^p+\Vert\int_{0}^{T\wedge\tau}\vert{f}^{(i)}(s)\vert ds\Vert_{L^p(\widetilde{Q})}^p +\Vert(S^{(i)})^+\Vert_{{\mathbb{D}}_{ T\wedge\tau}(\widetilde{Q},p)}^p\right\}}. \end{align*} Therefore, by combining this inequality with (\ref{yesyes1}) and putting \begin{align*} \epsilon=5^{-p/2}/4,\quad C_1=3^{p-1}4C_{DB}(50C)^{p/2}+5^{p/2}4+3^{p-1}C_{DB}\quad\mbox{and}\quad C_2=2\sqrt{C}5^{p/2}, \end{align*} the theorem follows immediately. This ends the proof of the theorem. \end{proof} \subsection{Existence for the $\mathbb G$-RBSDE and its relationship to $\mathbb F$-RBSDE}\label{Subsection4.1} In this subsection, we prove the existence and uniqueness of the solution to the RBSDE (\ref{RBSDEG}), establish an explicit connection between this RBSDE and its $\mathbb F$-RBSDE counterpart, and highlight the explicit relationship between their solutions. \begin{theorem}\label{abcde}Let $p\in (1,\infty)$, suppose that (\ref{MainAssumption}) holds, and consider $(f^{\mathbb{F}},S^{\mathbb{F}})$ and $(\xi^{\mathbb{F}},V^{\mathbb F})$ given by \begin{eqnarray} f^{\mathbb{F}}:={\widetilde{\cal E}}f,\quad S^{\mathbb{F}}:= {\widetilde{\cal E}}S,\quad \xi^{\mathbb{F}}:={\widetilde{\cal E}_T}h_{T},\quad V^{\mathbb F}:=1-{\widetilde{\cal E}},\quad\mbox{where}\quad {\widetilde{\cal E}}:={\cal E}\left(-{\widetilde G}^{-1}\bigcdot D^{o,\mathbb{F}}\right). \label{ProcessVFandXiF} \end{eqnarray} Then the following assertions hold.\\ {\rm{(a)}} The following RBSDE under $\mathbb F$, associated to the triplet $ \left(f^{\mathbb{F}},S^{\mathbb{F}}, \xi^{\mathbb F}\right)$, \begin{eqnarray}\label{RBSDEF} \begin{cases} Y_{t}= \displaystyle\xi^{\mathbb{F}}+\int_{t}^{T}f^{\mathbb{F}}(s)ds+\int_{t}^{T}h_{s}dV^{\mathbb{F}}_{s}+K_{T}-K_{t}-\int_{t}^{T}Z_{s}dW_{s},\\ Y_{t}\geq S_{t}^{\mathbb{F}}1_{\{t\ <T\}}+\xi^{\mathbb{F}}1_{\{t\ =T\}},\quad \displaystyle\int_{0}^{T}(Y_{t-}-S_{t-}^{\mathbb{F}})dK_{t}=0 ,\quad P\mbox{-a.s.,} \end{cases} \end{eqnarray} has a unique $L^p(P,\mathbb F)$-solution $(Y^{\mathbb F}, Z^{\mathbb F}, K^{\mathbb F})$, and \begin{eqnarray}\label{RBSDE2SnellF} Y^{\mathbb F}_t=\rm{ess}\sup_{\sigma\in \mathcal{J}_{t}^{T}(\mathbb{F})}E\left[\int_{t}^{\sigma}f^{\mathbb F}(s)ds+\int_{t}^{\sigma}h_s dV^{\mathbb F}_s + S_{\sigma}^{\mathbb F}1_{\{\sigma <T\}}+\xi^{\mathbb F} I_{\{\sigma =T\}}\ \Big|\ \mathcal{F}_{t}\right],\quad 0\leq t\leq T.
\end{eqnarray} {\rm{(b)}} The RBSDE defined in (\ref{RBSDEG}) has a unique $L^p(\widetilde{Q},\mathbb G)$-solution $(Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}},M^{\mathbb{G}})$ given by \begin{eqnarray} Y^{\mathbb{G}}= \frac{Y^{\mathbb{F}}}{\widetilde{\cal E}}I_{\Lbrack0,\tau[\![}+\xi{I_{[\![\tau,+\infty[\![}},\ Z^{\mathbb{G}}=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}_{-}} I_{\Rbrack0,\tau]\!]},\ K^{\mathbb{G}}=\frac{1}{{\widetilde{\cal E}}_{-}}\bigcdot (K ^{\mathbb{F}})^{\tau}\ \mbox{and}\ M^{\mathbb{G}}=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}.\label{secondrelation} \end{eqnarray} \end{theorem} \begin{proof} Assertion (a) is the linear case of a general RBSDE under $\mathbb F$ given in Subsection \ref{GeneralRBSDEfromG2F}, see (\ref{RBSDEFGENERAL}). Thus, the proof of the existence and uniqueness of the $L^p(\mathbb F, P)$-solution will be omitted here, and we refer the reader to Subsection \ref{GeneralRBSDEfromG2F}. Furthermore, the proof of (\ref{RBSDE2SnellF}) mimics exactly the proof of (\ref{RBSDE2Snell}). Thus, the remaining part of this proof will focus on proving assertion (b). To this end, on the one hand, we remark that in virtue of Theorem \ref{EstimatesUnderQtilde} and (\ref{MainAssumption}), we conclude that a solution to (\ref{RBSDEG}), when it exists, is in fact an $L^p(\widetilde{Q}, \mathbb G)$-solution. On the other hand, thanks to Theorem \ref{EstimatesUnderQtilde1} and the assumption (\ref{MainAssumption}), we deduce that there is at most one $L^p(\widetilde{Q},\mathbb G)$-solution. Thus, the rest of this proof focuses on proving the existence of the solution to (\ref{RBSDEG}) that is given by (\ref{secondrelation}). To this end, we put \begin{eqnarray}\label{Yoverline} \overline{Y}:= \frac{Y^{\mathbb{F}}}{\widetilde{\cal E}}I_{\Lbrack0,\tau[\![}+\xi{I_{[\![\tau,+\infty[\![}},\quad \overline{Z}:=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}_{-}} I_{\Rbrack0,\tau]\!]},\quad \overline{M}:=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}},\quad \overline{K}:=\frac{1}{{\widetilde{\cal E}}_{-}}\bigcdot (K ^{\mathbb{F}})^{\tau}, \end{eqnarray} and prove that $(\overline{Y}, \overline{Z}, \overline{M},\overline{K})$ is a solution to (\ref{RBSDEG}). Hence, we put \begin{eqnarray*}\label{Gamma} \Gamma:=\frac{Y^{\mathbb F}}{\widetilde{\cal E}}= Y^{\mathbb F}{\cal E}(G^{-1}\bigcdot D^{o,\mathbb F}),\end{eqnarray*} and remark that, in virtue of the first equality in (\ref{secondrelation}), we have \begin{eqnarray}\label{YGGamma} \overline{Y}=\Gamma^{\tau} +(h-\Gamma)\bigcdot D. \end{eqnarray} Thanks to It\^o's formula, the facts that ${\widetilde{\cal E}}^{-1}={\cal E}(G^{-1}\bigcdot D^{o,\mathbb F})$ and ${\widetilde{\cal E}}={\widetilde{\cal E}}_{-}G/{\widetilde{G}}$, (\ref{RBSDEF}) and (\ref{ProcessVFandXiF}), we derive \begin{align} d\Gamma&={{\Gamma}\over{\widetilde G}}dD^{o,\mathbb F}+{1\over{\widetilde{\cal E}_{-}}}dY^{\mathbb F}={{\Gamma- h}\over{\widetilde G}} dD^{o,\mathbb F}-f(t)dt-{1\over{\widetilde{\cal E}_{-}}}dK^{\mathbb F}+{{Z^{\mathbb F}}\over{\widetilde{\cal E}_{-}}}dW.\label{GammaYF} \end{align} Thus, by inserting this latter equation in (\ref{YGGamma}) and arranging terms we get \begin{align}\label{SDE4YG} d\overline{Y}=-f(t)d(t\wedge\tau)-{1\over{\widetilde{\cal E}_{-}}}d(K^{\mathbb F})^{\tau}+(h- \Gamma)dN^{\mathbb{G}}+{{Z^{\mathbb F}}\over{\widetilde{\cal E}_{-}}}dW^{\tau}. \end{align} This proves that the processes defined in (\ref{secondrelation}) satisfy the first equation in (\ref{RBSDEG}).
To prove the second condition in (\ref{RBSDEG}), it is enough to remark that we have \begin{eqnarray*} Y^{\mathbb{F}}_{t}\geq S_{t}^{\mathbb{F}}I_{\{t\ <T\}}+\xi^{\mathbb{F}}I_{\{t\ =T\}},\end{eqnarray*} which implies that for any $t\in[0,T)$, $Y_{t}^{\mathbb{F}}({\widetilde{\cal E}}_{t})^{-1}I_{\{t\ <\tau\}}\geq S_{t}I_{\{t\ <\tau\}}.$ This is obviously equivalent to the second condition of (\ref{RBSDEG}). To prove the Skorokhod condition (the last condition in (\ref{RBSDEG})), we use the Skorokhod condition for the triplet $(Y^{\mathbb F}, Z^{\mathbb F}, K^{\mathbb F})$ (as it is the solution to the RBSDE (\ref{RBSDEF}) with the data-triplet $(f^{\mathbb F}, S^{\mathbb F}, \xi^{\mathbb F})$) given by \begin{eqnarray}\label{SkorokhodF} \int_0^T(Y^{\mathbb F}_{t-}-S^{\mathbb F}_{t-})dK^{\mathbb F}_t=0,\quad P\mbox{-a.s..}\end{eqnarray} As $\overline{Y}_{-}-S_{-}\geq 0$ on $\Rbrack0,\tau]\!]$ and $\overline{K}$ is an increasing process, we get \begin{eqnarray*} \int_{0}^{T}(\overline{Y}_{t-}-S_{t-})d\overline{K}_{t}=\int_{0}^{T\wedge\tau}(Y^{\mathbb F}_{t-}-S^{\mathbb F}_{t-}){\widetilde{\cal E}_{t-}}^{-2}dK^{\mathbb F}_t\leq \int_{0}^{T}(Y^{\mathbb F}_{t-}-S^{\mathbb F}_{t-}){\widetilde{\cal E}_{t-}}^{-2}dK^{\mathbb F}_t=0,\quad P\mbox{-a.s..} \end{eqnarray*} It is clear that the last equality is equivalent to (\ref{SkorokhodF}) due to the fact that $K^{\mathbb F}$ is nondecreasing and $Y^{\mathbb F}_{-}-S^{\mathbb F}_{-}\geq 0$. This ends the proof of the theorem.\end{proof} \begin{remark} One can prove, under weaker integrability conditions than those of (\ref{MainAssumption}), that any solution to the RBSDE (\ref{RBSDEG}), denoted by $(Y,Z,K, M)$, coincides with $(Y^{\mathbb G},Z^{\mathbb G},K^{\mathbb G}, M^{\mathbb G})$ defined in (\ref{secondrelation}). To this end, thanks to the Doob-Meyer decomposition under $(\widetilde{Q},\mathbb G)$, we remark that $(Y,Z,K, M)=(Y^{\mathbb G},Z^{\mathbb G},K^{\mathbb G}, M^{\mathbb G})$ is equivalent to $Y=Y^{\mathbb G}$. To prove this equality, we notice that due to (\ref{RBSDE2Snell}), we have \begin{eqnarray*} Y+\int_0^{\tau\wedge{T}\wedge\cdot}f(s)ds={\cal S}(X^{\mathbb G}; \mathbb G, \widetilde{Q})\quad\mbox{with}\quad X^{\mathbb G}:=\int_0^{\tau\wedge{T}\wedge\cdot}f(s)ds+SI_{\Lbrack0,\tau\wedge{T}[\![}+h_{\tau\wedge{T}}I_{[\![\tau\wedge{T},+\infty[\![}. \end{eqnarray*} Therefore, to apply Theorem \ref{SnellEvelopG2F}-(b), we need to find the unique pair $(X^{\mathbb F}, k^{(pr)})$ associated to $X^{\mathbb G}$. To this end, we remark that \begin{eqnarray*} SI_{\Lbrack0,\tau\wedge{T}[\![}=SI_{\Lbrack0,\tau[\![}I_{\Lbrack0,{T}[\![}\quad\mbox{and}\quad h_{\tau\wedge{T}}I_{[\![\tau\wedge{T},+\infty[\![}=h_{\tau\wedge{T}}I_{\Lbrack0,\tau[\![}+h_{T}I_{\Lbrack0,{T}[\![}I_{[\![{T},+\infty[\![},\end{eqnarray*} and derive \begin{eqnarray*} X^{\mathbb F}=\int_0^{T\wedge\cdot}f(s)ds+SI_{\Lbrack0,{T}[\![}+h_{T}I_{[\![{T},+\infty[\![},\quad k^{(pr)}=k^{(op)}=\int_0^{T\wedge\cdot}f(s)ds+hI_{\Lbrack0,T[\![}+h_{T}I_{[\![{T},+\infty[\![}. \end{eqnarray*} Furthermore, we have \begin{eqnarray*} &&{\widetilde{\cal E}}X^{\mathbb F}-k^{(op)}\bigcdot {\widetilde{\cal E}}=\int_0^{T\wedge\cdot}f^{\mathbb F}(s)ds+S^{\mathbb F} I_{\Lbrack0,T[\![}+(h\bigcdot V^{\mathbb F})^T+\xi^{\mathbb F}I_{[\![{T},+\infty[\![},\\ && k^{(op)}{\widetilde{\cal E}}-k^{(op)}\bigcdot {\widetilde{\cal E}}=\int_0^{T\wedge\cdot}f^{\mathbb F}(s)ds+(h\bigcdot V^{\mathbb F})^T+\widetilde{\cal E}hI_{\Lbrack0,T[\![} +\xi^{\mathbb F}I_{[\![{T},+\infty[\![}.
\end{eqnarray*} Remark also that \begin{eqnarray*} Y^{\mathbb F}+L^{\mathbb F}={\cal S}\left(L^{\mathbb F}+\xi^{\mathbb F}I_{[\![{T},+\infty[\![}+S^{\mathbb F} I_{\Lbrack0,T[\![};\mathbb F, P\right),\quad L^{\mathbb F}:=\int_0^{T\wedge\cdot}f(s)ds+\int_0^{T\wedge\cdot}h_sdV^{\mathbb F}_s. \end{eqnarray*} Thus, by directly applying Theorem \ref{SnellEvelopG2F}-(b) to $Y$, on $\Lbrack0,T]\!]$, we have \begin{align*} Y+\int_0^{\tau\wedge{T}\wedge\cdot}f(s)ds&={\cal S}(X^{\mathbb G}; \mathbb G, \widetilde{Q})\\ &={{{\cal S}\left(X^{\mathbb F}{\widetilde{\cal E}}-k^{(op)}\bigcdot \widetilde{\cal E};\mathbb F, P\right)}\over{ \widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+{{L^{\mathbb F}+\widetilde{\cal E}hI_{\Lbrack0,T[\![} +\xi^{\mathbb F}I_{[\![{T},+\infty[\![}}\over{\widetilde{\cal E}}}\bigcdot N^{\mathbb G}\\ &={{Y^{\mathbb F}+L^{\mathbb F}}\over{ \widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+{{L^{\mathbb F}}\over{ \widetilde{\cal E}}}\bigcdot N^{\mathbb G}+\left(hI_{\Lbrack0,T[\![} +h_T{I}_{[\![{T},+\infty[\![}\right)\bigcdot N^{\mathbb G}\\ &={{Y^{\mathbb F}}\over{ \widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+{1\over{ \widetilde{\cal E}_{-}}}\bigcdot (L^{\mathbb F})^{\tau}+h_{T\wedge\cdot}\bigcdot D- {{h}\over{\widetilde{G}}}I_{\Rbrack0,\tau\wedge{T}]\!]}\bigcdot D^{o,\mathbb F}\\ &={{Y^{\mathbb F}}\over{ \widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\int_0^{\tau\wedge{T}\wedge\cdot}f(s)ds+\xi{I}_{[\![\tau,+\infty[\![}=Y^{\mathbb G}+\int_0^{\tau\wedge{T}\wedge\cdot}f(s)ds. \end{align*} The fourth equality follows from the following lemma, which we prove in Appendix \ref{Appendix4Proofs}.\end{remark} \begin{lemma}\label{L/EpsilonTilde} For any $\mathbb F$-semimartingale $L$, the following holds. \begin{eqnarray*} {{L}\over{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+{{L}\over{\widetilde{\cal E}}}\bigcdot N^{\mathbb G}={1\over{\widetilde{\cal E}_{-}}}\bigcdot L^{\tau}. \end{eqnarray*} \end{lemma} \section{The case of linear RBSDE with unbounded horizon} \label{LinearUnboundedSection} This section focuses on the following RBSDE \begin{eqnarray}\label{RBSDEGinfinite} \begin{cases} dY=-f(t)d(t\wedge\tau)-d(K+M)+ZdW^{\tau},\quad Y_{\tau}=\xi=h_{\tau},\\ \\ Y_{t}\geq S_{t},\quad 0\leq t< \tau,\quad \displaystyle\int_{0}^{\tau}(Y_{t-}-S_{t-})dK_{t}=0,\quad P\mbox{-a.s..} \end{cases} \end{eqnarray} It is important to mention that the probability $\widetilde{Q}$ depends heavily on the finite planning horizon $T$, and the process ${\widetilde Z}^{\tau}$ defined in (\ref{Qtilde}) might not be a uniformly integrable martingale. This raises serious challenges in several directions. \subsection{Existence, uniqueness and estimates} As mentioned above, the probability $\widetilde Q$ depends heavily on $T$, see \cite{Choulli1} for details, and in general ${\widetilde Z}^{\tau}$ might not be a uniformly integrable martingale. Thus, letting $T$ go to infinity triggers serious challenges on both the technical and the conceptual sides. In fact, both the condition (\ref{MainAssumption}) and the RBSDE (\ref{RBSDEF}) might not make sense when we take $T$ to infinity, due to the fact that the limit of $h_T$, as $T$ goes to infinity, might not even exist. Our approach to these challenges will be in two steps. The first step relies on the following lemma and the two theorems that follow it, and aims at getting rid of $\widetilde Q$ in the left-hand sides of the estimates of Theorems \ref{EstimatesUnderQtilde} and \ref{EstimatesUnderQtilde1}.
\begin{lemma}\label{technicallemma1} Let $T\in (0,+\infty)$, $\widetilde{Q}$ be the probability given in (\ref{Qtilde}), and $\widetilde{\cal E}$ be the process defined in (\ref{ProcessVFandXiF}). Then the following assertions hold.\\ {\rm{(a)}} For any $p\in(1,+\infty)$ and any RCLL $\mathbb G$-semimartingale $Y$, we have \begin{eqnarray}\label{Equality4YG} E\left[\sup_{0\leq s\leq{T\wedge\tau}}{\widetilde{\cal E}}_s\vert{Y_s}\vert^p\right]\leq {G_0^{-1} }E^{\widetilde{Q}}\left[\sup_{0\leq s\leq{T\wedge\tau}}\vert{Y_s}\vert^p\right]. \end{eqnarray} {\rm{(b)}} For any $a\in (0,+\infty)$ and RCLL, nondecreasing and $\mathbb G$-optional process $K$ with $K_0=0$, we have \begin{eqnarray}\label{Equality4KG} E\left[\left(\int_0^{T\wedge\tau} ({\widetilde{\cal E}}_{s-})^{a}dK_s\right)^{1/a}\right]\leq \underbrace{3^{1/a}(5+(\max(a, a^{-1}))^{1/a})}_{=:\kappa(a)} G_0^{-1} E^{\widetilde{Q}}\left[K_{T\wedge\tau}^{1/a}+\sum_{0< s\leq T\wedge\tau} \widetilde{G}_s(\Delta K_s)^{1/a}\right].\end{eqnarray} {\rm{(c)}} For any $p>1$ and any nonnegative and $\mathbb G$-optional process $H$, we have \begin{eqnarray}\label{Equality4MG} E\left[({\widetilde{\cal E}}_{-}^{2/p}H\bigcdot [N^{\mathbb G},N^{\mathbb G}])_{T\wedge\tau} ^{p/2}\right]\leq \kappa(2/p)G_0^{-1} E^{\widetilde{Q}}\left[(H\bigcdot [N^{\mathbb G},N^{\mathbb G}]_{T\wedge\tau})^{p/2}+ (H^{p/2}\widetilde{G}\bigcdot \mbox{Var}(N^{\mathbb G}))_{T\wedge\tau}\right]. \end{eqnarray} {\rm{(d)}} For any $p>1$ and any nonnegative and $\mathbb F$-optional process $H$, we have \begin{eqnarray}\label{Equality4MGOptionalF} E\left[({\widetilde{\cal E}}_{-}^{2/p}H\bigcdot [N^{\mathbb G},N^{\mathbb G}])_{T\wedge\tau} ^{p/2}\right]\leq \kappa(2/p)G_0^{-1} E^{\widetilde{Q}}\left[(H\bigcdot [N^{\mathbb G},N^{\mathbb G}]_{T\wedge\tau})^{p/2}+ 2(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]. \end{eqnarray} \end{lemma} To keep the exposition simple, we relegate the proof of this lemma to Appendix \ref{Appendix4Proofs}. In the following, we elaborate estimates for the solution to (\ref{RBSDEG}) under the probability $P$ instead.
\begin{theorem}\label{estimates} For any $p>1$, there exists a positive constant $C=C(p)$ that depends on $p$ only such that the unique solution to the RBSDE (\ref{RBSDEG}), denoted by ($Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}}, M^{\mathbb{G}}$), satisfies \begin{eqnarray}\label{Estimate4P} &&E\left[\sup_{0\leq t\leq T}\widetilde{\cal E}_t\vert{Y}^{{\mathbb{G}}}_{t}\vert^p+\left(\int_{0}^{T}(\widetilde{\cal E}_{s-})^{2/p}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}+\left((\widetilde{\cal E}_{-})^{1/p}\bigcdot K^{\mathbb{G}}_T\right)^p+((\widetilde{\cal E}_{-})^{2/p}\bigcdot [ M^{\mathbb{G}}, M^{\mathbb{G}}]_T)^{p/2}\right]\nonumber\\ &&\leq C E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}\vert f(s)\vert ds\right)^p +\underset{0\leq s \leq T\wedge\tau}{\sup}(S^{+}_{s})^p+\vert\xi \vert^p\right], \end{eqnarray} where $\widetilde{\cal E}$ is the process given by \begin{eqnarray}\label{Xitilde111} \widetilde{\cal E}:={\cal E}(-{\widetilde{G}}^{-1}\bigcdot D^{o,\mathbb{F}}).\end{eqnarray} \end{theorem} \begin{proof} By applying Lemma \ref{technicallemma1}-(b) to the process $K_t=\int_0^{t\wedge\tau} (Z^{\mathbb G}_{s})^{2}ds$ and $a=2/p$, we get \begin{eqnarray}\label{Estimate4ZGepsilon} E\left[\left(\int_0^{T\wedge\tau} ({\widetilde{\cal E}}_{s-})^{2/p}(Z^{\mathbb G}_{s})^{2}ds\right)^{p/2}\right]\leq \kappa(a)G_0^{-1} E^{\widetilde{Q}}\left[\left(\int_0^{T\wedge\tau} (Z^{\mathbb G}_{s})^{2}ds\right)^{p/2}\right].\end{eqnarray} By applying Lemma \ref{technicallemma1}-(a) to the process $Y=Y^{\mathbb G}$, we get \begin{eqnarray}\label{Estimate4YGepsilon} E\left[\sup_{0\leq s\leq{T\wedge\tau}}{\widetilde{\cal E}}_s\vert{Y_s^{\mathbb G}}\vert^p\right]\leq {G_0^{-1} }E^{\widetilde{Q}}\left[\sup_{0\leq s\leq{T\wedge\tau}}\vert{Y_s^{\mathbb G}}\vert^p\right]. \end{eqnarray} By applying Lemma \ref{technicallemma1}-(b) to the process $K=K^{\mathbb G}$ and $a=1/p$, and using the fact that we always have $\sum_{0< s\leq T\wedge\tau} \widetilde{G}_s(\Delta K_s^{\mathbb G})^p\leq (K_{T\wedge\tau}^{\mathbb G})^p$, we get \begin{eqnarray}\label{Estimate4KGepsilon} E\left[\left(\int_0^{T\wedge\tau} ({\widetilde{\cal E}}_{s-})^{1/p}dK_s^{\mathbb G}\right)^p\right]\leq{{\kappa(a)}\over{ G_0}} E^{\widetilde{Q}}\left[(K_{T\wedge\tau}^{\mathbb G})^p+\sum_{0\leq s\leq T\wedge\tau} \widetilde{G}_s(\Delta K_s^{\mathbb G})^p\right]\leq {{2\kappa(a)}\over{ G_0}} E^{\widetilde{Q}}\left[(K_{T\wedge\tau}^{\mathbb G})^p\right].\end{eqnarray} Thanks to Theorem \ref{abcde}, we write $[M^{\mathbb G}, M^{\mathbb G}]=H\bigcdot [N^{\mathbb G}, N^{\mathbb G}]$, where $H:=(h-Y^{\mathbb F}{\widetilde {\cal E}}^{-1})^2$ is a nonnegative $\mathbb F$-optional process. Thus, by a direct application of Lemma \ref{technicallemma1}-(d), we get \begin{align}\label{Equality4MG00} E\left[({\widetilde{\cal E}}_{-}^{2/p}H\bigcdot [N^{\mathbb G},N^{\mathbb G}])_{T\wedge\tau} ^{p/2}\right]\leq C(p)G_0^{-1} E^{\widetilde{Q}}\left[(H\bigcdot [N^{\mathbb G},N^{\mathbb G}]_{T\wedge\tau})^{p/2}+ 2(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]. \end{align} Thus, we need to control the second term in the right-hand-side of the above inequality. To this end, we remark that $H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F}\leq 2^{p-1}\left(\left(\vert{h}\vert^p+\vert{Y}^{\mathbb G}\vert^p\right)I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F}\right)$.
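For the reader's convenience, this last domination is simply the convexity inequality $\vert a+b\vert^p\leq 2^{p-1}(\vert a\vert^p+\vert b\vert^p)$ combined with the identity $Y^{\mathbb F}{\widetilde{\cal E}}^{-1}=Y^{\mathbb G}$ on $\Lbrack0,\tau[\![$, which follows from (\ref{secondrelation}):
\begin{eqnarray*}
H^{p/2}=\big\vert h-Y^{\mathbb F}{\widetilde{\cal E}}^{-1}\big\vert^{p}\leq 2^{p-1}\left(\vert h\vert^p+\vert Y^{\mathbb F}{\widetilde{\cal E}}^{-1}\vert^p\right)=2^{p-1}\left(\vert h\vert^p+\vert Y^{\mathbb G}\vert^p\right)\quad\mbox{on}\quad \Rbrack0,\tau[\![.
\end{eqnarray*}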
Thus, by using this, we derive \begin{align*} 2E^{\widetilde{Q}}\left[(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]\leq 2^pE^{\widetilde{Q}}\left[\vert{h}_{\tau}\vert^p I_{\{\tau\leq{T}\}}\right]+2^pE^{\widetilde{Q}}\left[\sup_{0\leq{t}\leq\tau\wedge{T}}\vert{Y}^{\mathbb G}_t\vert^p\right]. \end{align*} Therefore, by combining this inequality with ${h}_{\tau}I_{\{\tau\leq{T}\}}=\xi I_{\{\tau\leq{T}\}}$, (\ref{Equality4MG00}), (\ref{Estimate4KGepsilon}), (\ref{Estimate4YGepsilon}), (\ref{Estimate4ZGepsilon}) and Theorem \ref{EstimatesUnderQtilde}, the proof of the theorem follows immediately. \end{proof} Similarly, the following theorem gives a version of Theorem \ref{EstimatesUnderQtilde1} where the left-hand-side of its estimate does not involve the probability $\widetilde Q$. \begin{theorem}\label{estimates1} Let ($Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i}, M^{\mathbb{G},i}$) be a solution to the RBSDE (\ref{RBSDEG}) corresponding to $(f^{(i)}, S^{(i)}, \xi^{(i)})$, for $i=1,2$. Then, for any $p>1$, there exist positive constants $C_1$ and $C_2$, depending on $p$ only, such that \begin{align}\label{Estimate4P1} &\Vert (\widetilde{\cal E})^{1/p}\delta Y^{\mathbb G} \Vert_{\mathbb{D}_{T\wedge\tau}(P,p)}^{p}+\Vert(\widetilde{\cal E}_{-})^{1/p}\delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(P,p)}^p+\Vert(\widetilde{\cal E}_{-})^{1/p} \bigcdot \delta {M^{\mathbb{G}}}\Vert^{p}_{{\cal {M}}^p_{T\wedge\tau}(\widetilde{P})}\\ &\leq C_1\left\{\Vert \delta \xi\Vert^{p}_{\mathbb{L}^{p}(\widetilde{Q})}+ \Vert\int_{0}^{T\wedge\tau}\vert\delta f(s)\vert ds\Vert^{p}_{\mathbb{L}^{p}(\widetilde{Q})}+\Vert \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}^p\right\} \\ &+C_2\Vert \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}^{p/2}\sqrt{\sum_{i=1}^2 \left\{\Vert\xi^{(i)}\Vert^{p}_{\mathbb{L}^{p}(\widetilde{Q})}+ \Vert\int_{0}^{T\wedge\tau}\vert{f}^{(i)}(s)\vert ds\Vert^{p}_{\mathbb{L}^{p}(\widetilde{Q})}+\Vert (S^{(i)})^+\Vert_{\mathbb{D}^{p}_{T\wedge\tau}(\widetilde{Q})}^p\right\}}. \end{align} \end{theorem} \begin{proof} Remark that by applying Lemma \ref{technicallemma1}-(a) to $Y=\delta{Y}^{\mathbb G}$, we deduce that \begin{align}\label{Control4deltaYGInfinity} E\left[\sup_{0\leq t\leq T}\widetilde{\cal E}_t\vert\delta Y^{\mathbb{G}}_{t}\vert^p\right]\leq \kappa E^{\widetilde{Q}}\left[\sup_{0\leq t\leq T}\vert\delta Y^{\mathbb{G}}_{t}\vert^p\right].\end{align} Thus, the rest of the proof focuses on controlling the remaining terms in the left-hand-side of (\ref{Estimate4P1}).
To this end, we apply Lemma \ref{technicallemma1}-(b) to $K=\int_0^{\cdot }(\delta{Z}^{\mathbb G}_s)^2ds +[\delta{M}^{\mathbb G},\delta{M}^{\mathbb G}]$ and $a=2/p$, and get \begin{align}\label{Control4deltaZGInfinity} &E\left[\left(\int_0^{T\wedge\tau}(\widetilde{\cal E}_{-})^{2/p}(\delta Z^{\mathbb{G}})^2 ds+\int_0^{T\wedge\tau}(\widetilde{\cal E}_{-})^{2/p}d[\delta{M}^{\mathbb G},\delta{M}^{\mathbb G}]_s\right)^{p/2}\right]\nonumber\\ &\leq\kappa E^{\widetilde{Q}}\left [\left(\int_0^{T\wedge\tau}(\delta Z^{\mathbb{G}})^2 ds+[\delta{M}^{\mathbb G},\delta{M}^{\mathbb G}]_{T\wedge\tau}\right)^{p/2}+\sum_{0\leq{t}\leq {T\wedge\tau}}{\widetilde{G}_t}\vert\Delta(\delta{M}^{\mathbb G})_t\vert^{p}\right]. \end{align} Then, thanks to Theorem \ref{abcde}, which implies that $\Delta(\delta{M}^{\mathbb G})=(h-Y^{\mathbb F}{\widetilde{\cal E}}^{-1})\Delta N^{\mathbb G}$, and by mimicking Step 3 in the proof of Lemma \ref{technicallemma1}, we derive \begin{align}\label{Control4deltaMGInfinity} E^{\widetilde{Q}}\left [\sum_{0\leq{t}\leq {T\wedge\tau}}{\widetilde{G}_t}\vert\Delta(\delta{M}^{\mathbb G})_t\vert^p\right]\leq 2^p E^{\widetilde{Q}}\left [\left((\vert{h}\vert^p+\vert{Y}^{\mathbb G}\vert^p)I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F}\right)_T\right]\leq 2^pE^{\widetilde{Q}}\left[\vert\xi\vert^p+ \sup_{0\leq{t}\leq{T\wedge\tau}}\vert Y^{\mathbb G}\vert^p\right]. \end{align} Therefore, by combining (\ref{Control4deltaYGInfinity}), (\ref{Control4deltaZGInfinity}), (\ref{Control4deltaMGInfinity}) and Theorem \ref{EstimatesUnderQtilde1}, the theorem follows immediately. This ends the proof of the theorem. \end{proof} Our second step in solving (\ref{RBSDEGinfinite}) relies on the following lemma, and focuses on simultaneously letting $T$ go to infinity and getting rid of $\widetilde{Q}$ in the norms of the data-triplet. \begin{lemma}\label{ExpecationQtilde2P} Let $X$ be a non-negative and $\mathbb F$-optional process with $X_0=0$. Then the following hold. {\rm{(a)}} For any $T\in (0,\infty)$, we always have \begin{eqnarray}\label{XunderQtilde} E^{\widetilde Q}[X_{T\wedge\tau}]=G_0 E\left[\int_0^T X_sdV_s^{\mathbb F}+X_T{\widetilde {\cal E}}_T\right]\quad\mbox{and}\quad E^{\widetilde Q}[X_{\tau}I_{\{\tau\leq{T}\}}]=G_0 E\left[\int_0^T X_sdV_s^{\mathbb F}\right]. \end{eqnarray} {\rm{(b)}} If $X/{\cal E}(G_{-}^{-1}\bigcdot m)$ is bounded, then we get \begin{eqnarray} \lim_{T\to\infty} E^{\widetilde Q}[X_{T\wedge\tau}]=G_0\Vert X\Vert_{L^1(P\otimes V^{\mathbb F})}:=G_0 E\left[\int_0^{\infty} X_sdV_s^{\mathbb F}\right].\end{eqnarray} \end{lemma} This lemma, which will be proved in Appendix \ref{Appendix4Proofs}, allows us to take limits of expectations under $\widetilde Q$ when some conditions hold. Below, we elaborate the principal result of this subsection. \begin{theorem}\label{EstimateInfinite} Let $p\in (1,+\infty)$ and suppose that $G>0$ and the data-triplet $(f, S, h)$ satisfies \begin{eqnarray}\label{MainAssumption4InfiniteHorizon} E\left[\int_0^{\infty}\left(\vert{h}_t\vert^p+(F_t)^p+\sup_{0\leq{u}\leq t}(S_u^+)^p\right)dV^{\mathbb F}_t\right]<+\infty,\quad\mbox{where}\quad F_t:=\int_0^t\vert f(s)\vert ds. \end{eqnarray} Then the following assertions hold.\\ {\rm{(a)}} The RBSDE (\ref{RBSDEGinfinite}) admits a unique solution $\left(Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}},M^{\mathbb{G}}\right)$.
\\ {\rm{(b)}} There exists a positive constant $C$, which depends on $p$ only, such that \begin{eqnarray}\label{Estimate4PTinfinity} &&E\left[\sup_{0\leq t\leq \tau}\widetilde{\cal E}_t\vert{Y}^{{\mathbb{G}}}_{t}\vert^p+\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}+\left(\int_0^{\tau}({\widetilde{\cal E}_{s-}})^{1/p}dK^{\mathbb{G}}_s\right)^p+\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}d[ M^{\mathbb{G}}, M^{\mathbb{G}}]_s\right)^{p/2}\right]\nonumber\\ &&\leq C E\left[\int_0^{\infty}\left\{\vert{h}_t\vert^p+(F_t)^p+\sup_{0\leq u\leq t}(S_u^+)^p\right\}dV^{\mathbb F}_t\right].\end{eqnarray} {\rm{(c)}} Let $\left(Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i}, M^{\mathbb{G},i}\right)$ be a solution to the RBSDE (\ref{RBSDEGinfinite}) corresponding to $(f^{(i)}, S^{(i)}, h^{(i)})$, for $i=1,2$. Then, there exist positive constants $C_1$ and $C_2$, depending on $p$ only, such that \begin{align}\label{Estimate4P1Tinifinite} &E\left[\sup_{0\leq t\leq \tau}\widetilde{\cal E}_t\vert\delta Y^{{\mathbb{G}}}_{t}\vert^p+\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert \delta Z^{{\mathbb{G}}}_{s}\vert^{2}ds+\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}d[ \delta M^{\mathbb{G}},\delta M^{\mathbb{G}}]_s\right)^{p/2}\right]\nonumber\\ &\leq C_1 E\left[\displaystyle\int_0^{\infty}\left\{\vert\delta h_t\vert^p+\vert{\delta F_t}\vert^p+\sup_{0\leq{u}\leq{t}}\vert\delta{S}_u\vert^p\right\}dV^{\mathbb F}_t\right]\nonumber\\ &+C_2\sqrt{E\left[\displaystyle\int_0^{\infty}\sup_{0\leq{u}\leq{t}}\vert\delta{S}_u\vert^p dV^{\mathbb F}_t\right]} \sqrt{\sum_{i=1}^2 E\left[\displaystyle\int_0^{\infty}\left\{\vert{h}_t^{(i)}\vert^p+\vert{ F}_t^{(i)}\vert^p+\sup_{0\leq{u}\leq{t}}\vert(S^{(i)}_u)^+\vert^p\right\}dV^{\mathbb F}_t\right]}, \end{align} where \begin{eqnarray}\label{processesDelta} \begin{cases}\delta{Y}^{\mathbb{G}}:=Y^{\mathbb{G},1}-Y^{\mathbb{G},2},\quad\delta{Z}^{\mathbb{G}}:=Z^{\mathbb{G},1}-Z^{\mathbb{G},2},\quad\delta{K}^{\mathbb{G}}:=K^{\mathbb{G},1}-K^{\mathbb{G},2},\quad \delta{M}^{\mathbb{G}}:=M^{\mathbb{G},1}-M^{\mathbb{G},2},\label{deltaProcesses}\\ \delta{h}:=h^{(1)}-h^{(2)} ,\quad\delta{F}:=F^{(1)}-F^{(2)},\quad F^{(i)}:=\int_0^{\cdot}\vert{f}^{(i)}_s\vert{d}s,\quad \delta{S}:=S^{(1)}-S^{(2)}.\end{cases}\end{eqnarray} \end{theorem} The theorem states that, when the horizon might be unbounded, we ``{\it discount}'' the solution of the RBSDE (\ref{RBSDEGinfinite}) by means of the discount factor ${\widetilde{\cal E}}$, and we then estimate, under $P$, the resulting processes in terms of the data-triplet $(f, h, S)$, using the space $L^p(\widetilde{\Omega}, {\cal{F}}\otimes{\cal{B}}(\mathbb R^+),P\otimes V^{\mathbb F})$ and its norm instead. This norm appears naturally in our analysis, and it reflects the fact that $\tau$ is a random horizon that might span the whole set of fixed planning horizons.
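To illustrate this norm in the simplest possible situation, consider the following toy case; it is stated here for illustration only and is not used in the sequel, and it presupposes the usual identifications $G=P(\tau>\cdot\,\vert\,{\cal F}_{\cdot})$, $\widetilde G=P(\tau\geq\cdot\,\vert\,{\cal F}_{\cdot})$ and $D^{o,\mathbb F}$ the dual $\mathbb F$-optional projection of $D:=I_{[\![\tau,+\infty[\![}$. Suppose that $\tau$ is independent of $\mathbb F$ and exponentially distributed with rate $\lambda>0$. Then
\begin{eqnarray*}
G_t=\widetilde{G}_t=e^{-\lambda t},\quad D^{o,\mathbb F}_t=1-e^{-\lambda t},\quad \widetilde{\cal E}_t={\cal E}_t\left(-{\widetilde G}^{-1}\bigcdot D^{o,\mathbb F}\right)=e^{-\lambda t},\quad dV^{\mathbb F}_t=\lambda e^{-\lambda t}dt,
\end{eqnarray*}
and hence, for any nonnegative $\mathbb F$-optional process $X$,
\begin{eqnarray*}
\Vert X\Vert_{L^1(P\otimes V^{\mathbb F})}=E\left[\int_0^{\infty}X_t\,\lambda e^{-\lambda t}dt\right]=E\left[X_{\tau}\right],
\end{eqnarray*}
so that, in this toy case, the right-hand side of (\ref{Estimate4PTinfinity}) is simply $C\,E\left[\vert h_{\tau}\vert^p+(F_{\tau})^p+\sup_{0\leq u\leq \tau}(S_u^+)^p\right]$.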
Equivalently, for the pair $(Y^{\mathbb{G}},Z^{\mathbb{G}})$ of the solution, we use the following two spaces and their norms given by \begin{eqnarray}\label{DtildeSpace} \begin{cases} \widetilde{\mathbb{D}}_{\sigma}(P,p):=\left\{Y\in {\mathbb{D}}_{\sigma}(P,p):\quad \Vert{Y}\Vert_{ \widetilde{\mathbb{D}}_{\sigma}(P,p)}:= \Vert{Y}{\widetilde{\cal E}}^{1/p}\Vert_{ {\mathbb{D}}_{\sigma}(P,p)}<+\infty\right\},\\ \widetilde{\mathbb{S}}_{\sigma}(P,p):=\left\{Z\in {\mathbb{S}}_{\sigma}(P,p):\quad \Vert{Z}\Vert_{ \widetilde{\mathbb{S}}_{\sigma}(P,p)}:= \Vert{Z}{\widetilde{\cal E}_{-}}^{1/p}\Vert_{ {\mathbb{S}}_{\sigma}(P,p)}<+\infty\right\}.\end{cases} \end{eqnarray} For the remaining pair $(K^{\mathbb{G}},M^{\mathbb{G}})$ of the solution we take the norm of the ``discounted'' processes ${\widetilde{\cal E}_{-}}^{1/p}\bigcdot {K}^{\mathbb{G}}$ and ${\widetilde{\cal E}_{-}}^{1/p}\bigcdot {M}^{\mathbb{G}}$ instead of those of $({K}^{\mathbb{G}},{M}^{\mathbb{G}})$.\\ Furthermore, direct It\^o calculations show that $\left(Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}},M^{\mathbb{G}}\right)$ is the unique solution to (\ref{RBSDEGinfinite}) if and only if $\left(\widetilde{Y}^{\mathbb{G}},\widetilde{Z}^{\mathbb{G}},\widetilde{K}^{\mathbb{G}},\widetilde{M}^{\mathbb{G}}\right):=\left({\widetilde{\cal E}}^{1/p}Y^{\mathbb{G}},{\widetilde{\cal E}_{-}}^{1/p}Z^{\mathbb{G}},{\widetilde{\cal E}_{-}}^{1/p}\bigcdot {K}^{\mathbb{G}},{\widetilde{\cal E}_{-}}^{1/p}\bigcdot {M}^{\mathbb{G}}\right)$ is the unique solution to \begin{align}\label{EquivalentRBSDE} \begin{cases} dY=-Y\left({{\widetilde{G}}\over{G}}\right)^{1/p}I_{\Rbrack0,\tau]\!]}dV^{(1/p)}-{\widetilde{\cal E}_{-}}^{1/p}f(t)d(t\wedge\tau)-dK-dM+ZdW^{\tau},\\ Y_{\tau}={\widetilde{\cal E}_{\tau}}^{1/p}\xi,\displaystyle\quad Y\geq {\widetilde{\cal E}}^{1/p}S\quad\mbox{on}\quad \Lbrack0,\tau[\![,\quad \int_0^{\tau}(Y_{u-}- {\widetilde{\cal E}_{u-}}^{1/p}S_{u-})dK_u=0, \end{cases} \end{align} where $V^{(1/p)}$ is defined in (\ref{Vepsilon}). Moreover, under (\ref{MainAssumption4InfiniteHorizon}), this solution is an $L^p(P,\mathbb G)$-solution and there exists a positive constant $C$ that depends on $p$ only such that \begin{align*} \Vert \widetilde{Y}^{\mathbb{G}}\Vert_{\mathbb{D}_{\tau}(P,p)}+ \Vert \widetilde{Z}^{\mathbb{G}}\Vert_{\mathbb{S}_{\tau}(P,p)}+\Vert\widetilde{K}^{\mathbb{G}}_T\Vert_{L^p(P)}+\Vert\widetilde{M}^{\mathbb{G}}\Vert_{{\cal{M}}^p(P)}\leq C\Vert{F}+\vert{h}\vert+\sup_{0\leq u\leq \cdot}S_u^+\Vert_{L^p(P\otimes{V}^{\mathbb{F}})}. \end{align*} \begin{proof}[Proof of Theorem \ref{EstimateInfinite}] In virtue of Lemma \ref{ExpecationQtilde2P}, we will prove the theorem in two parts.\\ {\bf Part 1.} Here, we assume that there exists a positive constant $C\in (0,+\infty)$ such that \begin{eqnarray}\label{BoundednessAssumption} \max\left(\vert{h}\vert^p, \left(\int_0^{\cdot}\vert f_s\vert ds\right)^p, \sup_{0\leq u\leq\cdot}\vert{S}_u\vert^p\right)\leq C {\cal E}(G_{-}^{-1}\bigcdot m),\end{eqnarray} and prove that the theorem holds under this assumption. To this end, we consider the sequence of data $(f^{(n)}, h^{(n)}, S^{(n)})$ given by \begin{eqnarray}\label{Sequence4[0,n]} f^{(n)}:=fI_{\Lbrack0,n]\!]},\quad h^{(n)}:=hI_{\Lbrack0,n]\!]},\quad S^{(n)}_t:=S_{n\wedge{t}},\quad\xi^{(n)}:=h_{\tau}I_{\{\tau\leq{n}\}},\quad \forall\ n\geq 1. \end{eqnarray} For any $n\geq 1$, the RBSDE associated with the data $(f^{(n)}, h^{(n)}, S^{(n)}, \xi^{(n)})$ has a unique solution for any horizon $T\geq n$.
For any $n, m\geq 1$, we apply Theorem \ref{estimates} to each $(f^{(n)}, h^{(n)}, S^{(n)})$ and Theorem \ref{estimates1} to \begin{eqnarray*} \left(\delta{f}, \delta{h},\delta{ S},\delta\xi\right):=\left(f^{(n)}-f^{(n+m)}, h^{(n)}-h^{(n+m)}, S^{(n)}-S^{(n+m)}, h_{\tau}I_{\{\tau\leq n\}}-h_{\tau}I_{\{\tau\leq n+m\}}\right), \end{eqnarray*} with horizon $T=n+m$ in both theorems, and get \begin{eqnarray}\label{Inequality4Limit} &&E\left[\sup_{0\leq t\leq T}\widetilde{\cal E}_t\vert{Y}^{\mathbb{G},n}_{t}\vert^p+\left(\int_{0}^{T}(\widetilde{\cal E}_{s-})^{2/p}\vert Z^{\mathbb{G},n}_{s}\vert^{2}ds\right)^{p/2}+\left((\widetilde{\cal E}_{-})^{1/p}\bigcdot K^{\mathbb{G},n}_T\right)^p+((\widetilde{\cal E}_{-})^{2/p}\bigcdot [ M^{\mathbb{G},n}, M^{\mathbb{G},n}]_T)^{p/2}\right]\nonumber\\ &&\leq C E^{\widetilde{Q}}\left[\left(\int_{0}^{n\wedge\tau}\vert f(s)\vert ds\right)^p +\underset{0\leq s \leq n\wedge\tau}{\sup}(S^{+}_{s})^p+\vert\xi_n\vert^p\right], \end{eqnarray} and \begin{align}\label{Inequa4Convergence} &E\left[\sup_{0\leq t\leq T}\widetilde{\cal E}_t\vert{Y}^{\mathbb{G},n}_{t}-Y^{\mathbb{G},n+m}_{t}\vert^p+\left(\int_{0}^{T\wedge\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert{ Z}^{\mathbb{G},n}_{s}-Z^{\mathbb{G},n+m}_{s}\vert^{2}ds\right)^{p/2}\right]\nonumber\\ &+E\left[\left(\int_{0}^{T}(\widetilde{\cal E}_{s-})^{2/p}d[ M^{\mathbb{G},n}-M^{\mathbb{G},n+m},M^{\mathbb{G},n}-M^{\mathbb{G},n+m}]_s\right)^{p/2}\right] \nonumber\\ &\leq C_1 E^{\widetilde{Q}}\left[\vert\xi_n-\xi_{n+m}\vert^p+\left(\int_{0}^{\tau}\vert{f}_n(s)-f_{n+m}(s)\vert ds\right)^p +\sup_{0\leq t\leq \tau}\vert{S}_{t\wedge{n}}-S_{t\wedge{(n+m)}}\vert^p\right]\nonumber\\ &+C_2\sqrt{\Vert\sup_{0\leq t\leq\tau}\vert{S}_{t\wedge{n}}-S_{t\wedge{(n+m)}}\vert\Vert_{L^p(\widetilde{Q})}^p}\sqrt{\sum_{i\in\{n,n+m\}}E^{\widetilde{Q}}\left[\vert\xi^{(i)}\vert^p+\left(\int_{0}^{\tau}\vert{f}^{(i)}(s)\vert ds\right)^p +\sup_{0\leq t\leq \tau}(({S}^{(i)}_t)^+)^p\right]}. \end{align} The rest of this part is divided into two steps.\\ {\bf Step 1.} Here we calculate the limits, when $n$ and/or $m$ go to infinity, of the right-hand-sides of the inequalities (\ref{Inequality4Limit}) and (\ref{Inequa4Convergence}). \\ By directly applying Lemma \ref{ExpecationQtilde2P} to $\left(\int_{0}^{\cdot}\vert f(s)\vert ds\right)^p$, $\sup_{0\leq s \leq \cdot}(S^{+}_{s})^p$, and $\vert{h}\vert^p$, we deduce that \begin{align}\label{Limits4RightHandSide} \begin{cases} \lim_{n\to\infty}E^{\widetilde{Q}}\left[\left(\int_{0}^{n\wedge\tau}\vert f(s)\vert ds\right)^p\right]=E\left[\int_0^{\infty} (F_t)^p dV^{\mathbb F}_t\right],\\ \lim_{n\to\infty}E^{\widetilde{Q}}\left[\sup_{0\leq s \leq n\wedge\tau}(S^{+}_{s})^p\right]=E\left[\int_0^{\infty} \sup_{0\leq s \leq{t}}(S^{+}_{s})^pdV^{\mathbb F}_t\right],\\ \lim_{n\to\infty}E^{\widetilde{Q}}\left[\vert\xi_n\vert^p\right]=\lim_{n\to\infty}E^{\widetilde{Q}}\left[\vert{h}_{\tau}\vert^pI_{\{\tau\leq{n}\}}\right]=E\left[\int_0^{\infty} \vert{h}_t\vert^p dV^{\mathbb F}_t\right].
\end{cases} \end{align} Furthermore, similar arguments allow us to derive \begin{align}\label{LimitZero} \begin{cases} \displaystyle\lim_{n\to\infty}\sup_{m\geq1}E^{\widetilde{Q}}\left[\left(\int_{0}^{\tau}\vert{f}_n(s)-f_{n+m}(s)\vert {d}s\right)^p\right]=\lim_{n\to\infty}E^{\widetilde{Q}}\left[\left(\int_{n\wedge\tau}^{\tau}\vert{f}\vert {d}s\right)^p\right]=0,\\ \displaystyle\lim_{n\to\infty}\sup_{m\geq1}E^{\widetilde{Q}}\left[\sup_{0\leq t\leq \tau}\vert{S}_{t\wedge{n}}-S_{t\wedge{(n+m)}}\vert^p\right]=\lim_{n\to\infty}E^{\widetilde{Q}}\left[\sup_{n\leq{t}\leq \tau}\vert{S}_{n}-S_{t}\vert^pI_{\{\tau>n\}}\right]=0,\\ \displaystyle\lim_{n\to\infty}\sup_{m\geq1}E^{\widetilde{Q}}\left[\vert\xi_n-\xi_{n+m}\vert^p\right]=\lim_{n\to\infty}E^{\widetilde{Q}}\left[\vert{h}_{\tau}\vert^pI_{\{\tau>n\}}\right]=0.\end{cases} \end{align} {\bf Step 2.} This step proves assertions (a), (b) and (c) of the theorem under the assumption (\ref{BoundednessAssumption}).\\ Thus, by combining (\ref{LimitZero}) and (\ref{Inequa4Convergence}), we deduce that the sequence $(Y^{\mathbb{G},n}, Z^{\mathbb{G},n}, K^{\mathbb G,n},M^{\mathbb{G},n})$ is a Cauchy sequence in norm, and hence it converges to $(Y^{\mathbb{G}}, Z^{\mathbb{G}}, K^{\mathbb G},M^{\mathbb{G}})$ in norm and almost surely for a subsequence. It is clear then that $(Y^{\mathbb{G}}, Z^{\mathbb{G}}, K^{\mathbb G},M^{\mathbb{G}})$ is a solution to (\ref{RBSDEGinfinite}), and hence this equation admits a solution. The uniqueness of this solution is a direct consequence of assertion (c), and hence assertion (a) follows immediately as soon as we prove assertion (c). Besides, by taking the limit in (\ref{Inequality4Limit}), and using Fatou's lemma and (\ref{Limits4RightHandSide}), assertion (b) follows immediately. Thus, the rest of this step deals with assertion (c). To this end, we consider two triplets $(f^{(i)}, S^{(i)},h^{(i)})$, $i=1,2$, that satisfy the boundedness assumption (\ref{BoundednessAssumption}). Then for each $i=1,2$ we associate a sequence $(f^{(i, n)}, S^{(i,n)}, h^{(i,n)})$ as in (\ref{Sequence4[0,n]}). Thus, on the one hand, we apply Theorem \ref{estimates1} to each $(\delta f^{(i)}, \delta S^{(i)}, \delta\xi^{(i)}):=\left(f^{(i, n)}-f^{(i, n+m)}, S^{(i,n)}-S^{(i,n+m)}, \xi^{(i,n)}-\xi^{(i,n+m)}\right)$, obtain inequalities similar to (\ref{Inequa4Convergence}) for each $i=1,2$, and deduce afterwards that the sequence $(Y^{\mathbb{G},(i,n)}, Z^{\mathbb{G},(i,n)}, K^{\mathbb G,(i,n)},M^{\mathbb{G},(i,n)})$ converges in norm, and almost surely for a subsequence, to the solution $(Y^{\mathbb{G},(i)}, Z^{\mathbb{G},(i)}, K^{\mathbb G,(i)},M^{\mathbb{G},(i)})$.
On the other hand, for each $n\geq 1$, we apply Theorem \ref{estimates1} for \begin{align*} &\delta f^{(n)}:=f^{(1, n)}- f^{(2, n)},\quad\delta S^{(n)}:=S^{(1,n)}-S^{(2,n)},\quad\delta\xi^{(n)}:= \xi^{(1,n)}-\xi^{(2,n)},\\ & \mbox{and}\\ &\begin{cases} \delta{Y}^{\mathbb{G},(n)}:=Y^{\mathbb{G},(1,n)}-Y^{\mathbb{G},(2,n)},\quad \delta{ Z}^{\mathbb{G},(n)}:=Z^{\mathbb{G},(1,n)}-Z^{\mathbb{G},(2,n)},\\ \delta{K}^{\mathbb G,(n)}:=K^{\mathbb G,(1,n)}-K^{\mathbb G,(2,n)},\quad \delta{M}^{\mathbb{G},(n)}:=M^{\mathbb{G},(1,n)}-M^{\mathbb{G},(2,n)}, \end{cases}\end{align*} and get \begin{align}\label{Convergence4Differences} &E\left[\sup_{0\leq t\leq T}\widetilde{\cal E}_t\vert\delta{Y}^{\mathbb{G},(n)}_{t}\vert^p+\left(\int_{0}^{T\wedge\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert\delta{ Z}^{\mathbb{G},(n)}_{s}\vert^{2}ds+\int_{0}^{T}(\widetilde{\cal E}_{s-})^{2/p}d[ \delta{M}^{\mathbb{G},(n)},\delta{M}^{\mathbb{G},(n)}]_s\right)^{p/2}\right]\nonumber\\ &\leq C_1 E^{\widetilde{Q}}\left[\vert\delta\xi^{(n)}\vert^p+\left(\int_{0}^{\tau}\vert\delta f^{(n)}_s\vert ds\right)^p +\sup_{0\leq t\leq \tau}\vert \delta S^{(n)}_t\vert^p\right]\nonumber\\ &+C_2\sqrt{\Vert\sup_{0\leq t\leq\tau}\vert \delta S^{(n)}_t\vert\Vert_{L^p(\widetilde{Q})}^p}\sqrt{\sum_{i=1}^2E^{\widetilde{Q}}\left[\vert\xi^{(i,n)}\vert^p+\left(\int_{0}^{\tau}\vert{f}^{(i,n)}(s)\vert ds\right)^p +\sup_{0\leq t\leq \tau}(({S}^{(i,n)}_t)^+)^p\right]}. \end{align} Similarly to the proof of (\ref{Limits4RightHandSide}), we use Lemma \ref{ExpecationQtilde2P} and the boundedness assumption (\ref{BoundednessAssumption}), which each triplet $(f^{(i)}, S^{(i)},h^{(i)})$, $i=1,2$, satisfies, and get \begin{align}\label{Limits4Differences} \begin{cases} \displaystyle\lim_{n\to\infty}E^{\widetilde{Q}}\left[\left(\int_{0}^{n\wedge\tau}\vert \delta{f}^{(n)}_s\vert ds\right)^p\right]=E\left[\int_0^{\infty} \vert\delta{F}_t\vert^p dV^{\mathbb F}_t\right],\ \lim_{n\to\infty}E^{\widetilde{Q}}\left[\vert\delta\xi^{(n)}\vert^p\right]=E\left[\int_0^{\infty} \vert\delta{h}_t\vert^p dV^{\mathbb F}_t\right],\\ \displaystyle\lim_{n\to\infty}E^{\widetilde{Q}}\left[\sup_{0\leq s \leq{n}\wedge\tau}((S^{(i)})^{+}_{s})^p\right]=E\left[\int_0^{\infty} \sup_{0\leq s \leq{t}}((S^{(i)})^{+}_{s})^pdV^{\mathbb F}_t\right],\quad i=1,2.\end{cases} \end{align} Thus, by taking the limit in (\ref{Convergence4Differences}), using Fatou's lemma for its left-hand-side term, and using (\ref{Limits4Differences}) for its right-hand-side term, we conclude that assertion (c) holds. This ends the first part.\\ {\bf Part 2.} This part proves the theorem without the boundedness assumption (\ref{BoundednessAssumption}). Hence, we consider the following sequence of stopping times \begin{eqnarray*} T_n:=\inf\left\{t\geq 0\ :\quad {{\vert{S}_t\vert^p}\over{{\cal E}_t(G_{-}^{-1}\bigcdot m)}} >n\quad\mbox{or}\quad {{(\int_0^t\vert f(s)\vert ds)^p}\over{{\cal E}_t(G_{-}^{-1}\bigcdot m)}} >n\right\},\end{eqnarray*} and the sequences \begin{eqnarray}\label{Consutrction4DataSequence} h^{(n)}:=h I_{\{\vert{h}\vert^p\leq n{\cal E}(G_{-}^{-1}\bigcdot m)\}}I_{\Lbrack0, T_n[\![},\quad f^{(n)}:=fI_{\Lbrack0, T_n]\!]},\quad S^{(n)}:=S I_{\Lbrack0, T_n[\![}. \end{eqnarray} Thus, it is clear that all triplets $(f^{(n)}, h^{(n)}, S^{(n)})$ satisfy (\ref{BoundednessAssumption}), for any $n\geq 1$.
Thus, thanks to the first part, we deduce the existence of a unique solution to (\ref{RBSDEGinfinite}), denoted by $(Y^{\mathbb{G},(n)}, Z^{\mathbb{G},(n)}, K^{\mathbb G,(n)},M^{\mathbb{G},(n)})$, associated to the data $(f^{(n)}, h^{(n)}, S^{(n)})$ and satisfying \begin{align}\label{Estimate4PTinfinityproof} &E\left[\sup_{0\leq t\leq \tau}\widetilde{\cal E}_t\vert{Y}^{{\mathbb{G}},n}_{t}\vert^p+\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert Z^{{\mathbb{G}},n}_{s}\vert^{2}ds+\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}d[ M^{\mathbb{G},n}, M^{\mathbb{G},n}]_s\right)^{p/2}+\left(\int_0^{\tau}({\widetilde{\cal E}_{s-}})^{1/p}dK^{\mathbb{G},n}_s\right)^p\right]\nonumber\\ &\leq C E\left[\int_0^{\infty}\left\{\vert{h}^{(n)}_t\vert^p+({ F}_t^{(n)})^p+\sup_{0\leq u\leq t}\vert(S^{(n)}_u)^+\vert^p\right\}dV^{\mathbb F}_t\right],\end{align} and for any $n\geq 1$ and $m\geq 1$ \begin{align}\label{Estimate4P1Tinifiniteproof} &E\left[\sup_{0\leq t\leq \tau}\widetilde{\cal E}_t\vert{Y}^{{\mathbb{G},n}}_{t}-{Y}^{{\mathbb{G},n+m}}_{t}\vert^p+\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert{ Z}^{{\mathbb{G}},n}_{s}-Z^{{\mathbb{G}},n+m}_{s}\vert^{2}ds\right)^{p/2}\right]\nonumber\\ &+E\left[\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}d[ M^{\mathbb{G},n}-M^{\mathbb{G},n+m},M^{\mathbb{G},n}-M^{\mathbb{G},n+m}]_s\right)^{p/2}\right]\nonumber\\ &\leq C_1 E\left[\displaystyle\int_0^{\infty}\left\{\vert {h}_t^{(n)}-{h}_t^{(n+m)} \vert^p+\vert{ F}_t^{(n)}-{ F}_t^{(n+m)}\vert^p+\sup_{0\leq{u}\leq{t}}\vert{S}_u^{(n)}-{S}_u^{(n+m)}\vert^p\right\}dV^{\mathbb F}_t\right]\nonumber\\ &+C_2\sqrt{E\left[\displaystyle\int_0^{\infty}\sup_{0\leq{u}\leq{t}}\vert{S}_u^{(n)}-{S}_u^{(n+m)}\vert^p dV^{\mathbb F}_t\right]} \sqrt{\sum_{i\in\{n,n+m\}} E\left[\displaystyle\int_0^{\infty}\left\{\vert{h}_t^{(i)}\vert^p+\vert{ F}_t^{(i)}\vert^p+\sup_{0\leq{u}\leq{t}}\vert(S^{(i)}_u)^+\vert^p\right\}dV^{\mathbb F}_t\right]}. \end{align} This latter inequality is obtained from (\ref{Estimate4P1Tinifinite}) by considering \begin{eqnarray*} \begin{cases}\delta{Y}^{\mathbb{G}}:=Y^{\mathbb{G},n}-Y^{\mathbb{G},n+m},\ \delta{Z}^{\mathbb{G}}:=Z^{\mathbb{G},n}-Z^{\mathbb{G},n+m},\ \delta{K}^{\mathbb{G}}:=K^{\mathbb{G},n}-K^{\mathbb{G},n+m},\ \delta{M}^{\mathbb{G}}:=M^{\mathbb{G},n}-M^{\mathbb{G},n+m},\label{deltaProcesses}\\ \delta{h}:=h^{(n)}-h^{(n+m)} ,\quad\delta{F}:=F^{(n)}-F^{(n+m)},\quad F^{(i)}:=\int_0^{\cdot}\vert{f}^{(i)}_s\vert{d}s,\quad \delta{S}:=S^{(n)}-S^{(n+m)}.\end{cases}\end{eqnarray*} Thus, in virtue of (\ref{MainAssumption4InfiniteHorizon}) and the dominated convergence theorem, we derive \begin{align*} &\lim_{n\to+\infty}\sup_{m\geq 1}E\left[\displaystyle\int_0^{\infty}\left\{\vert {h}_t^{(n)}-{h}_t^{(n+m)} \vert^p+\vert{ F}_t^{(n)}-{ F}_t^{(n+m)}\vert^p+\sup_{0\leq{u}\leq{t}}\vert{S}_u^{(n)}-{S}_u^{(n+m)}\vert^p\right\}dV^{\mathbb F}_t\right]\\ &\leq\lim_{n\to+\infty}{E}\left[\displaystyle\int_{T_n}^{\infty}\left\{\vert {h}_t\vert^p+\vert{ F}_t\vert^p+\sup_{0\leq{u}\leq{t}}\vert{S}_u\vert^p\right\}dV^{\mathbb F}_t\right]=0. \end{align*} A combination of this with (\ref{Estimate4P1Tinifiniteproof}) proves that the sequence $(Y^{\mathbb{G},(n)}, Z^{\mathbb{G},(n)}, K^{\mathbb G,(n)},M^{\mathbb{G},(n)})$ is a Cauchy sequence in norm, and hence it converges to $(Y^{\mathbb{G}}, Z^{\mathbb{G}}, K^{\mathbb G},M^{\mathbb{G}})$ in norm and almost surely for a subsequence.
Furthermore, $(Y^{\mathbb{G}}, Z^{\mathbb{G}}, K^{\mathbb G},M^{\mathbb{G}})$ clearly satisfies (\ref{RBSDEGinfinite}), and due to Fatou's lemma and (\ref{Estimate4PTinfinityproof}) we conclude that (\ref{Estimate4PTinfinity}) holds. To prove that (\ref{Estimate4P1Tinifinite}) holds, we repeat this analysis for the data $(f^{(i)}, S^{(i)},h^{(i)}, \xi^{(i)})$, $i=1,2$, and obtain two sequences $(Y^{\mathbb{G},(n,i)}, Z^{\mathbb{G},(n,i)}, K^{\mathbb G,(n,i)},M^{\mathbb{G},(n,i)})_{n\geq 1}$ ($i=1,2$) of solutions to (\ref{RBSDEGinfinite}) corresponding to the data $(f^{(n,i)}, h^{(n,i)}, S^{(n,i)})$ constructed from $(f^{(i)}, h^{(i)}, S^{(i)})$ via (\ref{Consutrction4DataSequence}). Furthermore, each $(Y^{\mathbb{G},(n,i)}, Z^{\mathbb{G},(n,i)}, K^{\mathbb G,(n,i)},M^{\mathbb{G},(n,i)})_{n\geq1}$ converges (in norm and almost surely for a subsequence) to $(Y^{\mathbb{G},i}, Z^{\mathbb{G},i}, K^{\mathbb G,i},M^{\mathbb{G},i})$, which is the solution to (\ref{RBSDEGinfinite}) corresponding to $(f^{(i)}, S^{(i)},h^{(i)}, \xi^{(i)})$, for each $i=1,2$. Thus, by applying (\ref{Estimate4P1Tinifinite}) to $(\delta{f}, \delta{h}, \delta{S}):=(f^{(n,1)}-f^{(n,2)}, h^{(n,1)}-h^{(n,2)}, S^{(n,1)}-S^{(n,2)})$, and taking the limits on both sides, we easily deduce that (\ref{Estimate4P1Tinifinite}) holds for the general case. This ends the second part, and completes the proof of the theorem.\end{proof} \subsection{Relationship to RBSDE under $\mathbb F$} In this subsection, we establish the RBSDE under $\mathbb F$ that is directly related to (\ref{RBSDEGinfinite}). \begin{theorem}\label{Relationship4InfiniteBSDE} Suppose that $G>0$ and $(f, S, h)$ satisfies (\ref{MainAssumption4InfiniteHorizon}) and \begin{eqnarray}\label{MainAssumption4InfiniteHorizonBIS} E\left[\left(F_{\infty}{\widetilde{\cal E}}_{\infty}\right)^p\right]<+\infty,\quad\mbox{where}\quad {\widetilde{\cal E}}:={\cal E}(-{\widetilde G}^{-1}\bigcdot D^{o,\mathbb{F}})\quad\mbox{and}\quad F_{\infty}:=\int_0^{\infty}\vert{f}_s\vert ds .\end{eqnarray} Consider the pair $(f^{\mathbb{F}},S^{\mathbb{F}})$ given by (\ref{ProcessVFandXiF}).
Then the following assertions hold.\\ {\rm{(a)}} The following RBSDE, under $\mathbb F$, generated by the triplet $ \left(f^{\mathbb{F}},S^{\mathbb{F}},h\right)$ \begin{eqnarray}\label{RBSDEFinfinite} \begin{cases} Y_{t}= \displaystyle\int_{t}^{\infty}f^{\mathbb{F}}(s)ds+\int_{t}^{\infty}h_{s}dV^{\mathbb{F}}_{s}+K_{\infty}-K_{t}-\int_{t}^{\infty}Z_{s}dW_{s},\\ Y_{t}\geq S_{t}^{\mathbb{F}},\quad \displaystyle{E}\left[\int_{0}^{\infty}(Y_{t-}-S_{t-}^{\mathbb{F}})dK_{t}\right]=0 , \end{cases} \end{eqnarray} has a unique $L^p(P,\mathbb F)$-solution $(Y^{\mathbb F}, Z^{\mathbb F}, K^{\mathbb F})$.\\ {\rm{(b)}} The unique solution to (\ref{RBSDEGinfinite}), that we denote by $\left(Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}},M^{\mathbb{G}}\right)$, satisfies \begin{eqnarray}\label{firstrelationInfnite} Y^{\mathbb{G}}= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi{I}_{[\![\tau,+\infty[\![},\ Z^{\mathbb{G}}=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}_{-}} I_{\Rbrack0,\tau]\!]},\ K^{\mathbb{G}}=\frac{1}{{\widetilde{\cal E}}_{-}}\bigcdot (K ^{\mathbb{F}})^{\tau},\ \mbox{and}\ M^{\mathbb{G}}=\left(h-\frac{Y^{\mathbb{F}}}{\widetilde{\cal E}}\right)\bigcdot N^{\mathbb{G}}.\label{secondrelationInfinite} \end{eqnarray} \end{theorem} \begin{proof} On the one hand, remark that, due to the assumptions (\ref{MainAssumption4InfiniteHorizon}) and (\ref{MainAssumption4InfiniteHorizonBIS}), both random variables $\int_0^{\infty}\vert f^{\mathbb F}_s\vert ds$ and $\int_0^{\infty}\vert{h}_s\vert dV^{\mathbb F}_s$ belong to $L^p(P)$. Indeed, this follows from the following two relations \begin{align*} \int_0^{\infty}\vert f^{\mathbb F}_s\vert ds={\widetilde{\cal E}}_{\infty}F_{\infty}+\int_0^{\infty} F_s dV^{\mathbb F}_s,\quad\mbox{and}\quad \left(\int_0^{\infty}\vert{h}_s\vert dV^{\mathbb F}_s\right)^p\leq (V^{\mathbb F}_{\infty})^{p-1}\int_0^{\infty}\vert{h}_s\vert^p dV^{\mathbb F}_s\leq\int_0^{\infty}\vert{h}_s\vert^p dV^{\mathbb F}_s . \end{align*} On the other hand, by arguments similar to those in the proof of Theorem \ref{abcde}, one can prove that any solution $(Y,Z,K)$ to (\ref{RBSDEFinfinite}) satisfies \begin{eqnarray*} Y_t=\mbox{ess}\sup_{\sigma\in{\cal T}_t^{\infty}(\mathbb F)}E\left[\int_t^{\sigma} f^{\mathbb F}_s ds+\int_t^{\sigma}h_s dV^{\mathbb F}_s+S^{\mathbb F}_{\sigma}I_{\{\sigma<+\infty\}}\ \big|{\cal F}_t\right]=:\overline{Y}. \end{eqnarray*} Furthermore, due to the Snell envelope theory (see \cite{ElqarouiBSDE} for details, or see also the proof of Theorem \ref{abcde}), the Doob-Meyer decomposition of the supermartingale $\overline{Y}+\int_0^{\cdot}h_s dV^{\mathbb F}_s+\int_0^{\cdot} f^{\mathbb F}_s ds=\overline{M}-\overline{K}$ gives the solution triplet to (\ref{RBSDEFinfinite}) as $(\overline{Y},\overline{M},\overline{K})$. This proves that (\ref{RBSDEFinfinite}) has a unique solution, and the proof of assertion (a) is complete.\\ The rest of the proof deals with assertion (b). Remark that, in virtue of Theorem \ref{EstimateInfinite}, the RBSDE (\ref{RBSDEGinfinite}) has at most one solution.
Therefore, we will prove that the quadruplet $(\widehat{Y}, \widehat{Z}, \widehat{K},\widehat{M})$ given by \begin{align*} \widehat{Y}:= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi{I}_{[\![\tau,+\infty[\![},\ \widehat{Z}:=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}_{-}} I_{\Rbrack0,\tau]\!]},\ \widehat{K}:=\frac{1}{{\widetilde{\cal E}}_{-}}\bigcdot (K ^{\mathbb{F}})^{\tau},\ \mbox{and}\ \widehat{M}:=\left(h-\frac{Y^{\mathbb{F}}}{\widetilde{\cal E}}\right)\bigcdot N^{\mathbb{G}}, \end{align*} is a solution to (\ref{RBSDEGinfinite}). To prove this fact, we put $\widehat{\Gamma}:={Y}^{\mathbb F}/{\widetilde{\cal E}},$ and on the one hand we remark that \begin{eqnarray}\label{YGGammainfinite} \widehat{Y} =\widehat{\Gamma}I_{\Lbrack0,\tau[\![}+ h_{\tau} I_{[\![\tau,+\infty[\![}=\widehat{\Gamma}^{\tau} +(h-\widehat{\Gamma})\bigcdot D.\end{eqnarray} On the other hand, by combining It\^o's formula applied to $\widehat{\Gamma}$, (\ref{RBSDEFinfinite}) that the triplet $(Y^{\mathbb F}, Z^{\mathbb{F}}, K^{\mathbb{F}})$ satisfies, ${\widetilde{\cal E}}^{-1}={\cal E}(G^{-1}\bigcdot D^{o,\mathbb F})$, ${\widetilde{\cal E}}={\widetilde{\cal E}}_{-}G/{\widetilde{G}}$, and $dV^{\mathbb F}={\widetilde{\cal E}}_{-}{\widetilde{G}}^{-1}dD^{o,\mathbb F}$, we derive \begin{eqnarray*} d\widehat{\Gamma}&&=Y^{\mathbb F}d{\widetilde{\cal E}}^{-1}+{1\over{{\widetilde{\cal E}}_{-}}}dY^{\mathbb F}={{\widehat{\Gamma}}\over{\widetilde G}}dD^{o,\mathbb F}+{1\over{{\widetilde{\cal E}}_{-}}}d{Y}^{\mathbb F}={{\widehat{\Gamma}}\over{\widetilde G}}dD^{o,\mathbb F}-{{f^{\mathbb{F}}}\over{{\widetilde{\cal E}}_{-}}}ds-{{h}\over{{\widetilde{\cal E}}_{-}}}dV^{\mathbb{F}}-{{1}\over{{\widetilde{\cal E}}_{-}}}dK^{\mathbb{F}}+{{Z^{\mathbb{F}}}\over{{\widetilde{\cal E}}_{-}}}dW\nonumber\\ &&={{\widehat{\Gamma}-h}\over{\widetilde G}}dD^{o,\mathbb F}-fds-{{1}\over{{\widetilde{\cal E}}_{-}}}dK^{\mathbb{F}}+{{Z^{\mathbb{F}}}\over{{\widetilde{\cal E}}_{-}}}dW.\label{equation4Gamma} \end{eqnarray*} Thus, by stopping $\widehat{\Gamma}$ and inserting the above equality in (\ref{YGGammainfinite}) and arranging terms we get \begin{eqnarray}\label{SDE4YG} d\widehat{Y} =-f(t)d(t\wedge\tau)-d\widehat{K}+d \widehat{M}+{\widehat{Z}}dW^{\tau},\quad\mbox{and}\quad \widehat{Y}_{\tau}=\xi. \end{eqnarray} This proves that $(\widehat{Y}, \widehat{Z}, \widehat{K},\widehat{M})$ satisfies the first equation in (\ref{RBSDEGinfinite}). Furthermore, it is clear that $ {Y}^{\mathbb{F}}_{t}\geq S_{t}^{\mathbb{F}}$ implies the second condition in (\ref{RBSDEGinfinite}). To prove the Skorokhod condition (the last condition in (\ref{RBSDEGinfinite})), we combine the Skorokhod condition for the triplet $({Y}^{\mathbb F}, {Z}^{\mathbb F}, {K}^{\mathbb F})$, the fact that $\widehat{Y}_{-}\geq S_{-}$ on $\Rbrack0,\tau]\!]$, and \begin{eqnarray*} 0\leq \int_{0}^{\tau}({\widehat{Y}}_{t-}-S_{t-})d\widehat{K}_t=\int_{0}^{\tau}({Y}^{\mathbb F}_{t-}-S^{\mathbb F}_{t-}){\widetilde{\cal E}}_{t-}^{-2}d{K}^{\mathbb F}_t\leq \int_{0}^{\infty}({Y}^{\mathbb F}_{t-}-S^{\mathbb F}_{t-}){\widetilde{\cal E}}_{t-}^{-2}dK^{\mathbb F}_t=0,\quad P\mbox{-a.s..} \end{eqnarray*} This ends the second part, and the proof of the theorem is complete. \end{proof} \begin{remark} It is clear that, in general, the existence of an $L^p(P)$-solution to (\ref{RBSDEFinfinite}) requires stronger assumptions than the existence of an $L^p(P)$-solution to (\ref{RBSDEGinfinite}). \end{remark} We end this section by elaborating the BSDE version of the above results.
\begin{theorem}\label{LinearBSDEcase} Suppose $G>0$ and consider a pair $(f, h)$ of $\mathbb F$-optional processes satisfying \begin{eqnarray}\label{MainAssumption4InfiniteHorizonBSDE} E\left[\left(F_{\infty}{\widetilde{\cal E}_{\infty}}\right)^p+\int_0^{\infty}\left(\vert{h}_t\vert^p+(F_t)^p\right)dV^{\mathbb F}_t\right]<+\infty,\quad\mbox{where}\quad F_t:=\int_0^t\vert f(s)\vert ds. \end{eqnarray} If $f^{\mathbb{F}}$ and ${\widetilde{\cal E}}$ denote the processes defined in (\ref{ProcessVFandXiF}), then the following assertions hold.\\ {\rm{(a)}} The following BSDE under $\mathbb F$ \begin{eqnarray}\label{BSDEFinfinite} Y_{t}= \displaystyle\int_{t}^{\infty}f^{\mathbb{F}}(s)ds+\int_{t}^{\infty}h_{s}dV^{\mathbb{F}}_{s}-\int_{t}^{\infty}Z_{s}dW_{s},\end{eqnarray} has a unique $L^p(P,\mathbb{F})$-solution $(Y^{\mathbb F}, Z^{\mathbb F})$.\\ {\rm{(b)}} The following BSDE \begin{eqnarray}\label{BSDEGinfinite} dY=-f(t)d(t\wedge\tau)-dM+ZdW^{\tau},\quad Y_{\tau}=\xi=h_{\tau}, \end{eqnarray} has a unique solution, denoted by $\left(Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}}\right)$, satisfying \begin{eqnarray}\label{firstrelationInfnite} Y^{\mathbb{G}}= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi{I}_{[\![\tau,+\infty[\![},\ Z^{\mathbb{G}}=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}_{-}} I_{\Rbrack0,\tau]\!]},\ \mbox{and}\ M^{\mathbb{G}}=\left(h-\frac{Y^{\mathbb{F}}}{\widetilde{\cal E}}\right)\bigcdot N^{\mathbb{G}}.\label{secondrelationInfinite} \end{eqnarray} {\rm{(c)}} Let $(f^{(i)},h^{(i)})$, $i=1,2,$ be two pairs satisfying (\ref{MainAssumption4InfiniteHorizonBSDE}), and let $p\in (1,+\infty)$ be arbitrary. Then there exists a positive constant $C$ that depends on $p$ only such that \begin{align}\label{Estimate4P1TinifiniteBSDE} &E\left[\sup_{0\leq t\leq \tau}\widetilde{\cal E}_t\vert\delta Y^{{\mathbb{G}}}_{t}\vert^p+\left(\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}\vert \delta Z^{{\mathbb{G}}}_{s}\vert^{2}ds+\int_{0}^{\tau}(\widetilde{\cal E}_{s-})^{2/p}d[ \delta M^{\mathbb{G}},\delta M^{\mathbb{G}}]_s\right)^{p/2}\right]\nonumber\\ &\leq C E\left[\displaystyle\int_0^{\infty}\left(\vert\delta h_t\vert^p+\vert{\delta F_t}\vert^p\right)dV^{\mathbb F}_t\right], \end{align} where $(Y^{\mathbb{G},i},Z^{\mathbb{G},i},M^{\mathbb{G},i})$ is the solution to (\ref{BSDEGinfinite}) associated to $(f^{(i)},h^{(i)})$, for $i=1,2$, and \begin{eqnarray*} &&\delta{Y}^{\mathbb{G}}:=Y^{\mathbb{G},1}-Y^{\mathbb{G},2},\quad\delta{Z}^{\mathbb{G}}:=Z^{\mathbb{G},1}-Z^{\mathbb{G},2},\quad \delta{M}^{\mathbb{G}}:=M^{\mathbb{G},1}-M^{\mathbb{G},2},\label{deltaProcesses}\\ &&\delta{h}:=h^{(1)}-h^{(2)} ,\quad\delta{F}:=\int_0^{\cdot}\vert{f}^{(1)}(s)-f^{(2)}(s)\vert{ds}.\end{eqnarray*} \end{theorem} \begin{proof} Remark that a BSDE is an RBSDE with $S\equiv -\infty$, for which the nondecreasing component of the solution is null, i.e. $K\equiv 0$. Thus, keeping this in mind, we conclude the following. \begin{enumerate} \item The condition (\ref{MainAssumption4InfiniteHorizon}) takes the form of (\ref{MainAssumption4InfiniteHorizonBSDE}). \item Assertion (a) is a particular case of assertion (a) of Theorem \ref{Relationship4InfiniteBSDE}, while assertion (b) is a direct consequence of combining assertions (a) and (b) of Theorem \ref{Relationship4InfiniteBSDE}. \item Assertion (c) follows from Theorem \ref{EstimateInfinite}. \end{enumerate} This ends the proof of the theorem.
\end{proof} \section{Stopped general RBSDEs: The case of bounded horizon} In this section, we address the RBSDE with general generator satisfying the following assumption \begin{equation}\label{LipschitzAssumption} \exists\ C_{Lip}> 0,\quad \vert f(t,y,z)-f(t,y^{'},z^{'})\vert\leq C_{Lip}(\vert y-y^{'}\vert+\Vert z-z ^{'}\Vert),\quad \mbox{for all}\ y,y^{'}\in\mathbb{R},\ z,z^{'}\in\mathbb{R}^{d}. \end{equation} More precisely, we are interested in the following RBSDE, \begin{eqnarray}\label{nonLinear} \begin{cases} dY_{t}=-f(t,Y_{t},Z_{t})d(t\wedge\tau)-d(K_{t\wedge\tau}+M_{t\wedge\tau})+Z_{t}dW_{t}^{\tau},\quad Y_{\tau}=\xi=Y_{T},\\ \\ Y_{t}\geq S_{t},\quad 0\leq t< T\wedge\tau,\quad\rm{and}\quad E\left[\displaystyle\int_{0}^{T\wedge\tau}(Y_{t-}-S_{t-})dK_t\right]=0, \end{cases} \end{eqnarray} where $(\xi, S, f)$ is such that $S$ is an $\mathbb F$-adapted and RCLL process, $f(t,y,z)$ is a $\mbox{Prog}(\mathbb F)\times {\cal B}(\mathbb R)\times {\cal B}(\mathbb R^{d})$-measurable functional, and $\xi\in L^2({\cal G}_{T\wedge\tau})$. \subsection{Estimate inequalities for the solution} This subsection derives a number of norm estimates for the solution of the RBSDE whenever it exists. On the one hand, these inequalities play an important role in the proof of existence and uniqueness of the solution to this RBSDE; on the other hand, they are central to the study of the stability of RBSDEs. \begin{lemma}\label{Estimation4DeltaY} Let $T\in (0,+\infty)$. Then the following assertions hold.\\ {\rm{(a)}} If ($Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}}, M^{\mathbb{G}}$) is a solution to the RBSDE (\ref{nonLinear}) that corresponds to $(f, S, \xi)$, then \begin{eqnarray}\label{YGessSup} Y^{\mathbb G}_t=\rm{ess}\sup_{\theta\in {\cal T}_{t\wedge\tau}^{T\wedge\tau}(\mathbb G)} E^{\widetilde Q}\left[\int_{t\wedge\tau}^{\theta} f(s, Y^{\mathbb G}_s, Z^{\mathbb G}_s)ds +S_{\theta}I_{\{\theta<T\wedge\tau\}}+\xi I _{\{\theta=T\wedge\tau\}}\ \Bigg|\ {\cal G}_t\right].\end{eqnarray} {\rm{(b)}} If ($Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i}, M^{\mathbb{G},i}$) is a solution to the RBSDE (\ref{nonLinear}) that corresponds to $(f^{(i)}, S^{(i)}, \xi^{(i)})$, $i=1,2$, then for any $\alpha>0$ the following holds \begin{eqnarray}\label{inequa4deltaYG2bis} \exp\left({{\alpha(t\wedge\tau)}\over{2}}\right)\vert \delta Y^{\mathbb G}_t\vert&&\leq E^{\widetilde Q}\left[\sup_{0<s\leq T\wedge\tau}e^{\alpha s/2}\vert \delta S_s\vert +e^{\alpha(T\wedge\tau)/2}\vert \delta\xi\vert+{{C_{Lip}}\over{\sqrt{\alpha}}} \sqrt{\int_0^{T\wedge\tau} e^{\alpha s}(\delta Z^{\mathbb G}_s)^2 ds}\ \Bigg|\ {\cal G}_t\right]\nonumber\\ &&+{1\over{\sqrt{\alpha}}}E^{\widetilde Q}\left[ C_{Lip}\sqrt{\int_0^{T\wedge\tau} e^{\alpha s}\vert\delta Y^{\mathbb G}_s\vert^2 ds} +\sqrt{\int_0^{T\wedge\tau} e^{\alpha s}\vert\delta f_s\vert^2 ds} \ \bigg|\ {\cal G}_t\right].
\end{eqnarray} {\rm{(c)}} If ($Y^{\mathbb{G}},Z^{\mathbb{G}},K^{\mathbb{G}}, M^{\mathbb{G}}$) is a solution to the RBSDE (\ref{nonLinear}) that corresponds to $(f, S, \xi)$, then for any $\alpha>0$, any $\mathbb F$-stopping time $\sigma\in {\cal T}_0^T(\mathbb F)$ and any $t\in [0,T]$, the following holds \begin{align}\label{inequa4deltaYG2} &\exp\bigl({{{\alpha}\over{2}}(t\wedge\tau)}\bigr)\vert{Y}^{\mathbb G}_t\vert{I}_{\{\sigma\leq\tau\}}I_{\{\sigma\leq {t}\}}\nonumber\\ &\leq{E}^{\widetilde Q}\left[\sup_{\sigma\wedge\tau\leq{u}\leq T\wedge\tau}e^{\alpha u/2} S_u^+ +e^{\alpha(T\wedge\tau)/2}\vert\xi\vert{I}_{\{\sigma\leq\tau\}} +{1\over{\sqrt{\alpha}}}\sqrt{\int_{\sigma\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert{f}(s,0,0)\vert^2 ds} \ \Bigg|\ {\cal G}_t\right]\nonumber\\ &+{{C_{Lip}}\over{\sqrt{\alpha}}}E^{\widetilde Q}\left[\sqrt{\int_{\sigma\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert{ Y}^{\mathbb G}_s\vert^2 ds}+ \sqrt{\int_{\sigma\wedge\tau}^{T\wedge\tau} e^{\alpha s}( Z^{\mathbb G}_s)^2 ds} \ \bigg|\ {\cal G}_t\right]. \end{align} \end{lemma} \begin{proof} The proof of assertion (a) follows along the same lines as the proof of (\ref{RBSDE2Snell}) in Theorem \ref{abcde}. Thus, the rest of this proof focuses on proving assertions (b) and (c) in two parts.\\ {\bf Part 1.} This part proves assertion (b). To this end, we start by proving the following \begin{eqnarray}\label{inequa4deltaYG1} \vert \delta Y^{\mathbb G}_t\vert\leq E^{\widetilde Q}\left[\int_{t\wedge\tau}^{T\wedge\tau}\vert \Delta f_s\vert ds+\sup_{t\wedge\tau<s\leq T\wedge\tau}\vert \delta S_s\vert +\vert \delta\xi\vert\ \bigg|\ {\cal G}_t\right],\ \Delta f_t:=f_1(t, Y^{\mathbb G,1}_t, Z^{\mathbb G,1}_t)-f_2(t, Y^{\mathbb G,2}_t, Z^{\mathbb G,2}_t). \end{eqnarray} Let $t\in [0,T]$ be arbitrary but fixed. Then, by applying assertion (a) to each $Y^{\mathbb G, i}$, $i=1,2$, we deduce the existence of $\nu_i\in {\cal T}_{t\wedge\tau}^{T\wedge\tau}(\mathbb {G})$, $i=1,2$, at which the essential supremum in (\ref{YGessSup}) for $Y^{\mathbb G, i}$ is attained, and in virtue of (\ref{YGessSup}) we have \begin{eqnarray*} Y^{\mathbb G, 1}_t\geq E^{\widetilde Q}\left[\int_{t\wedge\tau}^{\nu_2\wedge\tau} f_1(s, Y^{\mathbb G,1}_s, Z^{\mathbb G,1}_s)ds +S_{\nu_2}^1I_{\{\nu_2<T\wedge\tau\}}+\xi^1 I_{\{\nu_2=T\wedge\tau\}}\ \Bigg|{\cal G}_t\ \right],\\ Y^{\mathbb G, 2}_t\geq E^{\widetilde Q}\left[\int_{t\wedge\tau}^{\nu_1\wedge\tau} f_2(s, Y^{\mathbb G,2}_s, Z^{\mathbb G,2}_s)ds +S_{\nu_1}^2I_{\{\nu_1<T\wedge\tau\}}+\xi^2 I_{\{\nu_1=T\wedge\tau\}}\ \Bigg|{\cal G}_t\ \right].\end{eqnarray*} By combining all these remarks, we derive \begin{eqnarray*} &&E^{\widetilde Q}\left[\int_{t\wedge\tau}^{\nu_2\wedge\tau} \Delta f_s ds +\delta S_{\nu_2}I_{\{\nu_2<T\wedge\tau\}}+\delta \xi I_{\{\nu_2=T\wedge\tau\}}\ \Bigg|{\cal G}_t\right]\leq\delta Y^{\mathbb G}_t\\ \rm{and}&&\delta Y^{\mathbb G}_t\leq E^{\widetilde Q}\left[\int_{t\wedge\tau}^{\nu_1\wedge\tau} \Delta f_s ds +\delta S_{\nu_1}I_{\{\nu_1<T\wedge\tau\}}+\delta\xi I_{\{\nu_1=T\wedge\tau\}}\ \Bigg|{\cal G}_t\right].\end{eqnarray*} Therefore, on the one hand, (\ref{inequa4deltaYG1}) follows immediately from these two inequalities.
On the other hand, due to H\"older's inequality, for any nonnegative and progressively measurable process $\phi$, we have \begin{eqnarray}\label{InequalitySchwartz} \int_{t\wedge\tau}^{T\wedge\tau} \phi_s ds\leq \left({{{p_1}\over{\alpha' q_1}}}\right)^{{1\over{q_1}}}\exp\left(-{{\alpha'(t\wedge\tau)}\over{p_1}}\right)\left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha' s}\phi^{p_1}_s ds\right)^{{1\over{p_1}}},\ \alpha'>0,\ p_1>1,\ q_1:={{p_1}\over{p_1-1}}, \end{eqnarray} which follows from writing $\phi_s=e^{-\alpha's/p_1}e^{\alpha's/p_1}\phi_s$ and applying H\"older's inequality with exponents $(q_1,p_1)$. By using the fact that $\vert\Delta f_s\vert \leq \vert \delta f_s\vert +C_{Lip}\vert\delta Y^{\mathbb G}_s\vert +C_{Lip}\vert\delta Z^{\mathbb G}_s\vert $, and applying the above inequality repeatedly, we derive \begin{eqnarray*} \int_{t\wedge\tau}^{T\wedge\tau}\vert \Delta f_s\vert ds&&\leq {1\over{\sqrt{\alpha}}}\exp\left(-{{\alpha(t\wedge\tau)}\over{2}}\right)\left\{\left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert\delta f_s\vert^{2} ds\right)^{{1\over{2}}} +C_{Lip}\left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert\delta Y^{\mathbb G}_s\vert^2 ds\right)^{{1\over{2}}}\right\}\\ &&\hskip 1cm +{{C_{Lip}}\over{\sqrt{\alpha}}}\exp\left(-{{\alpha(t\wedge\tau)}\over{2}}\right)\left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}(\delta Z^{\mathbb G}_s)^2 ds\right)^{{1\over{2}}}. \end{eqnarray*} Thus, by combining this inequality with (\ref{inequa4deltaYG1}), we derive \begin{eqnarray*} \exp\left({{\alpha(t\wedge\tau)}\over{ 2}}\right)\vert \delta Y^{\mathbb G}_t\vert&&\leq E^{\widetilde Q}\left[\sup_{t\wedge\tau<s\leq T\wedge\tau}e^{\alpha s/2}\vert \delta S_s\vert +e^{\alpha(T\wedge\tau)/2}\vert \delta\xi\vert+{{C_{Lip}}\over{\sqrt{\alpha}}} \sqrt{\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}(\delta Z^{\mathbb G}_s)^2 ds}\ \Bigg|\ {\cal G}_t\right]\\ &&+{1\over{\sqrt{\alpha}}}E^{\widetilde Q}\left[ C_{Lip}\left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert\delta Y^{\mathbb G}_s\vert^2 ds\right)^{{1\over{2}}} +\left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert\delta f_s\vert^2 ds\right)^{{1\over{2}}} \ \bigg|\ {\cal G}_t\right]. \end{eqnarray*} Here $C_{Lip}$ is the Lipschitz constant of the driver $f$, defined in (\ref{LipschitzAssumption}). Thus, (\ref{inequa4deltaYG2bis}) follows immediately from the inequality above, and hence Part 1 is complete.\\ {\bf Part 2.} Here we prove assertion (c). Thus, we consider $\alpha>0$ and an $\mathbb F$-stopping time $\sigma$. Similarly to Part 1, for any $t\in [0,T]$, thanks to (\ref{YGessSup}) we have \begin{align*} &Y^{\mathbb G}_t\geq-{E}^{\widetilde Q}\left[\int_{t\wedge\tau}^{T\wedge\tau} \left(f(s, Y^{\mathbb G}_s, Z^{\mathbb G}_s)\right)^-{d}s +\xi^- \Bigg|\ {\cal G}_t\right]\\ &Y^{\mathbb G}_t\leq{E}^{\widetilde Q}\left[\int_{t\wedge\tau}^{T\wedge\tau} \left(f(s, Y^{\mathbb G}_s, Z^{\mathbb G}_s)\right)^+ds +\sup_{t\wedge\tau\leq\theta\leq{T}\wedge\tau}S_{\theta}^+ +\xi^+ \ \Bigg|\ {\cal G}_t\right].
\end{align*} Thus, by combining these inequalities with $\vert{x}\vert=x^++x^-$, we obtain \begin{align}\label{MainInequality4Lemmac} \vert{Y}^{\mathbb G}_t\vert\leq {E}^{\widetilde Q}\left[\int_{t\wedge\tau}^{T\wedge\tau} \vert{f}(s, Y^{\mathbb G}_s, Z^{\mathbb G}_s)\vert{d}s +\sup_{t\wedge\tau\leq\theta\leq{T}\wedge\tau}S_{\theta}^{+} +\vert\xi\vert\quad \Bigg|\ {\cal G}_t\right]. \end{align} Then the Lipschitz assumption on $f$ in (\ref{LipschitzAssumption}) implies that \begin{align*} \int_{\sigma\wedge\tau}^{T\wedge\tau} \vert{f}(s, Y^{\mathbb G}_s, Z^{\mathbb G}_s)\vert{d}s\leq \int_{\sigma\wedge\tau}^{T\wedge\tau} \vert{f}(s, 0, 0)\vert{d}s+ C_{Lip}\int_{\sigma\wedge\tau}^{T\wedge\tau} \vert{Y}^{\mathbb G}_s\vert{d}s+ C_{Lip}\int_{\sigma\wedge\tau}^{T\wedge\tau} \vert{Z}^{\mathbb G}_s\vert{d}s. \end{align*} Hence, by applying (\ref{InequalitySchwartz}) to each term on the right-hand side above for $p_1=q_1=2$, and inserting the resulting inequality in (\ref{MainInequality4Lemmac}) afterwards, we obtain for any $t\in [0,T]$ \begin{align*} \vert{Y}^{\mathbb G}_t\vert&\leq {{e^{-\alpha(t\wedge\tau)/2}}\over{\sqrt{\alpha}}}{E}^{\widetilde Q}\left[ \left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}(f(s,0,0))^2 ds\right)^{{1\over{2}}} +C_{Lip} \left(\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}\vert Y^{\mathbb G}_s\vert^2 ds\right)^{{1\over{2}}}\Bigg|\ {\cal G}_t\right]\\ &+e^{-\alpha(t\wedge\tau)/2}{E}^{\widetilde Q}\left[{{C_{Lip}}\over{\sqrt{\alpha}}}\sqrt{\int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha s}( Z^{\mathbb G}_s)^2 ds} +\sup_{t\wedge\tau\leq\theta\leq{T}\wedge\tau}e^{\alpha\theta/2}S_{\theta}^{+} +e^{\alpha(T\wedge\tau)/2}\vert\xi\vert\quad \Bigg|\ {\cal G}_t\right]. \end{align*} Therefore, the inequality (\ref{inequa4deltaYG2}) follows immediately from combining the above inequality with the fact that $(\sigma\leq{t})\cap(\sigma\leq\tau)\in {\cal G}_t$. This proves assertion (c) and ends the proof of the lemma. \end{proof} Throughout the rest of the paper, for any $p\in (1,+\infty)$, $\alpha_0(p)$ and $\alpha_1(p)$ are given by \begin{eqnarray}\label{alphaZero} \begin{cases} \alpha_0(p):=\max\left(2C_{Lip}+2C_{Lip}^{2}+1,81\left\{1+{{9\sqrt{2}\kappa(1+C_{DB})}\over{3-\sqrt{8}}}\right\}^2 C_{DB}^2C_{Lip}^2\right)\\ \alpha_1(p):=\max\left( {{8}\over{9}}+4C_{Lip}+4C_{Lip}^2,{{81}\over{4}}C_{DB}^2C_{Lip}^2\left(1+{{9\sqrt{2}\kappa(1+C_{DB})}\over{3-\sqrt{8}}}\right)^2\right). \label{alphaOne} \end{cases} \end{eqnarray} Here $C_{DB}$ is Doob's constant, which depends on $p\in (1,+\infty)$ only, $\kappa$ is the positive constant depending on $p\in (1,+\infty)$ only that is given by Lemma \ref{Lemma4.8FromChoulliThesis}, and $C_{Lip}$ is the Lipschitz constant in (\ref{LipschitzAssumption}).
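For orientation only, the two thresholds in (\ref{alphaZero}) are fully explicit and can be evaluated numerically once $C_{Lip}$, $C_{DB}=C_{DB}(p)$ and $\kappa=\kappa(p)$ are at hand. The following sketch merely implements the two formulas above; the input values in the example are illustrative, since $C_{DB}$ and $\kappa$ are not specified numerically here.
\begin{verbatim}
import math

def alpha_thresholds(C_lip, C_db, kappa):
    """Evaluate alpha_0(p) and alpha_1(p) exactly as defined in (alphaZero).
    C_db (Doob's constant) and kappa depend on p and must be supplied."""
    bracket = 1.0 + 9.0 * math.sqrt(2.0) * kappa * (1.0 + C_db) / (3.0 - math.sqrt(8.0))
    alpha_0 = max(2.0 * C_lip + 2.0 * C_lip**2 + 1.0,
                  81.0 * bracket**2 * C_db**2 * C_lip**2)
    alpha_1 = max(8.0 / 9.0 + 4.0 * C_lip + 4.0 * C_lip**2,
                  (81.0 / 4.0) * C_db**2 * C_lip**2 * bracket**2)
    return alpha_0, alpha_1

# Illustrative values only (C_db and kappa are p-dependent constants):
print(alpha_thresholds(C_lip=1.0, C_db=2.0, kappa=1.0))
\end{verbatim}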
\begin{theorem}\label{WhyNot} For $p>1$ and $\alpha>\alpha_0(p)$, there exist $\alpha'\in(0,\alpha/2)$ and $\widehat{C}>0$, depending on $(\alpha,p)$ only, such that for any $\mathbb F$-stopping time $\sigma\in {\cal T}_0^T(\mathbb F)$ and any solution to (\ref{nonLinear}), denoted by ($Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}},K^{\mathbb{G}}$), we have \begin{align*} &\Vert {e^{\alpha\cdot/2}} Y^{\mathbb G}I_{\{\tau\geq\sigma\}}I_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{\tau\wedge{T}}(\widetilde{Q},p)}+\Vert e^{ {\alpha}\cdot/2}Z^{\mathbb{G}}I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{\tau\wedge{T}}(\widetilde{Q}, p)}\nonumber\\ &+\Vert e^{ {\alpha}\cdot/2}Y^{\mathbb{G}}I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{\tau\wedge{T}}(\widetilde{Q}, p)}+\Vert (e^{ {\alpha'}(\tau\wedge\cdot)}I_{]\!]\sigma,+\infty[\![}\bigcdot {K^{\mathbb{G}}})_{T\wedge\tau}\Vert_{L^p(\widetilde{Q})}+\Vert{e^{\alpha(\tau\wedge\cdot)/2}}I_{]\!]\sigma,+\infty[\![} \bigcdot {M^{\mathbb{G}}}\Vert_{{\cal{M}}^{p}_T(\widetilde{Q})}\\ &\leq \widehat{C}\left\{ \Vert{e^{\alpha(T\wedge\tau)/2}}\xi{I}_{\{\tau\geq\sigma\}}\Vert_{L^p(\widetilde{Q})}+\Vert{e^{(\alpha-\alpha')\cdot}}S^{+}I_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{\tau\wedge{T}}(\widetilde{Q}, p)}+\Vert e^{ {\alpha}\cdot/2}f(\cdot,0,0)I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{\tau\wedge{T}}(\widetilde{Q}, p)}\right\}.\end{align*} \end{theorem} \begin{proof} Let $\sigma\in {\cal T}_0^T(\mathbb F)$ be an $\mathbb F$-stopping time. Remark that, in virtue of (\ref{inequa4deltaYG2}) and Doob's inequality under $(\widetilde{Q}, \mathbb G)$, on the one hand, we have \begin{align} &\Vert {e^{\alpha\cdot/2}}Y^{\mathbb G}I_{\{\tau\geq\sigma\}}I_{[\![\sigma,+\infty[\![} \Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &\leq C_{DB}\left\{ \Vert {e^{\alpha\cdot/2}}S^+I_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha(T\wedge\tau)/2}}\xi{I}_{\{\tau\geq\sigma\}}\Vert_{L^p(\widetilde{Q})}+{1\over{\sqrt{\alpha}}} \Vert{e^{\alpha\cdot/2}} f(\cdot,0,0)I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\}\nonumber\\ &+{{C_{DB}C_{Lip}}\over{\sqrt{\alpha}}}\left\{ \Vert{e^{\alpha\cdot/2}} Z^{\mathbb{G}}I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha\cdot/2}}Y^{\mathbb{G}}I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\}.\label{equa445} \end{align} On the other hand, by combining It\^o applied to $e^{\alpha{t}}(Y^{\mathbb G}_t)^2$, $e^{\alpha(\sigma\wedge\tau)}(Y^{\mathbb G}_{\sigma\wedge\tau})^2\geq0$, (\ref{nonLinear}), and Young's inequality (i.e.
$2xy\leq \epsilon{x^2}+y^2/\epsilon$ for any $\epsilon>0$), we derive \begin{align} &\overbrace{(\alpha-2C_{Lip}-2C_{Lip}^{2}-{\epsilon}^{-1})}^{C} \int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}(Y_{s}^{\mathbb{G}})^{2}ds+\frac{1}{2} \int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}(Z_{s}^{\mathbb{G}})^{2}ds+ \int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}d[M^{\mathbb{G}}, M^{\mathbb{G}}]_{s}\nonumber\\ &\leq e^{\alpha (T\wedge\tau)}\xi^{2}I_{\{\sigma\leq\tau\}}+\epsilon\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}\vert{ f(s,0,0)}\vert^{2}ds+2\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}Y_{s-}^{\mathbb{G}}dK_{s}^{\mathbb{G}}+(I_{]\!]\sigma,+\infty[\![}\bigcdot {L}^{\mathbb G,1})_{T\wedge\tau}\nonumber\\ &\leq{e^{\alpha (T\wedge\tau)}}\xi^{2}I_{\{\sigma\leq\tau\}}+\epsilon\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}\vert{ f(s,0,0)}\vert^{2}ds+2\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{\alpha s}{S_{s-}^+}dK_{s}^{\mathbb{G}}+\sup_{0\leq{t\leq{T}\wedge\tau}}\vert(I_{]\!]\sigma,+\infty[\![}\bigcdot {L}^{\mathbb G,1})_t\vert, \label{equa446}\end{align} where the last inequality is due to Skorokhod's condition and $L^{\mathbb G,1}\in {\cal M}_{loc}(\mathbb G)$ is given by \begin{eqnarray}\label{LG1} L^{\mathbb G,1}:=2e^{\alpha (\tau\wedge\cdot)}(Y_{-}^{\mathbb{G}}-\Delta{K}^{\mathbb{G}})\bigcdot {M^{\mathbb{G}}}+2e^{\alpha (\tau\wedge\cdot)}(Y_{-}^{\mathbb{G}})Z^{\mathbb{G}}\bigcdot W^{\tau}.\end{eqnarray} Thus, by applying Lemma \ref{Lemma4.8FromChoulliThesis} to $I_{]\!]\sigma,+\infty[\![}\bigcdot {L}^{\mathbb G,1}$ with $a=b=p$, and then using Doob's inequality for the martingale $E^{\widetilde{Q}}[\sup_{0\leq{s}\leq{T}\wedge\tau}\vert{Y}^{\mathbb G}_s\vert{I}_{\{\sigma\leq{s}\wedge\tau\}}\ \big|{\cal{G}}_t]$, we derive \begin{align}\label{Control4LG1} &\Vert\sqrt{\vert{I}_{]\!]\sigma,+\infty[\![}\bigcdot {L}^{\mathbb G,1}\vert}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &\leq 2\sqrt{\left\{\kappa(1+C_{DB})\Vert {e}^{\alpha(\cdot\wedge\tau)/2}I_{]\!]\sigma,+\infty[\![}\bigcdot {M}^{\mathbb{G}}\Vert_{{\cal{M}}^{p}_T(\widetilde{Q})}+\Vert{e}^{\alpha\cdot/2}Z^{\mathbb{G}}I_{]\!]\sigma,+\infty[\![}\Vert _{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\} \Vert {e^{\alpha\cdot/2}}Y^{\mathbb G}{I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)} } \nonumber\\ &\leq \epsilon_1\left\{\Vert{e^{\alpha(\tau\wedge\cdot)/2}}{I}_{]\!]\sigma,+\infty[\![} \bigcdot {M^{\mathbb{G}}}\Vert_{{\cal{M}}^{p}_T(\widetilde{Q})} +\Vert{e^{\alpha\cdot/2 }}Z^{\mathbb{G}}I_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)} \right\}+{{\kappa(1+C_{DB})}\over{\epsilon_1}}\Vert {e^{\alpha\cdot/2}}Y^{\mathbb G}{I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}.
\end{align} Therefore, by combining (\ref{equa445}), (\ref{equa446}), and (\ref{Control4LG1}), and the fact that $\Vert\sqrt{\sum_{i=1}^n X_i}\Vert_{L^p(\widetilde{Q})}\geq n^{-1}\sum_{i=1}^n \Vert \sqrt{X_i}\Vert_{L^p(\widetilde{Q})}$ for any nonnegative random variables $(X_i)_{i=1,...,n}$, we derive the following inequality.\\ \begin{align}\label{Major1} &\Vert {e^{\alpha\cdot/2}}Y^{\mathbb G}{I}_{[\![\sigma,+\infty[\![} \Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+C_1\Vert{e^{\alpha\cdot/2}} Y^{\mathbb{G}}{I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &+C_2\Vert{e^{\alpha\cdot/2}}Z^{\mathbb{G}}{I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+C_3\Vert{e^{\alpha\cdot/2}} {I}_{]\!]\sigma,+\infty[\![}\bigcdot {M^{\mathbb{G}}}\Vert_{{\cal M}^{p}_{T}(\widetilde{Q})}\nonumber\\ & \leq C_4 \Vert{e^{\alpha\cdot/2}} f(\cdot,0,0){I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+C_5 \Vert {e^{\alpha(T\wedge\tau)/2}}\xi{I}_{\{\sigma\leq\tau\}}\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+C_6\Vert {e^{\alpha\cdot/2}} S^+{I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &+C_7\Vert {e^{(\alpha-\alpha')\cdot}} S^+{I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}^{1/2}\Vert{e}^{\alpha'(\tau\wedge\cdot)}{I}_{]\!]\sigma,+\infty[\![}\bigcdot {K}^{\mathbb G}_T \Vert_{\mathbb{L}^{p}(\widetilde{Q})}^{1/2},\quad \mbox{for}\quad \alpha'<\alpha/2, \end{align} where $C_i$, $i=1,...,7$ are given by \begin{align}\label{Constants4estimates} \begin{cases} C_1:={\sqrt{C}\over{3}}-\left(1+{{\kappa(1+C_{DB})}\over{\epsilon_1}} \right){{ C_{DB}C_{Lip}}\over{\sqrt{\alpha}}},\quad{C}_2:={1\over{3\sqrt 2}}-\epsilon_1-\left(1+{{\kappa(1+C_{DB})}\over{\epsilon_1}} \right){{ C_{DB}C_{Lip}}\over{\sqrt{\alpha}}},\\ \\ C_3:={1\over {3}}-\epsilon_1,\quad C_4:=\sqrt{\epsilon}+{{ C_{DB}}\over{\sqrt{\alpha}}}\left(1+{{\kappa(1+C_{DB})}\over{\epsilon_1}} \right),\\ \\ C_5:=1+C_{DB}\left(1+{{\kappa(1+C_{DB})}\over{\epsilon_1}} \right),\quad C_6:=C_{DB}\left(1+{{\kappa(1+C_{DB})}\over{\epsilon_1}} \right),\quad C_7:=\sqrt{2}.\end{cases} \end{align} Thus, the next step consists of controlling the norm of $K^{\mathbb{G}}$. To this end, we use the RBSDE (\ref{nonLinear}) and It\^o's formula, and derive for any $\alpha'>0$ and $t>\sigma$ \begin{align*} &\int_{t\wedge\tau}^{T\wedge\tau}e^{\alpha' {s}}dK_{s}^{\mathbb{G}}=- \int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha' {s}}dY^{\mathbb{G}}_{s} -\int_{t\wedge\tau}^{T\wedge\tau}e^{\alpha' {s}}f(s,Y^{\mathbb{G}}_{s},Z^{\mathbb{G}}_{s})ds- \int_{t\wedge\tau}^{T\wedge\tau}e^{\alpha' {s}}dM^{\mathbb{G}}_{s}+ \int_{t\wedge\tau}^{T\wedge\tau}e^{\alpha' {s}}Z^{\mathbb{G}}_{s}dW_{s}^{\tau}\\ &\overset{Ito}{=}- e^{\alpha' {T\wedge\tau}}Y^{\mathbb{G}}_{T\wedge\tau}+ e^{\alpha'{t\wedge\tau}}Y^{\mathbb{G}}_{t\wedge\tau}+ \int_{t\wedge\tau}^{T\wedge\tau} e^{\alpha' {s}}({\alpha'}Y^{\mathbb{G}}_{s}-f(s,Y^{\mathbb{G}}_{s},Z^{\mathbb{G}}_{s}))ds- \int_{t\wedge\tau}^{T\wedge\tau}e^{\alpha' {s}}dM^{\mathbb{G}}_{s}+ \int_{t\wedge\tau}^{T\wedge\tau}e^{\alpha' {s}}Z^{\mathbb{G}}_{s}dW_{s}^{\tau}.
\end{align*} Therefore, by using this latter equality together with (\ref{LipschitzAssumption}), we derive \begin{align*} & E^{\widetilde{Q}} \left[\int_{t\wedge\tau}^{T\wedge\tau}e^{ {\alpha'}s}dK_{s}^{\mathbb{G}}\ \big|{\cal G}_{t\wedge\tau}\right]\leq E^{\widetilde{Q}}\left[2\sup_{t\wedge\tau<s\leq T\wedge\tau}e^{ {\alpha'}s}\vert {Y^{\mathbb{G}}_s}\vert + \int_{t\wedge\tau}^{T\wedge\tau}e^{ {\alpha'}s}\vert {\alpha'}Y^{\mathbb{G}}_{s}+ f(s,Y^{\mathbb{G}}_{s},Z^{\mathbb{G}}_{s})\vert ds\ \big|{\cal G}_{t\wedge\tau}\right],\\ &\leq E^{\widetilde{Q}}\left[2\sup_{t\wedge\tau<s\leq T\wedge\tau}e^{ {\alpha'}s}\vert {Y^{\mathbb{G}}_s}\vert + \int_{t\wedge\tau}^{T\wedge\tau}e^{ {\alpha'}s}( {\alpha'}+C_{Lip})\vert{Y^{\mathbb{G}}_s}\vert ds+ \int_{t\wedge\tau}^{T\wedge\tau}e^{ {\alpha'}s}\vert {f(s,0,0)}\vert ds\ \big|{\cal G}_{t\wedge\tau}\right]\\ &+ C_{Lip}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{T\wedge\tau}e^{ {\alpha'}s}\vert {Z^{\mathbb{G}}_{s}}\vert ds\ \big|{\cal G}_{t\wedge\tau}\right]. \end{align*} Then by applying (\ref{InequalitySchwartz}) to each term above, and choosing $\alpha'<\alpha/2$, we get for $t>\sigma$ \begin{align*} & E^{\widetilde{Q}} \left[\int_{t\wedge\tau}^{T\wedge\tau}e^{ {\alpha'}s}dK_{s}^{\mathbb{G}}\ \big|{\cal G}_{t\wedge\tau}\right]\\ &\leq E^{\widetilde{Q}}\left[2\sup_{\sigma\wedge\tau\leq{s}\leq T\wedge\tau}e^{ {\alpha'}s}\vert {Y^{\mathbb{G}}_s}\vert +{{\alpha'+C_{Lip}}\over{\sqrt{\alpha-2\alpha'}}} \sqrt{\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{ {\alpha}s}(Y^{\mathbb{G}}_s)^2 ds}+ {1\over{\sqrt{\alpha-2\alpha'}}} \sqrt{\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{ {\alpha}s}(f(s,0,0))^2 ds}\ \big|{\cal G}_{t\wedge\tau}\right]\\ &+ {{C_{Lip}}\over{\sqrt{\alpha-2\alpha'}}} E^{\widetilde{Q}}\left[\sqrt{\int_{\sigma\wedge\tau}^{T\wedge\tau}e^{ {\alpha}s}(Z^{\mathbb{G}}_s)^2 ds}\ \big|{\cal G}_{t\wedge\tau}\right]. \end{align*} Therefore, thanks to Theorem \ref{DellacherieAndMeyer}, we deduce that for any $p>1$ and $\alpha'<\alpha/2$, we have \begin{align} &\Vert (e^{ {\alpha'}\cdot}{I}_{]\!]\sigma,+\infty[\![}\bigcdot {K^{\mathbb{G}}})_{T\wedge\tau}\Vert_{L^p(\widetilde{Q})}\nonumber\\ &\leq C'\left\{\Vert {e^{\alpha\cdot/2}} Y^{\mathbb G}{I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert e^{ {\alpha}\cdot/2} Y^{\mathbb{G}}{I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q}, p)}+\Vert e^{ {\alpha}\cdot/2}Z^{\mathbb{G}}{I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q}, p)}\right\}\nonumber\\ &+C'\Vert e^{ {\alpha}\cdot/2}f(\cdot,0,0){I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q}, p)},\label{Equa444} \end{align} where the constant $C'$ is given by \begin{align*} C':=p\max\left(2, \frac{\alpha'+C_{Lip}}{\sqrt{\alpha-2\alpha'}}\right). \end{align*} Remark that for $\alpha>\alpha_0(p)$, and by choosing $\epsilon=9/5$ and $\epsilon_1=(3-\sqrt{8})/(9\sqrt{2})$, we get $1/9<C_2\leq \min(C_1,C_3)$.
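For the reader's convenience, here is the elementary verification of this claim; it uses only the definitions (\ref{Constants4estimates}) and (\ref{alphaZero}). First, the choice of $\epsilon_1$ gives the exact identity \begin{eqnarray*} {1\over{3\sqrt{2}}}-\epsilon_1={{3-(3-\sqrt{8})}\over{9\sqrt{2}}}={{2\sqrt{2}}\over{9\sqrt{2}}}={2\over{9}}, \end{eqnarray*} so that $C_2>1/9$ amounts to $\left(1+{{\kappa(1+C_{DB})}\over{\epsilon_1}}\right){{C_{DB}C_{Lip}}\over{\sqrt{\alpha}}}<{1\over 9}$, which is precisely $\alpha>81\left\{1+{{9\sqrt{2}\kappa(1+C_{DB})}\over{3-\sqrt{8}}}\right\}^2C_{DB}^2C_{Lip}^2$, i.e. the second term in the definition of $\alpha_0(p)$. Moreover, $C_3=1/3-\epsilon_1>2/9\geq C_2$, while $C_1\geq C_2$ is equivalent to $\sqrt{C}/3\geq 2/9$, i.e. $C\geq 4/9$, which holds because $\epsilon=9/5$ and $\alpha>\alpha_0(p)\geq 2C_{Lip}+2C_{Lip}^2+1$.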
Then by inserting (\ref{Equa444}) in (\ref{Major1}) and using Young's inequality afterwards, we get \begin{align*} &\Vert {e}^{{{\alpha\cdot}\over{2}}} Y^{\mathbb G}{I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert e^{{{\alpha\cdot}\over{2}}} Y^{\mathbb{G}}{I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q}, p)}+\Vert{e}^{{{\alpha\cdot}\over{2}}}Z^{\mathbb{G}}{I}_{]\!]\sigma,+\infty[\![}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{{{\alpha\cdot}\over{2}}}}{I}_{]\!]\sigma,+\infty[\![} \bigcdot {M^{\mathbb{G}}}\Vert_{{\cal M}^{p}_{T}(\widetilde{Q})}\nonumber\\ & \leq \overline{C}\left\{ \Vert{e^{\alpha(\tau\wedge\cdot)/2}} f(\cdot,0,0){I}_{[\![\sigma,+\infty[\![}\Vert_{\mathbb{S}^{p}_{T}(\widetilde{Q})}+ \Vert {e^{\alpha(T\wedge\tau)/2}}\xi{I}_{\{\sigma\leq\tau\}}\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+\Vert {e^{(\alpha-\alpha')\cdot}} S^+{I}_{]\!]\sigma,+\infty[\![}\Vert_{{\mathbb{D}}_{T\wedge\tau}(\widetilde{Q},p)}\right\},\end{align*} where $\overline{C}:=(20(C')^2+C_6)/(C_2-(1/9))$. Therefore, the proof of the theorem follows immediately from combining the above inequality with (\ref{Equa444}) and choosing $\widehat{C}=\overline{C}(1+C')+C'$. This ends the proof of the theorem. \end{proof} \begin{theorem}\label{uniquenessNonlinear} If ($Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i}, M^{\mathbb{G},i}$) is a solution to the RBSDE (\ref{nonLinear}) that corresponds to $(f^{(i)}, S^{(i)}, \xi^{(i)})$, $i=1,2$, respectively, then for any $p>1$ and $\alpha>\max(\alpha_1(p), \alpha_0(p))$ given in (\ref{alphaZero}), there exist positive $\widehat{C}_j$, $j=1,2,3,4$, that depend on $(\alpha, p)$ only such that $\lim_{\alpha\to\infty}\widehat{C}_1=0$ and \begin{align}\label{MainDeltaInequality} &\Vert {e^{\alpha\cdot/2}}\delta Y^{\mathbb G} \Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha\cdot/2}} \delta Y^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha\cdot/2}} \delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha(\tau\wedge\cdot)/2}} \bigcdot \delta {M^{\mathbb{G}}}\Vert_{{\cal M}^{p}(\widetilde{Q})}\nonumber\\ & \leq \widehat{C}_1 \Vert{e^{\alpha\cdot/2}} \delta f\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\widehat{C}_2\Vert {e^{\alpha(T\wedge\tau)/2}} \delta \xi\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+\widehat{C}_3\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &+\widehat{C}_4\sqrt{\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\left\{\sum_{i=1}^2\Delta(f^{(i)}, S^{(i)}, \xi^{(i)})\right\}}.
\end{align} Here $\Delta(f^{(i)}, S^{(i)}, \xi^{(i)})$ is \begin{eqnarray}\label{Delta(i)} \Delta(f^{(i)}, S^{(i)}, \xi^{(i)}):= \Vert{e^{\alpha(T\wedge\tau)/2}}\xi^{(i)}\Vert_{L^p(\widetilde{Q})}+\Vert{e^{\alpha(\tau\wedge\cdot)}}(S^{(i)})^{+}\Vert_{\mathbb{S}_T(\widetilde{Q}, p)}+\Vert e^{ {\alpha}(\tau\wedge\cdot)/2}f^{(i)}(\cdot,0,0)\Vert_{\mathbb{S}_T(\widetilde{Q}, p)} \end{eqnarray} and $(\delta Y^{\mathbb{G}}, \delta Z^{\mathbb{G}}, \delta M^{\mathbb{G}}, \delta K^{\mathbb{G}})$ and $(\delta f,\delta S, \delta\xi)$ are given by \begin{align*} \delta Y^{\mathbb{G}}&:=Y^{\mathbb{G},1}-Y^{\mathbb{G},2},\ \delta Z^{\mathbb{G}}:=Z^{\mathbb{G},1}-Z^{\mathbb{G},2}, \delta M^{\mathbb{G}}:=M^{\mathbb{G},1}-M^{\mathbb{G},2}, \delta K^{\mathbb{G}}:=K^{\mathbb{G},1}-K^{\mathbb{G},2}, \\ \delta S&:=S^{(1)}-S^{(2)},\quad \delta \xi:=\xi^{(1)}-\xi^{(2)},\quad \delta f_t:= f^{(1)}(t,Y^{\mathbb{G},1}_t,Z^{\mathbb{G},1}_t)- f^{(2)}(t,Y^{\mathbb{G},1}_t,Z^{\mathbb{G},1}_t). \end{align*} \end{theorem} \begin{proof} On the one hand, due to the Lipschitz assumption (\ref{LipschitzAssumption}), we have \begin{align} &\vert \Delta f_t\vert:=\vert f^{(1)}(t,Y^{\mathbb{G},1}_t,Z^{\mathbb{G},1}_t)- f^{(2)}(t,Y^{\mathbb{G},2}_t,Z^{\mathbb{G},2}_t)\vert \leq \vert \delta f_t\vert+C_{Lip}\vert\delta Y^{\mathbb{G}}_t\vert+C_{Lip}\vert \delta Z^{\mathbb{G}}_t\vert.\label{Lipschitz1} \end{align} On the other hand, in virtue of Lemma \ref{Estimation4DeltaY}-(b) and Doob's inequality, we get \begin{align}\label{Control4supYG} \Vert {e^{\alpha\cdot/2}}\delta Y^{\mathbb G} \Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}&\leq C_{DB}\left\{ \Vert {e^{\alpha\cdot/2}}\delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha(T\wedge\tau)/2}}\delta\xi\Vert_{L^p(\widetilde{Q})}+{1\over{\sqrt{\alpha}}} \Vert{e^{\alpha\cdot/2}} \delta f\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\}\nonumber\\ &+{{C_{DB}C_{Lip}}\over{\sqrt{\alpha}}}\left\{ \Vert{e^{\alpha\cdot/2}} \delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha\cdot/2}} \delta Y^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\}.
\end{align} By combining It\^o applied to $e^{\alpha t}(\delta Y^{\mathbb{G}})^{2}$, $(\delta Y_{0}^{\mathbb{G}})^{2}\geq0$ and (\ref{Lipschitz1}), and putting \begin{eqnarray}\label{LG} L^{\mathbb G}:=e^{\alpha (\tau\wedge\cdot)}(\delta Y_{-}^{\mathbb{G}}-2\Delta(\delta K^{\mathbb{G}}))\bigcdot \delta M^{\mathbb{G}}+e^{\alpha (\tau\wedge\cdot)}(\delta Y_{-}^{\mathbb{G}})\delta Z^{\mathbb{G}}\bigcdot W^{\tau}\in {\cal M}_{loc}(\widetilde Q, \mathbb G), \end{eqnarray} we derive \begin{align*} &\alpha \int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s}^{\mathbb{G}})^{2}ds+ \int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Z_{s}^{\mathbb{G}})^{2}ds+ \int_{0}^{T\wedge\tau}e^{\alpha s}d[ \delta M^{\mathbb{G}}, \delta M^{\mathbb{G}}]_{s}\nonumber\\ &\leq e^{\alpha (T\wedge\tau)}(\delta \xi)^{2}+ 2\int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s}^{\mathbb{G}})\Delta f ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s-}^{\mathbb{G}})d\delta K_{s}^{\mathbb{G}}+L^{\mathbb G}_T,\\ &\leq e^{\alpha (T\wedge\tau)}(\delta \xi)^{2}+2\int_{0}^{T\wedge\tau}e^{\alpha s}|\delta Y_{s}^{\mathbb{G}}|(|\delta f_s|+C_{Lip}(\vert\delta Y^{\mathbb{G}}_s\vert+\vert\delta Z^{\mathbb{G}}_s\vert))ds+2(e^{\alpha(\tau\wedge\cdot)}(\delta Y_{-}^{\mathbb{G}})\bigcdot \delta K^{\mathbb{G}})_{T\wedge\tau}+L^{\mathbb G}_T\\ &=e^{\alpha (T\wedge\tau)}(\delta \xi)^{2}+2\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta Y_{s}^{\mathbb{G}}\vert\vert\delta f_s\vert ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}C_{Lip}\vert\delta Y_{s}^{\mathbb{G}}\vert^{2}ds \\ &+2\int_{0}^{T\wedge\tau}e^{\alpha s}C_{Lip}\vert\delta Y_{s}^{\mathbb{G}}\vert\vert\delta Z^{\mathbb{G}}_s\vert ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s-}^{\mathbb{G}})d\delta K_{s}^{\mathbb{G}}+L^{\mathbb G}_T\\ &\overset{Young}{\leq}e^{\alpha (T\wedge\tau)}(\delta \xi)^{2}+\frac{1}{\epsilon}\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta Y_{s}^{\mathbb{G}}\vert^{2}ds+\epsilon\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta f_s\vert^{2} ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}C_{Lip}\vert\delta Y_{s}^{\mathbb{G}}\vert^{2}ds \\&+2C_{Lip}^{2}\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta Y_{s}^{\mathbb{G}}\vert^{2}ds+\frac{1}{2}\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta Z^{\mathbb{G}}_s\vert^{2}ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s-}^{\mathbb{G}})d\delta K_{s}^{\mathbb{G}}+L^{\mathbb G}_T.
\end{align*} Therefore, by arranging terms in the last inequality, we obtain \begin{align} &\overbrace{(\alpha-2C_{Lip}-2C_{Lip}^{2}-{\epsilon}^{-1})}^{C} \int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s}^{\mathbb{G}})^{2}ds+\frac{1}{2} \int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Z_{s}^{\mathbb{G}})^{2}ds+ \int_{0}^{T\wedge\tau}e^{\alpha s}d[ \delta M^{\mathbb{G}}, \delta M^{\mathbb{G}}]_{s}\nonumber\\ &\leq e^{\alpha (T\wedge\tau)}(\delta \xi)^{2}+\epsilon\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta f_s\vert^{2}ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}(\delta Y_{s-}^{\mathbb{G}})d\delta K_{s}^{\mathbb{G}}+L^{\mathbb G}_T\label{equa399}\\ &\leq e^{\alpha (T\wedge\tau)}(\delta \xi)^{2}+\epsilon\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta f_s\vert^{2}ds+2\int_{0}^{T\wedge\tau}e^{\alpha s}\vert\delta S_{s-}\vert d\mbox{Var}_s(\delta{K^{\mathbb{G}}})+L^{\mathbb G}_T.\label{equa400}\end{align} The last inequality is due to $e^{\alpha (\tau\wedge\cdot)}(\delta Y_{-}^{\mathbb{G}})\bigcdot \delta K^{\mathbb{G}}\leq e^{\alpha (\tau\wedge\cdot)}(\delta S_{-})\bigcdot \delta K^{\mathbb{G}}\leq e^{\alpha (\tau\wedge\cdot)}\vert\delta S_{-}\vert\bigcdot \mbox{Var}(\delta K^{\mathbb{G}})$, which follows from Skorokhod's condition. Furthermore, by applying Lemma \ref{Lemma4.8FromChoulliThesis} to $L^{\mathbb G}$ given in (\ref{LG}) with $a=b=p$ and Doob's inequality afterwards, there exists a constant $\kappa=\kappa(p)>0$ that depends on $p$ only such that \begin{align}\label{Control4LG} &\Vert \vert L^{\mathbb G}\vert^{1/2}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &\leq \sqrt{\kappa(1+C_{DB})}\left\{\Vert{e}^{\alpha\cdot/2}\bigcdot \delta M^{\mathbb{G}}\Vert_{{\cal{M}}^p_T(\widetilde{Q})} +\Vert{e}^{\alpha\cdot/2}\delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\}^{1/2}\Vert{e^{\alpha\cdot/2}}\delta Y^{\mathbb G}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}^{1/2}\nonumber\\ &\leq \epsilon_1\Vert{e}^{\alpha\cdot/2}\bigcdot \delta M^{\mathbb{G}}\Vert_{{\cal{M}}^p_T(\widetilde{Q})} +\epsilon_1\Vert{e}^{\alpha\cdot/2}\delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+{{\kappa(1+C_{DB})}\over{\epsilon_1}}\Vert {e^{\alpha\cdot/2}}\delta Y^{\mathbb G}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}.
\end{align} Therefore, by combining (\ref{equa400}), (\ref{Control4LG}) and (\ref{Control4supYG}), choosing $\alpha$, $\epsilon$ and $\epsilon_1$ adequately, and using $n^{-1}\sum_{i=1}^n x_i^{p/2}\leq (\sum_{i=1}^n x_i)^{p/2}\leq n^{p/2}\sum_{i=1}^n x_i^{p/2}$ for any positive integer $n$ and any nonnegative numbers $x_1,\dots,x_n$, we derive \begin{align*} &{1\over{3}}\left\{\sqrt{C}\Vert{e^{\alpha\cdot/2}} \delta Y^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)} +2^{-1} \Vert{e^{\alpha\cdot/2}} \delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert{e^{\alpha(\tau\wedge\cdot)/2}} \bigcdot \delta {M^{\mathbb{G}}}\Vert_{{\cal{M}}^{p}_T(\widetilde{Q})}\right\}\\ &\leq \epsilon\Vert{e^{\alpha\cdot/2}} \delta f\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert {e^{\alpha(T\wedge\tau)/2}} \delta \xi\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+\sqrt{2}\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}^{1/2}\Vert \rm{Var}_T(e^{\alpha\cdot/2}\bigcdot \delta K^{\mathbb G}) \Vert_{\mathbb{L}^{p}(\widetilde{Q})}^{1/2}\\ &+\Vert\vert L^{\mathbb G}\vert^{1/2}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)},\\ &\leq \epsilon\Vert{e^{\alpha(\tau\wedge\cdot)/2}} \delta f\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert {e^{\alpha(T\wedge\tau)/2}} \delta \xi\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+\sqrt{2}\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}^{1/2}\Vert \rm{Var}_T(e^{\alpha\cdot/2}\bigcdot \delta K^{\mathbb G}) \Vert_{\mathbb{L}^{p}(\widetilde{Q})}^{1/2}\\ &+{\epsilon_1}\Vert{e^{\alpha(\tau\wedge\cdot)/2}} \bigcdot \delta {M^{\mathbb{G}}}\Vert_{{\cal{M}}_T^p(\widetilde{Q})}+{\epsilon_1}\Vert{e^{\alpha\cdot/2}} \delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+{{\kappa(1+C_{DB})}\over{\epsilon_1}}\Vert {e^{\alpha\cdot/2}}\delta Y^{\mathbb G}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}. \end{align*} Then by combining this inequality with (\ref{Control4supYG}) and (\ref{equa400}) we obtain \begin{align}\label{BeforeLast} &\Vert {e^{\alpha\cdot/2}}\delta Y^{\mathbb G} \Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}+C_1\Vert{e^{\alpha\cdot/2}} \delta Y^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+C_2\Vert{e^{\alpha\cdot/2}} \delta Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+C_3\Vert{e^{\alpha(\tau\wedge\cdot)/2}} \bigcdot \delta {M^{\mathbb{G}}}\Vert_{{\cal M}^{p}(\widetilde{Q})}\nonumber\\ & \leq {C}_4 \Vert{e^{\alpha\cdot/2}} \delta f\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+C_5\Vert {e^{\alpha(T\wedge\tau)/2}} \delta \xi\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+{C}_6\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\nonumber\\ &+{C}_7\sqrt{\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\Vert \rm{Var}_T(e^{\alpha\cdot/2}\bigcdot \delta K^{\mathbb G}) \Vert_{\mathbb{L}^{p}(\widetilde{Q})}}, \end{align} where ${C}_i$, $i=1,...,7$ are given by (\ref{Constants4estimates}). Here we take $\epsilon=2/{\alpha}$, $\epsilon_1=(3-\sqrt{8})/(9\sqrt{2})$ and $\alpha>\alpha_1(p)$, and remark that $0<{C}_2\leq\min({C}_1,{C}_3)$.
Furthermore, in virtue of Theorem \ref{WhyNot} with $\sigma=0$, we get $$\Vert \rm{Var}_T(e^{\alpha\cdot/2}\bigcdot \delta K^{\mathbb G}) \Vert_{\mathbb{L}^{p}(\widetilde{Q})}\leq \Vert(e^{\alpha\cdot/2}\bigcdot K^{\mathbb G,1})_T\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+\Vert({e}^{\alpha\cdot/2}\bigcdot {K}^{\mathbb G,2})_T\Vert_{\mathbb{L}^{p}(\widetilde{Q})}\leq \widehat{C}\sum_{i=1}^2\Delta(f^{(i)}, S^{(i)}, \xi^{(i)}).$$ Therefore, by plugging this inequality in (\ref{BeforeLast}), the inequality (\ref{MainDeltaInequality}) follows immediately with $$ \widehat{C}_1={{ {C}_4}\over{ {C}_2}},\quad \widehat{C}_2={{ {C}_5}\over{{C}_2}},\quad \widehat{C}_3={{ {C}_6}\over{ {C}_2}},\quad \widehat{C}_4={{ {C}_7\sqrt{\widehat{C}}}\over{ {C}_2}}. $$ It is also clear that $\widehat{C}_1$ goes to zero when $\alpha$ goes to infinity. This ends the proof of the theorem. \end{proof} \subsection{Existence, uniqueness and relationship to $\mathbb F$-RBSDEs}\label{GeneralRBSDEfromG2F} In this subsection, we elaborate our results on the existence and uniqueness of the solution to (\ref{nonLinear}), and describe the form of its $\mathbb F$-RBSDE counterpart. To this end, we assume that there exists $\alpha>\max(\alpha_0(p),\alpha_1(p))$ such that \begin{eqnarray}\label{MainAssumption4NonlinearBounded} &&E\left[{\widetilde{\cal E}}_T{\cal K}_T(f,S,h)+\int_0^T {\cal K}_s(f,S,h) dV^{\mathbb F}_s\right]<+\infty,\end{eqnarray} where \begin{eqnarray}\label{CalKprocess} {\cal K}_t(f,S,h):=\vert{h}_t\vert^p+\left(\int_0^{t}\vert f(s,0,0)\vert^2 ds\right)^{p/2}+\sup_{0\leq u\leq{t}}(S_u^+)^p,\quad{t}\geq 0.\end{eqnarray} One of the main obstacles herein lies in guessing the form of the $\mathbb F$-RBSDE that corresponds to (\ref{nonLinear}). To overcome this challenge, we appeal to the linear case and to the known method of approximating the solution to the general RBSDE (\ref{nonLinear}) by a sequence of solutions to linear RBSDEs, as adopted in \cite{Bouchard} and the references therein. This is the aim of the following remark. \begin{remark} Following \cite{Bouchard} and the mainstream BSDE literature, we define a sequence of linear RBSDEs under $\mathbb G$ whose solutions approximate the solution to the general RBSDE (\ref{nonLinear}). Thus, we consider the sequence $\left(Y^{\mathbb{G},n},Z^{\mathbb{G},n},M^{\mathbb{G},n},K^{\mathbb{G},n}\right)$ defined recursively as follows. \begin{eqnarray*} && (Y^{\mathbb{G},0},Z^{\mathbb{G},0},M^{\mathbb{G},0},K^{\mathbb{G},0}):=(0,0,0,0),\\ &&\mbox{for any}\ n\geq 1,\quad \left(Y^{\mathbb{G},n},Z^{\mathbb{G},n},M^{\mathbb{G},n},K^{\mathbb{G},n}\right)\ \mbox{is the unique solution to}:\\ &&\begin{cases} Y_{t}=\xi+\displaystyle\int_{t\wedge\tau}^{T\wedge\tau}f(s,Y_{s}^{\mathbb{G},n-1}, Z_{s}^{\mathbb{G},n-1})ds+\int_{t\wedge\tau}^{T\wedge\tau}dM_s+\int_{t\wedge\tau}^{T\wedge\tau}dK_{s}-\int_{t\wedge\tau}^{T\wedge\tau}Z_{s}dW_{s},\\ Y\geq S\quad \mbox{on}\ \Rbrack0,\tau[\![,\quad\displaystyle\int_{0}^{T\wedge\tau}(Y_{t-}-S_{t-})dK_{t}=0.\end{cases} \end{eqnarray*} Thus, from this recursive sequence of solutions, and thanks to the linear part fully analyzed in Sections \ref{LinearboundedSection} and \ref{LinearUnboundedSection}, we obtain a sequence of RBSDEs under $\mathbb F$ and their solutions. This can be achieved by determining $\left(Y^{\mathbb{F},n},Z^{\mathbb{F},n},K^{\mathbb{F},n}\right)$ associated to $\left(Y^{\mathbb{G},n},Z^{\mathbb{G},n},M^{\mathbb{G},n},K^{\mathbb{G},n}\right)$ for each $n\geq 0$ as follows.
\begin{enumerate} \item As ($Y^{\mathbb{G},0},Z^{\mathbb{G},0},M^{\mathbb{G},0},K^{\mathbb{G},0}$):=($0,0,0,0$), then we get ($Y^{\mathbb{F},0},Z^{\mathbb{F},0},K^{\mathbb{F},0}$):=($0,0,0$). \item For $n=1$, $\left(Y^{\mathbb{G},1},Z^{\mathbb{G},1},M^{\mathbb{G},1},K^{\mathbb{G},1}\right)$ is the solution to \begin{eqnarray*} Y_{t}=\xi+\int_{t\wedge\tau}^{T\wedge\tau}f(s,0,0)ds+\int_{t\wedge\tau}^{T\wedge\tau}dM_{s}+\int_{t\wedge\tau}^{T\wedge\tau}dK_{s}-\int_{t\wedge\tau}^{T\wedge\tau}Z_{s}dW_{s}. \end{eqnarray*} Here, the generator/driver is constant in ($Y,Z, M, K$), and hence in virtue of Theorem \ref{abcde} there exists a unique $(Y^{\mathbb{F},1},Z^{\mathbb{F},1},K^{\mathbb{F},1})$ solution to the RBSDE (\ref{RBSDEF}) with generator/driver $f^{\mathbb{F},1}(s):={\widetilde{\cal E}}_{s}f(s,0,0)$ and \begin{eqnarray}\label{EqualityYG2YF1} Y^{\mathbb{G},1}= \frac{Y^{\mathbb{F},1}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi1_{[\![\tau,+\infty[\![},\ Z^{\mathbb{G},1}=\frac{Z^{\mathbb{F},1}}{{\widetilde{\cal E}}_{-}},\ K^{\mathbb{G},1}=\frac{1}{{\widetilde{\cal E}}_{-}}\bigcdot K ^{\mathbb{F},1},\ M^{\mathbb{G},1}=\left(h-\frac{Y^{\mathbb{F},1}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}. \end{eqnarray} \item For $n=2$, $\left(Y^{\mathbb{G},2},Z^{\mathbb{G},2},M^{\mathbb{G},2},K^{\mathbb{G},2}\right)$ is the solution to \begin{align*} Y_{t}&=\xi+\int_{t\wedge\tau}^{T\wedge\tau}f(s,Y_{s}^{\mathbb{G},1}, Z_{s}^{\mathbb{G},1})ds+\int_{t\wedge\tau}^{T\wedge\tau}dM_{s}+\int_{t\wedge\tau}^{T\wedge\tau}dK_{s}-\int_{t\wedge\tau}^{T\wedge\tau}Z_{s}dW_{s}. \end{align*} Thus, by plugging (\ref{EqualityYG2YF1}) in this equation, we obtain \begin{align*} &Y_{t}=\xi+\int_{t\wedge\tau}^{T\wedge\tau}f(s,\frac{Y_{s}^{\mathbb{F},1}}{{\widetilde{\cal E}}_{s}}, \frac{Z_{s}^{\mathbb{F},1}}{{\widetilde{\cal E}}_{s-}})ds+\int_{t\wedge\tau}^{T\wedge\tau}dM_{s}+\int_{t\wedge\tau}^{T\wedge\tau}dK_{s}-\int_{t\wedge\tau}^{T\wedge\tau}Z_{s}dW_{s}. \end{align*} The generator here does not depend on $(Y,Z, M, K)$. Hence, again, Theorem \ref{abcde} yields the existence of a unique $(Y^{\mathbb{F},2},Z^{\mathbb{F},2},K^{\mathbb{F},2})$ solution to the RBSDE (\ref{RBSDEF}) under $\mathbb F$ with generator/driver $ f^{\mathbb{F},2}(s):={\widetilde{\cal E}}_{s}f\left(s,{Y_{s}^{\mathbb{F},1}/{\widetilde{\cal E}}_{s}}, {Z_{s}^{\mathbb{F},1}/{\widetilde{\cal E}}_{s-}}\right),$ and \begin{eqnarray*} Y_{t}^{\mathbb{G},2}= \frac{Y_{t}^{\mathbb{F},2}}{{\widetilde{\cal E}}_{t}}1_{\{t\ <\tau\}}+\xi1_{\{t\geq\tau\}},\ Z^{\mathbb{G},2}=\frac{Z^{\mathbb{F},2}}{{\widetilde{\cal E}}_{-}},\quad K^{\mathbb{G},2}=\frac{1}{{\widetilde{\cal E}}_{-}}\bigcdot K ^{\mathbb{F},2},\ \mbox{and}\ M^{\mathbb{G},2}=\left(h-\frac{Y^{\mathbb{F},2}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}. \end{eqnarray*} \item By iterating this procedure, we get the sequence $\left(Y^{\mathbb{F},n},Z^{\mathbb{F},n},K^{\mathbb{F},n}\right)$ defined recursively as follows. 
\begin{eqnarray*} && (Y^{\mathbb{F},0},Z^{\mathbb{F},0},K^{\mathbb{F},0}):=(0,0,0),\\ && Y^{\mathbb{F},n}_{t}= \displaystyle\xi^{\mathbb{F}}+\int_{t}^{T}f^{\mathbb{F}}(s,Y^{{\mathbb{F}},n-1}_{s},Z^{{\mathbb{F}},n-1}_{s})ds+\int_{t}^{T}h_{s}dV^{\mathbb{F}}_{s}+K^{\mathbb{F},n}_{T}-K^{\mathbb{F},n}_{t}-\int_{t}^{T}Z^{\mathbb{F},n}_{s}dW_{s},\\ &&Y^{\mathbb{F},n}_{t}\geq S_{t}^{\mathbb{F}}1_{\{t<T\}}+\xi^{\mathbb{F}}1_{\{t=T\}},\quad\displaystyle\int_{0}^{T}(Y^{\mathbb{F},n}_{t-}-S_{t-}^{\mathbb{F}})dK^{\mathbb{F},n}_{t}=0,\end{eqnarray*} \end{enumerate} where $f^{\mathbb{F}}(s,y,z):={\widetilde{\cal E}_s}f\left(s,y({\widetilde{\cal E}_s})^{-1},z({\widetilde{\cal E}_s})^{-1}\right)$. Thus, thanks to the convergence of $\left(Y^{\mathbb{G},n},Z^{\mathbb{G},n},M^{\mathbb{G},n},K^{\mathbb{G},n}\right)$ and the relationship (\ref{secondrelation}), we deduce that $\left(Y^{\mathbb{F},n},Z^{\mathbb{F},n},K^{\mathbb{F},n}\right)$ should also converge to $ \left(Y^{\mathbb{F}},Z^{\mathbb{F}},K^{\mathbb{F}}\right)$, and this triplet satisfies \begin{eqnarray*}\label{RBSDEFsequence} \begin{cases} Y_{t}= \displaystyle\xi^{\mathbb{F}}+\int_{t}^{T}f^{\mathbb F}(s,Y_s, Z_{s})ds+\int_{t}^{T}h_{s}dV^{\mathbb{F}}_{s}+K_{T}-K_{t}-\int_{t}^{T}Z_{s}dW_{s},\\ Y_{t}\geq S_{t}^{\mathbb{F}}1_{\{t<T\}}+\xi^{\mathbb{F}}1_{\{t=T\}},\quad\displaystyle\int_{0}^{T}(Y_{t-}-S_{t-}^{\mathbb{F}})dK_{t}=0.\end{cases}\end{eqnarray*} This gives us the RBSDE under $\mathbb F$ that we are looking for, and it also highlights the importance of the separate analysis of the linear case, beyond its own interest. \end{remark} In the following, we elaborate our main result on how to connect RBSDEs in $\mathbb G$ with those in $\mathbb F$. \begin{theorem}\label{alkd} Suppose $G>0$ and both (\ref{LipschitzAssumption}) and (\ref{MainAssumption4NonlinearBounded}) hold. Then the following assertions hold.\\ {\rm{(a)}} The following RBSDE under $\mathbb F$, associated to the triplet $(S^{\mathbb{F}}, \xi^{\mathbb{F}},f^{\mathbb F})$, \begin{eqnarray}\label{RBSDEFGENERAL} \begin{cases} Y_{t}= \displaystyle\xi^{\mathbb{F}}+\int_{t}^{T}f^{\mathbb{F}}(s,Y_{s},Z_{s})ds+\int_{t}^{T}h_{s}dV^{\mathbb{F}}_{s}+K_{T}-K_{t}-\int_{t}^{T}Z_{s}dW_{s},\\ \\ Y_{t}\geq S_{t}^{\mathbb{F}},\quad t\in[0,T),\quad \displaystyle\int_{0}^{T}(Y_{t-}-S_{t-}^{\mathbb{F}})dK_{t}=0, \end{cases} \end{eqnarray} has a unique $L^p(P,\mathbb F)$-solution that we denote by $(Y^{\mathbb{F}},Z^{{\mathbb{F}}},K^{\mathbb{F}})$, where \begin{eqnarray}\label{Data4RBSDE(F)} f^{\mathbb{F}}(s,y,z):={\widetilde{\cal E}}_{s}f\left(s,y{\widetilde{\cal E}}_{s}^{-1},z{\widetilde{\cal E}}_{s}^{-1}\right),\quad S^{\mathbb{F}}:={\widetilde{\cal E}}S,\quad \xi^{\mathbb{F}}:={\widetilde{\cal E}}_{T}h_{T},\quad\mbox{and}\quad {\widetilde{\cal E}}:={\cal E}(-{\widetilde{G}}^{-1}\bigcdot D^{o,\mathbb F}).
\end{eqnarray} {\rm{(b)}} There exists a unique solution to (\ref{nonLinear}), denoted by ($Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}},K^{\mathbb{G}}$), and it is given by \begin{eqnarray} Y^{\mathbb{G}}= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0, \tau[\![}+\xi I_{[\![\tau,+\infty[\![},\quad Z^{\mathbb{G}}=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}},\quad K^{\mathbb{G}}=\frac{1}{{\widetilde{\cal E}_{-}}}\bigcdot (K ^{\mathbb{F}})^{\tau}\quad\mbox{and}\quad M^{\mathbb{G}}=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}.\label{secondrelationn} \end{eqnarray} \end{theorem} \begin{proof} The proof is divided into two steps, in which we prove assertions (a) and (b), respectively.\\ {\bf Step 1.} On the one hand, put \begin{eqnarray*} \widetilde{f}^{\mathbb F}(t,y,z):=f^{\mathbb{F}}(t,y-(h\bigcdot {V}^{\mathbb{F}})_t,z),\quad \widetilde{S}^{\mathbb F}:=S^{\mathbb F}+h\bigcdot {V}^{\mathbb{F}},\quad\mbox{and}\quad \widetilde{\xi}^{\mathbb F}:={\xi}^{\mathbb F}+(h\bigcdot {V}^{\mathbb{F}})_T, \end{eqnarray*} and remark that $(\overline{Y}, \overline{Z}, \overline{K})$ is a solution to (\ref{RBSDEFGENERAL}) if and only if $(Y', Z', K'):=(\overline{Y}+h\bigcdot {V}^{\mathbb{F}}, \overline{Z}, \overline{K})$ is a solution to the following RBSDE \begin{eqnarray}\label{RBSDEftilde} \begin{cases} Y_{t}= \displaystyle\widetilde{\xi}^{\mathbb{F}}+\int_{t}^{T}\widetilde{f}^{\mathbb{F}}(s,Y_{s},Z_s)ds+K_{T}-K_{t}-\int_{t}^{T}Z_{s}dW_{s},\\ Y_{t}\geq \widetilde{S}_{t}^{\mathbb{F}},\quad t\in[0,T),\quad \displaystyle\int_{0}^{T}(Y_{t-}-\widetilde{S}_{t-}^{\mathbb{F}})dK_{t}=0. \end{cases}\end{eqnarray} On the other hand, thanks to (\ref{MainAssumption4NonlinearBounded}), we derive \begin{align*} &\Vert\widetilde{\xi}^{\mathbb F}\Vert_{L^p(P)}\leq \Vert{\xi}^{\mathbb F}\Vert_{L^p(P)}+\Vert(\vert{h}\vert\bigcdot {V}^{\mathbb{F}})_T\Vert_{L^p(P)}\leq \Vert{\xi}^{\mathbb F}\Vert_{L^p(P)}+E\left[(\vert{h}\vert^p\bigcdot {V}^{\mathbb{F}})_T\right]^{1/p}<+\infty,\\ &\Vert \widetilde{f}^{\mathbb F}(\cdot,0,0)\Vert_{\mathbb{S}_{T}(P,p)}\leq \Vert {f}^{\mathbb F}(\cdot,0,0)\Vert_{\mathbb{S}_{T}(P,p)}+C_{Lip}\Vert(\vert{h}\vert\bigcdot {V}^{\mathbb{F}})_T\Vert_{L^p(P)}<+\infty, \\ &\Vert (\widetilde{S}^{\mathbb F})^+\Vert_{\mathbb{D}_{T}(P,p)}\leq \Vert ({S}^{\mathbb F})^+\Vert_{\mathbb{D}_{T}(P,p)}+\Vert(\vert{h}\vert\bigcdot {V}^{\mathbb{F}})_T\Vert_{L^p(P)}<+\infty. \end{align*} Therefore, by combining these inequalities and \cite[Theorem 3.1]{Bouchard}, we conclude that (\ref{RBSDEftilde}) has a unique solution. This ends the first part. \\ {\bf Step 2.} Here we prove assertion (b). To this end, we remark that due to Theorem \ref{uniquenessNonlinear} the RBSDE (\ref{nonLinear}) has at most one solution. Thus, the proof of assertion (b) will follow immediately as soon as we prove that the quadruplet $(\overline{Y},\overline{Z}, \overline{K}, \overline{M})$, given by \begin{eqnarray*} \overline{Y}:= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0, \tau[\![}+\xi I_{[\![\tau,+\infty[\![},\quad \overline{Z}:=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}},\quad \overline{K}:=\frac{1}{{\widetilde{\cal E}_{-}}}\bigcdot (K ^{\mathbb{F}})^{\tau}\quad\mbox{and}\quad \overline{M}:=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}, \end{eqnarray*} is in fact a solution to (\ref{nonLinear}). The proof of this latter fact mimics exactly the second part of the proof of Theorem \ref{Relationship4InfiniteBSDE}, and is therefore omitted. This ends the proof of the theorem.
\end{proof} \section{Stopped general RBSDEs: The case of unbounded horizon} In this section, we study the following RBSDE, \begin{eqnarray}\label{nonLinearINFINITE} \begin{cases} dY_{t}=-f(t,Y_{t},Z_{t})d(t\wedge\tau)-d(K_{t\wedge\tau}+M_{t\wedge\tau})+Z_{t}dW_{t}^{\tau},\\ Y_{\tau}=\xi,\quad Y_{t}\geq S_{t},\quad 0\leq t< \tau,\quad{E}\left[\displaystyle\int_{0}^{\tau}(Y_{t-}-S_{t-})dK_{t}\right]=0, \end{cases} \end{eqnarray} where $(\xi, S, f)$ is such that $S$ is an $\mathbb F$-adapted and RCLL process, $f(t,y,z)$ is a $\mbox{Prog}(\mathbb F)\times {\cal B}(\mathbb R)\times {\cal B}(\mathbb R^{d})$-measurable functional satisfying (\ref{LipschitzAssumption}), and $\xi\in L^2({\cal G}_{\tau})$ is such that $\xi=h_{\tau}$ for some $\mathbb F$-optional process $h$. This section has three subsections. The first subsection derives estimates and stability inequalities that control the solutions under the probability $P$ instead of $\widetilde Q$. The second subsection introduces the RBSDE under $\mathbb F$ and discusses the existence and uniqueness of its solution, while the third subsection elaborates our principal results, which solve (\ref{nonLinearINFINITE}) and discuss its properties. \subsection{Estimates under $P$ for the solution of (\ref{nonLinear})} This subsection extends Theorems \ref{estimates} and \ref{estimates1} to the case of a general driver/generator $f$. These theorems, which give estimates for the solutions under $P$ instead, are based essentially on Theorems \ref{WhyNot} and \ref{uniquenessNonlinear}, respectively, and represent an important step towards solving (\ref{nonLinearINFINITE}). \begin{theorem}\label{estimates4GeneralUnbounded} If ($Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}},K^{\mathbb{G}}$) is a solution to the RBSDE (\ref{nonLinear}), associated to $\left(f, S, \xi\right)$, then for any $p>1$ and $\alpha$ large enough, there exists a constant $C({\alpha},p)>0$, depending on $\alpha$ and $p$ only, such that \begin{eqnarray*} &&\Vert {e^{\alpha\cdot/2}}(\widetilde{\cal E})^{1/p}Y^{\mathbb G} \Vert_{\mathbb{D}_{T\wedge\tau}(P,p)}+\Vert{e^{\alpha(\tau\wedge\cdot)/2}} (\widetilde{\cal E})^{1/p}Y^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(P,p)}+\Vert{e^{\alpha(\tau\wedge\cdot)/2}} (\widetilde{\cal E}_{-})^{1/p}Z^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(P,p)}\\ &&+\Vert{e^{\alpha\cdot/2}} I_{\Rbrack0,T\wedge\tau]\!]}(\widetilde{\cal E}_{-})^{1/p}\bigcdot {M^{\mathbb{G}}}\Vert_{{\cal M}^{p}(P,\mathbb{G})}+\Vert \int_{0}^{T\wedge\tau}e^{\frac{\alpha}{2} s}(\widetilde{\cal E}_{s-})^{1/p}d K_{s}^{{\mathbb{G}}}\Vert_{L^p(P)}\\ &&\leq C({\alpha},p)\left\{\Vert {e^{\alpha(T\wedge\tau)/2}}\xi\Vert_{\mathbb{L}^{p}(\widetilde{Q})}+\Vert{e^{\alpha\cdot/2}} {f}(\cdot,0,0)\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert\sup_{0\leq t\leq\cdot}e^{\alpha t/p}S^{+}_{t}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}\right\}, \end{eqnarray*} where $\widetilde{\cal E}$ is defined in (\ref{ProcessVFandXiF}) and is recalled here: \begin{eqnarray}\label{Xitilde} \widetilde{\cal E}_{t}:={\cal E}_{t}(-{\widetilde{G}}^{-1}\bigcdot D^{o,\mathbb{F}}).\end{eqnarray}\end{theorem} \begin{proof} The proof relies essentially on Lemma \ref{technicallemma1} and Theorem \ref{WhyNot}.\\ In fact, a direct application of Lemma \ref{technicallemma1}-(a) to $Y:=e^{\alpha(\cdot\wedge T\wedge\tau)/2}Y^{\mathbb{G}}$ and $a=p$ yields \begin{eqnarray}\label{Control4YGgeneralCase} E\left[\sup_{0\leq s\leq{T}\wedge\tau }e^{p\alpha{s}/2}\widetilde{\cal E}_{s}\vert{Y}^{{\mathbb{G}}}_{s}\vert^p\right]\leq
G_0^{-1} E^{\widetilde{Q}}\left[\sup_{0\leq s\leq{T}\wedge\tau }e^{p\alpha{s}/2}\vert{Y}^{{\mathbb{G}}}_{s}\vert^p\right]. \end{eqnarray} By applying Lemma \ref{technicallemma1}-(b) to both cases when $K=\int_{0}^{\cdot}e^{\alpha s}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds$ and when $K=\int_{0}^{\cdot}e^{\alpha s}\vert Y^{{\mathbb{G}}}_{s}\vert^{2}ds$ afterwards with $a=2/p$, we get \begin{eqnarray}\label{Control4ZGgeneralCase} \begin{cases} E\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right],\\ \\ E\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert Y^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}\vert Y^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]. \end{cases} \end{eqnarray} Similarly, applying Lemma \ref{technicallemma1}-(b) to $K=e^{\alpha\cdot/2}\bigcdot {K}^{\mathbb{G}}$ with $a=1/p$, we get \begin{align}\label{Control4KGgeneralCase} E\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s/2}(\widetilde{\cal E}_{s})^{1/p}d{K}^{\mathbb{G}}_s\right)^{p}\right]&\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s/2}d{K}^{\mathbb{G}}_s\right)^{p}+\sum_{0<s\leq{T}\wedge\tau}\widetilde{G}_s(e^{\alpha s/2}\Delta{K}^{\mathbb{G}}_s)^p\right]\nonumber\\ &\leq {{2\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s/2}d{K}^{\mathbb{G}}_s\right)^{p}\right]. \end{align} The last inequality follows from the easy facts that $\widetilde{G}\leq 1$ and $\sum_{0<s\leq{T}}(\Delta V_s)^p\leq V_T^p$ for any nondecreasing process $V$ with $V_0=0$ and any $p\geq 1$.\\ The rest of the proof will address the term that involves the $\mathbb G$-martingale $M^{\mathbb G}$. Thus, thanks to Theorem \ref{alkd}, we know that $[M^{\mathbb G}, M^{\mathbb G}]=H'\bigcdot [N^{\mathbb G},N^{\mathbb G}]$ where $H':=(h-Y^{\mathbb F}/{\widetilde{\cal E}})^2$, which is $\mathbb F$-optional. Thus, applying Lemma \ref{technicallemma1}-(d) to the $\mathbb F$-optional process $H:=e^{\alpha\cdot}(h-Y^{\mathbb F}/{\widetilde{\cal E}})^2$, we get \begin{align}\label{Control4NGgeneralCase} E\left[\left((\widetilde{\cal E})^{2/p}H\bigcdot [ N^{\mathbb{G}}, N^{\mathbb{G}}]_{T\wedge\tau}\right)^{p/2}\right]&\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(H\bigcdot [ N^{\mathbb{G}}, N^{\mathbb{G}}]_T\right)^{p/2}+2(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]\nonumber\\ &={{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(e^{\alpha\cdot} \bigcdot [M^{\mathbb{G}}, M^{\mathbb{G}}]_T\right)^{p/2}+2(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right].\end{align} Thus, we need to control the second term in the right-hand side of this inequality. To this end, we remark that $(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})\leq 2^{p-1}\left(e^{p\alpha\cdot/2}(\vert{h}\vert^p+\vert{Y}^{\mathbb G}\vert^p)I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F}\right)$.
Thus, by using this, we derive \begin{align}\label{Control4MG1} 2E^{\widetilde{Q}}\left[(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]\leq 2^p{G_0}E^{\widetilde{Q}}\left[e^{p\alpha\tau/2}\vert{h}_{\tau}\vert^p I_{\{\tau\leq{T}\}}\right]+2^pE^{\widetilde{Q}}\left[\sup_{0\leq{t}\leq\tau\wedge{T}}e^{p\alpha{t}/2}\vert{Y}^{\mathbb G}_t\vert^p\right]. \end{align} Therefore, by combining this inequality with ${h}_{\tau}I_{\{\tau\leq{T}\}}=\xi I_{\{\tau\leq{T}\}}$, (\ref{Control4NGgeneralCase}), (\ref{Control4KGgeneralCase}), (\ref{Control4ZGgeneralCase}), (\ref{Control4YGgeneralCase}) and Theorem \ref{WhyNot} with $\sigma=0$, the proof of the theorem follows immediately. \end{proof} \begin{theorem}\label{estimatesdelta} If ($Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i}, M^{\mathbb{G},i}$) is a solution to the RBSDE (\ref{nonLinear}) associated with $(f^{(i)}, S^{(i)}, \xi^{(i)})$, for $i=1,2$ respectively, then for any $p>1$ and $\alpha>\max(\alpha_0(p),\alpha_1(p))$, there exist positive constants $C_i$, $i=1,2,3$, that depend on $(\alpha,p)$ only such that $\lim_{\alpha\to\infty}C_1=0$ and \begin{align}\label{Estimate4P12} &\Vert {e^{\alpha\cdot/2}}(\widetilde{\cal E})^{1/p}\delta{Y}^{\mathbb G} \Vert_{\mathbb{D}_{T\wedge\tau}(P,p)}+\Vert{e^{\alpha\cdot/2}} (\widetilde{\cal E})^{1/p}\delta{Y}^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(P,p)}\nonumber\\ &+\Vert{e^{\alpha\cdot/2}} (\widetilde{\cal E}_{-})^{1/p}\delta{Z}^{\mathbb{G}}\Vert_{\mathbb{S}_{T\wedge\tau}(P,p)}+\Vert{e^{\alpha\cdot/2}} I_{\Rbrack0,T\wedge\tau]\!]}(\widetilde{\cal E}_{-})^{1/p}\bigcdot \delta{M}^{\mathbb{G}}\Vert_{{\cal M}^{p}(P,\mathbb{G})}\nonumber\\ & \leq C_1\Vert{e^{\alpha\cdot/2}} \delta f\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+ C_2\Vert {e^{\alpha(T\wedge\tau)/p}} \delta \xi\Vert_{\mathbb{L}^p(\widetilde{Q})}+C_3\sqrt{\Vert {e^{\alpha\cdot/2}} \delta S\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\sum_{i=1}^2 \Delta(\xi^{(i)}, f^{(i)}, S^{(i)})}. \end{align} Here $\Delta(\xi^{(i)}, f^{(i)}, S^{(i)})$ is given by \begin{eqnarray}\label{Deltaxi(i)} \Delta(\xi^{(i)}, f^{(i)}, S^{(i)}):=\Vert {e^{\alpha(T\wedge\tau)/p}}\xi^{(i)}\Vert_{\mathbb{L}^p(\widetilde{Q})}+\Vert{e^{\alpha\cdot/2}}{f}^{(i)}(\cdot,0,0)\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert\sup_{0\leq{t}\leq\cdot}e^{\alpha t/p}(S^{(i)}_{t})^{+}\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)} \end{eqnarray} and $(\delta Y^{\mathbb{G}}, \delta Z^{\mathbb{G}}, \delta M^{\mathbb{G}}, \delta K^{\mathbb{G}})$ and $(\delta f,\delta S, \delta\xi)$ are given by \begin{align}\label{DeltaProcesses4GeneralInfinity} \begin{cases} \delta Y^{\mathbb{G}}:=Y^{\mathbb{G},1}-Y^{\mathbb{G},2},\ \delta Z^{\mathbb{G}}:=Z^{\mathbb{G},1}-Z^{\mathbb{G},2},\ \delta M^{\mathbb{G}}:=M^{\mathbb{G},1}-M^{\mathbb{G},2},\ \delta K^{\mathbb{G}}:=K^{\mathbb{G},1}-K^{\mathbb{G},2}, \\ \delta S:=S^{(1)}-S^{(2)},\quad \delta \xi:=\xi^{(1)}-\xi^{(2)},\quad \delta f_t:= f^{(1)}(t,Y^{\mathbb{G},1}_t,Z^{\mathbb{G},1}_t)- f^{(2)}(t,Y^{\mathbb{G},1}_t,Z^{\mathbb{G},1}_t). \end{cases}\end{align} \end{theorem} \begin{proof} By applying Lemma \ref{technicallemma1}-(a) to $Y_s=e^{\alpha s/2}\delta Y^{{\mathbb{G}}}_{s}$ and $a=p$, we obtain \begin{eqnarray}\label{Control4YGinfiniteCaseDifference} E\left[\sup_{0\leq s\leq{T}\wedge\tau }e^{p\alpha{s}/2}\widetilde{\cal E}_{s}\vert\delta{Y}^{{\mathbb{G}}}_{s}\vert^p\right]\leq G_0^{-1} E^{\widetilde{Q}}\left[\sup_{0\leq s\leq{T}\wedge\tau }e^{p\alpha{s}/2}\vert\delta{Y}^{{\mathbb{G}}}_{s}\vert^p\right].
\end{eqnarray} By applying Lemma \ref{technicallemma1}-(b) to both cases when $K=\int_{0}^{\cdot}e^{\alpha s}\vert \delta{Z}^{{\mathbb{G}}}_{s}\vert^{2}ds$ and when $K=\int_{0}^{\cdot}e^{\alpha s}\vert \delta{Y}^{{\mathbb{G}}}_{s}\vert^{2}ds$ afterwards with $a=2/p$, we get \begin{eqnarray}\label{Control4ZGinfiniteCaseDifference} \begin{cases} E\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert \delta{Z}^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}\vert \delta{Z}^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right],\\ \\ E\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert \delta{Y}^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(\int_{0}^{T\wedge\tau}e^{\alpha s}\vert \delta{Y}^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]. \end{cases} \end{eqnarray} Thanks to Theorem \ref{alkd}, we know that $[\delta{M}^{\mathbb G}, \delta{M}^{\mathbb G}]=H'\bigcdot [N^{\mathbb G},N^{\mathbb G}]$ where $H':=(\delta{h}-\delta{Y}^{\mathbb F}/{\widetilde{\cal E}})^2$, which is $\mathbb F$-optional. Thus, by applying Lemma \ref{technicallemma1}-(d) to the $\mathbb F$-optional process $H_s:=e^{\alpha s}(\delta{h}_s-\delta{Y}^{\mathbb F}_s/{\widetilde{\cal E}}_s)^2$ and using a similar argument as in (\ref{Control4MG1}), we get \begin{align}\label{Control4MGinfiniteCaseDifference} &E\left[\left((\widetilde{\cal E})^{2/p}H\bigcdot [ N^{\mathbb{G}}, N^{\mathbb{G}}]_{T}\right)^{p/2}\right]\nonumber\\ &\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(H\bigcdot [ N^{\mathbb{G}}, N^{\mathbb{G}}]_T\right)^{p/2}+2(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]\nonumber\\ &={{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(e^{\alpha\cdot} \bigcdot [\delta{M}^{\mathbb{G}}, \delta{M}^{\mathbb{G}}]_T\right)^{p/2}+2(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F})_T\right]\nonumber\\ &\leq {{\kappa}\over{G_0}} E^{\widetilde{Q}}\left[\left(e^{\alpha\cdot} \bigcdot [\delta{M}^{\mathbb{G}}, \delta{M}^{\mathbb{G}}]_T\right)^{p/2}\right]+ {{2^p\kappa}\over{G_0}} \left\{E^{\widetilde{Q}}\left[\vert\delta{h}_{\tau}\vert^p I_{\{\tau\leq{T}\}}\right]+E^{\widetilde{Q}}\left[\sup_{0\leq{t}\leq\tau\wedge{T}}e^{p\alpha{t}/2}\vert\delta{Y}^{\mathbb G}_t\vert^p\right]\right\}.\end{align} Hence, by combining (\ref{Control4YGinfiniteCaseDifference}), (\ref{Control4ZGinfiniteCaseDifference}), (\ref{Control4MGinfiniteCaseDifference}) and Theorem \ref{uniquenessNonlinear}, the proof of the theorem follows. \end{proof} \subsection{Existence, uniqueness, and estimates} This subsection elaborates the first main result of this section, which establishes the existence and uniqueness of the solution to (\ref{nonLinearINFINITE}) and provides estimates for it. \begin{theorem}\label{alkdINFINITE} Let $p\in (1,+\infty)$, and suppose that $G>0$ and that there exists $\alpha>\max(\alpha_0(p),\alpha_1(p))$ such that \begin{eqnarray}\label{MainAssumption4InfiniteHorizonNonlinear} E\left[\int_0^{\infty}\left\{e^{{p\alpha}t/2}\vert{h}_t\vert^p+(F^{(\alpha)}_t)^{p}+\sup_{0\leq {u}\leq t}e^{{\alpha}u}(S_u^+)^p\right\}dV^{\mathbb F}_t\right]<+\infty,\ F^{(\alpha)}_t:=\sqrt{\int_0^te^{{\alpha}s}\vert f(s,0,0)\vert ^2 ds}. \end{eqnarray} Then the following assertions hold.\\ {\rm{(a)}} There exists a unique solution ($Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}},K^{\mathbb{G}}$) to the RBSDE (\ref{nonLinearINFINITE}).
\\ {\rm{(b)}} There exists $C(\alpha,p)>0$ that depends on $\alpha$ and $p$ only such that \begin{align*} &E\left[\sup_{0\leq s\leq\tau }e^{{p\alpha}s/2}\widetilde{\cal E}_{s}\vert{Y}^{{\mathbb{G}}}_{s}\vert^p+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert Y^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}\right]\\ &+E\left[\left(\int_{0}^{\tau}e^{{\alpha}s/2}(\widetilde{\cal E}_{s-})^{1/p}d K_{s}^{{\mathbb{G}}}\right)^p+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}d[ M^{\mathbb{G}}, M^{\mathbb{G}}]_s\right)^{p/2}\right]\\ &\leq C(\alpha,p)E\left[\int_0^{\infty} \left\{e^{{p\alpha}t/2}\vert{h}_t\vert^p +(F^{(\alpha)}_t)^{p}+\sup_{0\leq u\leq t}e^{\alpha u}(S^{+}_{u})^p\right\}dV^{\mathbb F}_t\right]. \end{align*} {\rm{(c)}} Let $(f, h^{(i)}, S^{(i)})$, $i=1,2$, be two triplets satisfying (\ref{MainAssumption4InfiniteHorizonNonlinear}), and $(Y^{\mathbb{G},i},Z^{\mathbb{G},i},K^{\mathbb{G},i},M^{\mathbb{G},i})$ be the solutions to their corresponding RBSDE (\ref{nonLinearINFINITE}). There exist positive constants $C_1$ and $C_2$ that depend on $\alpha$ and $p$ only such that \begin{align*} &E\left[\sup_{0\leq s\leq\tau }e^{{p\alpha}s/2}\widetilde{\cal E}_{s}\vert{\delta}Y^{{\mathbb{G}}}_{s}\vert^p+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}\vert \delta{Z}^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}d[ {\delta}M^{\mathbb{G}},{\delta} M^{\mathbb{G}}]_s\right)^{p/2}\right]\\ &\leq C_1 E\left[\int_0^{\infty} \left\{e^{{\alpha}t}\vert{\delta}h_t\vert^p +\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert\delta{S}_u\vert^p\right\}dV^{\mathbb F}_t\right]\nonumber\\ &+C_2\sqrt{E\left[\int_0^{\infty}\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert\delta{S}_u\vert^pdV^{\mathbb F}_t\right]}\sqrt{\sum_{i=1}^2E\left[\int_0^{\infty} \left\{e^{{p\alpha}t/2}\vert{h}^{(i)}_t\vert^p +(F^{(\alpha)}_t)^p+\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert({S}_u^{(i)})^+\vert^p\right\}dV^{\mathbb F}_t\right]}. \end{align*} Here $({\delta}Y^{{\mathbb{G}}},{\delta}Z^{{\mathbb{G}}}, {\delta}K^{{\mathbb{G}}},{\delta}M^{{\mathbb{G}}})$ and $({\delta}f,{\delta}S,{\delta}h)$ are given by (\ref{DeltaProcesses4GeneralInfinity}). \end{theorem} \begin{proof} By virtue of assertion (c), it is clear that (\ref{nonLinearINFINITE}) has at most one solution. Thus, the rest of this proof focuses on proving the existence of the solution and assertions (b) and (c). To this end, we divide the rest of the proof into two parts.\\ {\bf Part 1.} In this part, we consider $(f,S,h)$ and suppose that there exists a constant $C$ such that \begin{eqnarray}\label{BoundednessAssumpionInfinite} \max\left(e^{p\alpha\cdot/2}\vert{h}\vert^p, (F^{(\alpha)})^p, \sup_{0\leq{t}\leq\cdot}e^{p\alpha\cdot}(S^+_t)^p\right)\leq C{\cal E}(G_{-}^{-1}\bigcdot {m}). \end{eqnarray} The rest of this part is divided into three steps.\\ {\bf Step 1.} To the triplet $(f,S,h)$ satisfying (\ref{BoundednessAssumpionInfinite}), we associate the sequence $(\overline{f}^{(n)}, \overline{S}^{(n)},\overline{h}^{(n)})$ given by \begin{eqnarray}\label{overlineFn} \overline{f}^{(n)}:=fI_{\Lbrack0,n]\!]},\quad \overline{S}^{(n)}_t:=S_{n\wedge{t}},\quad \overline{h}^{(n)}_t:=h_{n\wedge{t}},\quad \overline{\xi}^{(n)}:=h_{n\wedge\tau},\quad n\geq 0.
\end{eqnarray} Then thanks to Theorem \ref{alkd}, we deduce that for each triplet $(\overline{f}^{(n)}, \overline{S}^{(n)},\overline{\xi}^{(n)})$, the RBSDE (\ref{nonLinearINFINITE}) has a unique solution $(\overline{Y}^{(n)}, \overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)} )$. Then by applying Theorem \ref{estimatesdelta} to the difference of solutions $(\delta{Y}, \delta{Z},\delta{M},\delta{K} ):=(\overline{Y}^{(n+m)}, \overline{Z}^{(n+m)},\overline{M}^{(n+m)},\overline{K}^{(n+m)} )-(\overline{Y}^{(n)}, \overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)} )$, and the horizon $T=n+m$, we get \begin{align}\label{Estimate4GenreralPoof} &\Vert{e}^{\alpha\cdot/2}{\widetilde{\cal E}}^{1/p}\delta{Y}\Vert_{\mathbb{D}_{T}(\widetilde{P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\delta{Z}\Vert_{\mathbb{S}^p_{T}(\widetilde{P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\bigcdot \delta{M}\Vert_{\mathbb{M}_{T}^p(\widetilde{P})}\nonumber\\ & \leq C_1\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \delta f\Vert_{\mathbb{S}^p_{T}(\widetilde{Q})}+ C_2\Vert {e^{\alpha(T\wedge\tau)/p}} \delta \xi\Vert_{\mathbb{L}^p(\widetilde{Q})}+C_3\Vert {e^{\alpha(\tau\wedge\cdot)/2}} \delta S\Vert_{\mathbb{D}_{T}(\widetilde{Q})}^{1/2}\sup_{k\geq n}\sqrt{ \Delta(\xi^{(k)}, f^{(k)}, S^{(k)})}. \end{align} Here $\Delta(\xi^{(k)}, f^{(k)}, S^{(k)})$ is given by \begin{eqnarray}\label{Deltaxi(i)4proof} \Delta(\xi^{(k)}, f^{(k)}, S^{(k)}):=\Vert {e^{\alpha(T\wedge\tau)/p}} \overline{\xi}^{(k)}\Vert_{\mathbb{L}^p(\widetilde{Q})}+\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \overline{f}^{(k)}(\cdot,0,0)\Vert_{\mathbb{S}_{T}(\widetilde{Q},p)} +\Vert {e^{\alpha(\tau\wedge\cdot)/2}} ( \overline{S}^{(k)})^{+}\Vert_{\mathbb{D}_{T}(\widetilde{Q},p)}. \end{eqnarray} {\bf Step 2.} It is clear that, due to assumption (\ref{BoundednessAssumpionInfinite}) and by virtue of Lemma \ref{ExpecationQtilde2P}, we have \begin{eqnarray}\label{limites} \begin{cases} \displaystyle\lim_{n\to\infty}\sup_{m\geq 0}\Vert {e^{\alpha(\tau\wedge\cdot)/2}} \delta S\Vert_{\mathbb{D}_{T}(\widetilde{Q},p)}\leq 2\lim_{n\to\infty}\sup_{m} \Vert {e^{\alpha(\tau\wedge\cdot)/2}} SI_{[\![{n},n+m]\!]}\Vert_{\mathbb{D}_{T}(\widetilde{Q},p)}=0,\\ \displaystyle\lim_{n\to\infty}\sup_{m\geq 0}\Vert {e^{\alpha(T\wedge\tau)/p}} \delta \xi\Vert_{\mathbb{L}^p(\widetilde{Q})}=\lim_{n\to\infty}\sup_{m\geq 0}\Vert {e^{\alpha(T\wedge\tau)/p}}(h_{(n+m)\wedge\tau}-h_{n\wedge\tau})\Vert_{\mathbb{L}^p(\widetilde{Q})}=0,\\ \displaystyle\lim_{n\to\infty}\sup_{m\geq 0}\Vert {e^{\alpha(\tau\wedge\cdot)/2}} \delta f(\cdot,0,0)\Vert_{\mathbb{D}_{T}(\widetilde{Q},p)}=0,\\ \sup_{k\geq 0}\sqrt{ \Delta(\xi^{(k)}, f^{(k)}, S^{(k)})}<+\infty, \end{cases} \end{eqnarray} and \begin{eqnarray}\label{limitesBIS} \begin{cases} \displaystyle\lim_{n\to\infty}\Vert {e^{\alpha(n\wedge\tau)/p}} \overline{\xi}^{(n)}\Vert_{\mathbb{L}^p(\widetilde{Q})}^p=E\left[\int_0^{\infty} e^{\alpha{s}}\vert{h}_s\vert^p dV^{\mathbb F}_s \right],\\ \displaystyle\lim_{n\to\infty}\Vert {e^{\alpha(\tau\wedge\cdot)/p}}(\overline{S}^{(n)})^+\Vert_{\mathbb{D}_{T}(\widetilde{Q},p)}^p=E\left[\int_0^{\infty} \sup_{0\leq{s}\leq{t}}e^{\alpha{s}}(S^+_s)^p dV^{\mathbb F}_t \right],\\ \displaystyle\lim_{n\to\infty}\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \overline{f}^{(n)}(\cdot,0,0)\Vert_{\mathbb{S}_{T}(\widetilde{Q},p)}^p=E\left[\int_0^{\infty}(F^{(\alpha)}_t)^p dV^{\mathbb F}_t \right]. \end{cases} \end{eqnarray} Now we deal with the first term on the right-hand side of the above inequality.
To this end, on the one hand, we remark that \begin{eqnarray}\label{Control4deltaF} &&\vert \delta f_t\vert=\vert (\overline{f}^{(n+m)}-\overline{f}^{(n)})(t,\overline{Y}^{(n+m)}_t, \overline{Z}^{(n+m)}_t)\vert=\vert {f}(t,\overline{Y}^{(n+m)}_t, \overline{Z}^{(n+m)}_t)\vert{I}_{\{n<t\leq n+m\}}\nonumber\\ &&\leq \vert{f}(t,0,0)\vert{I}_{\{n<t\leq n+m\}}+C_{lip}(\vert\overline{Y}^{(n+m)}_t\vert+\vert\overline{Z}^{(n+m)}_t\vert)I_{\{n<t\leq{n+m}\}}. \end{eqnarray} On the other hand, thanks to Theorem \ref{WhyNot}, applied to $(\overline{Y}^{(n+m)}, \overline{Z}^{(n+m)},\overline{M}^{(n+m)},\overline{K}^{(n+m)} )$ and $\sigma=n$, we deduce that \begin{eqnarray*} &&\Vert{e}^{\alpha(\tau\wedge\cdot)/2}\overline{Z}^{(n+m)}I_{]\!]{n},n+m]\!]}\Vert_{\mathbb{S}_{T}(\widetilde{Q},p)}+\Vert{e}^{\alpha\cdot/2} \overline{Y}^{(n+m)}I_{]\!]{n},n+m]\!]}\Vert_{\mathbb{S}_{T}(\widetilde{Q},p)}\\ &&\leq \widehat{C}\left\{ \Vert{e^{\alpha(T\wedge\tau)/2}}\overline{\xi}^{(n+m)}I_{\{\tau>n\}}\Vert_{L^p(\widetilde{Q})}+\Vert{e^{\alpha(\tau\wedge\cdot)}}S^{+}I_{]\!]{n},n+m]\!]}\Vert_{\mathbb{D}_T(\widetilde{Q}, p)}+\Vert e^{ {\alpha}(\tau\wedge\cdot)/2}f(\cdot,0,0)I_{]\!]{n},n+m]\!]}\Vert_{\mathbb{S}_T(\widetilde{Q}, p)}\right\}. \end{eqnarray*} Therefore, by combining this inequality with (\ref{limites}) and (\ref{Control4deltaF}), we deduce that \begin{eqnarray*} \lim_{n\to\infty}\sup_{m\geq 0} \Vert{e^{\alpha(\tau\wedge\cdot)/p}} \delta f\Vert_{\mathbb{S}_{(n+m)\wedge\tau}(\widetilde{Q},p)}=0. \end{eqnarray*} Hence, by combining this equality with (\ref{Estimate4GenreralPoof}) and (\ref{limites}), we conclude that the sequence $(\overline{Y}^{(n)}, \overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)} )$ is a Cauchy sequence in norm, and hence it converges in norm and almost surely along a subsequence, and its limit is a solution to (\ref{nonLinearINFINITE}). This proves assertion (a) of the theorem provided that assertion (c) is true. Furthermore, by applying Theorem \ref{estimates4GeneralUnbounded} to $(\overline{Y}^{(n)}, \overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)} )$, we get \begin{align*} & \Vert{e}^{\alpha\cdot/2}{\widetilde{\cal E}}^{1/p}\overline{Y}^{(n)}\Vert_{\mathbb{D}({P},p)}+ \Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p} \overline{Z}^{(n)}\Vert_{\mathbb{S}({P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\bigcdot \overline{M}^{(n)}\Vert_{{\cal{M}}^p(P,\mathbb{G})}+\Vert{e}^{{\alpha}\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\bigcdot \overline{K}^{(n)}_{T\wedge\tau}\Vert_{L^p(P)}\\ &\leq C({\alpha},p) \left\{\Vert{e^{\alpha\cdot/p}}\overline{f}^{(n)}(\cdot,0,0)\Vert_{\mathbb{S}_{T\wedge\tau}(\widetilde{Q},p)}+\Vert {e^{\alpha(T\wedge\tau)/p}} \overline{\xi}^{(n)}\Vert_{\mathbb{L}^p(\widetilde{Q})}+\Vert {e^{\alpha\cdot/p}}(\overline{S}^{(n)})^{+}\Vert_{\mathbb{D}_{T\wedge\tau}(\widetilde{Q},p)}\right\}. \end{align*} Hence, by using the convergence in norm of the sequence $(\overline{Y}^{(n)}, \overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)} )$ (or its almost sure convergence and Fatou's lemma) for the left-hand-side terms of the above inequality, and using (\ref{limitesBIS}) for its right-hand-side terms, the proof of assertion (b) follows immediately.\\ {\bf Step 3.} Here we prove assertion (c) under the assumption (\ref{BoundednessAssumpionInfinite}). Consider two triplets $(f, S^{(i)}, h^{(i)})$, $i=1,2$, satisfying (\ref{BoundednessAssumpionInfinite}).
To each triplet we associate a sequence $(\overline{f}^{(n)}, \overline{S}^{(n,i)}, \overline{h}^{(n,i)})$ defined via (\ref{overlineFn}). Thus, there exist two sequences $(\overline{Y}^{(n,i)}, \overline{Z}^{(n,i)},\overline{M}^{(n,i)},\overline{K}^{(n,i)} )$, $i=1,2$, that converge in norm and almost surely along subsequences to $(\overline{Y}^{\mathbb G,i}, \overline{Z}^{\mathbb G, i},\overline{M}^{\mathbb G, i},\overline{K}^{\mathbb G, i} )$, which is a solution to (\ref{nonLinearINFINITE}) associated with $(f, S^{(i)}, h^{(i)})$. Then by applying Theorem \ref{estimatesdelta} to the difference of solutions $$(\delta{Y}, \delta{Z},\delta{M},\delta{K} ):=(\overline{Y}^{(n,1)}, \overline{Z}^{(n,1)},\overline{M}^{(n,1)},\overline{K}^{(n,1)} )-(\overline{Y}^{(n,2)}, \overline{Z}^{(n,2)},\overline{M}^{(n,2)},\overline{K}^{(n,2)} ),$$ and the horizon $T=n$, we get \begin{align}\label{Estimate4GenreralPoofBis} &\Vert{e}^{\alpha\cdot/2}{\widetilde{\cal E}}^{1/p}\delta{Y}\Vert_{\mathbb{D}_{T}(\widetilde{P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\delta{Z}\Vert_{\mathbb{S}^p_{T}(\widetilde{P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\bigcdot \delta{M}\Vert_{{\cal{M}}^p(P,\mathbb{G})}\nonumber\\ & \leq C_2\Vert {e^{\alpha(T\wedge\tau)/p}} \delta \xi\Vert_{\mathbb{L}^p(\widetilde{Q})}+C_3\Vert {e^{\alpha(\tau\wedge\cdot)/2}} \delta S\Vert_{\mathbb{D}_{T}(\widetilde{Q})}^{1/2}\sup_{k\geq n}\sqrt{ \Delta(\xi^{(k)}, f^{(k)}, S^{(k)})}. \end{align} Here $\Delta(\xi^{(k)}, f^{(k)}, S^{(k)})$ is given by (\ref{Deltaxi(i)4proof}). Thus, by using (\ref{limitesBIS}) for each term on the right-hand side of the above inequality, and Fatou's lemma together with the almost sure convergence for its left-hand side, we conclude that assertion (c) holds. This ends the first part.\\ {\bf Part 2.} Here we drop the assumption (\ref{BoundednessAssumpionInfinite}). Let $(f,S,h)$ be a triplet and consider \begin{eqnarray*} T_n:=\inf\left\{t\geq 0\ :\ e^{\alpha{t}}\max(F^{(\alpha)}_t, \sup_{0\leq{u}\leq t}\vert S_u\vert^p)>n {\cal E}_t(G_{-}^{-1}\bigcdot m)\right\}. \end{eqnarray*} It is clear that $T_n$ is an $\mathbb F$-stopping time that converges to infinity almost surely. Then, to the triplet $(f,S,h)$, we associate the sequence $(f^{(n)},S^{(n)},h^{(n)})$ given by \begin{eqnarray}\label{SequenceTn} f^{(n)}:=fI_{\Lbrack0,T_n[\![},\quad S^{(n)}:=SI_{\Lbrack0,T_n[\![},\quad h^{(n)}_t:=h_tI_{\{e^{\alpha{t}}\vert{h}_t\vert^p\leq n{\cal E}_t(G_{-}^{-1}\bigcdot m)\}}I_{\Lbrack0,T_n[\![}(t). \end{eqnarray} {\bf Step 1.} Here we prove that assertion (c) holds. In fact, we consider two triplets $(f,S^{(i)},h^{(i)})$, $i=1,2$, and to each of them we associate a sequence $(f^{(n)},S^{(n,i)},h^{(n,i)})$ via (\ref{SequenceTn}). Therefore, for each $i=1,2$ and any $n\geq 1$, the triplet $(f^{(n)},S^{(n,i)},h^{(n,i)})$ fulfills (\ref{BoundednessAssumpionInfinite}), and hence, due to Part 1, there exists a unique solution $(\overline{Y}^{(n,i)}, \overline{Z}^{(n,i)},\overline{M}^{(n,i)},\overline{K}^{(n,i)} )$ that converges in norm and almost surely along a subsequence to $({Y}^{\mathbb G,i}, Z^{\mathbb G,i},M^{\mathbb G,i},K^{\mathbb G,i})$.
Furthermore, we apply assertion (c), which is established in Part 1 under (\ref{BoundednessAssumpionInfinite}), to the difference of solutions $$ \left({\delta}Y^{\mathbb{G},n}, \delta{Z}^{\mathbb{G},n},{\delta}M^{\mathbb{G},n} \right)=(\overline{Y}^{(n,1)}, \overline{Z}^{(n,1)},\overline{M}^{(n,1)},\overline{K}^{(n,1)} )-(\overline{Y}^{(n,2)}, \overline{Z}^{(n,2)},\overline{M}^{(n,2)},\overline{K}^{(n,2)} ),$$ and get \begin{align*} &E\left[\sup_{0\leq s\leq\tau }e^{{p\alpha}s/2}\widetilde{\cal E}_{s}\vert{\delta}Y^{{\mathbb{G},n}}_{s}\vert^p+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}\vert \delta{Z}^{{\mathbb{G},n}}_{s}\vert^{2}ds\right)^{p/2}+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}d[ {\delta}M^{\mathbb{G},n},{\delta} M^{\mathbb{G},n}]_s\right)^{p/2}\right]\\ &\leq C_1 E\left[\int_0^{T_n} \left\{e^{{\alpha}t}\vert{\delta}h_t\vert^p +\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert\delta{S}_u\vert^p\right\}dV^{\mathbb F}_t\right]\nonumber\\ &+C_2\sqrt{E\left[\int_0^{T_n}\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert\delta{S}_u\vert^pdV^{\mathbb F}_t\right]}\sqrt{\sum_{i=1}^2E\left[\int_0^{T_n} \left\{e^{{p\alpha}t/2}\vert{h}^{(i)}_t\vert^p +(F^{(\alpha)}_t)^p+\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert({S}^{(i)}_u)^+\vert^p\right\}dV^{\mathbb F}_t\right]}. \end{align*} Therefore, by letting $n$ go to infinity on both sides, we deduce that assertion (c) holds. \\ {\bf Step 2.} This step proves assertions (a) and (b) of the theorem.\\ To this end, for any nonnegative integers $n$ and $k$, we associate the double sequence \begin{eqnarray*} \overline{f}^{(n,k)}:=f^{(n)}I_{[\![ 0,k]\!]},\quad\overline{S}^{(n,k)}_t:=S^{(n)}_{t\wedge{k}},\quad\overline{h}^{(n,k)}:=h^{(n)} I_{[\![ 0,k]\!]},\quad \overline{\xi}^{(n,k)}:=\overline{h}^{(n,k)}_{\tau}. \end{eqnarray*} Thus, on the one hand, Theorem \ref{alkd} yields the existence of a solution $(\overline{Y}^{(n,k)}, \overline{Z}^{(n,k)},\overline{M}^{(n,k)},\overline{K}^{(n,k)} )$ to (\ref{nonLinearINFINITE}) associated with $(\overline{f}^{(n,k)},\overline{S}^{(n,k)},\overline{h}^{(n,k)})$. On the other hand, Part 1 of this proof implies the existence of a solution $(\overline{Y}^{(n)}, \overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)} )$ to (\ref{nonLinearINFINITE}) associated with $({f}^{(n)},{S}^{(n)},{h}^{(n)})$, which is the limit of $(\overline{Y}^{(n,k)}, \overline{Z}^{(n,k)},\overline{M}^{(n,k)},\overline{K}^{(n,k)} )$ as $k$ goes to infinity, in norm and almost surely along a subsequence.
Now, we consider nonnegative integers $n$, $m$ and $k$, follow Step 2 of Part 1, and apply Theorem \ref{estimatesdelta} to the difference of solutions \begin{eqnarray*} \left(\delta Y^{(k)}, \delta Z^{(k)},\delta M^{(k)},\delta K^{(k)}\right):=(\overline{Y}^{(n+m,k)}, \overline{Z}^{(n+m,k)},\overline{M}^{(n+m,k)},\overline{K}^{(n+m,k)} )-(\overline{Y}^{(n,k)}, \overline{Z}^{(n,k)},\overline{M}^{(n,k)},\overline{K}^{(n,k)} ), \end{eqnarray*} and get \begin{align}\label{Estimate4GeneralPoofBis} &\Vert{e}^{\alpha\cdot/2}{\widetilde{\cal E}}^{1/p}{\delta}Y^{(k)}\Vert_{\mathbb{D}_{T}(\widetilde{P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p} \delta{Z}^{(k)}\Vert_{\mathbb{S}^p_{T}(\widetilde{P},p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\bigcdot {\delta}M^{(k)}\Vert_{{\cal{M}}^p(P,\mathbb{G})}\nonumber\\ & \leq C_1\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \delta f^{(k)}\Vert_{\mathbb{S}^p_{T}(\widetilde{Q})}+ C_2\Vert {e^{\alpha(T\wedge\tau)/p}} \delta \xi^{(k)}\Vert_{\mathbb{L}^p(\widetilde{Q})}+C_3\Vert {e^{\alpha(\tau\wedge\cdot)/2}} \delta S^{(k)}\Vert_{\mathbb{D}_{T}(\widetilde{Q})}^{1/2}\sup_{l\geq n}\sqrt{ \Delta(l,k)}. \end{align} Here $\Delta(l,k)$ is given by \begin{eqnarray}\label{Deltaxi(i)4proofBis} \Delta(l,k):=\Vert {e^{\alpha(T\wedge\tau)/p}} \overline{\xi}^{(l,k)}\Vert_{\mathbb{L}^p(\widetilde{Q})}+\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \overline{f}^{(l,k)}(\cdot,0,0)\Vert_{\mathbb{S}_{T}(\widetilde{Q},p)} +\Vert {e^{\alpha(\tau\wedge\cdot)/2}} ( \overline{S}^{(l,k)})^{+}\Vert_{\mathbb{D}_{T}(\widetilde{Q},p)}. \end{eqnarray} Furthermore, we have \begin{eqnarray}\label{Control4deltaFBis} &&\vert \delta f_t^{(k)}\vert:=\vert (\overline{f}^{(n+m,k)}-\overline{f}^{(n,k)})(t,\overline{Y}^{(n+m,k)}_t, \overline{Z}^{(n+m,k)}_t)\vert=\vert {f}(t,\overline{Y}^{(n+m,k)}_t, \overline{Z}^{(n+m,k)}_t)\vert{I}_{\{T_n<t\leq T_{n+m}\}}I_{\{t\leq k\}}\nonumber\\ &&\leq \vert{f}(t,0,0)\vert{I}_{\{T_n<t\leq T_{n+m}\}}I_{\{t\leq k\}}+C_{lip}(\vert\overline{Y}^{(n+m,k)}_t\vert+\vert\overline{Z}^{(n+m,k)}_t\vert){I}_{\{T_n<t\leq T_{n+m}\}}I_{\{t\leq k\}}. \end{eqnarray} On the other hand, thanks to Theorem \ref{WhyNot} applied to $(\overline{Y}^{(n+m,k)}, \overline{Z}^{(n+m,k)},\overline{M}^{(n+m,k)},\overline{K}^{(n+m,k)} )$ and $\sigma=T_n$, we deduce that \begin{eqnarray*} &&\Vert{e}^{\alpha\cdot/2}\overline{Z}^{(n+m,k)}I_{]\!]{T_n},T_{n+m}]\!]}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q},p)}+\Vert{e}^{\alpha\cdot/2} \overline{Y}^{(n+m,k)}I_{]\!]{T_n},T_{n+m}]\!]}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q},p)}\\ &&\leq \widehat{C}\left\{ \Vert{e^{\alpha(T\wedge\tau)/2}}\overline{\xi}^{(n+m,k)}I_{\{\tau>T_n\}}\Vert_{L^p(\widetilde{Q})}+\Vert{e^{\alpha(\tau\wedge\cdot)}}S^{+}I_{]\!]{T_n},T_{n+m}]\!]}\Vert_{\mathbb{D}_{k\wedge\tau}(\widetilde{Q}, p)}\right\}\\ &&+ \widehat{C}\Vert e^{ {\alpha}(\tau\wedge\cdot)/2}f(\cdot,0,0)I_{]\!]{T_n},T_{n+m}]\!]}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q}, p)}.
\end{eqnarray*} Therefore, by combining this inequality with (\ref{Control4deltaFBis}) and Lemma \ref{ExpecationQtilde2P}, we deduce the existence of $\widehat{C}_i$, $i=1,2,3,$ that depend on $p$ and $\alpha$ only such that \begin{align*} &\displaystyle\liminf_{k\to\infty}\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \delta f^{(k)}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q},p)}\\ &\leq \widehat{C}_1\liminf_{k\to\infty} \Vert{e^{\alpha(\tau\wedge\cdot)/2}}{f}(\cdot,0,0){I}_{]\!]{T}_n,T_{n+m}]\!]}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q},p)}+\displaystyle\widehat{C}_2\liminf_{k\to\infty}\Vert{e^{\alpha(k\wedge\tau)/2}}\overline{\xi}^{(n+m,k)}I_{\{\tau>T_n\}}\Vert_{L^p(\widetilde{Q})}\\ &\displaystyle+\widehat{C}_3\liminf_{k\to\infty}\Vert{e^{\alpha(\tau\wedge\cdot)}}S^{+}I_{]\!]{T_n},T_{n+m}]\!]}\Vert_{\mathbb{D}_{k\wedge\tau}(\widetilde{Q}, p)}\\ &\leq \max(\widehat{C}_1,\widehat{C}_2,\widehat{C}_3) E\left[\int_{T_n}^{T_{n+m}}\left\{(F^{(\alpha)}_t)^p+\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}(S_u^+)^p+e^{\alpha{t}}\vert{h_t}\vert^p\right\}dV^{\mathbb F}_t\right]. \end{align*} As a result, by virtue of (\ref{MainAssumption4InfiniteHorizonNonlinear}), the above inequality implies that \begin{eqnarray}\label{LimitZeroGeneral} \lim_{n\to\infty}\sup_{m\geq 1}\liminf_{k\to\infty}\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \delta f^{(k)}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q},p)}=0. \end{eqnarray} Then by taking the limit when $k$ goes to infinity in (\ref{Estimate4GeneralPoofBis}) and using Lemma \ref{ExpecationQtilde2P} again, we get \begin{align*} &\Vert{e}^{\alpha\cdot/2}{\widetilde{\cal E}}^{1/p}(\overline{Y}^{(n+m)}-\overline{Y}^{(n)})\Vert_{\mathbb{D}(P,p)}+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}(\overline{Z}^{(n+m)}-\overline{Z}^{(n)})\Vert_{\mathbb{S}^p(P,p)}\\ &+\Vert{e}^{\alpha\cdot/2}(\widetilde{\cal E}_{-})^{1/p}\bigcdot (\overline{M}^{(n+m)}-\overline{M}^{(n)})\Vert_{{\cal{M}}^p(P,\mathbb{G})}\nonumber\\ & \leq C_1\liminf_{k\to\infty}\Vert{e^{\alpha(\tau\wedge\cdot)/p}} \delta f^{(k)}\Vert_{\mathbb{S}_{k\wedge\tau}(\widetilde{Q},p)}+ C_2E\left[\int_{T_n}^{T_{n+m}}e^{\alpha{t}}\vert{h_t}\vert^pdV^{\mathbb F}_t\right]\\ &+C_3\sqrt{E\left[\int_{T_n}^{T_{n+m}}\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}\vert{S}_u\vert^pdV^{\mathbb F}_t\right]}\sqrt{E\left[\int_0^{\infty}\left\{(F^{(\alpha)}_t)^p+\sup_{0\leq{u}\leq{t}}e^{\alpha{u}}(S_u^+)^p+e^{\alpha{t}}\vert{h_t}\vert^p\right\}dV^{\mathbb F}_t\right]}.\end{align*} Therefore, by combining this with (\ref{LimitZeroGeneral}), we conclude that $(\overline{Y}^{(n)},\overline{Z}^{(n)},\overline{M}^{(n)})$ is a Cauchy sequence, and hence it converges to $(Y^{\mathbb G}, Z^{\mathbb G},M^{\mathbb G})$ in norm and almost surely along a subsequence. Then the convergence of $\overline{K}^{(n)}$ to $K^{\mathbb G}$ follows immediately, and hence $(Y^{\mathbb G}, Z^{\mathbb G},M^{\mathbb G},K^{\mathbb G})$ is a solution to (\ref{nonLinearINFINITE}). This proves assertion (a). Assertion (b) follows from Part 1 of this proof, which guarantees that assertion (b) holds for each $(\overline{Y}^{(n)},\overline{Z}^{(n)},\overline{M}^{(n)},\overline{K}^{(n)})$ ($n\geq 0$), and from taking the limit afterwards in the obtained inequality. This ends the proof of the theorem.\end{proof} \subsection{An RBSDE under $\mathbb F$ with infinite horizon and its relationship to (\ref{nonLinearINFINITE})} In this subsection, we derive the second main result of this section, which addresses the RBSDE under $\mathbb F$ given below and connects it to (\ref{nonLinearINFINITE}).
\begin{eqnarray}\label{RBSDEFGENERALInfinite} \begin{cases} Y_{t}=\displaystyle\int_{t}^{\infty}f^{\mathbb{F}}(s,Y_s,Z_s)ds+\int_{t}^{\infty}h_{s}dV^{\mathbb{F}}_{s}+K_{\infty}-K_t-\int_{t}^{\infty}Z_{s}dW_{s},\\ Y_{t}\geq S_{t}^{\mathbb{F}},\quad t\geq 0,\quad \displaystyle{E}\left[\int_{0}^{\infty}(Y_{t-}-S_{t-}^{\mathbb{F}})dK_{t}\right]=0. \end{cases} \end{eqnarray} Here $(f^{\mathbb{F}},S^{\mathbb{F}}, {\widetilde{\cal E}})$ denote the functionals defined via (\ref{Data4RBSDE(F)}). First of all, remark that a solution to the above RBSDE is any triplet $(Y, Z, K)$ such that $ \lim_{t\to\infty}Y_t$ exists almost surely and is null, and \begin{eqnarray*} \begin{cases} dY_{t}=-f^{\mathbb{F}}(t,Y_t,Z_t)dt-h_tdV^{\mathbb{F}}_t-dK_t+Z_tdW_t,\\ Y_{t}\geq S_{t}^{\mathbb{F}},\quad t\geq 0,\quad \displaystyle{E}\left[\int_{0}^{\infty}(Y_{t-}-S_{t-}^{\mathbb{F}})dK_{t}\right]=0. \end{cases} \end{eqnarray*} This RBSDE generalizes Hamad\`ene et al.\ \cite{Hamadane1} in many aspects. First of all, our obstacle process $S^{\mathbb{F}}$ is an arbitrary RCLL process and need not be continuous. Furthermore, we do not require the component $(Y,K)$ of the solution to be continuous. Besides this, our RBSDE has an additional term, $\int_0^{\cdot}h_{s}dV^{\mathbb{F}}_{s}$, which might not be absolutely continuous with respect to the Lebesgue measure. \begin{theorem}\label{RBSDEinfite4F} Let $p\in (1,+\infty)$, let $(h,S)$ be a pair of $\mathbb F$-optional processes, let $f$ be a functional satisfying (\ref{LipschitzAssumption}), and let $(f^{\mathbb{F}},S^{\mathbb{F}}, {\widetilde{\cal E}})$ be given by (\ref{Data4RBSDE(F)}). Suppose that $G>0$ and that there exists $\alpha>\max(\alpha_0(p),\alpha_1(p))$ such that \begin{eqnarray}\label{additiona4GenertalINfinity} \mbox{(\ref{MainAssumption4InfiniteHorizonNonlinear}) holds and}\quad E\left[\left(\widetilde{\cal E}_{\infty}F^{(\alpha)}_{\infty}\right)^p\right]<+\infty. \end{eqnarray} Then the RBSDE (\ref{RBSDEFGENERALInfinite}) has a unique $L^p(P,\mathbb F)$-solution $(Y^{\mathbb{F}}, Z^{\mathbb{F}},K^{\mathbb F})$. \end{theorem} The proof of this theorem is based on the following lemma. \begin{lemma}\label{Inequality4BSDEunderF} For $\alpha>\max(\alpha_0(p),\alpha_1(p))$, there exist positive constants $C_i$, $i=1,2,3,4$, that depend on $\alpha$ and $p$ only such that $C_1C_{lip}<1$ and the following assertions hold.\\ {\rm{(a)}} If $(Y^{i}, Z^i, K^i)$ is a solution to the RBSDE (\ref{RBSDEsaid}) associated with $(f^i,S^i, \xi^i)$, $i=1,2$, then \begin{eqnarray} \Vert\delta{Y}\Vert_{\mathbb{D}(P,p)}+\Vert\delta{Z}\Vert_{\mathbb{S}(P,p)}\leq C_1\Vert\delta{f}\Vert_{\mathbb{S}(P,p)}+C_2\Vert\delta\xi\Vert_{L^p(P)}+C_3\sqrt{\Vert\delta{S}\Vert_{\mathbb{S}(P,p)}}\sqrt{\Vert\mbox{Var}_T(\delta{K})\Vert_{L^p(P)}}.\end{eqnarray} {\rm{(b)}} If $(Y, Z, K)$ is a solution to the RBSDE (\ref{RBSDEsaid}), then \begin{eqnarray} \Vert{Y}\Vert_{\mathbb{D}(P,p)}+\Vert{Z}\Vert_{\mathbb{S}(P,p)}+\Vert{Y}\Vert_{\mathbb{S}(P,p)}+\Vert{K}_T\Vert_{L^p(P)}\leq C_4\left\{\Vert{f}(\cdot, 0,0)\Vert_{\mathbb{S}(P,p)}+\Vert\xi\Vert_{L^p(P)}+\Vert{S^+}\Vert_{\mathbb{S}(P,p)}\right\}.\end{eqnarray} \end{lemma} The proof of this lemma mimics those of Theorems \ref{WhyNot} and \ref{uniquenessNonlinear}, and will be omitted here. \begin{proof}[Proof of Theorem \ref{RBSDEinfite4F}] Remark that, due to assumption (\ref{MainAssumption4InfiniteHorizonNonlinear}), the nondecreasing process $U:=\int_0^{\cdot}{h_s}{d}V^{\mathbb F}_s$ has a limit at infinity.
Put \begin{eqnarray*} \widetilde{f}^{\mathbb{F}}(s,y,z)=f^{\mathbb{F}}(s,y-U_s,z),\quad \widetilde{S}^{\mathbb{F}}:=S^{\mathbb{F}}+U,\quad\mbox{and}\quad \widehat{\xi}:=U_{\infty}=\int_0^{\infty}h_sdV^{\mathbb F}_s. \end{eqnarray*} Then $(\overline{Y}, \overline{Z},\overline{K})$ is a solution to (\ref{RBSDEFGENERALInfinite}) if and only if $(Y',Z',K'):=(\overline{Y}+U, \overline{Z},\overline{K})$ is a solution to \begin{eqnarray}\label{RBSDEsaid} \begin{cases} Y_{t}= \widehat{\xi}+\displaystyle\int_{t}^{\infty}\widetilde{f}^{\mathbb{F}}(s,Y_s,Z_s)ds+K_{\infty}-K_t-\int_{t}^{\infty}Z_{s}dW_{s},\\ \\ Y_{t}\geq \widetilde{S}_{t}^{\mathbb{F}},\quad t\geq 0,\quad \displaystyle{E}\left[\int_{0}^{\infty}(Y_{t-}-\widetilde{S}_{t-}^{\mathbb{F}})dK_{t}\right]=0. \end{cases} \end{eqnarray} Now, we define the sequence $(Y^{(n)}, Z^{(n)}, K^{(n)})$ as follows: $(Y^{(0)}, Z^{(0)}, K^{(0)}):=(0, 0, 0)$, and $(Y^{(n)}, Z^{(n)}, K^{(n)})$ is the unique solution to \begin{eqnarray}\label{linear4n} Y^{(n)}_t=\widehat{\xi}+\int_t^{\infty}{\widetilde{ f}}^{\mathbb F}(s, Y^{(n-1)}_s, Z^{(n-1)}_s)ds -\int_t^{\infty} Z^{(n)}_s dW_s+K^{(n)}_{\infty}-K^{(n)}_t. \end{eqnarray} The existence and uniqueness of this solution are guaranteed by Theorem \ref{Relationship4InfiniteBSDE}. Thus, by applying Lemma \ref{Inequality4BSDEunderF} to $(Y^{(i)}, Z^{(i)}, K^{(i)})$ and $(\delta{Y},\delta{Z},\delta{K}):=\left(Y^{(n+m)}-Y^{(n)}, Z^{(n+m)}-Z^{(n)}, K^{(n+m)}-K^{(n)}\right)$, we deduce that \begin{eqnarray*} &&\sup_{i\geq 0}\Vert(Y^{(i)}, Z^{(i)}, K^{(i)})\Vert:=\sup_{i\geq 0} \left\{\Vert{Y^{(i)}}\Vert_{\mathbb{D}(P,p)}+\Vert{Z^{(i)}}\Vert_{\mathbb{S}(P,p)}+\Vert{Y^{(i)}}\Vert_{\mathbb{S}(P,p)}+\Vert{K^{(i)}}_T\Vert_{L^p(P)}\right\}<+\infty\quad\mbox{and}\\ &&\Vert\delta{Y}\Vert_{\mathbb{D}(P,p)}+\Vert\delta{Z}\Vert_{\mathbb{S}(P,p)}=:\Vert(\delta{Y},\delta{ Z})\Vert\leq C_1\Vert\delta{f}\Vert\leq C_1C_{lip}\Vert(Y^{(n+m-1)}-Y^{(n-1)}, Z^{(n+m-1)}- Z^{(n-1)})\Vert. \end{eqnarray*} Thus, by iterating this inequality, we get \begin{eqnarray*} \Vert(Y^{(n+m)}-Y^{(n)}, Z^{(n+m)}- Z^{(n)})\Vert\leq (C_1C_{lip})^n \Vert(Y^{(m)}, Z^{(m)})\Vert\leq (C_1C_{lip})^n \sup_{i\geq 0}\Vert(Y^{(i)}, Z^{(i)}, K^{(i)})\Vert.\end{eqnarray*} This proves that the sequence $(Y^{(n)}, Z^{(n)})$ is a Cauchy sequence, and hence it converges in norm and almost surely along a subsequence to $(Y, Z)$. Then the convergence of $K^{(n)}$ to some $K$ follows immediately from the RBSDE (\ref{RBSDEsaid}), and therefore the triplet $(Y, Z, K)$ is a solution to (\ref{RBSDEsaid}). The uniqueness of the solution to (\ref{RBSDEsaid}) is a direct consequence of Lemma \ref{Inequality4BSDEunderF}-(a). This ends the proof of the theorem. \end{proof} Below, we establish the relationship between the solution of (\ref{nonLinearINFINITE}) and that of (\ref{RBSDEFGENERALInfinite}). \begin{theorem}\label{Relatiuonship4GeneralINifity} Suppose that the assumptions of Theorem \ref{RBSDEinfite4F} hold.
Then both RBSDEs (\ref{RBSDEFGENERALInfinite}) and (\ref{nonLinearINFINITE}) have unique solutions $(Y^{\mathbb{F}}, Z^{\mathbb{F}},K^{\mathbb F})$ and $(Y^{\mathbb{G}}, Z^{\mathbb{G}},K^{\mathbb G},M^{\mathbb G})$ respectively, and they satisfy \begin{eqnarray} Y^{\mathbb{G}}= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi I_{[\![\tau,+\infty[\![},\ Z^{\mathbb{G}}=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Rbrack0,\tau]\!]},\ K^{\mathbb{G}}=\frac{1}{{\widetilde{\cal E}_{-}}}\bigcdot (K ^{\mathbb{F}})^{\tau}\quad\mbox{and}\quad M^{\mathbb{G}}=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}.\label{secondrelation4Generalinfinity} \end{eqnarray} \end{theorem} \begin{proof} Thanks to Theorems \ref{RBSDEinfite4F} and \ref{alkdINFINITE}, it is clear that both RBSDEs (\ref{RBSDEFGENERALInfinite}) and (\ref{nonLinearINFINITE}) have unique solutions. This proves the first claim of the theorem, while the proof of (\ref{secondrelation4Generalinfinity}) follows immediately as soon as we prove that $(\overline{Y}, \overline{Z},\overline{M},\overline{K})$ given by \begin{eqnarray*} \overline{Y}:= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi I_{[\![\tau,+\infty[\![},\ \overline{Z}:=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Rbrack0,\tau]\!]},\ \overline{K}:=\frac{1}{{\widetilde{\cal E}_{-}}}\bigcdot (K ^{\mathbb{F}})^{\tau}\quad\mbox{and}\quad \overline{M}:=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}\end{eqnarray*} is a solution to (\ref{nonLinearINFINITE}). This latter fact can be proved by following exactly the steps of Step 2 in the proof of Theorem \ref{Relationship4InfiniteBSDE}. This ends the proof of the theorem. \end{proof} We end this section by elaborating the BSDE version of this general case with unbounded horizon. \begin{theorem}\label{BSDEinfini} Let $p\in (1,+\infty)$, let $h$ be an $\mathbb F$-optional process, and let $f(t,y,z)$ be a functional satisfying (\ref{LipschitzAssumption}). Suppose that $G>0$ and that there exists $\alpha>\max(\alpha_0(p),\alpha_1(p))$ such that \begin{eqnarray}\label{MainAssumption4BSDEinfini} E\left[\left(\widetilde{\cal E}_{\infty}F^{(\alpha)}_{\infty}\right)^p+\int_0^{\infty}\left\{e^{{p\alpha}t/2}\vert{h}_t\vert^p+(F^{(\alpha)}_t)^p\right\}dV^{\mathbb F}_t\right]<+\infty. \end{eqnarray} Then the following assertions hold.\\ {\rm{(a)}} There exists a unique solution ($Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}}$) to the following BSDE \begin{eqnarray}\label{BSDEinifiteDynamics} dY_{t}=-f(t,Y_{t},Z_{t})d(t\wedge\tau)-dM_{t\wedge\tau}+Z_{t}dW_{t\wedge\tau},\quad{Y}_{\tau}=\xi=h_{\tau}.
\end{eqnarray} {\rm{(b)}} For $(f^{\mathbb{F}},S^{\mathbb{F}}, {\widetilde{\cal E}})$ defined via (\ref{Data4RBSDE(F)}), the following BSDE under $\mathbb F$ \begin{eqnarray}\label{BSDEFInfinite} Y_{t}=\int_{t}^{\infty}f^{\mathbb{F}}(s,Y_s,Z_s)ds+\int_{t}^{\infty}h_{s}dV^{\mathbb{F}}_{s}-\int_{t}^{\infty}Z_{s}dW_{s},\end{eqnarray} has a unique solution, which we denote by $(Y^{\mathbb{F}}, Z^{\mathbb{F}})$.\\ {\rm{(c)}} The two solutions $(Y^{\mathbb{G}},Z^{\mathbb{G}},M^{\mathbb{G}})$ and ($Y^{\mathbb{F}},Z^{\mathbb{F}}$) satisfy \begin{eqnarray} Y^{\mathbb{G}}= \frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}+\xi I_{[\![\tau,+\infty[\![},\ Z^{\mathbb{G}}=\frac{Z^{\mathbb{F}}}{{\widetilde{\cal E}}}I_{\Rbrack0,\tau]\!]},\quad\mbox{and}\quad M^{\mathbb{G}}=\left(h-\frac{Y^{\mathbb{F}}}{{\widetilde{\cal E}}}\right)\bigcdot N^{\mathbb{G}}.\label{secondrelationn} \end{eqnarray} {\rm{(d)}} There exists $C(\alpha,p)>0$ that depends on $\alpha$ and $p$ only such that \begin{align*} &E\left[\sup_{0\leq s\leq\tau }e^{{p\alpha}s/2}\widetilde{\cal E}_{s}\vert{Y}^{{\mathbb{G}}}_{s}\vert^p+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert Z^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}d[ M^{\mathbb{G}}, M^{\mathbb{G}}]_s\right)^{p/2}\right]\\ &\leq C(\alpha,p) E\left[\int_0^{\infty} \left\{e^{{p\alpha}t/2}\vert{h}_t\vert^p +(F^{(\alpha)}_t)^p\right\}dV^{\mathbb F}_t\right]. \end{align*} {\rm{(e)}} Let $(f^{(i)}, h^{(i)})$, $i=1,2$, be two pairs satisfying (\ref{MainAssumption4BSDEinfini}), and $(Y^{\mathbb{G},i},Z^{\mathbb{G},i},M^{\mathbb{G},i})$ be the solutions to their corresponding BSDE (\ref{BSDEinifiteDynamics}). There exists $C(\alpha, p)>0$ that depends on $\alpha$ and $p$ only such that \begin{align*} &E\left[\sup_{0\leq s\leq\tau }e^{{p\alpha}s/2}\widetilde{\cal E}_{s}\vert{\delta}Y^{{\mathbb{G}}}_{s}\vert^p+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s})^{2/p}\vert \delta{Z}^{{\mathbb{G}}}_{s}\vert^{2}ds\right)^{p/2}+\left(\int_{0}^{\tau}e^{\alpha s}(\widetilde{\cal E}_{s-})^{2/p}d[ {\delta}M^{\mathbb{G}},{\delta} M^{\mathbb{G}}]_s\right)^{p/2}\right]\\ &\leq C(\alpha, p) E\left[\int_0^{\infty} \left\{e^{{p\alpha}t/2}\vert{\delta}h_t\vert^p +(\delta{F}^{(\alpha)}_t)^p\right\}dV^{\mathbb F}_t\right]. \end{align*} \end{theorem} \begin{proof} Similarly to the proof of Theorem \ref{LinearBSDEcase}, we notice that a BSDE is a particular case of an RBSDE. In fact, a BSDE is an RBSDE with $S\equiv -\infty$, which forces the predictable finite-variation part of its solution to vanish, that is, $K\equiv 0$. Thus, taking this into account, we remark that when $(S, K)\equiv (-\infty, 0)$, assumption (\ref{MainAssumption4InfiniteHorizonNonlinear}) reduces to (\ref{MainAssumption4BSDEinfini}), and the theorem follows by combining Theorems \ref{RBSDEinfite4F} and \ref{alkdINFINITE}.\end{proof} \begin{appendices} \section{Some martingale inequalities} We start this section by recalling an important theorem from the theory of martingale inequalities that goes back to Dellacherie and Meyer; see \cite[Th\'eor\`eme 99, Chapter VI]{DELLACHERIE}. \begin{theorem}\label{DellacherieAndMeyer} Consider a complete filtered probability space $\left(\Omega, {\cal F}, \mathbb H=({\cal H}_t)_{0\leq t\leq T}, P\right)$.
Let $A$ be a predictable (respectively optional) increasing process whose potential (respectively left potential) $Z$ is bounded above by an RCLL martingale $M_{t}=E[M_{\infty}|\mathcal{H}_{t}]$. Then \begin{align} \Vert{ A}_{\infty}\Vert_{\Phi}\leq p_{\Phi}\Vert{M}_{\infty}\Vert_{\Phi}, \end{align} where $\Phi$ is an increasing convex function and $p_{\Phi}$ is its associated constant, defined by \begin{equation*} \Phi(t):=\int_{0}^{t}\phi(s)ds,\quad p_{\Phi}:=\sup_{t}{{t\phi(t)}\over{\Phi(t)}}, \end{equation*} for some right-continuous increasing function $\phi$ which is positive on $\mathbb{R}^{+}$. \end{theorem} The following lemma, which plays a crucial role in our estimates, is interesting in itself and generalizes \cite[Lemma 4.8]{Choulli4}. \begin{lemma}\label{Lemma4.8FromChoulliThesis} If $r^{-1}=a^{-1}+b^{-1},$ where $ a>1$ and $b>1$, then there exists a positive constant $\kappa=\kappa(a,b)$ depending only on $a$ and $b$ such that the following assertion holds.\\ For any triplet $(H, X, M)$ such that $H$ is predictable, $X$ is an RCLL adapted process, $M$ is a martingale, and $\vert{H}\vert \leq \vert{X_{-}}\vert$, the following inequality holds. \begin{equation*} \Vert \sup_{0\leq{t}\leq{T}}\vert(H\bigcdot M)_t\vert\Vert_r\leq \kappa\Vert\sup_{0\leq{t}\leq{T}}\vert{X}_t\vert\Vert_a\Vert[M]_T^{\frac{1}{2}}\Vert_b. \end{equation*} \end{lemma} \begin{proof} When $H=X_{-}$, the assertion can be found in \cite[Lemma 4.8]{Choulli4}. To prove the general case, we remark that there is no loss of generality in assuming $\vert X_{-}\vert>0$, and hence $H/X_{-}$ is a well-defined predictable process that is bounded by one. Therefore, we put \begin{eqnarray*} \overline{M}:={{H}\over{X_{-}}}\bigcdot M,\end{eqnarray*} and remark that $[\overline{M},\overline{M}]=(H/X_{-})^2\bigcdot [M, M]\leq [M, M]$. Thus, we derive \begin{align*} \Vert\sup_{0\leq{t}\leq{T}}\vert({H}\bigcdot M)_t\vert\Vert_r&=\Vert\sup_{0\leq{t}\leq{T}}\vert({X_{-}}\bigcdot \overline{M})_t\vert\Vert_r\leq \kappa \Vert\sup_{0\leq{t}\leq{T}}\vert{X}_t\vert\Vert_a\Vert[\overline{M}]_T^{\frac{1}{2}}\Vert_b\\ &\leq \kappa\Vert\sup_{0\leq{t}\leq{T}}\vert{X}_t\vert\Vert_a\Vert[M, M]_T^{\frac{1}{2}}\Vert_b.\end{align*} This ends the proof of the lemma. \end{proof} \section{Proofs of Lemmas \ref{stoppingTimeLemma}, \ref{Solution2SnellEnvelop}, \ref{L/EpsilonTilde}, \ref{Lemma4.11}, \ref{ExpecationQtilde2P} and \ref{technicallemma1} }\label{Appendix4Proofs} \begin{proof}[Proof of Lemma \ref{stoppingTimeLemma}] Thanks to \cite{DellacherieMeyer92}, for our $\mathbb{G}$-stopping time $\sigma^{\mathbb{G}}$, there exists an $\mathbb{F}$-stopping time $\sigma$ such that \begin{equation*} \sigma^{\mathbb{G}}=\sigma^{\mathbb{G}}\wedge\tau=\sigma\wedge\tau. \end{equation*} Put \begin{equation}\label{sigmaFdefinition} \sigma^{\mathbb{F}}:=\min\left(\max(\sigma,\sigma_{1}),\sigma_{2}\right), \end{equation} and on the one hand remark that $ \sigma^{\mathbb{F}}$ is an $\mathbb{F}$-stopping time satisfying the first condition in (\ref{sigmaF}).
On the other hand, it is clear that \begin{eqnarray*} \min(\tau, \max(\sigma, \sigma_1))=(\tau\wedge\sigma_1)I_{\{\sigma_1>\sigma\}}+(\tau\wedge\sigma)I_{\{\sigma_1\leq\sigma\}}=\max(\sigma\wedge\tau, \sigma_1\wedge\tau).\end{eqnarray*} Thus, by using this equality, we derive \begin{eqnarray*} \sigma^{\mathbb{F}}\wedge\tau&&= \tau\wedge\sigma_2\wedge\max(\sigma,\sigma_1)=(\tau\wedge\sigma_2)\wedge(\tau\wedge\max(\sigma,\sigma_1))\\ &&=(\tau\wedge\sigma_2)\wedge\max(\sigma\wedge\tau, \sigma_1\wedge\tau)=\sigma\wedge\tau=\sigma^{\mathbb G}. \end{eqnarray*} This ends the proof of the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{Solution2SnellEnvelop}] Let $\nu\in\mathcal{J}_{t\wedge\tau}^{T\wedge\tau}(\mathbb{G})$. By using (\ref{RBSDEG}) and by taking the conditional expectation under $\widetilde{Q}$ afterwards, we get \begin{align} Y_{t\wedge\tau}&=E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\nu\wedge\tau}f(s)ds+Y_{\nu\wedge\tau}+K_{\nu\wedge\tau}-K_{t\wedge\tau}\ \Big|\ \mathcal{G}_{t}\right]\nonumber\\ &\geq E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\nu\wedge\tau}f(s)ds+S_{\nu\wedge\tau}1_{\{\nu\ <\tau\wedge T\}}+\xi1_{\{\nu\ =\tau\wedge T\}}\ \Big|\ \mathcal{G}_{t}\right]\nonumber\\ &\geq\mbox{ess}\sup_ {\theta\in \mathcal{J}_{t\wedge\tau,T\wedge\tau}(\mathbb{G})}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\theta}f(s)ds+S_{\theta}1_{\{\theta<\tau\wedge T\}}+\xi{I}_{\{\theta=\tau\wedge T\}}\ \Big|\ \mathcal{G}_{t}\right].\label{sameUntilHere} \end{align} To prove the reverse inequality, we consider the following sequence of stopping times \begin{eqnarray*} \theta_{n}:=\inf\left\{t\wedge\tau\leq u\leq T\wedge\tau;\quad Y_{u}<S_{u}+\frac{1}{n}\right\}\wedge (T\wedge\tau),\quad n\geq 1. \end{eqnarray*} Then it is clear that $\theta_{n}\in\mathcal{J}_{t\wedge\tau}^{T\wedge\tau}(\mathbb{G})$, and \begin{eqnarray*} Y-S\geq\frac{1}{n}\quad \mbox{on }\quad [\![ t\wedge\tau ,\theta_{n}[\![,\quad\mbox{and}\quad Y_{-}-S_{-}\geq\frac{1}{n}\quad \mbox{on }\quad ]\!] t\wedge\tau ,\theta_{n}]\!]. \end{eqnarray*} As a result, we get $I_{ ]\!] t\wedge\tau ,\theta_{n}]\!]}\bigcdot K\equiv 0$, and hence using (\ref{RBSDEG}) again we deduce that \begin{align*} Y_{t\wedge\tau}&=Y_{\theta_{n}}+\int_{t\wedge\tau}^{\theta_{n}}f(s)ds+\int_{t\wedge\tau}^{\theta_{n}}d(K+M)_{s}-\int_{t\wedge\tau}^{\theta_{n}}Z_{s}dW_{s}^{\tau}\\ &=Y_{\theta_{n}}+\int_{t\wedge\tau}^{\theta_{n}}f(s)ds+\int_{t\wedge\tau}^{\theta_{n}}dM_{s}-\int_{t\wedge\tau}^{\theta_{n}}Z_{s}dW_{s}^{\tau}. \end{align*} By taking conditional expectation under $\widetilde{Q}$, we get $Y_{t\wedge\tau}=E^{\widetilde{Q}}[Y_{\theta_{n}}+\int_{t\wedge\tau}^{\theta_{n}}f(s)ds|\mathcal{G}_{t}]$, which implies \begin{align*} & \mbox{ess}\sup_{\theta\in \mathcal{J}_{t\wedge\tau,T\wedge\tau}(\mathbb{G})}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\theta}f(s)ds+S_{\theta}1_{\{\theta\ <\tau\wedge T\}}+\xi{I}_{\{\theta=\tau\wedge T\}}\ \Big|\ \mathcal{G}_{t}\right]\\ &\geq E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\theta_{n}}f(s)ds+S_{\theta_{n}}1_{\{\theta_{n}<\tau\wedge T\}}+\xi1_{\{\theta_{n}=\tau\wedge T\}}\ \Big|\ \mathcal{G}_{t}\right] = Y_{t\wedge\tau}+E^{\widetilde{Q}}\left[(S_{\theta_{n}}-Y_{\theta_{n}})1_{\{\theta_{n}<\tau\wedge T\}} \Big|\ \mathcal{G}_{t}\right]\\ & \geq Y_{t\wedge\tau}-\frac{1}{n}\widetilde{Q}(\theta_{n}<\tau\wedge T|\mathcal{G}_{t}).
\end{align*} Thus, by letting $n$ go to infinity and using the fact that $\widetilde{Q}(\theta_{n}<\tau\wedge T\big| {\cal G}_{t})\leq 1$, we get \begin{equation*} \underset{\nu\in \mathcal{J}_{t,T}(\mathbb{G})}{\mbox{esssup}}\hspace{2mm}E^{\widetilde{Q}}\left[\int_{t\wedge\tau}^{\nu\wedge\tau}f(s)ds+S_{\nu\wedge\tau}1_{\{\nu\ <\tau\wedge T\}}+\xi1_{\{\nu=\tau\wedge T\}}\ \Big|\ \mathcal{G}_{t}\right]\geq Y_{t\wedge\tau}. \end{equation*} By combining this inequality with (\ref{sameUntilHere}), we get (\ref{RBSDE2Snell}), and the proof of the lemma is completed.\end{proof} \begin{proof}[Proof of Lemma \ref{L/EpsilonTilde}] Let $L$ be an $\mathbb F$-semimartingale. Then we derive \begin{align*} & {{L}\over{\widetilde{\cal E}}}I_{\Lbrack0,\tau[\![}= {{L^{\tau}}\over{\widetilde{\cal E}^{\tau}}}- {{L}\over{\widetilde{\cal E}}}\bigcdot D=L\bigcdot {1\over{\widetilde{\cal E}^{\tau}}}+{1\over{\widetilde{\cal E}_{-}}}\bigcdot L^{\tau}- {{L}\over{\widetilde{\cal E}}}\bigcdot D= {{L}\over{G\widetilde{\cal E}_{-}}}I_{\Rbrack0,\tau]\!]}\bigcdot {D}^{o,\mathbb F} +{1\over{\widetilde{\cal E}_{-}}}\bigcdot L^{\tau}- {{L}\over{\widetilde{\cal E}}}\bigcdot D\\ &= {{L}\over{\widetilde{G}\widetilde{\cal E}}}I_{\Rbrack0,\tau]\!]}\bigcdot {D}^{o,\mathbb F} +{1\over{\widetilde{\cal E}_{-}}}\bigcdot L^{\tau}- {{L}\over{\widetilde{\cal E}}}\bigcdot D=- {{L}\over{\widetilde{\cal E}}}\bigcdot {N}^{\mathbb G}+{1\over{\widetilde{\cal E}_{-}}}\bigcdot L^{\tau}. \end{align*} The fourth equality follows from the fact that $\widetilde{\cal E}=\widetilde{\cal E}_{-}G/\widetilde{G}$. This ends the proof of the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{Lemma4.11}] Recall that $\Delta m={\widetilde G}-G_{-}\leq 1$, and $m$ is a BMO $(\mathbb F, P)$-martingale. Furthermore, we have \begin{align*} &E^{\widetilde Q}\left[[m,m]_{T\wedge\tau}-[m,m]_{t\wedge\tau}\big|{\cal G}_t\right]= E\left[\int_{t\wedge\tau}^{T\wedge\tau}{\cal E}_s(G_{-}^{-1}\bigcdot m)^{-1}d[m,m]_s\big|{\cal G}_t\right]{\cal E}_{t\wedge\tau}(G_{-}^{-1}\bigcdot m)\\ &=E\left[\int_{t\wedge\tau}^{T\wedge\tau}{\cal E}_s(G_{-}^{-1}\bigcdot m)^{-1}d[m,m]_s\big|{\cal F}_t\right]{{{\cal E}_t(G_{-}^{-1}\bigcdot m)}\over{G_t}}I_{\{\tau>t\}}\\ &=E\left[\int_{t}^T{\widetilde{\cal E}}_sd[m,m]_s\big|{\cal F}_t\right]{1\over{{\widetilde{\cal E}}_t}}I_{\{\tau>t\}}\leq \Vert m\Vert_{BMO(P)}. \end{align*} Hence, assertion (a) follows from this latter inequality. Thanks to Lemma \ref{G-projection}, on $\{\tau>s\}$ we derive \begin{align*} E^{\widetilde{Q}}\left[D^{o,\mathbb{F}}_{T} -D^{o,\mathbb{F}}_{s-}\big|{\cal G}_{s}\right]&=\Delta D^{o,\mathbb{F}}_{s}+ E\left[\int_{s\wedge\tau}^{T\wedge\tau} {1\over{{\cal E}_u(G^{-1}_{-}\bigcdot m)}} d D^{o,\mathbb{F}}_u \big|{\cal G}_{s}\right]{\cal E}_{s\wedge\tau}(G^{-1}_{-}\bigcdot m)\\ &=E\left[\int_{s\wedge\tau}^{T\wedge\tau} {1\over{{\cal E}_u(G^{-1}_{-}\bigcdot m)}} d D^{o,\mathbb{F}}_u \big|{\cal F}_{s}\right]{{{\cal E}_{s\wedge\tau}(G^{-1}_{-}\bigcdot m)}\over{G_s}}+\Delta D^{o,\mathbb{F}}_{s}\\ &=E\left[\int_{s}^{T} {\cal E}_{u-}(-{\widetilde G}^{-1}_{-}\bigcdot D^{o,\mathbb F}) d D^{o,\mathbb{F}}_u \big|{\cal F}_{s}\right]{1\over{{\cal E}_{s}(-{\widetilde G}^{-1}_{-}\bigcdot D^{o,\mathbb F})}}+\Delta D^{o,\mathbb{F}}_{s}\\ &\leq 2\Delta D^{o,\mathbb{F}}_{s} I_{\{s<\tau\}}. \end{align*} This proves assertion (b). The remaining part of this proof addresses assertion (c). Remark that $1-(1-x)^a\leq\max(a,1) x$ for any $0\leq{x}\leq 1$.
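For the reader's convenience, we justify this elementary inequality: for $a\geq 1$, Bernoulli's inequality gives $(1-x)^a\geq 1-ax$ on $[0,1]$, while for $a\in(0,1)$ we have $(1-x)^a\geq 1-x$, since $t\mapsto t^a$ dominates the identity map on $[0,1]$. Hence, in both cases,
\begin{eqnarray*}
1-(1-x)^a\leq\max(a,1)\,x,\quad 0\leq x\leq 1.
\end{eqnarray*}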
Thus, by virtue of (\ref{Vepsilon}), we get \begin{eqnarray*} \Delta\widetilde{V}^{(a)}= 1-\left(1-{{\Delta D^{o,\mathbb F}}\over{\widetilde G}}\right)^a\leq \max(1,a){{\Delta D^{o,\mathbb F}}\over{\widetilde G}}. \end{eqnarray*} Hence, by putting \begin{eqnarray*} W:= {{\max(1,a)}\over{\widetilde G}}\bigcdot D^{o,\mathbb F}- \widetilde{V}^{(a)},\end{eqnarray*} we deduce that both \begin{eqnarray*} I_{\{\Delta D^{o,\mathbb F}\not=0\}} \bigcdot W=\sum\left\{ \max(1,a){{\Delta D^{o,\mathbb F}}\over{\widetilde G}}- 1+\left(1-{{\Delta D^{o,\mathbb F}}\over{\widetilde G}}\right)^a\right\} \end{eqnarray*} and $ I_{\{\Delta D^{o,\mathbb F}=0\}} \bigcdot W= {{(1-a)^+}\over{\widetilde G}}I_{\{\Delta D^{o,\mathbb F}=0\}}\bigcdot {D}^{o,\mathbb F}$ are nondecreasing processes. By combining this with \begin{eqnarray*} W= I_{\{\Delta D^{o,\mathbb F}=0\}} \bigcdot W+ I_{\{\Delta D^{o,\mathbb F}\not=0\}} \bigcdot W, \end{eqnarray*} we deduce that assertion (c) holds. This ends the proof of the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{ExpecationQtilde2P}] Remark that, for any process $H$, we have $$H_{T\wedge\tau}=H_{\tau}I_{\{0<\tau\leq T\}}+H_T I_{\{\tau>T\}}+H_0I_{\{\tau=0\}}.$$ Thus, by applying this to the process $X/{\cal E}(G_{-}^{-1}\bigcdot m)$, we derive \begin{eqnarray*} E^{\widetilde Q}[X_{T\wedge\tau}]&&=E\left[{{X_{T\wedge\tau}}\over{{\cal E}_{T\wedge\tau}(G_{-}^{-1}\bigcdot m)}}\right]=E\left[{{X_{\tau}}\over{{\cal E}_{\tau}(G_{-}^{-1}\bigcdot m)}}I_{\{0<\tau\leq T\}}+{{X_T}\over{{\cal E}_T(G_{-}^{-1}\bigcdot m)}}I_{\{\tau> T\}} +X_0I_{\{\tau=0\}}\right]\\ &&=E\left[\int_0^T {{X_s}\over{{\cal E}_s(G_{-}^{-1}\bigcdot m)}}dD_s^{o,\mathbb F}+{{X_T}\over{{\cal E}_T(G_{-}^{-1}\bigcdot m)}}G_T+X_0(1-G_0)\right]\\ &&= E\left[G_0^2\int_0^T X_sdV_s^{\mathbb F}+G_0X_T{\widetilde {\cal E}}_T+X_0(1-G_0)\right]. \end{eqnarray*} Thus, due to $X_0=0$, (\ref{XunderQtilde}) follows immediately from the latter equality. To prove assertion (b), we take the limit on both sides of (\ref{XunderQtilde}) and use the fact that $G_{\infty-}=\lim_{t\longrightarrow+\infty}G_t=0$ $P$-a.s. This ends the proof of the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{technicallemma1}]This proof has four parts where we prove the four assertions respectively. \\ {\bf Part 1.} Let $a\in (0,+\infty)$ and $Y$ be an RCLL $\mathbb G$-semimartingale, and put $Y^*_t:=\sup_{0\leq s\leq t}\vert Y_s\vert$. Then, on the one hand, we remark that \begin{eqnarray}\label{remark1} \sup_{0\leq t\leq{T}\wedge\tau} {\widetilde{\cal E}}_t\vert Y_t\vert^a\leq \sup_{0\leq t\leq{T}\wedge\tau} {\widetilde{\cal E}}_t (Y_t^*)^a. \end{eqnarray} On the other hand, thanks to It\^o, we derive \begin{eqnarray}\label{remark2} {\widetilde{\cal E}}(Y^*)^a=(Y_0^*)^a+{\widetilde{\cal E}}\bigcdot (Y^*)^a+(Y^*_{-})^a\bigcdot {\widetilde{\cal E}}\leq (Y_0^*)^a+{\widetilde{\cal E}}\bigcdot (Y^*)^a. \end{eqnarray} Thus, by combining (\ref{remark1}) and (\ref{remark2}) with ${\widetilde{\cal E}}=G/\left(G_0{\cal E}(G_{-}^{-1}\bigcdot m)\right)$, we get \begin{eqnarray*} E\left[\sup_{0\leq t\leq{T}\wedge\tau} {\widetilde{\cal E}}_t\vert Y_t\vert^a\right]&&\leq{E}\left[(Y_0^*)^a+\int_0^{T\wedge\tau} {\widetilde{\cal E}_s} d(Y^*_s)^a\right]\nonumber \\ &&=E[(Y_0^*)^a]+{1\over{G_0}}E^{\widetilde{Q}}\left[\int_0^{T\wedge\tau} G_s{d}(Y^*_s)^a\right]\leq G_0^{-1}E^{\widetilde{Q}}\left[(Y^*_{T\wedge\tau})^a\right]. \end{eqnarray*} This proves assertion (a).
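Let us also record the elementary observation behind the inequality in (\ref{remark2}): since $D^{o,\mathbb F}$ is nondecreasing with $\Delta D^{o,\mathbb F}\leq\widetilde G$, the definition (\ref{Xitilde}) yields
\begin{eqnarray*}
\widetilde{\cal E}_t=\exp\left(-\int_0^t{\widetilde G}_s^{-1}\,d(D^{o,\mathbb F})^c_s\right)\prod_{0<s\leq t}\left(1-{{\Delta D^{o,\mathbb F}_s}\over{\widetilde G_s}}\right),
\end{eqnarray*}
so that $\widetilde{\cal E}$ is nonnegative and nonincreasing; this is the only property of $\widetilde{\cal E}$ used to drop the term $(Y^*_{-})^a\bigcdot {\widetilde{\cal E}}$ in (\ref{remark2}).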
\\ {\bf Part 2.} Let $a\in (0,+\infty)$ and $K$ be an RCLL nondecreasing $\mathbb G$-optional process with $K_0=0$. Then, we remark that \begin{eqnarray}\label{equa300} \widetilde{\cal E}_{-}^a \bigcdot K=K\widetilde{\cal E}^a-K\bigcdot \widetilde{\cal E}^a=K\widetilde{\cal E}^a+K{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}=K\widetilde{\cal E}^a+K_{-}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}+\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)},\end{eqnarray} where $\widetilde {V}^{(a)}$ is defined in (\ref{Vepsilon}). As a result, by combining the above equality, the fact that $(\sum_{i=1}^n x_i)^{1/a}\leq n^{1/a}\sum _{i=1}^n x_i^{1/a}$ for any finite sequence of nonnegative numbers, and Lemma \ref{Lemma4.11}, we derive \begin{eqnarray*} &&E\left[(\widetilde{\cal E}_{-}^a \bigcdot K_{T\wedge\tau})^{1/a}\right]\nonumber\\ &&\leq 3^{1/a}E\left[(K_{T\wedge\tau})^{1/a}\widetilde{\cal E}_{T\wedge\tau}+(K_{-}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}+(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}\right]\nonumber\\ &&\leq 3^{1/a}E^{\widetilde{Q}}\left[(K_{T\wedge\tau})^{1/a}{{G_{T\wedge\tau}}\over{G_0}}\right]+4\times3^{1/a}E\left[\sup_{0\leq{t}\leq{T\wedge\tau}}K_{t}^{1/a}{\widetilde{\cal E}_t}\right]+3^{1/a}E\left[(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}\right].\end{eqnarray*} Then, due to $K^{1/a}\widetilde{\cal E}\leq \widetilde{\cal E}\bigcdot {K}^{1/a}$ and ${\widetilde{\cal E}}=G/\left(G_0{\cal E}(G_{-}^{-1}\bigcdot m)\right)$, the above inequality leads to \begin{eqnarray}\label{equa299} &&E\left[(\widetilde{\cal E}_{-}^a \bigcdot K_{T\wedge\tau})^{1/a}\right]\nonumber\\ &&\leq {{3^{1/a}}\over{G_0}}E^{\widetilde{Q}}\left[(K_{T\wedge\tau})^{1/a}\right]+4\times3^{1/a}E\left[\int_0^{T\wedge\tau}{\widetilde{\cal E}_t} dK_{t}^{1/a}\right]+3^{1/a}E\left[(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}\right]\nonumber\\ &&\leq 5{{3^{1/a}}\over{G_0}}E^{\widetilde{Q}}\left[(K_{T\wedge\tau})^{1/a}\right]+3^{1/a}E\left[(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}\right].\end{eqnarray} Thus, it remains to deal with the last term on the right-hand side of the above inequality. To this end, we distinguish the cases $a\geq 1$ and $a<1$. \\ Consider first the case $a\geq 1$, or equivalently $1/a\leq 1$. Then we use the fact that $(\sum x_i)^{1/a}\leq \sum x_i^{1/a}$ for any finite sequence of nonnegative numbers, and get \begin{eqnarray*} E\left[(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}\right]&&= E\left[\left(\sum_{0\leq t\leq T\wedge\tau} \Delta{K}_t\widetilde{\cal E}_{t-}^a\Delta\widetilde{V}^{(a)}_t\right)^{1/a}\right]\leq E\left[\sum_{0\leq t\leq T\wedge\tau} (\Delta{K}_t)^{1/a}\widetilde{\cal E}_{t-}(\Delta\widetilde{V}^{(a)}_t)^{1/a}\right]\\ &&\leq a^{1/a} E\left[\sum_{0\leq t\leq T\wedge\tau} (\Delta{K}_t)^{1/a}\widetilde{\cal E}_{t-}\right]= a^{1/a} E\left[\sum_{0\leq t\leq T\wedge\tau} (\Delta{K}_t)^{1/a}{{\widetilde{G}_t}\over{G_t}}\widetilde{\cal E}_{t}\right] \\ &&={{a^{1/a} }\over{G_0}}{E}^{\widetilde{Q}}\left[\sum_{0\leq t\leq T\wedge\tau}\widetilde{G}_t(\Delta{K}_t)^{1/a}\right] . \end{eqnarray*} The last equality follows from $\widetilde{\cal E}/G=G_0^{-1}/{\cal E}(G_{-}^{-1}\bigcdot m)$. Thus, by combining this latter inequality with (\ref{equa299}), assertion (b) follows immediately for this case of $a\geq 1$.
\\ For the case of $a\in (0,1)$, or equivalently $1/a>1$, we use Lemma \ref{Lemma4.11} and derive \begin{eqnarray*} &&E\left[(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)})_{T\wedge\tau}-(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)})_{{t\wedge\tau}-}\bigg|\ \mathcal{G}_{t}\right]\\ &&=E\left[\int_{t\wedge\tau}^{T\wedge\tau}\Delta{K_s}{\widetilde{\cal E}_{s-}^a}d\widetilde{V}^{(a)}_s + (\Delta{K_{t\wedge\tau}}{\widetilde{\cal E}_{{t\wedge\tau}-}^a}\Delta \widetilde{V}^{(a)})_{t\wedge\tau}\bigg|\ \mathcal{G}_{t}\right]\\ &&\leq E\left[\int_{t\wedge\tau}^{T\wedge\tau}\sup_{0\leq u\leq s}\Delta{K_u}{\widetilde{\cal E}_{u-}^a}d\widetilde{V}^{(a)}_s + \sup_{0\leq u\leq{t\wedge\tau}} \Delta{K_u}{\widetilde{\cal E}_{u-}^a}\bigg|\ \mathcal{G}_{t}\right]\\ &&=E\left[\int_{t\wedge\tau}^{T\wedge\tau}E[\widetilde{V}^{(a)}_{T\wedge\tau}-\widetilde{V}^{(a)}_{s-}\big|{\cal G}_{s}]d\sup_{0\leq u\leq s}\Delta{K_u}{\widetilde{\cal E}_{u-}^a} + \sup_{0\leq u\leq{t\wedge\tau}} \Delta{K_u}{\widetilde{\cal E}_{u-}^a}\bigg|\ \mathcal{G}_{t}\right]\\ &&\leq E\left[ \sup_{0\leq u\leq{T\wedge\tau}} \Delta{K_u}{\widetilde{\cal E}_{u-}^a}\bigg|\ \mathcal{G}_{t}\right].\end{eqnarray*} Therefore, by a direct application of Theorem \ref{DellacherieAndMeyer}, we obtain \begin{eqnarray*} E\left[(\Delta{K}{\widetilde{\cal E}_{-}^a}\bigcdot \widetilde{V}^{(a)}_{T\wedge\tau})^{1/a}\right]&&\leq a^{-1/a} E\left[ \sup_{0\leq u\leq{T\wedge\tau}} \Delta{K_u}^{1/a}{\widetilde{\cal E}_{u-}}\right]\leq a^{-1/a} E\left[ \sum_{0\leq u\leq{T\wedge\tau}} \Delta{K_u}^{1/a}{\widetilde{\cal E}_{u-}}\right]\\ &&= a^{-1/a}G_0^{-1} E^{\widetilde{Q}}\left[ \sum_{0\leq u\leq{T\wedge\tau}} \widetilde{G}_u\Delta{K_u}^{1/a}\right]. \end{eqnarray*} Hence, by combining this inequality with (\ref{equa299}), assertion (b) follows immediately in this case of $a\in (0,1)$, and the proof of assertion (b) is complete.\\ {\bf Part 3.} Here we prove assertion (c). To this end, we consider $p>1$ and a $\mathbb G$-optional process $H$, and we apply assertion (b) to the process $K=H\bigcdot [N^{\mathbb G}, N^{\mathbb G}]$ and $a=2/p$, and get \begin{eqnarray*} E\left[({\widetilde{\cal E}}_{-}^{2/p}H\bigcdot [N^{\mathbb G},N^{\mathbb G}])_{T\wedge\tau} ^{p/2}\right]\leq C(a)G_0^{-1} E^{\widetilde{Q}}\left[(H\bigcdot [N^{\mathbb G},N^{\mathbb G}]_{T\wedge\tau})^{p/2}+ \sum_{0\leq t\leq {T\wedge\tau}}{\widetilde{G}_t}H^{p/2}_t\vert\Delta{N}^{\mathbb G}\vert^p\right]. \end{eqnarray*} Therefore, assertion (c) follows from combining this inequality with $\sum_{0\leq{t}\leq\cdot}{\widetilde{G}_t}H^{p/2}_t\vert\Delta{N}^{\mathbb G}_t\vert= {\widetilde{G}}H^{p/2}\bigcdot \mbox{Var}(N^{\mathbb G})$ and $\vert\Delta{N}^{\mathbb G}\vert^{p-1}\leq 1$.\\ {\bf Part 4.} Consider $p>1$ and a nonnegative $\mathbb F$-optional process $H$. Thus, by applying assertion (c), we obtain the inequality (\ref{Equality4MG}). Hence, to get (\ref{Equality4MGOptionalF}), we remark that $\mbox{Var}(N^{\mathbb G})=(G/\widetilde{G})\bigcdot {D}+ {\widetilde{G}}^{-1}I_{\Rbrack0,\tau[\![}\bigcdot D^{o,\mathbb F}$, and due to the $\mathbb F$-optionality of $H$ we have \begin{eqnarray*} E^{\widetilde{Q}}\left[({\widetilde{G}}H^{p/2}\bigcdot \mbox{Var}(N^{\mathbb G}))_T\right]=2E\left[\int_0^T {{H^{p/2}_t}\over{{\cal E}_t(G_{-}^{-1}\bigcdot {m})}}I_{\Rbrack0,\tau[\![}(t)d D^{o,\mathbb F}_t\right]=2E^{\widetilde{Q}}\left[(H^{p/2}I_{\Rbrack0,\tau[\![}\bigcdot {D}^{o,\mathbb F})_T\right].
\end{eqnarray*} Therefore, by combining this with (\ref{Equality4MG}), assertion (d) follows immediately. This ends the proof of the lemma.\end{proof} \end{appendices} \bibliographystyle{amsplain}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Researchers suggest that the transmission of SARS-CoV-2 will quickly rebound if interventions (e.g., quarantine and social distancing) are relaxed~\cite{ferguson2020report}. Vaccination has greatly reduced the burden of many infectious diseases~\cite{andre2008vaccination} throughout history, and developing SARS-CoV-2 vaccines that can be used globally is, therefore, a priority for ending the pandemic~\cite{yamey2020ensuring}. Nevertheless, as scientists and medical experts around the world are developing and testing COVID-19 vaccines, the U.S. public is now divided over whether or not to obtain COVID-19 vaccines. According to a recent Pew Research Center study\footnote{https://www.pewresearch.org/science/2020/09/17/u-s-public-now-divided-over-whether-to-get-covid-19-vaccine/ [Accessed July 20, 2021]}, in May 2020, 71\% of U.S. adults indicated that they would definitely or probably obtain a vaccine to prevent COVID-19 if it were available. The percentage dropped sharply, however, to 51\% in September 2020. The survey shows that the U.S. public is concerned about the safety and effectiveness of possible vaccines, and the rapid pace of the approval process. Previous studies show that the sharing of public concerns about vaccines might lead to delaying or forgoing vaccination~\cite{gust2008parents}, which could compromise global COVID-19 vaccine distribution strategies. This phenomenon is termed “vaccine hesitancy”~\cite{dube2013vaccine}, which is a complex issue driven by a variety of context-specific factors~\cite{larson2014understanding}. Researchers have investigated public opinions on existing vaccines for vaccine-preventable diseases like MMR~\cite{motta2018knowing, deiner2019facebook}, HPV~\cite{abdelmutti2010risk, pan2020caught} and H1N1~\cite{henrich2011public}. Hesitancy and opinions can vary, however, according to the vaccine involved~\cite{bedford2007more}. \citet{lazarus2020global} and \citet{feleszko2020flattening} have investigated the potential acceptance of a COVID-19 vaccine using survey methods, yet little is known about the scope and causes of public opinions on COVID-19 vaccines on social media platforms. Although traditional survey designs can support causal inference, surveys are labor-intensive and expensive~\cite{mokdad2010measuring}; compared to social media data, they are difficult to deploy at a large scale and in a timely manner~\cite{mokdad2010measuring}, and they may introduce social desirability biases~\cite{krumpal2013determinants}. In addition, because social media data are collected passively, they can potentially capture a different (and unperturbed) view of human behaviors~\cite{heikinheimo2017user}. To the best of our knowledge, there is no other study that has tracked and understood the public opinion regarding COVID-19 vaccines using social media data. Meanwhile, the development and testing of COVID-19 vaccines have drawn great attention and response on social media platforms like Twitter and Reddit, which allow fast sharing of health information~\cite{scanfeld2010dissemination,singh2020first, yeung2020face} and are found to play a major role in disseminating information about vaccinations~\cite{stahl2016impact,dunn2017mapping,brainard2020misinformation,tangcharoensathien2020framework, wu2021characterizing}. Public attitudes towards the vaccines, therefore, can be reflected by analyzing comments and posts on social media~\cite{kim2020effects, tomeny2017geographic}.
In the current study, we adopt a human-guided machine learning framework based on state-of-the-art transformer language models to capture individual opinions on COVID-19 vaccines, and categorize these opinions into three groups: pro-vaccine, vaccine-hesitant, and anti-vaccine. We use more than 40,000 rigorously selected tweets (out of over six million tweets collected using keywords) posted by over 20,000 distinct Twitter users between September and November 2020. We aggregate the tweets to reflect the state-level and the national attitudes towards COVID-19 vaccines. To characterize the opinion groups, we extract and infer individual-level features such as demographics, social capital, income, religious status, family status, political affiliations, and geo-locations. \citet{lazarus2020global} suggested that personal experience, such as COVID-19 sickness in individuals and their families, and external perceptions, such as cases and mortality per million of a nation’s population, are associated with the vaccine acceptance level. To quantitatively measure and confirm these two effects, we extract the sentiment of personal pandemic experience and non-pandemic experience for each Twitter user. We collect the number of COVID-19 daily confirmed cases from the data repository maintained by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University to measure the county-level pandemic severity perception. In our study, we hypothesize that: \begin{itemize} \item \textbf{Hypothesis 1:} There will be differences in demographics, social capital, income, religious status, family status, political affiliations and geo-locations among opinion groups. \item \textbf{Hypothesis 2:} The personal pandemic experience will have an impact on shaping the attitude towards potential COVID-19 vaccines. \item \textbf{Hypothesis 3:} The county-level pandemic severity perception will have an impact on shaping the attitude towards potential COVID-19 vaccines. \end{itemize} We conduct multinomial logistic regression and find that there are differences in demographics, social capital, income, religious status, political affiliations and geo-locations among the opinion groups. People who have the worst personal pandemic experience are more likely to hold anti-vaccine opinions. In addition, people who have the worst pandemic severity perception are more likely to be vaccine-hesitant. We further show that the individual-level features can be used to anticipate whether or not a person is in favor of the potential COVID-19 vaccines over time. By incorporating the individual-level features and additional factor indicators, and by conducting counterfactual analyses, we find that the U.S. public is most concerned about the safety, effectiveness, and political issues with regard to potential vaccines for COVID-19, and that improving personal pandemic experience increases the vaccine acceptance level. \section{Materials and Methods} The Methods section is structured as follows. We describe the datasets we use in Methods M1 and how we infer or extract features in Methods M2. We describe our strategy for opinion mining and the standard of labelling in Methods M3. In Methods M4, we discuss the experimental procedures. \subsection{M1 DataSets} \subsubsection{Twitter} We use the Tweepy API\footnote{https://www.tweepy.org/ [Accessed July 21, 2021]} to collect the related tweets, which are publicly available.
The search keywords and hashtags are COVID-19 vaccine-related or vaccine-related, including ``vaccine'', ``COVID-19 vaccine'', ``COVID vaccine'', ``COVID19 vaccine'', ``vaccinated'', ``immunization'', ``covidvaccine'', ``\#vaccine'' and ``covid19vaccine''. It is noteworthy that the capitalization of non-hashtag keywords does not matter in the Tweepy query. Slang and misspellings of the related keywords are also included, namely ``vacinne'', ``vacine'', ``antivax'' and ``anti vax''. In the end, 6,314,327 tweets (including retweets) posted from September 28 to November 4, 2020 by 1,874,468 unique Twitter users are collected. To collect as much related text as possible, both COVID-19 vaccine-related and vaccine-related search keywords are used. However, the tweets collected using the vaccine-related search keywords are not necessarily related to COVID-19 vaccines. For example, MMR vaccine-related or HPV vaccine-related tweets might be crawled as well. In addition, the data collection is carried out during the flu shot season, resulting in the collection of many influenza shot-related tweets. We apply a keyword-based search in tweets to remove all the tweets containing MMR, autism, HPV, tuberculosis, tetanus, hepatitis B, flu shot or flu vaccine (4.0\% removed). The tweet content and other Twitter profile information are used to extract or predict demographics, user-level features like the number of {\tt followers}, income, religious status, family status, political affiliations, geo-locations, and sentiment about the COVID-19-related experience and non-COVID-related experience. To infer the family status, religious status and sentiment, we use the Tweepy API to collect the publicly available tweets posted by each user over the last three months. For example, if the tweet containing the search keywords or hashtags was posted on October 1, 2020, then all the publicly available tweets posted by this Twitter user from July 1 to October 1, 2020 are collected as well. It should be noted that only the last 3,200 tweets can be collected per the Tweepy API limitations. The preprocessing pipeline is shown in Figure~\ref{fig:preprocess}. First, the features of the Twitter users are inferred or extracted. To better understand the relationships between all characteristics, we only keep the users for whom we can infer all the features except for sentiment. Next, we mine the opinions via a human-guided machine learning framework. 25,407 unique users with all the features except for the sentiment scores are used to study the temporal and spatial patterns of the opinions. 10,945 of them with sentiment scores are further included in the characterization study and counterfactual analyses. \begin{figure*}[htbp] \centering \includegraphics[trim=150 0 150 0,clip,width=\linewidth]{M1_diagram.pdf} \caption{The diagram of data preprocessing procedures.} \label{fig:preprocess} \end{figure*} \subsubsection{JHU CSSE} We extract the number of COVID-19 daily confirmed cases from the data repository maintained by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University~\cite{JHUcsse2019}. The median relative change of the number of daily confirmed cases over the last three months at the county level is calculated to measure the county-level pandemic severity perception.
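For concreteness, a minimal sketch of this computation is given below (in Python with {\tt pandas}; the repository URL, the assumption that the first 11 columns of the county-level time series are metadata, and the helper name {\tt severity\_perception} are illustrative assumptions rather than a description of the original pipeline).
\begin{verbatim}
# Sketch: county-level "pandemic severity perception" as the median relative
# change of daily confirmed cases over the three months before a reference date.
import pandas as pd

CSSE_URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
            "csse_covid_19_data/csse_covid_19_time_series/"
            "time_series_covid19_confirmed_US.csv")

def severity_perception(ref_date="2020-11-04", months=3):
    df = pd.read_csv(CSSE_URL)
    date_cols = df.columns[11:]        # assumption: first 11 columns are metadata
    cumulative = df.set_index("FIPS")[date_cols]
    cumulative.columns = pd.to_datetime(cumulative.columns)
    daily = cumulative.diff(axis=1).clip(lower=0)   # daily confirmed cases
    end = pd.Timestamp(ref_date)
    start = end - pd.DateOffset(months=months)
    window = daily.loc[:, (daily.columns > start) & (daily.columns <= end)]
    # Day-over-day relative change; days with zero previous counts would need
    # extra handling in practice.
    rel_change = window.pct_change(axis=1)
    return rel_change.median(axis=1).rename("severity_perception")
\end{verbatim}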
\subsection{M2 Feature Inference} \subsubsection{Demographics} Following the methods of \citet{lyu2020sense}, we use the Face++ API\footnote{https://www.faceplusplus.com/ [Accessed July 21, 2021]} to infer the gender and age information of the users using their profile images. Invalid image URLs and images with multiple or zero faces are excluded. The gender and age information of the remaining users (i.e., those with only one intelligible face in the profile image) is inferred. Since our study focuses on the opinions of U.S. adults, the users who are younger than 18 are removed. Face++ achieves good accuracy in gender and age inference on Twitter data~\cite{jung2018assessing}. \subsubsection{User-level features} Seven user-level features are crawled via the Tweepy API as well, namely the number of {\tt followers}, {\tt friends}, {\tt listed memberships}, {\tt favourites}, {\tt statuses}, the number of months since the user account was created, and the {\tt Verified} status. Moreover, we normalize the number of {\tt followers}, {\tt friends}, {\tt listed memberships}, {\tt favourites}, and {\tt statuses} by the number of months since the user account was created. \subsubsection{Geo-locations} For Twitter, we choose to resolve the geo-locations using users' profiles. Similar to \citet{lyu2020sense}, the locations with noise are excluded, and the rest are classified into urban, suburban, or rural. \subsubsection{Income} Following the method of \citet{preoctiuc2015studying}, we use a supervised ensemble model to predict the income of Twitter users. The ensemble model includes Gradient Boosted Decision Trees (GBDT), Random Forest, Logistic Regression, and XGBoost. We use the income datasets of Twitter users~\cite{preoctiuc2015studying} to train our model(s). The features include age, days of Twitter history, the number of {\tt followers}, {\tt friends}, {\tt listed memberships}, {\tt favourites}, and the sentiment score calculated by Vader~\cite{gilbert2014vader}. We categorize income into three classes (low, medium, high) based on the income levels of \citet{kochhar2018american}, turning the regression problem into a classification problem. The accuracy is $70.02\%$. \subsubsection{Religious Status} We assign each user a Boolean value for whether she/he is religious based on the tweets and the description in the profile~\cite{zhang2021influence}. \subsubsection{Family Status} By applying regular expression search, we identify users who show evidence that they are either fathers or mothers~\cite{zhang2021influence}. \subsubsection{Political Affiliations} The political attribute is labelled based on whether the Twitter user followed the Twitter accounts of the top political leaders. The incumbent president (Joe Biden, who was the presidential candidate when the data were collected) and the former president (Donald Trump) are included in the analysis. Due to limitations of the Twitter API, only about half of Donald Trump's follower IDs were crawled. \subsubsection{Sentiment} In our study, we intend to infer the sentiment of personal pandemic experience and non-pandemic experience. First, we use keyword search methods to classify the three-month historical tweets into COVID-related and non-COVID-related.
If a tweet does not contain any of the following keywords: ``corona'', ``covid'', ``covid19'', ``coronavirus'', ``chinese virus'', ``china virus'', ``wuhan virus'', ``wfh'', ``work from home'', ``pandemic'', ``epidemic'', ``herd immunity'', ``quarantine'', ``lockdown'', ``mortality'', ``morbidity'', ``social distancing'', ``mask'', ``social distance'', ``respirator'', ``state of emergency'', ``ventilator'', ``isolation'', ``fatality'', ``community spread'', ``vaccine'', ``vaccinated'', ``vaccination'', ``panic buying'', ``hoard'', it is categorized as non-COVID-related. The example tweets are ``{\tt $<user>$ I can not wait to take the last name of my husband! I feel so good to solidify our union by taking his name.I also cringe a little bit at the whole “keep the maiden name on social media” thing some girls do...I’m more “leave-and-cleave” type.}'' and ``{\tt $<user>$ what a distinguished day that was.}'' The remaining tweets are categorized as COVID-19-related. The example tweets are ``{\tt i am the type of person who does half an hour of meditation and yoga from my peloton app before going to bed to read some chapters of my book and be fast asleep before 11pm. quarantine changed me.}'' and ``{\tt $<user>$ Oooorr...I can wear a mask, get on an plane, in a limited space, with NO social distancing, with people from hundreds of different households, ALL going to various destinations, and then take my mask OFF to eat/drink once I’m in my seat $<hashtag>$ $<hashtag>$}'' For each Twitter user, the tweets of the two categories are concatenated separately. Next, a normalized, weighted composite score is calculated to measure the sentiment of the tweet content using Vader~\cite{gilbert2014vader}. The score is between -1 (most extreme negative) and +1 (most extreme positive). Vader outperforms individual human raters when assessing the sentiment of tweets~\cite{gilbert2014vader}. \subsection{M3 Opinion Mining} To capture the opinions expressed through text by Twitter users, we adopt a human-guided machine learning framework inspired by \citet{sadilek2013nemesis}. The tweets are classified into four categories: (1) pro-vaccine, (2) vaccine-hesitant, (3) anti-vaccine, and (4) irrelevant. Tweets might be retweeted multiple times. We observe that there are 6,703 non-unique tweets in the initial batch of over 90,000 tweets. These non-unique tweets, combined with their retweets, constitute 62.9\% of all tweets. As a result, the tweets are divided into two groups: the unique-tweet group and the non-unique-tweet group. 430 non-unique tweets that have been retweeted at least 20 times are included in the non-unique-tweet group. These tweets and their retweets constitute 41.5\% of all tweets. The rest are included in the unique-tweet group. All the tweets of the non-unique-tweet group are manually annotated. However, only a subgroup of the unique-tweet group is manually annotated. A state-of-the-art transformer-based language model~\cite{yang2019xlnet}, trained on this subgroup, is used to make estimates for the rest of the unique-tweet group. \subsubsection{Human-guided machine learning framework} We annotate the opinions of the tweets as pro-vaccine, vaccine-hesitant, or anti-vaccine using a human-guided machine learning framework to strike the best balance between automation and accuracy. In total, we stream over six million publicly available tweets from Twitter using the Tweepy API between September 28 and November 4, 2020 with search keywords that are vaccine-related or COVID-19 vaccine-related.
Unlike~\citet{tomeny2017geographic}, a majority of the tweets crawled with the search keywords in our study are irrelevant to the actual individual opinions about the vaccines for COVID-19, which causes a challenging class imbalance problem that may not only slow down the annotation process but also hinder the performance of automated classifiers~\cite{japkowicz2002class}. To address this problem, we adopt a human-guided machine learning framework~\cite{sadilek2013nemesis} based on a state-of-the-art transformer language model to label the opinions of the tweets. After extracting or inferring the features of these tweets and their authors, we only keep the ones with all the required informative features available. We initialize the human-guided machine learning framework by sampling 2,000 unique tweets from the corpus $C$ of 244,049 tweets. Three researchers independently read each tweet and judge whether the tweet is irrelevant, pro-vaccine, vaccine-hesitant, or anti-vaccine. Table~\ref{tab:label_standard} describes the labeling scheme for each opinion category. We label each tweet as one of the categories as long as it matches one of the descriptions of that category. The label of each tweet is assigned by the consensus vote of the three researchers. If the three researchers vote entirely differently, the senior researcher determines the label of the tweet after discussing it with the other two researchers. The Fleiss' Kappa score of the three researchers is 0.52. The corpus $C_{train}$ of the initial 2,000 labelled tweets is fed to the XLNet model~\cite{yang2019xlnet}. The four-class classification model $H_{1}$ is trained and validated on an external validation set $D_{validation}$ with 400 annotated tweets. The distribution of the four categories is balanced. We then construct another binary classification model $H_{2}$ that is trained with only two classes of data. The data for $H_{1}$ and $H_{2}$ are almost the same except for the label of the output variable. For $H_{2}$, one class includes all the irrelevant tweets of the data for $H_{1}$ and the other includes all the relevant tweets that are pro-vaccine, vaccine-hesitant, or anti-vaccine in the data for $H_{1}$. After training, $H_{2}$ is used to make estimates for a corpus of 4,500 unlabeled tweets sampled from $C$ regarding whether they are irrelevant or relevant. 90\% of each new batch is composed of the top 10\% most likely relevant tweets. The other 10\% of the new batch is sampled uniformly at random to increase diversity. This new batch of 500 tweets is annotated by the three researchers as described above and is added to the corpus $C_{train}$. $H_{1}$ is trained with the updated $C_{train}$ and validated again. This whole process is considered one iteration. In each iteration, the three researchers annotate a new batch of 500 tweets. \begin{table*}[htbp] \centering \small \begin{tabular}{|c|l|} \hline \textbf{Category} & \multicolumn{1}{|c|}{\textbf{Description}} \\ \hline \multirow{3}{7em}{Pro-vaccine} & i. Claiming that they would take the vaccine once it is available \\ & ii. Advocating and supporting vaccine/vaccine-associated entities like vaccine experiment trials \\ & iii. Believing that the vaccine will be the solution to the pandemic \\ \hline \multirow{4}{7em}{Vaccine-hesitant} & i. Claiming that they would like to take the vaccine after the vaccine is proven safe/effective \\ & ii.
Claiming that they would wait for a while and see whether a vaccine is truly safe/effective \\ &if there is one \\ & iii. Showing worries about the effectiveness of a rushed vaccine \\ \hline \multirow{6}{7em}{Anti-vaccine} & i. Promoting/arguing in favor of conspiracy theory about vaccine/vaccine-associated entities \\ & ii. Believing that an effective vaccine would not be invented quickly and help overcome \\ &the pandemic \\ & iii. Believing that a covid-19 vaccine is dangerous for whatever reasons and would not take it \\ & even though the commenters claim that they are not anti-vaccine \\ \hline \multirow{4}{7em}{Irrelevant} & i. Vaccine News. No written opinion from the commenters \\ & ii. Including vaccine and the commenters’ opinions, but the focus is something else \\ & (i.e., insurance, politics, personal life experience, economics, emotional complaints, etc.) \\ & iii. Comments/questions on vaccines/vaccine-associated entities but with unclear meanings \\ \hline \end{tabular} \caption{Labeling scheme for Tweets.} \label{tab:label_standard} \end{table*} This framework actively searches for relevant tweets to increase the sizes of the relevant datasets. Figure~\ref{fig:dataset_distribution} shows the percentages of the different opinion groups of the original $C_{train}$ and the final $C_{train}$ after five iterations. In each iteration, humans guide the machine to learn the irrelevant, pro-vaccine, vaccine-hesitant, and anti-vaccine tweets by updating the training set. Figure~\ref{fig:xlnet_performance} shows the performance of $H_{1}$ of each iteration. As a result, the framework allows us to label the opinions of the tweets and build the model more efficiently. \begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{dataset_distribution.pdf} \caption{Distributions of different categories of the original and final training corpora.} \label{fig:dataset_distribution} \end{figure} \begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{xlnet_result.pdf} \caption{Performance of $H_{1}$ of each iteration.} \label{fig:xlnet_performance} \end{figure} \subsubsection{Tweets preprocessing} We adopt a tweet preprocessing pipeline from \citet{baziotis-etal-2017-datastories-semeval} which can transform the specific text often used in Twitter to special tokens. For example, if the original tweet is ``{\tt Scientists develop a COVID vaccine that could initiate a 10-times stronger immune response $<url>$ }'' After preprocessing, the tweet becomes ``{\tt scientists develop a $<allcaps>$ covid $</allcaps>$ vaccine that could initiate a $<number>$ - times stronger immune response $<url>$ }'' \subsubsection{Performance of the XLNet model} Table~\ref{tab:xlnet_result_tab} summarizes the performance of the final four-class XLNet model $H_{1}$ on the external validation set with 400 samples. The final accuracy is 0.63 and the Cohen's Kappa score is 0.5, which indicates a good agreement. 
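For reference, fine-tuning a four-class XLNet classifier in the spirit of $H_{1}$ can be sketched as follows (an illustrative sketch using the HuggingFace {\tt transformers} library; the library choice, hyperparameters, and function names are assumptions on our part and do not describe the exact training setup).
\begin{verbatim}
# Sketch: fine-tuning XLNet for the four-class opinion task.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import XLNetTokenizer, XLNetForSequenceClassification

LABELS = {"irrelevant": 0, "pro-vaccine": 1, "vaccine-hesitant": 2, "anti-vaccine": 3}

def fine_tune(texts, labels, epochs=3, lr=2e-5, batch_size=16):
    """texts: preprocessed tweets; labels: integers following LABELS."""
    tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
    model = XLNetForSequenceClassification.from_pretrained(
        "xlnet-base-cased", num_labels=len(LABELS))
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    data = TensorDataset(enc["input_ids"], enc["attention_mask"],
                         torch.tensor(labels))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            optimizer.zero_grad()
            out = model(input_ids=input_ids, attention_mask=attention_mask,
                        labels=y)        # cross-entropy loss over the 4 classes
            out.loss.backward()
            optimizer.step()
    return tokenizer, model

def predict(tokenizer, model, texts):
    model.eval()
    with torch.no_grad():
        enc = tokenizer(texts, padding=True, truncation=True,
                        max_length=128, return_tensors="pt")
        logits = model(input_ids=enc["input_ids"],
                       attention_mask=enc["attention_mask"]).logits
    return logits.argmax(dim=-1).tolist()
\end{verbatim}
In such a sketch, the training texts would be the preprocessed tweets in $C_{train}$, and the classifier's predictions on the remaining unique tweets would play the role of the machine-inferred labels.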
\begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|} \hline \textbf{Class} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} \\ \hline Irrelevant & 0.45 & 0.84 & 0.59\\ Pro-vaccine & 0.78 & 0.52 & 0.62\\ Vaccine-hesitant & 0.77 & 0.54 & 0.64\\ Anti-vaccine & 0.79 & 0.61 & 0.69\\ \hline Overall & 0.70 & 0.63 & 0.63\\ \hline \end{tabular} \caption{Performance of the four-class XLNet model $H_{1}$.} \label{tab:xlnet_result_tab} \end{table} \subsection{M4 Analysis Details} \subsubsection{Statistical Analysis} To understand what opinion (i.e., pro-vaccine, vaccine-hesitant, or anti-vaccine) the people ($n=10,945$) would hold based on their demographics, social capital, income, religious status, family status, political affiliations, geo-location, sentiment about COVID-19-related experience and non-COVID-related experience, and relative change of the number of daily confirmed cases at the county level, we conduct multinomial logistic regression, selecting the vaccine-hesitant group as the reference category. \subsubsection{Counterfactual analyses} Following \citet{chang2020mobility}, we intend to estimate the impact of communication-related strategies by constructing a hypothetical machine learning model that reflects the expected effect. To assess the potential outcomes of the communication-related strategies, we build the machine learning model using the real data, and apply the constructed model to the hypothetical data. The data from September 28 to October 21, 2020 are used to train a support vector machine (SVM) $H_{3}$, which makes predictions about the opinion group for the data of the latest two weeks (October 22 to November 4, 2020). The real percentage of pro-vaccine users and the predicted percentage are plotted in Figure~\ref{fig:counter}. The real percentage falls within one standard deviation of the predicted percentage, indicating a good simulation performance. \begin{figure}[htbp] \centering \includegraphics[width = \linewidth]{counter.png} \caption{Counterfactual analyses illustrate the importance of politics, safety and effectiveness factor indicators, and personal pandemic experience.} \label{fig:counter} \end{figure} We further analyze the relationship between the opinions and the topics of the tweets using Latent Dirichlet Allocation (LDA) topic modelling~\cite{blei2003latent} with 10 topics, as shown in Figure~\ref{fig:topic}. The coherence score is 0.31. In the word cloud of each topic, the top 30 keywords are plotted. As we can see from the figure, people are most concerned about the safety and effectiveness of the vaccine, which is consistent with the Pew Research Center survey\footnote{https://www.pewresearch.org/science/2020/09/17/u-s-public-now-divided-over-whether-to-get-covid-19-vaccine/ [Accessed July 20, 2021]}. Some politics-related keywords like ``administration'', ``white house'', and the names of political figures like ``Trump'' and ``Kamala'' are present as well. To label the factor indicators, we narrow down the 10 topics to two major ones: ``safety and effectiveness'' and ``politics'', and use keyword search methods. The keywords for safety and effectiveness include ``safe'', ``effective'', and ``efficacy''. The keywords for politics include ``administration'', ``politics'', ``politician'', ``political'' and the names of Donald Trump, Mike Pence, Joe Biden and Kamala Harris. Each tweet is labelled 1 if it contains the related keywords, and 0 if it does not.
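A minimal sketch of this indicator labeling is given below (case-insensitive substring matching, surname-based matching of the political figures, and the helper name {\tt factor\_indicators} are assumptions about the exact matching rule).
\begin{verbatim}
# Sketch: keyword-based factor indicators for each tweet.
SAFETY_KEYWORDS = ["safe", "effective", "efficacy"]
# Names of Donald Trump, Mike Pence, Joe Biden and Kamala Harris are matched
# here by surname/first name, which is an assumption.
POLITICS_KEYWORDS = ["administration", "politics", "politician", "political",
                     "trump", "pence", "biden", "kamala", "harris"]

def factor_indicators(tweet: str) -> dict:
    text = tweet.lower()
    return {
        "safety_effectiveness": int(any(k in text for k in SAFETY_KEYWORDS)),
        "politics": int(any(k in text for k in POLITICS_KEYWORDS)),
    }

# e.g. factor_indicators("Is a rushed vaccine safe? The administration says yes.")
# -> {"safety_effectiveness": 1, "politics": 1}
\end{verbatim}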
Table~\ref{tab:factor} shows the descriptive statistics of these two variables. The basic settings for the counterfactual classifiers are the same as those of $H_{3}$. We analyze one factor at a time. We train the classifier with the basic variables and the factor indicator taking its real value. The basic variables include user demographics, Twitter usage patterns, sentiment of the pandemic and non-pandemic experience, income, religious status, family status, political affiliation, as well as the population density. The prediction is plotted in orange in Figure~\ref{fig:counter}. Then we set the factor indicator, which was originally 1, to 0, keeping the other variables constant. The trained classifier is applied to the hypothetical data, and the prediction is plotted in green. \begin{figure*} \centering \includegraphics[width = \textwidth]{final_topic.pdf} \caption{10 topics extracted from the tweets with the top 30 keywords.} \label{fig:topic} \end{figure*} \begin{table}[htbp] \centering \scriptsize \begin{tabular}{l c c c c c} \hline \textbf{Variables} & \textbf{N} &\textbf{Mean} & \textbf{SD} &\textbf{Min} & \textbf{Max} \\ \hline 1. Politics & 10,945 & 0.2512 & 0.4337 & 0 & 1\\ 2. Safety and effectiveness & 10,945 & 0.1801 & 0.3843 & 0 & 1\\ \hline \end{tabular} \caption{Descriptive statistics of the factor indicators.} \label{tab:factor} \end{table} \section{Results} \subsection{Characterization of different opinion groups} The proportions of the different opinion groups of the U.S. public change over time as shown in Figure~\ref{fig:us_level}, and the changes roughly correspond to the major pandemic-related events. Figure~\ref{fig:abs_number} shows the number of Twitter users. Overall, 57.65\% (6,218 of 25,407) are pro-vaccine, 19.30\% (2,469 of 25,407) are vaccine-hesitant, and the rest are anti-vaccine. By aggregating people at the state level, we estimate the opinions about the potential COVID-19 vaccines in each state, as shown in Figure~\ref{fig:state_opinion}. The Southeast of the U.S. shows a relatively lower acceptance level, as does the cluster of Ohio, Indiana and Kentucky. \begin{figure*}[htbp] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{us_level_opinion.pdf} \caption{} \label{fig:us_level} \end{subfigure} \hfill \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{us_level_number_of_tweets.pdf} \caption{} \label{fig:abs_number} \end{subfigure} \caption{(a) The proportions of the opinion groups from September 28 to November 4, 2020. (b) Number of Twitter users from September 28 to November 4, 2020. The data of October 5, 2020 are missing due to a data collection issue.} \label{fig:temporal_spatial} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width = \textwidth]{MapChart_Map_new.png} \caption{State-level public opinions about potential COVID-19 vaccines. The numbers in parentheses stand for the sizes of the study populations.} \label{fig:state_opinion} \end{figure*} After performing the Granger causality test with a one-day lag, we find that, in Nevada, Tennessee and Washington, the percentage of pro-vaccine people deviates the most from the national average ($p>.05$; Figure~\ref{fig:granger}). The percentage of the pro-vaccine group of Washington is above the national average most of the time, while the acceptance level of Nevada is relatively lower than the national average. More drastic changes are observed for the acceptance level of Tennessee.
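This state-versus-national comparison can be sketched as follows (an illustrative sketch using {\tt statsmodels}; the DataFrame layout and column names are assumptions, not part of the original analysis code).
\begin{verbatim}
# Sketch: one-day-lag Granger causality test between a state's daily
# pro-vaccine share and the national average.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_one_day_lag(df: pd.DataFrame, state_col: str,
                        national_col: str = "national") -> float:
    """df has one row per day; tests whether the national series helps
    predict the state series at lag 1 and returns the F-test p-value."""
    data = df[[state_col, national_col]].dropna().values
    result = grangercausalitytests(data, maxlag=1, verbose=False)
    return result[1][0]["ssr_ftest"][1]   # p-value at lag 1
\end{verbatim}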
\begin{figure*} \centering \includegraphics[width = \textwidth]{granger.pdf} \caption{The percentages of the pro-vaccine groups of the national average, Nevada, Tennessee, and Washington.} \label{fig:granger} \end{figure*} Descriptive statistics and bi-variate correlations of the variables of the multinomial logistic regression are shown in Table~\ref{tab:char_desc}. Table~\ref{tab:regression_outputs} summarizes the results of the multinomial logistic regression. The Chi-square test shows that the variables significantly predict the opinion on potential COVID-19 vaccines: $\chi^{2}(40, N = 10,945)=1,340.94, p<.001$, McFadden's pseudo $R^{2}=.06$, which supports our hypotheses. Next, we show the predictive effects of these variables with paired comparisons. \begin{sidewaystable*}[htbp] \centering \tabcolsep=0.1cm \scriptsize \begin{tabular}{l r r l l l l l l l l l l l l l l l l l l l l l l l} \hline \textbf{Variables} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{SD}} & \multicolumn{1}{c}{\textbf{1}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c}{\textbf{3}} &\multicolumn{1}{c}{\textbf{4}} & \multicolumn{1}{c}{\textbf{5}} &\multicolumn{1}{c}{\textbf{6} }&\multicolumn{1}{c}{\textbf{7}} &\multicolumn{1}{c}{\textbf{8}} & \multicolumn{1}{c}{\textbf{9}} &\multicolumn{1}{c}{\textbf{10}} & \multicolumn{1}{c}{\textbf{11}} & \multicolumn{1}{c}{\textbf{12}} &\multicolumn{1}{c}{\textbf{13}}&\multicolumn{1}{c}{\textbf{14}}&\multicolumn{1}{c}{\textbf{15}}&\multicolumn{1}{c}{\textbf{16}}&\multicolumn{1}{c}{\textbf{17}}&\multicolumn{1}{c}{\textbf{18}}&\multicolumn{1}{c}{\textbf{19}}\\ \hline 1. Gender (0 = male, 1 = female) & 0.46& 0.50 \\ 2. Age (years) & 39.89 & 14.69 & -$.08^{**}$\\ 3. {\tt Verified} (0 = no, 1 = yes) &0.04 & 0.20 & -.00 & $.03^{**}$ \\ 4. Twitter history (months) & 91.30 & 43.47 & -.02 & $.03^{**}$ & $.09^{**}$\\ 5. \# {\tt Followers} & 1.60 & 1.63 & $.04^{**}$ & -.01 & $.37^{**}$ & -$.08^{**}$\\ 6. \# {\tt Friends} & 1.95 & 1.25 & $.04^{**}$ & -.00 & $.09^{**}$ & -$.29^{**}$ & $.68^{**}$\\ 7. \# {\tt Listed memberships} & -1.62 & 0.93 & -.00 & $.08^{**}$ & $.49^{**}$ & $.22^{**}$ & $.69^{**}$ & $.31^{**}$\\ 8. \# {\tt Favorites} & 4.17 & 1.95 & $.12^{**}$ & -$.14^{**}$ & -$.03^{**}$ & -$.21^{**}$ & $.38^{**}$ & $.47^{**}$ & $.05^{**}$\\ 9. \# {\tt Statuses} & 4.09 &1.43& $.02^{*}$ & -$.11^{**}$ & $.07^{**}$ & -$.09^{**}$ & $.53^{**}$ & $.45^{**}$ & $.29^{**}$ & $.58^{**}$\\ 10. Higher-Income (0 = no, 1 = yes) & 0.00 & 0.05 & -.00 & -$.03^{**}$ & $.03^{**}$ & -.01 & $.05^{**}$ & $.02^{*}$ & $.04^{**}$ & $.02^{**}$ & .01\\ 11. Lower-Income (0 = no, 1 = yes) & 0.76 & 0.43 & -.00 & -$.43^{**}$ & -$.04^{**}$ & $.05^{**}$ & -$.15^{**}$ & -$.18^{**}$ & -$.12^{**}$ & -$.13^{**}$ & -$.09^{**}$ & -$.10^{**}$\\ 12. Religious (0 = no, 1 = yes) & 0.04 & 0.19 & .01 & $.07^{**}$ & -$.03^{**}$ & -$.03^{**}$ & $.03^{**}$ & $.07^{**}$ & -$.03^{**}$ & .02 & .02 & -.00 & -$.06^{**}$\\ 13. Having kids (0 = no, 1= yes) &0.12 & 0.32 & $.09^{**}$ & $.09^{**}$ & $.03^{**}$ & $.02^{*}$ & $.04^{**}$ & $.05^{**}$ & $.04^{**}$ & .01 & -$.05^{**}$ & -.01 & -$.05^{**}$ & $.09^{**}$\\ 14. Following Trump (0 = no, 1 = yes)& 0.11 & 0.31 & -$.05^{**}$ & $.06^{**}$ & -$.03^{**}$ & -$.17^{**}$ & -$.05^{**}$ & $.04^{**}$ & -$.12^{**}$ & -.01 & -$.05^{**}$ & -.00 & -$.05^{**}$ & $.06^{**}$ & $.04^{**}$\\ 15. 
Following Biden (0 = no, 1 = yes)& 0.17 & 0.38 & $.09^{**}$ & $.07^{**}$ & $.06^{**}$ & $.10^{**}$ & $.05^{**}$ & $.19^{**}$ & $.06^{**}$ & $.09^{**}$ & $.03^{**}$ & .00 & -$.06^{**}$ & -$.03^{**}$ & $.07^{**}$ & .01\\ 16. Rural (0 = no, 1 = yes)& 0.19 & 0.40 & -.02 & $.09^{**}$ & -$.05^{**}$ & -$.04^{**}$ & -$.07^{**}$ & -.02 & -$.08^{**}$ & -.01 & -$.04^{**}$ & .00 & -$.05^{**}$ & $.06^{**}$ & $.04^{**}$ & $.05^{**}$ & -.00 \\ 17. Suburban (0 = no, 1 = yes)& 0.14 & 0.35 & -.01 & $.07^{**}$ & -.01 & -$.02^{*}$ & -$.05^{**}$ & -$.03^{**}$ & -$.05^{**}$ & -$.03^{**}$ & -$.02^{**}$ & -.02 & -.01 &$.03^{**}$ & $.04^{**}$ & $.03^{**}$ & -.00 & -$.20^{**}$ \\ \makecell[l]{18. Pandemic experience \\ (sentiment)} & 0.06 & 0.80 & $.03^{**}$ & -$.04^{**}$ & $.10^{**}$ & $.03^{**}$ & $.09^{**}$ & -.00 & $.15^{**}$ & -$.08^{**}$ & -$.08^{**}$ & $.02^{*}$ & $.04^{**}$ & -$.03^{**}$ & -.00 & -$.06^{**}$ & .01 & -$.03^{**}$ & .00\\ \makecell[l]{19. Non-pandemic experience \\ (sentiment)} & 0.62 & 0.75 & $.07^{**}$ & -$.06^{**}$ & $.07^{**}$ & $.04^{**}$ & $.14^{**}$ & $.09^{**}$ & $.14^{**}$ & $.06^{**}$ & .01 & .02 & .02 & .00 & $.03^{**}$ & -$.06^{**}$ & .01 & -$.03^{**}$ & -$.02^{*}$ & $.27^{**}$\\ \makecell[l]{20. Pandemic severity perception \\ (relative change of \# daily confirmed cases)}& 0.01&0.00& -.01& $.03^{**}$ & -$.03^{**}$ & -$.02^{*}$ & -$.05^{*}$ & -.01 & -$.07^{**}$ & -.00 & -$.04^{**}$ & .01 & -$.03^{**}$ & $.06^{**}$ & $.05^{**}$ & $.04^{**}$ &-.01 & $.27^{**}$ & $.12^{**}$ & -.01& .01\\ \hline \end{tabular} {\raggedright Note. * $p<0.05$. ** $p<0.01$. \par} \caption{Descriptive statistics and the bi-variate correlations. The numbers of {\tt followers}, {\tt friends}, {\tt listed memberships}, {\tt favorites}, {\tt statuses} are normalized by the months of Twitter history and log-transformed.} \label{tab:char_desc} \end{sidewaystable*} \begin{table*}[] \centering \small \setlength{\tabcolsep}{3pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{r l l l l l l} \hline & \multicolumn{3}{l}{\textbf{Anti-vaccine}} & \multicolumn{3}{l}{\textbf{Pro-vaccine}} \\ \cline{2-7} Predictor & \textit{B} & \textit{SE} & OR (95\% CI) & \textit{B} & \textit{SE} & OR (95\% CI) \\ \hline Intercept &-$1.82^{***}$ &0.26 & & $0.79^{***}$ &0.20 & \\ Age (years) & 0.00 & 0.00 & 1.00 (1.00, 1.01)& $0.01^{***}$ &0.00 & 1.01 (1.01, 1.02) \\ Twitter history (months) & 0.00 &0.00& 1.00 (1.00, 1.00)& -$0.003^{***}$& 0.001 & 0.997 (0.996, 0.999)\\ \# {\tt Followers} & $0.28^{***}$ & 0.04 & 1.32 (1.22, 1.42)& $0.08^{**}$ &0.03& 1.09 (1.02, 1.16)\\ \# {\tt Friends} & -$0.18^{***}$ & 0.04 & 0.83 (0.77, 0.90) & -$0.07^{*}$ & 0.03 & 0.93 (0.88, 0.99)\\ \# {\tt Listed memberships} & -$0.63^{***}$ & 0.06 & 0.53 (0.47, 0.60)& $0.10^{*}$ & 0.05 & 1.10 (1.01, 1.20)\\ \# {\tt Favorites} & $0.04^{*}$& 0.03 & 1.12 (1.06, 1.19)& $0.04^{*}$& 0.02 & 1.04 (1.01, 1.08)\\ \# {\tt Statuses} & $0.11^{***}$ & 0.03 & 1.12 (1.06, 1.19)& -$0.06^{*}$& 0.02& 0.94 (0.90, 0.99)\\ Pandemic experience (sentiment) & -$0.18^{***}$ & 0.04 & 0.84 (0.77, 0.90)& $0.21^{***}$& 0.03& 1.24 (1.16, 1.32)\\ Non-pandemic experience (sentiment) & -0.04 & 0.04 &0.96 (0.89, 1.04)& $0.13^{***}$& 0.03& 1.14 (1.07, 1.22)\\ \makecell[r]{Pandemic severity perception \\ (relative change of \# daily confirmed cases)} & -14.99 & 8.13 & 0.00 (0.00, 2.58)& -$22.68^{***}$ & 6.59 & 0.00 (0.00, 0.00)\\ Female & -$0.25^{***}$& 0.06 & 0.78 (0.69, 0.88)& -$0.47^{***}$& 0.05& 0.63 (0.57, 0.69)\\ {\tt Verified} user & -$0.61^{*}$&0.27 & 0.54 (0.32, 0.91)& -0.16& 0.14& 0.85 
(0.65, 1.12)\\ Higher-income & -170.67&5.00e+36 & 0.00 (0.00, Inf)& 0.47 & 0.43 & 1.60 (0.70, 3.68)\\ Lower-income & $0.40^{***}$& 0.08& 1.49 (1.26, 1.75)&$0.52^{***}$ & 0.06 & 1.69 (1.49, 1.91)\\ Religious &$0.74^{***}$ &0.17 & 2.10 (1.52, 2.91)&$0.37^{*}$ &0.15& 1.45 (1.07, 1.95)\\ Having kids & -0.11& 0.10 & 0.90 (0.74, 1.09)& 0.00& 0.08&1.00 (0.86, 1.15)\\ Following Trump & $0.41^{***}$& 0.10 &1.51 (1.26, 1.83)& 0.06& 0.08&1.06 (0.90, 1.25)\\ Following Biden & -$1.22^{***}$& 0.10 &0.29 (0.24, 0.36)&-$0.34^{***}$ & 0.06&0.71 (0.63, 0.80)\\ Rural & $0.17^{*}$& 0.08 & 1.19 (1.01, 1.39)& 0.07 & 0.07 & 1.07 (0.94, 1.22)\\ Suburban & $0.18^{*}$& 0.09 & 1.20 (1.01, 1.43)& 0.11 & 0.07 &1.12 (0.97, 1.29)\\ \hline Chi-square & \multicolumn{6}{c}{$1,340.94^{***}$}\\ \textit{df} & \multicolumn{6}{c}{40}\\ $-2$ log likelihood & \multicolumn{6}{c}{20,171.90} \\ McFadden's pseudo $R^{2}$ & \multicolumn{6}{c}{0.06} \\ Sample size & \multicolumn{6}{c}{10,945}\\ \hline {\raggedright Note. * $p<0.05$. ** $p<0.01$. *** $p<0.001$.\par} \end{tabular} \caption{Multinomial logistic regression outputs for the opinion on potential COVID-19 vaccines against demographics and other variables of interest. The vaccine-hesitant group is selected as the reference category.} \label{tab:regression_outputs} \end{table*} \subsubsection{Women are more likely to hold hesitant opinions.} Gender is statistically significant ($\chi^{2}=91.83, p<.001$). Women are likely to hold hesitant opinions rather than polarized opinions (i.e., pro-vaccine, anti-vaccine). Specifically, comparing the anti-vaccine group and vaccine-hesitant group, we find that women are less likely to be anti-vaccine ($B=-0.25, SE=0.06, p<.001, OR = 0.78; 95\% CI = [0.69, 0.88]$). Comparing the pro-vaccine group and vaccine-hesitant group, we find that women are also less likely to be pro-vaccine ($B = -0.47, SE = 0.05, p<.001, OR = 0.63; 95\% CI = [0.57, 0.69]$). \subsubsection{Older people tend to be pro-vaccine.} Age is statistically significant ($\chi^{2}=72.47, p<.001$). Comparing the anti-vaccine group and vaccine-hesitant group, we do not find significant evidence that older people are more anti-vaccine. However, comparing the pro-vaccine group and vaccine-hesitant group, we find that people who are one year older are 1.01 ($B=0.01, SE = 0.00 ,p<.001, OR=1.01; 95\% CI = [1.01, 1.02]$) times more likely to be pro-vaccine instead of vaccine-hesitant, which echoes the findings of \citet{lazarus2020global}. One potential explanation is that the risk of dying with COVID-19 increases with age~\citep{lloyd2020bearing}, and the benefits of not getting infected with COVID-19 outweigh the risk of getting vaccinated. \subsubsection{Different patterns of Twitter usage.} A {\tt Verified} Twitter account must represent or other wise be associated with a prominently recognized individual or brand\footnote{https://help.twitter.com/en/managing-your-account/about-twitter-verified-accounts [Accessed July 21, 2021]}. In our study, {\tt Verified} status is statistically significant ($\chi^{2}=6.12, p<.05$). Comparing the anti-vaccine group and vaccine-hesitant group, we find {\tt Verified} users are less likely to be anti-vaccine ($B = -0.61, SE = 0.27, p<.05, OR = 0.54; 95\% CI = [0.32, 0.91]$), however, comparing the pro-vaccine group and vaccine-hesitant group, we do not find significant differences. Months of Twitter history is statistically significant ($\chi^{2}=17.52, p<.001$). 
Comparing the anti-vaccine group and vaccine-hesitant group, we do not find significant differences, however, comparing the pro-vaccine group and vaccine-hesitant group, we find if the months of Twitter history were to increase by one month, it is 0.997 ($B=-0.003, SE = 0.001, p <.001, OR =0.997; 95\% CI = [0.996, 0.999]$) less likely to be pro-vaccine than vaccine-hesitant. After normalizing the number of {\tt followers}, {\tt friends}, {\tt listed memberships}, {\tt favorites}, and {\tt statuses} with the number of months of Twitter history, we still find that the social capital is statistically significant. Specifically, there are significant differences in terms of {\tt followers} counts ($\chi^{2}=51.06, p<.001$), {\tt friends} counts ($\chi^{2}=21.28, p<.001$), {\tt listed memberships} counts ($\chi^{2}=199.51, p<.001$), {\tt favorites} counts ($\chi^{2}=6.10, p<.05$), {\tt statuses} counts ($\chi^{2}=47.37, p<.001$). Comparing the anti-vaccine group and vaccine-hesitant group, if the log-scale {\tt followers} count were to increase by one unit, it is 1.32 ($B=0.28, SE = 0.04, p<.001, OR = 1.32; 95\% CI = [1.22, 1.42]$) times more likely to be anti-vaccine. If the log-scale {\tt friends} count were to increase by one unit, it is less likely to be anti-vaccine ($B=-0.18, SE = 0.04, p<.001, OR = 0.83; 95\% CI = [0.77, 0.90]$). If the log-scale {\tt listed memberships} count were to increase by one unit, it is less likely to be anti-vaccine ($B=-0.63, SE = 0.06,p<.001, OR = 0.53; 95\% CI = [0.47, 0.60]$). If the log-scale {\tt favorites} count were to increase by one unit, it is 1.04 ($B=0.04, SE = 0.02, p<.05, OR = 1.04; 95\% CI = [1.00, 1.09]$) times more likely to be anti-vaccine. If the log-scale {\tt statuses} count were to increase by one unit, it is 1.12 ($B=0.11, SE = 0.03, p<.001, OR = 1.12; 95\% CI = [1.06, 1.19]$) times more likely to be anti-vaccine. Comparing the pro-vaccine group and vaccine-hesitant group, if the log-scale {\tt followers} count were to increase by one unit, it is 1.09 ($B=0.08, SE = 0.03, p<.01, OR = 1.09; 95\% CI = [1.02, 1.16]$) times more likely to be pro-vaccine. If the log-scale {\tt friends} count were to increase by one unit, it is less likely to be pro-vaccine ($B=-0.07, SE = 0.03, p<.05, OR = 0.93; 95\% CI = [0.88, 0.99]$). If the log-scale {\tt listed memberships} count were to increase by one unit, it is 1.11 ($B=0.10, SE = 0.05, p<.05, OR =1.10; 95\% CI = [1.01, 1.20]$) times more likely to be pro-vaccine. If the log-scale {\tt favorites} count were to increase by one unit, it is 1.04 ($B = 0.04, SE = 0.02, p<.05, OR = 1.04; 95\% CI = [1.01, 1.08]$) times more likely to be pro-vaccine. If the log-scale {\tt statuses} count were to increase by one unit, it is less likely to be pro-vaccine ($B = -0.06, SE = 0.02, p<.05, OR = 0.94; 95\% CI = [0.90, 0.99]$). Twitter users who have more {\tt followers} or fewer {\tt friends}, or give more {\tt favourites} are more likely to hold polarized opinion. The larger {\tt listed memberships} count is, the more likely the Twitter user is pro-vaccine. Twitter users who post more {\tt statuses} tend to be anti-vaccine. \subsubsection{The lower-income group is more likely to hold polarized opinions.} Income is statistically significant ($\chi^{2}=79.09, p<.001$). 
Comparing the anti-vaccine group and vaccine-hesitant group, we find that the lower-income group is 1.49 ($B=0.40, SE = 0.08, p<.001, OR = 1.49; 95\% CI = [1.26, 1.75]$) times more likely to be anti-vaccine than the medium-income group, however,the difference between the higher-income group and medium-income group is not significant. Comparing the pro-vaccine group and vaccine-hesitant group, we find that lower-income group is 1.69 ($B = 0.52, SE = 0.06, p<.001, OR = 1.69; 95\% CI = [1.49, 1.91]$) times more likely to be pro-vaccine than medium-income group. The difference between the higher-income group and medium-income group is not significant. {\it Inconsistent} with \citet{lazarus2020global} that the higher the income is, the more likely people are pro-vaccine, we find the effect of income more nuanced. Lower-income people tend to be polarized. \subsubsection{Religious people are more likely to be polarized.} Religious status is statistically significant ($\chi^{2}=21.34, p<.001$). Comparing the anti-vaccine group and vaccine-hesitant group, we find that religious people are more likely to be anti-vaccine than non-religious people ($B = 0.74, SE = 0.17, p<.001, OR = 2.10; 95\% CI = [1.52, 2.91]$). Comparing the pro-vaccine group and vaccine-hesitant group, we find that religious people are also more likely to be pro-vaccine than non-religious people ($B = 0.37, SE = 0.15, p<.05, OR = 1.45; 95\% CI = [1.07, 1.95]$). This is in line with \citet{larson2014understanding} that the effect of religious status is complicated. \subsubsection{Political diversion indicates a divided opinion about the potential COVID-19 vaccines.} Following Donald Trump is statistically significant ($\chi^{2}=25.22, p<.001$). Comparing the anti-vaccine group and vaccine-hesitant group, we find that the Twitter users who follow Donald Trump are 1.51 ($B = 0.41, SE = 0.10, p<.001, OR = 1.51; 95\% CI = [1.26, 1.83]$) times more like to be anti-vaccine than the Twitter users who do not. Comparing the pro-vaccine group and vaccine-hesitant group, following Donald Trump is not significant. Following Joe Biden is statistically significant ($\chi^{2}=177.96, p<.001$). Comparing the anti-vaccine group and vaccine-hesitant group, we find that the Twitter users who follow Joe Biden are less like to be anti-vaccine than the Twitter users who do not ($B = -1.22, SE = 0.10, p<.001, OR = 0.30; 95\% CI = [0.24, 0.36]$). Comparing the pro-vaccine group and vaccine-hesitant group, we find that the Twitter users who follow Joe Biden are also less likely to be pro-vaccine than the Twitter users who do not ($B = -0.34, SE = 0.06, p<.001, OR = 0.71; 95\% CI = [0.63, 0.80]$). Twitter users who follow Donald Trump tend to be anti-vaccine, while those who follow Joe Biden tend to be vaccine-hesitant. \subsubsection{People living in suburban or rural areas are more likely to be anti-vaccine.} Although the population density of the area is not statistically significant across three opinion categories, we still find differences between the anti-vaccine group and vaccine-hesitant group. People living in suburban areas are 1.20 ($B = 0.18, SE = 0.09, p<.05, OR = 1.20; 95\% CI = [1.01, 1.43]$) times more likely to be anti-vaccine than people living in urban areas. People living in rural areas are 1.20 ($B = 0.17, SE = 0.08, p<.05, OR = 1.18; 95\% CI = [1.01, 1.39]$) times more likely to be anti-vaccine than people living in urban areas. Most of the results are consistent with Hypothesis 1. 
There are significant differences in demographics, social capital, income, religious status, political affiliations and geo-locations among opinion groups; however, we do not find a significant difference in family status. \subsubsection{Personal experience with COVID-19 and the county-level pandemic severity perception shape the opinion.} The sentiment score of personal experience with COVID-19 is statistically significant ($\chi^{2}=146.50, p<.001$). Comparing the anti-vaccine group and vaccine-hesitant group, we find that if the sentiment score of personal experience with COVID-19 were to increase by one unit (i.e., the sentiment became more positive), the person would be less likely to hold an anti-vaccine opinion ($B = -0.18, SE = 0.04, p<.001, OR = 0.84; 95\% CI = [0.77, 0.90]$). Comparing the pro-vaccine group and vaccine-hesitant group, we find that if the sentiment score of personal experience with COVID-19 were to increase by one unit (i.e., the sentiment became more positive), the person would be 1.24 times more likely to hold a pro-vaccine opinion ($B = 0.21, SE = 0.03, p<.001, OR = 1.24; 95\% CI = [1.16, 1.32]$), which is consistent with Hypothesis 2. The sentiment score of non-COVID-related personal experience is overall statistically significant ($\chi^{2}=29.28, p<.001$), but comparing the anti-vaccine group and vaccine-hesitant group, we find no significant difference. However, comparing the pro-vaccine group and vaccine-hesitant group, we find that if the sentiment score of non-COVID-related personal experience were to increase by one unit (i.e., the sentiment became more positive), the person would be more likely to hold a pro-vaccine opinion ($B = 0.13, SE = 0.03, p<.001, OR = 1.14; 95\% CI = [1.07, 1.22]$). The county-level pandemic severity perceptions are overall statistically significant ($\chi^{2}=11.76, p<.01$), supporting Hypothesis 3, but we find no significant difference comparing the anti-vaccine group and vaccine-hesitant group. However, comparing the pro-vaccine group and vaccine-hesitant group, if the relative change of the number of daily confirmed cases at the county level were to increase by one unit, the person would be less likely to be pro-vaccine ($B = -22.68, SE = 6.59, p<.001, OR = 0.00; 95\% CI = [0.00, 0.00]$). At the individual level, the personal pandemic experience is a strong predictor of the opinion about COVID-19 vaccines. People who have the worst personal pandemic experience are more likely to hold anti-vaccine opinions. However, the non-pandemic experience is not a strong predictor of anti-vaccine opinion. At the county level, people who have the worst pandemic severity perception (i.e., the relative change of the number of daily confirmed cases is the largest) are more likely to be vaccine-hesitant. \subsection{Counterfactual analyses} \subsubsection{Removing the safety and effectiveness factors reduces the vaccine acceptance level. However, removing the politics factor increases it.} Figure~\ref{fig:counter} shows the results of counterfactual analyses of factor indicators. In the counterfactual analysis, turning the factor indicator of safety and effectiveness to 0 leads to a clear decrease (4.42\% on average) in the percentage of pro-vaccine people. However, turning the factor indicator of politics to 0 leads to a clear increase (22.65\% on average) in the percentage of pro-vaccine people.
This indicates that people are most concerned about the relationship between politics and the potential COVID-19 vaccines, which is also mirrored by news reports\footnote{https://www.nytimes.com/2020/08/02/us/politics/coronavirus-vaccine.html [Accessed July 21, 2021]}. \subsubsection{Improving personal pandemic experience increases the vaccine acceptance level.} Figure~\ref{fig:counter} shows the results of counterfactual analyses of different sentiment levels of personal pandemic experience. By increasing the sentiment scores by 50\%, the percentage of the pro-vaccine people increases by 6.39\%. However, by reducing the sentiment scores by 50\%, the percentage of the pro-vaccine people decreases by 2.82\%. \subsection{Robustness verification} The multinomial logistic regression and counterfactual analyses are conducted based on the opinions derived from the social media data. Because we adopt the human-guided machine learning framework, the dataset is composed of human-annotated data and machine-inferred data. Although the Cohen's Kappa score of the machine and human annotators reaches 0.5 after five iterations, the results of the analyses could potentially change given the seemingly limited prediction performance. Therefore, we additionally verify the robustness of the findings of the multinomial logistic regression and counterfactual analyses by examining two different combinations of the human-annotated data and machine-inferred data: \begin{itemize} \item Only human-annotated data (n = 2,939) are used to conduct the multinomial logistic regression and counterfactual analyses \item Human-annotated data (n = 2,939) and 50\% machine-inferred data (randomly sampled, n = 4,003) are used to conduct the multinomial logistic regression and counterfactual analyses \end{itemize} The results of the two robustness verification experiments are shown in the Appendices. Both of them are basically consistent with the reported results in Table~\ref{tab:regression_outputs} and Figure~\ref{fig:counter}. The findings do not fundamentally change, confirming that the opinions computationally derived from the social media data are reliable for our study. \section{Discussion} We conduct multinomial logistic regression to investigate the scope and causes of public opinions on vaccines and test three hypotheses. The current study shows the hypothesized effects of most of the characteristics in predicting the odds of being pro-vaccine or anti-vaccine relative to vaccine-hesitant. The findings suggest that women are more vaccine-hesitant, which is consistent with the Reuters/Ipsos survey\footnote{https://www.reuters.com/business/healthcare-pharmaceuticals/more-women-than-men-us-nervous-about-fast-rollout-covid-vaccine-thats-problem-2020-12-11/ [Accessed July 21, 2021]}, and older people tend to be pro-vaccine. With respect to social capital, people who have more {\tt followers} or fewer {\tt friends}, or give more {\tt favorites}, are more likely to hold polarized opinions. {\tt Verified} status, months of Twitter history, {\tt listed memberships} counts and {\tt statuses} counts are statistically significant as well. We also show that the lower-income group is more likely to hold polarized opinions. This is {\it inconsistent} with the finding by \citet{lazarus2020global}. Moreover, religious people tend to hold polarized opinions.
As for political affiliations, Twitter users who follow Donald Trump are more likely to be anti-vaccine than vaccine-hesitant, while those who follow Joe Biden tend to be vaccine-hesitant rather than anti-vaccine or pro-vaccine. In addition, we find that people who live in rural or suburban areas tend to be anti-vaccine. However, we do not find the hypothesized predictive effect of family status on the opinion about vaccines. Furthermore, the current study shows the hypothesized predictive effects of personal pandemic experience and the county-level pandemic severity perception. In particular, personal experience with COVID-19 is a strong predictor of an anti-vaccine opinion: the more negative the experience is, the more negative the opinion on vaccines is. People are also more likely to be vaccine-hesitant if their pandemic severity perceptions are worse.

Our current study has limitations. The public opinions of some (less populated) states cannot be reflected due to inadequate data, and the findings could be further validated in other populations. In addition, the performance of the XLNet model of this study could be improved so that the proposed human-guided machine learning framework could identify the public opinions on the COVID-19 vaccines more accurately. Specifically, future work could address the class imbalance issue by annotating more tweets or by augmenting the training data via back translation (i.e., translating English tweets to another language and then translating them back to English).

Despite the limitations, our study broadly captures the public opinions on the potential vaccines for COVID-19 on Twitter. The final F1-score of our study is 0.63, indicating good performance, which is achieved via the human-guided machine learning framework that combines a state-of-the-art transformer-based language model with a large annotated dataset. By aggregating the opinions, we find a lower acceptance level in the southeastern U.S. The changes in the proportions of the different opinion groups correspond roughly to major pandemic-related events. We show the hypothesized predictive effects of the characteristics of the people in predicting membership in the pro-vaccine, vaccine-hesitant, and anti-vaccine groups. Using counterfactual analyses, we find that people are most concerned about the safety, effectiveness and politics of potential COVID-19 vaccines, and that improving personal experience with COVID-19 increases the vaccine acceptance level.

Our results can guide and support policymakers in making more effective distribution policies and strategies. First, more dissemination effort should be devoted to the socioeconomically disadvantaged groups who are exposed to potentially higher risks~\cite{chang2020mobility,hopman2020managing, adams2020inequality} and already hold more polarized attitudes towards the vaccines. Second, messaging for the vaccines is extremely important because the vaccine acceptance level increases when the politics factor is removed. Third, safety and effectiveness issues need to be well addressed because the acceptance level is reduced when this factor is removed. Finally, improving personal pandemic experience may increase the vaccine acceptance level as well, and thus all helpful measures should be integrated to maximize vaccine acceptance.
In the future, by combining social media data and more traditional survey data, we hope to acquire deeper insights into the public opinions on potential COVID-19 vaccines and thus inform more effective vaccine dissemination policies and strategies.
\section{INTRODUCTION}\label{sec:introduction} Core-collapse supernovae (SNe) are the terminal explosions of stars with initial mass $>8~M_{\odot}$ \citep{Burrows95}. This aspect of massive star evolution was empirically confirmed by the discovery of the blue supergiant progenitor of SN~1987A \citep{podsialowski+93} and the subsequent discovery of over two dozen SN progenitors in nearby galaxies \citep[][and references therein, with more discovered since]{Smartt15}. The majority of these stars are red supergiant (RSG) progenitors of hydrogen-rich type II SNe (SNe~II), although several hydrogen-poor SN~IIb progenitor stars, all of which are A--K supergiants, have also been explored in the literature \citep[notably for SNe~1993J, 2008ax, 2011dh, 2013df, and 2016gkg;][]{aldering+94,crockett+08,maund+11,vandyk+14,kilpatrick17:16gkg}. To date, there is only one confirmed example of a progenitor star to a hydrogen-stripped SN~Ib; the progenitor of iPTF13bvn in NGC~5806 was initially identified as a compact Wolf-Rayet (WR) star in pre-explosion {\it Hubble Space Telescope} ({\it HST}) imaging \citep{Cao13} and confirmed as the progenitor by its disappearance \citep{eldridge+16,Folatelli16}. There are numerous upper limits on the progenitor systems of other SNe~Ib in the literature \citep{Eldridge13}, implying they tend to have low optical luminosities.

Overall, this relative lack of pre-explosion detections has contributed to an ongoing debate on the nature of stripped-envelope SN progenitors. The transition from hydrogen-rich type II to hydrogen-poor type IIb to hydrogen-free type Ib SNe, and finally to helium-free type Ic SNe is commonly understood as a continuum in final hydrogen (or helium) mass in the envelopes of their progenitor stars \citep{Filippenko97,dessart+11,dessart+12,yoon+12,yoon+15,dessart+15,maund2016}. Possible mechanisms that can deplete stellar envelope mass include radiative mass loss \citep{Heger03, Crowther07, Smith14}, eruptive mass loss \citep{Langer94,Maeder00,r-r2005,Dessart10}, and mass transfer in binary systems \citep{Woosley+94,izzard2004,Fryer07,yoon+17}. Stars with higher initial masses or metallicities are predicted to be more stripped at the time of core collapse due to their strong radiative winds \citep{Heger03}. However, extremely high mass stars that are predicted to efficiently deplete their envelopes have more compact and thus less explodable cores, which is thought to lead to a significant fraction of failed SNe, that is, direct collapse to a black hole with no luminous transient \citep{Burrows07,Sukhbold+16,ari2020}. In addition, the large relative fraction of stripped-envelope SNe in volume-limited surveys \citep[i.e., SNe~Ib and Ic;][]{li+11,shivvers+17,Graur17a,Graur17b} suggests they come from a progenitor channel including stars with initial masses $<$30~$M_{\odot}$ \citep{Smith+11b,Eldridge13}. Mass transfer in a binary system is therefore an appealing alternative mechanism to strip massive star envelopes as the majority of massive stars are observed to evolve in binaries \citep{Sana12,Kiminki12}, and binary interactions can lead to a wide variety of outcomes based on mass ratio, orbital period, and the characteristics of each stellar component \citep[e.g.,][]{2020ApJ...901...44W}.
In particular, Case B \citep[during helium core contraction;][]{Kippenhahn67} or Case BB \citep[after core helium exhaustion for a star with previous Case B mass transfer;][]{Delgado81} mass transfer can remove nearly all of a star's hydrogen envelope, although this process typically stops before hydrogen is completely depleted \citep{yoon+10,yoon+15,yoon+17}. Stars with a small amount of hydrogen remaining might also swell up in the latest stages of evolution \citep{Divine65,Habets86,Gotberg17,Laplace20} and fill their Roche lobes to restart mass transfer. If the mass transfer is non-conservative, that is, if some of the material is not accreted by the companion star, this scenario can lead to dense circumstellar material (CSM) in the local environment of the system. Thus, when the primary star explodes, the SN ejecta might encounter and shock this material, producing a strong thermal continuum and hydrogen and helium line emission at optical wavelengths \citep[i.e., SN~IIn and Ibn features;][]{Vanbeveren79,Claeys11,Maund16,Yoon17,Smith17,Gotberg19}. In this way, the final envelope mass, radius, and composition of the star can result in SNe with diverse photometric and spectroscopic properties \citep{James10} ranging from type II to type IIn to type Ic-like evolution.

One prediction from this model of binary mass transfer is that there may be a continuum between SNe with type IIb and Ib-like behaviour, depending on their final hydrogen mass. \citet{dessart+12} find that progenitor stars with as little as $10^{-3}~M_{\odot}$ hydrogen envelope mass would produce a SN whose spectra exhibit broad H$\alpha$ line emission up to 10~days after maximum light \citep[although other studies find the envelope mass can be as large as 0.02--0.03~$M_{\odot}$ with no H$\alpha$ signature;][]{Hachinger12}. Stars on either side of this dividing line are expected to differ not only in the spectroscopic evolution of their resulting SN but also in their appearance in pre-explosion imaging. Above this threshold, spectroscopic evolution should be similar to archetypal SNe~IIb such as SN~1993J \citep{Filippenko+93, Richmond+94, Woosley+94}, and the progenitor star can inflate to radii $>$400 R$_\odot$ \citep{yoon+17,Laplace20}. Indeed, the progenitor of SN~1993J was a K-type supergiant \citep{Nomoto+93,aldering+94,Fox+14}. In contrast, stars with final hydrogen-envelope masses low enough that they would be classified as a type Ib SN prior to maximum light are only expected to inflate to radii of {\it at most} $\sim$100 R$_\odot$ \citep{yoon+12,yoon+15,yoon+17,kleiser+18,Laplace20} and in many cases remain significantly smaller. This should result in hotter progenitor stars for a given luminosity.

Intriguingly, some SNe~Ib exhibit signatures of circumstellar interaction with hydrogen-rich gas weeks to months after explosion, which suggests their progenitor stars (or binary companions) recently released this material. The best-studied example to date is SN~2014C \citep{Milisavljevic15,Margutti+16,Tinyanont16,Tinyanont19}, which was discovered in NGC~7331 at $\approx$15~Mpc, but several other stripped-envelope SNe with similar evolution have been presented in the literature \citep[e.g., SNe~2001em, 2003gk, 2004dk, 2018ijp, 2019tsf, 2019oys;][]{Chugai06,Bietenholz14,Chandra17,Mauerhan18,Pooley19,Sollerman20,Tartaglia20} as well as the initially hydrogen-free superluminous SN iPTF13ehe \citep{Yan17}.
Although non-conservative mass transfer or common envelope ejections have been proposed as the source of this material \citep{Sun20}, it is still unclear what evolutionary pathways lead to these apparently hydrogen-stripped stars or what exact mechanism causes an ejection timed only years before explosion \citep[up to $1~M_{\odot}$ of hydrogen-rich CSM for SN~2014C in][]{Margutti+16}. Understanding how common circumstellar interaction is among stripped-envelope SNe might aid in ruling out less likely mechanisms, but constraining the exact rate is difficult because few SNe are close and bright enough to follow to late times, and stripped-envelope SNe tend to be further extinguished in their host galaxies \citep{Stritzinger18}. Some SNe~Ib exhibit clear signatures of circumstellar interaction with helium-rich material at early times \citep[so-called SNe~Ibn, with narrow emission lines of helium indicative of interaction between SN ejecta and slow-moving, circumstellar helium;][]{pastorello+08,Shivvers17b}, potentially from massive, helium-rich WR stars undergoing extreme mass loss immediately before explosion \citep{smith+17}. However, events from this class are rare and exhibit significant photometric and spectroscopic diversity \citep{Hosseinzadeh17}. \citet{Margutti+16} analyzed 183 SNe~Ib and Ic with late-time radio observations and found that 10\% exhibit evidence for rebrightening consistent with SN~2014C-like evolution, implying this phenomenon may be relatively common. However, volume-limited samples with light curves beyond $100$~days of discovery \citep[when most of these interactions occur;][]{Sollerman20} are small \citep[e.g., in][]{li+11,shivvers+17}, and so there may be an observational bias preventing precise constraints on the intrinsic rate of these interactions in SNe~Ib/c.

In this paper we discuss a progenitor candidate for the SN~Ib 2019yvr discovered in NGC~4666 on UTC 2019 December 27 12:30:14 (MJD 58844.521) by the Asteroid Terrestrial-impact Last Alert System \citep[ATLAS;][]{Smith19}\footnote{SN~2019yvr is also called ATLAS19benc. We use SN~2019yvr throughout this paper for consistency with follow-up reports.}. We present early-time light curves and spectra of SN~2019yvr demonstrating that it resembles several other SNe~Ib and is spectroscopically most similar to iPTF13bvn, albeit with much more line-of-sight extinction than most known SNe~Ib. Although we do not present any observations beyond 35~days from discovery, we note that SN~2019yvr exhibited signatures of circumstellar interaction at $>$150~days from discovery, with evidence for relatively narrow H$\alpha$, X-ray, and radio emission at these times (Auchettl et al. in prep.). From this information, we infer that SN~2019yvr is similar to SN~2014C, with early-time type Ib-like evolution but transitioning around 150~days to a light curve powered by shock interaction with CSM at all wavelengths. NGC~4666 has deep {\it Hubble Space Telescope}/Wide Field Camera 3 ({\it HST}/WFC3) imaging in F438W, F555W, F625W, and F814W bands (roughly $BV\!RI$, respectively) that covers the site of SN~2019yvr 2.6~yr before its explosion \citep{Shappee16,Foley16,Graur18}. Compared with limits on the progenitor stars of other SNe~Ib in the literature \citep[][]{Eldridge13} as well as the detection of the progenitor star of iPTF13bvn \citep{Cao13}, these data are among the deepest pre-explosion imaging for any SN~Ib.
We compare follow-up adaptive optics-fed imaging to the pre-explosion {\it HST}\ images and identify a single progenitor candidate with ${\rm F555W} = 25.35 \pm 0.03$~mag and ${\rm F555W} - {\rm F814W} = 1.10 \pm 0.04$~mag (AB mag; \autoref{tab:progenitor}). Using constraints on the interstellar host extinction to SN~2019yvr inferred from photometry and spectra of the SN itself, we characterize the progenitor candidate's intrinsic spectral shape and find it is consistent with a star with $\ensuremath{\log(L/L_{\odot})} = 5.3 \pm 0.2$ and $T_{\mathrm{eff}} = 6800\substack{+400\\-200}$~K. This is much cooler than most SN~Ib progenitor stars are generally thought to be \citep[including the progenitor star of iPTF13bvn;][]{Cao13,Bersten+14,Folatelli16}, implying that if the counterpart is dominated by the SN~2019yvr progenitor star it must be significantly inflated compared with expectations for SN~Ib progenitor systems. Analyzing Binary Population and Spectral Synthesis \citep[BPASS;][]{eldridge+17} stellar evolution models, we find that the {\it HST}\ photometry could be consistent with a $19~M_{\odot}$ progenitor star with a relatively low-mass ($\approx$1.9$~M_{\odot}$), close companion star that undergoes common envelope evolution and sheds most of its hydrogen envelope. However, all of these models predict a significant residual hydrogen envelope mass and are thus in conflict with the observed type Ib spectral class of SN~2019yvr. Therefore, we hypothesize that the progenitor star may have shed its remaining hydrogen envelope through pre-SN eruptive mass ejection in the last 2.6~yr before explosion. Otherwise, the apparently inflated radius may be caused by a much lower mass of hydrogen forming a compact quasi-photosphere in the progenitor star's circumstellar environment soon before explosion. Throughout this paper, we assume a distance to NGC~4666 of $m-M=30.8\pm0.2$~mag ($14.4 \pm 1.3$~Mpc) derived from the light curve of the type Ia SN ASASSN-14lp also observed in this galaxy \citep{Shappee16}. We assume a redshift to NGC~4666 of $z=0.005080$ \citep{Allison14} and Milky Way reddening $E(B-V)=0.02$~mag \citep{Schlafly11}. \section{OBSERVATIONS}\label{sec:observations} \subsection{High-Resolution Pre-Explosion Images of the SN~2019\lowercase{yvr} Explosion Site}\label{sec:archival} We analyzed {\it HST}/WFC3 imaging of NGC~4666 obtained from the Mikulski Archive for Space Telescopes\footnote{\url{https://archive.stsci.edu/hst/}}. These data were observed over five epochs from 2017 April 21 to August 7 (Cycle 24, GO-14611, PI Graur; see \autoref{tab:progenitor}), corresponding to 980 to 872~days (2.68 to 2.39~years) before discovery of SN~2019yvr. Using our analysis code {\tt hst123}\footnote{\url{https://github.com/charliekilpatrick/hst123}}, we downloaded every {\it HST}\ image covering the explosion site of SN~2019yvr. These comprised WFC3/UVIS {\tt flc} frames calibrated with the latest reference files, including corrections for bias, dark current, flat-fielding, bad pixels, and geometric distortion. We optimally aligned each image using {\tt TweakReg} with 1000--2000 sources per frame and resulting in frame-to-frame alignment with 0.1--0.2~pix (0.005--0.010$^{\,\prime\prime}$) root-mean-square dispersion. We then drizzled all images in each band and epoch with {\tt astrodrizzle}. With the drizzled F555W frame as a reference, we obtained photometry in the {\tt flc} frames of every source on the same chip as the SN~2019yvr explosion site using {\tt dolphot} \citep{dolphot}. 
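As a schematic illustration of this reduction flow (alignment, drizzling, and point-spread-function photometry), a minimal Python sketch is given below. The file names and parameter choices are placeholders rather than the settings adopted in {\tt hst123}, and {\tt dolphot} is an external command-line package whose invocation is only indicated schematically.
\begin{verbatim}
import glob
import subprocess
from drizzlepac import tweakreg, astrodrizzle

flc_files = sorted(glob.glob("*_flc.fits"))   # calibrated WFC3/UVIS frames

# Align all frames to a common astrometric solution (updates each frame's WCS).
tweakreg.TweakReg(flc_files, updatehdr=True)

# Combine the aligned frames for one band and epoch into a drizzled mosaic.
astrodrizzle.AstroDrizzle(flc_files, output="f555w_drz", build=True)

# Schematic dolphot call: PSF photometry on the flc frames with the drizzled
# F555W image as the reference (parameter file contents not shown here).
subprocess.run(["dolphot", "f555w.phot", "-pdolphot.param"], check=True)
\end{verbatim}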
Our {\tt dolphot} parameters followed the recommended settings for WFC3/UVIS\footnote{\url{http://americano.dolphinsim.com/dolphot/dolphotWFC3.pdf}} as described in {\tt hst123}. We show a colour image constructed from the F814W, F555W, and F438W frames obtained on 2017 June 13 in \autoref{fig:astrometry}. In addition, multiple epochs of {\it Spitzer}/Infrared Array Camera (IRAC) imaging of NGC~4666 were obtained from 2005 January 4 to 2014 September 25, or roughly 15.0 to 5.3~yr before discovery of SN~2019yvr. There was a single epoch of Channel 4 (7.9$\mu$m) imaging that observed NGC~4666 (AOR 21999872; PI Rieke), but no {\it Spitzer}/IRAC observations cover NGC~4666 in Channel 3 (5.7$\mu$m). We downloaded the basic calibrated data ({\tt cbcd}) frames and stacked them using our custom {\it Spitzer}/IRAC pipeline based on the {\tt photpipe} imaging and reduction pipeline \citep{Rest+05,Kilpatrick18:16cfr}. The IRAC frames were stacked and regridded to a pixel scale of 0.6~arcsec~pixel$^{-1}$ using {\tt SWarp} \citep{swarp}. We performed photometry on the stacked frames using {\tt DoPhot} \citep{schechter+93} and calibrated our data with {\it Spitzer}/IRAC instrumental response \citep[for the cold and warm missions where appropriate;][]{irac} in the stacked frames. Based on the PSF width and average sky background, the average depth of the {\it Spitzer}/IRAC images is approximately (3$\sigma$; AB mag) 24.3~mag, 24.6~mag, and 23.0~mag at 3.6, 4.5, and 7.9~$\mu$m, respectively. \subsection{Adaptive Optics Imaging of SN~2019\lowercase{yvr}} We observed SN~2019yvr in $H$-band on 2020 March 8, or 72~days after discovery, with the Gemini-South telescope from Cerro Pach\'{o}n, Chile and the Gemini South Adaptive Optics Imager \citep[GSAOI;][]{GSAOI}. We used the Gemini Multi-conjugate Adaptive Optics System \citep[GeMS;][]{GeMS} with the Gemini South laser guide star system to perform adaptive optics corrections over the GSAOI field of view (85$^{\,\prime\prime}$$\times$85$^{\,\prime\prime}$) and using SN~2019yvr itself to perform tip-tilt corrections. We alternated observations between a field covering SN~2019yvr and a relatively empty patch of sky 4$^{\,\prime}$\ to the south in an on-off-on-off pattern, totaling 1005 seconds of on-source exposure time over 39 frames. Using the GSAOI reduction tools in {\tt IRAF}\footnote{{\tt IRAF} is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.}, we flattened the images with a flat-field frame constructed from observations of a uniformly-illuminated screen in the same filter and instrumental setup with unilluminated frames of the same exposure time to account for bias and dark current. We then subtracted the sky frames from our on-source frames. GSAOI has a well-understood geometric distortion pattern \citep{GSAOI-astrom}. We used this distortion pattern to resample each on-source frame to a corrected grid, aligned the individual exposures, and constructed a mosaic from each amplifier in the on-source frames with the GSAOI tool {\tt disco-stu}\footnote{\url{http://www.gemini.edu/sciops/data/software/disco_stu.pdf}}. Finally, we stacked the individual frames with {\tt SWarp} using an inverse-variance weighted median algorithm and scaling each image to the flux of isolated point sources observed in every on-source exposure. 
The final stacked frame is shown in the upper-left inset of \autoref{fig:astrometry} centered on SN~2019yvr.

\begin{figure*}
\includegraphics[width=\textwidth]{2019yvr.jpg}
\caption{({\it Right}) {\it Hubble Space Telescope} imaging of the SN~2019yvr explosion site from 2.5~years before discovery consisting of F814W (red), F555W (green), and F438W (blue). All images are oriented with north up and east to the left. The colour image on the right is 165$^{\,\prime\prime}$$\times$165$^{\,\prime\prime}$, while the left-upper and left-middle images are 38.8$^{\,\prime\prime}$$\times$38.8$^{\,\prime\prime}$, and the left-lower image is 2.4$^{\,\prime\prime}$$\times$2.4$^{\,\prime\prime}$. The blue box denotes the approximate location of SN~2019yvr. ({\it Upper left}): Gemini-S/GSAOI $H$-band image of SN~2019yvr obtained 67~days after discovery of the transient. The image is centered on the location of SN~2019yvr. ({\it Middle left}): Pre-explosion F555W imaging of NGC~4666 showing the same location as the upper left. ({\it Lower left}): Pre-explosion F555W imaging zoomed into the blue box from the middle left. The location of the SN~2019yvr progenitor candidate derived from our Gemini-S/GSAOI imaging is shown as red lines, which agrees with the location of a single point source as discussed in \autoref{sec:alignment}.}\label{fig:astrometry}
\end{figure*}

\subsection{Photometry of SN~2019\lowercase{yvr}}\label{sec:imaging} We observed SN~2019yvr with the Swope 1.0-m telescope and Direct/4K$\times$4K imager at Las Campanas Observatory, Chile from 2020 January 1 to 28 in $uBV\!gri$. Following reduction procedures described in \citet{Kilpatrick18:16cfr}, we performed all image processing and photometry on the Swope data using {\tt photpipe} \citep{Rest+05}. The final $BV\!gri$ photometry of SN~2019yvr was calibrated using PS1 standard sources \citep{flewelling+16} observed in the same field as SN~2019yvr and transformed into the Swope natural system following the Supercal method \citep{scolnic+15}. In $u$-band, we calibrated our images using SkyMapper standards \citep{Onken20} in the same frame as SN~2019yvr. We also observed SN~2019yvr with the Las Cumbres Observatory (LCO) Global Telescope Network 1-m telescopes from 2019 December 29 to 2020 February 3 with the Sinistro imagers in $g^{\prime}r^{\prime}i^{\prime}$. We obtained the processed images \citep[from the {\tt BANZAI} pipeline;][]{BANZAI} from the LCO archive and processed them in {\tt photpipe}, registering each image to a corrected grid with {\tt SWarp} \citep{swarp} and performing photometry on the individual frames with {\tt DoPhot} \citep{schechter+93}. We then calibrated the $g^{\prime}r^{\prime}i^{\prime}$ photometry using $gri$ PS1 standards. All Swope and LCO photometry is listed in Table~\ref{tab:photometry} and shown in \autoref{fig:sn-lightcurve}. We estimated the time of $V$-band maximum light by fitting a low-order polynomial to the overall light curve, deriving a maximum at MJD 58853.64 (2020 January 5.64). Detailed modelling of the light curves and inferred explosion parameters will be presented by Auchettl et al. (in prep.).

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{2019yvr-lightcurve.pdf}
\caption{Swope (circle) and LCO (square) $uBV\!gri$ light curves of SN~2019yvr as described in \autoref{sec:imaging}.
We denote the epoch of each observation in rest-frame days (correcting for the redshift of NGC~4666 at $z=0.005080$) from $B$-band maximum light detected at MJD 58854.28 (Table~\ref{tab:photometry}).} \label{fig:sn-lightcurve} \end{figure} \subsection{Spectroscopy and Classification of SN~2019\lowercase{yvr}}\label{sec:spectroscopy} We triggered spectroscopic observations of SN~2019yvr on the Faulkes-North 2-m telescope at Haleakal\={a}, Hawaii with the FLOYDS spectrograph (Program NOAO2020A-008, PI Kilpatrick). The spectrum was observed on 2020 January 2 roughly 5~days after the initial discovery report from ATLAS and 3~days before SN~2019yvr reached $V$-band maximum. The observation was a 1500-s exposure at an average airmass of 1.35 and under near-photometric observing conditions. We reduced the spectrum following standard procedures in {\tt IRAF}, including corrections for telluric absorption and correcting the wavelength solution for atmospheric diffraction using the sky lines. The final reduced spectrum is shown in \autoref{fig:spectra}. We also observed SN~2019yvr on the Keck-I 10-m telescope on Maunakea, Hawaii with the Low-Resolution Imaging Spectrograph (LRIS; Program 2019B-U169, PI Foley) on 27 Jan 2020, approximately 22~days after $V$-band maximum as seen from our light curve. The observation was a 180-s exposure obtained during morning twilight at an average airmass of 1.16 and under near-photometric conditions. We reduced these data using a custom {\tt pyraf}-based LRIS pipeline \citep{Siebert20}\footnote{\url{https://github.com/msiebert1/UCSC_spectral_pipeline}}, which accounts for bias-subtraction, flat-fielding, amplifier crosstalk, background and sky subtraction, telluric corrections using a standard observed on the same night and at a similar airmass, and order combination. The final combined spectrum is shown in \autoref{fig:spectra}. Our spectra reveal characteristic SN~Ib features with strong, broad absorption lines of He\,\textsc{i}\ $\lambda\lambda$4471, 5876, 6678, and 7065 (\autoref{fig:spectra}). These features and the lack of any apparent Balmer line emission indicate that SN~2019yvr is a typical SN~Ib, and our comparisons to other SNe~Ib such as iPTF13bvn \citep{Srivastav14} suggest it is well matched to this spectroscopic class as a whole. From the spectrum obtained at 3 days before maximum light, we infer a velocity from Ca absorption of 22,000~km~s$^{-1}$. We also note prominent lines of Na\,\textsc{i}~D absorption at the redshift of NGC~4666 ($z=0.005080$). The complete spectroscopic evolution of SN~2019yvr will be addressed by Auchettl et al. (in prep.). \begin{figure} \includegraphics[width=0.49\textwidth]{spectra.pdf} \caption{Our spectra of SN~2019yvr (black) with comparison to other SNe~Ib (red). All dates are indicated with a ``d'' with respect to $V$-band maximum light. The comparison spectra have been dereddened for Milky Way extinction based on values in \citet{Schlafly11} and dereddened for host extinction based on values in \citet[][]{Deng00,Benetti11} (for SN~1999dn), \citet[][]{Stritzinger09} (for SN~2007Y), \citet[][]{Valenti11} (for SN~2009jf), and \citet[][]{Srivastav14} (for iPTF13bvn). We removed the recessional velocity for $z=0.005080$ from the SN~2019yvr spectra and dereddened them following the methods given in \autoref{sec:extfit}. The best-fitting extinction and $R_{V}$ parameter are given next to each SN~2019yvr spectrum. 
We highlight lines of He\,\textsc{i}\ at $\lambda\lambda$4471, 5876, 6678, and 7065, which are present in both epochs, demonstrating that SN~2019yvr is a SN~Ib.}\label{fig:spectra}
\end{figure}

\section{Extinction Toward SN~2019\lowercase{yvr} and Its Progenitor System}\label{sec:extinction} Stripped-envelope SNe~Ib are known to occur in regions of high extinction in their host galaxies \citep{Drout11,Galbany16a,Galbany16b,Stritzinger18}. However, if there is significant extinction due to dust in the circumstellar environment of SN~2019yvr, it may have varied between the time the {\it HST}\ images were obtained and the time our imaging and spectra of SN~2019yvr were obtained. Moreover, we have no a priori constraint on the dust composition or gas-to-dust ratio in the local interstellar environment of SN~2019yvr, which is a major factor in understanding the magnitude of extinction at all optical wavelengths. Based on the relatively low Milky Way reddening of $E(B-V)=0.02$~mag and the fact that SN~2019yvr exhibited red colours (\autoref{fig:sn-lightcurve}) and strong Na\,\textsc{i}~D absorption, we infer that SN~2019yvr and its progenitor system are heavily extinguished by its host's interstellar and/or its own circumstellar environment. Moreover, if we do not correct for any additional extinction, the $V$-band light curve would peak at only $-15.1$~mag. This is extremely faint compared with other SNe~Ib/c and suggests $A_{V}>1$~mag \citep[][although this inference may be affected by Malmquist bias if known samples of SNe~Ib are not representative of the overall luminosity function]{Drout11,Stritzinger18}. Throughout the remainder of this section, we consider contextual information about the host galaxy NGC~4666, observations of SN~2019yvr, and the extinction properties of circumstellar dust around analogous stripped-envelope SN~Ib progenitor systems in order to infer the total extinction to the SN~2019yvr progenitor system. Our goal is to derive a $V$-band extinction $A_{V}$ and reddening law parameter $R_{V}$ that can be used to estimate the total extinction in the {\it HST}\ bandpasses as observed in pre-explosion data.

\subsection{Extinction Inferred from Na\,\textsc{i}~D}\label{sec:naid} One quantity that is correlated with line-of-sight reddening in both SNe \citep{Stritzinger18} and quasars \citep{Poznanski12} is the equivalent width of Na\,\textsc{i}~D. We detect Na\,\textsc{i}~D in our SN~2019yvr LRIS spectrum with an equivalent width of 4.2$\pm$0.2~\AA, which is significantly larger than the maximum Na\,\textsc{i}~D equivalent width (2.384~\AA) from the quasars used to derive the reddening relation in \citet{Poznanski12}, implying that we might overestimate the total extinction by applying their relation. Indeed, our measured Na\,\textsc{i}~D equivalent width combined with the \citet{Poznanski12} relation would indicate SN~2019yvr has a line-of-sight $E(B-V)>1000$~mag, which is impossible for any extragalactic optical transient. This finding could be due in part to saturation in the Na\,\textsc{i}~D line for the original sample of quasars in \citet{Poznanski12}, which prevents an accurate measurement of the true Na\,\textsc{i}~D column as a function of the total optical extinction along the line of sight. We infer that the \citet{Poznanski12} relationship is not accurate in this high extinction and large Na\,\textsc{i}~D equivalent width regime where we find SN~2019yvr \citep[consistent with findings in][]{Stritzinger18}.
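For reference, the scale of this breakdown follows from the form of the relation itself. Taking the approximate combined D$_{1}$+D$_{2}$ coefficients of the \citet{Poznanski12} relation (a slope of $\simeq$1.17~dex~\AA$^{-1}$ and an intercept of $\simeq$$-1.85$~dex; the exact values should be taken from that work), our measured equivalent width extrapolates to
\begin{equation}
\log_{10} E(B-V) \approx 1.17\,(4.2) - 1.85 \approx 3.1, \qquad E(B-V) \sim 10^{3}~{\rm mag},
\end{equation}
an unphysical value that simply reflects an exponential extrapolation far beyond the calibrated range of the relation.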
If we instead use the relation between $A_{V}$ and Na\,\textsc{i}~D equivalent width in \citet{Stritzinger18}, which was derived specifically from SN~Ib/c colour curves, we find that SN~2019yvr has a line-of-sight extinction $A_{V} = 3.4 \pm 0.6$~mag. However, we emphasize that the validity of this relationship at such large equivalent widths has not been tested, and, more broadly, there is significant scatter in the correlation between Na\,\textsc{i}~D equivalent width and optical extinction \citep{Phillips13}. Therefore, we turn to other extinction indicators to better estimate the line-of-sight extinction.

\subsection{Extinction Inferred from SN~2019\lowercase{yvr} Spectra}\label{sec:ext-spectra} Spectra and light curves of SNe~Ib similar to SN~2019yvr can be used to constrain its line-of-sight extinction. As host extinction is a dominant systematic uncertainty in estimating intrinsic stripped-envelope SN colours, any differences in broadband colours between SNe at similar epochs can be attributed to extinction. Here we compare our SN~2019yvr spectra to those of other SNe~Ib, applying a \citet{cardelli+89} extinction law with variable $E(B-V)$ and $R_{V}$ to deredden our SN~2019yvr spectra until they closely match the templates. Our template spectra are chosen from those of well-observed SNe~Ib with low host reddening ($E(B-V)<0.2$~mag) measured and reported in the literature. These include SN~1999dn \citep[$E(B-V)=0.05$~mag;][]{Deng00,Benetti11}, SN~2007Y \citep[$E(B-V)=0.11$~mag;][]{Stritzinger09}, SN~2009jf \citep[$E(B-V)\approx0.0$~mag;][]{Valenti11}, and iPTF13bvn \citep[$E(B-V)=0.17$~mag;][]{Srivastav14}. We use spectra obtained from the Open Supernova Catalog\footnote{\url{sne.space}} \citep{OSC}. All template spectra were chosen to correspond to roughly the same epoch relative to $V$-band maximum as one of our two SN~2019yvr spectra. For the purposes of our fitting procedure, we assume that these extinction values are exact with no additional uncertainty. Furthermore, we assume all template spectra experienced Milky Way-like host reddening with $R_{V}=3.1$, as $R_{V}$ is either unconstrained or poorly constrained for all of these objects. We acknowledge that this possibly biases our $A_{V}$ and $R_{V}$ estimates for SN~2019yvr based on the spectroscopic fitting method, although $E(B-V)$ is small for our templates and so this may not be a major systematic uncertainty. For both the SN~2019yvr and template spectra, we estimate the uncertainty in the specific flux ($\sigma_{\lambda}$) by taking
\begin{equation}
\sigma_{\lambda} = \tilde{f}_{\lambda} \sqrt{\left | 1-\frac{f_{\lambda}}{\tilde{f}_{\lambda}} \right |},
\end{equation}
\noindent where $\tilde{f}_{\lambda}$ is the specific flux $f_{\lambda}$ passed through a smoothing function with a 50~\AA\ window and rebinned to 1~\AA\ resolution over the maximum overlap range between the SN~2019yvr and template spectrum. Thus, the SN~2019yvr and template spectrum flux uncertainties are propagated through our entire analysis. We then fit our LRIS and FLOYDS spectra of SN~2019yvr to the templates by calculating a dereddened spectral template $\tilde{f}_{\lambda,d}$ assuming the appropriate Milky Way extinction and the interstellar host extinction given above. We also deredden SN~2019yvr for Milky Way extinction following the same procedure, yielding $f_{\lambda,\mathrm{19yvr}}$. For both SN~2019yvr and the template, we rescale the uncertainty $\sigma_{\lambda}$ by the same factor as the dereddened spectrum.
Finally, we derive the best-fitting host extinction $A_{V,\mathrm{19yvr}}$ and reddening law parameter $R_{V,\mathrm{19yvr}}$ by calculating $A_{\lambda}$ from \citet{cardelli+89} and minimizing the reduced $\chi^{2}$ value
\begin{equation}
\chi^{2} = \sum^{N}_{\lambda} \frac{(\tilde{f}_{\lambda,d} - C f_{\lambda,\mathrm{19yvr}} 10^{0.4 A_{\lambda}})^{2}}{N (\sigma_{\lambda}^{2} + \sigma_{\lambda,\mathrm{19yvr}}^{2})}
\end{equation}
\noindent where $N$ is the total number of 1~\AA\ wavelength bins and $C$ is a scaling constant between the two spectra. Thus, our spectral fitting method is primarily sensitive to the overall shape of the two spectra rather than the ratio between their fluxes. We show our best-fitting dereddened SN~2019yvr spectra in \autoref{fig:spectra} and we list our best-fitting $A_{V}$ and $R_{V}$ parameters for SN~2019yvr in \autoref{tab:spectral-fit}.

\begin{table}
\centering
\scriptsize
\begin{tabular}{c|c|c|c|c|c}
\hline
Epoch & Template (Epoch) & $A_{V,Temp.}$ & $A_{V,\mathrm{19yvr}}$ & $R_{V,\mathrm{19yvr}}$ & $\chi^{2}$ \\
(days) & (days) & (mag) & (mag) & & \\
\hline\hline
$-$3 & iPTF13bvn ($-$2.1) & 0.53 & 3.33$\pm$0.24 & 4.91$\pm$0.37 & 1.00 \\
$-$3 & SN~2007Y ($-$0.9) & 0.35 & 3.14$\pm$0.29 & 3.22$\pm$0.41 & 4.40 \\
$-$3 & SN~2009jf ($-$3.3) & 0.00 & 2.46$\pm$0.32 & 3.01$\pm$0.40 & 4.08 \\
$-$3 & SN~1999dn ($-$3.0) & 0.15 & 3.62$\pm$0.24 & 4.90$\pm$0.49 & 5.31 \\
$+$22 & SN~2009jf ($+$25.6) & 0.00 & 3.94$\pm$0.38 & 4.97$\pm$0.56 & 17.42 \\
\hline
Mean & & & 3.3$\pm$0.4 & 4.1$\pm$0.9 & \\
\hline
\end{tabular}
\caption{Our best-fitting parameters for $V$-band extinction ($A_{V}$) and $R_{V}$ inferred for SN~2019yvr based on matching to template spectra as shown in \autoref{fig:spectra} and described in \autoref{sec:ext-spectra}. As in \autoref{fig:spectra}, the epoch of each SN~2019yvr and template spectrum is given in days with respect to $V$-band maximum light. $\chi^{2}$ is given in units of reduced $\chi^{2}/\chi^{2}_{\mathrm{min}}$. We give parameters for each template spectrum used and the inverse $\chi^{2}$-weighted average for $A_{V}$ and $R_{V}$. However, see caveats in \autoref{sec:ext-spectra}.}
\label{tab:spectral-fit}
\end{table}

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{2019yvr-colour-curves.pdf}
\caption{Colour curves of SN~2019yvr corrected for Milky Way and interstellar host extinction (with $A_{V}=2.3$~mag and $R_{V}=4.4$) as discussed in \autoref{sec:extfit}. Circles correspond to our Swope photometry while squares are for LCO photometry. We overplot templates for extinction-corrected SN~Ib colour curves from \citet{Stritzinger18} as black lines with the 1$\sigma$ uncertainties in each template as a gray shaded region.}
\label{fig:colour-curves}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{2019yvr-colour-chi2.pdf}
\caption{$\chi^{2}$ values as a function of assumed interstellar $V$-band extinction $A_{V}$ and reddening law parameter $R_{V}$, comparing SN~2019yvr colours to the colour templates in \autoref{sec:extfit} and \autoref{fig:colour-curves}. The best-fitting extinction parameters are $A_{V}=2.4$~mag and $R_{V}=4.7$ (yellow star) with an implied best-fitting $E(B-V)=0.51$~mag. The yellow dashed lines show the 1-$\sigma$ best-fitting limits of $E(B-V)$.}
\label{fig:chi2}
\end{figure}

As reported in \autoref{sec:spectroscopy}, our SN~2019yvr spectra correspond to approximately $-3$~days and $+22$~days relative to $V$-band maximum.
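Before turning to the individual template comparisons, we give a minimal sketch of the dereddening-and-matching loop described above. It assumes the {\tt extinction} Python package for the \citet{cardelli+89} law (any equivalent implementation would do), pre-smoothed flux and uncertainty arrays on a common 1~\AA\ grid as defined in \autoref{sec:ext-spectra}, and a simple grid search with an unweighted least-squares flux scaling in place of the optimizer actually used.
\begin{verbatim}
import numpy as np
import extinction   # assumed to provide the Cardelli et al. (1989) law

def fit_extinction(wave, f_sn, sig_sn, f_temp, sig_temp,
                   av_grid=np.arange(0.5, 6.01, 0.05),
                   rv_grid=np.arange(2.0, 6.01, 0.05)):
    """Grid search for (A_V, R_V) minimizing the reduced chi^2 between the
    dereddened SN 2019yvr spectrum and a dereddened template spectrum.
    wave: wavelengths in Angstroms (float64 array); fluxes on the same grid."""
    best = (np.inf, None, None)
    n = wave.size
    for av in av_grid:
        for rv in rv_grid:
            a_lam = extinction.ccm89(wave, av, rv)   # A_lambda in mag
            f_dered = f_sn * 10**(0.4 * a_lam)       # deredden SN 2019yvr
            sig_dered = sig_sn * 10**(0.4 * a_lam)   # rescale its uncertainty
            c = np.sum(f_temp * f_dered) / np.sum(f_dered**2)  # flux scaling
            chi2 = np.sum((f_temp - c * f_dered)**2 /
                          (sig_temp**2 + (c * sig_dered)**2)) / n
            if chi2 < best[0]:
                best = (chi2, av, rv)
    return best   # (reduced chi^2, A_V, R_V)
\end{verbatim}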
For the later ($+22$~day) spectrum, only SN~2009jf had a template spectrum sufficiently close in epoch relative to $V$-band maximum to permit a robust comparison of spectral shape. Thus, while the best-fitting cases all correspond to the early-time spectrum, our second epoch serves to validate the results of this analysis. In this way, we derive a line-of-sight extinction to SN~2019yvr of $A_{V}=2.4$--$3.9$~mag, although most of our best-fitting values are around $A_{V}=3.2$--$3.6$~mag. These values are consistent with the Na\,\textsc{i}~D estimate, but there is significant scatter in $A_{V}$, implying that there are systematic uncertainties in our method.

\subsection{Extinction Inferred from SN~2019\lowercase{yvr} Colour Curves}\label{sec:extfit} We further investigate the interstellar host reddening using colour curve templates from \citet{Stritzinger18} and compare to our Swope and LCO colour curves of SN~2019yvr (\autoref{fig:colour-curves}). Using a \citet{cardelli+89} reddening law, we vary the values of $A_{V}$ and $R_{V}$ in order to derive colour corrections due to interstellar reddening. We apply these corrections to our $B-g$, $B-V$, $g-r$, and $g-i$ colour curves to find the best fit with \citet{Stritzinger18} template colours as shown in \autoref{fig:colour-curves}. The best-fitting values are quantified with respect to the summed $\chi^{2}$ values in $B-g$, $B-V$, $g-r$, and $r-i$ and across all epochs. The final reduced $\chi^{2}$ value for different values of $A_{V}$ and $R_{V}$ is shown in \autoref{fig:chi2}. We find that the best-fitting values from our colour curve matching are $A_{V}=2.4\substack{+0.7\\-1.1}$~mag and $R_{V}=4.7\substack{+1.3\\-3.0}$, implying a best-fitting $E(B-V)=0.51\substack{+0.27\\-0.16}$~mag. The value of $R_{V}$ is limited at the high end by our boundary condition that $R_{V}<6.0$. Based on well-measured values of $R_{V}$ for SN host galaxies \citep[e.g., the wide variety of SN~Ia hosts presented in][]{Amanullah15}, which tend to have $1.3 < R_{V} < 3.6$, we infer that our prior $R_{V}<6.0$ is a conservative upper bound on realistic values of the reddening law parameter.

\subsection{Final Extinction Value Adopted for SN~2019\lowercase{yvr}}\label{sec:ext-final} Overall, the level of extinction inferred from our spectral analysis is consistent with our estimate from the $V$-band light curve as well as the value inferred from the \citet{Stritzinger18} Na\,\textsc{i}~D relation. While the latter relationship diverges significantly at large extinction values \citep[also similar to][]{Poznanski12}, we infer from the agreement between these three estimates that the line-of-sight host extinction inferred for SN~2019yvr is close to the value inferred from our spectral and colour curve analyses. However, the colour curve analysis involves more independent measurements of the SN~2019yvr optical spectrum, and this analysis has been validated for several SNe~Ib by \citet{Stritzinger18}. Thus, although there is agreement between all of our methods, we infer that $A_{V}=2.4\substack{+0.7\\-1.1}$~mag and $R_{V}=4.7\substack{+1.3\\-3.0}$ is most representative of the line-of-sight extinction to SN~2019yvr, and we use these values and our $\chi^{2}$ distribution on extinction in \autoref{fig:chi2} below. However, the critical question for the analysis below is how much extinction the SN~2019yvr progenitor star itself experienced.
While it is reasonable to assume that the interstellar host extinction inferred from SN~2019yvr would be the same as the extinction that its progenitor star experienced (especially on the 2.4--15.0~yr timescale of our pre-explosion data), there could be additional sources of extinction present when the pre-explosion data were obtained to which our SN~2019yvr observations are not sensitive, or vice versa. In particular, circumstellar dust could have been present in the pre-explosion environment but vaporized soon after explosion, or else there could be material ejected by the progenitor star very soon before explosion that was not present when the {\it HST} or {\it Spitzer} data were obtained. Below we consider both scenarios and the effects of circumstellar material and extinction on our overall data set.

\subsection{Possibility of More Circumstellar Extinction 2.6~Years Prior to Core Collapse}\label{sec:more} All massive stars exhibit winds that pollute their environments with gas and dust \citep{Smith14}, and this material can lead to significant circumstellar extinction when the wind is dense, clumpy, and relatively cool. Thus it is possible that the SN~2019yvr progenitor star experienced significant circumstellar extinction from a shell of dust that was vaporized before it could be observed in the SN. Auchettl et al. (in prep.)\ find evidence for a significant mass of hydrogen-rich CSM from H$\alpha$, X-ray, and radio emission. Rebrightening in the light curve of SN~2019yvr beginning $>$150~days after discovery suggests that this material is in a shell likely at $>$1000~AU from the progenitor star, thus ejected years or decades before core collapse. The question we address here is whether there could also be material closer to the progenitor star that contributes to circumstellar extinction but was vaporized soon after core collapse, implying that the line-of-sight extinction estimated above underestimates the extinction at the time of the {\it HST} observations. There is no obvious sign of any such material, for example, in the evolution of the Na\,\textsc{i}~D profile or in excess emission in early-time light curves and spectra. Dust geometries and properties most likely to be associated with circumstellar extinction due to material close \citep[2--10$\times$ the photospheric radius as in][]{kochanek+12} to the progenitor star but unconstrained by our SN~2019yvr observations can be probed with our mid-infrared {\it Spitzer}/IRAC limits. Assuming this material was present on the timescale of the IRAC observations, we model an optically thin shell of dust against our limits of 22.8, 23.1, and 21.5~mag in IRAC bands 1, 2, and 4, respectively (see \autoref{sec:alignment} for a discussion of the IRAC limits). A warm shell of gas and dust ($T>200$~K) would result in bright mid-infrared emission even in cases where it is relatively compact ($<$1~AU). Following analysis in \citet{Kilpatrick18:16cfr}, \citet{Kilpatrick18:17eaw}, and \citet{Jacobson-Galan20}, we modeled optically thin shells of silicate dust with grain sizes $>$0.1$~\mu$m and a range of temperatures from $200$--$1500$~K. At hotter temperatures, the dust would likely sublimate and thus would not exhibit the same extinction properties or attendant mid-infrared emission. Similarly, a shell at large distances from its progenitor star might be so cool that it does not emit significant flux at $<10~\mu$m where our IRAC data probe, even if it has a large mass.
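To make the shell model explicit, the sketch below evaluates the standard optically thin expression $F_{\nu} \approx M_{d}\,\kappa_{\nu}\,B_{\nu}(T_{d})/d^{2}$ and converts it to an AB magnitude for comparison with the IRAC limits quoted above. The opacity value is illustrative only; the actual limits use the \citet{fox+10,fox+11} silicate opacities as a function of wavelength.
\begin{verbatim}
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs: Planck, c, Boltzmann
D_CM = 14.4 * 3.086e24                      # adopted distance (14.4 Mpc) in cm
MSUN = 1.989e33                             # solar mass in g

def planck_nu(nu, temp):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))

def shell_ab_mag(m_dust_msun, t_dust, wave_micron, kappa_cgs):
    """AB magnitude of an optically thin dust shell of mass m_dust_msun (Msun)
    and temperature t_dust (K), with an illustrative opacity kappa_cgs
    (cm^2 g^-1) at the given wavelength."""
    nu = C / (wave_micron * 1e-4)
    f_nu = m_dust_msun * MSUN * kappa_cgs * planck_nu(nu, t_dust) / D_CM**2
    return -2.5 * np.log10(f_nu) - 48.60    # AB zero point

# Example: with kappa ~ 1e3 cm^2/g, a 9e-4 Msun shell at 200 K gives ~21.6 mag
# at 7.9 micron, comparable to the >21.5 mag limit in IRAC band 4.
print(shell_ab_mag(9e-4, 200.0, 7.9, 1.0e3))
\end{verbatim}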
The dust mass limits we derive are strongly temperature dependent, with the coolest temperatures yielding the weakest limits on mass ($M_{d}<9\times10^{-4}~M_{\odot}$ and $L_{d}<4\times10^{4}~L_{\odot}$ at $200$~K) whereas hotter dust leads to relatively strong limits on dust mass ($M_{d}<2\times10^{-8}~M_{\odot}$ and $L_{d}<9\times10^{4}~L_{\odot}$ at $1500$~K). We used the $0.1~\mu$m silicate dust grain opacities from \citet{fox+10,fox+11} to calculate these limits. Assuming the same dust grain composition, we approximate the limits on optical depth in $V$-band as $\tau_{V} = \rho \kappa_{V} r_{\mathrm{dust}}$, where $r_{\mathrm{dust}}$ is the implied blackbody radius of the dust shell, $\rho \approx M_{d}/(4/3 \pi r_{\mathrm{dust}}^{3})$, and $\kappa_{V}$ is the opacity in $V$-band. Under these assumptions, the optical depth must be $\tau_{V}<3$--$187$, with the strongest limits again coming from the hottest dust temperatures. Approximating $A_{V}=0.79 \tau_{V}$ as in \citet{kochanek+12} and \citet{Kilpatrick18:17eaw}, these limits are not constraining on the total circumstellar extinction due to a compact dust shell. Indeed, circumstellar dust absorption could be the dominant source of extinction in the SN~2019yvr progenitor system, but we would have no contextual information from the pre-explosion {\it Spitzer}/IRAC photometry to constrain the magnitude of that extinction. The strongest argument against such a compact, warm shell of gas and dust is the lack of any hydrogen or helium emission associated with circumstellar interaction in early-time spectra or any near-infrared excess in the photometry as shown in \autoref{fig:spectra} and \autoref{fig:sn-lightcurve}. However, these arguments are biased by the epoch of the first observations. SN~2019yvr had a reported discovery on 2019 December 27 by ATLAS with the last previous non-detection occurring on 2019 December 11 at $>18.6$~mag in $o$-band \citep{Smith19}. Subsequent non-detection reports by the Zwicky Transient Facility give a more constraining non-detection in $g$-band at $>$19.5~mag on 2019 December 13\footnote{\url{https://www.wis-tns.org/object/2019yvr}}. However, this still allows for 14~days when SN~2019yvr could have interacted with CSM in its immediate environment. Although the first spectrum of SN~2019yvr did not exhibit evidence for flash ionization or narrow emission lines due to CSM interaction, this would not be surprising if the explosion was already more than several days old \citep[e.g., flash ionization lasted for $<$6~days for the type IIb SN 2013cu;][]{gal-yam+14}. Deeper and higher cadence early time observations and pre-explosion limits, especially from high-resolution, near- and mid-infrared imaging, would have been needed to provide meaningful constraints on the presence and total mass of such material. \subsection{Possibility of Less Circumstellar Extinction 2.6~Years Prior to Core Collapse}\label{sec:less} There is strong evidence for circumstellar interaction around SN~2019yvr in optical spectra, radio, and X-ray detections starting around 150~days after discovery (Auchettl et al. in prep.). The development of narrow Balmer lines at these late times indicates this material is hydrogen rich. A delayed interaction points to a shell of material at a large projected separation from the progenitor ($\approx$1000~AU assuming a SN shock velocity of $\approx$10,000~km~s$^{-1}$). 
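This separation follows from simple kinematics: assuming the fastest ejecta travel freely at the quoted shock velocity until they reach the shell,
\begin{equation}
r \approx v_{\rm shock}\,\Delta t \approx \left(10^{4}~{\rm km~s^{-1}}\right)\left(150~{\rm d}\right) \approx 1.3\times10^{16}~{\rm cm} \approx 900~{\rm AU}.
\end{equation}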
A key consideration above is whether this CSM was present at the time of the {\it HST} observations or if it was ejected in the subsequent 2.6~yr before core collapse. In the latter case, any dust synthesized in the CSM would not be present in the {\it HST} data and thus $A_{V}=2.4$~mag would be an overestimate of the extinction affecting any emission we detect in pre-explosion data. Based on the {\it HST} observations and follow-up data of the SN, we cannot constrain this scenario. However, one prediction from this scenario would be an intermediate-luminosity transient associated with an extreme mass loss episode over this time. We analyzed this location on the sky and found no luminous counterparts in pre-explosion imaging from the ASAS-SN Sky Patrol\footnote{\url{https://asas-sn.osu.edu/}} \citep{Shappee14,Kochanek17} or the Catalina Surveys Data Release 2 \citep{Drake09}, but these limits only extend to $<13$~mag given contamination from the bright center of NGC~4666. Thus we cannot provide a meaningful estimate on any such CSM, but we consider the possibility that $A_{V}<2.4$~mag in our analysis of pre-explosion counterparts in Sections~\ref{sec:candidate} and \ref{sec:discussion} below. In general, we assume that the extinction inferred from SN~2019yvr is the same between the epoch of {\it HST} observations and the time of explosion. Overall, we consider $A_{V}=2.4\substack{+0.7\\-1.1}$~mag and $R_{V}=4.7\substack{+1.3\\-3.0}$ to represent the total line-of-sight extinction to the SN~2019yvr progenitor system at the time the pre-explosion {\it HST} imaging was obtained.

\section{The Progenitor Candidate to SN~2019\lowercase{yvr}}\label{sec:candidate}
\subsection{Aligning Adaptive Optics and Pre-Explosion Imaging}\label{sec:alignment} We obtained positions for 114 point sources in our GSAOI adaptive optics image using {\tt sextractor} \citep{sextractor} and compared these to the positions of the same sources in the F555W {{\it HST}}/WFC3 image as obtained in {\tt dolphot}. From these common sources, we derived a coordinate transformation solution from GSAOI$\rightarrow${\it HST}. We also derived the systematic uncertainty in this transformation by splitting our sample of common astrometric sources in half, re-deriving the coordinate transformation, and then comparing the offset between the remaining {\it HST}\ sources and their positions from our GSAOI image and transformation. Repeating this procedure, we are able to derive an average systematic offset between our GSAOI and {\it HST}\ sources. We assume the root-mean-square of these offsets dominates the error in our astrometric solution, which we find is $\sigma_{\alpha} = 0.16$ WFC3/UVIS pixels ($0.008$$^{\,\prime\prime}$) and $\sigma_{\delta} = 0.18$ WFC3/UVIS pixels ($0.009$$^{\,\prime\prime}$). The position of SN~2019yvr in our GSAOI image corresponds to a single point source in the WFC3/UVIS imaging to a precision of $0.1$ WFC3/UVIS pixels ($\approx$0.6$\sigma$, as the uncertainty on the position of this source in our GSAOI image is negligible). We detect this source in the drizzled F555W image at 34$\sigma$ significance, and there are no other sources at the $>$5$\sigma$ level within a separation of 0.27$^{\,\prime\prime}$\ or 30 times the astrometric uncertainty. Our WFC3/UVIS photometry is listed in \autoref{tab:progenitor}. We also examined the position of SN~2019yvr in pre-explosion {\it Spitzer}/IRAC imaging.
Using the same alignment method as above, we determined the location of SN~2019yvr in the {\it Spitzer}/IRAC stacked images using our GSAOI image of SN~2019yvr. Our alignment uncertainty is typically $\sigma\approx0.2$ IRAC pixels ($0.12$$^{\,\prime\prime}$) from GSAOI$\rightarrow$IRAC in each channel. We found no evidence of a counterpart in any epoch or in the cumulative, stacked pre-explosion frames. Therefore, we place an upper limit on the presence of a pre-explosion counterpart in the stacked IRAC frames by injecting and recovering artificial stars at the location of SN~2019yvr and using the native IRAC point response function for each channel. Our pre-explosion limits for IRAC are reported in \autoref{tab:spitzer}.

\subsection{The Nature of the {\it HST}\ Counterpart to SN~2019\lowercase{yvr}}\label{sec:cluster} Stripped-envelope SNe are known to occur in the brightest, highest extinction, and highest metallicity regions of their host galaxies \citep{Galbany16a,Galbany16b}. The iPTF13bvn progenitor system was identified in a relatively uncrowded region of NGC~5806 \citep{Cao13} and subsequently confirmed as the actual progenitor by its disappearance \citep{eldridge+16,Folatelli16}, but in general SNe~Ib/c are found in crowded regions of their host galaxies \citep[when the surrounding environment can be resolved, as in][]{Eldridge13}. For example, the candidate progenitor system of the stripped-envelope SN~Ic 2017ein was in an environment with several other luminous sources \citep{Kilpatrick18:17ein}. This fact and the counterpart's high optical luminosity suggest that it may in fact be an unresolved star cluster or a chance coincidence. The candidate progenitor system to SN~2019yvr does not appear extended in any of the WFC3/UVIS frames, with {\tt dolphot} average {\tt sharpness}=$-0.02$ and {\tt roundness}=$0.36$, and is classified as a bright star, which is consistent with a circular point source at WFC3/UVIS resolution. The source is not blended with any other nearby sources and has an average {\tt crowding}=$0.09$. Therefore, we conclude that the candidate counterpart is consistent with being a single, isolated point source in all of our images. One possible scenario is that the candidate source is dominated by emission from multiple stars in a single system or open cluster \citep[similar to those in][]{Bastian05,Gieles06,Gieles10}. Although the candidate source is point-like, the PSF size of {\it HST}/WFC3 in F555W is $\approx$0.067$^{\,\prime\prime}$, or 4.7~pc at the distance of NGC~4666. Many open clusters are smaller than this, and might be so compact as to resemble a point source. The F555W (roughly $V$-band) absolute magnitude we infer for this source is $-$7.8~mag (assuming $A_{V}=2.4$~mag), which would be an extremely low luminosity for the population of clusters in \citet{Gieles06}. Thus, while we cannot currently rule out the possibility that the source is a cluster, we find it much more likely that the source is dominated by emission from a single star or star system associated with SN~2019yvr. We estimate a single-trial probability of chance coincidence by considering that there are 3281 sources (of any type) detected at $>$5$\sigma$ in a 10$^{\,\prime\prime}$\ region surrounding the candidate SN~2019yvr counterpart in any of the {\it HST}\ frames.
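The quoted area fraction follows directly from these numbers. Taking a per-axis astrometric uncertainty of $\sigma\approx0.0085$$^{\,\prime\prime}$\ (the mean of $\sigma_{\alpha}$ and $\sigma_{\delta}$ above) and approximating the region as a circle of radius 10$^{\,\prime\prime}$\ (an assumption made only for this estimate),
\begin{equation}
N_{\rm src}\,\pi\,(3\sigma)^{2} \approx 3281\,\pi\,(0.026^{\prime\prime})^{2} \approx 6.7~{\rm arcsec}^{2}, \qquad \frac{6.7~{\rm arcsec}^{2}}{\pi\,(10^{\prime\prime})^{2}} \approx 2\%.
\end{equation}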
Thus, at most $6.7$~arcsec$^{2}$ or 2\% of this region is subtended by area within 3$\sigma$ (astrometric uncertainty) of any source, which is a conservative upper limit on the probability of chance coincidence between the counterpart and SN~2019yvr. We find it is unlikely that SN~2019yvr coincides with this source by chance, although we acknowledge that this scenario cannot be ruled out definitively before we demonstrate that the source has disappeared \citep[as in the case of iPTF13bvn;][]{eldridge+16,Folatelli16}. Given that SN~2019yvr coincides with a single, bright source, that source is point-like and isolated from nearby sources, and the relatively low likelihood of a chance coincidence, we consider this source to be a credible progenitor candidate to SN~2019yvr. Below we assume that this object is dominated by emission from a single stellar system that hosted the SN~2019yvr progenitor star. \begin{table} \centering \centering{WFC3/UVIS Photometry of 2019yvr Progenitor Candidate} \begin{tabular}{l|c|c|c|c} \hline MJD & Filter& Exposure (s) & Magnitude & Uncertainty \\\hline\hline 57864.06972 & F438W & 1140 & 26.2028 & 0.2207 \\ 57864.11750 & F625W & 1134 & 24.8352 & 0.0466 \\ 57864.17879 & F555W & 1200 & 25.4011 & 0.0720 \\ 57864.24510 & F814W & 1152 & 24.1778 & 0.0498 \\ 57890.23024 & F555W & 1143 & 25.1599 & 0.0612 \\ 57890.24828 & F625W & 1140 & 24.8008 & 0.0443 \\ 57917.36677 & F438W & 1140 & 26.3646 & 0.4810 \\ 57917.39478 & F625W & 1134 & 24.9420 & 0.0538 \\ 57917.43295 & F555W & 1200 & 25.5264 & 0.0853 \\ 57917.46140 & F814W & 1152 & 24.3412 & 0.0588 \\ 57944.68250 & F555W & 1143 & 25.2253 & 0.0621 \\ 57944.75022 & F625W & 1140 & 24.8760 & 0.0478 \\ 57972.22031 & F438W & 1140 & 25.9439 & 0.2410 \\ 57972.23837 & F625W & 1134 & 25.0531 & 0.0559 \\ 57972.28531 & F555W & 1200 & 25.4908 & 0.0829 \\ 57972.30377 & F814W & 1152 & 24.2492 & 0.0575 \\\hline \end{tabular} \centering{Average Photometry} \begin{tabular}{l|c|c|c|c}\hline MJD & Filter& Exposure (s) & Magnitude & Uncertainty \\\hline\hline 57917.88560 & F438W & 3420 & 26.1382 & 0.1622 \\ 57904.13112 & F555W & 5886 & 25.3512 & 0.0319 \\ 57917.74983 & F625W & 5682 & 24.8971 & 0.0221 \\ 57918.00342 & F814W & 3456 & 24.2533 & 0.0319 \\\hline \end{tabular} \caption{{\it HST} WFC3/UVIS photometry of the SN~2019yvr progenitor candidate. All magnitudes are on the AB system.} \label{tab:progenitor} \end{table} \begin{table} \centering{{\it Spitzer}/IRAC Pre-explosion Limits} \begin{tabular}{l|c|c|c}\hline Average MJD & Wavelength & Exposure & Limit \\ & ($\mu$m)& (s) & (mag) \\\hline\hline 57917.88560 & 3.6 & 1530.0 & $>$22.8 \\ 57904.13112 & 4.5 & 1864.8 & $>$23.1 \\ 57918.00342 & 7.9 & 278.0 & $>$21.5 \\\hline \end{tabular} \caption{IRAC limits on the presence of a pre-explosion counterpart to SN~2019yvr progenitor candidate. All magnitudes are on the AB system.} \label{tab:spitzer} \end{table} \subsection{Photometric Properties of the Pre-Explosion Counterpart}\label{sec:classification} We show the light curve of the SN~2019yvr progenitor candidate at times relative to explosion in \autoref{fig:lightcurve}. The source is relatively stable with at most $0.47$~mag peak-to-peak variability (corresponding to 3.4$\sigma$) in F555W over a baseline of 110~days. Thus we infer that the progenitor candidate did not exhibit any extreme variability with constant flux at the $<$0.24~mag level in all bands. 
This suggests that if the counterpart is a star, it was not in an eruptive or any other highly variable phase during these observations, as these events are typically accompanied by large differences in luminosity or colour \citep[as in pre-SN outbursts associated with SNe~IIn, e.g.,][]{smith+09,mauerhan+13,Kilpatrick18:16cfr}. Thus we are confident that the average photometry across all four {\it HST} bands in which we detect the progenitor candidate is representative of its overall spectral energy distribution (SED). Taking an inverse-variance weighted average across all epochs in each band, we derive average photometry $m_{\rm F438W}=26.138\pm0.162$~mag, $m_{\rm F555W}=25.351\pm0.032$~mag, $m_{\rm F625W}=24.897\pm0.022$~mag, and $m_{\rm F814W}=24.253\pm0.032$~mag as shown in \autoref{tab:progenitor}. Temporarily ignoring any correction due to host extinction but accounting for Milky Way extinction, the source has $m_{\rm F555W}-m_{\rm F814W}$ (roughly $V-I$) of 1.065$\pm$0.045~mag and $M_{\rm F555W}=-5.5$~mag assuming our preferred distance modulus above (both values are in AB mag). This is roughly consistent with a temperature of $T_{\mathrm{eff}}=3360$~K, which is broadly comparable to, although slightly hotter than, most terminal RSGs at solar metallicity \citep{choi+16}. This suggests that the source either has a cool photosphere or is heavily extinguished, in agreement with expectations from our analysis of SN~2019yvr. However, if the source is extinguished due to CSM, there is no clear evidence from pre-explosion variability whether any of this material was ejected during the window of the {\it HST} or {\it Spitzer} observations. \begin{figure} \includegraphics[width=0.49\textwidth]{lightcurve.pdf} \caption{The pre-explosion light curve of the SN~2019yvr progenitor candidate in all four {\it HST} filters for which we have imaging. The source is not significantly variable, with at most $0.47$~mag peak-to-peak variations as discussed in \autoref{sec:classification}.}\label{fig:lightcurve} \end{figure} \subsection{Comparison to Blackbodies and Single-star Spectral Energy Distributions}\label{sec:single} Assuming that the counterpart is dominated by the SED of a single star, we estimate the luminosity and temperature of that star by fitting various SED models to the {\it HST}\ photometry. Broadly, we use blackbody and stellar SEDs obtained from \citet{pickles+10}. We use a full forward-modeling and Markov Chain Monte Carlo (MCMC) approach to simulate the in-band apparent magnitudes assuming the distance above and drawing extinction ($A_{V}$) and reddening ($R_{V}$) parameters following the $\chi^{2}_{\mathrm{ext}}$ probability distribution from our light curve analysis in \autoref{sec:extinction} and as shown in \autoref{fig:chi2}. For a blackbody with a given effective temperature $T_{\mathrm{eff}}$ and luminosity $L$ as well as extinction values drawn from the $\chi^{2}$ distribution discussed above, we simulate an intrinsic, absolute magnitude $M_{i}$ in each band $i$ and convert to an apparent magnitude $m_{i}$ with in-band Milky Way extinction $A_{{\rm MW},i}$, the implied host extinction $A_{H,i}$, and our preferred distance modulus $\mu=30.8$~mag. Thus the simulated apparent magnitude depends on both the intrinsic model parameters (i.e., $T_{\mathrm{eff}}$ and $L$) and the extinction parameters ($A_{V}$, $R_{V}$) drawn from $\chi^{2}_{\mathrm{ext}}$.
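For concreteness, the following is a minimal sketch of this forward-modeling step for the blackbody case, written in Python with representative parameter values. The filter pivot wavelengths, the power-law placeholder for the host extinction curve, and the use of monochromatic (pivot-wavelength) magnitudes in place of full synthetic photometry are simplifying assumptions made for illustration; the actual fits use the full bandpasses and the reddening-law family constrained by the light-curve analysis.
\begin{verbatim}
import numpy as np

# Physical constants (cgs) and adopted distance modulus
H, C, KB, SIGMA_SB = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5
LSUN, PC = 3.828e33, 3.086e18
MU = 30.8                       # adopted distance modulus (mag)

# Approximate WFC3/UVIS pivot wavelengths in Angstrom (assumed values)
PIVOT = {"F438W": 4326.0, "F555W": 5308.0, "F625W": 6242.0, "F814W": 8048.0}

def model_mags(teff, log_l, a_v, alpha=1.1):
    """Apparent AB magnitudes of an extinguished blackbody (pivot-wavelength sketch)."""
    radius = np.sqrt(10.0**log_l * LSUN / (4.0 * np.pi * SIGMA_SB * teff**4))
    dist = 10.0**(MU / 5.0 + 1.0) * PC
    mags = {}
    for band, lam_aa in PIVOT.items():
        nu = C / (lam_aa * 1.0e-8)
        bnu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * teff))
        fnu = np.pi * bnu * (radius / dist)**2          # observed flux density
        a_lam = a_v * (5500.0 / lam_aa)**alpha          # crude extinction placeholder
        mags[band] = -2.5 * np.log10(fnu) - 48.6 + a_lam
    return mags

def chi2_eff(model, obs, err, chi2_ext=0.0):
    """Data term plus the extinction prior term from the colour-curve fits."""
    return sum((model[b] - obs[b])**2 / err[b]**2 for b in obs) + chi2_ext

obs = {"F438W": 26.138, "F555W": 25.351, "F625W": 24.897, "F814W": 24.253}
err = {"F438W": 0.162, "F555W": 0.032, "F625W": 0.022, "F814W": 0.032}
print(chi2_eff(model_mags(7700.0, 5.3, 2.8), obs, err))
\end{verbatim}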
We then estimate the log likelihood for our MCMC ($\chi^{2}_{\mathrm{eff}}$) using the observed magnitude $m_{o,i}$ and uncertainty $\sigma_{o,i}$ from \autoref{tab:progenitor} as \begin{equation} \chi^{2}_{\mathrm{eff}}=\sum_{i} \left(\frac{m_{i} - m_{o,i}}{\sigma_{i}}\right)^{2} + \chi^{2}_{\mathrm{ext}}(A_{V},R_{V}). \end{equation} In this way we incorporate the differences between the observed and forward-modeled magnitudes as well as between the values of $A_{V}$ and $R_{V}$ for each trial and the best-fitting values from our colour curve template fitting. Assuming a blackbody SED, we estimate the best-fitting parameters $T_{\mathrm{eff}} = 7700\substack{+900\\-1000}$~K and $\ensuremath{\log(L/L_{\odot})} = 5.3\substack{+0.2\\-0.3}$. Although we include the distance modulus uncertainty in our luminosity (and radius) uncertainty estimates, we did not include this value in our fitting method as it does not affect the overall shape of the SED. The implied photospheric radius for the best-fitting blackbodies are $R=250\pm30~R_{\odot}$. As we incorporate $A_{V}$ and $R_{V}$ from the light curve analysis into our models, we also constrain these parameters with best-fitting values $A_{V}=2.8\substack{+0.3\\-0.4}$~mag and $R_{V}=5.2\substack{+0.8\\-0.7}$ assuming the intrinsic blackbody spectrum. We also compared our photometry to single-star SEDs from \citet{pickles+10}. We use stars of all spectral classes, fitting only to a scaled version of the stellar SED as a function of effective temperature. Using the same MCMC method, our walkers drew a temperature randomly and then chose the stellar SED with the closest effective temperature. The best-fitting SEDs are consistent with stars in the F4 to F0 range (intrinsic $T_{\mathrm{eff}} = 6800\substack{+400\\-200}$~K; see \autoref{fig:sed} and \autoref{fig:corner}) with an implied luminosity $\ensuremath{\log(L/L_{\odot})} = 5.3 \pm 0.2$ and a photospheric radius $R = 320\substack{+30\\-50}$~$R_{\odot}$. Thus the best-fitting values are broadly consistent between blackbody and \citet{pickles+10} model SEDs. There is some systematic uncertainty in the exact temperature of the latter models given the sampling of the \citet{pickles+10} spectra, which are increasingly sparse for hotter stars. However, this effect is small at temperatures 5000--10,000~K where there are 40 spectra of varying spectral classes. Our treatment of $A_{V}$ and $R_{V}$ is identical to the forward modeling approach for the blackbodies above, and we derive best-fitting values of $A_{V}=3.1\substack{+0.3\\-0.2}$~mag and $R_{V}=5.9\substack{+0.1\\-0.4}$. In both the blackbody and stellar SED models, our luminosity estimates are on the high-luminosity end for observed core-collapse SN progenitor stars \citep[e.g., in][]{Smartt15} but consistent with most of the SN~IIb progenitor stars (see Figure~\ref{fig:hr}). \begin{figure} \includegraphics[width=0.49\textwidth]{sed.pdf} \caption{The best-fitting SEDs to the average pre-explosion {\it HST} photometry of the SN~2019yvr progenitor candidate. Our best-fitting blackbody has $T_{\mathrm{eff}}=7700$~K, while the best-fitting single-star \citet{pickles+10} model is a F2I star with $T_{\mathrm{eff}}=6800$~K. In both cases, the implied luminosity is consistent with being \ensuremath{\log(L/L_{\odot})}$\approx$5.3. The SEDs are completely forward-modeled in observed flux, thus they include the apparent best-fitting interstellar host and Milky Way extinction. 
Although we include the distance uncertainty in our luminosity estimate, the error bars in this figure only include measurement uncertainty and uncertainty on extinction. We simply scale the integrated flux density by our preferred distance.}\label{fig:sed} \end{figure} As a further check on the effect of extinction on our derived parameters, we show the relationship between the host extinction $A_{V}$ and reddening law parameter $R_{V}$ and the implied temperature and luminosity for the \citet{pickles+10} models in \autoref{fig:corner}. Luminosity is highly correlated with variations in $A_{V}$, with $A_{V}<2.4$~mag implying a lower luminosity but also a significantly cooler photospheric temperature. In general, such a cool photosphere is associated with a massive hydrogen envelope, which is in tension with the SN~Ib spectroscopic class (see Section~\ref{sec:discussion} for further discussion). In contrast, a larger extinction value would imply a significantly higher luminosity and hotter temperature, although no combination of parameters we considered for the stellar SED fits allowed an effective temperature $>$10,000~K to within the 3-$\sigma$ level. This is in stark contrast with the progenitor of iPTF13bvn with $T_{\mathrm{eff}}\approx45,000$~K \citep{Cao13,Bersten+14,eldridge+16,Folatelli16} and He stars generally, which tend to have effective temperatures $>$20,000~K \citep[as discussed for the progenitors of SNe~Ib in][]{yoon+15}. Overall, our constraints on $A_{V}$ and $R_{V}$ enable a relatively tight fit temperature and luminosity as demonstrated in \autoref{fig:corner}. The minimum $\chi^{2}/\mathrm{degrees of freedom}=1.5$, which suggests that a single, extinguished star is well matched to our data and we cannot effectively constrain scenarios with more free parameters, such as the inclusion of another star to the overall SED. However, while this analysis might accurately reflect the SN~2019yvr progenitor star's evolutionary state at 2.6~yr before explosion, it does not place any specific constraints on the pathway that led to this configuration. We further explore the implications of a SN~Ib progenitor star with these properties and the implications for a single-star origin in \autoref{sec:cool}. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{pickles-corner.pdf} \caption{Corner plot showing the correlation between our fit parameters \ensuremath{\log(L/L_{\odot})} and $\log(T/\mathrm{K})$ of a single star following a \citet{pickles+10} as well as host extinction $A_{V}$ and host reddening $R_{V}$ as described in \autoref{sec:single}. The contours show the 1, 2, and 3$\sigma$ best-fitting values derived from all of our samples. Although the luminosity and $V$-band host extinction are highly correlated, the resulting luminosity and temperature are tightly constrained.} \label{fig:corner} \end{figure} \subsection{Comparison to Binary Star Models}\label{sec:binary} We also compare the SED of the SN~2019yvr progenitor candidate to binary stellar evolution tracks from BPASS \citep{eldridge+17}, comprising 12,663 binary star models at a single metallicity. BPASS provides physical parameters from the binary star system throughout its evolutionary sequence as well as in-band absolute magnitudes for the individual components and binary system as a whole. 
Our analysis involved a direct comparison between the F438W, F555W, F625W, and F814W magnitudes of the SN~2019yvr counterpart and the total binary emission estimated via BPASS synthetic magnitudes in F435W, F555W, SDSS $r$, and F814W, respectively. As BPASS magnitudes are provided in Vega mag, we transformed our AB mag photometry to Vega mag using the relative Vega mag $-$ AB mag zero points for all four WFC3/UVIS filters \citep[0.15, 0.03, $-$0.15, and $-$0.42~mag, as in][]{WFC3UVIS}. Assuming that the SN~2019yvr progenitor is one component of a binary system, we determined which BPASS evolutionary tracks have a terminal state consistent with the SN~2019yvr candidate photometry. Based on the gas-phase metallicity estimate of NGC~4666 in \citet{Pan+19}, which is consistent with Solar metallicity, we restrict our analysis to BPASS models with fractional metallicity $Z=0.014$. However, we emphasize that the \citet{Pan+19} metallicity estimate is derived from spectra obtained toward the center of NGC~4666 rather than at the site of SN~2019yvr, and so the true metallicity of the SN~2019yvr progenitor system may be significantly different. Otherwise, we consider evolutionary tracks for all BPASS initial masses ($M_{\mathrm{init}}=0.1$--$300~M_{\odot}$), mass ratios ($q=0.1$--$1.0$), and binary periods ($\log(P/\mathrm{1~day})=0$--$4$). We used the same MCMC method as above with walkers drawing from initial masses, mass ratios, and periods and comparing the terminal absolute magnitude and colours of the closest model in parameter space to the SN~2019yvr counterpart photometry. We also included $A_{V}$ and $R_{V}$ as free parameters, but with the walkers drawing from the same $\chi^{2}$ distribution for these parameters as in the blackbody and stellar SED fits above. For our BPASS fits, the best-fitting models correspond to initial mass $M_{\mathrm{init}}=19\pm1~M_{\odot}$, initial mass ratio $q=0.15\pm0.05$, and initial period $\log(P/\mathrm{1~day})=0.5\pm0.2$. We show a Hertzsprung-Russell diagram with the best-fitting model ($M_{\mathrm{init}}=19~M_{\odot}$, $q=0.1$, and $\log(P/\mathrm{1~day})=0.6$) in \autoref{fig:hr}, along with the luminosity and temperature derived from our \citet{pickles+10} stellar SED fits. We performed our fits by comparing the observed photometry of the SN~2019yvr progenitor candidate to the apparent magnitudes inferred for the combined flux of both stars in the BPASS models\footnote{In our BPASS v2.2.1 fits, we examined columns 53--73, representing the absolute magnitude and colours from the combined flux of both the primary and companion star.}, and so we are sensitive to scenarios where the flux from either the primary or companion star dominates the total emission. In all of the best-fitting models and all four bands we consider, the counterpart is dominated by emission from a $\approx$19$~M_{\odot}$ primary star and the companion contributes very little to the overall flux. We found no other binary scenarios where the total flux was consistent with our photometry at the time one of the stars terminated, including scenarios where the secondary star produced the SN explosion instead (i.e., in a neutron star or black hole binary). In the best-fitting model, the terminal state of the SN progenitor is $\ensuremath{\log(L/L_{\odot})} = 5.3$ and $T_{\mathrm{eff}}=7300$~K with a terminal mass of $M_{\mathrm{final}}=7.3~M_{\odot}$, implying a consistent luminosity but a slightly warmer temperature than we derive from the \citet{pickles+10} models.
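The magnitude-system bookkeeping in this comparison is simple enough to write out explicitly. The short sketch below converts the average progenitor photometry from AB to Vega magnitudes using the zero-point offsets quoted above and then forms absolute magnitudes at the adopted distance modulus; the sign convention for the offsets is our reading of the text, and the host and Milky Way extinction terms must still be applied separately for each MCMC trial.
\begin{verbatim}
# Vega-minus-AB zero-point offsets quoted above (F438W, F555W, F625W, F814W)
VEGA_MINUS_AB = {"F438W": 0.15, "F555W": 0.03, "F625W": -0.15, "F814W": -0.42}

# Average AB photometry of the progenitor candidate and adopted distance modulus
AB_MAGS = {"F438W": 26.138, "F555W": 25.351, "F625W": 24.897, "F814W": 24.253}
MU = 30.8

def ab_to_vega(ab):
    """m_Vega = m_AB + (Vega - AB) under the assumed sign convention."""
    return {band: m + VEGA_MINUS_AB[band] for band, m in ab.items()}

def absolute_vega(ab, extinction=None):
    """Absolute Vega magnitudes for comparison with BPASS synthetic magnitudes."""
    extinction = extinction or {band: 0.0 for band in ab}
    vega = ab_to_vega(ab)
    return {band: vega[band] - MU - extinction[band] for band in vega}

print(absolute_vega(AB_MAGS))   # extinction-free; A_lambda applied per MCMC trial
\end{verbatim}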
Similar to above, there are no models at the $<$3-$\sigma$ level where the exploding star has a terminal temperature $>$11,000~K. In the best-fitting model, the secondary star (with an initial mass of $1.9~M_{\odot}$) is mostly unchanged with only $0.009~M_{\odot}$ of material accreted by the time the primary reaches core collapse, implying that most of the mass transfer in this model was non-conservative. In addition, the BPASS models predict that $0.047~M_{\odot}$ (0.6\% mass fraction) of hydrogen remains in the primary in its terminal state, and no model with $<0.038~M_{\odot}$ is consistent with our {\it HST}\ photometry at the 3-$\sigma$ level. The best-fitting extinction values for this BPASS model were $A_{V}=2.6\pm0.3$~mag and $R_{V}=5.1\substack{+0.9\\-2.1}$. The primary difference between the BPASS evolutionary models and single-star models is the inclusion of Roche-lobe overflow (RLOF). For our specific best-fitting model, RLOF turns on in the post-main sequence phase (i.e., Case B mass transfer; shown with a square in \autoref{fig:hr}), and continues through the end of the primary star's evolution. In particular, mass loss due to RLOF follows the prescription for common-envelope evolution (CEE) as the radius of the primary star is larger than the binary separation throughout post-main sequence evolution \citep[following the prescription in][]{eldridge+17}. The binary separation is only $8.6~R_{\odot}$ starting in the post-main sequence and at the onset of CEE, and so the primary mass-loss rate increases significantly to 1--5$\times10^{-4}~M_{\odot}~\mathrm{yr}^{-1}$. This common envelope mass-loss phase largely determines the final mass and state of the primary star, as it exceeds wind-driven mass loss by a factor of $\approx$1000. In BPASS, the onset of CEE is highly correlated with small binary separations and low mass ratios for stars with $M_{\mathrm{init}}>5~M_{\odot}$ \citep{eldridge+17}. Thus stars that terminate near the progenitor candidate in the Hertzsprung-Russell diagram require a specific mass-loss scenario where CEE can strip most of the hydrogen envelope but leave a small amount (at least $0.038~M_{\odot}$ according to our models), leading to relatively tight constraints on binary mass ratio and period for our BPASS fits. However, these parameters are subject to significant systematic uncertainty in terms of the CEE and mass loss prescriptions assumed. In Section~\ref{sec:cool} we discuss whether these best-fit binary models to the pre-explosion photometry of SN\,2019yvr are consistent with its classification as a type Ib SN. \section{What Progenitor Systems Could Explain SN~2019\lowercase{yvr} and the Pre-explosion Counterpart?}\label{sec:discussion} Given our analysis in \autoref{sec:candidate}, we assume throughout this discussion that the SN~2019yvr pre-explosion counterpart is dominated by emission from the SN progenitor system. From our inferences about this source above as well as our knowledge of SN~2019yvr, we consider what evolutionary pathways could lead to the source observed in the {\it HST} photometry as well as the resulting SN. These pathways need to explain several facts referenced throughout the previous analysis, which we summarize here as: \begin{enumerate}[left=0pt] \item SN~2019yvr was a SN~Ib with no evidence for hydrogen in its early-time spectra, starting from 7~days before peak light \citep{ATel13375} until well after peak light.
Following models in \citet{dessart+12}, this suggests that the progenitor star must have had $<$10$^{-3}~M_{\odot}$ \citep[but possibly as much as $0.03~M_{\odot}$;][]{Hachinger12} of hydrogen remaining in its envelope at the time of explosion. \item SN~2019yvr began interacting with CSM starting around $150$~days after explosion and exhibited strong H$\alpha$, radio, and X-ray emission consistent with a shock formed in hydrogen-rich material (Auchettl et al. in prep.). Using conservative assumptions about the SN shock velocity (10,000~km~s$^{-1}$) and velocity of the CSM (100~km~s$^{-1}$), we infer that this material must have been ejected at least $\approx$44~yr prior to core collapse and implying a shell or clump of material at $>$1000~AU from the progenitor system (see estimate from \autoref{eqn:mle} in \autoref{sec:eruptions}). Although we find it more likely that this material came from the SN progenitor star itself given the frequency of SNe~Ib with late-time interactions \citep[i.e., similar to SN~2014C;][]{Milisavljevic15,Margutti+16}, it is possible that this material came from a binary companion and the actual SN~2019yvr progenitor star was hydrogen free on this timescale. Beyond this interaction, there is no evidence for dense CSM in early-time spectra or light curves, all of which appear similar to SNe~Ib without evidence for these interactions. \item SN~2019yvr exhibited a large line-of-sight extinction ($A_{V}\approx2.4$~mag) while its Milky Way reddening was only $E(B-V)=0.02$~mag. We infer from the strong Na\,\textsc{i}~D line at the redshift of the SN~2019yvr host galaxy NGC~4666 that this extinction implies a significant dust column in the NGC~4666 interstellar medium toward SN~2019yvr and/or a correspondingly large column of circumstellar dust in the environment of the progenitor system itself. There is no clear evidence for any mass ejections or warm circumstellar gas in a compact shell around the progenitor system, either in pre-explosion data or from circumstellar interaction once SN~2019yvr exploded. \item There is a single, point-like progenitor candidate to SN~2019yvr detected in pre-explosion {\it HST}\ imaging. This source exhibits very little photometric variability over a 110~day period from 2.7 to 2.4~yr prior to core collapse. Before applying our host extinction estimate but applying Milky Way extinction, this progenitor candidate has a red colour of $m_{\mathrm{F555W}}-m_{\mathrm{F814W}}=1.065$~mag. Accounting for the inferred extinction and distance modulus above, the source is relatively luminous with $M_{\mathrm{F555W}}=-7.8$~mag (roughly in $V$-band). This value is consistent with massive stars but low for a stellar cluster \citep[as in][]{Gieles06}. \item Accounting for all extinction, the progenitor candidate is consistent with a \ensuremath{\log(L/L_{\odot})}=5.3$\pm$0.2 and $T_{\mathrm{eff}}\approx6800$~K star, which implies a photosphere with a radius of $\approx320~R_{\odot}$ at 2.6~yr prior to the SN~2019yvr explosion. Such a star would be closest in temperature and luminosity to yellow supergiants confirmed as SN~IIb progenitor stars (\autoref{fig:hr} and green circles for SNe~1993J, 2008ax, 2011dh, 2013df, and 2016gkg). Comparing to stellar SEDs, the spectral type and luminosity class are best matched to F4--F0 supergiant stars. 
\item Comparing the pre-explosion photometry to BPASS binary stellar evolution tracks in \citet{eldridge+17}, the best-fitting model is a 19~$M_{\odot}$+1.9~$M_{\odot}$ system that undergoes common envelope evolution and strips most of the material from the progenitor star. Immediately prior to explosion, the primary star retains 0.047~$M_{\odot}$ of hydrogen in its envelope, inconsistent with the masses for SN~Ib systems given above. No other BPASS models were consistent with both our pre-explosion photometry and a system that produced a SN explosion. \end{enumerate} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{hr-single.pdf} \includegraphics[width=0.49\textwidth]{hr-binary.pdf} \caption{({\it Top}): A Hertzpsrung-Russell diagram showing the location of the SN~2019yvr progenitor candidate (blue star) with comparison to SN~IIb progenitor stars (green squares), iPTF13bvn \citep[blue diamond;][]{Cao13,Bersten+14,Folatelli16}, and SN~II progenitor stars \citep[red squares;][]{Smartt15}. We overplot MIST single-star evolutionary tracks from \citet{choi+16,choi+17} for comparison. {\it Bottom}: A 19+1.9~$M_{\odot}$ binary star evolution track from BPASS v2.2 \citep{eldridge+17}, which is consistent with the pre-explosion photometry of the SN~2019yvr progenitor candidate. We highlight the location on the track where the primary star begins Roche-lobe overflow (RLOF; square), reaches its minimum hydrogen envelope mass ($0.047~M_{\odot}$, circle), and terminates as a supernova (star). We also show binary star models from \citet{yoon+17} with outcomes predicted for type Ib (blue) and IIb SNe (green).} \label{fig:hr} \end{figure} \subsection{The Anomalously Cool Progenitor and the Type Ib Classification of SN~2019yvr}\label{sec:cool} We compare the SN~2019yvr progenitor candidate in a Hertzsprung-Russell diagram to other known progenitor systems in \autoref{fig:hr}, including SNe~II \citep[][and references therein]{Smartt15}, SNe~IIb \citep[SNe~1993J, 2008ax, 2011dh, 2013df, and 2016gkg;][]{aldering+94,crockett+08,maund+11,vandyk+14,kilpatrick17:16gkg}, and the progenitor of the SN~Ib iPTF13bvn \citep{Cao13,Bersten+14,Folatelli16}. We also note that there is a single SN~Ic progenitor candidate for SN~2017ein \citep{Kilpatrick18:17ein,vandyk+18}, but the source has a luminosity \ensuremath{\log(L/L_{\odot})}$\approx$6.0 and temperature $T_{\mathrm{eff}}>10^{5}$~K and so is off our plotting range (and may actually be a very young open cluster). For comparison, we also overplot single-star evolutionary tracks from the Mesa Isochrones \& Stellar Tracks code \citep{choi+16,choi+17}. The most notable feature of the SN~2019yvr progenitor candidate in \autoref{fig:hr} and compared with other stripped-envelope SN progenitor stars is its relatively cool effective temperature, which in turn implies an extended photosphere given our constraints on its SED. Assuming a single-star origin and judging solely by the source's inferred luminosity and temperature of \ensuremath{\log(L/L_{\odot})}=5.3 and $T_{\mathrm{eff}}=6800$~K, the counterpart is consistent with a $30~M_{\odot}$ star in the so-called ``Hertzsprung gap'' \citep[see, e.g.,][]{deJager97,Stothers01}. This is in sharp contrast to the predicted progenitors of SNe~Ib, which are thought to have low-mass, compact envelopes, consistent with a star that has almost no hydrogen in its outer layers \citep[][]{yoon+15}. Given these facts, we consider the implications of different progenitor scenarios. 
In particular, we emphasize the apparent contradiction between a SN from a star without a significant hydrogen envelope mass and a pre-explosion counterpart consistent with a massive star with a significantly extended photosphere, which typically requires a non-negligible hydrogen envelope mass. Combined with the SN~2014C-like circumstellar interaction observed at late times, SN~2019yvr may offer significant insight into mass loss in late-stage stellar evolution for stripped-envelope SNe. Here we review ``standard'' single and binary star scenarios and assess whether they can explain all of these properties. \subsubsection{Single-star Models} It is still debated whether single massive stars can evolve to the point where they would explode as yellow supergiants, in part because the Hertzsprung gap is typically a short-lived and transitional phase in the post-main sequence. Standard single-star evolution would suggest that a star with an effective temperature of $T_{\mathrm{eff}}=6800$~K and \ensuremath{\log(L/L_{\odot})}=5.3 retains a massive hydrogen envelope: a $30~M_{\odot}$ initial mass star would retain a $60\%$ surface hydrogen mass fraction on its first passage through the Hertzsprung gap \citep[following the structure of model stars in][]{choi+16}. Thus, in the context of stripped-envelope SNe, any single-star model would likely enter the yellow supergiant phase after evolving through the RSG branch and shedding the remainder of its hydrogen envelope. \citet{Georgy09} demonstrate that such evolution is possible if RSG mass loss rates are increased by approximately an order of magnitude towards the end of their nuclear lives. While the physics of such enhanced late-stage mass loss is unclear (see, e.g., \citealt{Yoon2010}), there do exist a number of luminous yellow supergiants (termed ``yellow hypergiants'') which are hypothesized to be such post-RSG stars. Many yellow hypergiants are extremely variable with high mass-loss rates, such as $\rho$~Cas \citep{Smith14,Lobel15} and V509~Cas \citep{Percy92}, and are located toward the end of the luminous blue variable (LBV) bistability track \citep{smith+04}. This variability involves a rapid (years to decades) evolution between quiescent, hot phases and erupting, cool phases with extreme mass-loss episodes \citep[e.g., $10^{-4}~M_{\odot}~\mathrm{yr}^{-1}$ as in the yellow hypergiants $\rho$~Cas and HR~8752;][]{deJager98,Humphreys02}. Yellow hypergiants undergoing extreme mass loss have been proposed as candidates for type Ib/c progenitors as their LBV-like mass ejections provide an efficient way to rid the star of its hydrogen and helium envelope in the years before core collapse \citep[as in SN~2006jc;][]{Foley07,smith+08b}. The presence of these stars in the gap and the fact that some of them explode as type IIb SNe \citep[the bluest SN~IIb progenitor stars span this gap, e.g.,][]{crockett+08,kilpatrick17:16gkg} strengthen the association between extreme mass loss and stripped-envelope SNe \citep{deJager98,Stothers99}. While it is debated whether such enhanced mass-loss rates are possible for single stars on the RSG branch \citep[e.g.,][]{Beasor20}, the mass-loss episodes and variability of stars near this point on the Hertzsprung-Russell diagram suggest that such stars can rapidly shed their hydrogen envelopes and increase in temperature over timescales of years \citep[as observed with HR~8752, for example;][]{deJager97}.
However, yellow hypergiants that we observe in this region of the Hertzsprung-Russell diagram have massive, hydrogen-rich envelopes and their winds are known to be hydrogen rich \citep{smith+04}. Assuming such a star exploded as a SN~Ib only 2.6~yr after being observed in this evolutionary phase, it would either need to shed its remaining envelope in the final 2.6~yr or remain cool after retaining only a trace hydrogen envelope. The former scenario will be discussed in \autoref{sec:eruptions}, below. Alternatively, it is worth considering whether a single star with virtually no remaining hydrogen in its envelope could inflate to a radius of 320~$R_{\odot}$ while exhibiting a photospheric temperature of 6800~K, as required for SN~2019yvr and its progenitor candidate. The coolest known helium stars are only $\approx$10,000~K and these examples are significantly less luminous than the observed counterpart. For example, LSS~4300 has $T_{\mathrm{eff}}=11,000$~K \citep{Drilling84,Schon84}, KS~Per has 10,000~K \citep{Woolf73,Drilling82}, and $\nu$~Sgr has 11,800~K \citep{Frame95}. The counterpart we observe would only be this hot if our SED modeling is inconsistent with the true temperature at the $>3\sigma$ level or if there is $>$2~mag more extinction than we assumed at 2.6~yr before explosion. Even if we assume either of these scenarios is true, the progenitor candidate would still have \ensuremath{\log(L/L_{\odot})}$\geq$5.3, which is much more luminous than the most luminous helium stars that reach \ensuremath{\log(L/L_{\odot})}=4.6 \citep[see previous references and][]{Dudley92}. Thus an anomalously high-mass and luminous helium star would be needed to match SN~2019yvr, which in general is not allowed by standard single-star evolution models. \subsubsection{Binary Star Models}\label{sec:disc-binary} As shown above, the best-fitting binary systems for our SN~2019yvr pre-explosion photometry are short-period, low mass ratio systems that undergo CEE. While such systems are expected to have complex circumstellar environments, possibly consistent with SN~2019yvr, we emphasize that all of the BPASS models consistent with our photometry terminate with final hydrogen-envelope masses of $>$0.038~$M_{\odot}$. This is inconsistent with a classification as a SN~Ib via the models of \citet{dessart+12} and \citet{Hachinger12}. In addition, there is debate as to whether the evolution of such low mass ratio binaries within BPASS is representative of actual stellar systems. \citet{Neugent2018,Neugent2020} argue that any star that will eventually explode as a core-collapse SN evolves off the main sequence fast enough that companion stars with initial masses $\lesssim$3~$M_{\odot}$ would not have enough time to complete their contraction phase and thus would still be protostars. This is not accounted for in current BPASS models, which initialize all stellar masses on the zero-age main sequence (ZAMS) simultaneously. More broadly, the progenitor systems of SNe~Ib are predicted to be low-mass helium stars that evolve via binary evolution. While they can expand to moderately large radii due to shell burning, they typically remain $<$100~$R_{\odot}$ \citep[][]{kleiser+18,Laplace20}. Their photospheric temperatures are thus significantly hotter than the {\it HST}\ photometry of the SN~2019yvr progenitor candidate implies. The latter allows at most 11,000~K as opposed to the $\approx$45,000~K photosphere of the progenitor star to iPTF13bvn \citep{eldridge+16,Folatelli16}.
\citet{yoon+15} predict that the best observational counterparts to SN~Ib progenitor systems are binary, stripped WR stars similar to HD~45166 \citep{groh+08} and WR7a \citep{Oliveira+03}, both of which have $T_{\mathrm{eff}}>$20,000~K. Nearly all of these stars are known to be in binaries with close ($<$few day) orbits where the primary has been stripped to $3$--$10~M_{\odot}$, likely through Case B mass transfer \citep{yoon+12,yoon+15,yoon+17}. The companion can span a wide range of luminosities and evolutionary states \citep[e.g., WR7a exhibits a 0.204~day binary orbital period, but no secondary is observed, implying a very low-mass star or a compact object;][]{pereira+98,Oliveira+03}. The primary stars are left completely stripped of their hydrogen envelopes. This is in contrast to the best-fitting BPASS model, which undergoes extreme mass loss due to CEE and RLOF during the post-main sequence but stops before the hydrogen envelope is completely depleted \citep{Delgado81,Ivanova13,eldridge+17}. Thus, as with the single star models, there remains significant tension as to whether any standard binary evolution model can reproduce both the progenitor candidate and a system that explodes as a SN~Ib. We are left needing to invoke some additional mechanism that can account for both the cool photospheric temperature 2.6~yr prior to explosion and the negligible hydrogen envelope mass at the time of core collapse. In the following sections we discuss two potential resolutions to this paradox (\autoref{sec:eruptions} and \autoref{sec:rtau}) and highlight some evolutionary scenarios that may be allowed while also explaining the dense CSM shell observed around SN\,2019yvr (\autoref{sec:scenarios}). \subsection{Ejection of the Final Hydrogen Envelope in the Last 2.6 Years Prior to Core Collapse}\label{sec:eruptions} One way to resolve the apparent conflict between the extended progenitor radius and lack of hydrogen in the ejecta of SN\,2019yvr would be if the progenitor star \emph{did} possess an envelope with $\sim$0.01--0.03~M$_\odot$ of hydrogen (similar to the yellow supergiant progenitor stars of SNe~IIb) at the time of the {\it HST}\ observations, but somehow managed to lose this material in the intervening 2.6~yr. Such a scenario is not unprecedented: episodic mass ejections have been invoked to explain stripped-envelope SN progenitor systems that exhibit dense shells or clumps of CSM \citep[see, e.g.,][]{Chugai06,Bietenholz14,Chandra17,Mauerhan18,Pooley19,Sollerman20,Tartaglia20}. This was also observed directly for SN~2006jc, a SN~Ib with an outburst 2~yr before explosion \citep{Foley07,smith+08b,pastorello+08,Maund16}, which later manifested as circumstellar interaction. Indeed, SN\,2019yvr shows relatively narrow H$\alpha$, radio, and X-ray emission $\approx$150~days after core collapse, providing evidence for episodic mass ejections throughout the progenitor's final evolutionary stages. In this scenario, this hydrogen-rich material should be located at some radius around the progenitor star. A key question for SN\,2019yvr is whether the shell of H-rich material encountered by the SN ejecta $\sim$150 days post-explosion could be the remnants of such an ejection that occurred between the {\it HST} observations and explosion. While the location and timing of this ejection will be discussed in detail in Auchettl et al. (in prep.), we perform an order-of-magnitude calculation to investigate this possibility here.
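A minimal numerical version of this estimate, anticipating \autoref{eqn:mle} immediately below, is sketched here. The shock velocity, wind velocity, and interaction onset time are the round fiducial values adopted in the text rather than measured quantities, and the small differences from the numbers quoted below simply reflect those fiducials.
\begin{verbatim}
# Order-of-magnitude timing of the mass-loss event implied by interaction at ~150 d
DAY, YR, KM, AU = 8.64e4, 3.156e7, 1.0e5, 1.496e13   # s, s, cm, cm

t_int = 150.0 * DAY      # onset of circumstellar interaction after explosion
v_s = 1.0e4 * KM         # fiducial shock velocity (cm/s)
v_w = 100.0 * KM         # fiducial wind/ejection velocity (cm/s)

r_csm = v_s * t_int      # radius at which the ejecta meet the CSM
t_mle = r_csm / v_w      # time before core collapse when the material was ejected

print(f"CSM radius     ~ {r_csm / AU:6.0f} AU")     # ~900 AU, of order 10^3 AU
print(f"Ejection epoch ~ {t_mle / YR:6.0f} yr pre-explosion")   # ~41 yr, cf. ~44 yr quoted

# Minimum ejection speed if the material instead left the star after the HST epochs
v_w_min = r_csm / (2.6 * YR)
print(f"Required speed ~ {v_w_min / KM:6.0f} km/s")  # ~1600 km/s, cf. ~1800 km/s quoted
\end{verbatim}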
For a CSM wind velocity ($v_w$) and a SN shock velocity ($v_s$), the detection of CSM interaction starting $\sim$150~days after core collapse implies a mass-loss event that occurred: \begin{equation} t_{\rm mle}\approx 44 \left(\frac{v_s}{10^{4}~\mathrm{km~s^{-1}}}\right) \left(\frac{v_{\mathrm{w}}}{100~\mathrm{km~s^{-1}}}\right)^{-1}~\mathrm{yr}\label{eqn:mle} \end{equation} \noindent before core collapse. We have scaled our results to an average shock velocity of 10,000~km~s$^{-1}$ and wind velocity of 100~km~s$^{-1}$. The former is roughly consistent with the velocity inferred from helium absorption in our spectra of SN~2019yvr near maximum light, which is a lower limit for the shock velocity. While the latter is less constrained, an ejection speed of $\sim$100~km~s$^{-1}$ is approximately the escape speed for a star with a radius of 320~$R_{\odot}$ (as inferred from our progenitor candidate) and a mass of 5--10~$M_{\odot}$. For these assumptions, the mass-loss event would have occurred significantly earlier than the time of the {\it HST} observations. Therefore, if the hydrogen we observe in the CSM is the same material inferred from the progenitor star photosphere, the wind velocity must be at least 1800~km~s$^{-1}$ such that the progenitor star could eject it after the {\it HST}\ data were obtained. A wind at this speed can only be achieved by a compact and massive WR-like star \citep[similar to those presented in][]{Rochowicz95,vanderhucht+01}, inconsistent with our pre-explosion photometry. Such a high velocity would therefore require an ejection mechanism capable of accelerating material to $\gtrsim$10$\times$ the stellar escape speed. This would be in contrast to current theoretical models for both wave-driven mass loss in hydrogen-poor stars (which have terminal velocities of a few hundred km~s$^{-1}$; \citealt{fuller+18}) and common-envelope ejections (which tend to proceed at roughly the escape velocity). Unless a stronger ejection mechanism can be identified, the mass-loss event that led to the material at $\approx$1000~AU, although timed relatively soon before the explosion of SN~2019yvr, may not be directly associated with the relatively cool photosphere we infer from the {\it HST}\ imaging. Multiple eruptive mass-loss events would be needed to explain both the CSM and the final depletion of the progenitor star's hydrogen envelope, assuming the pre-explosion counterpart, the source of the material around SN~2019yvr, and the SN~2019yvr progenitor star are all the same. The second ejection would place material closer to the progenitor star, which is not detected in our light curves or spectra \citep[e.g., via enhanced emission due to CSM interaction or flash ionisation features similar to those in][]{gal-yam+14}. It is possible that this material was missed due to a delay between the explosion of SN~2019yvr and its discovery, and detailed analysis of these data will be carried out in Auchettl et al. (in prep.) to assess whether or not this could be the case. \subsection{A Quasi-photosphere Seen as an Inflated Star}\label{sec:rtau} If the scenario discussed in \autoref{sec:eruptions} is not viable, then the SN\,2019yvr progenitor candidate must have contained virtually no hydrogen 2.6~yr before explosion \citep[$<10^{-2}~M_{\odot}$ as required for a type Ib classification by][]{dessart+12,Hachinger12}. In this case, we would require that the envelope was inflated by some process not accounted for in standard models of stellar evolution.
Here, we consider two scenarios in which a star with only a trace hydrogen envelope could exhibit a photospheric radius of $\approx$320~$R_{\odot}$ or roughly 1.5~AU: (a) formation of a pseudo-photosphere in a dense stellar wind, and (b) inflation due to an additional heat/energy source that produces a radiation pressure supported envelope. \subsubsection{A Stellar Wind} For certain mass-loss rates, a stellar wind can become optically thick and produce an apparent photosphere at a radius well beyond that of the underlying star \citep[see, e.g.,][]{Gallagher92}. The radius at which this quasi-photosphere forms ($R_{\tau}$) depends on the density ($\rho$) and opacity ($\kappa$) in the wind, with optical depth ($\tau$) expressed as \begin{equation} \tau = \int_{R_{\tau}}^{\infty} \rho \kappa \, dr. \label{eq:tau} \end{equation} \noindent From this equation, a quasi-photosphere will form when $\tau \gtrsim 1$. Assuming that the radial density profile in the wind follows $r^{-2}$ for a constant mass-loss rate ($\dot{M}$), these conditions are largely dependent on the properties of the wind and thus the underlying star. From equation (16) in \citet{deKoter96}, where the wind opacity is modeled as temperature-dependent bound-free opacity from the Paschen continuum, the quasi-photosphere constraint above implies \begin{multline} \dot{M} > 1.9\times10^{-4}~M_{\odot}~\mathrm{yr}^{-1} \left(\frac{T_{\rm eff}}{10^{4}~\mathrm{K}} \right)^{3/4} \left(\frac{R_{\tau}}{300~R_{\odot}}\right)^{3/2} \\ \left(\frac{v_{w}}{100~\mathrm{km~s}^{-1}}\right)^{1/2} \left(\frac{c_{s}}{10~\mathrm{km~s}^{-1}}\right)^{1/2} \label{eq:cond1} \end{multline} \noindent with $v_{w}$ the wind velocity and $c_{s}$ the local sound speed. To estimate the mass-loss rate required to explain the photosphere observed for the progenitor candidate of SN\,2019yvr, we assume that the wind could be as fast as $100$~km~s$^{-1}$, as described in \S~\ref{sec:eruptions} (a detailed analysis of the CSM properties will be presented in Auchettl et al. in prep.). We also use our constraint on the effective temperature of the observed photosphere $T_{\mathrm{eff}}=6800$~K and radius $R=320~R_{\odot}$ above, which gives a local sound speed $c_{s}\approx7$~km~s$^{-1}$ and implies $\dot{M}>1.3\times10^{-4}~M_{\odot}~\mathrm{yr}^{-1}$. This mass-loss rate is extreme for a star with $\ensuremath{\log(L/L_{\odot})} = 5.3$ \citep[even yellow hypergiants or low-luminosity LBVs as in][]{smith+04}, which tend to have time-averaged $\dot{M}\approx10^{-5}~M_{\odot}~\mathrm{yr}^{-1}$ \citep{humphreys+94}. However, the total mass of material needed to form a quasi-photosphere is only $\dot{M} v_{w}^{-1} R_{\tau} \approx 10^{-5}~M_{\odot}$. This material can easily be shed from the star's remaining hydrogen/helium envelope over a brief period of strong mass loss, although the timescale for such a shell to form would be only $R_{\tau}/v_{w}=26$~days. However, we do not observe any emission lines or ``flash spectroscopy'' features in our early spectra of SN\,2019yvr as might be expected if dense wind material were present. This scenario would also require a significant change in the mass loss from the SN~2019yvr progenitor timed $\lesssim$3 years prior to explosion, which is closely aligned with predictions for instabilities during oxygen burning \citep{meakin+06,Arnett14}. However, there is no other evidence for a dense wind that could form the observed photosphere.
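The numbers quoted in this subsection follow directly from \autoref{eq:cond1}; a short sketch evaluating it with the adopted values is given below. The sound speed is taken from the text rather than re-derived, and the shell mass and formation-time estimates use the same simple scalings described above.
\begin{verbatim}
# Evaluate the quasi-photosphere condition (Eq. cond1) with the adopted values
RSUN, KM, DAY, YR = 6.957e10, 1.0e5, 8.64e4, 3.156e7   # cgs

teff = 6800.0            # K, photospheric temperature from the SED fits
r_tau = 320.0 * RSUN     # cm, inferred photospheric radius
v_w = 100.0 * KM         # cm/s, assumed wind speed
c_s = 7.0 * KM           # cm/s, local sound speed quoted in the text

mdot_min = (1.9e-4 * (teff / 1.0e4)**0.75 * (r_tau / (300.0 * RSUN))**1.5
            * (v_w / (100.0 * KM))**0.5 * (c_s / (10.0 * KM))**0.5)
print(f"Mdot > {mdot_min:.1e} Msun/yr")                 # ~1.3e-4 Msun/yr

# Mass and formation timescale of the optically thick shell
m_shell = mdot_min * (r_tau / v_w) / YR                 # Msun (Mdot is per year)
t_form = (r_tau / v_w) / DAY                            # days
print(f"Shell mass ~ {m_shell:.0e} Msun over ~{t_form:.0f} d")   # ~1e-5 Msun, ~26 d
\end{verbatim}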
\subsubsection{A Radiation Dominated Envelope} An alternative is that the progenitor star retained a small amount of hydrogen \citep[$<10^{-2}~M_{\odot}$ as required for a type Ib classification by][]{dessart+12,Hachinger12} with an envelope that has relaxed to a steady-state configuration following some injection of additional energy. Assuming that radiation pressure dominates over gas pressure, the radial density profile in the envelope will follow $r^{-3}$ \citep{ulmer1997}. Hydrostatic equilibrium holds up to the radius at which the radiation is no longer trapped, and so we associate the outer radius of this envelope with the photospheric condition $\tau=1$ as \begin{equation} R_{\tau}\approx 300 \left(\frac{M_{\rm env}}{10^{-4}~M_{\odot}}\right)^{1/2} R_{\odot}, \end{equation} \noindent where we assume Thomson opacity and a hydrogen envelope with a Solar mass fraction. We note that an inflated progenitor could exist provided that the mass contained in the envelope is $>10^{-4}~M_{\odot}$, which is both reconcilable with a type Ib classification for SN\,2019yvr and, in general, less restrictive than the condition given by \autoref{eq:cond1}. However, even in this case several mysteries remain as to the nature of the progenitor system. In particular, for a complete description of SN\,2019yvr, the progenitor system would still need to eject a shell of hydrogen-rich material at $\sim$1000~AU. This will be discussed in \autoref{sec:scenarios}. \subsection{Progenitor Scenarios for SN~2019yvr}\label{sec:scenarios} In the sections above, we have argued that reconciling the extended progenitor radius and lack of hydrogen in the ejecta of SN\,2019yvr requires a system that either: \begin{enumerate}[left=0pt] \item ejected the remainder of its hydrogen envelope in the final 2.6~yr before explosion (\autoref{sec:eruptions}), or \item underwent a process not accounted for in standard models of stellar evolution that led to additional inflation, or had sufficient material around the progenitor to form a quasi-photosphere in the CSM (\autoref{sec:rtau}). \end{enumerate} In addition, in either case, a viable progenitor scenario for SN\,2019yvr must also explain a shell of hydrogen-rich material observed at $\sim$1000~AU that was likely ejected 50--100 years prior to explosion. Below we consider two pathways that may be consistent with these constraints. We emphasize that this is not an exhaustive list of possible scenarios. No existing model currently explains all the observed properties of SN\,2019yvr and its progenitor system. \subsubsection{A Luminous Blue Variable Phase}\label{sec:lbv} One evolutionary pathway that naturally explains many of the observables for SN~2019yvr is a series of eruptions from a massive star in an LBV phase. These eruptions are known to precede many SNe, some of which are thought to be core-collapse explosions \citep[as was argued for SN~2009ip;][]{mauerhan+13}, although this interpretation of LBVs as a phase immediately preceding core collapse remains controversial \citep[][]{margutti+14}. Many types of progenitor systems exhibit pre-SN eruptions in the final years to weeks before explosion, most notably the progenitors of SNe~IIn \citep[SNe with narrow Balmer lines in their optical spectra, indicative of interaction between ejecta and a dense mass of CSM;][]{Smith17}. These systems must eject 0.1--10~$M_{\odot}$ over short timescales \citep{smith+07,fox+10,fox+11}, requiring an extreme mass-loss rate and variability on the timescale of the eruptions.
Their progenitor systems have also been observed in the literature, notably for SN~2009ip whose progenitor star had an initial mass $>$60~$M_{\odot}$ \citep{smith+09,foley+11}. Although there is some ambiguity whether these events are actually terminal explosions of massive stars \citep{mauerhan+13,margutti+14}, the eruptive mechanism that fills their environments with dense shells of CSM may be more common among stars at a wide range of initial masses. Indeed, so-called SN impostors \citep[lower luminosity, likely non-terminal explosions of massive stars that resemble SNe~IIn;][]{VanDyk00,maund+06,pastorello+10,ofek+16} come from systems with initial masses possibly as low as 20~$M_{\odot}$ \citep[][]{Kilpatrick18:16cfr}. Such an eruption for SN~2019yvr and SN~2014C-like events \citep[hypothesized by][]{Milisavljevic15} could explain the source of the CSM, although most SN~IIn and SN impostor progenitor stars retain some hydrogen leading to broad Balmer lines in their spectra. For SN~2019yvr, we would require a scenario analogous to SNe~IIn and Ibn with strong circumstellar interaction soon after explosion \citep{Smith17,Hosseinzadeh17} but involving multiple mass-loss episodes timed years or even decades ahead of core collapse instead of months to years. The main caveat in invoking this progenitor scenario is whether episodic mass ejections can fully strip the progenitor star's hydrogen envelope on the timescale required by our {\it HST}\ observations. While LBV eruptions provide a compelling mass-loss scenario as progenitor systems extend nearly to the parameter space of the Hertzsprung-Russell diagram where we find the SN~2019yvr counterpart \citep[see SN~2009ip, 2015bh, Gaia16cfr;][]{smith+09,mauerhan+13,elias-rosa+16,thone+17,Kilpatrick18:16cfr}, these events tend to result in hydrogen-rich explosions with long-lived Balmer emission. The final eruptions of SN~2019yvr would need to strip a sufficiently small amount of hydrogen (potentially hundredths of a Solar mass as in \autoref{sec:binary}) that interaction with this material would not be observed in early time spectra. Detailed analysis of the initial observations of SN~2019yvr could place stronger constraints on whether this scenario occurred. \subsubsection{A Common Envelope Mass Loss Scenario}\label{sec:ce} An alternative evolutionary pathway in the final decades of evolution for the SN~2019yvr, and put forward for SN~2014C in \citet{Margutti+16}, is CEE leading to ejection of the primary star's hydrogen envelope $<$100~yr before explosion. In general, this is not expected as RLOF and CEE are commonly associated with mass transfer much earlier in the primary star's life cycle, for example, immediately after helium core contraction as discussed in \autoref{sec:introduction}. The progenitor would need to be inflated in a later evolutionary stage after helium burning to restart mass transfer \citep[i.e., through Case C mass transfer, see][]{Schneider15}. This is predicted to occur for only $\sim$5$-$6\% of binary system with primary masses between 8$-$20 M$\odot$ \citep{Podsiadlowski1992}. However, if the primary star's envelope can inflate during this phase, the companion will spiral inward and start CEE, resulting in the ejection of a significant fraction of the envelope over $<1$~yr \citep{Ivanova13}. The velocity of that material would be comparable to the escape velocity of the primary, and so would likely be $<100$~km~s$^{-1}$ depending on the envelope structure of the primary star at onset of CEE. 
In this way, the material could survive long enough that the SN ejecta can encounter it within the first year after core collapse, as observed at $\sim$150 days in SN\,2019yvr. The key question for SN~2019yvr is how the post-CEE system evolves in the final years before core collapse such that it resembles the pre-explosion counterpart and also explodes as a SN~Ib. Case-C CEE is an appealing solution to this problem as the remaining envelope is predicted to become unstable, leading to dynamical pulsations and steady mass loss in the remaining years before core collapse \citep[especially for extremely luminous stars;][]{Heger97}. Thus a post-CEE star could still form a quasi-photosphere assuming the mass-loss rate exceeded 10$^{-4}~M_{\odot}~\mathrm{yr}^{-1}$ as in \autoref{sec:rtau}. Alternatively, the inspiral during CE phase itself supplies a source of heat in the stellar envelope. Thus, after the ejection of most of its hydrogen envelope, the resulting post-CEE hydrogen-deficient envelope could remain partly inflated. If the resulting star was dynamically unstable and losing mass significantly faster than the $\approx10^{-5}~M_{\odot}~\text{yr}^{-1}$ predicted for yellow hypergiants \citep{humphreys+94}, it could rapidly shed even the $10^{-2}$--$10^{-1}~M_{\odot}$ that is predicted to remain in the envelope of comparable stars from BPASS. This would simultaneously explain the circumstellar material as the result of CEE, the apparently cool photosphere, and the lack of hydrogen in the primary star's envelope at core collapse. \subsection{Comparisons to Progenitor Star Constraints for SN~2014C and SNe~Ib with Late-time Circumstellar Interaction} SNe~Ib with late-time circumstellar interaction similar to SN~2019yvr are not unprecedented and may in fact represent a large fraction of stripped-envelope SNe overall. As discussed in \autoref{sec:introduction}, SN~2014C began interacting with its circumstellar environment a few weeks after explosion \citep{Margutti+16}, and both SN~2019yvr and SN~2014C have relatively deep progenitor constraints considering both have pre-explosion {\it HST}\ imaging and occurred at $\approx$15~Mpc \citep[][]{Milisavljevic15}. More broadly, there are numerous examples of stripped-envelope SNe with circumstellar interaction soon after explosion \citep{Chugai06,Bietenholz14,Chandra17,Mauerhan18,Pooley19,Sollerman20,Tartaglia20}, and so any mechanism that we invoke above may need to explain the presence of CSM for a significant fraction of all core-collapse SNe \citep{Margutti+16}. A compact star cluster toward the position of SN~2014C in pre-explosion imaging \citep{Milisavljevic15} had a best-fitting age in the range $30$--$300$~Myr, although it could be as young as $10$~Myr. This age implies a main-sequence turnoff mass of $3.5$--$9.5~M_{\odot}$ depending on the metallicity of the cluster. Assuming this cluster hosted the SN~2014C progenitor star and considering the fact that stars in this mass range either do not explode as core-collapse SNe or are thought to lead to SNe~II, \citet{Milisavljevic15} inferred that a more likely explosion scenario for SN~2014C was through binary star channels for a star with a LBV-like phase and $M_{\mathrm{ZAMS}}>$20~$M_{\odot}$ (implying that the cluster is younger than its colours suggest). 
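For orientation, the main-sequence turnoff masses quoted above for the SN~2014C cluster age range can be recovered approximately with a crude lifetime scaling, $t_{\rm MS}\approx10~\mathrm{Gyr}\,(M/M_{\odot})^{-2.5}$. This single power law is an illustrative assumption and not the age--mass relation actually used in that analysis.
\begin{verbatim}
# Crude main-sequence turnoff mass from cluster age, t_MS ~ 10 Gyr (M/Msun)^-2.5
def turnoff_mass(age_myr):
    return (1.0e4 / age_myr) ** (1.0 / 2.5)   # age in Myr, mass in Msun

for age in (10.0, 30.0, 300.0):
    print(f"{age:5.0f} Myr -> ~{turnoff_mass(age):4.1f} Msun")
# ~15.8, ~10.2, and ~4.1 Msun: comparable to the 3.5--9.5 Msun range quoted
# for 30--300 Myr, given the crudeness of this scaling
\end{verbatim}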
Given the photometric and spectroscopic similarity between SNe~2014C and 2019yvr and the presence of a strong density gradient in the circumstellar environments of both, it is possible that both systems could be explained through eruptive, LBV-like mass loss or CEE as discussed above. However, these evolutionary phases would require that the SN~2019yvr progenitor star had extreme photometric variability, and the lack of any such variability in the pre-explosion photometry indicates that these episodes would have occurred outside the short window in which we constrain the progenitor star's evolution. Assuming that the mass-loss episode was driven by an instability in late-stage nuclear burning \citep[e.g.,][]{Arnett14}, the star could also be inflated temporarily. \citet{Smith14b} suggest this inflation would occur on a timescale comparable to the orbital period in binary systems, but we see no signature of this variability on $30$--$100$~day timescales. Deep, high-cadence limits, such as those from the Young Supernova Experiment \citep{Jones19,Jones20} or the upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time \citep[down to r$\approx$24.5~mag on a 3--4 day cadence;][]{Ivezic19}, will be able to detect or rule out these mass-loss episodes for future events, potentially constraining the progenitor scenarios for stripped-envelope SNe in the volume where such variability is detectable. \section{Conclusions}\label{sec:conclusions} We present pre-explosion imaging, photometry, and spectroscopy of the SN~Ib 2019yvr. We find: \begin{enumerate}[left=0pt] \item SN~2019yvr was a SN~Ib with a large line-of-sight extinction spectroscopically similar to iPTF13bvn. SN~2019yvr exhibited signatures of interaction $150$~days after discovery consistent with a shock between SN ejecta and dense, hydrogen-rich CSM (Auchettl et al. in prep.)\ and similar to SN~2014C \citep{Milisavljevic15,Margutti+16}. This interaction suggests that the SN~2019yvr progenitor star underwent an eruptive mass-loss episode at least 44~years before explosion. \item There is a single source in {\it Hubble Space Telescope} imaging obtained $\approx$2.6~year before discovery and consistent with being the progenitor system of SN~Ib~2019yvr. Comparing to blackbodies, single-star SEDs, and binary star models, we find that the SN~2019yvr progenitor star is consistent with a star with $\ensuremath{\log(L/L_{\odot})} = 5.3 \pm 0.2$, $T_{\mathrm{eff}} = 6800\substack{+400\\-200}$~K, and thus close ($<$5~day period) binary-star models with initial masses around $19~M_{\odot}$. However, the cool effective temperature and high luminosity implies a remaining hydrogen envelope mass of at least $0.047~M_{\odot}$ in the binary star model, which is inconsistent with the lack of hydrogen in spectra of SN~2019yvr. \item Comparison to SN~Ib progenitor candidates indicates that SN~2019yvr is much cooler than what is predicted for a hydrogen-stripped star, much more similar to the identified progenitors of type IIb SNe. Overall, the progenitor candidate appears cool and inflated relative to the progenitor of iPTF13bvn and helium stars. \item We infer that an extreme and episodic mass-loss scenario is required to produce both a stripped-envelope SN progenitor system and the luminous, cool progenitor candidate. 
The binary evolution scenarios discussed above do not incorporate physical processes that can lead to extreme or eruptive mass loss soon before explosion, and barring such a mass-loss scenario they do not produce a star whose hydrogen envelope is consistent with the SN~Ib classification. We propose that LBV-like mass ejections or CEE provide natural explanations for the stellar classification, the lack of a massive hydrogen envelope, and the presence of dense CSM. We hypothesize that if this mass-loss mechanism occurs, the star could have formed a quasi-photosphere from CSM in its environment, requiring either mass loss at a rate $>1.3\times10^{-4}~M_{\odot}~\mathrm{yr}^{-1}$ or a radiation-supported hydrogen envelope with a mass $>10^{-4}~M_\odot$ at 2.6~yr before core collapse. \end{enumerate} \bigskip\bigskip\bigskip \noindent {\bf ACKNOWLEDGMENTS} \smallskip \footnotesize We thank J.J. Eldridge and H. Stevance for helpful comments about our BPASS analysis, J. A. Vilchez, A. Campillay, Y. K. Riveros and N. Ulloa for help with the Swope observations, as well as R. Carrasco for support of our Gemini-S/GSAOI programme. C.D.K.\ acknowledges support through NASA grants in support of {\it Hubble Space Telescope} programmes GO-15691 and AR-16136. M.R.D.\ acknowledges support from the NSERC through grant RGPIN-2019-06186, the Canada Research Chairs Program, the Canadian Institute for Advanced Research (CIFAR), and the Dunlap Institute at the University of Toronto. K.A.\ is supported by the Danish National Research Foundation (DNRF132) and a VILLUM FONDEN Investigator grant (project number 16599). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. D.O.J.\ is supported by a Gordon and Betty Moore Foundation postdoctoral fellowship at the University of California, Santa Cruz. Support for this work was provided by NASA through the NASA Hubble Fellowship grant HF2-51462.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. The UCSC team is supported in part by NASA grant NNG17PX03C, NASA grants in support of {\it Hubble Space Telescope} programmes AR-14296 and GO-16239 through STScI, NSF grant AST-1815935, the Gordon \& Betty Moore Foundation, the Heising-Simons Foundation, and by a fellowship from the David and Lucile Packard Foundation to R.J.F. J.H.\ was supported by a VILLUM FONDEN Investigator grant (project number 16599). W.J-G is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No.~DGE-1842165. R.M.\ acknowledges support by the National Science Foundation under Awards No. AST-1909796 and AST-1944985. R.M.\ is a CIFAR Azrieli Global Scholar in the Gravity \& the Extreme Universe Program, 2019 and an Alfred P.\ Sloan Fellow in Physics, 2019. The Margutti team at Northwestern is partially funded by the Heising-Simons Foundation under grant \#2018-0911 (PI: Margutti). E.R.-R.\ is supported by the Heising-Simons Foundation, the Danish National Research Foundation (DNRF132) and NSF.
Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The Gemini-S/GSAOI observations in this paper were obtained under program GS-2020A-Q-221 (PI Kilpatrick). Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.\ M.\ Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This work makes use of observations from the LCO network through program NOAO2019B-004 (PI Kilpatrick). Based on observations made with the NASA/ESA {\it Hubble Space Telescope}, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. This work is based in part on observations made with the {\it Spitzer Space Telescope}, which was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. (AST-1911206 and AST-1852393). \textit{Facilities}: Gemini (GSAOI), {\it HST} (WFC3), Keck (LRIS), LCO (FLOYDS, Sinistro), {\it Spitzer} (IRAC), Swope (Direct/4K$\times$4K) \section*{Data Availability} The photometry in this article are presented in the article. Imaging and spectroscopy data presented in this article are available upon request. The {\it Hubble Space Telescope} and {\it Spitzer Space Telescope} data are publicly available and can be accessed from the Mikulski Archive for Space Telescopes (\url{https://archive.stsci.edu/hst/}) and Spitzer Heritage Archive (\url{http://sha.ipac.caltech.edu/applications/Spitzer/SHA/}), respectively.
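For readers who wish to retrieve the public pre-explosion imaging themselves, the following is a minimal sketch using the \texttt{astroquery} interface to MAST (the host-galaxy name NGC~4666, the search radius, and the product filtering are our assumptions for illustration and are not a prescription from this work):

\begin{verbatim}
from astroquery.mast import Observations

# Public HST observations around the assumed host galaxy of SN 2019yvr.
obs = Observations.query_criteria(objectname="NGC 4666",
                                  radius="0.05 deg",
                                  obs_collection="HST",
                                  dataproduct_type="image")
print(len(obs), "HST observations found")

# Fetch a few calibrated science products as an example.
products = Observations.get_product_list(obs[:3])
science = Observations.filter_products(products, productType="SCIENCE")
Observations.download_products(science[:5])
\end{verbatim}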
\section{Introduction}\label{sec:introduction}
The Antarctic Peninsula is a hotspot of global warming. It has experienced a substantial temperature increase in the 20th century~\cite{absence}, strongly affecting its cryosphere. In 1995, the Prince-Gustav-Channel and Larsen-A ice shelves (floating extensions of the glaciers) disintegrated~\cite{climatictrend}. The loss of ice shelves led to a reduction of the bracing forces on the tributary glaciers, which subsequently reacted by accelerating ice discharge and further retreat of the calving fronts~\cite{dynamicresponse,icedynamics,retreat}. The position of the calving front is an important variable to measure the glacier state. Its fluctuations can provide information about imbalances, and a recession can destabilize the flow regime of an entire glacier system. Therefore, monitoring the calving front positions is of high importance. Typically, multi-spectral and synthetic aperture radar (SAR) remote-sensing imagery is used to map the position of the glacier calving fronts~\cite{remotesensing}. Manual delineation of the calving front position is applied in most studies, which is subjective, laborious, tedious and expensive~\cite{glacieraccuracy}. However, different (semi-)automatic routines using edge detection or image classification techniques were also developed. Baumhoer et al.~\cite{remotesensing} provide a detailed review of calving front detection approaches. Particularly in polar regions, the sea surface next to the calving front is often covered by the ice melange (sea ice and icebergs), making both the manual and automatic calving front detection challenging.
Deep Convolutional Neural Networks (CNNs) have shown impressive performance in various image segmentation tasks. The first study using deep CNNs for calving front mapping on remote sensing imagery was conducted by Mohajerani et~al.~\cite{margins}. They applied the U-Net CNN architecture on Landsat imagery at glaciers in Greenland. Zhang et~al.~\cite{enze19} and Baumhoer et~al.~\cite{glacierextraction} applied a similar processing pipeline to SAR imagery from either the TerraSAR-X or Sentinel-1 mission. Today, there are many deep learning-based approaches and network architectures for image segmentation~\cite{DeepLearnSeg}. One of the most successful ones is the U-Net architecture~\cite{Unet}. In 2018, Schlemper et~al.~\cite{Sononet} used soft attention gates in Sononet, which are able to localize salient regions in the feature maps. They introduced these gates to help the network preserve more local information and provide guided object localization in the forward pass. These gates use low-level features and global information gathered by the high-level feature maps in order to calculate an attention coefficient for each pixel. This approach was further enhanced by Oktay et~al.~\cite{AttUNet}. They added additive soft attention gates in the skip connections of the U-Net architecture. These gates learn the attention maps automatically without adding too much computational power. They are modular and can easily be implemented in the U-Net architecture. We use these gates together with the same U-Net model used in Zhang et~al.'s work~\cite{enze19} to detect the front positions on multi-mission SAR imagery. Predicting thin fronts directly leads to a severe class-imbalance problem. In this work, we extract the attention maps for each layer to see how well the network learns and where its attention is.
Furthermore, we study the effect of distance-weighted loss functions to alleviate this problem.
\section{Methodology}\label{sec:methodology}
~\\\noindent\textbf{Attention U-Net.}\noindent
\begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{Figures/Att-Unet.png} \caption{The architecture of our proposed Attention U-Net. } \label{fig:attunet} \end{figure*}
Attention gates are modular building blocks, which can be placed into the skip connections of a U-Net architecture~\cite{AttUNet}. We use a five-layered U-Net model by Zhang et al.~\cite{enze19} as our base model, depicted in \cref{fig:attunet}. Each layer has two sequential blocks consisting of one two-dimensional convolution with a kernel size of $5 \times 5$ pixels and one batch normalization layer followed by a Leaky ReLU activation function. We use Max-Pooling to encode and shrink the feature maps on each layer and use transposed convolution in the upscaling path. We add attention gates to the same architecture and train and test it with the same data and hyperparameters. Features (activation maps) $\bm{x}^l$ are copied to the attention gates together with the gating signal $\bm{g}^l$ taken from the upscaling path. The numbers of feature maps $F_x$ of $\bm{x}^l$ and $F_g$ of $\bm{g}^l$ are reduced to $F_{\text{int}}$ intermediate layers by means of $1 \times 1$ convolutions. Due to the coarser scale of the gating signal, the spatial dimensions of $\bm{x}^l$ are halved using strided convolution. To combine low-level and high-level feature maps, the intermediate layers are added and passed to a ReLU activation function $\sigma_1$. In order to obtain the attention coefficients, these layers are combined through $\bm{\psi} \in \mathbb{R}^{F_{\text{int}} \times 1}$ (realized by a $1 \times 1$ convolution) and normalized by the Sigmoid activation function $\sigma_2$, i.\,e.\xspace for each pixel $i$ the attention coefficient for layer $l$ follows as:
\begin{equation}\label{eq:att_coef}
\alpha^l_i = \sigma_2 \left( \bm{\psi}^T \sigma_1 \left( \bm{W}^T_x \bm{x}^l_i + \bm{W}^T_g \bm{g}^l_i + \bm{b}_g \right) + b_\psi \right) \enspace,
\end{equation}
where $\bm{W}_x \in \mathbb{R}^{F_x \times F_{\text{int}}}$ and $\bm{W}_g \in \mathbb{R}^{F_g \times F_{\text{int}}}$ are pixel-wise linear combinations of the $F_x$ and $F_g$ feature maps and transformations to $F_{\text{int}}$ intermediate layers. The bias terms are denoted by $\bm{b}_g \in \mathbb{R}^{F_{\text{int}}}$ and $b_\psi \in\mathbb{R}$. In the end, the attention coefficients are upscaled by a bilinear interpolation to the original spatial dimensions of the incoming feature maps. The attention coefficients $\alpha^l$ represent context-relevant regions. We apply the soft attention mechanism by pixel-wise multiplication of the attention maps and every feature map $x^l$, i.\,e.\xspace: $\hat{x}^l_{i,c} = x^l_{i, c} \cdot \alpha^l_i$, where $c$ denotes the feature map index. In this way, noisy and irrelevant regions in the feature maps get filtered out by the attention gate before going through the merging process. During training, those attention maps that localize the regions of interest are learned automatically. In this work, we use Zhang et~al.'s U-Net architecture~\cite{enze19} as the baseline model in our semantic segmentation pipeline and add the attention gates to it, as depicted in \cref{fig:attunet}. The attention gates narrow down the region of interest by automatically learning the glacier front line positions.
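To make the gating operation in Eq.~\eqref{eq:att_coef} concrete, the following is a minimal PyTorch-style sketch of a single additive attention gate (module and variable names, channel counts and the toy usage are ours, chosen for illustration; this is not the authors' implementation):

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate for a U-Net skip connection (Oktay et al. style)."""
    def __init__(self, f_x, f_g, f_int):
        super().__init__()
        # W_x: 1x1 convolution with stride 2 brings the skip features x^l
        # down to the coarser resolution of the gating signal g^l
        self.w_x = nn.Conv2d(f_x, f_int, kernel_size=1, stride=2, bias=False)
        # W_g: 1x1 convolution on the gating signal
        self.w_g = nn.Conv2d(f_g, f_int, kernel_size=1, bias=True)
        # psi: collapses the intermediate maps to one coefficient per pixel
        self.psi = nn.Conv2d(f_int, 1, kernel_size=1, bias=True)

    def forward(self, x, g):
        q = F.relu(self.w_x(x) + self.w_g(g))          # sigma_1
        alpha = torch.sigmoid(self.psi(q))             # sigma_2
        alpha = F.interpolate(alpha, size=x.shape[2:],
                              mode="bilinear", align_corners=False)
        return x * alpha                               # soft attention: x_hat = x * alpha

# toy usage: 64-channel skip features gated by 128-channel decoder features
gate = AttentionGate(f_x=64, f_g=128, f_int=32)
x = torch.randn(1, 64, 128, 128)   # skip-connection features x^l
g = torch.randn(1, 128, 64, 64)    # gating signal g^l from the upscaling path
print(gate(x, g).shape)            # torch.Size([1, 64, 128, 128])
\end{verbatim}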
Furthermore, the easily extractable attention maps illustrate how the quality of learning evolves in the network.
~\\\noindent\textbf{Distance-Weighted Loss Function.}\label{sec:dwloss}
\noindent One big challenge in segmenting the front line directly is that a slight offset in the prediction heavily increases the loss. Adhikari et~al.~\cite{foresttrail} proposed to tolerate mispredictions of the network within a controlled, close distance of the ground truth front line. This is formulated and realized by calculating a weight map $\bm{W}_w$:
\begin{equation}\label{eq:Ww}
\bm{W}_w = \sigma\left(\frac{\text{EDT}(\bm{y})}{w}\right) \enspace,
\end{equation}
which depends on the Euclidean distance transformation $\text{EDT}$ of the ground truth map $\bm{y}$. Errors near the front line are weighted less than errors farther away. The weights are normalized by a Sigmoid function $\sigma$. With the parameter $w$, we can control the distance weighting. A bigger $w$ results in more tolerance towards mispredictions farther away from the actual front line. In this study, we explore the weights $w = \{4,8,16\}$. The sigmoid output is zero-centered and added to the ground truth mask in order to omit weighting pixels within the glacier front, i.\,e.\xspace: $\widetilde{\bm{W}}_w = 2(\bm{W}_w - 0.5) + \bm{y}$. The result of this transformation is shown in \cref{fig:dw_b}. This distance map contains the weighting coefficient for each pixel of the prediction $\hat{\bm{y}}$. We pixel-wise multiply the prediction and the distance-weighted map. The result is a distance-weighted prediction $\hat{\bm{y}}_w$, \emph{cf}.\xspace \cref{fig:dw_d} for an example.
\begin{figure}[t] \centering
\begin{subfigure}[b]{0.20\linewidth} \centering \centerline{\includegraphics[width=.87\linewidth]{Figures/gt_edt.png}} \caption{}\label{fig:dw_a} \end{subfigure} %
\begin{subfigure}[b]{0.20\linewidth} \centering \centerline{\includegraphics[width=1\linewidth]{Figures/weight_map.png}} \caption{}\label{fig:dw_b} \end{subfigure} %
\begin{subfigure}[b]{0.20\linewidth} \centering \centerline{\includegraphics[width=.87\linewidth]{Figures/pred_edt.png}} \caption{}\label{fig:dw_c} \end{subfigure} %
\begin{subfigure}[b]{0.20\linewidth} \centering \centerline{\includegraphics[width=1\linewidth]{Figures/wpred.png}} \caption{}\label{fig:dw_d} \end{subfigure} %
\caption{\subref{fig:dw_a} ground truth, \subref{fig:dw_b} weight map $\widetilde{\bm{W}}_8$, \subref{fig:dw_c} prediction, and \subref{fig:dw_d} weighted prediction $\hat{\bm{y}}_8$.} \label{fig:distance_weighting} \end{figure}
We use this modified prediction for the calculation of the BCE loss and the non-binary Dice metric.
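As an illustration of Eq.~\eqref{eq:Ww} and the weighting described above, the following is a minimal NumPy/SciPy sketch of the distance-weighted binary cross-entropy (the array sizes, toy masks and clipping constant are illustrative assumptions, not the training code used in this work):

\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_bce(y_true, y_pred, w=8.0, eps=1e-7):
    """Distance-weighted BCE: W_w = sigmoid(EDT(y)/w), W~_w = 2*(W_w - 0.5) + y."""
    # Euclidean distance of every background pixel to the nearest front pixel
    edt = distance_transform_edt(1 - y_true)
    w_map = 1.0 / (1.0 + np.exp(-edt / w))         # sigmoid(EDT/w); 0.5 on the front
    w_tilde = 2.0 * (w_map - 0.5) + y_true         # ~0 just off the front, 1 far away and on front pixels
    y_w = np.clip(y_pred * w_tilde, eps, 1 - eps)  # distance-weighted prediction
    return -np.mean(y_true * np.log(y_w) + (1 - y_true) * np.log(1 - y_w))

# toy example: a vertical front line and a prediction offset by two pixels
y = np.zeros((64, 64)); y[:, 32] = 1.0
p = np.zeros((64, 64)); p[:, 34] = 0.9
print(weighted_bce(y, p, w=4.0), weighted_bce(y, p, w=16.0))  # larger w is more tolerant
\end{verbatim}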
\section{Evaluation}\label{sec:experimental_setup}
\subsection{Dataset}\label{sec:dataset}
The SAR dataset is composed of imagery from the satellite missions ERS-1/2, Envisat, RadarSAT-1, ALOS, TerraSAR-X (TSX) and TanDEM-X (TDX). It contains $244$ images showing the glacier systems of Sjögren-Inlet (SI) and Dinsmoore-Bombardier-Edgworth (DBE) located on the Antarctic Peninsula. Together, they cover the time period of 1995--2014. We use multilooking to reduce speckle in the images and take the refined ASTER digital elevation model~\cite{cook2012new} of the Antarctic Peninsula for geocoding and orthorectification.
\begin{comment} More detailed parameters of the SAR sensors and imagery are shown in Tab.~\ref{tab:sar_sensors}.
\begin{table}[htbp] \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{c|c|c|c|c|c} \hline Platform & Sensor & Mode & \makecell{Repetition\\ cycle [d]} & \makecell{Multi \\ looking\\ factors} & \makecell{Ground \\range\\ ERS-1/2 & SAR & IM & 53/1 & $1\times5$ & 20 \\\hline \makecell{RADARSAT 1 \\ (SLC+PRI) }& SAR & ST & 24 & $1\times4$ & \makecell{20 (SLC) \\ 12.5 (PRI)} \\ \hline Envisat & ASAR & IM & 35 & $2\times5$ & 20 \\ \hline ALOS & PAlSAR & FBS & 46 & $2\times5$& 16.7 \\ \hline \makecell{TerraSAR-X \\ tanDEM-X \\ (SLC format)} &SAR & SM & 11 & $3\times 3$ & 6.7 \\ \hline \end{tabular} \caption{Overview of SAR sensors and specifications used in this study.} \label{tab:sar_sensors} \end{table} \end{comment}
We split this dataset into $144$ samples for training, $50$ for validation and $50$ for testing. As for the expert data annotation, we used the front positions from Seehaus et al.~\cite{icedynamics, dynamicresponse}.
\begin{comment} They manually localized and picked the fronts with different qualities. Due to the similarity of the glacier and the ice melange, the accuracy of their manual front detection varies between 70 meters and 450 meters. On average, there is an uncertainty of $\approx$ \SI{200}{m}. \end{comment}
\begin{figure}[t] \centering \mbox{%
\includegraphics[width=\linewidth]{Figures/predictions_small.jpg} \phantomsubcaption\label{fig:p_a} \phantomsubcaption\label{fig:p_b} \phantomsubcaption\label{fig:p_c} \phantomsubcaption\label{fig:p_d} }%
\caption{Predictions of both networks trained on different weights for two samples of the test set. \subref{fig:p_a} and \subref{fig:p_c} are predicted by the U-Net approach~\cite{enze19}, and \subref{fig:p_b} and \subref{fig:p_d} show the results produced by our Attention U-Net; green: ground truth, red: prediction, yellow: intersection of the ground truth and the prediction.} \label{fig:predictions} \end{figure}
\subsection{Evaluation Protocol}
We applied median filtering for noise reduction. Since the images in the dataset have different spatial resolutions, we used an adaptive kernel size, such that it covers a region of \SI{2500}{m^2}. Next, we applied zero-padding on the images to form squares and then resized the images to $512 \times 512$ pixels using bilinear interpolation. In order to alleviate the severe class-imbalance problem, we used morphological dilation to thicken the glacier front lines in the ground truth images such that they became $6$ pixels wide after downscaling. This reduces the class-imbalance severity from approximately 2000:1 to 100:1. Additionally, we augmented our training set using vertical flips and rotated versions ($90^\circ$, $180^\circ$ and $270^\circ$). For training the networks, we used the Adaptive Moment Estimation (ADAM) optimizer with a batch size of $5$ and a cyclic learning rate with minimum and maximum boundaries of $10^{-5}$ and $10^{-3}$, respectively. For the Leaky ReLU activation functions, we set the slope to $0.1$. As for the loss function, we used the normal Binary Cross-Entropy (BCE) in one variant and the distance-weighted BCE (WBCE) in the other variant. We monitored the Dice coefficient and its modified version for early stopping with a patience of $20$ epochs.
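The preprocessing of the ground truth lines and the optimizer settings described above can be sketched as follows (a minimal outline with illustrative values; the structuring element, the stand-in model and the scheduler arguments are our assumptions, not the original training script):

\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_dilation
import torch

def thicken_front(mask, iterations=3):
    """Dilate a 1-px front line; 3 iterations of the default cross-shaped
    element give a line roughly 7 px wide (the paper targets 6 px)."""
    return binary_dilation(mask.astype(bool), iterations=iterations).astype(np.float32)

# stand-in for the (Attention) U-Net, only to show the optimizer setup
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# cyclic learning rate between 1e-5 and 1e-3, as in the evaluation protocol
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-5, max_lr=1e-3, cycle_momentum=False)
\end{verbatim}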
\subsection{Qualitative Results} \label{sec:qualitativ}
\Cref{fig:predictions} illustrates the predictions of two glacier fronts using U-Net and Attention U-Net, trained on a distance-weighted BCE loss with different weights. It shows that our proposed Attention U-Net outperforms U-Net for big distance weights, especially for $w=16$, as shown in \cref{fig:predictions}. Note that increasing $w$ results in thicker predicted lines and consequently higher uncertainty.
\begin{comment} \begin{figure}[tbp] \centering \includegraphics[width=.75\linewidth]{Figures/thickness.png} \caption{Predictions of U-Net trained on BCE loss with weights: (a) no weighting, (b) $w=4$, (c) $w=8$, and (d) $w=16$.} \label{fig:thickness} \end{figure} \end{comment}
Using the normal BCE loss often results in disconnected predicted lines. While WBCE mitigates this problem, it can sometimes be a disadvantage, e.\,g.\xspace when mountains near the front lines cause gaps on the ground truth line due to layover and shadowing. \Cref{fig:p_a} and \cref{fig:p_b} show that networks trained with a weight parameter greater than four fail to localize small gaps and simply connect the predicted fronts. We can analyze the learning process of Attention U-Net by means of the attention maps $\alpha^l$, extracted from layer $l$. \Cref{fig:attention_2007_1_1} shows the evolution of the attention maps (resized to the same scale).
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/attention_maps_2007_1_1_cut.png} \caption{Attention maps $\alpha^l$ for models saved at epoch 5, 10, and 36.} \label{fig:attention_2007_1_1} \end{figure}
We observe that the attention maps converge quickly within the first epochs to the regions of interest. In general, attention maps $\alpha^4$ and $\alpha^3$ look noisier than $\alpha^2$ and $\alpha^1$. Nevertheless, this shows that the attention gates can learn salient regions automatically, even in the bottom layers of a U-Net. Besides the glacier front, there is also a weak highlighting on the zero-padded border and other regions like mountains and ice melange located near the front. This may help the network to recognize specific glacier systems. \Cref{fig:attention_maps_weights} shows the effect of different distance weights on the attention maps. Training on a distance-weighted loss function with a small weight reduces noisy highlights in the attention maps, especially in layer $3$. When increasing the weight too much, the attention gates cannot localize the regions of interest in the low-level feature maps, as shown in \cref{fig:a_d}. Choosing a weight between $w=4$ and $w=8$ leads to the best qualitative results on our test set.
\begin{figure}[t] \centering \mbox{%
\includegraphics[width=.6\linewidth]{Figures/attention_maps_weights.png} \phantomsubcaption\label{fig:a_a} \phantomsubcaption\label{fig:a_b} \phantomsubcaption\label{fig:a_c} \phantomsubcaption\label{fig:a_d} }%
\caption{Attention maps for models trained on \subref{fig:a_a} BCE, \subref{fig:a_b} $\mathrm{WBCE}_4$, \subref{fig:a_c} $\mathrm{WBCE}_8$, and \subref{fig:a_d} $\mathrm{WBCE}_{16}$.} \label{fig:attention_maps_weights} \end{figure}
\subsection{Quantitative Results}\label{sec:quantitativ}
\begin{table}[t] \centering \caption{Quantitative results of the U-Net, with and without the attention gates.%
} \small \begin{tabular}{@{ }c@{ }@{ }c@{ }@{ }c@{ }@{ }c@{ }@{ }c@{ }@{ }c@{ }@{ }c@{ }} \toprule Model & Loss & Dice & $\mathrm{WDice}_4$ & $\mathrm{WDice}_8$& $\mathrm{WDice}_{16}$ & IoU \\ \midrule
U-Net & BCE &\h{73.9} & \h{83.0} & \h{86.0} & 88.3 & \h{60.6} \\
Att. U-Net & BCE &73.8 & 82.9 & 85.9 & \h{88.7} & 60.3 \\ \midrule
U-Net & $\mathrm{WBCE}_4$ &\h{72.9} & \h{83.6} & \h{87.1} & \h{90.1} & \h{58.9} \\
Att.
U-Net & $\mathrm{WBCE}_4$ &72.6 & \h{83.6} & 87.0 & 89.9 & 58.5 \\ \midrule U-Net & $\mathrm{WBCE}_8$ &70.1 & 81.1 & 84.6 & 87.6 & 55.5 \\ Att. U-Net & $\mathrm{WBCE}_8$ &\h{70.8} & \h{82.5} & \h{86.1} & \h{89.2} & \h{56.1} \\ \midrule U-Net & $\mathrm{WBCE}_{16}$ &\h{67.0} & \h{78.9} & \h{82.6} & 85.9 & \h{51.9} \\ Att. U-Net & $\mathrm{WBCE}_{16}$ &65.2 & 77.8 & 82.3 & \h{86.1} & 49.8 \\ \bottomrule \end{tabular} \label{tab:quantitativ} \end{table} \Cref{tab:quantitativ} presents quantitative results for our test set. Overall, the results show that Attention U-Net performs similar to the base U-Net with the advantage of better interpretability. In case of a distance weight of 8 (WBCE$_8$), Attention U-Net performs significantly better than the base model. Distance-weighted Dice scores with bigger weights are generally better since errors near the front are weighted smaller. \begin{table}[t] \centering \caption{Average thickness of the predicted lines in pixels and the resulting certainty (perpendicular to the estimated glacier front position) based on an average spatial resolution of \SI{44.74}{m}.} \small \begin{tabular}{cccccc} \toprule $w$ & 0 & 4 & 8&16 \\ \midrule thickness [px] & 6.5 & 8.4 & 10.6 & 11.1 \\ certainty [m] & 145.40 & 187.90 & 237.12 & 248.30 \\ \bottomrule \end{tabular} \label{tab:uncertainty} \end{table} Assuming a small amount of misprediction, we can estimate the average thickness of the lines by the class ratio of front to the background. \Cref{tab:uncertainty} shows that training on BCE with different weights leads to front predictions with different thicknesses and certainties. \begin{comment} \begin{table}[htbp] \centering \small \begin{tabular}{|c|c|c|c|c|c|} \hline $w$ & 0 & 4 & 8&16 \\ \hline \makecell{thickness\\ $[$px$]$} & 6.5 & 8.4 & 10.6 & 11.1 \\ \hline \makecell{certainty \\ $[$m$]$} & 145.40 & 187.90 & 237.12 & 248.30 \\ \hline \end{tabular} \caption{Average thickness of the predicted lines in pixels and the resulting certainty $($perpendicular to estimated glacier front position$)$ based on an average pixel resolution of \SI{44.74}{m}.} \label{tab:uncertainty} \end{table} Although, the loss scales nearly linearly to the weight and the distance, the thickness of the predicted lines does not. Doubling $w$ does not mean that the lines are getting two times thicker. However, increasing the weight too much leads to a significant performance decrease, which also can be observed in the qualitative results, as explained in Sec.~\ref{sec:qualitativ}. \end{comment} With $w=4$, we obtain a Dice score of \SI{83.6}{\percent} with a tolerance of $187.90$ meters. It is possible to reduce this uncertainty by using a smaller dilation kernel, smaller distance weight, or a patch-wise line segmentation with a higher pixel resolution. \section{Conclusion}\label{sec:conclusion} In summary, this study shows that it is possible to delineate calving glacier fronts by means of a trained U-Net in a single forward pass without additional post-processing. The addition of attention gates revealed insights into the learning behavior of the network. Within a few epochs, the network learns to localize important regions automatically and shows where its attention is. Our results show that the attention mechanism itself improves our predictions for big distance weights. Furthermore, it can be used as an analysis tool to find proper loss functions and hyperparameters. 
Consequently, we conclude that training on a distance-weighted version of Binary Cross-Entropy with a small weight parameter leads to optimal qualitative and quantitative results.
\section{INTRODUCTION}
The paper investigates the well-known critical problem of airplane wing control under flexural-torsional flutter. Stability in this task is closely connected to the ability to maintain wing integrity under oscillations, because during a flight the wing components can experience significant strain. After a certain flight speed, say $V_{flat}$, is attained, oscillations of the wing increase rapidly and catastrophically until the wing breaks, unless they are stopped. This is the so-called flutter phenomenon. For example, even in horizontal flight at a constant speed on a heavy transport aircraft, the deflection of the wing tip can reach several meters~\cite{A1}, so the corresponding deformations affect the magnitude and distribution of the aerodynamic load, potentially leading to structural instability, both static (e.g. wing divergence) and dynamic (e.g. flutter). Therefore, safeguarding the required aerodynamic characteristics and wing stability in the various aircraft flight phases is needed. It should be noted that elements of wing mechanization (slats, ailerons, flaps, etc.) are widespread in modern aircraft construction and serve precisely this purpose, in particular in the most critical take-off and landing modes. However, there is currently no common way to counteract wing flutter effectively, except perhaps to bound the aircraft's maximum flight speed $V_{max}$ below $V_{flat}$. Designers strive to increase the flutter speed $V_{flat}$ as much as possible in order to raise the maximum allowable aircraft speed $V_{max}$.
A novel approach proposed in this paper is to cover an aircraft wing, both above and below, with small movable elements (``feathers'') capable of changing their orientation consistently with the airflow. In the neutral position, when the feathers are not raised but lie on the surface, they do not affect the design wing profile (see Fig.~\ref{fig:1p}, left panel). In this case the wing dynamics do not deteriorate.
\begin{figure}[thpb] \centering \includegraphics[scale=0.4]{1p} \caption{Wing with feathers} \label{fig:1p} \end{figure}
The novelty of the methodology lies in the use of ``feathers'' on the aircraft's surface. In this way, the wing becomes a multiagent system that can be controlled via methods of control theory. At the heart of multiagent systems is a decentralized approach to solving problems, in which dynamically updated information in a distributed network of intelligent agents is processed not in some center but directly at the agents, on the basis of their local observations and locally available information from neighbors. At the same time, both resource and time costs for communication in the network are significantly reduced, as well as the time for processing and decision-making in the center of the entire system (if it exists at all). Such an approach has not been considered before in the flutter problem area. Because wing oscillations during flutter are relatively intense, an essential performance indicator is the time needed to damp the vibrations and keep them at a safe level. Application of the multiagent approach allows the emergence of collective intelligence (intellectual resonance, swarm intelligence), i.e. the appearance of unexpected properties that the system possesses but none of its individual elements has. Each feather tries to minimize the deviation of the wing segment to which it is attached from its initial state.
The action of each feather is, in general, not coordinated with the actions of the other feathers; however, the combination of all the feathers' impacts results in a new property of the wing as a multiagent system of feathers: the ability to damp the vibrations. A crucial theoretical assumption is the ability of a ``feather'' to change its orientation instantly with the airflow. The impact of a possible delay should be the subject of further research. The rest of the paper is structured in the following way: Section 2 provides an overview of related work. Section 3 is devoted to studying the system dynamics for small flexural-torsional vibrations. Section 4 describes the control laws produced using the Speed-Gradient method. Section 5 presents a possible connection to the multiagent control methodology. Section 6 is devoted to the conclusion.
\section{RELATED WORK}
The boundary between stable and self-sustaining motion in flight is called the flutter speed or flutter boundary~\cite{WrightCopper}. Flutter can be divided into several groups according to how the instability appears as conditions such as the dynamic pressure change. Explosive flutter occurs after the flutter speed $V_{flat}$ is exceeded. This process results in highly divergent oscillations, and the wing breaks within a fraction of a second. Moderate (mild) flutter corresponds to a system that is stable but only weakly damped before reaching the flutter speed; it can be identified well below the flutter speed by extrapolating the instabilities. An approach intended to stabilize an unstable flutter system is called Active Flutter Suppression (AFS). A broad overview of AFS research is presented in~\cite{G1}. The study of the ability to suppress flutter instability through actively controlled closed-loop action of control surfaces has a long history~\cite{c5}. Numerous studies in this field were carried out as early as the 1970s and 1980s~\cite{c9}. Various approaches to synthesizing AFS control laws, including adaptive control methods and control with variable parameters, have also been considered~\cite{c364,c366,c369,c370}. AFS is essential for an effective solution of aeroelastic instability problems and can lead to significant aircraft and airframe weight savings (see, e.g.,~\cite{c6}). The AFS approach based on eliminating delays in load growth induced by unsteady aerodynamic stresses is investigated in~\cite{c371}. The concepts of an ``active flexible wing'' or ``active flexible airframe'' are considered in~\cite{c57}; similarly, flight stability and controllability of rigid and flexible aircraft are considered in~\cite{c15, c23, c27}. The influence of aeroelasticity on flight stability and controllability, using corrections to the static aeroelastic stability derivatives, is studied in~\cite{c31}; an active-controls perspective is presented in~\cite{c134}. The idea of using an active control system has been considered and discussed since the advent of crewed flight~\cite{c15,c23}. The adaptive control methodology appears attractive due to varying plant characteristics and the ability to respond to damage scenarios, cf.~\cite{c417,c418,c420,c422,c423,c424}; article~\cite{c403} attempts to systematize the modern control-theory laws developed in the field. A broad review of methods for synthesizing control laws and modeling of aeroelastic systems is presented in~\cite{c444}.
In this connection, several works considering different aspects of these topics can be mentioned~\cite{c434, c437, c438, c439, c443},~\cite{c366,c369,c370}. Multiagent systems have many applications in civilian, security, and military areas~\cite{cdc2013, ecc2015}. Centralized quantitative and qualitative modeling, analysis, constraint satisfaction, maintenance, and control seem to be too rigid for these systems~\cite{math2021}. On the other hand, distributed and incremental reasoning seems to be more scalable, robust and flexible. That is why the investigation of multiagent control systems is popular nowadays~\cite{F1, Gr3, Fradkov1}. The multiagent methodology can serve as a general model of the interactions in a complex system~\cite{F7, Fradkov1}. An overview of publications considering emergent intelligence and self-organization in groups of devices is provided in~\cite{math2021}. Dynamical networks and multiagent protocols for the airplane are investigated in~\cite{Gran1,Gran2,A5}, where multiagent control is used for leveling out the perturbing forces in turbulence.
\section{DYNAMICS EQUATIONS OF WING WITH FEATHERS}
This section describes the proposed approach, named a ``wing with feathers''. Let us consider the phenomenon of flexural-torsional flutter, as described in~\cite{A2,A3}, in steady horizontal flight at a constant speed. We model the non-swept wing of half-span $l$, with the feathers in the neutral position, as a cantilevered beam under a static distributed bending and torsion load. The elastic axis of the wing passes through the Stiffness Centers (SC's) of the sections and does not coincide with the line of Gravity Centers (GC's) of the sections. We assume the wing stiffness in the longitudinal and transverse directions of the wing plane to be very large, neglecting the vibrations in these directions. We also ignore the possible movements of the SC and GC along the sections during the flight. Figure~\ref{fig:2p} illustrates a possible location of the SC and GC lines on a wing.
\begin{figure}[thpb] \centering \includegraphics[scale=0.3]{2p} \caption{\centering Stiffness Center and Gravity Center lines on a wing, where \parbox{0.65\textwidth}{ $O$ is the stiffness center of the wing section at the fuselage; \\ the $X$-axis is directed along the free stream; \\ the $Z$-axis is directed along the elastic axis of the undeformed wing; \\ the $Y$-axis completes the right-handed coordinate system; \\ $x_{0}$ is the distance from the leading edge of the wing to the SC of the section; \\ $\sigma_{T}$ is the distance between the SC and the GC; \\ $h$ is the transverse deflection of the section $A_{1}A_{2}$; \\ $l$ is the half-span of the wing.} } \label{fig:2p} \end{figure}
It must be noted that the GC is typically located behind the SC of the section.
\begin{figure}[thpb] \centering \includegraphics[scale=0.5]{3p} \caption{Wing cross section} \label{fig:3p} \end{figure}
The corresponding equations of the elastic line of the beam are of the form, cf.
\cite{c5}:
\begin{equation}\label{eq1}
\begin{cases}
\frac{\partial^{2}}{\partial z^{2}}\left( EJ\frac{\partial^{2}y}{\partial z^{2}}\right) = q^{0},\\
\frac{\partial}{\partial z}\left( GJ_{k}\frac{\partial\Theta}{\partial z}\right) = m^{0},
\end{cases}
\end{equation}
where, as presented in Fig.~\ref{fig:3p},
\begin{itemize}
\item $b$ is the wing section chord;
\item $h$ is the transverse deflection of the section $A_{1}A_{2}$;
\item $\alpha_{CT}$ is the angle of attack in the section of the undeformed wing;
\item $y$ is the deflection of the stiffness axis in the current wing section;
\item $\Theta$ is the wing twist angle, which is considered positive if it increases the angle of attack in the section;
\item $EJ, GJ_{k}$ are the wing stiffnesses in bending and torsion, respectively;
\item $q^{0}$ and $m^{0}$ are the linear force and moment relative to the stiffness axis acting on the wing.
\end{itemize}
The functions of \eqref{eq1} are time-independent since the wing is in a steady (stationary) state:
\begin{equation}\label{eq2}
y=y^{0}(z),~\Theta=\Theta^{0}(z).
\end{equation}
These solutions must satisfy the boundary conditions at the ends of the wing~\cite{fung}
\begin{equation}\label{eq3}
\begin{cases}
\left. y\right\rvert_{z=0}=0; ~~ \left. \frac{\partial y}{\partial z}\right\rvert_{z=0}=0~~ (\textnormal{rigid attachment to the fuselage});\\
\left. EJ\frac{\partial^{2}y}{\partial z^{2}}\right\rvert_{z=l}=0;~~ \left. \frac{\partial}{\partial z}\left(EJ\frac{\partial^{2}y}{\partial z^{2}}\right) \right\rvert_{z=l}=0 \\ ~~ (\textnormal{moment and shear force at the free end}),\\
\left. \Theta\right\rvert_{z=0}=0; ~~ \left. GJ_{k}\frac{\partial \Theta}{\partial z}\right\rvert_{z=l}=0 ~~ \\ (\textnormal{angle of rotation at the clamped end and moment at the free end}).
\end{cases}
\end{equation}
Now suppose that, for some reason such as a sudden aileron movement, an air pocket, a gust of wind, and so on, the wing deviates from its stationary position (see \eqref{eq2}). After this disturbance ceases, the wing returns to the equilibrium state under the influence of the elastic forces. If the energy dissipation is insignificant, the return is not aperiodic; instead, wing oscillations may arise. We assume that these oscillations can initially be ignored and do not affect the aircraft dynamics.
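As a quick numerical illustration of the static problem \eqref{eq1},\eqref{eq3}, the following sketch integrates the bending and torsion equations for a uniform wing with constant $EJ$, $GJ_k$ and constant distributed loads $q^0$, $m^0$ (all numerical values are illustrative only and are not taken from this paper):

\begin{verbatim}
import numpy as np

# illustrative constants: half-span [m], stiffnesses [N m^2], loads [N/m], [N m/m]
l, EJ, GJk = 15.0, 2.0e7, 5.0e6
q0, m0 = 4.0e3, 1.0e3
z = np.linspace(0.0, l, 4001)

def cumint(f, z):
    """Cumulative trapezoidal integral with value 0 at z = 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(z))))

# bending: EJ * y'''' = q0 with y(0) = y'(0) = 0 and y''(l) = y'''(l) = 0
y3 = cumint(np.full_like(z, q0 / EJ), z); y3 -= y3[-1]   # enforce y'''(l) = 0
y2 = cumint(y3, z);                       y2 -= y2[-1]   # enforce y''(l) = 0
y1 = cumint(y2, z)                                       # y'(0) = 0 by construction
y  = cumint(y1, z)                                       # y(0) = 0 by construction

# torsion: GJk * Theta'' = m0 with Theta(0) = 0 and Theta'(l) = 0
t1 = cumint(np.full_like(z, m0 / GJk), z); t1 -= t1[-1]
theta = cumint(t1, z)

print("tip deflection :", y[-1], " analytic q0*l^4/(8*EJ) =", q0 * l**4 / (8 * EJ))
print("tip twist [rad]:", theta[-1], " analytic -m0*l^2/(2*GJk) =", -m0 * l**2 / (2 * GJk))
\end{verbatim}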
According to \cite{A2,A3}, small bending-torsional oscillations of a wing near its equilibrium position (see \eqref{eq2}, \eqref{eq3}) in the laminar flow are described by the following equations: \begin{equation}\label{eq4} \begin{cases} \frac{\partial^{2}}{\partial z^{2}}\left( EJ\frac{\partial^{2}y_{1}}{\partial z^{2}}\right) + m\frac{\partial^{2}y_{1}}{\partial t^{2}} - m\sigma_{T}\frac{\partial^{2}\Theta_{1}}{\partial t^{2}}=q_{a}, \\ \frac{\partial}{\partial z}\left( GJ_{k}\frac{\partial\Theta_{1}}{\partial z}\right) + m\sigma_{T}\frac{\partial^{2} y_{1}}{\partial t^{2}} - J_{m}\frac{\partial^{2}\Theta_{1}}{\partial t^{2}}=m_{a}, \end{cases} \end{equation} where \begin{itemize} \item $y_{1} $ and $\theta_{1}$ are the additional deflection and angle of twisting of the wing relative to the stationary state (see \eqref{eq2},\eqref{eq3})), due to fluctuations; \item $m$ is the linear mass of the wing; \item $J_{m}$ is the linear mass moment of inertia of the wing relative to its stiffness axis; \item $q_ {a}$ and $m_ {a}$ are the linear aerodynamic force of the wing and the linear moment of the aerodynamic force relative to the stiffness axis, due to wing vibrations. \end{itemize} Solutions of the system \eqref{eq4} must satisfy boundary conditions similar to \eqref{eq3}. We represent the right-hand sides of \eqref{eq4} in the form: \begin{equation* q_{a}=\Delta q_{a} + q_{u}, ~~ m_{a} = \Delta m_{a} + m_{u}, \end{equation*} where \begin{itemize} \item $\Delta q_ {a}$ and $\Delta m_ {a}$ are the linear aerodynamic force and moment relative to the stiffness axis respectively, arising due to wing oscillations in the neutral position of the feathers; \item $q_{u}$ and $m_{u}$ are the linear aerodynamic force and moment created by changing the orientation of the feathers. \end{itemize} Following the flutter equations (\cite{A2}, p. 176, equation (35)) and taking into account linear aerodynamic forces and moment $q_{u}$ and $m_{u}$, we rewrite \eqref{eq4} as \begin{equation}\label{sist6} \begin{cases} \frac{\partial^{2}}{\partial z^{2}}(EJ\frac{\partial^{2}y_{1}}{\partial z^{2}}) +m\frac{\partial^{2}y_{1}}{\partial t^{2}} -m\sigma_{T}\frac{\partial^{2}\Theta_{1}}{\partial t^{2}} \\ -C_{y}^{\alpha}\left[\Theta_{1}+(\frac{3}{4}b-x_{0})\frac{1}{V}\frac{\partial\Theta_{1}}{\partial t} -\frac{1}{V}\frac{\partial y_{1}}{\partial t}\right]\rho bV^{2}=q_{u}\\ \frac{\partial}{\partial z}(GJ_{k}\frac{\partial\Theta_{1}}{\partial z}) + m\sigma_{T}\frac{\partial^{2}y_{1}}{\partial t^{2}} - J_{m}\frac{\partial^{2}\Theta_{1}}{\partial t^{2}} - \frac{\pi}{16}\frac{b^2}{V}\frac{\partial\Theta_{1}}{\partial t} \rho bV^{2} \\ +\left\{ +C_{y}^{\alpha}(x_{0}-\frac{b}{4})\left[\Theta_{1}+(\frac{3}{4}b-x_{0})\frac{1}{V}\frac{\partial\Theta_{1}}{\partial t} -\frac{1}{V}\frac{\partial y_{1}}{\partial t}\right] \right\}\rho bV^{2} = m_{u}, \\ y_{1} = \frac{\partial y_{1}}{\partial z}=\Theta_{1} = 0,~~z=0,\\ \frac{\partial^{2}y_{1}}{\partial z^{2}} = \frac{\partial^{3}y_{1}}{\partial z^{3}} = \frac{\partial\Theta_{1}}{\partial z}=0,~~z=l, \end{cases} \end{equation} where \begin{itemize} \item $C_{y}^{\alpha}=\frac{\partial C_{y}}{\partial \alpha}$; $C_{y}$ is the wing lift coefficient; \item $C_{y}^{\alpha}$ consider constant along the span; \item $C_{y}=C_{y}^{\alpha}(\alpha-\alpha_{0});$ \item $\alpha=\alpha_{CT}+\Theta^{0}+\Theta_{1}$ is the instant value of the angle of attack when the wing moves; \item $\alpha_{0}$ is the value of the angle of attack at which $C_{y}=0$; \item $\rho$ is the air density. 
\end{itemize} As can be seen from \eqref{sist6}, the bending and torsional vibrations of the wing are coupled; this is one of the necessary conditions for the occurrence of flutter. It is also known \cite{c5} that with increasing speed $V$ the bending and torsional oscillations approach each other and at $V = V_{flat}$ they coalesce. Moreover, there is a phase shift between these oscillations, another necessary condition for the occurrence of flutter \cite{A2,A3}. It is important to remark that in this case the amplitude of the wing oscillations wavers around a small constant value, so the oscillations are no longer self-damped, as was the case for $V < V_{flat}$. The crucial problem arises when $V > V_{flat}$, i.e. when the slightest deformations grow catastrophically. In terms of \eqref{sist6}, to prevent flutter it is necessary to form $q_u$ and $m_u$ so that the wing oscillation energy remains bounded, with a bound that is acceptable from the standpoint of controllability, stability and integrity of the aircraft structure. So, if $V > V_{flat}$, the condition on the total energy of the cantilevered beam \cite{A2} is: \begin{multline}\label{eq7} E = E_{kinet} + E_{poten} = \frac{1}{2}\int_{0}^{l}{m\left(\frac{\partial y_{1}}{\partial t} \right)^{2}dz} + \frac{1}{2}\int_{0}^{l}{J_{m}\left(\frac{\partial \Theta_{1}}{\partial t} \right)^{2}dz} -\int_{0}^{l}{m\sigma_{T}\frac{\partial y_{1}}{\partial t}\frac{\partial \Theta_{1}}{\partial t}dz} + \\ \frac{1}{2}\int_{0}^{l}{EJ\left(\frac{\partial^{2}y_{1}}{\partial z^{2}} \right)^{2}dz} +\frac{1}{2}\int_{0}^{l}{GJ_{k}\left(\frac{\partial \Theta_{1}}{\partial z} \right)^{2}dz} \leq E_{*}. \end{multline} A stricter requirement is to keep the solution of the system \eqref{sist6} in a given small neighborhood of the solution of \eqref{eq2}: \begin{equation}\label{eq8} \left\|\bar{x}\right\| \leq \epsilon, \end{equation} where \begin{equation}\notag \bar{x} = \left( y_{1}, \frac{\partial y_{1}}{\partial t}, \Theta_{1}, \frac{\partial \Theta_{1}}{\partial t} \right). \end{equation} The force $q_{u}$ and the moment $m_{u}$ are generated by the dynamic pressure and therefore depend on the flow velocity $V$, the position of the feathers on the wing, their orientation and other factors associated with the adopted aerodynamic calculation scheme. Let us assume that the feathers are completely rigid structural elements and that a change in the orientation of one feather does not influence the airflow around the remaining feathers. We also suppose that the flow around the wing as a whole remains laminar. Then we obtain: \begin{equation}\label{eq9} q_{u} = \sum_{i=1}^{n(z)}{q_{u_{i}}}, ~~ m_{u} = \sum_{i=1}^{n(z)}{m_{u_{i}}}, \end{equation} where $q_{u_{i}}$ and $m_{u_{i}}$ are the additional linear force and moment from the $i$-th feather, and the summation is carried out over all the feathers covering the section under consideration; the number of such feathers is denoted by $n(z)$. The rotation angle of a feather is measured from the tangent to the wing profile: for the feathers on the upper surface $\beta_{i}\in[\beta^{-},0],~\beta^{-}<0,$ and for the feathers on the lower surface $\beta_{i}\in[0,\beta^{+}],~\beta^{+}>0.$ \begin{figure}[thpb] \centering \includegraphics[scale=0.38]{4p} \caption{Wing profiles in a static position and during vibrations} \label{fig:4p} \end{figure} In Fig.
\ref{fig:4p}: \begin{itemize} \item $A_{0}~A_{1}~A_{2}~A_{4}$ are wing profiles (considered to be thin) in a static position (before vibrations); \item $\acute{A}_{0}~\acute{A}_{1}~\acute{A}_{2}~\acute{A}_{4}$ are wing profiles during oscillations; \item $E$ and $E_{1}$ are the stiffness centers (SC) of the wing section in the static position and during vibrations, respectively; \item the $X$-axis is directed along the velocity of the free stream; \item the $Y$-axis is perpendicular to the $X$-axis and to the stiffness axis of the undeformed wing; \item $A_{1}~A_{3}~\acute{A}_{1}~\acute{A}_{3}$ are the leading and trailing edges of feather 1 in the neutral position on the corresponding profiles; \item $A_{2}~A_{4}~\acute{A}_{2}~\acute{A}_{4}$ are the leading and trailing edges of feather 2 (analog of the aileron) in the neutral position on the relevant profiles; \item $x_{1}^{*}$ and $x_{1K}$ are the distances from the leading and trailing edges of feather 1 to the leading edge of the wing; \item $x_{2}^{*}$ and $x_{2K}$ are the similar parameters for feather 2; \item $EE_{1}$ is the deflection of the wing; \item $\Theta_{1}$ is the angle of twisting of the wing near the point $E_{1}$; \item $\beta_{1}<0$ is the angle of deviation of feather 1 from the neutral position; \item $\beta_{2}>0$ is the angle of deviation of feather 2 from the neutral position; \item $\hat{A}_{3}$ and $\hat{A}_{4}$ are the trailing edges of feathers 1 and 2, respectively, after their deflection. \end{itemize} The twist $\Theta_{1}$, the wing deflection (measured from the static ordinate $y_{CT}$) and the angles $\beta_{i}$ are considered small. A similar figure for the aileron is presented in (\cite{A1}, p.~143, Fig.~41). Following the technique suggested in (\cite{A2}, pp.~143--146), it can be shown that the influence of the $i$-th feather on the wing is calculated as follows: \begin{equation}\label{eq10} \begin{cases} q_{u_{i}} = A_{i}V^{2}\beta_{i} + B_{i}V\dot{\beta}_{i}, \\ m_{u_{i}} = C_{i}V^{2}\beta_{i} + D_{i}V\dot{\beta}_{i}, \end{cases} \end{equation} where $A_{i}=C_{y}^{\alpha}G_{i}\rho b^{2},~$ $B_{i}=C_{y}^{\alpha}H_{i}\rho b^{3},$ $C_{i}=-\left[ I_{i} + C_{y}^{\alpha}(\frac{x_{0}}{b}-\frac{1}{4})G_{i}\right]\rho b^{2},$ $D_{i}=-\left[ J_{i} + C_{y}^{\alpha}(\frac{x_{0}}{b}-\frac{1}{4})H_{i}\right]\rho b^{3},$ $G_{i}=\frac{1}{\pi}\left[ (\psi_{ik}-\psi_{i}^{*}) - (\sin{\psi_{ik}}-\sin{\psi_{i}^{*}})\right],$ \begin{multline}\notag H_{i}=\frac{1}{2\pi}\left( \cos{\psi_{i}^{*}}(\psi_{ik}-\psi_{i}^{*}) - (\sin{\psi_{ik}}-\sin{\psi_{i}^{*}})\right) -\cos{\psi_{i}^{*}}(\sin{\psi_{ik}}-\sin{\psi_{i}^{*}}) \\ +\frac{1}{2}\left( (\psi_{ik}-\psi_{i}^{*}) +\frac{1}{2}(\sin{2\psi_{ik}}-\sin{2\psi_{i}^{*}}) \right); \end{multline} $I_{i}=\frac{1}{8}\left[ 2(\sin{\psi_{ik}}-\sin{\psi_{i}^{*}}) + (\sin{2\psi_{ik}}-\sin{2\psi_{i}^{*}})\right],$ \begin{multline}\notag J_{i}=-\frac{1}{16}\left( -2\cos{\psi_{i}^{*}}(\sin{\psi_{ik}}-\sin{\psi_{i}^{*}}) + (\psi_{ik}-\psi_{i}^{*}) \right) -\frac{1}{16}\left( (\frac{1}{2}-\cos{\psi_{i}^{*}})(\sin{2\psi_{ik}}-\sin{2\psi_{i}^{*}}) \right) \\ -\frac{1}{16}\left( (\sin{\psi_{ik}}-\sin{\psi_{i}^{*}}) +\frac{1}{3}\left( \sin{3\psi_{ik}}-\sin{3\psi_{i}^{*}} \right) \right); \end{multline} $x_{i}^{*}=\frac{b}{2}(1-\cos{\psi_{i}^{*}}),~~$ $x_{ik}=\frac{b}{2}(1-\cos{\psi_{ik}}),$ $\psi_{i}^{*}\in[0,\pi],~~$ $\psi_{ik}\in[0,\pi].$ According to \cite{A2,A3}, the solution of \eqref{sist6} near the flutter onset is sought in the form \begin{equation}\label{eq11} \begin{cases} y_{1}(z,t)=q(t)f(z),\\ \Theta_{1}(z,t)=r(t)\phi(z).
\end{cases} \end{equation} where $f(z)$ and $\phi(z)$ are the mode shape functions, satisfying the boundary conditions: at $z=0,~f=0;~f'=0;~\phi=0;$ at $z=l,~f''=0;~f'''=0;~\phi'=0.$ Here, for the sake of brevity, we write $f'=\frac{\partial f}{\partial z}$, $f'''=\frac{\partial^{3} f}{\partial z^{3}}.$ We substitute \eqref{eq11} into \eqref{sist6}, multiply the first equation by $f$ and the second equation by $\phi$ and then integrate from $0$ to $l$. Taking into account \eqref{eq9} and \eqref{eq10}, after simple transformations this gives \begin{equation}\label{eq12} \begin{cases} a_{11}\ddot{q} + a_{12}\dot{q} + a_{13}q + b_{11}\ddot{r} + b_{12}\dot{r}+b_{13}r = Q(\beta,\dot{\beta}), \\ a_{21}\ddot{q} + a_{22}\dot{q} + b_{21}\ddot{r} + b_{22}\dot{r}+b_{23}r = M(\beta,\dot{\beta}), \end{cases} \end{equation} where $a_{11}=\int_{0}^{l}{mf^{2}dz},$ $a_{12}=C_{y}^{\alpha}\rho V\int_{0}^{l}{bfdz},$ $a_{13}=\int_{0}^{l}{\frac{d^{2}\left(EJf''\right)}{d{z}^{2}}fdz},$ $b_{11}=-\int_{0}^{l}{m\sigma_{T}f\phi dz},$ $b_{12}=-C_{y}^{\alpha}\rho V\int_{0}^{l}{\left(\frac{3}{4}b-x_{0}\right)bf\phi dz},$ $b_{13}=-C_{y}^{\alpha}\rho V^{2}\int_{0}^{l}{bf\phi dz},$ $a_{21}=\int_{0}^{l}{m\sigma_{T}f\phi dz}=-b_{11},$ $a_{22}=-C_{y}^{\alpha}\rho V\int_{0}^{l}{\left(x_{0}-\frac{b}{4}\right)bf\phi dz},$ $b_{21} = -\int_{0}^{l}{J_{m}\phi^{2}dz},$ $b_{22}=-\frac{\pi}{16}\rho V\int_{0}^{l}{b^{3}\phi^{2}dz} + C_{y}^{\alpha}\rho V\int_{0}^{l}{b(x_{0}-\frac{b}{4})\left(\frac{3}{4}b-x_{0}\right)\phi^{2} dz},$ $b_{23}=b_{23}^{(1)} + b_{23}^{(2)} = C_{y}^{\alpha}\rho V^{2}\int_{0}^{l}{b(x_{0}-\frac{b}{4})\phi^{2}dz} + \int_{0}^{l}{\frac{d\left(GJ_{k}\phi' \right)}{dz}\phi dz},$ $Q(\beta,\dot{\beta}) = \sum_{i=1}^{N}{\left(\bar{A}_{i}V^{2}\beta_{i} + \bar{B}_{i}V\dot{\beta}_{i}\right)},$ $\bar{A}_{i}=\int_{0}^{l}{A_{i}f dz},~~$ $\bar{B}_{i}=\int_{0}^{l}{B_{i}f dz},$ $M(\beta,\dot{\beta}) = \sum_{i=1}^{N}{\left( \bar{C}_{i}V^{2}\beta_{i} + \bar{D}_{i}V\dot{\beta}_{i}\right)},$ $\bar{C}_{i}=\int_{0}^{l}{C_{i}\phi dz},~~$ $\bar{D}_{i}=\int_{0}^{l}{D_{i}\phi dz},$ $\beta=col\left\{\beta_{i},~i={1,N} \right\},~~$ $\dot{\beta}=col\left\{\dot{\beta}_{i},~i={1,N} \right\},$ and $N$ is the total number of feathers. Given the functions $f$ and $\phi$ and the distributions of the mass and stiffness parameters of the wing (considered time-independent), the coefficients $a_{ij}$ and $b_{ij}$, $i, j = 1,2,3$, turn out to be constants. The results below depend only weakly on the choice of the functions $f$ and $\phi$. We note, without going into details, that these functions can be computed, for example, by the method of successive approximations. We complete \eqref{eq12} with the control equations \begin{equation}\label{eq13} \dot{\beta} = u, \end{equation} where $u = col\left\{u_{i},~~i={1,N} \right\};~~$ $\beta_{i}\in[0,\beta^{+}], ~~ \beta^{+} > 0, ~~ i \in \bar{1,n^{+}},$ where $n^{+}$ is the total number of feathers on the lower surface of the wing; $\beta_{i}\in[\beta^{-},0], ~~ \beta^{-} < 0, ~~ i \in \bar{n^{+}+1,N},$ where $n^{-}=N-n^{+}$ is the total number of feathers on the upper surface of the wing. We introduce \begin{equation}\label{eq14} x=col\left\{q, \dot{q}, r, \dot{r} \right\} = col\left\{x_{i},~~i=\bar{1,4} \right\}. \end{equation} Then, we substitute \eqref{eq14} into \eqref{eq12} and reduce this system to the normal Cauchy form.
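Once the mode shapes $f$, $\phi$ and the spanwise distributions of the wing parameters are fixed, the coefficients $a_{ij}$, $b_{ij}$ above are ordinary quadratures. A minimal numerical sketch is given below (in Python); the mode shapes and all numerical data in it are placeholder assumptions and are not the values used later in the simulation.
\begin{verbatim}
# Evaluation of a few of the Galerkin coefficients of (eq12) by numerical
# quadrature, following the formulas in the text.  The mode shapes f, phi
# and all constants are placeholder assumptions.
import numpy as np
from scipy.integrate import quad

l, b, m_lin = 10.0, 10.0, 10.0        # span, chord, linear mass (assumed)
sigma_T, J_m = 0.1, 5.0               # static unbalance, linear inertia
rho_air, V, Cy_alpha, x0 = 1.225, 10.0, 10.0, 2.5

f   = lambda z: (z / l) ** 2          # assumed bending mode shape
phi = lambda z: z / l                 # assumed torsion mode shape

a11 = quad(lambda z: m_lin * f(z) ** 2, 0, l)[0]
b11 = -quad(lambda z: m_lin * sigma_T * f(z) * phi(z), 0, l)[0]
b21 = -quad(lambda z: J_m * phi(z) ** 2, 0, l)[0]
b13 = -Cy_alpha * rho_air * V**2 * quad(lambda z: b * f(z) * phi(z), 0, l)[0]
a22 = -Cy_alpha * rho_air * V * quad(
    lambda z: (x0 - b / 4) * b * f(z) * phi(z), 0, l)[0]

print(dict(a11=a11, b11=b11, b21=b21, b13=b13, a22=a22))
\end{verbatim}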
After combining \eqref{eq12} and \eqref{eq13}, we get \begin{equation}\label{eq15} \begin{cases} \dot{x}_{1}=x_{2},\\ \dot{x}_{2}=\sum_{k=1}^{4}{C_{1k}x_{k}} + F_{1}(\beta,u),\\ \dot{x}_{3}=x_{4},\\ \dot{x}_{4}=\sum_{k=1}^{4}{C_{2k}x_{k}} + F_{2}(\beta,u),\\ \dot{\beta}=u, \end{cases} \end{equation} where $F_{1}(\beta,u)=d_{11}Q+d_{12}M=\sum_{i=1}^{N}{\left(R_{1i}\beta_{i}+s_{1i}u_{i} \right)},$ $F_{2}(\beta,u)=d_{21}Q+d_{22}M=\sum_{i=1}^{N}{\left(R_{2i}\beta_{i}+s_{2i}u_{i} \right)},$ $R_{1i}=V^{2}\left( \bar{A}_{i}d_{11} + \bar{C}_{i}d_{12}\right),$ $s_{1i}=V\left( \bar{B}_{i}d_{11} + \bar{D}_{i}d_{12}\right),$ $d_{11}=\left[a_{11}\left(1-\frac{a_{21}b_{11}}{a_{11}b_{21}} \right) \right]^{-1},$ $d_{12}=-d_{11}{b_{11}} / {b_{21}},$ $R_{2i}=V^{2}\left( \bar{A}_{i}d_{21} + \bar{C}_{i}d_{22}\right),$ $s_{2i}=V\left( \bar{B}_{i}d_{21} + \bar{D}_{i}d_{22}\right),$ $d_{21}=-a_{21}\frac{d_{11}}{b_{21}},$ $d_{22}=\left(1 - a_{21}d_{12} \right) / b_{21},$ $C_{11}=-d_{11}a_{13},$ $C_{12}=-d_{11}(a_{12}-b_{11}\frac{a_{22}}{b_{21}}),$ $C_{13}=-d_{11}(b_{13}-b_{11}\frac{b_{23}}{b_{21}}),$ $C_{14}=-d_{11}(b_{12}-b_{11}\frac{b_{22}}{b_{21}}),$ $C_{21}=-a_{21}\frac{C_{11}}{b_{21}},$ $C_{22}=-\left( a_{22}+a_{21}C_{12}\right)/b_{21},$ $C_{23}=-\left( b_{23}+a_{21}C_{13}\right)/b_{21},$ $C_{24}=-\left( b_{22}+a_{21}C_{14}\right)/b_{21}.$ Now, we convert \eqref{eq7} using \eqref{eq11} and \eqref{eq14} \begin{multline}\label{eq16} E=\frac{1}{2}\int_{0}^{l}{mf^{2}dz}\dot{q}^{2} + \frac{1}{2}\int_{0}^{l}{J_{m}\phi^{2}dz}\dot{r}^{2} -\int_{0}^{l}{m\sigma_{T}f\phi dz} \dot{q}\dot{r} +\frac{1}{2}\int_{0}^{l}{EJ(f'')^{2}dz}q^{2} + \frac{1}{2}\int_{0}^{l}{GJ_{k}(\phi')^{2}dz}r^{2} = \\ \frac{1}{2}a_{13}x_{1}^{2}+\frac{1}{2}a_{11}x_{2}^{2} -\frac{1}{2}b_{23}^{(2)}x_{3}^{2}-\frac{1}{2}b_{21}x_{4}^{2}-a_{21}x_{2}x_{4}\leq E_{*}. \end{multline} Finally, by integration by parts we get: \begin{multline}\notag \int_{0}^{l}{EJ(f'')^{2}dz} = \left. EJf''f' \right\rvert_{0}^{l} - \int_{0}^{l}{\left(EJf''\right)'f'dz} = -\frac{d\left( EJf''\right)}{dz}\left. f\right\rvert_{0}^{l} + \int_{0}^{l}{\frac{d^{2}\left( EJf''\right)}{dz^{2}}fdz} = a_{13}, \end{multline} \begin{equation}\notag \int_{0}^{l}{GJ_{k}\left(\phi' \right)^{2}dz} = GJ_{k}\phi'\left. \phi\right\rvert_{0}^{l} - \int_{0}^{l}{\frac{d\left(GJ_{k}\phi' \right)}{dz}\phi dz} = -b_{23}^{(2)}. \end{equation} Thus the system \eqref{eq15}, which describes small flexural-torsional wing oscillations in a laminar flow and takes into account the linear aerodynamic force and moment, is complemented by the condition \eqref{eq16} limiting the total energy of the system. The rate of change of the inclination angle of the ``feather'' with respect to the wing plane is selected as the control parameter. \section{CONTROL SYNTHESIS WITH THE SPEED-GRADIENT METHOD} To equalize the force impact on different parts of the wing, we use the Speed-Gradient Principle~\cite{A6, A7} to derive the feather angle control law. According to this principle, all physical systems evolve along the shortest path towards thermodynamic equilibrium, which corresponds to the maximal value of entropy. In the Speed-Gradient algorithm, the maximal increment of entropy corresponds to the minimal value of the energy in~\eqref{eq16}. Up to this point, we have described the dynamic system and specified, via \eqref{eq16}, how the solution should look in the desired state. However, a control law allowing the system to reach the desired state has not yet been specified.
In this section such a law is derived. First, we construct a control for the system \eqref{eq15} with the criterion \eqref{eq16}, using the speed-gradient method introduced in \cite{A6,A7}. \begin{multline}\notag \frac{dE}{dt} = a_{13}x_{1}x_{2}+a_{11}x_{2}\left( \sum_{k=1}^{4}{C_{1k}x_{k}} + F_{1}(\beta, u)\right) -b_{23}^{(2)}x_{3}x_{4} - b_{21}x_{4}\left( \sum_{k=1}^{4}{C_{2k}x_{k}} + F_{2}(\beta, u)\right) \\ -a_{21}x_{4}\left( \sum_{k=1}^{4}{C_{1k}x_{k}} + F_{1}(\beta, u)\right) -a_{21}x_{2}\left( \sum_{k=1}^{4}{C_{2k}x_{k}} + F_{2}(\beta, u)\right), \end{multline} \begin{multline}\notag \nabla_{u}\left( \frac{dE}{dt}\right) = col\left\{ (a_{11}x_{2}-a_{21}x_{4})\frac{\partial F_{1}}{\partial u_{i}} -(b_{21}x_{4}+a_{21}x_{2})\frac{\partial F_{2}}{\partial u_{i}}, i={1,N} \right\}=\\ col\left\{ (a_{11}x_{2}-a_{21}x_{4})s_{1i} -(b_{21}x_{4}+a_{21}x_{2})s_{2i}, i={1,N}\right\}=\\ col\left\{ (a_{11}s_{1i}-a_{21}s_{2i})x_{2} -(a_{21}s_{1i}+b_{21}s_{2i})x_{4}, i={1,N}\right\}= col\left\{ \mu_{i}x_{2} + \nu_{i}x_{4}, i={1,N}\right\}, \end{multline} where $\mu_{i} = a_{11}s_{1i}-a_{21}s_{2i};~$ $\nu_{i} = -(a_{21}s_{1i}+b_{21}s_{2i}).$ Thus, \begin{equation}\notag \frac{du_{i}}{dt} = -\gamma_{i}(\mu_{i}x_{2}+\nu_{i}x_{4}), ~~ i={1,N}, ~~ \gamma_{i}>0 \end{equation} \noindent or \begin{equation}\notag \frac{du_{i}}{dt} = -\gamma_{i}(\mu_{i}\dot{x}_{1}+\nu_{i}\dot{x}_{3}) \Rightarrow u_{i} = -\gamma_{i}(\mu_{i}x_{1}+\nu_{i}x_{3}) + const_{i}. \end{equation} Since $u_{i}=0$ for $x_{1}=x_{2}=x_{3}=x_{4}=0$, it follows that $const_{i} = 0$, $i={1,N}$. The resulting control law, derived in accordance with the Speed-Gradient Principle, \begin{equation}\label{eq17} u_{i} = -\gamma_{i}(\mu_{i}x_{1}+\nu_{i}x_{3}) \end{equation} is a feedback on the deviation with constant coefficients. \section{MULTI-AGENT CONTROL} The use of multiagent control allows for the formation of emergent intelligence (intellectual resonance, swarm intelligence), i.e. the occurrence of unexpected properties that the system possesses but none of its individual elements has. Each feather in the system aims to solve its own ``task'' of minimizing the deviation of the wing segment to which the feather is attached from its initial state. The action of each feather is, in general, not coordinated with the actions of the other feathers; however, the combined impact of all feathers results in a new property of the wing, viewed as a multiagent system of feathers: the ability to damp the vibrations. Now, consider the feathers as intelligent agents, so that each of them can receive information about the movement of the wing, exchange this information with other agents (transfer its information to them and receive their information), process the received data, and form local force and moment impacts on the wing, trying to keep the wing as close to its initial shape as possible (see Fig. \ref{fig:2p}). Assuming that the feather size is small compared to the wing surface, we associate each feather with a point on the wing surface corresponding to its neutral position.
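The agent abstraction just described can be illustrated by a minimal data structure. The following sketch (in Python, not part of the original formulation) stores, for each feather, its own angle, the neighbours it exchanges information with and the corresponding weights, and computes the local consensus-type coupling term that will reappear in the multiagent law derived below; all names and numerical values are placeholders.
\begin{verbatim}
# Illustrative feather-as-agent data structure (placeholder values).
from dataclasses import dataclass, field

@dataclass
class FeatherAgent:
    index: int
    beta: float = 0.0                              # current feather angle
    neighbors: dict = field(default_factory=dict)  # j -> weight b_ij

    def coupling(self, agents):
        # Local consensus-type term: sum_j b_ij * (beta_i - beta_j).
        return sum(w * (self.beta - agents[j].beta)
                   for j, w in self.neighbors.items())

# Five feathers on a line, each exchanging data with its immediate neighbours.
agents = [FeatherAgent(i, beta=0.01 * i) for i in range(5)]
for i, ag in enumerate(agents):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < 5]
    ag.neighbors = {j: 1.0 / len(nbrs) for j in nbrs}

print([round(ag.coupling(agents), 4) for ag in agents])
\end{verbatim}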
Introduce for each feather ($i={1,N}$): \begin{itemize} \item $\bar{z}_{i},\bar{\psi}_{i}$ --- coordinates of the point to which the $i$-th feather is related; \item $y_{1i}$ and $\Theta_{1i}$ --- deflection and angle of twisting of the wing at the location of the $i$-th feather (deviations from the curve \eqref{eq2}); \item $N_{i}$ --- the set of feathers with which the $i$-th feather can exchange information; \item $\bar{b}_{ij}$ --- a non-negative weighting coefficient of the significance of information from the $i$-th feather to the $j$-th. Here, we assume that $\bar{b}_{ij}=\bar{b}_{ji}$ and $\sum_{j\in N_{i}}{\bar{b}_{ij}}=1$ is the normalization condition; \item $\bar{b}_{ij}=0,$ if the $i$-th and $j$-th feathers are not connected informationally; \item $\bar{B}=[\bar{b}_{ij}]$ --- adjacency matrix. \end{itemize} According to \eqref{eq8} and \eqref{eq11}, for each $i={1,N}$ \begin{equation}\label{eq_Phi} \begin{Vmatrix} w_{i} \end{Vmatrix} = \begin{Vmatrix} y_{1i}\\ \dot{y}_{1i}\\ \Theta_{1i}\\ \dot{\Theta}_{1i} \end{Vmatrix} = \begin{Vmatrix} qf(z_{i})\\ \dot{q}f(z_{i})\\ r\phi(z_{i})\\ \dot{r}\phi(z_{i}) \end{Vmatrix} = \begin{Vmatrix} \Phi_{i}\bar{x} \end{Vmatrix} < \epsilon \end{equation} must hold for $t>t_{1}$ over an extended period of time, where $t_{1}$ is the moment of reaching $V_{flat}$ and $\Phi_{i}=diag\{f(z_{i}),f(z_{i}),\phi(z_{i}),\phi(z_{i})\}.$ Moreover, we take into account that \begin{equation}\notag \left\|\Phi_{i}\bar{x} - \Phi_{j}\bar{x} \right\| =\left\|\left[\Phi_{i} - \Phi_{j}\right]\bar{x} \right\| \leq \left\|\Phi_{i}\bar{x}\right\| + \left\|\Phi_{j}\bar{x}\right\| < 2\epsilon = \epsilon^{*}. \end{equation} The compensation for deviations from the stationary position is described in the model by the functional \begin{equation}\label{eq18} L(\bar{x})=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\left\|\left(\Phi_{i} - \Phi_{j}\right)\bar{x} \right\|^{2}}. \end{equation} By analogy with \cite{A5}, the problem is formulated as follows. In conditions of uniform rectilinear flight of the aircraft in a laminar flow, when approaching the critical speed of flexural-torsional flutter onset $V_{flat}$, it is required to find controls $u_i$ for each feather in the system \eqref{eq15} that ensure the fulfillment of the target condition for the functional \eqref{eq18}: \begin{equation*} L(\bar{x}) \leq \epsilon^{*} \end{equation*} for the given small tolerance $\epsilon^{*}>0$ and for $t>t_{1}$ during a sufficiently long period, where $t_{1}$ is the time of reaching the critical flutter speed. The feather control laws are generated according to the Speed-Gradient Principle as discussed above. \subsection{NON-MULTI-AGENT CONTROL SYNTHESIS} Let us consider the functional \begin{equation}\notag L(x) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\left\|(\Phi_{i}-\Phi_{j})\bar{x}\right\|}^{2}\leq \epsilon^{*}.
\end{equation} We suppose that $\bar{b}_{ij}\geq 0~\forall i,j$, which implies $L\geq 0.$ \begin{multline}\notag \left\|(\Phi_{i}-\Phi_{j})\bar{x}\right\|^{2} = \left\|diag\{a,~b, ~c,~d \}\bar{x}\right\|^{2} = \left\|\left(f_{ij}x_{1},~f_{ij}x_{2},~\phi_{ij}x_{3},~\phi_{ij}x_{4} \right)^{T}\right\|^{2} = \\ f_{ij}^{2}(x_{1}^{2}+x_{2}^{2}) + \phi_{ij}^{2}(x_{3}^{2}+x_{4}^{2}), \end{multline} where $\Phi_{i}$ is defined in~\eqref{eq_Phi}, $a = b = f(z_{i})-f(z_{j})$, $c = d = \phi(z_{i})-\phi(z_{j})$, and $f_{ij}=f_{i}-f_{j}=f(z_{i})-f(z_{j}),$ $\phi_{ij}=\phi_{i}-\phi_{j}=\phi(z_{i})-\phi(z_{j}).$ So: \begin{multline}\label{eq20} L(x) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\left[f_{ij}^{2}(x_{1}^{2}+x_{2}^{2}) + \phi_{ij}^{2}(x_{3}^{2}+x_{4}^{2}) \right]} = \frac{1}{2}\left[\chi(x_{1}^{2}+x_{2}^{2}) + \lambda(x_{3}^{2}+x_{4}^{2}) \right], \end{multline} where $\chi = \sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}f_{ij}^{2}}\geq 0$ and $\lambda = \sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\phi_{ij}^{2}}\geq 0$ are constants determined by the topology of the agent network (the wing in our model) for the given mode shape functions $f(z)$ and $\phi(z)$. A singular situation appears once $\chi=0$ or $\lambda=0$. Following the Speed-Gradient method of \cite{A6}, from \eqref{eq20} and \eqref{eq15} we obtain: \begin{multline}\notag \frac{dL}{dt}=\chi(x_{1}x_{2} + x_{2}\dot{x}_{2})+\lambda(x_{3}x_{4} + x_{4}\dot{x}_{4})= \\ \chi x_{2}\left[x_{1}+\sum_{k=1}^{4}{C_{1k}x_{k}} + F_{1}(\beta,u) \right] + \lambda x_{4}\left[x_{3}+\sum_{k=1}^{4}{C_{2k}x_{k}} + F_{2}(\beta,u) \right], \end{multline} \begin{equation}\notag \nabla_{u}\dot{L} = col\left\{ \frac{\partial\dot{L}}{\partial u_{i}},~i=1,N\right\}, \end{equation} \begin{multline*} \frac{\partial}{\partial u_{i}}\left(\frac{dL}{dt}\right) = \chi x_{2}\frac{\partial F_{1}(\beta,u)}{\partial u_{i}} + \lambda x_{4}\frac{\partial F_{2}(\beta,u)}{\partial u_{i}} = \\ \chi x_{2}s_{1i} + \lambda x_{4}s_{2i} = \chi s_{1i}\dot{x}_{1} + \lambda s_{2i}\dot{x}_{3}. \end{multline*} Consequently: \begin{multline}\label{eq21} \frac{d u_{i}}{dt} = -\gamma_{i}\left\{ \chi s_{1i}\dot{x}_{1} + \lambda s_{2i}\dot{x}_{3}\right\} \Rightarrow \\ \dot{\beta}_{i} = u_{i} = -\gamma_{i}\left\{ \chi s_{1i}{x}_{1} + \lambda s_{2i}{x}_{3}\right\}, ~ \gamma_{i}>0,~i=1,N, \end{multline} since the integration constant is zero for the same reasons as in~\eqref{eq17}. Equations \eqref{eq17} and \eqref{eq21} have the same structure, but there are some differences. In fact, according to \eqref{eq15}, we can consider the coefficients $s_{1i}$ and $s_{2i}$ as the coefficients of influence of the $i$-th feather on the force factor in bending vibrations and on the moment factor in torsional vibrations, respectively. The values of these coefficients show the degree of participation of the $i$-th feather in the wing dynamics. In \eqref{eq17}, the feedback coefficients take into account the influence of the two types of wing oscillations on each other, while in \eqref{eq21} the feedback coefficients for bending and torsional vibrations are strictly separated, each determined through the influence factors of its own type of oscillation only. The law \eqref{eq21} is not multiagent by nature, since its dependence on information about the state of the other agents is static and is invariant with respect to the dynamics of the $i$-th feather. \subsection{MULTI-AGENT CONTROL SYNTHESIS} Now, we go back to \eqref{eq15}.
We expand the vector of phase coordinates by introducing \begin{equation}\label{eq22} \tilde{x}_{i}=col\left\{ \Phi_{i}\bar{x},\beta_{i}\right\}. \end{equation} For this extended vector, we compose a functional analogous to \eqref{eq20}: \begin{equation}\label{eq23} \tilde{L}=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\left\| col\left\{ (\Phi_{i}-\Phi_{j})\bar{x}, \beta_{i}-\beta_{j}\right\}\right\|^{2}}. \end{equation} The proposed approach is justified by the fact that for small deviations of the wing from the stationary position determined by \eqref{eq2}, the deviations of the feathers from their neutral positions $\beta_{i},~i=1,N$ should be small as well. That is, at least for feathers on one side of the wing (lower/upper), we have \begin{equation}\notag \left(\beta_{i}-\beta_{j}\right)^{2}\leq \beta_{i}^{2}+\beta_{j}^{2} < 2\epsilon^{2}_{\beta}, \end{equation} where $\epsilon_{\beta}$ is a reasonably small number. After simple transformations, we get \begin{equation}\label{eq24} \tilde{L}=L+\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\left(\beta_{i}-\beta_{j}\right)^{2}} < \epsilon^{*} + \sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\epsilon^{2}_{\beta}} < \epsilon^{**}. \end{equation} Now, we apply the Speed-Gradient method: \begin{equation}\notag \frac{d\tilde{L}}{dt} = \dot{L} + \sum_{i=1}^{N}\sum_{j\in N_{i}}{\bar{b}_{ij}\left(\beta_{i}-\beta_{j}\right)\left(u_{i}-u_{j}\right)}; \end{equation} \begin{equation}\notag \nabla_{u}\left(\frac{d\tilde{L}}{dt}\right)=col\left\{ \frac{\partial\dot{L}}{\partial u_{i}} + 2\sum_{j\in N_{i}}{\bar{b}_{ij}(\beta_{i}-\beta_{j})}\right\}. \end{equation} Finally, the control law is: \begin{multline}\label{eq25} \dot{\beta}_{i}=u_{i}=-\tilde{\gamma}_{i}\left(\chi s_{1i}\dot{x}_{1} + \lambda s_{2i}\dot{x}_{3}\right) -2\tilde{\gamma}_{i}\sum_{j\in N_{i}}{\bar{b}_{ij}(\beta_{i}-\beta_{j})},~ \tilde{\gamma}_{i}>0,~ i=1,N. \end{multline} It is important to note the multiagent nature of the control protocol \eqref{eq22}, \eqref{eq25}: the control signal for the rotation of each feather is formed on the basis of information about its own current state and the current state of the feathers connected with it. The connection is defined by the second term in \eqref{eq25}, which states the dependence of the feather $i$ angle adjustment $\dot{\beta}_i$ on the deviation of the feather angle $\beta_i$ from its neighbors' angles $\beta_j$, $j \in N_i$. At the same time, the first part of \eqref{eq25} describes a feedback control with constant coefficients in terms of the rate of deviation of the bending and torsional vibrations from the stationary state \eqref{eq2}. It is essential that $\dot{x}_{1}=\dot{x}_{3}=0$ does not entail $u_{i}=0$, since in the general case $x_{1}\neq 0$ and $x_{3}\neq 0$; to reduce them, it is necessary to keep applying the control defined by \eqref{eq25}. The equality $\dot{\beta}_{i}=u_{i}=0$ holds only in the case of complete absence of oscillations, when $x_{1} = x_{2} = x_{3} = x_{4} = 0,$ since only then $\beta_{i}=0,~i=1,N.$ The control \eqref{eq25} does not explicitly depend on the time at which the critical flutter speed is reached, which allows using this control without any changes also for multiple transitions of the speed across this boundary. The introduced control for each feather $i$ uses only local information: its own angle $\beta_i$ and the angles $\beta_j$ of its nearest neighbors.
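As an illustration of how the law \eqref{eq25} acts locally, the following sketch (in Python) performs one explicit Euler update of the feather angles. The gains $\tilde{\gamma}_{i}$, the constants $\chi$, $\lambda$, the influence coefficients $s_{1i}$, $s_{2i}$, the adjacency weights $\bar{b}_{ij}$ and the measured rates $\dot{x}_{1}$, $\dot{x}_{3}$ are all placeholder values, since in practice they come from the wing data and the mode shapes.
\begin{verbatim}
# One Euler step of beta_i' = u_i from (eq25), with placeholder data.
import numpy as np

N = 5
gamma = np.full(N, 0.5)             # gains (assumed)
chi, lam = 1.0, 1.0                 # network constants chi, lambda (assumed)
s1 = np.linspace(0.1, 0.5, N)       # influence coefficients s_{1i} (assumed)
s2 = np.linspace(0.2, 0.6, N)       # influence coefficients s_{2i} (assumed)

B = np.zeros((N, N))                # adjacency weights b_ij, nearest neighbours
for i in range(N):
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            B[i, j] = 0.5

beta = np.zeros(N)                  # current feather angles
xdot1, xdot3 = 0.02, -0.01          # measured bending / torsion rates (assumed)
dt = 1e-3

u = -gamma * (chi * s1 * xdot1 + lam * s2 * xdot3) \
    - 2 * gamma * (B * (beta[:, None] - beta[None, :])).sum(axis=1)
beta = beta + dt * u
print(u)
\end{verbatim}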
The suggested procedure, on the one hand, does not require collecting data from all the feathers on the wing to form the controls \eqref{eq25} and, on the other hand, requires only a small amount of data to compute the control inputs in \eqref{eq25}; these are the essential features of the multiagent approach. \section{SIMULATION} Let us compare the introduced control laws: the law synthesized with the speed-gradient method~\eqref{eq17}, the non-multiagent law~\eqref{eq21} and the multiagent law~\eqref{eq25}. In the simulation experiments the constants were chosen equal to the values (in some arbitrary units) listed below. Time step $\Delta_t = 10^{-5}$; number of time instants $T = 10$; number of feathers $N = 5$; air density $\rho = 1.225$; linear mass of the wing $m = 10$; wing section chord $b = 10$; wing length $l = 10$; from~\eqref{sist6} the derivative of the wing lift coefficient with respect to $\alpha$, $C^{\alpha}_y = 10$; airspeed $V = 10$; wing stiffness in bending $EJ = 50$; wing stiffness in torsion $GJ_k = 70$; distance between the stiffness centers and the gravity centers in the wing cross section, assumed to be constant, $\sigma_T = 0.1$; wing cross-section assumed to be elliptical with height $a = 2$; linear mass moment of inertia of the wing relative to its stiffness axis $J_m = (\pi a b / 4) a^2 b^2 / (4 (a^2 + b^2))$; feather coordinates along the $Z$ axis $\bar{z} = l \cdot (0.1, 0.2,\ldots, 0.9)$; feather coordinates along the $X$ axis according to formulas~\eqref{eq10} $\bar{\psi} = \pi / 4 \cdot (0.1, 0.2,\ldots, 0.9)$; the coordinates of the feathers' trailing edges along the $X$ axis $x = 3/4 l$; the coordinates of the feathers' joints along the $X$ axis $x_\star = x - 0.01$; the distance from the leading edge of the wing to the stiffness center (SC) of the section $x_0 = l / 4$. \begin{figure} \includegraphics[scale=0.7]{control1.png} \caption{Feather angles $\beta$ under the control law synthesized with the speed-gradient method} \end{figure} \begin{figure} \includegraphics[scale=0.7]{control2.png} \caption{Feather angles $\beta$ under the non-multiagent control law} \label{fig:control2} \end{figure} \begin{figure} \includegraphics[scale=0.7]{control3.png} \caption{Feather angles $\beta$ under the multiagent control law} \label{fig:control3} \end{figure} The non-multiagent and multiagent control laws minimize the feather angles $\beta$, while the control law synthesized with the speed-gradient method tends to maximize the feather angles under the given conditions. Multiagent and non-multiagent control treat the feathers separately, and the difference between them lies in the use of information from neighboring feathers in the synthesis of the control action. In the case of multiagent control each feather ``takes into account'' the angles of its neighbors, which yields a faster control law. As can be seen from the graphs in Figures~\ref{fig:control2} and~\ref{fig:control3}, the multiagent control law (Fig.~\ref{fig:control3}) minimizes the feather angles at a higher rate. \section{CONCLUSIONS AND OUTLOOK} This work is the authors' first study of multiagent control of a wing with feathers, aimed at avoiding the growth of wing oscillations when approaching flutter. In the article \begin{itemize} \item a mathematical model of the bending-torsional vibrations of an airplane wing with controlled feathers on its surface is given; \item three different statements of the control problem are considered, which differ by the goal functional; \item the three control laws \eqref{eq17}, \eqref{eq21} and \eqref{eq25} are synthesized by the Speed-Gradient method. \end{itemize}
Only one of them, \eqref{eq25}, is multiagent. The law \eqref{eq21} is an ``intermediate'' one for the synthesis of a multiagent control law. It has a structure similar to \eqref{eq17}, but takes into account the presence of the other feathers and their contribution. However, the information about them remains static: the state and dynamics of the other feathers are not taken into account. The multiagent control law allows each feather to take into account information about its own current state and about the current state of the feathers in the area of the wing where it is located. As a rule, this allows more precise and quicker adjustment to external factors, which makes this control law the most promising. In the future, we intend to study the effectiveness of the obtained control laws and to compare them. The most important indicator in this comparison should be the time within which the system is able to damp the vibrations to a safe level and to keep them there. The relevance of this indicator is due to the rather fast growth of wing oscillations during flutter. Another promising area for further research is the development of multiagent control of the feathers following the example of a swarm, and the study of its effectiveness. \section*{ACKNOWLEDGMENT} Sections 1--5 of this work were supported at the IPME RAS by the Russian Science Foundation (project no. 21-19-00516). Section 6 was supported by St. Petersburg State University (project no. 73555239).
\section{Introduction and Statement of the Result} For $ q $, $ a $ integers such that $ a \geq 1 $, $ 1 < q $, $ (x, y) \in {\mathbb{R}}^{2} $, we consider the operator \begin{equation} \label{eq:P} P(x, y, D_{x}, D_{y}) = D_{x}^{2} + x^{2(q-1)} D_{y}^{2} + y^{2a} D_{y}^{2} . \end{equation} Here $ D_{x} = 1/\sqrt{-1} \ \partial_{x} $. The operator $ P $ generalizes the operator $$ P_{M} (x, y, D_{x}, D_{y}) = D_{x}^{2} + x^{2} D_{y}^{2} + y^{2} D_{y}^{2}, $$ defined by M\'etivier and for which M\'etivier, in an article in the Comptes Rendus de l'Acad\'emie des Sciences in 1981, stated that it is not real analytic hypoelliptic, \cite{metivier81}. Actually in \cite{metivier81} it was stated that $ P_{M} $ is Gevrey 2 hypoelliptic and not better, meaning that it is not Gevrey-$ s $ hypoelliptic for any $ s $, $ 1 \leq s < 2 $. For the sake of completeness we give the definition of $ G^{s}(\Omega) $---the class of all Gevrey-$ s $ functions in the open set $ \Omega $: \begin{definition} \label{def:gevrey} If $ \Omega \subset {\mathbb{R}}^{n} $ is an open set we say that the function $ u $ belongs to the Gevrey class of order $ s \geq 1 $, $ G^{s}(\Omega) $, if $ u \in C^{\infty}(\Omega) $ and for every compact set $ K \subset \Omega $ there exists a positive constant $ C_{K} $ such that $$ \sup_{K} | \partial_{x}^{\alpha} u(x) | \leq C_{K}^{|\alpha| + 1} \alpha!^{s} . $$ Note that $ G^{1}(\Omega) $ is the class of the real analytic functions in $ \Omega $. \end{definition} We point out explicitly that the characteristic set of $ P_{M} $ is the non symplectic submanifold of $ T^{*}{\mathbb{R}}^{2}\setminus\{0\} $ given by $$ \Char(P_{M}) = \{ (x, y; \xi, \eta) \ | \ x = y = 0, \xi = 0, \eta \neq 0 \}. $$ We have $$ \Char(P) = \{ (x, y; \xi, \eta) \ | \ x = y = 0, \xi = 0, \eta \neq 0 \} = \Char(P_{M}) . $$ The problem of the analytic or Gevrey regularity of the operator \eqref{eq:P} is part of the broader problem of the analytic regularity for sums of squares of vector fields with analytic coefficients. Technically speaking the operator \eqref{eq:P} is not a sum of squares, but it is easy to see that it shares the same properties with the sum of squares $$ D_{x}^{2} + \left(x^{q-1} D_{y}\right)^{2} + \left(y^{a} D_{y}\right)^{2} . $$ There has been a fair amount of literature on the problem in the eighties and ninenties and we refer to the paper \cite{bm-surv} for a survey of the main results. We focus on the conjecture stated by Treves (see \cite{Treves}, \cite{btreves} and \cite{trevespienza} for its statement) in 1999, characterizing the analytic hypoellipticity of sums of squares. The Treves conjecture defines a stratification of $ \Char(P) $ such that the strata satisfy the following properties (everything is meant to be microlocal near a fixed point in $ \Char(P) $) \begin{itemize} \item[(i)]{} Real analytic submanifolds of $ \Char(P) $. \item[(ii)]{} The symplectic form $ \sigma = d\xi \wedge dx $ restricted to each stratum has constant rank. \item[(iii)]{} For each stratum there exists an integer $ \nu \in {\mathbb{N}} $ such that all the Poisson brackets of the symbols of the vector fields of length $ \leq \nu $ are identically zero, but there is a Poisson bracket of length $ \nu+1 $ which is not zero. By length of a Poisson bracket we mean the number of vector fields of which we take an iterated bracket. \end{itemize} According to the conjecture an operator is analytic hypoelliptic if and only if every stratum of its characteristic variety is a symplectic manifold. 
For the operator $ P $ the stratification is made up of two half lines, $ \Sigma_{+} \cup \Sigma_{-} $, where $$ \Sigma_{\pm} = \{ (x, y; \xi, \eta) \ | \ x = y = 0, \xi = 0, \pm \eta > 0 \} . $$ $ \Sigma_{\pm} $ is an isotropic submanifold of $ T^{*}{\mathbb{R}}^{2}\setminus\{0\} $ and, as such, it coincides with its Hamilton leaf along the half-fiber $ \pm \eta >0 $. Thus the Treves stratification is not symplectic, which suggests that $ P $ is not analytic hypoelliptic. We recall that if a manifold is not symplectic there is always a foliation, called the Hamilton foliation, whose leaves may or may not be transverse to the fibers of the cotangent bundle. We point out that the operator in \eqref{eq:P} actually is one of the simplest model operators exhibiting a Hamilton leaf non transverse to the fibers. By adapting the technique of \cite{monom} one can prove that $ P $ is Gevrey $ s_{0} $ hypoelliptic, with $$ s_{0}^{-1} = 1 - \frac{1}{a} \frac{q - 1}{q} . $$ In the present paper we want to show that $ s_{0} $ is an optimal index, i.e. \begin{theorem} \label{th:1} The operator $ P $ is not Gevrey-$ s $ hypoelliptic for any $ s $ such that $ 1 \leq s < s_{0} $. Hence $P$ is not analytic hypoelliptic. \end{theorem} We point out that the optimality proof in the non transverse case is technically much more involved than that in the transverse case and, to our knowledge, in the non transverse case only M\'etivier's paper \cite{metivier81} is available. In the remaining part of this section we want to give a brief exposition of our motivation to study the operator $ P $ and set it in a more general context in the framework of the problem of the real analytic regularity of the solutions to sums of squares operators, of which $ P $ is a two variable example. An essential tool in M\'etivier's paper, \cite{metivier81}, is the expansion of a solution as a linear combination of the eigenfunctions of the corresponding harmonic oscillator, $ D_{x}^{2} + x^{2} $. It is known that the eigenfunctions of the harmonic oscillator are a basis in $ L^{2}({\mathbb{R}}) $, are rapidly decreasing at infinity and satisfy finite recurrence relations allowing us to express e.g. the derivative, or the multiplication by $ x $, of an eigenfunction as a combination of two different eigenfunctions. These recurrence relations are essential in M\'etivier's approach. When the degree of the potential is larger than two, i.e. for the anharmonic oscillator $ D_{x}^{2} + x^{2(q-1)} $, $ q > 2 $, Gundersen in \cite{gundersen} proved the \begin{theorem}[\cite{gundersen}] \label{th:gundersen} Consider the equation in $ {\mathbb{C}} $ $$ w''(z) + ( \lambda - p(z)) w(z) = 0, \qquad \lambda \in {\mathbb{R}}, \ z \in {\mathbb{C}} . $$ Here $ p $ denotes the polynomial $ p(z) = a_{2m}z^{2m} + a_{2m-2} z^{2m-2} + \ldots + a_{2} z^{2} $. Assume that $ a_{i} \geq 0 $ for every $ i $ and $ a_{2m} > 0 $. Denote by $ (\psi_{n}(z))_{n \in {\mathbb{N}}} $ a set of solutions that is a complete orthonormal basis in $ L^{2}({\mathbb{R}}) $. Then if $ m $ is even and $ g $ is a non zero polynomial we have that $ g(z) \cdot \psi_{\ell}^{(k)}(z) $ is not a finite linear combination of the $ \psi_{n} $, for $ k \in {\mathbb{N}} \cup \{0\} $, $ \ell \in {\mathbb{N}} $ fixed and $ \deg(g) \geq 1 $ if $ k = 0 $. \end{theorem} Thus for $ q $ odd there is no finite recurrence relation for the anharmonic oscillator.
For an even $ q $, Bender and Wang, \cite{bw}, showed that the eigenfunctions for the operator \begin{equation} \label{eq:bender} - u''(x) + x^{2N+2} u(x) = E x^{N}u(x) , \qquad N = -1, 0, 1, 2, \ldots, \end{equation} do have finite recurrence relations using confluent hypergeometric functions. We refer to Chinni, \cite{chinni}, for a result in this sense. Our proof of Theorem \ref{th:1}, as well as M\'etivier's approach, shares the general pathway of constructing an asymptotic formal solution with all the optimality proofs in the transverse case. This is done first at the formal level and then substantiated by introducing suitable cutoff functions to turn a formal solution into a true solution of an equation with a better right hand side. We shall show that the Gevrey regularity of $P$ is strictly related to the functions satisfying the equation \begin{equation} \label{eq:oscill} - u''(x) + x^{2(q-1)} u(x) = \lambda u(x) , \end{equation} i.e. the eigenfunctions of the operator \begin{equation} \label{eq:Q} Q = D^{2} + x^{2(q-1)}. \end{equation} However we do not require any recurrence relations among the eigenfunctions. \bigskip From a more geometric point of view we point out that even though the Treves conjecture has been shown not to hold for $ n \geq 4 $ (we refer to \cite{abm}, \cite{bm} for a proof and to \cite{bm-j} for a case that might suggest that strata are still the object to be defined) there are no counterexamples when $ n = 2, 3 $. We observe that in the proofs of \cite{abm}, \cite{bm} there is a non symplectic submanifold of the characteristic manifold that is not identified by the Treves conjecture stratification procedure and plays a crucial role in carrying the non real analytic wave front set. This submanifold has a foliation on the base of the cotangent bundle, i.e. there is a Gevrey (or analytic) wave front set propagation in the space variables. When $ n=3 $ this may no longer occur due to a dimensional constraint, but we think that e.g. the following example \begin{equation} \label{eq:nonT-dim-3} D_{1}^{2} + x_{1}^{2(r-1)} D_{y}^{2} + D_{2}^{2} + x_{2}^{2(q-1)} D_{y}^{2} + x_{2}^{2(p-1)} y^{2a} D_{y}^{2} , \end{equation} where $ a $, $ r $, $ p $, $ q \in {\mathbb{N}} $, $ 1 < r < p < q $, should be a candidate to violate the Treves conjecture in dimension $ 3 $, since the characteristic manifold has a foliation whose leaves are the $ \eta $ fibers of the cotangent bundle that, in our opinion, should carry analytic wave front set. To our knowledge no proof is known either of its analytic regularity or of its non-analytic hypoellipticity. We hope that the same techniques we are using for the operator $ P $ above can be suitably modified to prove that the above example is not analytic hypoelliptic. While for $ n \geq 3 $ there is no conjecture about the stratification to use in order to characterize analytic hypoellipticity, when $ n=2 $ the Treves conjecture seems to describe accurately the geometry of the characteristic variety and we think that it might actually be true (see \cite{monom} again). Furthermore the operator studied in this paper is a microlocal model for a class of operators in two variables. In this perspective, proving Theorem \ref{th:1} seems a necessary step to accomplish a proof of the conjecture when $ n=2 $. However no proof is available up to now. Finally a few words about the proof of the theorem. We just sketch the idea of the proof with a simplified notation. 
First we construct a formal solution of $ P(A(u)) = 0 $, of the form $$ A(u)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} u(x \rho^{\frac{s_{0}}{q}}, \rho) d\rho , $$ where $$ u(x, \rho) = \sum_{j\geq0} u_{j}(x, \rho). $$ This is accomplished by observing that if $ P(A(u)) = 0 $, then $ u $ must satisfy to $$ \sum_{j=0}^{2a} \frac{1}{\rho^{j}} P_{j}(x, D_{x}, D_{\rho}) u(x, \rho) = 0 , $$ where the $ P_{j} $ are differential operators of order $ 2a $. This is done in section 2. In section 3 we compute the $ u_{j} $ as solutions to countably many PDEs of the form $$ P_{0} u_{j} = - \sum_{k=1}^{\min\{j, 2a\}} \frac{1}{\rho^{k}} P_{k} u_{j-k} , $$ where, essentially, $ P_{0} = D_{x}^{2} + x^{2(q-1)} + D_{\rho}^{2a} $ on the half plane $ {\mathbb{R}}_{x} \times$ $ ]0, +\infty[ $. In order to compute $ u_{j} $ we have to compute an inverse of $ P_{0} $ in the half plane $ \rho > 0 $. This is accomplished by separating variables, using the eigenfunctions of the anharmonic oscillator $ D_{x}^{2} + x^{2(q-1)} $, and solving an ODE in $ \rho $ for $ \rho > 0 $. It can be seen that when solving the above mentioned ODE corresponding to higher eigenstates of the anharmonic oscillator we gain a decreasing rate of the form $ \mathscr{O}(\rho^{-j} e^{-c\rho}) $. When it comes to the fundamental eigenstate the situation is more involved and, to obtain a similar decreasing rate, one has to decouple the equation for $ u_{j} $, using the projector onto the fundamental eigenstate, $ \pi $, and its complement $ 1 - \pi $ (see \eqref{eq:tr1}, \eqref{eq:tr2}). In sections 4, 5, 6, 7 we derive weighted $ L^{2} $-type estimates for $ \pi u_{j} $, $ (1-\pi)u_{j} $ and hence for $ u_{j} $ (Theorem \ref{th:est}). The reason why we resorted to $ L^{2} $, and not $ L^{\infty} $, estimates is that we exploit the orthonormality of the eigenfunctions of the anharmonic oscillator, since there are no finite recurrence relations. In section 8 the pointwise estimates for the $ u_{j} $ are derived via the Sobolev immersion theorem (Theorem \ref{th:pest-t}). In order to turn the formal solution $ A(u) $ of the equation $ P A(u) = 0$ into a true solution, in section 9 we replace $ u $ with $ v $ where $$ v(x, \rho) = \sum_{j\geq 0} \psi_{j}(\rho) u_{j}(x, \rho), $$ where $ \psi_{j} $ denotes a suitable cutoff function whose precise definition is given in Lemma \ref{lemma:psij}. Then $ A(v) $ solves an equation of the form $ P A(v) = f $, for a function $ f $ in a Beurling class of order $ s_{0} $ (see Definition \ref{def:Bs} for the definition of such classes). From Theorem 3.1 of M\'etivier, \cite{metivier80}, arguing by contradiction, if $ P $ is Gevrey hypoelliptic of order less than $ s_{0} $, then $ A(v) $ belongs to the Beurling class of order $ s_{0} $. The purpose of section 10 is to obtain a contradiction to the fact that $ A(v) $ is in a Beurling class of order $ s_{0} $ by using the structure of $ A(v) $ given above. This is done by comparing the growth rate of its $ y $-Fourier transform as a member of the Beurling class of order $ s_{0} $ (see Lemma \ref{lemma:fourier}) as well as a function given by the above expression (see Lemma \ref{lemma:below}), when $ x $ is frozen at the origin. As a technical detail we mention that the use of the Fourier transform forces us to replace the Beurling class of order $ s_{0} $ with $ L^{2}({\mathbb{R}}_{y}) $ intersected with the global Beurling class of order $ s_{0} $ (see definition \ref{def:Bs}). 
To show that actually $ A(v) $ belongs to the latter class we use Lemma \ref{lemma:yd}. Finally we gathered in the Appendixes some results needed in the main body of the paper. Appendixes A and B present some basically known facts and fix the notation in our setting. Appendix C and D present some estimates that are essential for sections 5 and 6. We postponed them in order to allow the reader to better follow the deployment of the proof in those sections. \section{A Formal Solution} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \setcounter{remark}{0} First we look for a formal solution to $ P v = 0 $ by taking $ v $ of the form \begin{equation} \label{eq:Au} A(u)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r} u(x \rho^{\frac{s_{0}}{q}}, \rho) d\rho , \end{equation} where \begin{equation} \label{eq:s0} \frac{1}{s_{0}} = 1 - \frac{1}{a} \ \frac{q - 1}{q} , \end{equation} $ r $ is a complex number to be chosen later and $ u $ denotes a smooth function defined in $ {\mathbb{R}} \times {\mathbb{R}} $, with $ \supp u \subset \{ \rho > 0 \}$, and rapidly decreasing for $ \rho \to + \infty $. We have \begin{equation} \label{eq:D2A} D_{x}^{2} A(u)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} (D_{x}^{2} u)(\rho^{\frac{s_{0}}{q}} x, \rho) \ d\rho. \end{equation} \begin{equation} \label{eq:x2(q-1)} x^{2(q-1)} D_{y}^{2} A(u)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \Big [x^{2(q-1)} u(x, \rho)\Big]_{x = x \rho^{s_{0}/q}} \ d\rho. \end{equation} Here the notation $ x = x \rho^{s_{0}/q} $ means that the variable $ x $ inside the square brackets has to be replaced by the r.h.s. of the equation. Finally \begin{equation} \label{eq:x2(p-1)} y^{2a} D_{y}^{2} A(u)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 s_{0}} y^{2a} \Big [ u(x, \rho)\Big]_{x = x \rho^{s_{0}/q}} \ d\rho. \end{equation} Since $$ y e^{i y \rho^{s_{0}}} = \frac{1}{i s_{0} \rho^{s_{0} - 1}} \partial_{\rho} e^{i y \rho^{s_{0}}} , $$ integrating by parts we may rewrite the above expression as \begin{multline*} y^{2a} D_{y}^{2} A(u)(x, y) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \left( - \partial_{\rho} \frac{1}{i s_{0} \rho^{s_{0} - 1}}\right)^{2a} \rho^{r + 2 s_{0} \left(\frac{q-1}{q} + \frac{1}{q}\right)} \Big [ u(x, \rho) \Big]_{x = x\rho^{s_{0}/q}} \ d\rho. \end{multline*} Remark that \begin{equation} \label{eq:gamma0} \left(\partial_{\rho} \frac{1}{i s_{0} \rho^{s_{0} - 1}}\right)^{2a} = \sum_{h=0}^{2a} \gamma_{2a, h} \frac{1}{\rho^{2a s_{0} - h}} \partial_{\rho}^{h}, \end{equation} where the $ \gamma_{2a, h} $ are constants satisfying \begin{equation} \label{eq:gamma} | \gamma_{2a, h} | \leq C_{\gamma}^{2a+h} (2a-h)! , \qquad \gamma_{2a, 2a} = \left(\frac{i}{s_{0}}\right)^{2a}, \ \gamma_{2a, 0} = 1. \end{equation} Hence, writing for the sake of simplicity, $ \gamma_{h} $ instead of $ \gamma_{2a, h} $, we obtain \begin{multline*} y^{2a} D_{y}^{2} A(u)(x, y) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \sum_{h=0}^{2a} \gamma_{h} \frac{1}{\rho^{2as_{0} - h}} \partial_{\rho}^{h} \left( \rho^{r + 2 s_{0} \left(\frac{q-1}{q} + \frac{1}{q}\right)} \Big [u(x, \rho)\Big]_{x = x\rho^{s_{0}/q}} \right) \ d\rho \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 s_{0} \left( \frac{q-1}{q} + \frac{1}{q} \right)\right)_{h-\alpha} \\ \cdot \left. 
\rho^{-2as_{0} + h + r + 2s_{0} \left( \frac{q-1}{q} + \frac{1}{q}\right) - h + \alpha} \partial_{\rho}^{\alpha} \Big [ u(x, \rho)\Big]_{x = x\rho^{s_{0}/q}} \right) \ d\rho , \end{multline*} where $ (\lambda)_{\beta} $ denotes the Pochhammer symbol, defined by \begin{equation} \label{eq:pochhammer} (\lambda)_{\beta} = \lambda (\lambda -1) \cdots (\lambda - \beta + 1), \qquad (\lambda)_{0} = 1, \quad \lambda \in {\mathbb{C}}. \end{equation} Moreover, since $$ a (1 - s_{0}) + s_{0} \frac{q-1}{q} = a\left( 1 - s_{0} \left( 1 - \frac{1}{a} \ \frac{q-1}{q}\right) \right) = 0, $$ we obtain that \begin{multline*} y^{2a} D_{y}^{2} A(u)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \\ \cdot \rho^{r + 2 \frac{s_{0}}{q} + \alpha - 2a} \partial_{\rho}^{\alpha} \Big [u(x, \rho)\Big]_{x = x\rho^{s_{0}/q}} \ d\rho . \end{multline*} Since \begin{equation} \label{eq:drhobin} \partial_{\rho}^{\alpha} v(\rho^{\theta}x, \rho) = \left [ \left( \frac{\theta}{\rho} x \partial_{x} + \partial_{\rho} \right)^{\alpha} v(x, \rho) \right]_{x = x \rho^{\theta}} , \end{equation} we have \begin{multline*} y^{2a} D_{y}^{2} A(u)(x, y) = \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \\ \cdot \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q} + \alpha - 2a} \left[ \left(\frac{s_{0}}{q \rho} x \partial_{x} + \partial_{\rho} \right)^{\alpha} u(x, \rho) \right]_{x = x\rho^{s_{0}/q}} \ d\rho . \end{multline*} Because of the identity $$ \left(\frac{a}{\rho} + \partial_{\rho}\right)^{\alpha} = \rho^{-a} \partial_{\rho}^{\alpha} \rho^{\alpha} = \sum_{k=0}^{\alpha}\binom{\alpha}{k} (a)_{k} \rho^{-k} \partial_{\rho}^{\alpha - k} , $$ we deduce that \begin{multline} \label{eq:x2(p-1):2} y^{2a} D_{y}^{2} A(u)(x, y) = \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \\ \cdot \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q} + \alpha - 2a} \left[ \sum_{k=0}^{\alpha} \binom{\alpha}{k} \left( \frac{s_{0}}{q} x \partial_{x} \right)_{k} \rho^{-k} \partial_{\rho}^{\alpha-k} u(x, \rho) \right]_{x = x \rho^{s_{0}/q}} d\rho, \end{multline} where, in analogy with \eqref{eq:pochhammer}, we used the notation $$ \left( \frac{s_{0}}{q} x \partial_{x} \right)_{k} = \frac{s_{0}}{q} x \partial_{x} \left( \frac{s_{0}}{q} x \partial_{x} - 1 \right) \cdots \left( \frac{s_{0}}{q} x \partial_{x} - k + 1 \right) . $$ An inspection of \eqref{eq:x2(p-1):2} readily gives that the differential operator above is a polynomial of degree $ k $ in $ x \partial_{x} $ with uniformly bounded coefficients. We therefore may write it in the form $$ \left( \frac{s_{0}}{q} x \partial_{x} \right)_{k} = p_{k}(x \partial_{x}) = \sum_{j=1}^{k} b_{k,j} (x \partial_{x})^{j}, $$ where $$ b_{k, k} = \left( \frac{s_{0}}{q} \right)^{k}, \qquad b_{k, 0} = \frac{s_{0}}{q} (-1)^{k-1}(k-1)! 
$$ Hence \begin{multline} \label{eq:x2(p-1):3} y^{2a} D_{y}^{2} A(u)(x, y) = \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \\ \cdot \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q} + \alpha - 2a} \left[ \sum_{k=0}^{\alpha} \binom{\alpha}{k} p_{k}(x\partial_{x}) \rho^{-k} \partial_{\rho}^{\alpha-k} u(x, \rho) \right]_{x = x \rho^{s_{0}/q}} d\rho, \end{multline} where $ p_{0}(x\partial_{x}) = 1 $ by convention. From \eqref{eq:D2A}, \eqref{eq:x2(q-1)} and the above we may then write \begin{multline} \label{eq:PA} P A(u) (x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ D_{x}^{2} + x^{2(q-1)} \right. \\ + \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \sum_{k=0}^{\alpha} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \binom{\alpha}{k} \\ \left. p_{k}(x\partial_{x}) \rho^{\alpha - k -2a} \partial_{\rho}^{\alpha-k} u(x, \rho) \right]_{x = x \rho^{s_{0}/q}} d\rho. \end{multline} Let us consider the sums on the r.h.s. of the above expression. First we call $ \ell = \alpha - k $ and then rewrite the sums setting $ \ell = 2a - j$. We obtain \begin{multline*} P A(u) (x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ D_{x}^{2} + x^{2(q-1)} \right. \\ + \sum_{h=0}^{2a} \sum_{\alpha=0}^{h} \sum_{j=2a-\alpha}^{2a} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \binom{\alpha}{2a-j} \\ \left. p_{\alpha+j-2a}(x\partial_{x}) \rho^{-j} \partial_{\rho}^{2a-j} u(x, \rho) \right]_{x = x \rho^{s_{0}/q}} d\rho. \end{multline*} We may then interchange the sums in $ \alpha $ and $ j $ and after that interchange the sums in $ h $ and $ j $ to get \begin{multline} \label{eq:PA:2} P A(u) (x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ D_{x}^{2} + x^{2(q-1)} \right. \\ + \sum_{j=0}^{2a} \rho^{-j} \left( \sum_{h=2a-j}^{2a} \sum_{\alpha=2a-j}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \right. \\ \left. \left. \vphantom{\sum_{h=2a-j}^{2a}} \binom{\alpha}{2a-j} p_{\alpha+j-2a}(x\partial_{x}) \right) \partial_{\rho}^{2a-j} u(x, \rho) \right]_{x = x \rho^{s_{0}/q}} d\rho. \end{multline} Denote by $ P_{0}(x, D_{x}, D_{\rho}) $ the differential operator corresponding to $ j=0 $: \begin{equation} \label{eq:P0} P_{0}(x, D_{x}, D_{\rho}) = D_{x}^{2} + x^{2(q-1)} + s_{0}^{-2a} D_{\rho}^{2a} , \end{equation} the differential operator induced by $ P $---modulo some normalization---on the $ \eta $ fibers. Furthermore set, for $ j \geq 1 $, \begin{align} \label{eq:Pj} P_{j}(x, \partial_{x}, \partial_{\rho})& = \sum_{h=2a-j}^{2a} \sum_{\alpha=2a-j}^{h} \binom{h}{\alpha} \gamma_{h} \left(r + 2 \frac{s_{0}}{q} + 2 s_{0} \frac{q-1}{q} \right)_{h-\alpha} \notag \\ & \phantom{=}\cdot \binom{\alpha}{2a-j} p_{\alpha+j-2a}(x\partial_{x}) \partial_{\rho}^{2a-j} \\[5pt] & = \tilde{P}_{j}(x\partial_{x}) \partial_{\rho}^{2a-j} , \notag \end{align} with $ \ord(\tilde{P}_{j}) = j $. We also have, see also \eqref{eq:gamma0}, \begin{multline} \label{eq:P1} P_{1}(x, \partial_{x}, \partial_{\rho}) = \\ \left( 2a \gamma_{2a} \left(p_{1}(x\partial_{x}) + r + \frac{2s_{0}}{q} + 2 s_{0} \frac{q-1}{q}\right) +\gamma_{2a-1} \right) \partial_{\rho}^{2a-1}. 
\end{multline} To keep the notation simple we write \begin{equation} \label{eq:Pjtilda} \tilde{P}_{j}(x \partial_{x}) = \sum_{\ell=0}^{j} p_{j\ell} (x \partial_{x})^{\ell} , \end{equation} for $ j = 1, \ldots, 2a $ and suitable numbers $ p_{j\ell} $. Equation \eqref{eq:PA:2} can thus be written as \begin{multline} \label{eq:PA:3} P A(u) (x, y) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[\sum_{j=0}^{2a} \frac{1}{\rho^{j}} P_{j}(x, D_{x}, D_{\rho}) u(x, \rho)\right]_{x = x \rho^{s_{0} /q}} d\rho . \end{multline} \begin{remark} \label{rem:0} The whole computation has been carried out for the mo\-del in \eqref{eq:P}. We might have allowed variable coefficients in \eqref{eq:P}, at least to a certain extent, which would have resulted in an infinite sum in \eqref{eq:PA:3}. The rest of the proof should proceed essentially with minor modifications. We however preferred to stick to the proposed model for a greater clarity in the exposition, with a very tiny degree of generality lost. \end{remark} Our next step is to formally solve the equation \begin{equation} \label{eq:transp} \sum_{j=0}^{2a} \frac{1}{\rho^{j}} P_{j}(x, D_{x}, D_{\rho}) u(x, \rho) = 0. \end{equation} \section{Computing the Formal Solution} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \setcounter{remark}{0} We start this section by discussing how to solve the prototype equation \begin{equation} \label{eq:P0u=f} P_{0}(x, D_{x}, D_{\rho}) u(x, \rho) = f(x, \rho), \end{equation} where $ P_{0} $ is given by \eqref{eq:P0}. As a preliminary result we consider the kernel of the operator \begin{equation} \label{eq:Qlam} Q_{\mu}(x, D) = D^{2} + x^{2(q-1)} - \mu , \end{equation} $ \mu $ being a real parameter. The following proposition is well known (see e.g. \cite{berezin}.) \begin{proposition} \label{prop:ker-Q} There exist countably many positive numbers, $ \mu_{j} $, $ \mu_{j+1} > \mu_{j} $, $ j \geq 0 $, such that $$ \ker Q_{\mu_{j}} \neq \{ 0 \}, $$ and actually $ \dim \ker Q_{\mu_{j}} = 1 $. Here when we write $ \ker Q_{\mu_{j}} $ we mean the kernel of the operator $ Q_{\mu_{j}} $ as an unbounded operator in $ L^{2}({\mathbb{R}}) $. \end{proposition} Let us denote by $ \mu_{j} $, $ \phi_{j}(x) $, $ j \geq 0 $, the eigenvalues and the eigenfunctions of the operator $ Q $, defined in \eqref{eq:Q}, constructed in the above proposition. They are an orthonormal basis in $ L^{2}({\mathbb{R}}) $. \bigskip Consider now the equation $$ \left( D^{2} + x^{2(q-1)} + s_{0}^{-2a} D_{\rho}^{2a}\right) u = f, $$ with $ f \in L^{2} $. We want to find an expression for $ u \in L^{2} $ intersected with the natural domain of $ P_{0} $. We note explicitly that the operator $ P_{0} $ has no tempered distributions in its kernel if we consider it in the $ (x, \rho) $-plane. We are considering it in a half plane, and hence we can find non trivial distributions in its kernel. Write $$ u(x, \rho) = \sum_{k\geq 0} u_{(k)}(\rho) \phi_{k}(x), $$ where $ u_{(k)}(\rho) = \langle u, \phi_{k} \rangle $. Here $ \langle \cdot , \cdot \rangle $ denotes the (complex) scalar product in $ L^{2}({\mathbb{R}}) $, in the only variable $ x $. 
Hence, with an analogous expansion of $ f $, \begin{multline*} P_{0}(x, D, D_{\rho}) u = \sum_{k \geq 0} \left( u_{(k)} Q \phi_{k} + \phi_{k} s_{0}^{-2a} D_{\rho}^{2a} u_{(k)} \right) \\ = \sum_{k \geq 0} \phi_{k} \left( u_{(k)} \mu_{k} + s_{0}^{-2a} D_{\rho}^{2a} u_{(k)} \right) = \sum_{k\geq 0} f_{k} \phi_{k} , \end{multline*} where $ f_{k} = \langle f, \phi_{k} \rangle $. Identifying the coefficients of $ \phi_{k} $ in the last equality above we may find the $ u_{(k)} $ as the solution of the differential equations \begin{equation} \label{eq:ode-k} \left( \partial_{\rho}^{2a} + (-1)^{a} s_{0}^{2a} \mu_{k} \right) u_{(k)} = (-1)^{a} s_{0}^{2a} f_{k}. \end{equation} Let us denote again by $ f_{k} $ the r.h.s. of \eqref{eq:ode-k}, the difference being just a multiplicative constant. For $ j = 1, \ldots, 2a $, denote by $ \mu_{k j} $ the $ 2a $-roots of $ (-1)^{a+1} s_{0}^{2a} \mu_{k} $. Assume now that $ k \geq 1 $. Applying the result of Appendix A, we find that \begin{equation} \label{eq:u_{k}} u_{(k)}(\rho) = \sum_{j=1}^{2a} A_{kj} I_{kj}(f_{k})(\rho) , \end{equation} where the $ A_{kj} $ are defined as in \eqref{eq:Al} and \begin{multline} \label{eq:Ikj} I_{kj}(f_{k})(\rho) \\ = - \sgn\left( \re \mu_{kj}\right) \int_{{\mathbb{R}}} e^{\mu_{kj} (\rho - \sigma)} H\left(-\sgn\left(\re \mu_{kj}\right) (\rho - \sigma)\right) \\ \cdot H(\sigma -R) f_{k}(\sigma) d\sigma , \end{multline} where $ H $ denotes the Heaviside function and $ R $ is a positive number, to be chosen later depending on parameters that will be specified in the sequel. We note that \eqref{eq:Ikj} defines a function solving \eqref{eq:ode-k} for $ \rho > R $. Consider the case $ k = 0 $. We define $ u_{(0)} $ in a slightly different way. The motivation for such a distinction will be clear in the subsequent sections. Consider $ \mu_{0} $, the smallest of the values $ \mu_{0} < \mu_{1} < \cdots $ defined in Proposition \ref{prop:ker-Q}, and denote by $ \mu_{0i} $, $ i = 1, \ldots, 2a $, the $ 2a $-roots of $$ (-1)^{a+1} s_{0}^{2a} \mu_{0}. $$ Then define $\tilde{\mu}_{0} $ by \begin{equation} \label{eq:mitlda0} \tilde{\mu}_{0} = \re \mu_{0 i^{*}} , \end{equation} where $ \mu_{0 i^{*}} $ is a $ 2a $-root of $ (-1)^{a+1} s_{0}^{2a} \mu_{0} $ with maximum negative real part. We remark explicitly that, if $ a > 1 $, there are always two (complex conjugate) roots, $ \mu_{0i_{1}} $, $ \mu_{0i_{2}} $, of maximal negative real part, since we are taking even roots. Of course the definition of $ \tilde{\mu}_{0} $ is independent of the choice of the root. The reason for the above choice of $ \tilde{\mu}_{0} $, which will have important implications in the sequel, is that the components along the higher eigenfunctions enjoy better decay rates than the component along the fundamental eigenfunction. Then we define \begin{align} \label{eq:u0} u_{(0)}(\rho) &= - \sum_{\begin{subarray}{c} i \in \{1, \ldots, 2a\} \\ \re \mu_{0i} > 0 \\ \text{ or } \re \mu_{0i} = \tilde{\mu}_{0} \end{subarray}} A_{0i} \int_{\rho}^{+\infty} e^{\mu_{0i} (\rho - \sigma)} H(\sigma - R) f_{0}(\sigma) d\sigma \\ &\phantom{=}\ + \sum_{\begin{subarray}{c} i \in \{1, \ldots, 2a\} \\ \re \mu_{0i} < \tilde{\mu}_{0} \end{subarray}} A_{0i} \int_{R}^{\rho} e^{\mu_{0i} (\rho - \sigma)} H(\sigma - R) f_{0}(\sigma) d\sigma . \notag \end{align} We point out that if $ f_{0}(\sigma) = \mathscr{O}(\sigma^{-1-\delta} e^{\tilde{\mu}_{0}\sigma}) $, with $ \delta > 0 $, the integrals where $ \re \mu_{0i} = \tilde{\mu}_{0} $ are well defined.
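To fix ideas about the choice \eqref{eq:mitlda0} we consider the two lowest values of $ a $. If $ a = 1 $ the $ 2a $-roots of $ (-1)^{a+1} s_{0}^{2a} \mu_{0} = s_{0}^{2} \mu_{0} $ are $ \pm s_{0} \mu_{0}^{1/2} $, so that $ \tilde{\mu}_{0} = - s_{0} \mu_{0}^{1/2} $ is attained by a single real root. If $ a = 2 $ the four roots of $ - s_{0}^{4} \mu_{0} $ are $ s_{0} \mu_{0}^{1/4} e^{i \pi (2m+1)/4} $, $ m = 0, \ldots, 3 $; the two roots with negative real part are complex conjugate of each other and $$ \tilde{\mu}_{0} = - \frac{s_{0} \mu_{0}^{1/4}}{\sqrt{2}} , $$ in accordance with the remark above.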
It is convenient to harmonize the notation for \eqref{eq:u_{k}}, \eqref{eq:Ikj} and \eqref{eq:u0}. To this end we point out that, for $ k > 0 $, $ \re \mu_{ki} > 0 $ implies that $ \re \mu_{ki} - \tilde{\mu}_{0} > 0 $, and that if $ \re \mu_{ki} < 0 $ then $ \re \mu_{ki} - \tilde{\mu}_{0} < \re \mu_{0i} - \tilde{\mu}_{0} \leq 0 $, due to the fact that the sequence $ (\mu_{k})_{k \in {\mathbb{N}} \cup \{0\}} $ is strictly increasing. Since these inequalities for $ k > 0 $ can be perturbed it is clear that there exists a small positive number, say $ \epsilon_{\mu} $, such that for any $ k \geq 0 $ we may write that $$ u_{(k)}(\rho) = \sum_{j=1}^{2a} A_{kj} I_{kj}(f_{k})(\rho) , $$ with \begin{multline} \label{eq:Ikjgl} I_{kj}(f_{k})(\rho) = - \sgn\left( \re \mu_{kj} - \tilde{\mu}_{0} + \epsilon_{\mu} \right) \int_{{\mathbb{R}}} e^{\mu_{kj} (\rho - \sigma)} \\[5pt] \cdot H\left(-\sgn\left(\re \mu_{kj} - \tilde{\mu}_{0} + \epsilon_{\mu}\right) (\rho - \sigma)\right) \cdot H(\sigma - R) f_{k}(\sigma) d\sigma , \end{multline} where the $ A_{0i} $ are chosen according to the above prescription. \bigskip Thus we obtain an expression for $ u $: \begin{align} \label{eq:P0inv} u(x, \rho) & = P_{0}^{-1} (f) \\ & = \sum_{k\geq 0} \left( \sum_{j=1}^{2a} A_{kj} I_{kj}(f_{k})(\rho) \right) \phi_{k}(x) \notag \\ & = \sum_{k\geq 0} E_{k}(f_{k})(\rho) \phi_{k}(x), \notag \end{align} where for the sake of brevity we used the notation \begin{equation} \label{eq:Ek} E_{k}(f_{k})(\rho) = \sum_{j=1}^{2a} A_{kj} I_{kj}(f_{k})(\rho) . \end{equation} In order to find a formal solution to $$ P(x, y, D_{x}, D_{y}) A(u) = 0 , $$ we look for a function $ u $ of the form \begin{equation} \label{eq:uj} u(x, \rho) = \sum_{j \geq 0} u_{j}(x, \rho). \end{equation} such that, when we plug it into \eqref{eq:transp}, it gives \begin{equation} \label{eq:transp1} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k}(x, D_{x}, D_{\rho}) u(x, \rho) = \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k}(x, D_{x}, D_{\rho}) \sum_{j \geq 0} u_{j}(x, \rho) = 0. \end{equation} It seems natural to split \eqref{eq:transp1} according to \begin{equation} \label{eq:nattransp} P_{0} u_{j} = - \sum_{k=1}^{\min\{j, 2a\}} \frac{1}{\rho^{k}} P_{k} u_{j-k}, \qquad j \geq 0 . \end{equation} (The sum is understood to be zero if its upper index is zero). Consider the first equation to be solved: \begin{equation} \label{eq:P0u0} P_{0}(x, D, D_{\rho}) u_{0}(x, \rho) = 0. \end{equation} Arguing as we just did, we may take, see \eqref{eq:mitlda0}, \begin{equation} \label{eq:u0-} u_{0}(x, \rho) = \phi_{0}(x) e^{\mu_{0i^{*}} \rho}. \end{equation} We would like to compute the $ u_{j} $, $ j \geq 1 $, in such a way that \eqref{eq:nattransp} is satisfied and that $ u_{j} = \mathscr{O}(\rho^{-j} e^{\tilde{\mu}_{0} \rho}) $ (actually, for technical reasons, we prove in Theorem \ref{th:pest-t} a slightly weaker estimate). Solving \eqref{eq:nattransp}, using the spectral decomposition $ \phi_{k}(x) $, by \eqref{eq:ode-k}, boils down to solving an ode of the form $$ (\partial_{\rho}^{2a} + (-1)^{a} s_{0}^{2a} \mu_{k})w = g_{k} , $$ where $ g_{k} = \mathscr{O}(\rho^{-j} e^{\tilde{\mu}_{0} \rho}) $ and $ \tilde{\mu}_{0} $ is defined in \eqref{eq:mitlda0}. If $ k \geq 1 $ it is easy to see that $ w = \mathscr{O}(\rho^{-j} e^{\tilde{\mu}_{0} \rho}) $, since $ \tilde{\mu}_{0} $ differs from the real part of the roots of the characteristic equation. 
On the other hand if $ k = 0 $ this is not true anymore and we get that $ w = \mathscr{O}(\rho^{-j+1} e^{\tilde{\mu}_{0} \rho}) $, since $ \tilde{\mu}_{0} $ is the real part of two roots of the characteristic equation. This prevents us from proving that $ u_{j} = \mathscr{O}(\rho^{-j} e^{\tilde{\mu}_{0} \rho}) $. A way around this is to reshuffle the transport equations using the projection onto the ground state function defined as \begin{definition} \label{def:pi} Denote by $ \pi $ the orthogonal projection onto $$ L^{2}({\mathbb{R}}_{\rho}) \otimes [\phi_{0}] , $$ whose action is described by $$ \pi(f)(x, \rho) = \langle f(\cdot, \rho), \phi_{0} \rangle_{L^{2}({\mathbb{R}}_{x})} \phi_{0}(x) . $$ \end{definition} \begin{proposition} \label{prop:pi-and-P0} We have that \begin{equation} \label{eq:pP} (1 - \pi) P_{0} = P_{0} (1 - \pi), \qquad \pi P_{0} = P_{0} \pi , \end{equation} where $ P_{0} $ is given by \eqref{eq:P0}. \end{proposition} \begin{proof} Let us prove the first relation. The second has the same proof. Compute first the l.h.s. % \begin{align*} (1 - \pi) P_{0} v &= (1 - \pi) P_{0} \sum_{k\geq 0} v_{k} \phi_{k} \\ &= (1 - \pi) \sum_{k \geq 0} \left( v_{k} Q \phi_{k} + s_{0}^{-2a} D_{\rho}^{2a} v_{k} \phi_{k}\right) \\ &= (1 - \pi) \sum_{k \geq 0} \left( \mu_{k} v_{k} + s_{0}^{-2a} D_{\rho}^{2a} v_{k} \right)\phi_{k} \\ &= \sum_{k \geq 1} \left( \mu_{k} v_{k} + s_{0}^{-2a} D_{\rho}^{2a} v_{k} \right)\phi_{k} . \end{align*} On the other hand \begin{align*} P_{0} (1 - \pi) v &= P_{0} (1 - \pi) \sum_{k \geq % 0} v_{k} \phi_{k} \\ &= P_{0} \sum_{k \geq 1} v_{k} \phi_{k} \\ &= \sum_{k \geq 1} \left( v_{k} Q \phi_{k} + \phi_{k} s_{0}^{-2a} D_{\rho}^{2a} v_{k} \right) \\ &= \sum_{k \geq 1} \left( \mu_{k} v_{k} + s_{0}^{-2a} D_{\rho}^{2a} v_{k} \right) \phi_{k} . \end{align*} This ends the proof. \end{proof} Next we are going to choose $ r $ in the definition \eqref{eq:Au} of $ A(u) $. This is necessary in order to bootstrap a formal recursive calculation of all the $ u_{j} $. \begin{proposition} \label{prop:r} It is possible to choose $ r $ in \eqref{eq:Au} so that \begin{equation} \label{eq:P1u0} \pi P_{1} u_{0} = 0. \end{equation} \end{proposition} \begin{proof} It is just a computation. By \eqref{eq:P1} the above condition can be written as \begin{multline*} \pi P_{1} u_{0} = \pi \left( \left[ \alpha x \partial_{x} + \beta + 2a \gamma_{2a} r\right] \partial_{\rho}^{2a-1} e^{\mu_{0i^{*}}\rho} \phi_{0}(x) \right) \\ = (\partial_{\rho}^{2a-1} e^{\mu_{0i^{*}}\rho} ) \left[ \langle \alpha x \partial_{x} \phi_{0}, \phi_{0} \rangle + \beta + 2a \gamma_{2a} r \right]\phi_{0}. \end{multline*} Here we used the fact that a function of $ \rho $ only commutes with the projectors and that $ \| \phi_{0} \| = 1 $. Finally it is clear that the quantity in square brackets can be made zero by suitably choosing $ r $, since $ \gamma_{2a} \neq 0 $ by \eqref{eq:P1}, \eqref{eq:gamma}. \end{proof} In order to get the optimal decreasing rate $ u_{j} = \mathscr{O}(\rho^{-j} e^{\tilde{\mu}_{0} \rho}) $, we split the equation \eqref{eq:transp1} into two sets of equations using the projection $ \pi $. 
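The phenomenon motivating this splitting, namely the loss of one power of $ \rho $ when the forcing term decays exactly like $ e^{\tilde{\mu}_{0} \rho} $, can be seen on an elementary first-order model, which we record for illustration only. Let $ g(\rho) = \rho^{-j} e^{\tilde{\mu}_{0} \rho} $, $ j \geq 2 $, and let $ \mu \in {\mathbb{C}} $ with $ \re \mu \geq \tilde{\mu}_{0} $. The function $$ w(\rho) = - \int_{\rho}^{+\infty} e^{\mu (\rho - \sigma)} g(\sigma) d\sigma $$ solves $ (\partial_{\rho} - \mu) w = g $ and, for $ \rho > 0 $, $$ |w(\rho)| \leq e^{\tilde{\mu}_{0} \rho} \int_{\rho}^{+\infty} e^{(\re \mu - \tilde{\mu}_{0})(\rho - \sigma)} \sigma^{-j} d\sigma \leq \begin{cases} (\re \mu - \tilde{\mu}_{0})^{-1} \rho^{-j} e^{\tilde{\mu}_{0} \rho} , & \re \mu > \tilde{\mu}_{0} , \\ (j-1)^{-1} \rho^{-j+1} e^{\tilde{\mu}_{0} \rho} , & \re \mu = \tilde{\mu}_{0} , \end{cases} $$ so that exactly one power of $ \rho $ is lost when $ \tilde{\mu}_{0} $ coincides with the real part of a characteristic root, and none is lost otherwise.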
First we define $ u_{0} $ by \eqref{eq:u0} and then for $ j \geq 1$ we solve recursively the equations \begin{subequations} \begin{gather} \label{eq:tr1} (1 - \pi) P_{0} u_{j} = P_{0} (1 - \pi) u_{j} = -\sum_{k=1}^{\min\{j, 2a\}} \frac{1}{\rho^{k} } (1 - \pi) P_{k} u_{j-k} \\ % \label{eq:tr2} \begin{align} \pi P_{0} u_{j} &= P_{0} \pi u_{j} = - \frac{1}{\rho} \pi P_{1} (1 - \pi) u_{j} - \frac{1}{\rho} \pi P_{1} \pi u_{j-1} \\ &\phantom{= =} - \sum_{k=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \pi P_{k+1} u_{j-k} . \notag \end{align} \end{gather} \end{subequations} We point out that the idea of the above splitting of the set of the transport equations is due to M\'etivier, who first used it in \cite{metivier81}. The motivation for the splitting is that one has to match the decreasing rate of the $ u_{j} $ for $ \rho $ large. Moreover the decay rate for large $ \rho $ of the $ (1-\pi)u_{j} $ will turn out to be slightly better than that of the $ \pi u_{j} $, due to the choice of $ \tilde{\mu}_{0} $ in \eqref{eq:mitlda0}. \begin{proposition} \label{prop:equiv} The set of equations \eqref{eq:tr1}, \eqref{eq:tr2} for $ j \geq 1 $ as well as the equation $ P_{0} u_{0} = 0 $ are formally equivalent to \eqref{eq:transp1}. \end{proposition} \begin{proof} We have, for $ j \geq 1 $, \begin{align} \label{eq:P0uj} P_{0} u_{j} &= P_{0}(1 - \pi) u_{j} + P_{0} \pi u_{j} \\ &= - \sum_{\ell=1}^{\min\{j, 2a\}} \frac{1}{\rho^{\ell} } (1 - \pi) P_{\ell} u_{j-\ell} - \frac{1}{\rho} \pi P_{1} (1 - \pi) u_{j} \notag \\ & \phantom{=} \ - \frac{1}{\rho} \pi P_{1} \pi u_{j-1} - \sum_{\ell=1}^{\min\{j, 2a -1\}} \frac{1}{\rho^{\ell+1}} \pi P_{\ell+1} u_{j-\ell} \notag \end{align} Since $$ \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k} \sum_{j \geq 0} u_{j} = 0 $$ iff $$ \sum_{j \geq 1} P_{0} u_{j} + \sum_{\substack{j \geq 0\\ k \geq 1}} \frac{1}{\rho^{k}} P_{k} u_{j} = 0 , $$ from \eqref{eq:P0uj} we get \begin{align*} \sum_{j \geq 1} P_{0} u_{j} &= - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a\}} \frac{1}{\rho^{\ell} } (1 - \pi) P_{\ell} u_{j-\ell} - \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} (1 - \pi) u_{j} \\ & \phantom{=} \ - \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} \pi u_{j-1} - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1}} \pi P_{\ell+1} u_{j-\ell} . \end{align*} The third term on the r.h.s. above is written as \begin{align*} \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} \pi u_{j-1} &= \frac{1}{\rho} \pi P_{1} \pi u_{0} + \frac{1}{\rho} \sum_{j \geq 2} \pi P_{1} \pi u_{j-1} \\ &= \frac{1}{\rho} \pi P_{1} u_{0} + \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} \pi u_{j} . \end{align*} Hence \begin{align*} \sum_{j \geq 1} P_{0} u_{j} &= - \frac{1}{\rho} (1 - \pi) P_{1} u_{0} - \sum_{j \geq 2} \sum_{\ell=1}^{\min\{j, 2a\}} \frac{1}{\rho^{\ell} } (1 - \pi) P_{\ell} u_{j-\ell} \\ &\phantom{ = }\ - \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} (1 - \pi) u_{j} - \frac{1}{\rho} \pi P_{1} u_{0} \\ &\phantom{ = }\ - \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} \pi u_{j} - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1}} \pi P_{\ell+1} u_{j-\ell} \\ &= - \frac{1}{\rho} P_{1} u_{0} - \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} u_{j} - \sum_{j \geq 2} \sum_{\ell=1}^{\min\{j, 2a\}} \frac{1}{\rho^{\ell} } (1 - \pi) P_{\ell} u_{j-\ell} \\ &\phantom{ = }\ - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1}} \pi P_{\ell+1} u_{j-\ell} . 
\end{align*} The part of the third term in the last expression above corresponding to $ \ell = 1 $ is $$ \sum_{j \geq 2} \frac{1}{\rho} (1 - \pi) P_{1} u_{j-1} = \frac{1}{\rho} \sum_{j \geq 1} (1 - \pi) P_{1} u_{j} , $$ so that \begin{align*} \sum_{j \geq 1} P_{0} u_{j} &= - \frac{1}{\rho} P_{1} u_{0} - \frac{1}{\rho} \sum_{j \geq 1} \pi P_{1} u_{j} - \frac{1}{\rho} \sum_{j \geq 1} (1 - \pi) P_{1} u_{j} \\ &\phantom{=}\ - \sum_{j \geq 2} \sum_{\ell=2}^{\min\{j, 2a\}} \frac{1}{\rho^{\ell} } (1 - \pi) P_{\ell} u_{j-\ell} \\ &\phantom{=}\ - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1}} \pi P_{\ell+1} u_{j-\ell} . \end{align*} Then \begin{align*} \sum_{j \geq 1} P_{0} u_{j} &= - \frac{1}{\rho} P_{1} u_{0} - \frac{1}{\rho} \sum_{j \geq 1} P_{1} u_{j} - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1}} \pi P_{\ell+1} u_{j-\ell} \\ &\phantom{=}\ - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j+1, 2a\}-1} \frac{1}{\rho^{\ell+1} } (1 - \pi) P_{\ell+1} u_{j-\ell}. \end{align*} And finally, since $ \min\{j+1, 2a\}-1 = \min\{j, 2a-1\} $, \begin{align*} \sum_{j \geq 1} P_{0} u_{j} &= - \frac{1}{\rho} \sum_{j \geq 0} P_{1} u_{j} - \sum_{j \geq 1} \sum_{\ell=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1} } P_{\ell+1} u_{j-\ell} \\ &= - \frac{1}{\rho} P_{1} u_{0} - \sum_{j \geq 1} \sum_{\ell=0}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1} } P_{\ell+1} u_{j-\ell} \\ &= - \sum_{j \geq 0} \sum_{\ell=0}^{\min\{j, 2a-1\}} \frac{1}{\rho^{\ell+1} } P_{\ell+1} u_{j-\ell} . \end{align*} Since $ P_{0} u_{0} = 0 $ we have $$ \sum_{j \geq 0} P_{0} u_{j} + \sum_{j \geq 0} \sum_{\ell=0}^{\min\{j+1, 2a\}-1} \frac{1}{\rho^{\ell+1} } P_{\ell+1} u_{j-\ell} = 0. $$ Changing the indices from $ (j, \ell) $ to $ (t, \ell) $, $ \ell \geq 0 $, $ j-\ell = t $, we conclude $$ \sum_{t \geq 0} \sum_{\ell=0}^{2a} \frac{1}{\rho^{\ell}} P_{\ell} u_{t} = 0. $$ \end{proof} The equations in \eqref{eq:tr1} and \eqref{eq:tr2} can be solved, in principle. We must however make sure that the solutions thus obtained satisfy estimates allowing us to turn the formal solution into an actual one. Since the degeneration of the operator coefficients is higher than quadratic, there is no possibility of using the three-term recurrence relations valid for the quadratic case or in certain special cases, as in \cite{metivier81} and \cite{chinni}. This leads us to obtain estimates directly by inspecting the form of the $ u_{j} $. This technique, although more involved than the classical one, seems however promising for more general (and generic) cases in two variables. This will be the object of the next section. \section{Sobolev Type Estimates of the $ u_{j} $} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} In this section we deduce estimates of the functions $ u_{j} $ solving \eqref{eq:tr1}, \eqref{eq:tr2} in the region $ \rho > \textrm{Const}\ j $. As can be seen from \eqref{eq:P0inv}, the functions $ u_{j} $, or rather their projections, are computed as infinite expansions in the eigenfunctions of an anharmonic oscillator. It is then natural to use the $ L^{2} $ norms of the $ u_{j} $ and of their derivatives, to take advantage of the orthonormal system of the $ \phi_{k} $. We point out that in M\'etivier's case each $ u_{j} $ is given by a finite expansion in the eigenfunctions; furthermore, their derivatives are finite linear combinations of the same eigenfunctions, which allows one to obtain the $ L^{\infty} $ estimates of the $ u_{j} $ in a direct way.
In the present setting the pointwise estimates will be deduced using the Sobolev embedding theorems in Section \ref{sec:pt}. First we define the weight function (see \eqref{eq:mitlda0}) \begin{equation} \label{eq:wj} w_{j}(\rho) = e^{| \tilde{\mu}_{0} | \rho} \rho^{(j - \delta)\kappa}, \end{equation} where \begin{equation} \label{eq:kappadelta} 0 < \delta < 1, \qquad \frac{1}{s_{0}} < \kappa < 1, \qquad \kappa \delta > \frac{1}{2}. \end{equation} We point out that the role of $ \delta $, $ \kappa $ is purely technical and due to the use of $ L^{2} $ norms, i.e. it makes certain integrals in Lemmas \ref{lemma:pi-mu0}, \ref{lemma:piP-1-pi} absolutely convergent. We define the Sobolev space $ B^{k}({\mathbb{R}}_{x}) $ as the space of all $ L^{2} $ functions $ u $ for which the norm \begin{equation} \label{eq:spBk} \| u\|_{k} = \max_{\frac{\beta}{q-1} + \alpha \leq k} \| x^{\beta} D_{x}^{\alpha} u \|_{0} \end{equation} is finite. For the estimates we shall use the norms \begin{equation} \label{eq:norms} \| w_{j}(\rho) f(x, \rho) \|_{k, A}^{2} = \int_{A}^{+\infty} w_{j}^{2}(\rho) \| f( \cdot, \rho) \|_{k}^{2} d\rho , \end{equation} where $ A $ is a suitable positive constant to be chosen later. We want to prove the following theorem. \begin{theorem} \label{th:est} There exist positive constants $ C_{u} $, $ R_{0} $ such that \begin{multline} \label{eq:est} \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ \leq C_{u}^{1 + j + \alpha + \beta + \gamma} \cdot \left( \frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! , \end{multline} where \begin{equation} \label{eq:AR} \gamma \leq j + \gamma^{\#}, \qquad A = A(j) = R_{0}(j + 1) , \end{equation} and $ R_{0} > 0 $, $ \gamma^{\#} $ a positive constant. Here we understand that for a positive number $ x $, $ x! = \Gamma(x+1) $. \end{theorem} Using the Sobolev embedding theorem, from Theorem \ref{th:est} we deduce the desired pointwise estimates \eqref{eq:pest}. \vskip 2mm We are going to prove a slightly different statement. Actually in the theorem below both space derivatives and multiplication by $ x $ have been replaced by the corresponding powers of the operator $ Q $. This has the advantage that the action of $ Q $ on an eigenfunction expansion preserves the orthonormality of the basis, while this is not the case for derivatives and multiplication by $ x $. The transition from $ Q $ to the derivatives and multiplication by $ x $ is done in Proposition \ref{prop:ineqQ}. \begin{theorem} \label{th:est'} Let $ \nu $ denote a rational number $ \geq -1 $ and $ \gamma \in {\mathbb{N}} $ such that $ \nu + \gamma (2a)^{-1} \geq 0 $. There exist positive constants $ C_{0} $, $ R_{0} $, $ \sigma $, $ \sigma' $ such that \begin{equation} \label{eq:est'} \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} u_{j}(x, \rho) \|_{0, A} \leq C_{0}^{1 + \sigma j + \sigma'( \nu + \gamma)} (\lambda(j, \nu, \gamma)+1)^{\lambda(j, \nu, \gamma)} , \end{equation} where $$ \lambda(j, \nu, \gamma) = \frac{j}{s_{0}} + \left(2 \nu + \frac{\gamma}{a}\right) \frac{q-1}{q}, $$ and $ A = A(j) = R_{0}(j + 1) $, $ R_{0}> 0 $, $ \gamma \leq j + \gamma^{\#} $. \end{theorem} \begin{remark} \label{rem:c0} We point out that the constants $ R_{0} $, $ C_{0} $ may be enlarged independently of each other. \end{remark} We shall prove in the following that Theorem \ref{th:est'} actually implies Theorem \ref{th:est}. First we prove Theorem \ref{th:est'}. \bigskip The proof proceeds by induction on $ j $.
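Before starting the induction we record an elementary remark on the relation between the two scales of bounds: for every $ \lambda > 0 $ we have $ \lambda ! = \Gamma(\lambda + 1) \geq (\lambda / e)^{\lambda} $ and $ (1 + 1/\lambda)^{\lambda} \leq e $, whence $$ (\lambda + 1)^{\lambda} = \lambda^{\lambda} \left( 1 + \frac{1}{\lambda} \right)^{\lambda} \leq e \, \lambda^{\lambda} \leq e^{1 + \lambda} \, \lambda ! . $$ Thus a bound by $ (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} $, as in \eqref{eq:est'}, yields a bound by $ \lambda(j, \nu, \gamma) ! $, i.e. the factorial form appearing in \eqref{eq:est}, up to a factor $ e^{1 + \lambda} $ which can be absorbed into the constants.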
Consider first $ u_{0} $ defined in \eqref{eq:u0-}. We have, for $ A = A(0) $, \begin{multline*} \| w_{0}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} u_{0}(x, \rho) \|_{0, A}^{2} = \int_{A}^{+\infty} \rho^{-2\delta \kappa} e^{2 | \tilde{\mu}_{0} | \rho} \left|\partial_{\rho}^{\gamma} e^{\mu_{0i^{*}} \rho} \right|^{2} \| Q^{\nu} \phi_{0}(x) \|_{0}^{2} d\rho \\ \leq C^{\gamma} \| Q^{\nu} \phi_{0}(x) \|_{0}^{2} \leq C^{\gamma} \mu_{0}^{\nu} \leq C_{u}^{\gamma} \lambda(0, \nu, \gamma)^{\lambda(0, \nu, \gamma)} . \end{multline*} Assume now that \eqref{eq:est'} holds for any $ k $, $ 0 \leq k < j $. We recall that $$ u_{j} = (1 - \pi) u_{j} + \pi u_{j} , $$ where the summands in the r.h.s. above are defined by \eqref{eq:tr1} and \eqref{eq:tr2} respectively. We have \begin{multline} \label{eq:est''} \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} u_{j}(x, \rho) \|_{0, A} \\ \leq \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} (1 - \pi) u_{j}(x, \rho) \|_{0, A} + \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} \pi u_{j}(x, \rho) \|_{0, A} . \end{multline} We are going to prove estimates for each of the two summands on the right hand side of \eqref{eq:est''}. \section{Estimate of $(1 - \pi) u_{j} $} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} This section is devoted to proving the estimates \eqref{eq:est'} for $ (1 - \pi) u_{j} $. Actually, for $ (1 - \pi)u_{j} $ we shall prove a slightly stronger inequality. This improvement will be crucial in the inductive proof of the estimates \eqref{eq:est'} for $ \pi u_{j}$. \begin{theorem} \label{th:est-1-p} Let $ \nu $ denote a rational number $ \geq -1 $ and $ \gamma \in {\mathbb{N}} $ such that $ \nu + \gamma (2a)^{-1} \geq 0 $. There exist positive constants $ C_{0} $, $ R_{0} $, $ \sigma $, $ \sigma' $ such that \begin{multline} \label{eq:est-1-p} \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma} Q^{\nu} (1 - \pi) u_{j}(x, \rho) \|_{0, A} \\ \leq C_{0}^{1 + \sigma j + \sigma'( \nu + \gamma)} (\lambda(j, \nu, \gamma)+1)^{\lambda(j, \nu, \gamma)} , \end{multline} where $$ \lambda(j, \nu, \gamma) = \frac{j}{s_{0}} + \left(2 \nu + \frac{\gamma}{a}\right) \frac{q-1}{q}, $$ and $ A = A(j) = R_{0}(j + 1) $, $ R_{0} > 0 $, $ \gamma \leq j + \gamma^{\#} $. \end{theorem} \begin{proof} We have, by \eqref{eq:tr1}, $$ (1 - \pi) u_{j} = -\sum_{k=1}^{\min\{j, 2a\}} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) P_{k} u_{j-k} \right) , $$ where $ P_{0}^{-1} $ has been defined in \eqref{eq:P0inv}. Hence \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma} Q^{\nu} (1 - \pi) u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{k=1}^{\min\{j, 2a\}} \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma} Q^{\nu} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) P_{k} u_{j-k} \right ) \|_{0, A} . \end{multline*} \begin{lemma} \label{lemma:QnuP0} We have $$ Q^{\nu} P_{0}^{-1}(f) = P_{0}^{-1}( Q^{\nu} f), \qquad Q^{\nu} (1 - \pi) = (1 - \pi) Q^{\nu}.
$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:QnuP0}] From \eqref{eq:P0inv} we get (here $ f_{k} = \langle f, \phi_{k} \rangle $) \begin{multline*} Q^{\nu} P_{0}^{-1}(f) = Q^{\nu} \sum_{k\geq 0} \left( \sum_{j=1}^{2a} A_{kj} I_{kj}(f_{k})(\rho) \right) \phi_{k}(x) \\ = \sum_{k\geq 0} \left( \sum_{j=1}^{2a} A_{kj} I_{kj}(f_{k})(\rho) \right) \mu_{k}^{\nu} \phi_{k}(x) \\ = \sum_{k\geq 0} \left( \sum_{j=1}^{2a} A_{kj} I_{kj}(\mu_{k}^{\nu} f_{k})(\rho) \right) \phi_{k}(x) \\ = \sum_{k\geq 0} \left( \sum_{j=1}^{2a} A_{kj} I_{kj}( ( Q^{\nu} f)_{k})(\rho) \right) \phi_{k}(x) . \end{multline*} The proof of the second relation is straightforward. \end{proof} Using the above lemma we get \begin{multline} \label{eq:uj-2} \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma} Q^{\nu} (1 - \pi) u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{k=1}^{\min\{j, 2a\}} \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) Q^{\nu} P_{k} u_{j-k} \right ) \|_{0, A} . \end{multline} \begin{lemma} \label{lemma:drhogamma} Let $ f \in \mathscr{S}({\mathbb{R}}) $ and $ \gamma \in {\mathbb{N}} $. Write $ \gamma = 2a r + \gamma_{1} $, with $ 0 \leq \gamma_{1} < 2a $. Then for $ k \geq 0 $ we have \begin{multline} \label{eq:drhogamma} \left(\frac{1}{s_{0}} D_{\rho}\right)^{\gamma} E_{k}(f_{k})(\rho) = \sum_{s=0}^{r-1} (- \mu_{k})^{s} \left(\frac{1}{s_{0}} D_{\rho}\right)^{\gamma - 2a (1+s)} f_{k} \\ + (- \mu_{k})^{r} \left(\frac{1}{s_{0}} D_{\rho}\right)^{\gamma_{1}} E_{k}(f_{k})(\rho), \end{multline} where $ E_{k}(f_{k}) $ is defined in \eqref{eq:Ek} and $ f_{k} = \langle f, \phi_{k} \rangle $, $ \phi_{k} $ denoting the $ k $-th eigenfunction of $ Q $. \end{lemma} The proof is just a computation using the fact that $ E_{k}(f_{k}) $ is a solution of \eqref{eq:ode-k} rapidly decreasing at infinity. \begin{corollary} \label{cor:drhogammaP0} Let $ f \in \mathscr{S}({\mathbb{R}}) $ and $ \gamma \in {\mathbb{N}} $. Write $ \gamma = 2a r + \gamma_{1} $, with $ 0 \leq \gamma_{1} < 2a $. Then \begin{multline} \label{eq:drhogammaP0} \left(\frac{1}{s_{0}} D_{\rho}\right)^{\gamma} P_{0}^{-1}(f) = \sum_{s=0}^{r-1} (- 1)^{s} \left(\frac{1}{s_{0}} D_{\rho}\right)^{\gamma - 2a (1+s)} Q^{s} f \\ + (- 1)^{r} \left(\frac{1}{s_{0}} D_{\rho}\right)^{\gamma_{1}} P_{0}^{-1}( Q^{r} f) . \end{multline} \end{corollary} Using Corollary \ref{cor:drhogammaP0} inequality \eqref{eq:uj-2} becomes \begin{multline} \label{eq:uj-3} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}_{\rho}^{\gamma} Q^{\nu} (1 - \pi) u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{k=1}^{\min\{j, 2a\}} \sum_{s=0}^{r-1} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \left( \frac{1}{\rho^{k} } (1 - \pi) Q^{\nu +s} P_{k} u_{j-k} \right ) \|_{0, A} \\ + \sum_{k=1}^{\min\{j, 2a\}} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) Q^{\nu +r} P_{k} u_{j-k} \right ) \|_{0, A} , \end{multline} where we used the notation \begin{equation} \label{eq:dslash} \textrm{\DH}_{\rho} = \frac{1}{s_{0}} D_{\rho} = \frac{1}{i s_{0}} \partial_{\rho} . \end{equation} First consider the norm in the third line of \eqref{eq:uj-3}. We denote by $ g_{jk} = Q^{\nu +r} P_{k} u_{j-k} $. 
Then for $ k \in \{1, \ldots, \min\{j, 2a\}\}$, \begin{multline*} \| w_{j}(\rho)\rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) g_{jk} \right ) \|_{0, A} \\ = \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{\ell \geq 1} \sum_{i=1}^{2a} A_{\ell i} I_{\ell i}(\rho^{-k} \langle g_{j k} , \phi_{\ell} \rangle ) \phi_{\ell} \|_{0, A} \\ = \left( \sum_{\ell \geq 1} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{i=1}^{2a} A_{\ell i} I_{\ell i}(\rho^{-k} \langle g_{j k} , \phi_{\ell} \rangle ) \|_{A}^{2} \right)^{\frac{1}{2}} , \end{multline*} where $$ \| f(\rho) \|_{A}^{2} = \int_{A}^{+\infty} | f(\rho) |^{2} d \rho . $$ Remark that, by \eqref{eq:Ikj}, \eqref{eq:Amuk}, since $ 0 \leq \gamma_{1} < 2a $, \begin{equation} \label{eq:Drhogamma1} \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{i=1}^{2a} A_{\ell i} I_{\ell i}(f_{k}) = \frac{1}{(i s_{0})^{\gamma_{1}}} \sum_{i=1}^{2a} A_{\ell i} \mu_{\ell i}^{\gamma_{1}} I_{\ell i}(f_{k}) . \end{equation} Hence \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) g_{jk} \right ) \|_{0, A} \\ = s_{0}^{-\gamma_{1}} \left( \sum_{\ell \geq 1} \| w_{j}(\rho) \rho^{1-\kappa} \sum_{i=1}^{2a} A_{\ell i} \mu_{\ell i}^{\gamma_{1}} I_{\ell i}(\rho^{-k} \langle g_{j k} , \phi_{\ell} \rangle ) \|_{A}^{2} \right)^{\frac{1}{2}} \end{multline*} Now $$ A_{\ell i} = \prod_{\substack{r=1\\r\neq i}}^{2a} \frac{1}{\mu_{\ell i} - \mu_{\ell r}} . $$ Since the roots $ \mu_{\ell r} $ are the vertices of a $ 2a $-regular polygon whose circumscribed circle has radius $ \mu_{\ell}^{\frac{1}{2a}} $ (arithmetic root), we have that $$ \frac{|A_{\ell i} \mu_{\ell i}^{\gamma_{1} } |}{\mu_{\ell}^{\frac{\gamma_{1}+ 1 - 2a}{2a}}} = c , $$ where $ c $ denotes a positive constant independent of $ \ell $, $ i $, $ \gamma_{1} $. Thus \begin{multline} \label{eq:uj-4} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) g_{jk} \right ) \|_{0, A} \\ \leq C_{1} \sum_{i=1}^{2a} \left( \sum_{\ell \geq 1} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}(\rho^{-k} \langle g_{j k} , \mu_{\ell}^{\frac{\gamma_{1}+ 1 - 2a}{2a}} \phi_{\ell} \rangle ) \|_{A}^{2} \right)^{\frac{1}{2}} \end{multline} \begin{lemma} \label{lemma:muli>0} Assume that $ \ell \geq 1 $, $ \re \mu_{\ell i} > 0 $. Then \begin{equation} \label{eq:muli>0} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} \leq C_{0} \| w_{j-k}(\rho) \mu_{\ell}^{-\frac{1}{2a}} f(\rho) \|_{A}^{2} , \end{equation} for a suitable constant $ C_{0} > 0 $ independent of $ \ell $, $ j $, $ k $, $ i $. \end{lemma} \begin{remark} In \eqref{eq:muli>0} the factor $ \rho^{1-\kappa} $ appears only in the norms on the left hand side of the inequality because the right hand side may contain both $ (1-\pi) u_{j-k} $ and $ \pi u_{j-k} $. 
\end{remark} \begin{proof} By \eqref{eq:Ikj}, \eqref{eq:mitlda0}, \eqref{eq:wj} we have \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} = \int_{A}^{+\infty} \rho^{2(j-\delta) \kappa + 2(1-\kappa)} e^{2|\tilde{\mu}_{0}|\rho} \Big| \int_{{\mathbb{R}}} e^{\mu_{\ell i} (\rho - \sigma)} \\ \cdot H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \frac{1}{\sigma^{k}} \ H(\sigma - R) f(\sigma) d\sigma \Big|^{2} d\rho \\ \leq \int_{A}^{+\infty} \Big( \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot \sigma^{ - (k - 1) (1 - \kappa) } e^{|\tilde{\mu}_{0}| \sigma} \sigma^{(j-k-\delta)\kappa} \ H(\sigma - R) | f(\sigma)| d\sigma \Big)^{2} d\rho \\ \leq \int_{A}^{+\infty} \Big( \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot \sigma^{ - (k - 1) (1 - \kappa) } e^{|\tilde{\mu}_{0}| \sigma} \sigma^{(j-k-\delta)\kappa} \ H(\sigma - A) | f(\sigma)| d\sigma \Big)^{2} d\rho \end{multline*} where, since $ \sigma \geq \rho $, we bounded $ \rho $ by $ \sigma $. Moreover for the same reason $ \sigma \geq A $ so that we may replace $ H(\sigma-R)$ by $ H(\sigma-A) $. We estimate the integral on the right hand side of the above expression with an integral over the whole real line and apply Young inequality to obtain \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} \\ \leq \int_{-\infty}^{+\infty} \Big( \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot w_{j-k}(\sigma) H(\sigma - A) \sigma^{ - (k - 1) (1 - \kappa) } | f(\sigma)| d\sigma \Big)^{2} d\rho \\ \leq C_{1} \| e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) \rho} H\left(- \sgn(\re \mu_{\ell i}) \rho \right) \|_{L^{1}({\mathbb{R}})}^{2} \ \| w_{j-k}(\sigma) \sigma^{ - (k - 1) (1 - \kappa) } f(\sigma) \|_{A}^{2} \\ \leq C_{2} \mu_{\ell}^{-\frac{1}{2a}} \| w_{j-k}(\sigma) f(\sigma) \|_{A}^{2} . \end{multline*} This concludes the proof of the lemma. \end{proof} \begin{lemma} \label{lemma:muli<0} Assume that $ \ell \geq 1 $, $ \re \mu_{\ell i} < 0 $. Then \begin{equation} \label{eq:muli<0} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} \leq C_{0} \| w_{j-k}(\rho) \mu_{\ell}^{-\frac{1}{2a}} f(\rho) \|_{A}^{2} , \end{equation} for a suitable constant $ C_{0} > 0 $ independent of $ \ell $, $ j $, $ k $, $ i $, and $ R = A(j)= R_{0}(j+1)$. 
\end{lemma} \begin{proof} As before by \eqref{eq:Ikj}, \eqref{eq:mitlda0}, \eqref{eq:wj} we have \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa + 2(1-\kappa)} e^{2|\tilde{\mu}_{0}|\rho} \Big| \int_{{\mathbb{R}}} e^{\mu_{\ell i} (\rho - \sigma)} \\ \cdot H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \frac{1}{\sigma^{k}} \ H(\sigma - A) f(\sigma) d\sigma \Big|^{2} d\rho \\ \leq \int_{A}^{+\infty} \Big( \rho^{(j-\delta)\kappa + 1 -\kappa} \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} \\ \cdot H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) e^{|\tilde{\mu}_{0}| \sigma} \sigma^{-k} \ H(\sigma - A) | f(\sigma)| d\sigma \Big)^{2} d\rho \\ = \int_{A}^{+\infty} \Big( \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot \left( 1 + \frac{\rho - \sigma}{\sigma}\right)^{(j-\delta)\kappa + 1 - \kappa} e^{|\tilde{\mu}_{0}| \sigma} \sigma^{(j - k - \delta) \kappa} \ H(\sigma - A) \sigma^{-(k-1)(1-\kappa)}| f(\sigma)| d\sigma \Big)^{2} d\rho \end{multline*} Now, since $ (j - \delta)\kappa + 1 -\kappa \leq j $, $$ \left( 1 + \frac{\rho - \sigma}{\sigma}\right)^{(j-\delta)\kappa + 1 - \kappa} \leq \left( 1 + \frac{\rho - \sigma}{\sigma}\right)^{j} , $$ keeping in mind that $ A \leq \sigma \leq \rho $, we have \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} \\ \leq \int_{A}^{+\infty} \Big( \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot \left( \sum_{r=0}^{j} \binom{j}{r} \frac{(\rho - \sigma)^{r}}{\sigma^{r}} \right) e^{|\tilde{\mu}_{0}| \sigma} \sigma^{(j - k - \delta)\kappa} \ H(\sigma - A) \sigma^{-(k-1)(1-\kappa)} | f(\sigma)| d\sigma \Big)^{2} d\rho \\ = \int_{A}^{+\infty} \Big( \sum_{r=0}^{j} \binom{j}{r} \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot \frac{(\rho - \sigma)^{r}}{\sigma^{r}} \ e^{|\tilde{\mu}_{0}| \sigma} \sigma^{(j - k - \delta)\kappa} \ H(\sigma - A) \sigma^{-(k-1)(1-\kappa)} | f(\sigma)| d\sigma \Big)^{2} d\rho . \end{multline*} Using the inequality \begin{equation} \label{eq:diseg} \left( \sum_{i=1}^{n} a_{i} \right)^{2} \leq \sum_{i=1}^{n-1} 2^{i} a_{i}^{2} + 2^{n-1} a_{n}^{2} \leq \sum_{i=1}^{n} 2^{i} a_{i}^{2} , \end{equation} we get \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} \\ \leq \sum_{r=0}^{j} 2^{r+1} \binom{j}{r}^{2} \int_{A}^{+\infty} \Big( \int_{{\mathbb{R}}} e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) (\rho - \sigma)} (\rho - \sigma)^{r} H\left(- \sgn(\re \mu_{\ell i}) (\rho - \sigma)\right) \\ \cdot \sigma^{-(k-1)(1-\kappa) - r} e^{|\tilde{\mu}_{0}| \sigma} \sigma^{(j - k - \delta)\kappa} \ H(\sigma - A) | f(\sigma)| d\sigma \Big)^{2} d\rho . \end{multline*} Now $ \sigma^{-r -(k-1)(1-\kappa) } \leq R_{0}^{-r } j^{-r } $ and $$ \| e^{(\re \mu_{\ell i} + |\tilde{\mu}_{0}|) \rho} \rho^{r} \|_{L^{1}({\mathbb{R}}^{+})} = \frac{r!}{\left |\re \mu_{\ell i} + |\tilde{\mu}_{0}| \, \right |^{r+1} } . 
$$ Hence, using Young's inequality, we obtain \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} I_{\ell i}( \rho^{-k} f(\rho)) \|_{A}^{2} \\ \leq C_{3} \sum_{r=0}^{j} 2^{r+1} \frac{1}{\left |\re \mu_{\ell i} + |\tilde{\mu}_{0}| \, \right |^{2(r+1)} } \left(\binom{j}{r} \frac{r!}{ R_{0}^{r} j^{r}} \right)^{2} \| w_{j-k}(\rho) f(\rho) \|_{A}^{2} . \end{multline*} Remarking that, for $ r > 0 $, $$ \binom{j}{r} \frac{r!}{ R_{0}^{r} j^{r}} = \frac{1}{R_{0}^{r}} \left(1 - \frac{1}{j}\right) \cdots \left( 1 - \frac{r-1}{j}\right) \leq \frac{1}{R_{0}^{r}} $$ and that $$ \frac{1}{\left |\re \mu_{\ell i} + |\tilde{\mu}_{0}| \, \right |^{r+1}} \leq \left(\frac{C_{4}}{\re \mu_{\ell i}}\right)^{r+1} \leq C_{5}^{r+1} \mu_{\ell}^{- \frac{r+1}{2a}} , $$ we obtain the conclusion of the lemma, recalling \eqref{eq:mitlda0}, provided $$ R_{0} \geq \max \{4 C_{5} \mu_{\ell}^{-\frac{1}{2a}} , 2 \} . $$ \end{proof} Using the above lemmas, \eqref{eq:uj-4} can be bounded as \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) g_{jk} \right ) \|_{0, A} \\ \leq 2 a C_{1} C_{0} \left( \sum_{\ell \geq 1} \| w_{j - k}(\rho) \mu_{\ell}^{\frac{\gamma_{1}}{2a} -1} \langle g_{j k} , \phi_{\ell}\rangle \|_{A}^{2} \right)^{\frac{1}{2}} , \end{multline*} where $ g_{jk} = Q^{\nu +r} P_{k} u_{j-k} $. We have $ \mu_{\ell}^{\frac{\gamma_{1}}{2a} -1} \langle g_{j k} , \phi_{\ell}\rangle = \langle Q^{\nu + \frac{\gamma}{2a} - 1} P_{k} u_{j-k} , \phi_{\ell} \rangle $, see Lemma \ref{lemma:drhogamma}. Since the $ \phi_{\ell} $ are an orthonormal basis we obtain \begin{multline*} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k} } (1 - \pi) Q^{\nu +r} P_{k} u_{j-k} \right ) \|_{0, A} \\ \leq C_{6} \| w_{j - k}(\rho) Q^{\nu + \frac{\gamma}{2a} - 1} P_{k} u_{j-k} \|_{0,A} . \end{multline*} As a consequence the last term on the r.h.s. of \eqref{eq:uj-3} is estimated by $$ \Lambda_{r} = C_{6} \sum_{k=1}^{\min\{j, 2a\}} \| w_{j - k}(\rho) Q^{\nu + \frac{\gamma}{2a} - 1} P_{k} u_{j-k} \|_{0, A} . $$ Recalling \eqref{eq:Pj}, \eqref{eq:Pjtilda} we write $$ \Lambda_{r} \leq C_{6} \sum_{k=1}^{\min\{j, 2a\}} \sum_{m=0}^{k} |p_{k m}| \| w_{j - k}(\rho) \partial_{\rho}^{2a-k} Q^{\nu + \frac{\gamma}{2a} - 1} (x \partial_{x})^{m} u_{j-k} \|_{0,A} . $$ Applying Proposition \ref{prop:AjD}, and keeping in mind that the $ \rho $-derivative is conserved, we have to estimate \begin{multline} \label{eq:Lambdar-2} \Lambda_{r} \leq C_{7} \sum_{k=1}^{\min\{j, 2a\}} \sum_{m=0}^{k} \Big( \| w_{j - k}(\rho) \partial_{\rho}^{2a-k} Q^{\nu + \frac{\gamma}{2a} - 1 + m \frac{q}{2(q-1)} } u_{j-k} \|_{0,A} \\ + \left( \nu +\frac{\gamma}{2a} - 1\right)^{2\frac{q-1}{q} (\nu + \frac{\gamma}{2a} - 1) + m } \| w_{j - k}(\rho) \partial_{\rho}^{2a-k} u_{j-k} \|_{0,A} \Big) \\ \leq C_{8} \sum_{k=1}^{\min\{j, 2a\}} \Big( \| w_{j - k}(\rho) \partial_{\rho}^{2a-k} Q^{\nu + \frac{\gamma}{2a} - 1 + k \frac{q}{2(q-1)} } u_{j-k} \|_{0,A} \\ + \left( \nu +\frac{\gamma}{2a} - 1\right)^{2\frac{q-1}{q} (\nu + \frac{\gamma}{2a} - 1) + k } \| w_{j - k}(\rho) \partial_{\rho}^{2a-k} u_{j-k} \|_{0,A} \Big) , \end{multline} where we estimated the coefficients of the polynomials $ \tilde{P}_{k} $, $ p_{km} $, with a uniform constant and used the estimate $$ \|Q^{(m-k) \frac{q}{2(q-1)}} v \|_{0} \leq C \| v\|_{0} , $$ for $ 0 \leq m < k \leq 2a $.
We also remark that, since $ A = A(j) = R_{0} (j+1) $, enlarging the domain of integration we have norms over the half line $ ] A(j-k), + \infty[ $, to which we may apply our inductive hypothesis. In fact the exponent of $ Q $ is evidently $ \geq -1 $ and $$ \nu + \frac{\gamma}{2a} - 1 + k \frac{q}{2(q-1)} + \frac{2a-k}{2a} \geq \frac{k}{2} \frac{q}{q-1} \frac{1}{s_{0}} > 0. $$ Thus, choosing $ \gamma^{\#} \geq 2a $, \begin{multline*} \| w_{j - k}(\rho) \partial_{\rho}^{2a-k} Q^{\nu + \frac{\gamma}{2a} - 1 + k \frac{q}{2(q-1)} } u_{j-k} \|_{0,A} \\ \leq C_{0}^{1+\sigma(j-k) +\sigma'(\nu + \frac{\gamma}{2a} - 1 + k \frac{q}{2(q-1)} + 2a - k)} (\lambda+1)^{\lambda} , \end{multline*} where \begin{multline*} \lambda = \frac{j-k}{s_{0}} + \left( 2 \nu + \frac{\gamma}{a} - 2 + k \frac{q}{q-1} + 2 - \frac{k}{a}\right) \frac{q-1}{q} \\ = \frac{j}{s_{0}} + \left( 2\nu + \frac{\gamma}{a}\right) \frac{q-1}{q} = \lambda(j, \nu, \gamma) . \end{multline*} As for the constant $ C_{0} $ we have \begin{multline*} \sigma(j-k) +\sigma'(\nu + \frac{\gamma}{2a} - 1 + k \frac{q}{2(q-1)} + 2a - k) \\ = \sigma j + \sigma'(\nu + \frac{\gamma}{2a}) - \sigma k + \sigma' (2a-1) - \sigma' k \frac{q-2}{2(q-1)} \\ \leq \sigma j + \sigma'(\nu + \gamma) - \sigma', \end{multline*} if we choose \begin{equation} \label{eq:sigma} \sigma = 2a \sigma'. \end{equation} An analogous computation can be made for the summands of the second type in \eqref{eq:Lambdar-2}. This completes the estimate of the second line on the right hand side of \eqref{eq:uj-3}. Next we are going to estimate the first line of the right hand side of \eqref{eq:uj-3}: \begin{multline*} \sum_{k=1}^{\min\{j, 2a\}} \Lambda_{k} = \sum_{k=1}^{\min\{j, 2a\}} \sum_{s=0}^{r-1} \| w_{j}(\rho) \rho^{1-\kappa} \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \\ \left( \frac{1}{\rho^{k} } (1 - \pi) Q^{\nu +s} P_{k} u_{j-k} \right ) \|_{0, A} . \end{multline*} Since the sum over $ k $ is finite it is enough to consider just one summand. 
Observe that \begin{multline*} \Lambda_{k} \leq C_{9} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{t=0}^{k} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \\ \cdot \| w_{j}(\rho) \frac{1}{\rho^{k+m}} \rho^{1-\kappa} (1-\pi) \textrm{\DH}_{\rho}^{\gamma - 2a(1+s) - m + 2a - k} Q^{\nu+s} (x \partial_{x})^{t} u_{j-k} \|_{0,A} \\ \leq C_{10} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{t=0}^{k} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \frac{1}{A(j)^{m}} \\ \cdot \| w_{j-k}(\rho) \rho^{-(k-1)(1-\kappa)} \partial_{\rho}^{\gamma - 2a s - m - k} Q^{\nu+s} (x \partial_{x})^{t} u_{j-k} \|_{0,A} \\ \leq C_{10} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{t=0}^{k} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \frac{1}{A(j)^{m}} \\ \cdot \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} Q^{\nu+s} (x \partial_{x})^{t} u_{j-k} \|_{0,A} \end{multline*} We apply Proposition \ref{prop:AjD} we obtain \begin{multline} \label{eq:Lambdaks} \Lambda_{k} \leq C_{11} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{t=0}^{k} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \\ \cdot \frac{1}{A(j)^{m}} \Big( \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} Q^{\nu+s+t \frac{q}{2(q-1)}} u_{j-k} \|_{0,A} \\ + (\nu + s )^{2 \frac{q-1}{q} (\nu + s) + t} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} u_{j-k} \|_{0,A} \Big) \\ \leq C_{12} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \frac{1}{A(j)^{m}} \\ \cdot \Big( \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} Q^{\nu+s+k \frac{q}{2(q-1)}} u_{j-k} \|_{0,A} \\ + (\nu + s )^{2 \frac{q-1}{q} (\nu + s) + k} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} u_{j-k} \|_{0,A} \Big) . \end{multline} First we remark that, by \eqref{eq:AR}, \begin{multline*} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \frac{1}{A(j)^{m}} \\ = \binom{k+m-1}{m} \frac{(\gamma - 2a(1+s))!}{(\gamma -2a(1+s) -m)!} \frac{1}{(R_{0} (j+1))^{m}} \\ \leq \binom{k+m-1}{m} \frac{(\gamma - 2a(1+s)) \cdots (\gamma - 2a(1+s) - m + 1)}{R_{0}^{m} (j+1)^{m}} \\ \leq \binom{k+m-1}{m} \left(\frac{j+\gamma^{\#}}{R_{0}(j+1)}\right)^{m} \leq 2^{2a-1} \left( \frac{2 \gamma^{\#}}{R_{0}}\right)^{m} , \end{multline*} if $ R_{0} $ is suitably chosen. Moreover the quantities in the norms above verify the assumptions of Theorem \ref{th:est'}, i.e. $ \nu \geq -1 $ and $ \nu + \gamma (2a)^{-1} \geq 0 $. In fact the first is trivially true. As for the second we have \begin{multline*} \nu + s + k \frac{q}{2(q-1)} + \frac{\gamma}{2a} - s - \frac{m}{2a} - \frac{k}{2a} \\ = \nu + \frac{k}{2} \left( \frac{q}{q-1} - \frac{1}{a}\right) + \frac{\gamma - m}{2a} \\ \geq \frac{k}{2} \frac{q}{q-1} \frac{1}{s_{0}} + \nu + 1 > 0 , \end{multline*} since $ m \leq \gamma - 2a $ and $ \nu \geq -1 $. Moreover $ \gamma - k \leq j - k + \gamma^{\#} $ if $ \gamma \leq j + \gamma^{\#} $. We may thus apply the inductive hypothesis to both norms in the right hand side of \eqref{eq:Lambdaks}. 
Starting with the first on the next to last line we have \begin{multline*} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} Q^{\nu+s+k \frac{q}{2(q-1)}} u_{j-k} \|_{0,A} \\ \leq C_{0}^{1+\sigma(j-k) + \sigma'(\nu+s+k \frac{q}{2(q-1)} +\gamma - 2a s - m - k)} (\lambda+1)^{\lambda} , \end{multline*} where \begin{multline*} \lambda = \frac{j-k}{s_{0}} + \left( 2\nu + 2s + k \frac{q}{q-1} + \frac{\gamma}{a} - 2s - \frac{m}{a} - \frac{k}{a}\right) \frac{q-1}{q} \\ = \frac{j}{s_{0}} + \left( 2\nu + \frac{\gamma}{a}\right) \frac{q-1}{q} - \frac{m}{a} \frac{q-1}{q} \\ \leq \frac{j}{s_{0}} + \left( 2\nu + \frac{\gamma}{a}\right) \frac{q-1}{q} = \lambda(j, \nu, \gamma) . \end{multline*} As for the exponent in the constant $ C_{0} $, we have \begin{multline*} \sigma(j-k) + \sigma'\left(\nu+s+k \frac{q}{2(q-1)} +\gamma - 2a s - m - k\right) \\ = \sigma j + \sigma' (\nu + \gamma) - \sigma k + \sigma' k \frac{q}{2(q-1)} - \sigma' s (2a-1) - \sigma' m \\ \leq \sigma j + \sigma' (\nu + \gamma) - \sigma' \left( 2a - 1\right) - \sigma' s (2a-1) , \end{multline*} since $ k \geq 1 $ and $ \sigma = 2a \sigma' $. Consider now the second term in the right hand side of \eqref{eq:Lambdaks}: by the inductive hypothesis \begin{multline*} (\nu + s )^{2 \frac{q-1}{q} (\nu + s) + k} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} u_{j-k} \|_{0,A} \\ \leq C_{0}^{1+\sigma(j-k) + \sigma'(\gamma-2as-m-k)} (\nu + s )^{2 \frac{q-1}{q} (\nu + s) + k} (\lambda_{1}+1)^{\lambda_{1}}, \end{multline*} where $$ \lambda_{1} = \frac{j-k}{s_{0}} + \left(\frac{\gamma}{a} -2s -\frac{m}{a} - \frac{k}{a}\right) \frac{q-1}{q} . $$ Both $ \nu + s $ and $ \lambda_{1} $ can be enlarged to be $ \lambda(j, \nu, \gamma) $, so that it is enough to check that the sum of the exponents has the right value. As before this is \begin{multline*} \frac{j-k}{s_{0}} + \left(\frac{\gamma}{a} -2s -\frac{m}{a} - \frac{k}{a}\right) \frac{q-1}{q} + 2 \frac{q-1}{q} (\nu + s) + k \\ = \frac{j}{s_{0}} + \left( 2\nu + \frac{\gamma}{a}\right) \frac{q-1}{q} - \frac{m}{a} \frac{q-1}{q} < \lambda(j, \nu, \gamma). \end{multline*} Hence \begin{multline*} \Lambda_{k} \leq C_{12} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m-1)!}{(k-1)!} \frac{1}{A(j)^{m}} \\ \cdot \Big( \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} Q^{\nu+s+k \frac{q}{2(q-1)}} u_{j-k} \|_{0,A} \\ + (\nu + s )^{2 \frac{q-1}{q} (\nu + s) + k} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2a s - m - k} u_{j-k} \|_{0,A} \Big) \\ \leq C_{12} 2^{2a-1} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \left(\frac{2 \gamma^{\#}}{R_{0}}\right)^{m} \\ \cdot \Big( C_{0}^{1 + \sigma j + \sigma' (\nu + \gamma) - \sigma' \left( 2a - 1\right) - \sigma' s (2a-1)} (\lambda(j, \nu, \gamma)+1)^{\lambda(j, \nu, \gamma)} \\ + C_{0}^{1+\sigma j - 2a \sigma' + \sigma'(\gamma + \nu) - (2a-1) \sigma' s} (\lambda(j, \nu, \gamma)+1)^{\lambda(j, \nu, \gamma)} \Big) \\ \leq C_{0}^{1 + \sigma j + \sigma' (\nu + \gamma)} (\lambda(j, \nu, \gamma)+1)^{\lambda(j, \nu, \gamma)} \\ \cdot C_{12} 2^{2a-1} C_{0}^{-\sigma'(2a-1)} \sum_{s=0}^{\infty} C_{0}^{-\sigma'(2a-1)s} \sum_{m=0}^{\infty} \left( \frac{2 \gamma^{\#}}{R_{0}}\right)^{m} . \end{multline*} Keeping into account that $ k $ ranges on a finite number of indices, a suitable choice of both $ C_{0} $ and $ R_{0} $ completes the proof of Theorem \ref{th:est-1-p}. 
\end{proof} \section{Estimate of $ \pi u_{j} $} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} This section is devoted to proving estimates, analogous to those in \eqref{eq:est-1-p}, for $ \pi u_{j} $. We use the same notation of the preceding section. \begin{theorem} \label{th:est-p} Let $ \nu $ denote a rational number $ \geq -1 $ and $ \gamma \in {\mathbb{N}} $ such that $ \nu + \gamma (2a)^{-1} \geq 0 $. There exist positive constants $ C_{0} $, $ R_{0} $, $ \sigma $, $ \sigma' $ such that \begin{equation} \label{eq:est-p} \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} \pi u_{j}(x, \rho) \|_{0, A} \leq C_{0}^{1 + \sigma j + \sigma'( \nu + \gamma)} (\lambda(j, \nu, \gamma)+1)^{\lambda(j, \nu, \gamma)} , \end{equation} where $$ \lambda(j, \nu, \gamma) = \frac{j}{s_{0}} + \left(2 \nu + \frac{\gamma}{a}\right) \frac{q-1}{q}, $$ and $ A = A(j) = R_{0}(j + 1) $, $ \gamma \leq j+\gamma^{\#} $, $ 2 \gamma^{\#} < R_{0} $. \end{theorem} \begin{proof} Recall that from \eqref{eq:tr2} $ \pi u_{j} $ is obtained as a solution of \begin{align} \label{eq:pi-u} \pi P_{0} u_{j} &= P_{0} \pi u_{j} = - \frac{1}{\rho} \pi P_{1} (1 - \pi) u_{j} - \frac{1}{\rho} \pi P_{1} \pi u_{j-1} \\ &\phantom{= =} - \sum_{k=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \pi P_{k+1} u_{j-k} . \notag \end{align} First of all we observe that the decay rate in $ \rho $ of the first and third terms on the right hand side of \eqref{eq:pi-u} is $$ \rho^{-[(j-\delta)\kappa + (1-\kappa) +1]} , \qquad \rho^{-[(j-\delta)\kappa + ( 1- \kappa) k + 1]} $$ respectively, while for the second term we have $ \rho^{-[(j -\delta)\kappa + 1 - \kappa]} $ which would not allow us to make an inductive argument. \begin{lemma} \label{lemma:piPpi} We have % \begin{equation} \label{eq:piPpi} \pi P_{1} \pi v = 0. \end{equation} \end{lemma} \begin{proof} We just remark that $ \pi P_{1} \pi v = \pi P_{1} \left( \langle v, \phi_{0} \rangle \phi_{0}\right) $. Using the notation of Proposition \ref{prop:r} we may write \begin{eqnarray*} \pi P_{1} \left( \langle v, \phi_{0} \rangle \phi_{0}\right) & = & \pi \left( \left[ \alpha x \partial_{x} + \beta + 2a\gamma_{2a} r\right] \partial_{\rho}^{2a-1} \left( \langle v, \phi_{0} \rangle \phi_{0}\right) \right) \\ & = & \pi \left( \left( \partial_{\rho}^{2a-1} \langle v, \phi_{0} \rangle \right) \left[ \alpha x \partial_{x} + \beta + 2a\gamma_{2a} r\right] \phi_{0} \right) \\ & = & \left( \partial_{\rho}^{2a-1} \langle v, \phi_{0} \rangle \right) \left[ \langle \alpha x \partial_{x} \phi_{0} , \phi_{0}\rangle + \beta + 2a\gamma_{2a} r \right] \phi_{0} = 0, \end{eqnarray*} due to the choice of $ r $ in Proposition \ref{prop:r}. \end{proof} As a consequence \eqref{eq:pi-u} becomes \begin{equation} \label{eq:pi-u2} P_{0} \pi u_{j} = - \frac{1}{\rho} \pi P_{1} (1 - \pi) u_{j} - \sum_{k=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \pi P_{k+1} u_{j-k} . \end{equation} Let us write \begin{equation} \label{eq:util} \tilde{u}_{\ell}^{(j)} = \begin{cases} (1 - \pi) u_{j} & \textrm{if $ \ell = j $} \\ u_{\ell} & \textrm{if $ \ell < j $} \end{cases} . \end{equation} Hence \eqref{eq:pi-u2} becomes \begin{equation} \label{eq:pi-u3} P_{0} \pi u_{j} = - \sum_{k=0}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \pi P_{k+1} \tilde{u}_{j-k}^{(j)} . \end{equation} By the inductive hypothesis and by Theorem \ref{th:est-1-p} the functions $ \tilde{u}_{j-k}^{(j)} $ verify the estimates \eqref{eq:est'} and \eqref{eq:est-1-p} for $ k=0 $. 
To start with, by Lemma \ref{lemma:QnuP0}, \begin{multline} \label{eq:puj-1} \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\nu} \pi u_{j}(x, \rho) \|_{0,A} \\ \leq \sum_{k=0}^{\min\{j, 2a-1\}} \| w_{j}(\rho) \partial_{\rho}^{\gamma} P_{0}^{-1} \left( \frac{1}{\rho^{k+1}} \pi Q^{\nu} P_{k+1} \tilde{u}_{j-k}^{(j)} \right) \|_{0,A} . \end{multline} By Corollary \ref{cor:drhogammaP0} we have \begin{multline} \label{eq:puj-2} \| w_{j}(\rho) \textrm{\DH}_{\rho}^{\gamma} Q^{\nu} \pi u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{k=0}^{\min\{j, 2a-1\}} \sum_{s=0}^{r-1} \| w_{j}(\rho) \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \left( \frac{1}{\rho^{k+1} } \pi Q^{\nu +s} P_{k+1} \tilde{u}_{j-k}^{(j)} \right ) \|_{0, A} \\ + \sum_{k=0}^{\min\{j, 2a-1\}} \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k+1} } \pi Q^{\nu +r} P_{k+1} \tilde{u}_{j-k}^{(j)} \right ) \|_{0, A} . \end{multline} Consider the terms on the third line of the above inequality. Denote by $ \tilde{g}_{jk} = Q^{\nu +r} P_{k+1} \tilde{u}_{j-k}^{(j)} $. Then for $ k \in \{0, \ldots, \min\{j, 2a-1\}\}$, \begin{multline} \label{eq:dgamma1} \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1} \left( \frac{1}{\rho^{k+1} } \pi \tilde{g}_{jk} \right ) \|_{0, A} \\ = \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{i=1}^{2a} A_{0 i} I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \phi_{0} \rangle ) \phi_{0} \|_{0, A} \\ = \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{i=1}^{2a} A_{0 i} I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \phi_{0} \rangle ) \|_{A} , \end{multline} Preliminarily we need the analogous of lemmas \ref{lemma:muli>0}, \ref{lemma:muli<0} for the fundamental eigenfunction, corresponding to the projection $ \pi $. \begin{lemma} \label{lemma:pi-muli>0} Assume that $ \re \mu_{0 i} > 0 $. Then \begin{equation} \label{eq:pi-muli>0} \| w_{j}(\rho) I_{0 i}( \rho^{-k} f(\rho)) \|_{A}^{2} \leq \tilde{C}_{0} \mu_{0}^{-\frac{1}{a}} \| w_{j-k}(\rho) f(\rho) \|_{A}^{2} , \end{equation} for a suitable constant $ \tilde{C}_{0} > 0 $ independent of $ j $, $ k $, $ i $, where we choose $ R = A(j) $, see \eqref{eq:Ikjgl}. \end{lemma} \begin{lemma} \label{lemma:pi-muli<0} Assume that $ \re \mu_{0 i} < \tilde{\mu}_{0} < 0 $. Then \begin{equation} \label{eq:pi-muli<0} \| w_{j}(\rho) I_{0 i}( \rho^{-k} f(\rho)) \|_{A}^{2} \leq \tilde{C}_{0} \mu_{0}^{-\frac{1}{a}} \| w_{j-k}(\rho) f(\rho) \|_{A}^{2} , \end{equation} for a suitable constant $ \tilde{C}_{0} > 0 $ independent of $ j $, $ k $, $ i $, and $ R = A(j)$. \end{lemma} Under the assumptions of lemmas \ref{lemma:pi-muli<0}, \ref{lemma:pi-muli>0}, $ \re \mu_{0i} - \tilde{\mu}_{0} \neq 0 $ and, as a consequence, their proofs are completely analogous to those of lemmas \ref{lemma:muli<0}, \ref{lemma:muli>0}, since the factor $ \rho^{1-\kappa} $ plays no role. \begin{lemma} \label{lemma:pi-mu0} Assume that $ \re \mu_{0 i} = \tilde{\mu}_{0} < 0 $ and $ 1 \leq k \leq \min\{j, 2a - 1\}$. Then \begin{equation} \label{eq:pi-mu0} \| w_{j}(\rho) I_{0 i}( \rho^{-(k+1)} f(\rho)) \|_{A}^{2} \leq \tilde{C}_{0}^{2} \frac{1}{j A(j)^{2(1-\kappa)}} \| w_{j-k}(\rho) f(\rho) \|_{A}^{2} , \end{equation} for a suitable constant $ \tilde{C}_{0} > 0 $ independent of $ j $, and $ R = A(j)$. 
\end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:pi-mu0}] We have to estimate, using \eqref{eq:Ikjgl}, \begin{multline*} \| w_{j}(\rho) I_{0 i}( \rho^{-(k+1)} f(\rho)) \|_{A}^{2} = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} e^{-2 \tilde{\mu}_{0} \rho} \Big| \int_{{\mathbb{R}}} e^{\mu_{0i} (\rho - \sigma)} \\ \cdot H\left(-\sgn \left( \re \mu_{0i} - \tilde{\mu}_{0} + \epsilon_{\mu} \right) (\rho -\sigma) \right) \frac{H(\sigma - A)}{\sigma^{k+1}} f(\sigma) d \sigma \Big|^{2} d\rho . \end{multline*} Since $ \re \mu_{0i} - \tilde{\mu}_{0} = 0 $ and $ \epsilon_{\mu} > 0 $, we obtain \begin{multline*} \| w_{j}(\rho) I_{0 i}( \rho^{-(k+1)} f(\rho)) \|_{A}^{2} \\ = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} e^{-2 \tilde{\mu}_{0} \rho} \Big| \int_{{\mathbb{R}}} e^{\mu_{0i} (\rho - \sigma)} H\left(- (\rho -\sigma) \right) \frac{1}{\sigma^{k+1}} H(\sigma - A) f(\sigma) d \sigma \Big|^{2} d\rho \\ \leq \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} e^{-2 \tilde{\mu}_{0} \rho} \Big( \int_{{\mathbb{R}}} e^{\tilde{\mu}_{0} (\rho - \sigma)} H\left(- (\rho -\sigma) \right) \frac{1}{\sigma^{k+1}} H(\sigma - A) |f(\sigma)| d \sigma \Big)^{2} d\rho \\ = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} \Big( \int_{\rho}^{+\infty} \frac{1}{\sigma^{k+1}} H(\sigma - A) e^{- \tilde{\mu}_{0} \sigma} |f(\sigma)| d \sigma \Big)^{2} d\rho \\ = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} \Big( \int_{\rho}^{+\infty} \frac{H(\sigma - A)}{\sigma^{(j-\delta)\kappa + k(1-\kappa)+1}} \sigma^{(j-k-\delta)\kappa} e^{- \tilde{\mu}_{0} \sigma} |f(\sigma)| d \sigma \Big)^{2} d\rho . \end{multline*} Using H\"older inequality on the inner integral above we get \begin{multline*} \| w_{j}(\rho) I_{0 i}( \rho^{-(k+1)} f(\rho)) \|_{A}^{2} \\ \leq \| w_{j-k}(\rho) f(\rho) \|_{A}^{2} \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} \int_{\rho}^{+\infty} \frac{1}{\sigma^{2(j-\delta)\kappa + 2 k (1-\kappa)+2}} d\sigma \\ = L_{j k} \| w_{j-k}(\rho) f(\rho) \|_{A}^{2} \int_{A}^{+\infty} \frac{1}{\rho^{2k(1-\kappa) + 1}} d\rho \end{multline*} where we set $$ \int_{\rho}^{+\infty} \frac{1}{\sigma^{2(j-\delta)\kappa + 2 k (1-\kappa) +2}} d\sigma = \frac{L_{j k}}{\rho^{2(j-\delta)\kappa + 2k (1-\kappa) + 1} } . $$ The fact that $ k \geq 1 $ allows us to conclude the proof of the lemma. \end{proof} Finally we need one more result to estimate the first term in \eqref{eq:pi-u3}, i.e. the term corresponding to $ k = 0 $ when $ \re \mu_{0 i} = \tilde{\mu}_{0} $. \begin{lemma} \label{lemma:piP-1-pi} Assume that $ \re \mu_{0 i} = \tilde{\mu}_{0} < 0 $. \begin{equation} \label{eq:piP-1-pi} \| w_{j}(\rho) I_{0 i}( \rho^{-1} f(\rho)) \|_{A}^{2} \leq \frac{C_{0}}{j A(j)^{2(1-\kappa)}} \| w_{j}(\rho) \rho^{1-\kappa} f(\rho) \|_{A}^{2} , \end{equation} for a suitable constant $ C_{0} > 0 $ independent of $ j $, and $ R = A(j)$. 
\end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:piP-1-pi}] Arguing as in the beginning of the proof of Lemma \ref{lemma:pi-mu0} we have \begin{multline*} \| w_{j}(\rho) I_{0 i}( \rho^{-1} f(\rho)) \|_{A}^{2} = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} e^{-2\tilde{\mu}_{0} \rho} \Big | \int_{{\mathbb{R}}} e^{\mu_{0i} (\rho - \sigma)} \\ \cdot H\left(-\sgn \left( \re \mu_{0i} - \tilde{\mu}_{0} + \epsilon_{\mu} \right) (\rho -\sigma) \right) \frac{H(\sigma - A)}{\sigma} f(\sigma) d \sigma \Big|^{2} d\rho \\ \leq \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} \Big( \int_{\rho}^{+\infty} \frac{1}{\sigma} H(\sigma - A) e^{- \tilde{\mu}_{0} \sigma} |f(\sigma)| d \sigma \Big)^{2} d\rho \\ = \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} \Big( \int_{\rho}^{+\infty} \frac{H(\sigma - A)}{\sigma^{(j-\delta) \kappa + 2 - \kappa}} \sigma^{(j-\delta)\kappa + 1 - \kappa} e^{- \tilde{\mu}_{0} \sigma} |f(\sigma)| d \sigma \Big)^{2} d\rho . \end{multline*} Using the H\"older inequality on the inner integral above we get \begin{multline*} \| w_{j}(\rho) I_{0 i}( \rho^{-1} f(\rho)) \|_{A}^{2} \\ \leq \| w_{j}(\rho) \rho^{1-\kappa} f(\rho) \|_{A}^{2} \int_{A}^{+\infty} \rho^{2(j-\delta)\kappa} \Big( \int_{\rho}^{+\infty} \frac{1}{\sigma^{2(j-\delta)\kappa + 2 (2-\kappa) }} d\sigma \Big) d\rho \\ = L_{j} \| w_{j}(\rho)\rho^{1-\kappa} f(\rho) \|_{A}^{2} \int_{A}^{+\infty} \frac{1}{\rho^{1 +2(1 - \kappa)}} d\rho , \end{multline*} where we set $$ \int_{\rho}^{+\infty} \frac{1}{\sigma^{2(j-\delta)\kappa + 2 (2-\kappa)}} d\sigma = \frac{L_{j}}{\rho^{2(j-\delta)\kappa + 2(1-\kappa)+1} } . $$ Since $ L_{j} \leq C j^{-1} $ and $ \int_{A}^{+\infty} \rho^{-1 - 2(1-\kappa)} d\rho = \left(2(1-\kappa)\right)^{-1} A(j)^{-2(1-\kappa)} $, we obtain \eqref{eq:piP-1-pi}; this concludes the proof of the lemma. \end{proof} Going back to \eqref{eq:dgamma1} and using \eqref{eq:Drhogamma1} we have \begin{multline} \label{eq:gamma1} \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} P_{0}^{-1}\left(\frac{1}{\rho^{k+1}} \pi \tilde{g}_{jk}\right) \|_{0,A} \\ = \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{i=1}^{2a} A_{0 i} I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \phi_{0} \rangle ) \|_{A} \\ = s_{0}^{-\gamma_{1}} \| w_{j}(\rho) \sum_{i=1}^{2a} A_{0 i} \mu_{0 i}^{\gamma_{1}} I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \phi_{0} \rangle ) \|_{A} \\ \leq C_{1} \sum_{i=1}^{2a} \| w_{j}(\rho) I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} . \end{multline} To the last quantity we apply Lemmas \ref{lemma:pi-muli>0}, \ref{lemma:pi-muli<0} when $ \re \mu_{0i} \neq \tilde{\mu}_{0} $, Lemma \ref{lemma:pi-mu0} when $ \re \mu_{0i} = \tilde{\mu}_{0} $ and $ 1 \leq k \leq \min\{ j, 2a-1\} $ and Lemma \ref{lemma:piP-1-pi} when $ \re \mu_{0i} = \tilde{\mu}_{0} $ and $ k = 0 $. In the first case, i.e. when $ \re \mu_{0i} \neq \tilde{\mu}_{0} $, we obtain a better decay rate than in the case $ \re \mu_{0i} = \tilde{\mu}_{0} $, so we simply neglect the gain. We also remark that there is no need to keep track of the precise powers of $ \mu_{0} $ in the formula above because they may always be absorbed into a constant. We did that only to have a more symmetric formula. Consider first the terms where $ k = 0, 1, \ldots, \min\{j, 2a-1\} $ and $ \re \mu_{0i} \neq \tilde{\mu}_{0}$.
Applying Lemmas \ref{lemma:pi-muli>0}, \ref{lemma:pi-muli<0} in \eqref{eq:gamma1}, we get (see \eqref{eq:Pj}) \begin{multline*} \sum_{\re \mu_{0i} \neq \tilde{\mu}_{0}} \| w_{j}(\rho) I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} -2a + 1}{2a} } \phi_{0} \rangle ) \|_{A} \\ \leq C_{2} \| w_{j-k-1}(\rho) \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1}}{2a} - 1} \phi_{0} \rangle \|_{A} \\ = C_{2} \| w_{j-k-1}(\rho) \langle Q^{\nu + r} P_{k+1} \tilde{u}_{j-k}^{(j)} , \mu_{0}^{\frac{\gamma_{1}}{2a} - 1} \phi_{0} \rangle \|_{A} \\ = C_{2} \| w_{j-k-1}(\rho) \partial_{\rho}^{2a-k-1} \langle \tilde{u}_{j-k}^{(j)} , \mu_{0}^{\nu + \frac{\gamma}{2a}-1} \ {}^{t}\tilde{P}_{k+1} \phi_{0} \rangle \|_{A} \\ = C_{2} \mu_{0}^{\nu + \frac{\gamma}{2a}-1} \| w_{j-k-1}(\rho) \partial_{\rho}^{2a-k-1} \langle Q^{- \frac{2a-k-1}{2a}} \tilde{u}_{j-k}^{(j)} , Q^{\frac{2a-k-1}{2a}} \ {}^{t}\tilde{P}_{k+1} \phi_{0} \rangle \|_{A} \\ \leq \frac{C_{2}\mu_{0}^{\nu + \frac{\gamma}{2a}-1}}{A(j)^{\kappa}} \|Q^{\frac{2a-k-1}{2a}} \ {}^{t} \tilde{P}_{k+1} \phi_{0} \|_{0} \| w_{j-k}(\rho)\partial_{\rho}^{2a-k-1} Q^{- \frac{2a-k-1}{2a}} \tilde{u}_{j-k}^{(j)} \|_{0,A} \end{multline*} We remark that the norm $ \|Q^{\frac{2a-k-1}{2a}} \ {}^{t}\tilde{P}_{k+1} \phi_{0} \|_{0} $ is an absolute constant since $ 0 \leq k \leq 2a-1 $. Furthermore we see that the conditions $ \nu \geq -1 $ and $ \nu + \frac{\gamma}{2a} \geq 0 $, when $ \nu $, $ \gamma $ are the exponents of $ Q $, $ \partial_{\rho} $ respectively, are satisfied; moreover $ 2a - k -1 \leq j-k+\gamma^{\#} $ and hence we may apply the inductive hypothesis when $ k \geq 1 $ and the result of the preceding section for $ (1-\pi)u_{j} $ when $ k = 0 $, thus obtaining the bound \begin{multline*} \sum_{k=0}^{\min\{j, 2a-1\}} \sum_{\re \mu_{0i} \neq \tilde{\mu}_{0}} \| w_{j}(\rho) I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} -2a + 1}{2a} } \phi_{0} \rangle ) \|_{A} \\ \leq \frac{C_{3}\mu_{0}^{\nu + \frac{\gamma}{2a}}}{A(j)^{\kappa}} \sum_{k=0}^{\min\{j, 2a-1\}} \| w_{j-k}(\rho)\partial_{\rho}^{2a-k-1} Q^{- \frac{2a-k-1}{2a}} \tilde{u}_{j-k}^{(j)} \|_{0,A} \\ \leq \frac{C_{3}\mu_{0}^{\nu + \frac{\gamma}{2a}}}{A(j)} \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{2a-1} Q^{- \frac{2a-1}{2a}} (1-\pi)u_{j} \|_{0,A} \\ + \frac{C_{3}\mu_{0}^{\nu + \frac{\gamma}{2a}}}{A(j)^{\kappa}} \sum_{k=1}^{\min\{j, 2a-1\}} \| w_{j-k}(\rho)\partial_{\rho}^{2a-k-1} Q^{- \frac{2a-k-1}{2a}} u_{j-k} \|_{0,A} \\ \leq \frac{C_{3}\mu_{0}^{\nu + \frac{\gamma}{2a}}}{A(j)^{\kappa}} \sum_{k=0}^{\min\{j, 2a-1\}} C_{0}^{1 + \sigma(j-k)+ \sigma' (2a-k-1) } \\ \cdot (\lambda(j-k, 0, 0) + 1)^{\lambda(j-k, 0, 0)} . \end{multline*} Hence we get the estimate \begin{multline} \label{eq:k>0.} \sum_{k=1}^{\min\{j, 2a-1\}} \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{\re \mu_{0i} \neq \tilde{\mu}_{0}} A_{0 i} I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \phi_{0} \rangle ) \|_{A} \\ \leq \frac{1}{2} C_{0}^{1 + \sigma j + \sigma'(\nu + \gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} , \end{multline} when $ k \geq 1 $ and $ \re \mu_{0i} \neq \tilde{\mu}_{0} $, provided we choose $ C_{0} $ large enough. Let us now consider the case $ k=0 $ and $ \re \mu_{0i} \neq \tilde{\mu}_{0} $. 
We have to estimate \begin{multline*} \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{\re \mu_{0i} \neq \tilde{\mu}_{0}} A_{0 i} I_{0 i}(\rho^{-1} \langle (1-\pi) u_{j} , \phi_{0} \rangle ) \|_{A} \\ \leq \frac{C_{3}\mu_{0}^{\nu + \frac{\gamma}{2a}}}{A(j)^{\kappa}} C_{0}^{1 + \sigma j+ \sigma' (2a-1)} (\lambda(j, 0, 0) + 1)^{\lambda(j, 0, 0)} \\ \leq \frac{C_{3} C_{0}^{2a\sigma'} }{R_{0}^{\kappa}(j+1)^{\kappa}} \frac{\mu_{0}^{\nu + \frac{\gamma}{2a}}}{ C_{0}^{\sigma'(\nu+\gamma)} } \ C_{0}^{1 + \sigma j+ \sigma' (\nu+\gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} \end{multline*} We make the choice $ C_{0}^{\sigma'} > \mu_{0} $ and $ R_{0}^{\kappa} > 2 C_{3} C_{0}^{2a\sigma'} $, so that \begin{multline} \label{eq:kgeq0} \sum_{k=0}^{\min\{j, 2a-1\}} \| w_{j}(\rho) \textrm{\DH}^{\gamma_{1}}_{\rho} \sum_{\re \mu_{0i} \neq \tilde{\mu}_{0}} A_{0 i} I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \phi_{0} \rangle ) \|_{A} \\ \leq C_{0}^{1 + \sigma j + \sigma'(\nu + \gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} \end{multline} \bigskip So far we treated \eqref{eq:dgamma1} in the case when $ \re \mu_{0i} \neq \tilde{\mu}_{0} $ for any value of $ k $. \bigskip Consider now \eqref{eq:dgamma1}; we have to estimate \eqref{eq:gamma1} when $ \re \mu_{0i} = \tilde{\mu}_{0} $. \begin{equation} \label{eq:617} C_{1} \sum_{k=0}^{\min\{j, 2a -1\}} \sum_{\re \mu_{0i} = \tilde{\mu}_{0}} \| w_{j}(\rho) I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} . \end{equation} Let us start considering the case when $ k=0 $: $$ C_{1} \| w_{j}(\rho) I_{0 i}(\rho^{-1} \langle \tilde{g}_{j 0} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} . $$ Applying Lemma \ref{lemma:piP-1-pi} we get \begin{multline*} C_{1} \| w_{j}(\rho) I_{0 i}(\rho^{-1} \langle \tilde{g}_{j 0} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} \\ \leq C_{1} \frac{\tilde{C}_{0}}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \| w_{j}(\rho) \rho^{1-\kappa} \langle \tilde{g}_{j 0} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle \|_{A} \\ = C_{1} \frac{\tilde{C}_{0}}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \| w_{j}(\rho) \rho^{1-\kappa} \langle Q^{\nu +r} P_{1} (1-\pi)u_{j} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle \|_{A} \\ = C_{1} \frac{\tilde{C}_{0}}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \| w_{j}(\rho) \rho^{1-\kappa} \langle \partial_{\rho}^{2a-1} Q^{- \frac{2a-1}{2a}} (1-\pi)u_{j}, Q^{\frac{2a-1}{2a}} \mu_{0}^{\frac{\gamma - 2a + 1}{2a} + \nu} \ {}^{t}\tilde{P}_{1} \phi_{0} \rangle \|_{A} \\ \leq C_{1} \frac{\tilde{C}_{0}}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \mu_{0}^{\frac{\gamma - 2a + 1}{2a} + \nu} \| Q^{\frac{2a-1}{2a}} \ {}^{t}\tilde{P}_{1} \phi_{0} \|_{0} \\ \cdot \| w_{j}(\rho) \rho^{1-\kappa} Q^{- \frac{2a-1}{2a}} \partial_{\rho}^{2a-1} (1-\pi) u_{j} \|_{0,A} \end{multline*} We may apply the estimate obtained in the preceding section for $ (1-\pi) u_{j} $, since $ - \frac{2a-1}{2a} \geq -1 $ and $ - \frac{2a-1}{2a} + \frac{2a-1}{2a} = 0 $, $ 2a-1 \leq j + \gamma^{\#} $. 
We thus get \begin{multline} \label{eq:pi-k=0} C_{1} \| w_{j}(\rho) I_{0 i}(\rho^{-1} \langle \tilde{g}_{j 0} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} \\ \leq \frac{C_{3}}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \mu_{0}^{\frac{\gamma}{2a} + \nu} \| w_{j}(\rho) \rho^{1-\kappa} Q^{- \frac{2a-1}{2a}} \partial_{\rho}^{2a-1} (1-\pi) u_{j} \|_{0,A} \\ \leq \frac{C_{3}}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \mu_{0}^{\frac{\gamma}{2a} + \nu} C_{0}^{1 + \sigma j + \sigma' \frac{(2a-1)^{2}}{2a} } (\lambda(j, 0, 0)+1)^{\lambda(j, 0, 0)}, \end{multline} where $$ \lambda(j, 0, 0) = \frac{j}{s_{0}} . $$ Hence from \eqref{eq:pi-k=0} we have \begin{multline} \label{eq:pi-k=0--2} C_{1} \| w_{j}(\rho) I_{0 i}(\rho^{-1} \langle \tilde{g}_{j 0} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} \\ \leq \frac{C_{3} C_{0}^{2 a \sigma'} }{j^{\frac{1}{2}} A(j)^{1-\kappa}} \mu_{0}^{\frac{\gamma}{2a} + \nu} C_{0}^{-\sigma'(\nu+\gamma)} C_{0}^{1 + \sigma j + \sigma'(\nu + \gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} \\ \leq C_{0}^{1 + \sigma j + \sigma'(\nu + \gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} , \end{multline} if we make the choice $ C_{0}^{\sigma'} > \mu_{0} $, $ R_{0}^{1-\kappa} > C_{3} C_{0}^{2 a \sigma'} $. To conclude the analysis of the last term on the right hand side of \eqref{eq:puj-2} we must examine \eqref{eq:617} when $ \re \mu_{0i} = \tilde{\mu}_{0} $ for $ k \in \{1, \ldots, \min\{j, 2a-1\}\} $. It is enough to estimate one out of the two terms occurring: $$ C_{1} \sum_{k=1}^{\min\{j, 2a -1\}} \| w_{j}(\rho) I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} . $$ Applying Lemma \ref{lemma:pi-mu0} we have \begin{multline*} C_{1} \sum_{k=1}^{\min\{j, 2a -1\}} \| w_{j}(\rho) I_{0 i}(\rho^{-(k+1)} \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle ) \|_{A} \\ \leq \tilde{C}_{0}C_{1} \sum_{k=1}^{\min\{j, 2a -1\}} \frac{1}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \| w_{j-k}(\rho) \langle \tilde{g}_{j k} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle \|_{A} \\ = \tilde{C}_{0}C_{1} \sum_{k=1}^{\min\{j, 2a -1\}} \frac{1}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \| w_{j-k}(\rho) \langle Q^{\nu + r} P_{k+1} u_{j-k} , \mu_{0}^{\frac{\gamma_{1} - 2a + 1}{2a}} \phi_{0} \rangle \|_{A} \\ = \tilde{C}_{0}C_{1} \sum_{k=1}^{\min\{j, 2a -1\}} \frac{1}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \\ \cdot \| w_{j-k}(\rho) \partial_{\rho}^{2a-k-1} \langle Q^{-\frac{2a-k-1}{2a}} u_{j-k} , \mu_{0}^{\frac{\gamma - 2a + 1}{2a} + \nu} Q^{\frac{2a-k-1}{2a}} \ {}^{t}\tilde{P}_{k+1} \phi_{0} \rangle \|_{A} \\ \leq \tilde{C}_{0}C_{1} \sum_{k=1}^{\min\{j, 2a -1\}} \frac{1}{j^{\frac{1}{2}} A(j)^{1-\kappa}} \mu_{0}^{\frac{\gamma - 2a + 1}{2a} + \nu} \| Q^{\frac{2a-k-1}{2a}} \ {}^{t}\tilde{P}_{k+1} \phi_{0} \|_{0} \\ \cdot \| w_{j-k}(\rho) \partial_{\rho}^{2a-k-1} Q^{-\frac{2a-k-1}{2a}} u_{j-k} \|_{0,A} \\ \leq \tilde{C}_{0}C_{2} \frac{\mu_{0}^{\frac{\gamma - 2a + 1}{2a} + \nu} }{j^{\frac{1}{2}} A(j)^{1-\kappa}} \sum_{k=1}^{\min\{j, 2a -1\}} \| w_{j-k}(\rho) \partial_{\rho}^{2a-k-1} Q^{-\frac{2a-k-1}{2a}} u_{j-k} \|_{0,A} . \end{multline*} Here we absorbed the norm $ \| Q^{\frac{2a-k-1}{2a}} \ {}^{t}\tilde{P}_{k+1} \phi_{0} \|_{0} $ into an absolute constant $ C_{2} $ since $ 1\leq k \leq 2a - 1$. 
We point out that the conditions $ \nu \geq -1 $ and $ \nu + \frac{\gamma}{2a} \geq 0 $, when $ \nu $, $ \gamma $ are the exponents of $ Q $, $ \partial_{\rho} $ respectively, are satisfied and hence we may apply the inductive hypothesis thus obtaining the bound $$ \tilde{C}_{0}C_{2} \frac{\mu_{0}^{\frac{\gamma - 2a + 1}{2a} + \nu} }{j^{\frac{1}{2}} A(j)^{1-\kappa}} \sum_{k=1}^{\min\{j, 2a -1\}} C_{0}^{1 + \sigma(j-k) + \sigma'(2a-k-1)} (\lambda+1)^{\lambda}, $$ where $$ \lambda = \frac{j-k}{s_{0}} < \lambda(j, \nu, \gamma). $$ Since $ \sigma = 2a \sigma' $ (see \eqref{eq:sigma}) we may bound the above quantity by \begin{multline} \label{eq:k>0--mu0i=mu0t} C_{3} C_{0}^{-\sigma'} \frac{\mu_{0}^{\nu + \frac{\gamma}{2a}}}{C_{0}^{\sigma'(\nu + \gamma)}} C_{0}^{1 + \sigma j + \sigma'(\nu + \gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} \\ \leq C_{0}^{1 + \sigma j + \sigma'(\nu + \gamma) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)} . \end{multline} This concludes the estimation of the second term on the right hand side of \eqref{eq:puj-2}. We are left with the estimate of the first term in \eqref{eq:puj-2}. We split the sum as \begin{multline} \label{eq:kgrt1} \sum_{s=0}^{r-1} \| w_{j}(\rho) \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \left( \frac{1}{\rho} \pi Q^{\nu +s} P_{1} (1-\pi)u_{j} \right ) \|_{0, A} \\ + \sum_{k=1}^{\min\{j, 2a-1\}} \sum_{s=0}^{r-1} \| w_{j}(\rho) \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \left( \frac{1}{\rho^{k+1} } \pi Q^{\nu +s} P_{k+1} u_{j-k} \right ) \|_{0, A} \\ = B_{0} + \sum_{k=1}^{\min\{j, 2a-1\}} B_{k}. \end{multline} Let us examine $ B_{k} $, $ k > 0 $, first. \begin{multline*} \sum_{s=0}^{r-1} \| w_{j}(\rho) \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \left( \frac{1}{\rho^{k+1} } \pi Q^{\nu +s} P_{k+1} u_{j-k} \right ) \|_{0, A} \\ \leq C_{4} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{\ell=0}^{k+1} \binom{\gamma - 2a(1+s)}{m} \frac{(k+m)!}{k!} \frac{1}{A(j)^{m}} \\ \cdot \| w_{j}(\rho) \frac{1}{\rho^{k+1} } \partial_{\rho}^{\gamma - 2as - m - k - 1} \pi Q^{\nu +s} (x \partial_{x})^{\ell} u_{j-k} \|_{0, A} \\ \leq C_{5} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{\ell=0}^{k+1} \binom{k+m}{m} \frac{(\gamma - 2a(1+s))!}{(\gamma - 2a(1+s) - m)!} \\ \cdot \frac{1}{A(j)^{m+1+k(1-\kappa) }} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2as - m - k - 1} \pi Q^{\nu +s} (x \partial_{x})^{\ell} u_{j-k} \|_{0, A} . \end{multline*} Now $ \pi Q^{\nu +s} (x \partial_{x})^{\ell} u_{j-k} = \langle Q^{\nu +s} (x \partial_{x})^{\ell} u_{j-k} , \phi_{0} \rangle \phi_{0} $, so that we get $$ \langle (x \partial_{x})^{\ell} u_{j-k} , \mu_{0}^{\nu+s} \phi_{0} \rangle \phi_{0} = \langle u_{j-k} , \mu_{0}^{\nu+s}\ \ {}^{t}(x \partial_{x})^{\ell} \phi_{0} \rangle \phi_{0} . $$ Hence the above expression is bounded by \begin{multline*} C_{6} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \sum_{\ell=0}^{k+1} \binom{k+m}{m} \frac{(\gamma - 2a(1+s))!}{(\gamma - 2a(1+s) - m)!} \\ \cdot \frac{1}{A(j)^{m+1+k(1-\kappa) }} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2as - m - k - 1} u_{j-k} \|_{0, A} , \end{multline*} where we used the fact that since $ \ell $ takes up a finite number of values, we can bound $ \| {}^{t}(x \partial_{x})^{\ell} \phi_{0} \|_{0} $ by an absolute constant. 
Now \begin{multline} \label{eq:factorials} \binom{k+m}{m} \frac{(\gamma - 2a(1+s))!}{(\gamma -2a(1+s) -m)!} \frac{1}{A(j)^{m+1 +k(1-\kappa)}} \\ = \binom{k+m}{m} \frac{(\gamma - 2a(1+s))!}{(\gamma -2a(1+s) -m)!} \frac{1}{(R_{0} (j+1))^{m+1+k(1-\kappa)}} \\ \leq \binom{k+m}{m} \frac{(\gamma - 2a(1+s)) \cdots (\gamma - 2a(1+s) - m + 1)}{R_{0}^{m+1+k(1-\kappa)} (j+1)^{m+1+k(1-\kappa)}} \\ \leq \binom{k+m}{m} \left(\frac{j+\gamma^{\#}}{R_{0}(j+1)}\right)^{m} \leq 2^{2a-1} \left(\frac{2 \gamma^{\#}}{R_{0}}\right)^{m} . \end{multline} Thus the above quantity is estimated by $$ C_{7} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \left(\frac{2 \gamma^{\#}}{R_{0}}\right)^{m} \| w_{j-k}(\rho) \partial_{\rho}^{\gamma - 2as - m - k - 1} u_{j-k} \|_{0, A} . $$ Using the inductive hypothesis we have that the above sum is bounded by $$ C_{7} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \left(\frac{2 \gamma^{\#}}{R_{0}}\right)^{m} C_{0}^{1+\sigma(j-k) + \sigma'(\gamma - 2as - m - k - 1)} (\lambda+1)^{\lambda}, $$ where $$ \lambda = \frac{j-k}{s_{0}} + \left( \frac{\gamma - 2as - m - k - 1}{a}\right) \frac{q-1}{q} . $$ We see immediately that $$ \lambda < \lambda(j, \nu, \gamma). $$ As for the sums above we remark that, choosing $ R_{0} > 2 R_{1} $, we may estimate them by $$ C_{8} C_{0}^{-(\sigma + \sigma') k} \left( \frac{\mu_{0}}{C_{0}^{\sigma'}}\right)^{\nu} \ \sum_{s=0}^{r-1} \left(\frac{ \mu_{0}}{C_{0}^{\sigma' 2 a}}\right)^{s} C_{0}^{1+\sigma j + \sigma'(\gamma + \nu)} (\lambda(j, \nu, \gamma) +1)^{\lambda(j, \nu, \gamma)} , $$ so that the choice $ C_{0}^{\sigma'} > \max\{\mu_{0}, \mu_{0}^{\frac{1}{2a}}\} $, $ C_{0}^{\sigma'} > 2a C_{8} $ allows us to obtain from \eqref{eq:kgrt1} \begin{equation} \label{eq:Bk} \sum_{k=1}^{\min\{j, 2a-1\}} B_{k} \leq C_{0}^{-\sigma'} C_{0}^{1+\sigma j + \sigma'(\gamma + \nu)} (\lambda(j, \nu, \gamma) +1)^{\lambda(j, \nu, \gamma)} . \end{equation} Next we have to estimate the term $ B_{0} $ in \eqref{eq:kgrt1}, namely $$ B_{0} = \sum_{s=0}^{r-1} \| w_{j}(\rho) \textrm{\DH}^{\gamma - 2a(1+s)}_{\rho} \left( \frac{1}{\rho} \pi Q^{\nu +s} P_{1} (1-\pi)u_{j} \right ) \|_{0, A} . $$ Taking the $ \rho $-derivative as above and recalling \eqref{eq:dslash} we have \begin{multline*} B_{0} \leq C_{9} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \binom{\gamma -2a(1+s)}{m} m! \\ \cdot \| w_{j}(\rho) \frac{1}{\rho^{1+m} } \partial_{\rho}^{\gamma-2a(1+s) - m} \pi Q^{\nu +s} P_{1} (1-\pi)u_{j} \|_{0, A} \\ = C_{9} \sum_{s=0}^{r-1} \sum_{m=0}^{\gamma - 2a(1+s)} \frac{(\gamma -2a(1+s))!}{(\gamma - 2a(1+s) - m)!} \\ \cdot \| w_{j}(\rho) \frac{1}{\rho^{2 - \kappa +m} } \rho^{1-\kappa} \partial_{\rho}^{\gamma-2a(1+s) - m} \pi Q^{\nu +s} P_{1} (1-\pi)u_{j} \|_{0, A} \\ \leq C_{10} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \frac{(\gamma -2a(1+s))!}{(\gamma - 2a(1+s) - m)!} \frac{1}{A(j)^{2-\kappa+m}} \\ \cdot \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma-2a s - m -1} \langle (1-\pi)u_{j} , {}^{t}\tilde{P}_{1} \phi_{0} \rangle \phi_{0} \|_{0, A} \\ \leq C_{11} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \frac{(\gamma -2a(1+s))!}{(\gamma - 2a(1+s) - m)!} \frac{1}{A(j)^{2-\kappa+m}} \\ \cdot \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma-2a s - m -1} (1-\pi)u_{j} \|_{0,A} , \end{multline*} where we used \eqref{eq:Pj} and included in $ C_{11} $ the norm $ \| {}^{t}\tilde{P}_{1} \phi_{0} \|_{0} $, which is an absolute constant.
Arguing as we did in \eqref{eq:factorials} we get \begin{multline*} B_{0} \leq C_{12} \frac{1}{R_{0}} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \left(\frac{2 \gamma^{\#}}{R_{0}}\right)^{m} \\ \cdot \| w_{j}(\rho) \rho^{1-\kappa} \partial_{\rho}^{\gamma - 2as - m - 1} (1-\pi) u_{j} \|_{0, A} . \end{multline*} Using the inductive hypothesis we may bound the above quantity by $$ C_{12} \frac{1}{R_{0}} \sum_{s=0}^{r-1} \mu_{0}^{\nu+s} \sum_{m=0}^{\gamma - 2a(1+s)} \left( \frac{2 \gamma^{\#}}{R_{0}}\right)^{m} C_{0}^{1+\sigma j + \sigma'(\gamma - 2as - m - 1)} (\lambda+1)^{\lambda}, $$ where $$ \lambda = \frac{j}{s_{0}} + \left( \frac{\gamma - 2as - m - 1}{a}\right) \frac{q-1}{q} . $$ We see immediately that $$ \lambda < \lambda(j, \nu, \gamma). $$ Taking $ R_{0} $ large enough and choosing $ C_{0} $ as above we conclude that \begin{equation} \label{eq:B0} B_{0} \leq C_{0}^{1+\sigma j + \sigma'(\gamma + \nu) - \sigma'} (\lambda(j, \nu, \gamma) + 1)^{\lambda(j, \nu, \gamma)}. \end{equation} Summing up, when we plug \eqref{eq:B0}, \eqref{eq:Bk}, \eqref{eq:k>0--mu0i=mu0t}, \eqref{eq:pi-k=0--2}, \eqref{eq:kgeq0} into \eqref{eq:puj-2} and choose $ C_{0} $ sufficiently large we get the desired estimate. This ends the proof of Theorem \ref{th:est-p}. \end{proof} \section{End of the Proof of Theorem \ref{th:est}} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} Using Theorem \ref{th:est-1-p}, \ref{th:est-p} and renaming $ C_{0} $ we obtain Theorem \ref{th:est'}. Next we want to prove Theorem \ref{th:est}. To this end we are going to use Proposition \ref{prop:ineqQ} with $ \theta = 0 $: \begin{multline*} \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{\ell = 0}^{m(0, I)} C^{\ell + 1} \binom{m(0, I)}{\ell} \langle I \rangle^{\ell} \| w_{j}(\rho) \partial_{\rho}^{\gamma} Q^{\frac{1}{2}\left(\langle I \rangle - \ell \frac{q}{q-1} \right)} u_{j} \|_{0, A}, \end{multline*} where $$ \langle I \rangle = \alpha + \frac{\beta}{q-1}, $$ and $$ m(0, I) = \left[ \langle I \rangle \frac{q-1}{q} \right] , $$ where the square brackets denote the integer part. By Theorem \ref{th:est'} we have \begin{multline*} \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{\ell = 0}^{m(0, I)} C^{\ell + 1} \binom{m(0, I)}{\ell} \langle I \rangle^{\ell} C_{0}^{1+\sigma j + \sigma' (\gamma + \langle I \rangle - \ell \frac{q}{q-1} )} \\ \cdot \left(1 + \lambda(j, \frac{1}{2}\left(\langle I \rangle - \ell \frac{q}{q-1} \right), \gamma)\right)^{\lambda(j, \frac{1}{2}\left(\langle I \rangle - \ell \frac{q}{q-1} \right), \gamma)} . 
\end{multline*} Now $$ \lambda(j, \frac{1}{2}\left(\langle I \rangle - \ell \frac{q}{q-1} \right), \gamma) = \frac{j}{s_{0}} + \langle I \rangle \frac{q-1}{q} + \frac{\gamma}{a} \frac{q-1}{q} - \ell, $$ and, since $$ \langle I \rangle \leq \frac{q}{q-1} \lambda(j, \frac{1}{2}\left(\langle I \rangle - \ell \frac{q}{q-1} \right), \gamma) , $$ we obtain that \begin{multline*} \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ \leq \sum_{\ell = 0}^{m(0, I)} C_{1}^{\ell + 1} \binom{m(0, I)}{\ell} C_{0}^{1+\sigma j + \sigma' (\gamma + \langle I \rangle - \ell \frac{q}{q-1} )} \left(1 + \lambda(j, \frac{1}{2} \langle I \rangle, \gamma)\right)^{\lambda(j, \frac{1}{2}\left(\langle I \rangle \right), \gamma)} \\ \leq C_{0}^{1+\sigma j + \sigma' (\gamma + \langle I \rangle )} \left(1 + \lambda(j, \frac{1}{2} \langle I \rangle, \gamma)\right)^{\lambda(j, \frac{1}{2}\left(\langle I \rangle \right), \gamma)} \\ \sum_{\ell = 0}^{m(0, I)} C_{1}^{\ell + 1} \binom{m(0, I)}{\ell} C_{0}^{- \ell \frac{q}{q-1}\sigma'} \\ \leq C_{0}^{1+\sigma j + \sigma' (\gamma + \langle I \rangle )} \left(1 + \lambda(j, \frac{1}{2} \langle I \rangle, \gamma)\right)^{\lambda(j, \frac{1}{2}\left(\langle I \rangle \right), \gamma)} C_{1} \left( 1 + \frac{C_{1}}{C_{0}^{\frac{q}{q-1}\sigma'}} \right)^{m(0, I)} . \end{multline*} Choosing $ C_{0}^{\frac{q}{q-1}\sigma'} > C_{1} $ and recalling that $ m(0, I) < \langle I \rangle \leq \alpha + \beta $ we have that \begin{multline*} \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ \leq 2^{\alpha+\beta} C_{1} C_{0}^{1+\sigma j + \sigma' (\gamma + \langle I \rangle )} \left(1 + \lambda(j, \frac{1}{2} \langle I \rangle, \gamma)\right)^{\lambda(j, \frac{1}{2}\left(\langle I \rangle \right), \gamma)} . \end{multline*} Moreover $$ \lambda(j, \frac{1}{2} \langle I \rangle, \gamma) = \frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} , $$ so that \begin{multline} \label{eq:4.1} \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \leq C_{u}^{1 + j + \alpha + \beta + \gamma} \\ \cdot \left( 1 + \frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)^{\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q}} . \end{multline} Renaming the constant $ C_{u} $, we conclude the proof of Theorem \ref{th:est}. \section{Pointwise Estimates of the $ u_{j} $} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \label{sec:pt} First of all we point out that from the estimates of Theorem \ref{th:est} it is straightforward to deduce the same type of estimates when up to two extra derivatives in $ (x, \rho) $ are applied, namely \begin{multline} \label{eq:norm} \| \partial_{\rho}^{\gamma'} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \leq C_{u}^{1 + j + \alpha + \beta + \gamma} \\ \cdot \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline} for $ \alpha' + \gamma' \leq 2 $, with a possibly different constant $ C_{u} $. The purpose of adding two extra derivatives with respect to $ (x, \rho) $ is to deduce pointwise estimates via the Sobolev embedding theorem. To this end we use cutoff functions, so that the embedding can be applied on the whole plane.
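For the reader's convenience, the embedding we have in mind is the standard two-dimensional one; schematically (a rough statement, where the precise value of the constant $ C_{S} $ is irrelevant for our purposes),
\begin{equation*}
\| g \|_{L^{\infty}({\mathbb{R}}^{2})} \leq C_{S} \sum_{\alpha' + \gamma' \leq 2} \| \partial_{x}^{\alpha'} \partial_{\rho}^{\gamma'} g \|_{L^{2}({\mathbb{R}}^{2})} ,
\end{equation*}
to be applied to $ g = w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} \chi_{j}(\rho) u_{j}(x, \rho) $, with the cutoffs $ \chi_{j} $ constructed in the next lemma. This is precisely why the estimates \eqref{eq:norm} carry the two extra derivatives $ \alpha' + \gamma' \leq 2 $.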
\begin{lemma} \label{lemma:chij} There exist smooth functions $ \chi_{j}(\rho) $ such that \begin{itemize} \item[(i) ]{} $ \supp \chi_{j} \subset \{ \rho \geq R_{0}(j+1) \} $. \item[(ii) ]{} $ \chi_{j} \equiv 1 $ for $ \rho \geq 3 R_{0}(j+1) $. \item[(iii)]{} $ | D^{\gamma}_{\rho} \chi_{j}(\rho) | \leq C_{\chi}^{\gamma} $, for $ 0 \leq \gamma \leq R_{0}(j+1) $. \end{itemize} \end{lemma} \begin{proof} Let $ \chi $ denote the characteristic function of the half line $ [2 R_{0}(j+1), + \infty [ $, and denote by $ \psi $ a function in $ C_{0}^{\infty}({\mathbb{R}}) $, $ \supp \psi \subset \{ |x| \leq 1 \} $, and such that $ \int_{{\mathbb{R}}}\psi(x) dx = 1 $. Define, assuming that $ R_{0} $ is an integer, $$ \chi_{j} = \chi * \underbrace{\psi * \cdots * \psi}_{R_{0}(j+1) \text{ times}}. $$ The support of $ \chi_{j} $ is evidently contained in $ \{ \rho \geq R_{0}(j+1) \} $ and $ \chi_{j} \equiv 1 $ for $ \rho \geq 3 R_{0}(j+1) $. Moreover $$ D^{\gamma} \chi_{j}(\rho) = \chi * \underbrace{D\psi * \cdots * D\psi}_{\gamma \text{ times}} * \psi * \cdots * \psi , $$ for $ \gamma \leq R_{0}(j+1) $. Hence, by Young inequality for the convolution, $$ | D^{\gamma} \chi_{j}(\rho) | \leq \left( \| D\psi \|_{L^{1}({\mathbb{R}})} \right)^{\gamma} . $$ This completes the proof of the lemma. \end{proof} Consider now $$ \| \partial_{\rho}^{\gamma'} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} \chi_{j}(\rho) u_{j}(x, \rho) \|_{0} , $$ where $ \| \cdot \|_{0} $ denotes the norm in $ L^{2}({\mathbb{R}}_{\rho} \times {\mathbb{R}}_{x})$. We have \begin{multline*} \| \partial_{\rho}^{\gamma'} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} \chi_{j}(\rho) u_{j}(x, \rho) \|_{0} \\ \leq \| \chi_{j}(\rho) \partial_{\rho}^{\gamma'} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0} \\ + 2^{\gamma+\gamma'} \sum_{\substack{k=1\\ \gamma_{1}' + \gamma_{1} = \gamma' + \gamma -k}}^{\gamma+\gamma'} \binom{\gamma + \gamma'}{k} \\ \cdot \| D^{k}_{\rho}\chi_{j}(\rho) \cdot \partial_{\rho}^{\gamma'_{1}} \left( \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma_{1}} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \right) \|_{0} \\ \leq \| \partial_{\rho}^{\gamma'} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ + 2^{\gamma+\gamma'} \sum_{\substack{k=1\\ \gamma_{1}' + \gamma_{1} = \gamma' + \gamma -k}}^{\gamma+\gamma'} \binom{\gamma + \gamma'}{k} C_{\chi}^{k} \| \partial_{\rho}^{\gamma'_{1}} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma_{1}} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{0, A} \\ \leq C_{u}^{1 + j + \alpha + \beta + \gamma} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \\ + 2^{\gamma+\gamma'} \sum_{\substack{k=1\\ \gamma_{1}' + \gamma_{1} = \gamma' + \gamma -k}}^{\gamma+\gamma'} \binom{\gamma + \gamma'}{k} C_{\chi}^{k} C_{u}^{1 + j + \alpha + \beta + \gamma_{1}} \\ \cdot \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma_{1}}{a} \frac{q-1}{q} \right)! \\ \leq C_{1}^{j+\alpha+\beta+\gamma' + \gamma} C_{u}^{1 + j + \alpha + \beta + \gamma + \gamma'} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! 
\\ \leq ((C_{1} C_{u})^{2})^{1 + j + \alpha + \beta + \gamma} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline*} provided $ C_{u} > C_{\chi} $ and $ \gamma \leq j+\gamma^{\#}-2 $. Summing up we obtain \begin{multline} \label{eq:estR} \| \partial_{\rho}^{\gamma'} \partial_{x}^{\alpha'} w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} \chi_{j}(\rho) u_{j}(x, \rho) \|_{0} \\ \leq C_{1u}^{1 + j + \alpha + \beta + \gamma} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline} for a suitable constant $ C_{1u} > 0 $. The Sobolev embedding theorem yields \begin{multline*} | w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) | \leq \| w_{j}(\rho) \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} \chi_{j}(\rho) u_{j}(x, \rho) \|_{L^{\infty}({\mathbb{R}}^{2})} \\ \leq C_{2u}^{1 + j + \alpha + \beta + \gamma} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline*} for $ \rho \geq 3 R_{0}(j+1) $. As a consequence we get, recalling \eqref{eq:wj}, \eqref{eq:mitlda0}, \begin{multline} \label{eq:pest} | \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) | \\ \leq K_{u}^{1 + j + \alpha + \beta + \gamma} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta) \kappa} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline} So far we have proved the \begin{theorem} \label{th:pest-t} Let the functions $ u_{j} $ denote the solutions of the equations \eqref{eq:tr1} and \eqref{eq:tr2}, for $ j \geq 1 $ and let $ u_{0} $ denote the function in \eqref{eq:u0-}, \eqref{eq:mitlda0}. Then for every $ R_{1} > 0 $ there exist positive constants $ K_{u} $, $ R_{0} > R_{1} $, $ \delta $, $ \kappa $, with $ 0 < \delta < 1 $, $ \frac{1}{s_{0}} < \kappa < 1 $, $ \kappa \delta > \frac{1}{2} $, such that for $ j \geq 1 $ the estimates \eqref{eq:pest} are satisfied for $ \rho \geq 3 R_{0} (j+1) $ and $ \gamma \leq j+ \gamma^{\#} $, where $ \gamma^{\#} $ denotes a fixed positive constant $ \geq 2a $. \end{theorem} Next we also need a lemma to estimate $ x $-derivatives of the projections $ \pi $ and $ 1-\pi $ applied to $ \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) $. This will be used in the proof of Theorem \ref{th:Beu}. \begin{lemma} \label{lemma:derproj} Under the same assumptions of Theorem \ref{th:pest-t} we have the estimates \begin{multline} \label{eq:derproj1} \left| \partial_{x}^{\alpha'} \pi \left(\partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \right) \right| \leq M_{u}^{1 + j + \alpha + \alpha'+ \beta + \gamma} \\ \cdot e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta) \kappa} \left(\frac{j}{s_{0}} + (\alpha+\alpha') \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline} and \begin{multline} \label{eq:derproj2} \left| \partial_{x}^{\alpha'} (1-\pi) \left(\partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \right) \right| \leq M_{u}^{1 + j + \alpha + \alpha'+ \beta + \gamma} \\ \cdot e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta) \kappa} \left(\frac{j}{s_{0}} + (\alpha+\alpha') \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline} for $ \rho \geq 3 R_{0} (j+1) $ and $ \gamma \leq j+\gamma^{\#} $ and for any $ \alpha $, $ \alpha' $. 
\end{lemma} \begin{proof} Let us start by proving \eqref{eq:derproj1}. We have \begin{multline*} \left| \partial_{x}^{\alpha'} \pi \left(\partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \right) \right| = \left| \langle \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) , \phi_{0} \rangle \partial_{x}^{\alpha'} \phi_{0}(x) \right| \\ \leq \| \partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \|_{L^{\infty}({\mathbb{R}}_{x})} \| \phi_{0} \|_{L^{1}({\mathbb{R}}_{x})} \left| \partial_{x}^{\alpha'} \phi_{0}(x) \right| . \end{multline*} By Corollary \ref{cor:Dphi}, Appendix B, we have that $$ | \partial_{x}^{\alpha'} \phi_{0}(x) | \leq C_{\phi}^{\alpha' + 1} \alpha'!^{\frac{q-1}{q}}, $$ and moreover $ \| \phi_{0} \|_{L^{1}({\mathbb{R}}_{x})} \leq \tilde{C}_{\phi} $ for a suitable constant $ \tilde{C}_{\phi} > 0 $. Thus, if $ \rho \geq 3 R_{0} (j+1) $, from Theorem \ref{th:pest-t} we conclude that \begin{multline*} \left| \partial_{x}^{\alpha'} \pi \left(\partial_{\rho}^{\gamma} x^{\beta} \partial_{x}^{\alpha} u_{j}(x, \rho) \right) \right| \leq \tilde{C}_{\phi} \cdot C_{\phi}^{\alpha' + 1} \alpha'!^{\frac{q-1}{q}} \\ \cdot K_{u}^{1 + j + \alpha + \beta + \gamma} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta) \kappa} \left(\frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{\beta}{q} + \frac{\gamma}{a} \frac{q-1}{q} \right)! \end{multline*} which gives \eqref{eq:derproj1} provided $ M_{u} $ is chosen sufficiently large. Finally, to prove \eqref{eq:derproj2} it is enough to bound each term of the difference: the bound for the term containing the identity follows from \eqref{eq:pest}, while the bound for the term containing the projection has already been proved. This ends the proof of the lemma. \end{proof} \section{Turning a Formal Solution into a True Solution} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} In the preceding sections we constructed functions $ u_{j} $, $ j \geq 0$, such that the formal sum $ u = \sum_{j\geq 0} u_{j} $ formally satisfies the equation \eqref{eq:transp} \begin{multline} \label{eq:Pu-form} P(x, y, D_{x}, D_{y}) A(u)(x, y) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ \sum_{j=0}^{2a} \frac{1}{\rho^{j}} P_{j}(x, D_{x}, D_{\rho}) u(x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho = 0 . \end{multline} Our purpose is to turn $ u $ into a true solution of an equation of the form \begin{equation} \label{eq:PAu=f} P(x, y, D_{x}, D_{y}) A(u)(x, y) = f(x, y), \end{equation} for some smooth function $ f $ defined in $ {\mathbb{R}}^{2}_{(x, y)} $. In order to do so we need a set of cutoff functions similar to those discussed in Lemma \ref{lemma:chij}. \begin{lemma} \label{lemma:psij} There exist smooth functions $ \psi_{j}(\rho) $, $ j \geq 0 $, such that \begin{itemize} \item[(i) ]{} $ \supp \psi_{j} \subset \{ \rho \geq 3 R_{0}(j+1) \} $. \item[(ii) ]{} $ \psi_{j} \equiv 1 $ for $ \rho \geq 6 R_{0}(j+1) $. \item[(iii) ]{} $ | D^{\gamma}_{\rho} \psi_{j}(\rho) | \leq C_{\psi}^{\gamma} $, for $ 0 \leq \gamma \leq R_{0}(j+1) $. \end{itemize} \end{lemma} The proof is the same as that of Lemma \ref{lemma:chij} and we omit it. Define \begin{equation} \label{eq:v} v(x, \rho) = \sum_{j\geq 0} \psi_{j}(\rho) u_{j}(x, \rho) . \end{equation} The function $ v $ is well defined and smooth, since the above sum is locally finite in $ \rho $: indeed, by item (i) of Lemma \ref{lemma:psij}, $ \psi_{j}(\rho) = 0 $ when $ \rho < 3 R_{0}(j+1) $, so that for any fixed $ \rho $ only the indices $ j \leq \rho/(3R_{0}) - 1 $ give a non-zero contribution.
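As an aside, the iterated-convolution construction behind Lemmas \ref{lemma:chij} and \ref{lemma:psij} can be experimented with numerically. The following is a minimal sketch, not part of the proof: it discretizes the construction of $ \chi_{j} $ with small, purely illustrative values of $ R_{0} $ and $ j $ (the constants used in the actual argument are different) and checks properties (i)--(iii) of Lemma \ref{lemma:chij} for the first derivative.
\begin{verbatim}
import numpy as np

# Illustrative parameters only; in the text R_0 is large and j is arbitrary.
R0, j, h = 2, 3, 0.01
n_conv = R0 * (j + 1)                      # number of mollifications
rho = np.arange(0.0, 8.0 * R0 * (j + 1), h)

# Bump psi supported in [-1, 1], normalized so that its integral is 1.
t = np.arange(-1.0, 1.0 + h, h)
psi = np.zeros_like(t)
inside = np.abs(t) < 1.0
psi[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
psi /= psi.sum() * h

# chi = indicator of [2 R0 (j+1), +infty); chi_j = chi * psi * ... * psi.
chi_j = (rho >= 2 * R0 * (j + 1)).astype(float)
for _ in range(n_conv):
    chi_j = np.convolve(chi_j, psi, mode="same") * h

# (i)   support contained in {rho >= R0 (j+1)}: the value below should be ~ 0
print(np.abs(chi_j[rho <= R0 * (j + 1)]).max())
# (ii)  chi_j == 1 for rho >= 3 R0 (j+1), away from the right edge of the grid
plateau = (rho >= 3 * R0 * (j + 1)) & (rho <= rho[-1] - n_conv - 1)
print(np.abs(chi_j[plateau] - 1.0).max())
# (iii) first-derivative bound |chi_j'| <= ||psi'||_{L^1}
dpsi_L1 = np.abs(np.gradient(psi, h)).sum() * h
print(np.abs(np.gradient(chi_j, h)).max(), "<=", dpsi_L1)
\end{verbatim}
Higher derivatives are handled in the same way, each additional derivative falling on one more mollifying factor; this is where the bound $ C_{\chi}^{\gamma} $ with $ C_{\chi} = \| D\psi \|_{L^{1}({\mathbb{R}})} $ comes from.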
The remaining part of the present section is devoted to computing $ P A(v) $, where $ v $ is defined by \eqref{eq:v}. Obviously $ A(v) $ is no longer a null solution of $ P $, because of the errors introduced by the cutoff functions $ \psi_{j} $. First we recall the definition of the Beurling classes: \begin{definition} \label{def:Bs} Let $ \Omega $ be an open subset of $ {\mathbb{R}}^{2} $ and $ s \geq 1 $. We define the class $ \mathscr{B}^{s}(\Omega) $ (of Beurling type functions on $ \Omega $) as the set of all smooth functions $ f(x, y) $ defined on $ \Omega $ such that for every $ \epsilon > 0 $ and for every $ K \Subset \Omega $ compact, there exists a positive constant $ C = C(\epsilon, K) $ such that \begin{equation} \label{eq:Bs} | \partial_{x}^{\alpha} \partial_{y}^{\beta} f(x, y)| \leq C \epsilon^{\alpha+\beta} \alpha!^{s} \beta!^{s}, \end{equation} for every $ (x, y) \in K $ and every $ \alpha $, $ \beta $. We also define the global class $ \mathscr{B}^{s}_{g}(\Omega) $ as the set of all smooth functions $ f(x, y) $ defined on $ \Omega $ such that for every $ \epsilon > 0 $ there exists a positive constant $ C = C(\epsilon, \Omega) $ such that \eqref{eq:Bs} holds for every $ (x, y) \in \Omega $ and every $ \alpha $, $ \beta $. \end{definition} \begin{remark} \label{rem:1} It is a consequence of the above definition that every function $ f \in G^{s'}(\Omega)$ belongs to $ \mathscr{B}^{s}(\Omega) $, for $ s' < s $. \end{remark} We point out that $ v(x, \rho) = 0 $ if $ \rho \leq 3 R_{0} $ and that, by \eqref{eq:pest} and the support properties of the $ \psi_{j} $ (arguing as in the proof of Theorem \ref{th:Beu} below, in a simpler way), \begin{equation} \label{eq:convv} | \rho^{k} \partial_{x}^{\alpha} v(x, \rho) | \leq C_{\alpha k} e^{-\tilde{\lambda} \rho} , \end{equation} for suitable positive constants $ C_{\alpha k} $, $ \tilde{\lambda} < |\tilde{\mu}_{0}| $. As a consequence $ A(v) $ is a smooth function defined on $ {\mathbb{R}}^{2} $. The idea behind the next theorem is the following: applying $ P $ to the function $ A(v) $ yields a function of the form $ A(\sum_{j} \tilde{v}_{j}) $, where, because of the transport equations and the support properties of the cutoff functions $ \psi_{j} $, each $ \tilde{v}_{j} = \mathscr{O}(\rho^{-j}) $ has compact $ \rho $-support, contained in a region where $ \rho \sim j $. This implies that the sum is $ \mathscr{O}(e^{- c \rho \log \rho}) $. Via Lemma \ref{lemma:intBs0} we see that $ P A(v) $ belongs to $ \mathscr{B}^{s_{0}}_{g}({\mathbb{R}}^{2}) $. \begin{theorem} \label{th:Beu} We have that \begin{equation} \label{eq:Beu} P(x, y, D_{x}, D_{y}) A(v)(x, y) \in \mathscr{B}^{s_{0}}_{g}({\mathbb{R}}^{2}). \end{equation} \end{theorem} \begin{proof} By \eqref{eq:Pu-form} we have \begin{multline*} P(x, y, D_{x}, D_{y}) A(v)(x, y) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k}(x, D_{x}, D_{\rho}) v(x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho .
\end{multline*} Since the sum $ v = \sum_{j\geq 0} \psi_{j}(\rho) u_{j}(x, \rho) $ is a finite sum for any $ \rho $ in a compact set, we have \begin{multline*} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k}(x, D_{x}, D_{\rho}) v(x, \rho) = \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k}(x, D_{x}, D_{\rho}) \sum_{j\geq 0} \psi_{j} u_{j} \\ = \sum_{j\geq 0} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} \sum_{\gamma=0}^{2a} D_{\rho}^{\gamma} \psi_{j} \frac{1}{\gamma!} P_{k}^{(\gamma)}(x, D_{x}, D_{\rho}) u_{j} , \end{multline*} where $ P_{k}^{(\gamma)} $ denotes the symbol $ \partial_{\sigma}^{\gamma} P_{k}(x, \xi, \sigma) $. Hence \begin{multline} \label{eq:Si} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} P_{k}(x, D_{x}, D_{\rho}) v(x, \rho) = \sum_{j\geq 0} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} \psi_{j} P_{k} u_{j} \\ + \sum_{j\geq 0} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} \sum_{\gamma=1}^{2a} D_{\rho}^{\gamma} \psi_{j} \frac{1}{\gamma!} P_{k}^{(\gamma)}(x, D_{x}, D_{\rho}) u_{j} \\ = S_{1}(x, \rho) + S_{2}(x, \rho) . \end{multline} We remark that the $ \rho $-derivatives appearing in the above expression have order at most $ 2a $. For the first summand we are going to organize the terms using the transport equations \eqref{eq:tr1} and \eqref{eq:tr2}, while for the second summand we use the estimates \eqref{eq:pest} and the fact that the derivatives of $ \psi_{j} $ have compact support. Define $$ f_{i}(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ S_{i}(x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho . $$ Let us start with $ f_{2} $. We have $$ \partial_{x}^{\alpha} \partial_{y}^{\beta} f_{2}(x, y) = i^{\beta} \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + (2+ \alpha) \frac{s_{0}}{q} +\beta s_{0} } \left[ \partial_{x}^{\alpha} S_{2}(x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho . $$ In view of \eqref{eq:Pj}, the $ x $-derivatives of $ S_{2} $ are given by $$ \partial_{x}^{\alpha} S_{2}(x, \rho) = \sum_{j\geq 0} \sum_{k=0}^{2a} \sum_{\gamma=1}^{2a} \sum_{\alpha'=0}^{k} \binom{\alpha}{\alpha'} \frac{1}{\rho^{k}}D_{\rho}^{\gamma} \psi_{j} \frac{1}{\gamma!} P_{k, (\alpha') }^{(\gamma)}(x, D_{x}, D_{\rho}) \partial_{x}^{\alpha-\alpha'} u_{j} , $$ where $ P_{k, (\alpha') }^{(\gamma)} $ denotes the symbol $ \partial_{x}^{\alpha'} \partial_{\sigma}^{\gamma} P_{k}(x, \xi, \sigma) $ and we can assume without loss of generality that $ \alpha > 2a$. Let us estimate $ | \partial_{x}^{\alpha} S_{2}(x, \rho) | $. We have \begin{multline} \label{eq:daS2} | \partial_{x}^{\alpha} S_{2}(x, \rho) | \\ \leq \sum_{j\geq 0} \sum_{k=0}^{2a} \sum_{\gamma=1}^{2a} \sum_{\alpha'=0}^{k} \binom{\alpha}{\alpha'} | D_{\rho}^{\gamma} \psi_{j}| \frac{1}{\gamma!} \left | P_{k, (\alpha') }^{(\gamma)}(x, D_{x}, D_{\rho}) \partial_{x}^{\alpha-\alpha'} u_{j} \right | . \end{multline} Recalling the definitions \eqref{eq:P0}, \eqref{eq:P1}, \eqref{eq:Pj} and \eqref{eq:Pjtilda}, we see that the above sum has a typical summand of the form $$ \binom{\alpha}{\alpha'} \frac{1}{\gamma!} | D_{\rho}^{\gamma} \psi_{j}| \left| \partial_{\rho}^{2a -k - \gamma} x^{m-\alpha'} \partial_{x}^{m+\alpha-\alpha'} u_{j} \right|, $$ if $ k > 0 $, for $ m = 0, 1, \ldots, k $, with the proviso that the term is zero if $ 2a - k - \gamma < 0 $ or $ m-\alpha' < 0 $, and of the form $$ \binom{\alpha}{\alpha'} \frac{1}{\gamma!} | D_{\rho}^{\gamma} \psi_{j}| \left|\partial_{\rho}^{2a - \gamma} \partial_{x}^{\alpha} u_{j} \right| , $$ if $ k=0 $. Let us start by examining the first type. 
First observe that, since $ \gamma > 0 $, the $ \rho $-support of every term is contained in the interval $ 3 R_{0}(j+1) \leq \rho \leq 6 R_{0}(j+1) $ and that $ |D_{\rho}^{\gamma} \psi_{j}| $ is uniformly bounded since $ \gamma $ runs over a finite set of indices. As for the function $ u_{j} $ we may apply \eqref{eq:pest}, thus obtaining \begin{multline*} \left| \partial_{\rho}^{2a -k - \gamma} x^{m-\alpha'} \partial_{x}^{m+\alpha-\alpha'} u_{j} \right| \\ \leq K_{u}^{1+j+m+\alpha-\alpha'+m-\alpha'+2a-k-\gamma} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta)\kappa} \\ \cdot \left(\frac{j}{s_{0}} + (m+\alpha-\alpha')\frac{q-1}{q} + \frac{m-\alpha'}{q} + \frac{2a-k-\gamma}{a} \frac{q-1}{q}\right)! \\ \leq K_{u}^{1+j+k+\alpha-2\alpha'+2a-\gamma} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta)\kappa} \\ \cdot \left(\frac{j}{s_{0}} + \frac{k}{s_{0}} + (\alpha-\alpha')\frac{q-1}{q} - \frac{\alpha'}{q} + \frac{2a-\gamma}{a} \frac{q-1}{q}\right)! \end{multline*} where we bound $ m $ by $ k $. We can then see that there is a suitable positive constant, $ M_{1} $, independent of $ j $, $ \alpha $, $ \alpha' $, such that \begin{equation} \label{eq:k>0} \left| \partial_{\rho}^{2a -k - \gamma} x^{m-\alpha'} \partial_{x}^{m+\alpha-\alpha'} u_{j} \right| \leq M_{1}^{1+j+\alpha} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta)\kappa} j^{\frac{j}{s_{0}}} \alpha^{\alpha \frac{q-1}{q}} . \end{equation} Furthermore we can make the same argument to estimate the term with $ k = 0 $, i.e. \begin{equation} \label{eq:k=0} \left|\partial_{\rho}^{2a - \gamma} \partial_{x}^{\alpha} u_{j} \right| \leq M_{1}^{1+j+\alpha} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta)\kappa} j^{\frac{j}{s_{0}}} \alpha^{\alpha \frac{q-1}{q}} . \end{equation} Hence, when $ k > 0 $ we get \begin{multline} \label{eq:k>0-1} \binom{\alpha}{\alpha'} \frac{1}{\gamma!} | D_{\rho}^{\gamma} \psi_{j}| \left| \partial_{\rho}^{2a -k - \gamma} x^{m-\alpha'} \partial_{x}^{m+\alpha-\alpha'} u_{j} \right| \\ \leq M_{2}^{1+j+\alpha} | D_{\rho}^{\gamma} \psi_{j}| e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta)\kappa} j^{\frac{j}{s_{0}}} \alpha^{\alpha \frac{q-1}{q}} \\ \leq M_{2}^{1+j+\alpha} \left( \frac{1}{3R_{0}}\right)^{\frac{j}{s_{0}}} | D_{\rho}^{\gamma} \psi_{j}| e^{\tilde{\mu}_{0} \rho} \rho^{\delta \kappa} \rho^{-j( \kappa - \frac{1}{s_{0}} )} \alpha^{\alpha \frac{q-1}{q}} \\ \leq M_{2}^{1+j+\alpha} C_{1} \left( \frac{1}{3R_{0}}\right)^{\frac{j}{s_{0}}} e^{\tilde{\mu}_{0} \rho} \rho^{\delta \kappa} \rho^{- (\frac{\rho}{6R_{0}} - 1) ( \kappa - \frac{1}{s_{0}} )} \alpha^{\alpha \frac{q-1}{q}} \\ \leq M_{2}^{1+j+\alpha} C_{2} \left( \frac{1}{3R_{0}}\right)^{\frac{j}{s_{0}}} e^{-\left( \frac{ \kappa - \frac{1}{s_{0}}}{6R_{0}} \right) \rho \log \rho + \frac{\tilde{\mu}_{0}}{2}\rho} \alpha^{\alpha \frac{q-1}{q}} , \end{multline} and an analogous estimate for the term containing \eqref{eq:k=0}. Plugging the above estimate into \eqref{eq:daS2} and keeping into account that the summations in $ k $, $ \gamma $, $ \alpha' $ run over a finite range of indices, we obtain with a new positive constant $ M_{3} $ \begin{equation} \label{eq:daS2-1} | \partial_{x}^{\alpha} S_{2}(x, \rho) | \leq \sum_{j\geq 0} M_{3}^{1+j+\alpha} \left( \frac{1}{3R_{0}}\right)^{\frac{j}{s_{0}}} e^{-\left( \frac{ \kappa - \frac{1}{s_{0}}}{6R_{0}} \right) \rho \log \rho+ \frac{\tilde{\mu}_{0}}{2}\rho } \alpha^{\alpha \frac{q-1}{q}} . 
\end{equation} Thus choosing $ (3 R_{0})^{\frac{1}{s_{0}}} > M_{3} $ we get \begin{equation} \label{eq:daS2-2} | \partial_{x}^{\alpha} S_{2}(x, \rho) | \leq M_{4}^{1+\alpha} \ e^{-\left( \frac{ \kappa - \frac{1}{s_{0}}}{6R_{0}} \right) \rho \log \rho + \frac{\tilde{\mu}_{0}}{2}\rho } \alpha^{\alpha \frac{q-1}{q}} . \end{equation} We need the following \begin{lemma} \label{lemma:intBs0} Let $ \mu > 0 $, $ s > 1 $. For any $ \epsilon > 0 $ there is a constant $ C_{\epsilon} > 0 $ such that, for every $ \alpha \geq 0 $, \begin{equation} \label{eq:intBs0} \int_{0}^{+\infty} e^{- \mu \rho \log \rho} \rho^{s \alpha} d\rho \leq C_{\epsilon} \epsilon^{\alpha} \alpha!^{s}. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:intBs0}] Pick a positive $ M $ to be chosen later and write \begin{eqnarray*} \int_{0}^{+\infty} e^{- \mu \rho \log \rho} \rho^{s\alpha} d\rho & = & \int_{0}^{M} e^{- \mu \rho \log \rho} \rho^{s\alpha} d\rho + \int_{M}^{+\infty} e^{- \mu \rho \log \rho} \rho^{s\alpha} d\rho \\ & = & I_{1} + I_{2}. \end{eqnarray*} Consider $ I_{2} $. Because $ e^{- \mu \rho \log \rho} \leq e^{- \mu \rho \log M} $ for $ \rho \geq M $ (we take $ M > 1 $), we get $$ I_{2} \leq \int_{0}^{+\infty} e^{- \mu \rho \log M} \rho^{s\alpha} d\rho = \frac{\Gamma(s\alpha + 1)}{(\mu \log M)^{s\alpha + 1}} \leq \frac{C_{s}^{\alpha + 1}}{\mu \log M} \left( \frac{1}{(\mu \log M)^{s}}\right)^{\alpha} \alpha!^{s} , $$ where $ C_{s} $ depends only on $ s $, by Stirling's formula. Choosing $ M $ so large that $ C_{s} (\mu \log M)^{-s} \leq \epsilon $ we prove the assertion for $ I_{2} $. Consider $ I_{1} $. Since $ \rho \log \rho \geq - 1/e $ for $ \rho > 0 $, $$ I_{1} \leq e^{\mu/e} \frac{M^{s\alpha + 1}}{s \alpha + 1} \leq e^{\mu/e} M \frac{\left(\frac{M^{s}}{\epsilon}\right)^{\alpha}}{\alpha!^{s} } \epsilon^{\alpha} \alpha!^{s} , $$ and, since $ \left(M^{s}/\epsilon\right)^{\alpha} / \alpha!^{s} $ is bounded uniformly with respect to $ \alpha $, this implies the assertion also for $ I_{1} $. \end{proof} Going back to the derivative of $ f_{2} $ we have \begin{multline} \label{eq:df2} |\partial_{x}^{\alpha} \partial_{y}^{\beta} f_{2}(x, y)| \\ \leq \int_{0}^{+\infty} \rho^{\re r + (2+ \alpha) \frac{s_{0}}{q} +\beta s_{0} } \left| \partial_{x}^{\alpha} S_{2}(x, \rho) \right|_{x = x\rho^{\frac{s_{0}}{q}}} d\rho \\ \leq M_{4}^{1+\alpha} \ \alpha^{\alpha \frac{q-1}{q}} \int_{0}^{+\infty} \rho^{\re r + (2+ \alpha) \frac{s_{0}}{q} +\beta s_{0} } e^{-\left( \frac{ \kappa - \frac{1}{s_{0}}}{6R_{0}} \right) \rho \log \rho + \frac{\tilde{\mu}_{0}}{2}\rho} d\rho \\ \leq M_{5}^{1+\alpha} \ \alpha^{\alpha \frac{q-1}{q}} \int_{0}^{+\infty} \rho^{\left(\frac{\alpha}{q} +\beta\right) s_{0} } e^{-\left( \frac{ \kappa - \frac{1}{s_{0}}}{6R_{0}} \right) \rho \log \rho } d\rho . \end{multline} Applying Lemma \ref{lemma:intBs0} we obtain that \begin{multline} \label{eq:df2-1} |\partial_{x}^{\alpha} \partial_{y}^{\beta} f_{2}(x, y)| \leq M_{5}^{1+\alpha} \ \alpha^{\alpha \frac{q-1}{q}} C_{\epsilon} \epsilon^{\frac{\alpha}{q} +\beta} \left(\frac{\alpha}{q} +\beta\right)!^{s_{0}} \\ \leq \tilde{C}_{\epsilon_{1}} \epsilon_{1}^{\alpha + \beta} \alpha^{\alpha \left(\frac{q-1}{q} +\frac{s_{0}}{q}\right) } \beta^{\beta s_{0}} \leq \tilde{C}_{\epsilon_{1}} \epsilon_{1}^{\alpha + \beta} \alpha^{\alpha s_{0}} \beta^{\beta s_{0}} , \end{multline} since the inequality $$ \frac{q-1}{q} + \frac{s_{0}}{q} < s_{0} $$ is obviously true, being equivalent to $ s_{0} > 1 $. This proves that the assertion is true for $ f_{2} $. Consider now $ f_{1} $: $$ f_{1}(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[ \sum_{j\geq 0} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} \psi_{j} P_{k} u_{j}(x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho . $$ Using Proposition \ref{prop:equiv}, or rather its proof, we may regroup the terms in the above summation according to the scheme of \eqref{eq:tr1}, \eqref{eq:tr2}.
\begin{multline} \label{eq:f1-1} f_{1}(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + 2 \frac{s_{0}}{q}} \left[\psi_{0} P_{0} u_{0} + \sum_{j\geq 1} \left( \sum_{k=0}^{\min\{j, 2a\}} \frac{1}{\rho^{k}} \psi_{j-k} (1 - \pi) P_{k} u_{j-k} \right) \right. \\ + \sum_{j\geq 1} \left(\vphantom{\sum_{k=1}^{\min\{j, 2a-1\}}} \psi_{j} \pi P_{0} u_{j} + \frac{1}{\rho} \psi_{j} \pi P_{1} (1-\pi) u_{j} + \frac{1}{\rho} \psi_{j-1} \pi P_{1} \pi u_{j-1} \right. \\ \left . \left . + \sum_{k=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \psi_{j-k} \pi P_{k+1} u_{j-k} \right) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho \\ = f_{10}(x, y) + f_{11}(x, y) + f_{12}(x, y) . \end{multline} Observe that $ f_{10} = 0 $ since $ P_{0} u_{0} = 0 $ because of \eqref{eq:u0-}. Consider $ f_{11} $. For a fixed $ j $, the support of $$ \sum_{k=0}^{\min\{j, 2a\}} \frac{1}{\rho^{k}} \psi_{j-k} (1 - \pi) P_{k} u_{j-k} $$ is contained in the interval $ [3 R_{0}((j-2a)_{+}+1), 6R_{0}(j+1)] $. In fact, for $ \rho \geq 6 R_{0}(j+1) $ the functions $ \psi_{j-k} \equiv 1 $ and \eqref{eq:tr1} is satisfied. On the other hand if $ \rho \leq 3 R_{0}((j-2a)_{+}+1) $, then $ \psi_{j-k} \equiv 0 $. Moreover \begin{multline} \label{eq:f11d} \partial_{x}^{\alpha} \partial_{y}^{\beta} f_{11}(x, y) = i^{\beta} \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + (2+ \alpha) \frac{s_{0}}{q} +\beta s_{0} } \\ \cdot \left[ \sum_{j\geq 1} \sum_{k=0}^{\min\{j, 2a\}} \frac{1}{\rho^{k}} \psi_{j-k} \partial_{x}^{\alpha} (1 - \pi) P_{k} u_{j-k} \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho . \end{multline} In view of Lemma \ref{lemma:derproj}, arguing as we did before for the derivatives of $ f_{2} $, we conclude that $ f_{11} \in \mathscr{B}^{s_{0}}_{g}({\mathbb{R}}^{2}) $. Finally consider $ f_{12} $. We first remark that, since $ \pi P_{1}\pi = 0 $ by Lemma \ref{lemma:piPpi}, \begin{multline*} \psi_{j} \pi P_{0} u_{j} + \frac{1}{\rho} \psi_{j} \pi P_{1} (1-\pi) u_{j} + \frac{1}{\rho} \psi_{j-1} \pi P_{1} \pi u_{j-1} \\ + \sum_{k=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \psi_{j-k} \pi P_{k+1} u_{j-k} \\ = \psi_{j} \pi P_{0} u_{j} + \frac{1}{\rho} \psi_{j} \pi P_{1} u_{j} + \sum_{k=1}^{\min\{j, 2a-1\}} \frac{1}{\rho^{k+1}} \psi_{j-k} \pi P_{k+1} u_{j-k} . \end{multline*} Again, as above, we have that, for a fixed $ j $, the $ \rho $-support of the above quantity is contained in the interval $ [3 R_{0}((j-2a+1)_{+}+1), 6R_{0}(j+1)] $. Arguing as above we complete the proof of the theorem. \end{proof} We remark that, as a consequence of Theorem \ref{th:Beu}, we found a function $ f(x, y) \in \mathscr{B}^{s_{0}}_{g}({\mathbb{R}}^{2}) $ such that \begin{equation} \label{eq:PAv=f} P A(v) = f, \qquad \text{ in } {\mathbb{R}}^{2}. \end{equation} We also need a technical variant of Theorem \ref{th:Beu}, adding a finite-order vanishing rate at infinity to the property of being in $ \mathscr{B}^{s_{0}}_{g}({\mathbb{R}}^{2}) $. \begin{corollary} \label{cor:Beu-somm} We use the same notation as in Theorem \ref{th:Beu}. Let $ b > 1 $ denote a fixed positive integer and $ \epsilon > 0 $. Then for every $ k $, $ 1 \leq k \leq b $, there is a constant $ C_{\epsilon} > 0 $ such that \begin{equation} \label{eq:Beu-somm} \left| \langle y \rangle^{k} \partial_{x}^{\alpha} \partial_{y}^{\beta} PA(v)(x, y) \right| \leq C_{\epsilon} \epsilon^{\alpha+\beta} \alpha!^{s_{0}} \beta!^{s_{0}}, \end{equation} for every $ \alpha $, $ \beta \in {\mathbb{N}} \cup \{ 0 \} $ and for any $ (x, y) \in {\mathbb{R}}^{2} $.
\end{corollary} \begin{proof} The proof follows the same lines as that of Theorem \ref{th:Beu}, so we only sketch it, detailing the parts where it differs. We have, using \eqref{eq:gamma0} with $ 2a $ replaced by $ k $, \begin{multline*} y^{k} \partial_{x}^{\alpha} \partial_{y}^{\beta} \left( P(x, y, D_{x}, D_{y}) A(v)(x, y) \right) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \sum_{h=0}^{k} \gamma_{k, h} \frac{(-1)^{h}}{\rho^{k s_{0} - h}} \partial_{\rho}^{h} \\ \cdot \left[\rho^{r + (2+\alpha) \frac{s_{0}}{q} + \beta s_{0}} \partial_{x}^{\alpha} \left( \sum_{j=0}^{2a} \frac{1}{\rho^{j}} P_{j}(x, D_{x}, D_{\rho}) v(x, \rho) \right) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho . \end{multline*} Proceeding as we did for \eqref{eq:drhobin}, \eqref{eq:PA:2}, \eqref{eq:Pj} we obtain \begin{multline*} y^{k} \partial_{x}^{\alpha} \partial_{y}^{\beta} \left( P(x, y, D_{x}, D_{y}) A(v)(x, y) \right) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{k(1-s_{0}) + N_{\alpha, \beta}} \\ \cdot \left[ \sum_{j'=0}^{k} \frac{1}{\rho^{j'}} P^{\#}_{j'}(x \partial_{x}) \partial_{\rho}^{k-j'} \partial_{x}^{\alpha} \sum_{j=0}^{2a} \frac{1}{\rho^{j}} P_{j}(x, D_{x}, D_{\rho}) v(x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho , \end{multline*} where $ P^{\#}_{j'} $ are polynomials of degree $ j' $ in $ x \partial_{x} $ whose coefficients are $ \mathscr{O}(\hat{C}^{\alpha+\beta}) $ with $ \hat{C} $ a positive absolute constant. Using \eqref{eq:Si} we are reduced to estimating, for $ i= 1, 2 $, the expressions $$ \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{k(1-s_{0}) + N_{\alpha, \beta}} \left[ \sum_{j'=0}^{k} \frac{1}{\rho^{j'}} P^{\#}_{j'}(x \partial_{x}) \partial_{\rho}^{k-j'} \partial_{x}^{\alpha} S_{i} (x, \rho) \right]_{x = x\rho^{\frac{s_{0}}{q}}} d\rho . $$ Consider first the quantity involving $ S_{2} $. The typical summand in the quantity in square brackets has the form $$ \mathscr{O}(\hat{C}^{\alpha+\beta}) \frac{1}{\rho^{j'}} x^{\gamma} \partial_{x}^{\gamma+\alpha} \partial_{\rho}^{k-j'} S_{2} , $$ where $ \gamma $, $ j' \leq k $. Arguing as for the derivation of \eqref{eq:daS2-2}, choosing $ \gamma^{\#} $ large enough depending on $ k $, we obtain \begin{multline} \label{eq:daS2pp} \left| \mathscr{O}(\hat{C}^{\alpha+\beta}) x^{\gamma} \partial_{x}^{\gamma+\alpha} \partial_{\rho}^{k-j'} S_{2}(x, \rho) \right| \\ \leq M_{2}^{1+\alpha+\beta} \ e^{-\left( \frac{ \kappa - \frac{1}{s_{0}}}{6R_{0}} \right) \rho \log \rho + \frac{\tilde{\mu}_{0}}{2}\rho } \alpha^{\alpha \frac{q-1}{q}} . \end{multline} Then the conclusion follows as in the proof of Theorem \ref{th:Beu}. Let us examine the term containing $ S_{1} $. As in the proof of Theorem \ref{th:Beu} we see that $$ S_{1}(x, \rho) = \sum_{j\geq 0} \sum_{k=0}^{2a} \frac{1}{\rho^{k}} \psi_{j} P_{k} u_{j}, $$ so that, after regrouping as in \eqref{eq:f1-1}, for each fixed $ j $ the $ \rho $-support of the corresponding terms is contained in the interval $ [3 R_{0}((j-2a)_{+}+1), 6R_{0}(j+1)] $. Then the proof goes along the same lines as that of Theorem \ref{th:Beu}. This completes the proof of the corollary. \end{proof} \section{End of the Proof} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} To finish the proof of Theorem \ref{th:1} we argue by contradiction. Assume that $ P $ is Gevrey-$ s $ hypoelliptic for some $ s $ with $ 1 \leq s < s_{0} $.
By Theorem 3.1 in \cite{metivier80} it follows from \eqref{eq:PAv=f} that \begin{equation} \label{eq:AvinB} A(v) \in \mathscr{B}^{s_{0}}({\mathbb{R}}^{2}) . \end{equation} Here is a brief description of what we are going to do in what follows. First we show that any $ y $-derivative of $ A(v) $ is integrable over $ {\mathbb{R}} $ for $ x = 0 $. Then we prove that $ A(v)(0, y) \in \mathscr{B}_{g}^{s_{0}}({\mathbb{R}}) $ with some decreasing rate at infinity, and this allows us to show that for any $ \delta > 0 $ \begin{equation} \label{eq:Fav1} \left | \mathscr{F}(A(v))(0, \eta) \right| \leq C_{\delta} e^{- \delta^{-1} |\eta|^{\frac{1}{s_{0}}}} , \end{equation} for a suitable $ C_{\delta} > 0 $. On the other hand the construction of $ A(v) $ implies that its Fourier transform satisfies a bound from below of the form \begin{equation} \label{eq:bbelow} \left | \mathscr{F}(A(v))(0, \eta) \right| \geq C_{0} \eta^{\lambda'} e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}}, \qquad \eta \geq (6 R_{0})^{s_{0}}, \end{equation} for a suitable $ C_{0} > 0 $ and $ \lambda' \in {\mathbb{R}} $, where $ \tilde{\mu}_{0} $ is defined in \eqref{eq:mitlda0}. This is the desired contradiction. \bigskip \begin{lemma} \label{lemma:global} For any $ \alpha $, $ \beta $ there exists a positive constant $ C_{\alpha, \beta} $ such that \begin{equation} \label{eq:L2} (1 + x^{2k} + y^{2k}) \left| \partial_{x}^{\alpha} \partial_{y}^{\beta} A(v)(x, y) \right| \leq C_{\alpha, \beta}, \end{equation} for $ k \leq b $, $ b \in {\mathbb{N}} $ a fixed integer. In particular, if we choose $ b $ suitably, the $ (\alpha, \beta) $-derivatives of $ A(v) $ are in $ L^{2}({\mathbb{R}}^{2}) $. \end{lemma} \begin{proof} Consider $$ x^{2k} \partial_{x}^{\alpha} D_{y}^{\beta} A(v)(x, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r+(\alpha-2k)\frac{s_{0}}{q} + \beta s_{0} } \left(x^{2k} \partial_{x}^{\alpha} v \right)(x\rho^{\frac{s_{0}}{q}} , \rho) d\rho . $$ Proceeding analogously to the proof of Theorem \ref{th:Beu}, from \eqref{eq:pest} we get \begin{multline} \label{eq:xdalfa} \left| x^{2k} \partial_{x}^{\alpha} v(x, \rho) \right| \leq \sum_{j \geq 0} \psi_{j}(\rho) \left | x^{2k} \partial_{x}^{\alpha} u_{j}(x, \rho) \right| \\ \leq \sum_{j \geq 0} \psi_{j}(\rho) e^{\tilde{\mu}_{0} \rho} K_{u}^{1+j+\alpha+2k} \rho^{-(j-\delta)\kappa} \left( \frac{j}{s_{0}} + \alpha \frac{q-1}{q} + \frac{2k}{q}\right)! \\ \leq C_{\alpha} e^{\tilde{\mu}_{0} \rho} \sum_{j \geq 0} \tilde{K}_{u}^{1+j} \left(3 R_{0}(j+1)\right)^{-(j-\delta)\kappa} (j+1)^{\frac{j}{s_{0}}} \\ \leq C_{1 \alpha} e^{\tilde{\mu}_{0} \rho} \sum_{j \geq 0} \tilde{K}_{u}^{1+j} (j+1)^{j(\frac{1}{s_{0}} -\kappa)} = C_{2 \alpha} e^{\tilde{\mu}_{0}\rho} , \end{multline} whence we conclude. Consider then $ y^{2k} \partial_{x}^{\alpha} D_{y}^{\beta} A(v)(x, y) $. 
Arguing in the same way as we did when deducing \eqref{eq:PA:3}, and disregarding the behaviour of the coefficients, we may write \begin{multline*} y^{2k} \partial_{x}^{\alpha} D_{y}^{\beta} A(v)(x, y) \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \sum_{h=0}^{2k} \gamma_{2k, h} \frac{1}{\rho^{2k s_{0} - h}} \partial_{\rho}^{h} \rho^{r+\alpha\frac{s_{0}}{q} + \beta s_{0} } \left( \partial_{x}^{\alpha} v \right)(x\rho^{\frac{s_{0}}{q}} , \rho) d\rho \\ = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{N_{\alpha, \beta}} \sum_{\ell=0}^{2k} \frac{1}{\rho^{\ell}} \left[ L_{\ell}(x \partial_{x}) \partial_{\rho}^{2k-\ell} \partial_{x}^{\alpha} v(x, \rho) \right]_{x = x \rho^{\frac{s_{0}}{q}}} d\rho, \end{multline*} where $ N_{\alpha, \beta} $ is a suitable complex number independent of $ \ell $ and $ L_{\ell} $ is a polynomial of degree $ \ell $ with respect to $ x \partial_{x} $. In order to estimate the above integral we proceed as before using \eqref{eq:pest}. Because of \eqref{eq:v} the sum inside the above integral reads $$ \sum_{j \geq 0} \sum_{\ell=0}^{2k} \frac{1}{\rho^{\ell}} \partial_{\rho}^{2k-\ell} L_{\ell}(x \partial_{x}) \partial_{x}^{\alpha} \psi_{j}(\rho) u_{j}(x, \rho). $$ Writing $ L_{\ell}(x \partial_{x}) = \sum_{m=0}^{\ell} c_{\ell m} (x \partial_{x})^{m} $ we have, for fixed $ j $, a finite sum ($ \ell \leq 2k \leq b $ and $ b $ is a fixed positive integer) whose typical summand has the form $$ \frac{c_{\ell m}}{\rho^{\ell}} \partial_{\rho}^{2k-\ell} x^{m} \partial_{x}^{m+\alpha} \left( \psi_{j}(\rho) u_{j}(x, \rho)\right). $$ Choosing $ R_{0} $, $ \gamma^{\#} $ so that $ 4k \leq 2 \gamma^{\#} \leq R_{0} $, we obtain by \eqref{eq:pest} and Lemma \ref{lemma:psij} that the above quantity is bounded by $$ C_{2}^{1+\alpha+j} e^{\tilde{\mu}_{0} \rho} \rho^{-(j-\delta)\kappa} \left( \frac{j}{s_{0}} + \alpha \frac{q-1}{q} \right)! $$ We may then argue as in \eqref{eq:xdalfa} and conclude that \eqref{eq:L2} holds. \end{proof} To prove \eqref{eq:Fav1} we prove the following lemma, from which \eqref{eq:Fav1} will be deduced. \begin{lemma} \label{lemma:yd} For any $ \epsilon > 0 $, there exists a $ C_{\epsilon} > 0 $, such that for any index $ \alpha $ we have \begin{equation} \label{eq:yd} \left| \langle y \rangle^{a} \partial_{y}^{\alpha} \partial_{x}^{\beta} A(v)(0, y) \right| \leq C_{\epsilon} \epsilon^{\alpha} \alpha!^{s_{0}}, \end{equation} where $ \beta = 0, 1 $ and $ \langle y \rangle = ( 1 + y^{2})^{\frac{1}{2}} $. \end{lemma} \begin{proof} Let $ \chi $ denote a smooth function in $ G^{s'}({\mathbb{R}}) $, $ s' < s_{0} $, such that $ \chi(t) = 0 $ for $ |t| \leq 1 $ and $ \chi(t) = 1 $ for $ |t| \geq 2 $. Observe that, by \eqref{eq:PAv=f}, $$ P \chi(y) A(v) = \chi(y) f - [P, \chi] A(v) . $$ By Corollary \ref{cor:Beu-somm} and formula \eqref{eq:AvinB}, if we denote by $ g $ the right hand side of the above equation and $ \psi(x) \in G_{0}^{s'}({\mathbb{R}}) $ we have that \begin{equation} \label{eq:Bs0PA} \left| \langle y \rangle^{a} \partial_{y}^{\alpha} \partial_{x}^{\beta} (\psi g)(x, y) \right| \leq C_{\delta} \delta^{\alpha+\beta} \alpha!^{s_{0}} \beta!^{s_{0}}, \end{equation} for every $ (x, y) \in {\mathbb{R}}^{2} $ and $ \delta > 0 $. 
From the above inequality it readily follows that, possibly renaming $ C_{\delta} $, \begin{equation} \label{eq:Bs0PA2} \| \partial_{y}^{\alpha} \partial_{x}^{\beta} (\psi g)(x, y) \|_{0} \leq C_{\delta} \delta^{\alpha+\beta} \alpha!^{s_{0}} \beta!^{s_{0}} . \end{equation} We are going to use the following maximal estimate for the operator $ P $: \begin{equation} \label{eq:maximal} \sum_{i=1}^{3} \|X_{i} u \|_{0}^{2} \leq C \left( \langle P u, u \rangle + \| y^{a-1} u \|_{0}^{2} \right), \end{equation} where $ u \in L^{2}({\mathbb{R}}^{2}) $, $ Pu \in L^{2}({\mathbb{R}}^{2}) $ and $ y^{a-1} u \in L^{2}({\mathbb{R}}^{2}) $, and $ X_{1} = D_{x} $, $ X_{2} = x^{q-1}D_{y} $, $ X_{3} = y^{a} D_{y} $. Let $ \phi=\phi(x) \in G^{s'}_{0}({\mathbb{R}}) $ denote a cutoff function near the origin. We want to show that for any $ \epsilon >0 $ and for any non-negative integers $ \alpha $, $ \beta $, with $ 0 \leq \beta \leq N $, $ N $ an arbitrarily fixed natural number, there exists a positive constant $ C_{\epsilon} $ such that \begin{equation} \label{eq:XA} \| X_{i} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi A(v) \|_{0} \leq C_{\epsilon} \epsilon^{\alpha} \alpha!^{s_{0}}, \end{equation} $ i = 1, 2, 3 $. Since $ A(v) \in \mathscr{B}^{s_{0}}({\mathbb{R}}^{2}) $, it is enough to prove \eqref{eq:XA} for $ A'(v) = \chi A(v) $. Actually we are going to prove that for any $ \epsilon_{1} >0 $, $ \epsilon_{2} > 0 $ and for any non-negative integers $ \alpha $, $ \beta $, $ 0 \leq \beta \leq N $, there exists a positive constant $ C_{\epsilon_{1} \epsilon_{2}} $ such that \begin{equation} \label{eq:XA'} \sum_{i=1}^{3} \|X_{i} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \leq C_{\epsilon_{1} \epsilon_{2}} \epsilon^{\alpha+\beta}_{1} \epsilon_{2}^{k} (\alpha + \beta + k)!^{s_{0}}. \end{equation} The estimate \eqref{eq:XA} then follows, taking into account that $ \beta $ takes a finite number of values and choosing $ \epsilon_{1} $, $ \epsilon_{2} $ suitably. We proceed by induction on $ \alpha + \beta $. The estimate \eqref{eq:XA'} is true when $ \alpha + \beta = 0 $: in fact it follows from Lemma \ref{lemma:global} together with the fact that $ \phi \in G_{0}^{s'}({\mathbb{R}}) \subset \mathscr{B}^{s_{0}}_{g}({\mathbb{R}}) $. Consider \begin{equation} \label{eq:ori.norm} \sum_{i=1}^{3} \|X_{i} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} . \end{equation} By \eqref{eq:maximal} we have \begin{multline} \label{eq:max1} \sum_{i=1}^{3} \|X_{i} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \\ \leq C \left( \langle P \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right. \\ \left. + \| y^{a-1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \right) \\ \leq C_{1} \left( \underbrace{\langle \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} P( A'(v)), \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle}_{B_{1}} \right .
\\ + 2 \sum_{i=1}^{3} \Big( \underbrace{\langle [X_{i} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), X_{i}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle}_{B_{2i}} \\ + \underbrace{% \langle [X_{i} , [ X_{i} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle}_{B_{3i}} \Big) \\ + \underbrace{% \langle [i a y^{2a-1} D_{y} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle}_{B_{2}'} \\ \left . + \underbrace{% \| y^{a-1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v)) \|_{0}^{2}}_{B_{4}} \right) . \end{multline} Let us start with $ B_{1} $. Let $ \psi \in G^{s'}_{0}({\mathbb{R}}) $ be such that $ \phi \psi = \phi $. Then, \begin{multline*} | B_{1}| \leq \left( \| \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} P( A'(v)) \|_{0} + \| \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}\right)^{2} \\ \leq C_{2} \left( \sum_{\ell=0}^{\beta} \binom{\beta}{\ell} \| \phi^{(k+\ell)} \partial_{y}^{\alpha} \partial_{x}^{\beta-\ell} \psi P( A'(v)) \|_{0} \right. \\ \left. + \| X \partial_{y}^{\alpha'} \partial_{x}^{\beta'} \phi^{(k)} A'(v) \|_{0} \vphantom{\sum_{\ell=0}^{\beta}} \right)^{2} , \end{multline*} where $ X $ denotes the vector field $ X_{1} $ or $ X_{3} $ according to the fact that $ \beta > 0 $ or $ \alpha > 0 $ respectively, since on the support of $ A'(v) $, $ |y| \geq c > 0 $ and $ \alpha'+\beta' = \alpha + \beta -1 $. Consider the first term in the sum above. By assumption \eqref{eq:Bs0PA2} we have \begin{multline*} \sum_{\ell=0}^{\beta} \binom{\beta}{\ell} \| \phi^{(k+\ell)} \partial_{y}^{\alpha} \partial_{x}^{\beta-\ell} \psi P( A'(v)) \|_{0} \\ \leq \sum_{\ell=0}^{\beta} \binom{\beta}{\ell} C_{\delta_{2}} \delta_{2}^{k+\ell} (k+\ell)!^{s_{0}} C_{\delta_{1}} \delta_{1}^{\alpha+\beta-\ell} (\alpha+ \beta -\ell)!^{s_{0}} \\ \leq \frac{1}{\nu C_{2}} C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} , \end{multline*} choosing $ \delta_{1} < \frac{\epsilon_{1}}{2} $, $ \delta_{2} < \min\{\epsilon_{2}, \delta_{1}\} $ and $ C_{\epsilon_{1} \epsilon_{2}} > \nu C_{\delta_{1}} C_{\delta_{2}} C_{2}$. As for the second term we have, by applying the inductive assumption \eqref{eq:XA'}, that \begin{multline*} \| X \partial_{y}^{\alpha'} \partial_{x}^{\beta'} \phi^{(k)} A'(v) \|_{0} \leq C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha'+\beta'} \epsilon_{2}^{k} (\alpha'+\beta'+k)!^{s_{0}} \\ = C_{\epsilon_{1} \epsilon_{2}} \frac{1}{(\alpha+\beta+k)^{s_{0}} \epsilon_{1}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} \\ \leq \frac{1}{\nu} C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} , \end{multline*} provided \begin{equation} \label{eq:abk1} \alpha+\beta+k > (\nu \epsilon_{1}^{-1})^{\frac{1}{s_{0}}} . \end{equation} By Lemma \ref{lemma:global} we obtain the same estimate for any $ \alpha $, $ \beta $, $ 0 \leq \beta \leq N $, $ k $, if $ C_{\epsilon_{1} \epsilon_{2}} $ is chosen large enough. Consider now $ B_{2i} $. Remark that if $ i = 1 $, $ [X_{1} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] = \partial_{y}^{\alpha} \partial_{x}^{\beta}$ $ \frac{1}{i} \phi^{(k+1)}$. 
Hence for $ i = 1 $ we have \begin{multline*} \langle [X_{1} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), X_{1}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \\ = \langle \partial_{y}^{\alpha} \partial_{x}^{\beta} \frac{1}{i} \phi^{(k+1)} A'(v), X_{1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle . \end{multline*} Thus \begin{multline*} \left |\langle [X_{1} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), X_{1}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq \| \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k+1)} A'(v)\|_{0} \cdot \| X_{1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \\ \leq \mu \| X_{1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} + \frac{1}{\mu} \| \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k+1)} A'(v)\|_{0}^{2} . \end{multline*} The first term is absorbed on the left hand side of \eqref{eq:max1} if $ \mu $ is small enough. We are left with the second term. As before it can be bounded by $$ C_{3} \frac{1}{\mu} \| X \partial_{y}^{\alpha'} \partial_{x}^{\beta'} \phi^{(k+1)} A'(v)\|_{0}^{2}, $$ where $ \alpha' + \beta' = \alpha+\beta-1 $, and $ X $ denotes either $ X_{1} $ or $ X_{3} $ according to the fact that $ \beta > 0 $ or $ \alpha >0 $ respectively, since, on the support of $ \phi^{(k+1)} A'(v)$, $ y $ is bounded away from zero. The above norm is bounded by $$ C_{3} \frac{1}{\mu} C_{\epsilon_{1}\epsilon_{2}} \epsilon_{1}^{\alpha'+\beta'} \epsilon_{2}^{k+1} (\alpha'+\beta'+k+1)!^{s_{0}}, $$ which becomes $$ \left( C_{3} \frac{1}{\mu} \frac{\epsilon_{2}}{\epsilon_{1}} \right) C_{\epsilon_{1}\epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} , $$ so that, choosing \begin{equation} \label{eq:e1} \frac{\epsilon_{2}}{\epsilon_{1}} < \frac{1}{\nu} \frac{\mu}{C_{3}} , \end{equation} we obtain, modulo terms absorbed on the left, \begin{multline} \label{eq:B21} \left |\langle [X_{1} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), X_{1}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq \frac{1}{\nu} C_{\epsilon_{1}\epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} . \end{multline} Consider now $ B_{22} $. Since $ [X_{2} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] = \partial_{y}^{\alpha} [ x^{q-1} , \partial_{x}^{\beta} ] \phi^{(k)} D_{y} $, we may write, for suitable positive constants $ C_{q} $, $ C_{1q} $, depending only on $ q $, \begin{multline*} \left | \langle [X_{2} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), X_{2} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq C_{q} \sum_{\ell=1}^{\min\{\beta, q-1\}} \binom{\beta}{\ell} \left| \langle x^{q-1-\ell} \partial_{x}^{\beta-\ell} \phi^{(k)} \partial_{y}^{\alpha+1} A'(v), X_{2} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq C_{1q} \sum_{\ell=1}^{\min\{\beta, q-1\}} \frac{\beta!}{(\beta - \ell)!} \| x^{q-1-\ell} \partial_{x}^{\beta-\ell} \partial_{y}^{\alpha+1} \phi^{(k)} A'(v) \|_{0} \cdot \| X_{2} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v)\|_{0} \\ \leq C_{1q} \left( \vphantom{\left( \sum_{\ell=1}^{\min\{\beta, q-1\}}\right.} \mu \| X_{2} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v)\|_{0}^{2} \right . \\ \left . 
+ \frac{1}{\mu} \left( \sum_{\ell=1}^{\min\{\beta, q-1\}} \frac{\beta!}{(\beta - \ell)!} \| x^{q-1-\ell} \partial_{x}^{\beta-\ell} \partial_{y}^{\alpha+1} \phi^{(k)} A'(v) \|_{0} \right)^{2} \right) . \end{multline*} The first term is absorbed on the left hand side of \eqref{eq:max1} provided $ \mu $ is small but otherwise fixed, so that we are left with the estimate of the square of the sum. The latter can be bounded as \begin{multline} \label{eq:commX2} C_{4} \sum_{\ell=1}^{\min\{\beta, q-1\}} \beta (\beta-1) \cdots (\beta-\ell+1) \| X_{3} \partial_{x}^{\beta-\ell} \partial_{y}^{\alpha} \phi^{(k)} A'(v) \|_{0} \\ \leq C_{4} \sum_{\ell=1}^{\min\{\beta, q-1\}} \beta (\beta-1) \cdots (\beta-\ell+1) C_{\epsilon_{1}\epsilon_{2}} \epsilon_{1}^{\beta-\ell+\alpha} \epsilon_{2}^{k} (\alpha + \beta -\ell + k)!^{s_{0}} \\ \leq C_{\epsilon_{1}\epsilon_{2}} \epsilon_{1}^{\beta+\alpha} \epsilon_{2}^{k} (\alpha + \beta+ k)!^{s_{0}} \cdot C_{4} \sum_{\ell=1}^{\min\{\beta, q-1\}} \left(\frac{1}{(\alpha+\beta+k-(q-1))^{s_{0}-1} \epsilon_{1}}\right)^{\ell} \\ \leq \frac{1}{\nu} \sqrt{\mu} C_{\epsilon_{1}\epsilon_{2}} \epsilon_{1}^{\beta+\alpha} \epsilon_{2}^{k} (\alpha + \beta+ k)!^{s_{0}}, \end{multline} provided \begin{equation} \label{eq:abk2} \alpha+\beta+k -(q-1) \geq \left( \frac{2 \nu C_{4}}{\sqrt{\mu} \epsilon_{1}}\right)^{\frac{1}{s_{0}-1}} . \end{equation} By Lemma \ref{lemma:global} we obtain the same estimate for $ B_{22} $ for any $ \alpha $, $ \beta $, $ k $, if $ C_{\epsilon_{1} \epsilon_{2}} $ is chosen large enough. Consider now $ B_{23} $ in \eqref{eq:max1}. We have $[X_{3} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] = [ X_{3} , \partial_{y}^{\alpha} ] \partial_{x}^{\beta} \phi^{(k)} = [ y^{a} , \partial_{y}^{\alpha} ] D_{y} \partial_{x}^{\beta} \phi^{(k)} $. Hence \begin{multline*} \left | \langle [X_{3} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] A'(v), X_{3}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq C_{1a} \sum_{\ell=1}^{\min\{\alpha, a\}} \binom{\alpha}{\ell} \left | \langle y^{a-\ell} \partial_{y}^{\alpha-\ell+1} \partial_{x}^{\beta} \phi^{(k)} A'(v), X_{3}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq C_{1a} \sum_{\ell=1}^{\min\{\alpha, a\}} \binom{\alpha}{\ell} \| y^{a-\ell} \partial_{y}^{\alpha-\ell+1} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \| X_{3}^{*} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \\ \leq C_{2a} \left( \vphantom{\left(\sum_{\ell=1}^{\min\{\alpha, a\}} \right.} \mu \| X_{3} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} + \mu \|y^{a-1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \right . \\ \left. + \frac{1}{\mu} \left( \sum_{\ell=1}^{\min\{\alpha, a\}} \binom{\alpha}{\ell} \| y^{a-\ell} \partial_{y}^{\alpha-\ell+1} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \right)^{2} \right) . \end{multline*} The first summand is absorbed on the left of \eqref{eq:max1}, so that we have to treat the second term as well as the sum. Consider the second term above. If $ \alpha = 0 $ we use Lemma \ref{lemma:global} and the fact that $ \beta $ takes a finite number of values to conclude that the second term satisfies the desired estimate. Assume $ \alpha > 0 $.
Then \begin{multline*} C_{2a} \mu \|y^{a-1} \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \leq C_{5} \|X_{3} \partial_{y}^{\alpha-1} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \\ \leq C_{5} \left(C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta-1} \epsilon_{2}^{k} (\alpha+\beta+k-1)!^{s_{0}} \right)^{2} \\ = C_{5} \frac{1}{(\alpha+\beta+k)^{2s_{0}} \epsilon_{1}^{2} } \left(C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} \right)^{2} \\ \leq \frac{1}{\nu^{2}} \left(C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} \right)^{2}, \end{multline*} provided \begin{equation} \label{eq:abk3} \alpha+\beta+k \geq \left( \frac{\nu \sqrt{C_{5}}}{\epsilon_{1}} \right)^{\frac{1}{s_{0}}} . \end{equation} If \eqref{eq:abk3} is not satisfied we choose a larger $ C_{\epsilon_{1} \epsilon_{2}} $ as we did above, in view of Lemma \ref{lemma:global}. The same argument treats also $ B_{4} $ and $ B_{2}' $. Consider now $$ \sqrt{\frac{C_{2a}}{\mu}} \sum_{\ell=1}^{\min\{\alpha, a\}} \binom{\alpha}{\ell} \| y^{a-\ell} \partial_{y}^{\alpha-\ell+1} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} . $$ The above sum is bounded as \begin{multline} \label{eq:commX3} \sqrt{\frac{C_{2a}}{\mu}} \sum_{\ell=1}^{\min\{\alpha, a\}} \binom{\alpha}{\ell} \| X_{3} \partial_{y}^{\alpha-\ell} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \\ \leq \sqrt{\frac{C_{2a}}{\mu}} \sum_{\ell=1}^{\min\{\alpha, a\}} \binom{\alpha}{\ell} C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta-\ell} \epsilon_{2}^{k} (\alpha+\beta+k-\ell)!^{s_{0}} \\ \leq C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} \\ \cdot \sqrt{\frac{C_{2a}}{\mu}} \sum_{\ell=1}^{\min\{\alpha, a\}} \left(\frac{1}{(\alpha+\beta+k-a)^{s_{0}-1} \epsilon_{1}} \right)^{\ell} , \end{multline} whence we conclude as for $ B_{22} $ (see \eqref{eq:abk2}). \bigskip We are left with the estimate of $ B_{3i} $ in \eqref{eq:max1}. Let us start with $ B_{31} $. Since $$ [X_{1} , [ X_{1} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] = [X_{1} ,\partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k+1)} ] = \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k+2)} , $$ we have \begin{multline*} \left| \langle [X_{1} , [ X_{1} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ = \left| \langle \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k+2)} A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| . \end{multline*} We may assume that $ \alpha+\beta \geq 2 $, since otherwise bounding the original norm \eqref{eq:ori.norm} is easily done by choosing $ C_{\epsilon_{1} \epsilon_{2}} $ sufficiently large. Thus taking one derivative to the right hand side the above scalar product becomes $$ \left| \langle \partial_{y}^{\alpha'} \partial_{x}^{\beta'} \phi^{(k+2)} A'(v) , \partial_{y}^{\alpha''} \partial_{x}^{\beta''} \phi^{(k)} A'(v) \rangle \right| , $$ where $ \alpha' + \beta' = \alpha + \beta - 1 $ and $ \alpha'' + \beta'' = \alpha + \beta + 1 $. 
Next we may write \begin{multline*} \left| \langle \partial_{y}^{\alpha'} \partial_{x}^{\beta'} \phi^{(k+2)} A'(v) , \partial_{y}^{\alpha''} \partial_{x}^{\beta''} \phi^{(k)} A'(v) \rangle \right| \\ \leq \| \partial_{y}^{\alpha'} \partial_{x}^{\beta'} \phi^{(k+2)} A'(v) \|_{0} \cdot \| \partial_{y}^{\alpha''} \partial_{x}^{\beta''} \phi^{(k)} A'(v) \|_{0} \\ \leq M_{1} \left( \frac{1}{\mu} \| X \partial_{y}^{\alpha_{-}} \partial_{x}^{\beta_{-}} \phi^{(k+2)} A'(v) \|_{0}^{2} + \mu \| X \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \right) , \end{multline*} where $ \mu $ is a small fixed positive number and $ \alpha_{-}+ \beta_{-} = \alpha + \beta - 2 $. Moreover, in the above expression $ X $ denotes either the vector field $ X_{1} $ or the vector field $ X_{3} $, since the $ y $-support of the integrated function is bounded away from zero. The second term above can be absorbed on the left of \eqref{eq:max1} provided $ \mu $ is chosen small but finite, so that we have to estimate the first, to which we may apply the inductive hypothesis: \begin{multline*} \frac{M_{1}^{\frac{1}{2}}}{\mu^{\frac{1}{2}}} \| X \partial_{y}^{\alpha_{-}} \partial_{x}^{\beta_{-}} \phi^{(k+2)} A'(v) \|_{0} \leq \frac{M_{1}^{\frac{1}{2}}}{\mu^{\frac{1}{2}}} C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha_{-} + \beta_{-}} \epsilon_{2}^{k+2} (\alpha_{-} + \beta_{-} + k + 2)!^{s_{0}} \\ = \frac{1}{\nu} C_{\epsilon_{1} \epsilon_{2}} \epsilon_{1}^{\alpha+\beta} \epsilon_{2}^{k} (\alpha+\beta+k)!^{s_{0}} \cdot \frac{M_{1}^{\frac{1}{2}} \nu \epsilon_{2}^{2}}{\mu^{\frac{1}{2}} \epsilon_{1}^{2}} , \end{multline*} which gives the desired estimate provided $ \epsilon_{1} $, $ \epsilon_{2} $ are such that \begin{equation} \label{eq:abk4} \frac{M_{1}^{\frac{1}{2}} \nu \epsilon_{2}^{2}}{\mu^{\frac{1}{2}} \epsilon_{1}^{2}} < 1. \end{equation} Consider now $ B_{32} $. We have \begin{multline*} [X_{2}, [X_{2} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] = [ X_{2}, \partial_{y}^{\alpha} [x^{q-1}, \partial_{x}^{\beta}] \phi^{(k)} D_{y} ] \\ =\partial_{y}^{\alpha} [ x^{q-1} , [ x^{q-1}, \partial_{x}^{\beta}] ] \phi^{(k)} D_{y}^{2} \\ = \partial_{y}^{\alpha} \sum_{\ell_{1}=1}^{\min\{\beta, q-1\}} \binom{\beta}{\ell_{1}} (\ad \partial_{x})^{\ell_{1}}(x^{q-1}) [ \partial_{x}^{\beta-\ell_{1}} , x^{q-1} ] \phi^{(k)} D_{y}^{2} \\ = - \sum_{\ell_{1}=1}^{\min\{\beta, q-1\}} \sum_{\ell_{2}=1}^{\min\{\beta-\ell_{1}, q-1\}} \binom{\beta}{\ell_{1}} \binom{\beta - \ell_{1}}{\ell_{2}} \\ \cdot (\ad \partial_{x})^{\ell_{1}}(x^{q-1}) (\ad \partial_{x})^{\ell_{2}}(x^{q-1}) \partial_{x}^{\beta-\ell_{1} -\ell_{2}} \phi^{(k)} \partial_{y}^{\alpha+2} . \end{multline*} Hence, $ \beta $ being bounded by $ N $, \begin{multline*} \left| \langle [X_{2} , [ X_{2} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq M_{2} \sum_{\ell=2}^{\min\{\beta, 2(q-1)\}} \left| \langle \partial_{x}^{\beta - \ell} \partial_{y}^{\alpha+2} \phi^{(k)} A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq M_{3} \sum_{\ell=2}^{\min\{\beta, 2(q-1)\}} \left ( \| X_{3} \partial_{x}^{\beta - \ell} \partial_{y}^{\alpha+1} \phi^{(k)} A'(v) \|_{0}^{2} \right. \\ \left. + \| X_{3} \partial_{y}^{\alpha-1} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0}^{2} \right) . \end{multline*} We may apply the inductive hypothesis to both summands above and treat each of them as we did in \eqref{eq:commX2}. Finally consider $ B_{33} $.
We may suppose that $ \alpha > 0 $, otherwise the double commutator is identically zero. We have \begin{multline*} [X_{3}, [X_{3} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] = - [ y^{a} \partial_{y} , [ y^{a} , \partial_{y}^{\alpha} ] \partial_{y} ] \partial_{x}^{\beta} \phi^{(k)} \\ = \sum_{\ell=2}^{\min\{\alpha, 2a\}} c_{\alpha \ell} \alpha^{\ell} y^{2a-\ell} \partial^{\alpha+2-\ell}_{y} \partial_{x}^{\beta} \phi^{(k)} , \end{multline*} where the $ c_{\alpha \ell} $ are absolute constants. Hence \begin{multline*} \left| \langle [X_{3} , [ X_{3} , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} ] ] A'(v) , \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq \sum_{\ell=2}^{\min\{\alpha, 2a\}} |c_{\alpha \ell}| \alpha^{\ell} \left| \langle y^{2a-\ell} \partial^{\alpha+2-\ell}_{y} \partial_{x}^{\beta} \phi^{(k)} A'(v), \partial_{y}^{\alpha} \partial_{x}^{\beta} \phi^{(k)} A'(v) \rangle \right| \\ \leq M_{4} \sum_{\ell=2}^{\min\{\alpha, 2a\}} \alpha^{\ell - 1} \| X_{3} \partial^{\alpha+1-\ell}_{y} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} \\ \cdot \alpha \| X_{3} \partial_{y}^{\alpha-1} \partial_{x}^{\beta} \phi^{(k)} A'(v) \|_{0} . \end{multline*} The inductive hypothesis can be applied to both factors in the sum above and we proceed as for \eqref{eq:commX3}. Choosing $ \nu $ sufficiently large completes the proof of \eqref{eq:XA'} and hence of \eqref{eq:XA}. Via the Sobolev embedding theorem as well as Lemma \ref{lemma:global}, we get a pointwise estimate of the same type, from which, setting $ x=0 $, we deduce the conclusion of the lemma. \end{proof} Let us now prove \eqref{eq:Fav1}. \begin{lemma} \label{lemma:fourier} For any $ \epsilon > 0 $ there exist positive constants $ K_{\epsilon} $, $ M $, such that \begin{equation} \label{eq:fourier} \left | \mathscr{F}(\partial_{x}^{\beta} A(v))(0, \eta) \right| \leq K_{\epsilon} e^{- M \left(\frac{ |\eta|}{\epsilon}\right)^{\frac{1}{s_{0}}}} , \end{equation} for $ \beta = 0, 1 $. \end{lemma} \begin{proof} Set $ \beta = 0 $. The argument is exactly the same when $ \beta = 1 $. We have $$ | D^{\alpha}_{y} A(v)(0, y) | \leq C_{\epsilon} \langle y \rangle^{-a} \epsilon^{\alpha} \alpha!^{s_{0}}, $$ by Lemma \ref{lemma:yd}. Then we obtain \begin{equation} \label{eq:FAv} |\mathscr{F}(A(v))(0, \eta) | \leq \frac{1}{|\eta|^{\alpha}} C_{\epsilon} \epsilon^{\alpha} \alpha!^{s_{0}} \int_{{\mathbb{R}}} \langle y \rangle^{-a} dy \leq C_{1 \epsilon} \frac{1}{|\eta|^{\alpha}} \epsilon^{\alpha} \alpha!^{s_{0}} . \end{equation} Then $$ | \mathscr{F}(A(v))(0, \eta) |^{\frac{1}{s_{0}}} \frac{\left(\left(\frac{|\eta|}{2\epsilon}\right)^{\frac{1}{s_{0}}} \right)^{\alpha}}{\alpha!} \leq C_{1 \epsilon}^{\frac{1}{s_{0}}} \left( \frac{1}{2^{\frac{1}{s_{0}}}} \right)^{\alpha}, $$ whence, summing over $ \alpha $, we deduce the assertion of the lemma. \end{proof} Next we prove an estimate from below for $ \left | \mathscr{F}(\partial_{x}^{\beta} A(v))(0, \eta) \right| $, where $ \beta $ takes either the value 0 or 1, depending on the ground state, $ \phi_{0} $, of the operator $ Q $. \begin{lemma} \label{lemma:below} There exist positive constants $ \lambda $, $ C_{\lambda} $ and a real constant $ \lambda' $ such that for $ \eta \geq (6 R_{0})^{s_{0}} $ and $ R_{0} $ large enough, we have \begin{equation} \label{eq:below} \left | \mathscr{F}(\partial_{x}^{\beta} A(v))(0, \eta) \right| \geq C_{\lambda} \eta^{\lambda'} e^{- \lambda |\eta|^{\frac{1}{s_{0}}}} .
\end{equation} \end{lemma} \begin{proof} We have $$ \partial_{x}^{\beta} A(v) (0, y) = \int_{0}^{+\infty} e^{i y \rho^{s_{0}}} \rho^{r + \beta \frac{s_{0}}{q}} \partial_{x}^{\beta} v (0, \rho) d\rho , $$ where $ v $ is given by \eqref{eq:v}. By Lemma \ref{lemma:psij} we see that the $ \rho $-support of $ v $ is contained in $ [3 R_{0}, +\infty [$, hence changing variables according to $ \eta = \rho^{s_{0}} $ we obtain $$ \partial_{x}^{\beta} A(v) (0, y) = \frac{1}{s_{0}} \int_{-\infty}^{+\infty} e^{i y \eta} \eta^{\frac{r}{s_{0}} + \frac{\beta}{q}} \partial_{x}^{\beta} v (0, \eta^{\frac{1}{s_{0}}}) \eta^{\frac{1}{s_{0}} -1} d\eta . $$ By \eqref{eq:convv} we have that \begin{multline} \label{eq:Ft} \mathscr{F}(\partial_{x}^{\beta} A(v))(0, \eta) = \frac{2\pi}{s_{0}} \eta^{\frac{r+1}{s_{0}} + \frac{\beta}{q} -1} \partial_{x}^{\beta} v (0, \eta^{\frac{1}{s_{0}}}) \\ = \frac{2\pi}{s_{0}} \eta^{\frac{r+1}{s_{0}} + \frac{\beta}{q} -1} \sum_{j \geq 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) \partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) . \end{multline} Moreover by \eqref{eq:u0-}, \eqref{eq:mitlda0} we have \begin{multline*} \left | \sum_{j \geq 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) \partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) \right| \geq \psi_{0}(\eta^{\frac{1}{s_{0}}} ) \left| e^{\mu_{0 i^{*}} \eta^{\frac{1}{s_{0}}}} \partial_{x}^{\beta}\phi_{0}(0) \right| \\ - \sum_{j > 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) \left |\partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) \right| \\ \geq \psi_{0}(\eta^{\frac{1}{s_{0}}} ) e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \left| \partial_{x}^{\beta}\phi_{0}(0) \right| - \sum_{j > 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \left |e^{- \tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) \right| . \end{multline*} In view of the estimates \eqref{eq:pest} and Lemma \ref{lemma:psij}, we examine the sum on the right hand side of the above inequality. \begin{multline*} \sum_{j > 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \left |e^{- \tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) \right| \\ \leq \sum_{j > 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} K_{u}^{1+j+\beta} \eta^{-\frac{(j-\delta)\kappa}{s_{0}}} \left( \frac{j}{s_{0}} + \beta \frac{q-1}{q}\right)! \\ \leq C_{\psi} e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \sum_{j > 0} \tilde{K}_{u}^{1+j} \left(3 R_{0}(j+1)\right)^{-(j-\delta)\kappa} (j+1)^{\frac{j}{s_{0}}} \\ \leq C_{\psi} e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} (3 R_{0})^{-(1-\delta)\kappa} \sum_{j > 0} \tilde{K}_{u}^{1+j} (j+1)^{j(\frac{1}{s_{0}} -\kappa) + \delta \kappa} \\ = C_{1} (3 R_{0})^{-(1-\delta)\kappa} e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} . \end{multline*} In the last line above we used the fact that $ \delta < 1 $, $ \kappa > \frac{1}{s_{0}} $ (see the statement of Theorem \ref{th:pest-t}). As a consequence, taking $ \eta^{\frac{1}{s_{0}}} \geq 6 R_{0} $, we have the following lower bound \begin{multline} \label{eq:lowerbd} \left | \sum_{j \geq 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) \partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) \right| \\ \geq e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} \left[ \left| \partial_{x}^{\beta}\phi_{0}(0) \right| - C_{1} (3 R_{0})^{-(1-\delta)\kappa} \right]. \end{multline} Here $ \phi_{0} $ denotes the ground state of the operator $ Q $ defined in \eqref{eq:Q}. 
We point out that $ \phi_{0}(0) $ and $ \partial_{x} \phi_{0}(0) $ cannot both vanish. This is easily seen by remarking that if both $ \phi_{0}(0) $ and $\partial_{x} \phi_{0}(0) $ were zero then all derivatives of $ \phi_{0} $ would vanish at the origin, so that $ \phi_{0} \equiv 0 $ by analyticity, a contradiction. Choosing $ R_{0} $ large enough we deduce that there is a $ \beta \in \{0, 1\} $ and there is a constant $ C_{2} > 0 $, such that, for $ \eta^{\frac{1}{s_{0}}} \geq 6 R_{0} $, \begin{equation} \label{eq:lowerbd-1} \left | \sum_{j \geq 0} \psi_{j}(\eta^{\frac{1}{s_{0}}} ) \partial_{x}^{\beta} u_{j}(0, \eta^{\frac{1}{s_{0}}}) \right| \geq C_{2} e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} . \end{equation} Then, plugging this into \eqref{eq:Ft} we get $$ \left| \mathscr{F}(\partial_{x}^{\beta} A(v))(0, \eta) \right| \geq C_{3} \eta^{\frac{\re r+1}{s_{0}} + \frac{\beta}{q} -1} e^{\tilde{\mu}_{0} \eta^{\frac{1}{s_{0}}}} , $$ for $ \eta^{\frac{1}{s_{0}}} \geq 6 R_{0} $, and a suitable positive constant $ C_{3} $. This proves the lemma. \end{proof} Now Lemma \ref{lemma:fourier} as well as Lemma \ref{lemma:below} yield a contradiction, thus proving Theorem \ref{th:1}. \setcounter{section}{0} \renewcommand\thesection{\Alph{section}} \section{Appendix: Solution of an ODE} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} Let $ \mu $ be a positive number. We want to find the solution of the ordinary differential equation \begin{equation} \label{eq:ode} \left( D_{\rho}^{2a} + \mu \right) u = f, \end{equation} where $ f \in C^{\infty}({\mathbb{R}}) $ is rapidly decreasing at $ +\infty $ and $ \supp f \subset {\mathbb{R}}^{+} $. Let us denote by $ \mu_{i} $, $ i = 1, \ldots, 2a $, the $ 2a $-th roots of $ (-1)^{a+1} \mu $. We observe that $ \re \mu_{i} \neq 0 $ for every $ i $. In fact, if $ a $ is an even integer, in order to have $ \re \mu_{k+1} = 0 $, for some $ k = 0, 1, \ldots, 2a-1 $, we should have $$ \frac{\pi}{2a} + k \frac{\pi}{a} = \ell \frac{\pi}{2}, $$ for some odd integer $ \ell $. This would imply $ 2k + 1 = a \ell $, which is impossible, since $ a \ell $ is even. Analogously, assume $ a $ is an odd integer. Then if $\re \mu_{k+1} = 0$ we must have $$ k \frac{\pi}{a} = \ell \frac{\pi}{2}, $$ for some odd integer $ \ell $. This would imply $ 2 k = a \ell $, which is also impossible, since $ a \ell $ is odd. For any $ j = 1, \ldots, 2a $, define \begin{multline} \label{eq:If} I_{j}(f)(\rho) \\ = - \sgn\left( \re \mu_{j}\right) \int_{{\mathbb{R}}} e^{\mu_{j} (\rho - \sigma)} H\left(-\sgn\left(\re \mu_{j}\right) (\rho - \sigma)\right) f(\sigma) d\sigma , \end{multline} where $ H $ denotes the Heaviside function. Since $$ \frac{1}{\sigma^{2a} + (-1)^{a} \mu} = \prod_{j=1}^{2a} \frac{1}{\sigma - \mu_{j}}, $$ we define the numbers $ A_{j} $ by the relation \begin{equation} \label{eq:Aj} \sum_{j=1}^{2a} A_{j} \frac{1}{\sigma - \mu_{j}} = \prod_{j=1}^{2a} \frac{1}{\sigma - \mu_{j}}.
\end{equation} Multiplying both sides of \eqref{eq:Aj} by $ \sigma - \mu_{\ell} $, $ \ell \in \{1, \ldots, 2a\} $, we get $$ A_{\ell} + \sum_{\substack{j = 1\\j\neq\ell}}^{2a} A_{j} \frac{\sigma - \mu_{\ell}}{\sigma - \mu_{j}} = \prod_{\substack{j=1\\j\neq\ell}}^{2a} \frac{1}{\sigma - \mu_{j}} . $$ Computing the above identity for $ \sigma = \mu_{\ell} $ we obtain an expression for the $ A_{j} $: \begin{equation} \label{eq:Al} A_{\ell} = \prod_{\substack{j=1\\j\neq\ell}}^{2a} \frac{1}{\mu_{\ell} - \mu_{j}} . \end{equation} Define now \begin{equation} \label{eq:u} u(\rho) = \sum_{j=1}^{2a} A_{j} I_{j}(f)(\rho) . \end{equation} We want to show that $ u $ is the desired solution. First observe that $$ \partial_{\rho} I_{j}(f) = \mu_{j} I_{j}(f) + f . $$ Hence we deduce \begin{align} \label{eq:dku} \partial_{\rho} u & = \sum_{j=1}^{2a} A_{j} \left(f +\mu_{j} I_{j}(f) \right) \\ \partial_{\rho}^{2} u & = \sum_{j=1}^{2a} A_{j} \left( f' + \mu_{j} f + \mu_{j}^{2} I_{j}(f) \right) \notag \\ \vdots &\hphantom{=} \hspace{2.cm} \vdots \notag \\ \partial_{\rho}^{2a} u & = \sum_{j=1}^{2a} A_{j} \left( f^{(2a-1)} + \mu_{j} f^{(2a-2)} + \cdots + \mu_{j}^{2a-1} f + \mu_{j}^{2a} I_{j}(f) \right). \notag \end{align} Now, because of \eqref{eq:Aj}, we have \begin{equation} \label{eq:Aj:2} \sum_{i=1}^{2a} A_{i} \prod_{\substack{j=1\\j\neq i}}^{2a} (\sigma - \mu_{j}) = 1 . \end{equation} Moreover the polynomial multiplying a single $ A_{i} $ above is written as \begin{multline} \label{eq:Aj:3} \prod_{\substack{j=1\\j\neq i}}^{2a} (\sigma - \mu_{j}) = \sigma^{2a-1} - \sigma^{2a-2} \sum_{\substack{j=1\\j\neq i}}^{2a} \mu_{j} +\sigma^{2a-3} \sum_{\substack{j_{1} < j_{2} =1 \\ j_{1}, j_{2} \neq i}}^{2a} \mu_{j_{1}} \mu_{j_{2}} \\ + \cdots + (-1)^{k} \sum_{\substack{j_{1} < \cdots < j_{k} =1 \\ j_{1}, \ldots, j_{k} \neq i }}^{2a} \mu_{j_{1}} \ldots \mu_{j_{k}} \sigma^{2a-k-1} + \cdots + (-1)^{2a-1} \prod_{\substack{j=1\\j \neq i}}\mu_{j} . \end{multline} Defining as $ s_{k}^{(i)}= s_{k}^{(i)}(\mu_{1}, \ldots, \mu_{2a}) $ the symmetric function of degree $ k $ of the $ 2a-1 $ arguments $ \mu_{1}, \ldots, \mu_{i-1}, \mu_{i+1}, \ldots, \mu_{2a} $, with $ s_{0}^{(i)} = 1 $, the above identity can be rewritten as $$ \prod_{\substack{j=1\\j\neq i}}^{2a} (\sigma - \mu_{j}) = \sigma^{2a-1} + \sum_{k=1}^{2a-1} (-1)^{k} s_{k}^{(i)} \sigma^{2a-1-k} . $$ Let us show inductively that for $ 1 \leq k \leq 2a $ we have \begin{equation} \label{eq:ski} s_{k}^{(i)} = s_{k} - \mu_{i} s_{k-1} + \mu_{i}^{2} s_{k-2} + \cdots + (-1)^{k} \mu_{i}^{k}, \end{equation} where $ s_{k} $ denotes the symmetric function of $ k $ of the $ 2a $ arguments $ \mu_{1}, \ldots,$ $ \mu_{2a} $, and we make the convention that $ s_{0} = 1 $. For $ k=1 $ it is obviously true. 
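As an illustration only (it is not needed for the induction), \eqref{eq:ski} can be checked directly for $ k = 2 $: removing from $ s_{2} $ all products containing $ \mu_{i} $ gives
$$
s_{2}^{(i)} = \sum_{\substack{j_{1} < j_{2} \\ j_{1}, j_{2} \neq i}} \mu_{j_{1}} \mu_{j_{2}}
= s_{2} - \mu_{i} \sum_{j \neq i} \mu_{j}
= s_{2} - \mu_{i} \left( s_{1} - \mu_{i} \right)
= s_{2} - \mu_{i} s_{1} + \mu_{i}^{2} ,
$$
in agreement with \eqref{eq:ski}.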
Consider \begin{align*} s_{k}^{(i)} &= \sum_{\substack{j_{1} < \cdots <j_{k} =1 \\ j_{1}, \cdots, j_{k} \neq i}}^{2a} \mu_{j_{1}} \cdots \mu_{j_{k}} \\ &= \sum_{\substack{j_{1} < \cdots <j_{k} =1 \\ j_{2}, \cdots, j_{k} \neq i}}^{2a} \mu_{j_{1}} \cdots \mu_{j_{k}} - \mu_{i} \sum_{j_{2} < \cdots <j_{k} = i+1}^{2a} \mu_{j_{2}} \cdots \mu_{j_{k}} \end{align*} Iterating the argument we get \begin{align*} s_{k}^{(i)} &= \sum_{\substack{j_{1} < \cdots <j_{k} =1 \\ j_{3}, \cdots, j_{k} \neq i}}^{2a} \mu_{j_{1}} \cdots \mu_{j_{k}} - \mu_{i} \sum_{j_{2} < \cdots <j_{k} = i+1}^{2a} \mu_{j_{2}} \cdots \mu_{j_{k}} \\ &\phantom{=}\ - \mu_{i} \sum_{j_{1} < i < j_{3} < \cdots < j_{k}} \mu_{j_{1}} \mu_{j_{3}} \cdots \mu_{j_{k}} \\ &= \cdots = \sum_{j_{1} < \cdots <j_{k}} \mu_{j_{1}} \cdots \mu_{j_{k}} - \mu_{i} \left( \sum_{i < j_{2} < \cdots <j_{k}} \mu_{j_{2}} \cdots \mu_{j_{k}} \right. \\ &\phantom{=}\ \left. + \sum_{j_{1} < i < j_{3} < \cdots <j_{k}} \mu_{j_{1}} \mu_{j_{3}} \cdots \mu_{j_{k}} + \cdots + \sum_{j_{1} < \cdots <j_{k-1} < i} \mu_{j_{1}} \cdots \mu_{j_{k-1}} \right) \\ &= s_{k} - \mu_{i} s_{k-1}^{(i)}. \end{align*} Applying the inductive hypothesis \begin{align*} s_{k}^{(i)} &= s_{k} -\mu_{i} \left( s_{k-1} - \mu_{i} s_{k-2} + \cdots + (-1)^{k} \mu_{i}^{k-1}\right) \\ &= \sum_{\ell=0}^{k} (-1)^{\ell} \mu_{i}^{\ell} s_{k-\ell}, \end{align*} which is the desired conclusion. Going back to \eqref{eq:Aj:2}, \eqref{eq:Aj:3} we see that \eqref{eq:Aj:3} can be written as $$ \prod_{\substack{j=1\\j\neq i}}^{2a} (\sigma - \mu_{j}) = \sum_{\ell=0}^{2a-1} (-1)^{\ell} \sigma^{2a-1-\ell} s_{\ell}^{(i)} , $$ so that, identifying the polynomials on both sides of \eqref{eq:Aj:2}, we obtain $$ \sum_{i=1}^{2a} A_{i} s_{k}^{(i)} = 0, \qquad \text{for } k = 0, \ldots, 2a-2. $$ When $ k=0 $ we immediately get that $ \sum_{i=1}^{2a}A_{i} = 0 $. Assume $ k=1 $. Since $ s_{1}^{(i)} = s_{1} - \mu_{i} $ we have that $ \sum_{i=1}^{2a} A_{i} s_{1}^{(i)} = s_{1} \sum_{i=1}^{2a} A_{i} - \sum_{i=1}^{2a} A_{i} \mu_{i} = 0$, giving that $ \sum_{i=1}^{2a} A_{i} \mu_{i} = 0 $. Iterating this kind of argument we obtain that \begin{equation} \label{eq:Amuk} \sum_{i=1}^{2a} A_{i} \mu_{i}^{k} = 0, \qquad \text{for } k = 0, \ldots, 2a-2. \end{equation} Finally, again from \eqref{eq:Aj:2}, \eqref{eq:Aj:3}, the identity $$ - \sum_{i=1}^{2a} A_{i} \mu_{1} \cdots \mu_{i-1} \mu_{i+1} \cdots \mu_{2a} = 1 $$ implies that \begin{multline} \label{eq:Amu2a-1} 1 = - \sum_{i=1}^{2a} A_{i} \frac{1}{\mu_{i}} \mu_{1} \cdots \mu_{2a} = - \sum_{i=1}^{2a} A_{i} \frac{1}{\mu_{i}} (-1)^{a} \mu \\ = \sum_{i=1}^{2a} A_{i} \frac{1}{\mu_{i}} (-1)^{a+1} \mu = \sum_{i=1}^{2a} A_{i} \frac{1}{\mu_{i}} \mu_{i}^{2a} = \sum_{i=1}^{2a} A_{i} \mu_{i}^{2a-1} . \end{multline} Thus $ u $, as defined in \eqref{eq:u}, is a solution of \eqref{eq:ode}. \section{Appendix: Estimate of the Ground Level Eigenfunction} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} This section contains the proof of the estimates of the function $ \phi_{0} \in \ker Q_{\lambda_{0}} $, where $ Q_{\lambda_{0}} = D_{x}^{2} + x^{2(q-1)} - \lambda_{0} $. The method for obtaining such estimates has been introduced by M\'etivier in \cite{metivier-duke-80} in a homogeneous, i.e. quadratic, case. Let us start by defining \begin{equation} \label{eq:X+} \left \{ % \begin{array}{ccc} X_{+} & = & \partial_{x} \\[7pt] X_{-} & = & x \end{array} \right . 
\end{equation} For $ k \in {\mathbb{N}} $ we denote by $ I $ the multiindex $ I = (i_{1}, \ldots, i_{k}) $, where $ i_{j} \in \{ \pm \} $ for $ j = 1, \ldots, k $. We also write $ k = | I | $. Define \begin{equation} \label{eq:XI} X_{I} = X_{i_{1}} \cdots X_{i_{k}}. \end{equation} Set $$ I_{+} = (i_{1}^{+}, \ldots, i_{k}^{+}) , $$ where \begin{equation} \label{eq:i+} i_{\nu}^{+} = \begin{cases} + , & \text{ if } i_{\nu} = + \\[5pt] 0 , & \text{ if } i_{\nu} = - \end{cases} , \end{equation} and analogously for $ I_{-} $ and $ i_{\nu}^{-} $. Define $ |I_{+}| $ as the number of the non-zero components of $ I_{+} $ and similarly for $ |I_{-}| $. We finally set \begin{equation} \label{eq:bracketsI} \langle I \rangle = |I_{+}| + \frac{| I_{-}|}{q-1} . \end{equation} We are going to need the spaces $ H^{k}_{q}({\mathbb{R}}) $, which for suitable $ k $ are natural domains of the operator $ Q_{\lambda_{0}} $ in $ L^{2}({\mathbb{R}}) $: \begin{equation} \label{eq:Hk} H^{k}_{q}({\mathbb{R}}) = \{ u \in L^{2}({\mathbb{R}}) \ | \ X_{I}u \in L^{2}({\mathbb{R}}), \text{ for every } I, \langle I \rangle \leq k \}, \end{equation} for $ k \in {\mathbb{N}} \cup \{0\} $. We equip $ H^{k}_{q}({\mathbb{R}}) $ with the norm \begin{equation} \label{eq:normq} \| u \|_{k} = \max_{0 \leq \ell \leq k} | u |_{\ell} , \end{equation} where $$ |u|_{\ell} = \max_{\langle I \rangle = \ell} \| X_{I} u \|_{L^{2}({\mathbb{R}})} . $$ For the sake of simplicity we write $ \| u \|_{0} $ for the $ L^{2}({\mathbb{R}}) $ norm of $ u $. Due to Gru\v sin, \cite{grushin70}, or rather Lemma \ref{lemma:apr} below for the anisotropic case, we know that the following a priori estimate is satisfied: \begin{equation} \label{eq:apriori} \| u \|_{2} \leq C_{0} \left( \| Q_{\lambda_{0}} u \|_{0} + \| u \|_{0} \right), \end{equation} for a suitable positive constant $ C_{0} $. We want to prove the following \begin{proposition} \label{prop:1st-est} Assume that $ u \in \ker Q_{\lambda_{0}} $. Then there exist positive constants $ C $, $ R $, depending only on the operator $ Q_{\lambda_{0}} $, such that, for every multiindex $ I $, we have the inequality \begin{equation} \label{eq:XIu} \| X_{I} u \| \leq C R^{\langle I \rangle} \|u \|_{0} (\langle I \rangle !)^{\frac{q-1}{q}}, \end{equation} where, for $ x > 0 $, $ x! $ means $ \Gamma(x+1) $ and $ 0! = 1 $. \end{proposition} \begin{corollary} \label{cor:xd} We have for any $ u \in \ker Q_{\lambda_{0}} $ \begin{equation} \label{eq:xd} \| x^{\beta} \partial_{x}^{\alpha} u \|_{0} \leq C^{\alpha+\beta+1} \left( \alpha \frac{q-1}{q} + \frac{\beta}{q}\right)! \end{equation} \end{corollary} Before proving the proposition we state a couple of lemmas that are used in its proof. \begin{lemma} \label{lemma:comm} Let $ I $ be a multiindex. Then \begin{equation} \label{eq:comm} \| [ Q_{\lambda_{0}} , X_{I} ] u \|_{0} \leq C |I| \| u \|_{2 + \langle I \rangle - \frac{q}{q-1}}, \end{equation} for any $ u $ in the $ L^{2} $ domain of the operator on the left hand side. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:comm}] The assertion is proved by remarking that $$ [ Q_{\lambda_{0}} , X_{I} ] = \sum_{I_{1}} b_{I_{1}} X_{I_{1}} , $$ where the coefficients $ b_{I_{1}} \in {\mathbb{C}} $ are bounded by a quantity depending only on the problem and $ \langle I_{1} \rangle = \langle I \rangle + 2 - \frac{q}{q-1} $. Moreover the number of terms in the summation above is bounded by $ \kappa | I | $, $ \kappa $ denoting a positive constant depending only on the problem data. Inequality \eqref{eq:comm} then immediately follows.
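As a simple illustration of this weight count (a side remark, not needed in what follows), take the one-letter multiindex $ I = (+) $, so that $ X_{I} = \partial_{x} $ and $ \langle I \rangle = 1 $. Then
$$
[ Q_{\lambda_{0}} , X_{I} ] = [ x^{2(q-1)} , \partial_{x} ] = - 2(q-1)\, x^{2q-3} = - 2(q-1)\, X_{I_{1}} ,
$$
where $ I_{1} $ consists of $ 2q-3 $ minus signs, so that $ \langle I_{1} \rangle = \frac{2q-3}{q-1} = \langle I \rangle + 2 - \frac{q}{q-1} $, as claimed.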
\end{proof} \begin{lemma} \label{lemma:dec} Let $ I $ a multiindex such that $ \langle I \rangle > 2 $. Then we may decompose $ X_{I} $ as $$ X_{I} = X_{I''} X_{I'} + A , $$ where $ \langle I'' \rangle = 2 $, $ \langle I' \rangle = \langle I \rangle - 2 $ and $ A $ is a finite sum of the form $$ A = \sum_{J} c_{J} X_{J} , $$ with $ \langle J \rangle = \langle I \rangle - \frac{q}{q-1} $. Here both the coefficients $ c_{J} $ and the number of summands are bounded by an absolute constant. \end{lemma} The proof of the lemma is straightforward and we skip it. \begin{proof}[Proof of Proposition \ref{prop:1st-est}] Instead of proving \eqref{eq:XIu} directly we are going to show that the following inequality holds: \begin{equation} \label{eq:XIu'} \| X_{I} u \| \leq C_{1} R_{1}^{\langle I \rangle} \|u \|_{0} \langle I \rangle^{\langle I \rangle \frac{q-1}{q}}, \end{equation} for certain positive constants $ C_{1} $, $ R_{1} $, for every multiindex $ I $. It is then obvious that \eqref{eq:XIu'} implies \eqref{eq:XIu} slightly modifying the constants, because of the Stirling formula for the Euler Gamma function. We argue by induction on $ k $ where $ \langle I \rangle = \frac{k}{q-1} $. First of all we observe that when $ \langle I \rangle \leq 2 $ we have by \eqref{eq:apriori} that $$ \| X_{I} u \| \leq C_{0}\left(\|Q_{\lambda_{0}} u \|_{0} + \| u \|_{0} \right) = C_{0} \| u \|_{0}. $$ Assume now that the assertion holds for any $ I $ with $ \langle I \rangle = \frac{\ell}{q-1} $, $ \ell = 0, 1, \ldots, k $, $ k > 2(q-1) $. We want to show that the assertion is true for $ I $ with $ \langle I \rangle = \frac{k+1}{q-1} $. Let $ I $ be a multiindex with $ \langle I \rangle = \frac{k+1}{q-1} $. Using Lemma \ref{lemma:dec} we write $ X_{I} = X_{I'} X_{J} + A $, with $ \langle I' \rangle = 2 $, $ \langle J \rangle = \langle I \rangle - 2 $ and $ A $ of the form specified in Lemma \ref{lemma:dec}. By \eqref{eq:apriori} we have \begin{eqnarray*} \| X_{I} u \|_{0} & \leq & \|X_{I'} X_{J} u \|_{0} + \| Au \|_{0} \\[5pt] & \leq & C_{0} \left ( \| Q_{\lambda_{0}} X_{J} u \|_{0} + \| X_{J} u \|_{0} + \| A u \|_{0} \right). \end{eqnarray*} Since $ u \in \ker Q_{\lambda_{0}} $, $$ Q_{\lambda_{0}} X_{J} u = X_{J} Q_{\lambda_{0}} u + [ Q_{\lambda_{0}} , X_{J} ] u = [ Q_{\lambda_{0}} , X_{J} ] u $$ Hence \begin{equation} \label{eq:apriori-comm} \| X_{I} u \|_{0} \leq C_{0} \left ( \| [ Q_{\lambda_{0}} , X_{J} ] u \|_{0} + \| X_{J} u \|_{0} + \| A u \|_{0} \right). \end{equation} Since $ \langle J \rangle = \langle I \rangle -2 = \frac{k+1}{q-1} - 2 \leq \frac{k}{q-1}$ and $ A $ is a sum of terms involving $ X_{J'} $, with $ \langle J' \rangle = \langle I \rangle - \frac{q}{q-1} = \frac{k- (q-1)}{q-1} = \frac{k}{q-1} - 1 < \frac{k}{q-1} $, we see that both terms $ \| X_{J}u \| $ and $ \| A u \| $ satisfy the inductive hypothesis: \begin{multline*} \| X_{J} u \| \leq C_{1} R_{1}^{\langle J \rangle} \| u \|_{0} \langle J \rangle^{\langle J \rangle \frac{q-1}{q}} \\ = C_{1} R_{1}^{\frac{k+1}{q-1} -2} \| u \|_{0} \left(\frac{k+1}{q-1} -2\right)^{\left(\frac{k+1}{q-1} -2\right) \frac{q-1}{q}} \leq (C_{1} R_{1}^{-2}) R_{1}^{\langle I \rangle} \| u \|_{0} \langle I \rangle^{\langle I \rangle \frac{q-1}{q}} . 
\end{multline*} Furthermore writing $ A = \sum_{J'} c_{J'} X_{J'} $, with $ \langle J' \rangle = \frac{k}{q-1} -1 $ and where both the number of summands and the constants $ c_{J'} $ are bounded by a universal constant, say $ M $, we have \begin{multline*} \| Au \|_{0} \leq \sum_{J'} | c_{J'} | \| X_{J'} u \|_{0} \leq \sum_{J'} | c_{J'} | C_{1} R_{1}^{\langle J' \rangle} \| u \|_{0} \langle J' \rangle^{\langle J' \rangle \frac{q-1}{q}} \\ \leq ( C_{1} M^{2} R_{1}^{- \frac{q}{q-1}} ) R_{1}^{\langle I \rangle} \| u \|_{0} \langle I \rangle^{\langle I \rangle \frac{q-1}{q}} . \end{multline*} Note that there is always a gain of a negative power of $ R_{1} $ in the above terms. Consider now the norm with the commutator in the right hand side of \eqref{eq:apriori-comm}. By Lemma \ref{lemma:comm} we have \begin{eqnarray} \label{eq:comm-est} \| [ Q_{\lambda_{0}} , X_{J} ] u \|_{0} &\leq & C |J| \| u \|_{2 + \langle J \rangle - \frac{q}{q-1}} \notag \\ & \leq & C |J| \max_{0 \leq \ell \leq 2+\langle J \rangle - \frac{q}{q-1}} \max_{\langle I' \rangle = \ell} \|X_{I'} u \|_{0} . \end{eqnarray} Observe that $$ 2 + \langle J \rangle - \frac{q}{q-1} = \langle I \rangle - \frac{q}{q-1} = \frac{k}{q-1} - 1. $$ Thus the norms in the right hand side of \eqref{eq:comm-est} satisfy the inductive hypothesis. We deduce that \begin{eqnarray*} \| [ Q_{\lambda_{0}} , X_{J} ] u \|_{0} & \leq & C k C_{1} R_{1}^{\frac{k}{q-1} -1} \| u \|_{0} \left( \frac{k}{q-1} -1 \right)^{\left(\frac{k}{q-1} -1\right) \frac{q-1}{q}} \\ & \leq & \left( \frac{C C_{1} (q-1)}{R_{1}^{\frac{q}{q-1}}} \right) \| u \|_{0} R_{1}^{\frac{k+1}{q-1}} \left( \frac{k+1}{q-1}\right)^{\frac{k+1-q}{q} +1} \\ & = & \left( \frac{C C_{1} (q-1)}{R_{1}^{\frac{q}{q-1}}} \right) \| u \|_{0} R_{1}^{\frac{k+1}{q-1}} \left( \frac{k+1}{q-1}\right)^{\frac{k+1}{q}} . \end{eqnarray*} Choosing $ R_{1} $ in such a way that the constant in parentheses is bounded by $ C_{1} $ achieves the proof of the proposition. \end{proof} Taking $ |I_{-}| = \beta $, $ |I_{+}| = \alpha $ and using the Sobolev embedding theorem (in one dimension,) we obtain a bound for the derivatives of the functions in $ \ker Q_{\lambda_{0}} $: \begin{corollary} \label{cor:Dphi} Let $ \phi \in \mathscr{S}({\mathbb{R}}) $ be such that $ Q_{\lambda_{0}} \phi = 0$. Then for every $ \alpha, \beta \in {\mathbb{N}} \cup \{ 0 \} $ there exists a positive constant $ C_{\phi} $ such that \begin{equation} \label{eq:Dphi} |x^{\beta} \partial_{x}^{\alpha} \phi(x) | \leq C_{\phi}^{\alpha+\beta+ 1} \alpha!^{\frac{q-1}{q}} \beta!^{\frac{1}{q}}, \end{equation} for every $ x \in {\mathbb{R}} $. \end{corollary} \section{Appendix: An Inequality Involving Powers of $ Q $ } \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} We prove here the following inequality \begin{proposition} \label{prop:ineqQ} Let $ u \in \mathscr{S}({\mathbb{R}}) $, and $ Q $ the operator $ D_{x}^{2} + x^{2(q-1)} $, $ \theta \in {\mathbb{Q}}^{+} \cup \{ 0 \} $, and $ X_{I} $ an operator of the type defined in \eqref{eq:XI}. Set $$ m(\theta, I) = \left[ (2 \theta + \langle I \rangle) \frac{q-1}{q} \right], $$ where the square brackets denote the integer part. 
Then there exists a positive constant $ C $ such that \begin{equation} \label{eq:ineqQ} \| Q^{\theta} X_{I} u \|_{0} \leq \sum_{\ell = 0}^{m(\theta, I)} C^{\ell + 1} \binom{m(\theta, I)}{\ell} p^{\ell} \| Q^{\frac{1}{2}\left(p - \ell \frac{q}{q-1} \right)} u \|_{0}, \end{equation} where \begin{equation} \label{eq:hk} p = 2\theta + \langle I \rangle. \end{equation} Here we used the fact that $ Q $ is a positive globally elliptic operator with discrete spectrum and its rational powers are defined via the spectral mapping theorem, see Helffer \cite{helffer84}. \end{proposition} \begin{proof} The proof is carried out via a number of lemmas. \begin{lemma} \label{lemma:A} Let $ \mu \in {\mathbb{Q}}^{+} $, $ \mu \geq 1 $, $ I $ a multiindex, then \begin{equation} \label{eq:A} \| Q^{\mu} X_{I} u \|_{0} \leq \| Q^{\mu - 1} X_{I} Q u \|_{0} + \sum_{I_{1}} c_{I_{1}} \| Q^{\mu - 1} X_{I_{1}} u \|_{0}, \end{equation} where $ I_{1} $ is a multiindex such that $$ \langle I_{1} \rangle = \langle I \rangle + 2 - \frac{q}{q-1}, $$ the constants $ c_{I_{1}} $ are uniformly bounded by a constant and the number of summands is bounded by $ M \langle I \rangle $, $ M > 0 $ independent of $ I $. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:A}] We have $ Q^{\mu} X_{I} = Q^{\mu - 1}X_{I} Q + Q^{\mu-1} [ Q, X_{I}] $. We then remark that $$ [ Q, X_{I}] = \sum_{I_{1}} c_{I_{1}} X_{I_{1}}, $$ where $ \langle I_{1} \rangle = \langle I \rangle + 2 - \frac{q}{q-1} $ and the bounds in the above statement hold. \end{proof} \begin{lemma} \label{lemma:apr} Let $ I $ be such that $ \langle I \rangle = 2$. Then for any $ u \in \mathscr{S}({\mathbb{R}}) $ we have the estimate \begin{equation} \label{eq:apr} \| X_{I} u \|_{0} \leq \| Q u \|_{0} + C_{q} \| x^{q-2} u \|_{0}, \end{equation} where $ C_{q} $ denotes a positive constant. \end{lemma} \begin{proof} It is a very simple computation. First remark that $$ \| Q u \|_{0}^{2} = \| D_{x}^{2} u \|_{0}^{2} + \| x^{2(q-1)} u \|_{0}^{2} + \langle \left( x^{2(q-1)} D_{x}^{2} + D_{x}^{2} x^{2(q-1)} \right) u, u \rangle. $$ Now $$ x^{2(q-1)} D_{x}^{2} + D_{x}^{2} x^{2(q-1)} = 2 D_{x} x^{2(q-1)} D_{x} + [ D_{x} , [ D_{x} , x^{2(q-1)}] ], $$ so that we get the identity $$ \| D_{x}^{2} u \|_{0}^{2} + \| x^{2(q-1)} u \|_{0}^{2} + 2 \| x^{q-1}D_{x} u \|_{0}^{2} = \| Q u \|_{0}^{2} + (2q-2)(2q-3) \| x^{q-2} u \|_{0}^{2}, $$ and \eqref{eq:apr} immediately follows remarking that $$ \| X_{I} u \|_{0} \leq \| D_{x}^{2} u \|_{0} + \| x^{2(q-1)} u \|_{0} + \| x^{q-1}D_{x} u \|_{0} + C \| x^{q-2} u \|_{0} . $$ \end{proof} \begin{lemma} \label{lemma:B} Let $ \mu \in {\mathbb{Q}}^{+} $, $ \mu < 1 $, $ I $ a multiindex, then \begin{equation} \label{eq:AC} \| Q^{\mu} X_{I} u \|_{0} \leq \| Q^{\mu} X_{\tilde{I}} Q u \|_{0} + \sum_{I_{1}} c_{I_{1}} \| Q^{\mu} X_{I_{1}} u \|_{0} + C \| Q^{\mu + 1 - \frac{q}{2(q-1)}} X_{\tilde{I}} u \|_{0} , \end{equation} where $ \tilde{I} $ is a multiindex such that $ \langle \tilde{I} \rangle = \langle I \rangle - 2 $, $ \langle I_{1}\rangle = \langle I \rangle - \frac{q}{q-1} $, and for the sum we have bounds analogous to those in Lemma \ref{lemma:A}. \end{lemma} \begin{proof} We may write $$ X_{I} = X_{\hat{I}} X_{\tilde{I}} + \sum_{I_{1}} c_{I_{1}}' X_{I_{1}} , $$ where $ \langle \hat{I} \rangle = 2 $, $ \langle \tilde{I} \rangle = \langle I \rangle - 2 $, $ \langle I_{1} \rangle = \langle I \rangle - \frac{q}{q-1}$. Moreover both the constants $ c_{I_{1}}' $ and the number of the summands are bounded by a universal constant. 
Now $ Q^{\mu} X_{\hat{I}} X_{\tilde{I}} = X_{\hat{I}} Q^{\mu} X_{\tilde{I}} + [ Q^{\mu} , X_{\hat{I}}] X_{\tilde{I}} $. By lemma \ref{lemma:apr} we have \begin{eqnarray*} \| X_{\hat{I}} Q^{\mu} X_{\tilde{I}} u \|_{0} & \leq & \| Q^{\mu + 1} X_{\tilde{I}} u \|_{0} + C_{q} \| x^{q-2} Q^{\mu} X_{\tilde{I}} u \|_{0} \\ & \leq & \| Q^{\mu + 1} X_{\tilde{I}} u \|_{0} + C_{q} \| Q^{\mu} X_{I_{3}} u \|_{0} \\ & & + C_{q} \| [ Q^{\mu}, x^{q-2}] X_{\tilde{I}} u \|_{0}, \end{eqnarray*} where $ \langle I_{3} \rangle = \langle I \rangle - \frac{q}{q-1} $ so that the second term in the last line above has the same weight as the terms containing $ X_{I_{1}} $ above. Adapting to the anisotropic case the calculus of globally elliptic pseudodifferential operators we get---see Helffer \cite{helffer84}, Theorem 1.11.2 and Proposition 1.6.11---that $$ \| [ Q^{\mu}, x^{q-2}] X_{\tilde{I}} u \|_{0} \leq C_{1} \| Q^{\mu - \frac{1}{q-1}} X_{\tilde{I}} u \|_{0} \leq C_{1} \| Q^{\mu + 1 - \frac{q}{2(q-1)}} X_{\tilde{I}} u \|_{0} , $$ $$ \| [ Q^{\mu} , X_{\hat{I}}] X_{\tilde{I}} u \|_{0} \leq C_{2} \| Q^{\mu + 1 - \frac{q}{2(q-1)}} X_{\tilde{I}} u \|_{0}. $$ The conclusion follows applying Lemma \ref{lemma:A} to the term $\| Q^{\mu + 1} X_{\tilde{I}} u \|_{0} $. \end{proof} Let us now go back to the proof of inequality \eqref{eq:ineqQ}. We proceed by induction with respect to $ p = 2 \theta + \langle I \rangle $. Since $ \theta \in {\mathbb{Q}} $ and $ \langle I \rangle $ is a rational number whose denominator is $ q-1 $, we may write $ p $ as a fraction $ p = p_{1}/ d(\theta, I) $, so that, proceeding by induction actually means inducing with respect to $ p_{1} $. If $ p_{1} = 0 $ there is nothing to prove. Assume thus that \eqref{eq:ineqQ} holds for any $ q_{1}/d(\theta, I) $, $ q_{1} < p_{1} $ and let us show that it is true when $ q_{1} = p_{1} $. Assume $ \theta < 1 $. Applying Lemma \ref{lemma:B} we obtain \begin{eqnarray*} \| Q^{\theta} X_{I} u \|_{0} & \leq & \| Q^{\theta} X_{\tilde{I}} Q u \|_{0} + \sum_{I_{1}} c_{I_{1}} \| Q^{\theta} X_{I_{1}} u \|_{0} \\ & & + C \| Q^{\theta + 1 - \frac{q}{2(q-1)}} X_{\tilde{I}} u \|_{0} \\ & = & A_{1} + A_{2} + A_{3}, \end{eqnarray*} where $ \langle I_{1} \rangle = \langle I \rangle - \frac{q}{q-1} $ and $ \langle \tilde{I} \rangle = \langle I \rangle - 2 $. Let us start by examining $ A_{1} $. Write $$ \left( 2 \theta + \langle \tilde{I} \rangle \right) \frac{q-1}{q} = m(\theta, \tilde{I}) + \tilde{\sigma}, \qquad 0 \leq \tilde{\sigma} < 1, $$ with $ m(\theta, \tilde{I}) \in {\mathbb{N}} \cup \{ 0 \} $. Since $$ \left( 2 \theta + \langle \tilde{I} \rangle \right) \frac{q-1}{q} = \left( 2 \theta + \langle I \rangle \right) \frac{q-1}{q} -1 - \left( 1 - \frac{2}{q}\right), $$ we deduce that $$ m(\theta, I) - 2 \leq m(\theta, \tilde{I}) \leq m(\theta, I) -1 . $$ Hence applying the induction to $ A_{1} $ we obtain \begin{equation} \label{eq:A1} \| Q^{\theta} X_{\tilde{I}} (Q u) \|_{0} \leq \sum_{\ell=0}^{m(\theta, \tilde{I})} C^{\ell+1} \binom{m(\theta, \tilde{I})}{\ell} \tilde{p}^{\ell} \|(Q^{\frac{1}{2}})^{\tilde{p} + 2 - \ell \frac{q}{q-1}} u \|_{0} , \end{equation} where $ \tilde{p} = 2 \theta + \langle \tilde{I} \rangle $. 
Since $$ \tilde{p} + 2 = 2 \theta + \langle \tilde{I} \rangle + 2 = 2 \theta + \langle I \rangle = p , $$ we may write \begin{eqnarray} \label{eq:A1fin} \| Q^{\theta} X_{\tilde{I}} (Q u) \|_{0} & \leq & \sum_{\ell=0}^{m(\theta, \tilde{I})} C^{\ell+1} \binom{m(\theta, \tilde{I})}{\ell} p^{\ell} \|(Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1}} u \|_{0} \notag \\ & \leq & \sum_{\ell=0}^{m(\theta,I) -1} C^{\ell+1} \binom{m(\theta,I) -1}{\ell} p^{\ell} \|(Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1}} u \|_{0} \end{eqnarray} Next let us examine $ A_{2} $. Applying the inductive hypothesis we may write \begin{multline} \label{eq:A2} \sum_{I_{1}} c_{I_{1}} \| Q^{\theta} X_{I_{1}} u \|_{0} \\ \leq \sum_{I_{1}} C_{1} \sum_{\ell=0}^{m(\theta, I_{1})} C^{\ell+1} \binom{m(\theta, I_{1})}{\ell} p^{\ell} \| (Q^{\frac{1}{2}})^{2\theta + \langle I_{1} \rangle - \ell \frac{q}{q-1} } u \|_{0} \end{multline} Since $$ 2 \theta + \langle I_{1} \rangle = 2 \theta + \langle I \rangle - \frac{q}{q-1} = p - \frac{q}{q-1} , $$ we have that $$ m(\theta, I_{1}) = \left[ \frac{q-1}{q} \left( 2 \theta + \langle I \rangle - \frac{q}{q-1}\right) \right] = \left[ \frac{q-1}{q} p\right] -1 = m(\theta, I) -1. $$ Hence we may conclude that \begin{multline*} \sum_{I_{1}} c_{I_{1}} \| Q^{\theta} X_{I_{1}} u \|_{0} \\ \leq M C_{1} \sum_{\ell=0}^{m(\theta, I) - 1} C^{\ell+1} \binom{m(\theta, I) - 1}{\ell} p^{\ell+1} \| (Q^{\frac{1}{2}})^{2\theta + \langle I \rangle - (\ell + 1) \frac{q}{q-1} } u \|_{0} \\ = M C_{1} \sum_{\ell=1}^{m(\theta, I)} C^{\ell} \binom{m(\theta, I) - 1}{\ell-1} p^{\ell} \| (Q^{\frac{1}{2}})^{2\theta + \langle I \rangle - \ell \frac{q}{q-1} } u \|_{0} \\ = M C_{1}C^{-1} \sum_{\ell=1}^{m(\theta, I)} C^{\ell+1} \binom{m(\theta, I) - 1}{\ell-1} p^{\ell} \| (Q^{\frac{1}{2}})^{2\theta + \langle I \rangle - \ell \frac{q}{q-1} } u \|_{0} \\ \leq \frac{1}{2} \sum_{\ell=1}^{m(\theta, I)} C^{\ell+1} \binom{m(\theta, I) - 1}{\ell-1} p^{\ell} \| (Q^{\frac{1}{2}})^{2\theta + \langle I \rangle - \ell \frac{q}{q-1} } u \|_{0} , \end{multline*} provided $ C $ is chosen so that $ M C_{1}C^{-1} \leq \frac{1}{2} $. Finally consider $ A_{3} $. Since $$ 2 \theta + 2 - \frac{q}{q-1} + \langle \tilde{I} \rangle = 2 \theta + 2 - \frac{q}{q-1} + \langle I \rangle - 2 = 2 \theta + \langle I \rangle - \frac{q}{q-1} < p, $$ we may apply the inductive hypothesis. Moreover as above $$ \left[ \frac{q-1}{q} \left( p - \frac{q}{q-1} \right) \right] = \left[ \frac{q-1}{q} p \right] - 1 = m(\theta,I) - 1 , $$ so that, renaming $ \tilde{C} $ the constant $ C $ in the definition of $ A_{3} $, we have \begin{multline*} \tilde{C} \| Q^{\theta + 1 - \frac{q}{2(q-1)}} X_{\tilde{I}} u \|_{0} \\ \leq \tilde{C} \sum_{\ell=0}^{m(\theta, I)-1} C^{\ell+1} \binom{m(\theta, I) - 1}{\ell} p^{\ell} \| (Q^{\frac{1}{2}})^{p - (\ell + 1) \frac{q}{q-1} } u \|_{0} \\ = \tilde{C} \sum_{\ell=1}^{m(\theta, I)} C^{\ell} \binom{m(\theta, I) - 1}{\ell - 1} p^{\ell - 1} \| (Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1} } u \|_{0} \\ \leq \frac{1}{2} \sum_{\ell=1}^{m(\theta, I)} C^{\ell+1} \binom{m(\theta, I) - 1}{\ell - 1} p^{\ell} \| (Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1} } u \|_{0} , \end{multline*} provided $ C $ is chosen so that $ \tilde{C} C^{-1} \leq \frac{1}{2} $. 
Hence we conclude that \begin{multline*} \| Q^{\theta} X_{I} u \|_{0} \leq \sum_{\ell=0}^{m(\theta,I) -1} C^{\ell+1} \binom{m(\theta,I) -1}{\ell} p^{\ell} \|(Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1}} u \|_{0} \\ + \sum_{\ell=1}^{m(\theta, I)} C^{\ell+1} \binom{m(\theta, I) - 1}{\ell - 1} p^{\ell} \| (Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1} } u \|_{0} \\ = \sum_{\ell=0}^{m(\theta, I)} C^{\ell+1} \binom{m(\theta, I)}{\ell} p^{\ell} \| (Q^{\frac{1}{2}})^{p - \ell \frac{q}{q-1} } u \|_{0} , \end{multline*} thus concluding the proof of Proposition \ref{prop:ineqQ}. If $ \theta > 1 $ the proof is completely analogous, using Lemma \ref{lemma:A}. \end{proof} \section{Appendix: Reducing Vector Fields to Powers of $ Q $ } \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{corollary}{0} \setcounter{definition}{0} We first prove the following \begin{proposition} \label{prop:A} Let $ \mu \in {\mathbb{Q}}^{+} $. Then there exists a positive constant, $ C $, independent of $ \mu $, such that \begin{equation} \label{eq:A1D} \| Q^{\mu} x \partial_{x} u \|_{0} \leq C \left( \| Q^{\mu + \frac{q}{2(q-1)}} u\|_{0} + \mu^{2\frac{q-1}{q} \mu + 1} \| u \|_{0} \right). \end{equation} \end{proposition} \begin{proof} Let $ [ \mu ] = k $, so that $ \mu = k + \theta $, $ 0 \leq \theta < 1 $. Then $$ Q^{\mu} x \partial_{x} = x \partial_{x} Q^{\mu} + [ Q^{\theta} Q^{k} , x \partial_{x} ] = x \partial_{x} Q^{\mu} + Q^{\theta} [ Q^{k}, x \partial_{x} ] + [ Q^{\theta}, x \partial_{x} ] Q^{k} . $$ Since $ x \partial_{x} $ has weight $ \frac{q}{q-1} $ we have $$ \| x \partial_{x} v \|_{0} \leq C_{1} \| Q^{\frac{q}{2(q-1)}} v \|_{0}, $$ for a suitable constant $ C_{1} $. Here we used the pseudodifferential calculus adapted to the anharmonic oscillator $ Q $ (see \cite{helffer84} and \cite{bm-apde1}, Definition 2.1). Hence $$ \| x \partial_{x} Q^{\mu} u \|_{0} \leq C_{1} \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0}. $$ Analogously, for the third term we have the estimate $$ \| [ Q^{\theta}, x \partial_{x} ] Q^{k} u \|_{0} \leq C_{2} \| Q^{\mu} u \|_{0} \leq C_{3} \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} , $$ where $ C_{2} $, $ C_{3} $ are independent of $ k $. Here we also used the fact that if $\sigma $, $ \tau $ are rational numbers such that $ 0 < \sigma < \tau $, then $ \| Q^{\sigma} u \|_{0} \leq C \| Q^{\tau} u \|_{0} $, for a suitable constant $ C > 0 $, independent of $ \sigma $, $ \tau $. Let us consider the term $ Q^{\theta} [ Q^{k}, x \partial_{x} ] $. We have $$ Q^{\theta} [ Q^{k}, x \partial_{x} ] = \sum_{i=1}^{k} \binom{k}{i} Q^{\theta} (\ad Q)^{i}(x \partial_{x}) Q^{k-i} . $$ The iterated commutator above is a sum of products of $ X_{-} $, $ X_{+} $ (see equation \eqref{eq:X+}): $$ (\ad Q)^{i}(x \partial_{x}) = \sum_{I} c_{I} X_{I} , $$ where $ |c_{I}| \leq C_{4}^{i} $, the number of summands is bounded by $ C_{5}^{i} $ and $$ \langle I \rangle = i \frac{q-2}{q-1} + \frac{q}{q-1}. $$ Applying Proposition \ref{prop:ineqQ} to $ Q^{\theta} X_{I} $ we obtain the estimate \begin{equation} \label{eq:D2} \| Q^{\theta} (\ad Q)^{i}(x \partial_{x}) v \|_{0} \leq C_{*}^{i} \sum_{\ell=0}^{m(i)} C^{\ell+1} \binom{m(i)}{\ell} p(i)^{\ell} \|(Q^{\frac{1}{2}})^{p(i) - \ell \frac{q}{q-1}} v \|_{0} , \end{equation} where $$ p(i) = 2\theta + i \frac{q-2}{q-1} + \frac{q}{q-1} , \qquad m(i) = \left[ 2 \theta \frac{q-1}{q} + i \frac{q-2}{q}\right] + 1.
$$ It will be useful to simplify the sums of the above type by, roughly, taking the terms with the maximum and the minimum power of $ Q $, and, correspondingly, with the minimum and maximum power of $ p(i) $. This is done by a sort of convexity estimate of the following type \begin{proposition} \label{prop:cvx} Let $ 1 \leq p, p' < + \infty $, $ p^{-1} + p'^{-1} = 1 $, and let $ \lambda > 0 $ be a real number. Then $$ \| \lambda Q v \|_{0} \leq \frac{1}{p} \| Q^{p} v \|_{0} + \frac{1}{p'} \lambda ^{p'} \| v \|_{0} . $$ \end{proposition} The proof is straightforward, by using the spectral mapping theorem and we omit it. Going back to \eqref{eq:D2} we may write \begin{multline} \label{eq:D3} \| Q^{\theta} (\ad Q)^{i}(x \partial_{x}) v \|_{0} \\ \leq C_{*}^{i} \sum_{\ell=0}^{m(i)} C^{\ell+1} \binom{m(i)}{\ell} \left( \| (Q^{\frac{1}{2}})^{p(i)} v \|_{0} + p(i)^{p(i) \frac{q-1}{q}} \| v \|_{0}\right) \\ \leq C_{6}^{i+1} \left( \| (Q^{\frac{1}{2}})^{p(i)} v \|_{0} + p(i)^{p(i) \frac{q-1}{q}} \| v \|_{0}\right), \end{multline} due to the bound $$ C_{*}^{i} \sum_{\ell=0}^{m(i)} C^{\ell+1} \binom{m(i)}{\ell} = C C_{*}^{i} (C+1)^{m(i)} \leq C_{6}^{i+1} . $$ As a consequence \begin{multline} \label{eq:D4} \| Q^{\mu} x \partial_{x} u \|_{0} \leq C_{7} \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} \\ + \sum_{i=1}^{k} C_{6}^{i+1} \binom{k}{i} \left( \| (Q^{\frac{1}{2}})^{p(i) + 2(k-i)} u \|_{0} + p(i)^{\frac{q-1}{q} p(i)} \| Q^{k-i} u \|_{0} \right) \\ = C_{7} \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} \\ + \sum_{i=1}^{k} C_{6}^{i+1} \binom{k}{i} \left( \| Q^{\mu - (i-1) \frac{q}{2(q-1)}} u \|_{0} + p(i)^{\frac{q-2}{q} i + 1 + 2 \theta \frac{q-1}{q}} \| Q^{k-i} u \|_{0} \right) \end{multline} Observe now that $$ p(i)^{\frac{q-2}{q} i + 1 + 2 \theta \frac{q-1}{q}} \leq C_{0}'^{i} i^{\frac{q-2}{q} i} i^{1+2\theta \frac{q-1}{q}} \leq C_{0}^{i} \mu^{\frac{q-2}{q} i}, $$ since $ i \geq 1 $, for a suitable constant $ C_{0} $. Moreover $ \binom{k}{i} \leq \mu^{i} i!^{-1} $, since $ k = [\mu] $. Applying Proposition \ref{prop:cvx} to both terms under the sum sign above, we obtain $$ \mu^{i} \| Q^{\mu + \frac{q}{2(q-1)} - i \frac{q}{2(q-1)}} u \|_{0} \leq \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} + \mu^{2\frac{q-1}{q} \mu + 1} \| u \|_{0} , $$ and $$ \mu^{\frac{q-2}{q} i + i} \|Q^{\mu + \frac{q}{2(q-1)} - i} u \|_{0} \leq \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} + \mu^{2\frac{q-1}{q} \mu + 1} \| u \|_{0} . $$ Plugging the above estimates into \eqref{eq:D4}, we find \begin{multline*} \| Q^{\mu} x \partial_{x} u \|_{0} \leq C_{8} \sum_{i=0}^{[\mu]} \frac{C^{i+1}}{i!} \left ( \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} + \mu^{2\frac{q-1}{q} \mu + 1} \| u \|_{0} \right) \\ \leq C_{9} \left( \| Q^{\mu + \frac{q}{2(q-1)}} u \|_{0} + \mu^{2\frac{q-1}{q} \mu + 1} \| u \|_{0} \right) . \end{multline*} This completes the proof of the proposition. \end{proof} Proposition \ref{prop:A} takes care of the action of (rational) powers of $ Q $ on the principal part of the transport operator $ P_{1} $. We need a similar result for the other transport operators, $ P_{k} $, $ k = 2,\ldots, 2a $, in \eqref{eq:Pj}. \begin{proposition} \label{prop:AjD} Let $ \mu \in {\mathbb{Q}}^{+} $. Then there exists a positive constant, $ C $, independent of $ \mu $, such that \begin{equation} \label{eq:AjD} \| Q^{\mu} (x \partial_{x})^{j} u \|_{0} \leq C_{1}^{j} \left( \| Q^{\mu + j \frac{q}{2(q-1)}} u\|_{0} + \mu^{2\frac{q-1}{q} \mu + j} \| u \|_{0} \right) , \end{equation} $ j = 1,\ldots, 2a $. 
\end{proposition} \begin{proof} We proceed by induction with respect to $ j $. When $ j=1 $ the assertion is just Proposition \ref{prop:A}. Assume that the assertion holds for $ j $ and let us prove it for $ j+1 $. We have $$ \| Q^{\mu} (x \partial_{x})^{j+1} u \|_{0} = \| Q^{\mu} (x \partial_{x})^{j} (x \partial_{x}) u \|_{0} . $$ Applying the above estimate for $ j $ we obtain $$ \| Q^{\mu} (x \partial_{x})^{j+1} u \|_{0} \leq C_{1}^{j} \left( \| Q^{\mu + j \frac{q}{2(q-1)}} (x \partial_{x}) u\|_{0} + \mu^{2\frac{q-1}{q} \mu + j} \| (x \partial_{x}) u \|_{0} \right) . $$ For the first term we use Proposition \ref{prop:A}, while for the second we apply Propositions \ref{prop:ineqQ} and \ref{prop:cvx}. Now \begin{multline*} \| Q^{\mu} (x \partial_{x})^{j+1} u \|_{0} \\ \leq C_{1}^{j} \Bigg ( C_{*} \left( \| Q^{\mu + (j+1)\frac{q}{2(q-1)}} u \|_{0} + \mu^{2\frac{q-1}{q} \mu + 1 + j} \| u \|_{0} \right) \\ + C_{**} \left( \| Q^{\mu + (j + 1) \frac{q}{2(q-1)}} u\|_{0} + \mu^{2\frac{q-1}{q} \mu + j + 1} \| u \|_{0} \right) \Bigg), \end{multline*} so that choosing $ C_{1} \geq 2 \max\{ C_{*}, C_{**}\} $ completes the proof. \end{proof}
\section{Introduction} \label{sec:intro} In using proof assistants to establish theorems with very high assurance at relatively low human cost, two main methods are employed. One can implement \emph{proof-generating theorem provers} that justify their decisions in terms of more primitive reasoning steps, or one can employ the \emph{proof by reflection} style, which involves verifying the correctness of provers formally, so that there is no need to audit their outputs. The latter style has been used fairly heavily in the type theory community, where it is often viewed as a straightforward implementation technique to improve both performance of and assurance about proof search. To our knowledge, however, proof by reflection has only been applied to fairly focused problems where the domain of discourse (or its axiomatization) is known a priori. To explore its limits, we explore mostly automated correctness verification of imperative programs, in a framework supporting higher-order logic and \emph{abstract predicates}. \emph{Extensibility} is crucial in this domain, as the (expert) author of an abstract data type (ADT) would like to teach the prover how to reason about related proof goals that arise in verifying (non-expert) client code that calls the ADT's methods. We hope to achieve this \emph{modularity} and \emph{expressivity} while minimizing the overhead of proofs. \medskip Realistic program verification involves consideration of many details of program behavior. Automated theorem provers play a crucial role in discharging the many straightforward proof obligations, freeing the programmer to focus on providing invariants and other pieces of high-level insight. However, conventional automated program verifiers as in the ESC family~\cite{EscPLDI02} suffer from a number of disadvantages. \begin{itemize} \item \textbf{Debuggability.} Theorem prover implementations can be quite complex. While the prover will facilitate effective static verification for many other programs, the prover itself is usually debugged tediously via testing, and ``bugs'' often manifest as confusing failures of the tool to discharge specific goals. \item \textbf{Trustworthiness.} As a corollary of the last concern, we might worry that an off-the-shelf theorem prover is unsound in a way that can have serious consequences for security and so forth, if we use it as a trusted component of a verification system. \item \textbf{Flexibility.} The logics of automated theorem provers are generally chosen to be decidable or otherwise tractable in practice, ruling out expressive higher-order features. As a result, some important correctness theorems (e.g., semantic correctness of a compiler) may not be possible to state, and others must be stated in prohibitively verbose ways. \item \textbf{Composability.} Many theorem provers act as standalone systems that do not easily produce results that can be combined with results arising from other tools using a unified logical language. For instance, we may want to verify the correctness of a compiler (which is outside the reach of traditional program verifiers), verify the correctness of a program in the compiler's source language (which is well within the scope of traditional verifiers), and compose the results into a high-confidence deduction about the compiled version of that source program. \end{itemize} All of the above reasons have contributed to the popularity of program verification with proof assistants like Coq~\cite{Coq} and Isabelle~\cite{Isabelle}. 
Such systems are based on small, trustworthy proof checkers for relatively simple sets of axioms and inference rules. They achieve \textbf{trustworthiness} by allowing the use of arbitrary proof \emph{search} techniques without expanding the trusted code base for proof \emph{checking}. The underlying logics are higher order, providing very good \textbf{flexibility}. It is even possible to leverage external tools, like SMT solvers, to generate proof assistant proofs, addressing the \textbf{composability} concern. However, \textbf{debuggability} remains a serious concern. While automated provers support many features, they are not a panacea. For example, combinations of modal, linear, and higher-order logic are not supported by any prover that we are aware of. And even when they are, the undecidability of powerful logics forces frequent use of new, problem-specific heuristics that must be embedded into automation or applied manually each time they are needed. Without the ability to justify such extensions, user customization compromises the trustworthiness of the entire system. While it is possible to use arbitrary implementations to generate proof traces in novel ways, such procedures can be nearly as hard to debug as conventional, non-proof-generating theorem provers. It is also true that switching to an approach based on proof generation can bring substantial \textbf{performance} costs. A proof language with few orthogonal features is very attractive because it allows proof checkers to be small and trustworthy, but small proof languages tend to promote large proofs. Automated provers must be augmented to support generation of proofs, which adds code complexity and performance overhead. Later, it is necessary to check the proofs, which adds further overhead. \medskip An established best-of-both-worlds technique for type-theoretic proof assistants is \textbf{proof by reflection}~\cite{ReflectionTACS97}. Here, instead of writing a procedure that \emph{generates a proof} on each invocation, we \emph{verify the procedure itself}. Now no proofs need to be generated, freeing us from the associated overhead. However, the formal guarantees are just as strong, because we have proved the correctness of the procedure, which is generally implemented in a functional programming language within the proof assistant's logic. Proof by reflection has previously been implemented for conventional mathematical decision problems. For instance, Coq comes packaged with a tactic for algebraic simplification over mathematical rings. This procedure, and others out there, operate over a small vocabulary of logical symbols whose semantics must be understood. Often the procedures are \emph{extensible} in the sense of allowing customization of the set of symbols. However, such procedures can be thought of as single-minded, never combining multiple, user-defined reasoning strategies in the way that successful automated program verifiers do. To investigate how far the scope of proof by reflection may be expanded, we built reflective implementations of two key proof procedures for imperative program verification with \emph{higher-order separation logic}~\cite{SeplogLICS02} within the Bedrock~\cite{BedrockPLDI11} library for Coq. Previously, these procedures were implemented in a proof-generating way, with Coq's domain-specific tactic language Ltac~\cite{LtacLPAR00}.
Listing~\ref{treeset} shows how our verification procedures handle user-defined abstractions by showing a verification of binary search tree ``lookup.'' The code defines a representation invariant for binary search trees (\coqe{bst}). Afterward, a few \emph{hint lemmas} are proved about the invariant. These lemmas encapsulate the semantic knowledge that the reflective procedures will need to reason about \coqe{bst}. Specifications for the imperative code make heavy use of logical quantifiers, which create additional challenges for our automation. Listing~\ref{treeset} ends with a short proof of program correctness, via the \coqe{sep} tactic that calls the reflective procedures with a particular package of program-specific \emph{hints}. \begin{figure}[t] \tikzstyle{commentlabel}=[anchor=west,rotate=90,font=\small,text=darkgreen] \tikzstyle{commentline}=[very thick,draw=darkgreen] \begin{tikzpicture}[overlay] \draw[commentline] (8.05,-3.75) -- (8.05,-4.5) ; \node[commentlabel] at (8.25,-4.6) {user predicate}; \draw[commentline] (8.05,-4.9) -- (8.05,-6.6) ; \node[commentlabel] at (8.25,-6.8) {refinement hints}; \draw[commentline] (8.05,-7.35) -- (8.05,-8.3) ; \node[commentlabel] at (8.25,-8.7) {combine hints}; \draw[commentline] (8.05,-9.9) -- (8.05,-11.1) ; \node[commentlabel] at (8.25,-11.6) {quantified invariants}; \draw[commentline] (8.05,-16.9) -- (8.05,-17.5) ; \node[commentlabel] at (8.25,-17.6) {prove with hints}; \draw[commentline] (3.8,-17.5) -- (4.5,-17.5) ; \end{tikzpicture} \vspace{-0.2cm} \begin{lstlisting}[language=coq,caption={Verification of binary search trees implementing finite-set ``lookup''},label=treeset,captionpos=t] (* "Spine" type to define the rep. predicate *) Inductive tree := Leaf : tree | Node : tree -> tree -> tree. (* Recursive representation predicate for BSTs *) Fixpoint bst' (s : set) (t : tree) (p : W) : HProp := (* details omitted *). (* Main rep. predicate, which wraps the above with a * mutable pointer to its root *) Definition bst (s : set) (p : W) := [| freeable p 2 |] * Ex t, Ex r, Ex junk, p =*> r * (p ^+ \$4) =*> junk * bst' s t r. (* A standard tree refinement hint *) Theorem nil_fwd : forall s t (p : W), p = 0 -> bst' s t p ===> [| s Proof. destruct t; sepLemma. Qed. (* ...more hints... *) (* Combine the hints into a first-class package *) Definition hints : HintPackage. prepare (nil_fwd, bst_fwd, cons_fwd) (nil_bwd, bst_bwd, ...). Defined. Definition bstM : bmodule := { ... (* Method implementation *) bfunction "lookup"("s", "k", "tmp") [lookupS] "s" <-* "s";; [Al s, Al t, PRE[V] bst' s t (V "s") * mallocHeap POST[R] [| (V "k" While ("s" <> 0) { "tmp" <-* "s" + 4;; If ("k" = "tmp") { (* Key matches! *) Return 1 } else { If ("k" < "tmp") { (* Searching for a lower key *) "s" <-* "s" } else { (* Searching for a higher key *) "s" <-* "s" + 8 } } };; Return 0 ... }. (* Prove our implementation partially correct. *) Theorem bstMOk : moduleOk bstM. Proof. vcgen; abstract (sep hints; auto). Qed. \end{lstlisting} \end{figure} Our high-level contributions come from \textbf{adapting the proof-by-reflection approach to a more open-ended setting}. In particular: \begin{itemize} \item To the best of our knowledge, ours are the \textbf{first reflective tactics to support a notion of reusable hints}, similar to the notions exposed in Coq Ltac programming. Our approach leverages three types of hints to teach our core procedures about new abstract predicates. 
Even the usual ``points-to'' predicate of separation logic is not built into our tactics, but rather taught to them via hints. We identify three hint mechanisms that suffice to support all of the above from a small core, and we prove the soundness of the hint architecture. We also demonstrate classic, proved-correct data structure examples like arrays, linked lists, and search trees. \item We extend the proof-by-reflection approach to handle proof goals containing \textbf{quantifiers}, and we implement and verify a \textbf{unification} algorithm to facilitate related reasoning. Coq itself includes a separate treatment of unification, and we have developed an approach to two-way communication between the two unification systems. This interface is not trivial because unification is not part of Coq's logic, but rather added on outside the trusted base and interfaced with through Ltac, Coq's tactic language for building proofs. \item Hints are naturally expressed over different logical theories, involving different data structure representation predicates and different background theories (e.g., bitvectors, lists, strings) for stating side conditions of lemmas. We have developed a \textbf{modular architecture for composing verified hints over different theories}, including a mechanism for carving out and combining smaller domains. \item We also provide a \textbf{performance analysis of design choices in the implementation of extensible reflective tactics}. We encountered surprising challenges in achieving reasonable performance while supporting all of the above features. \end{itemize} We begin with background on reflective proofs (Section~\ref{sec:reflection}). We then discuss the broadly applicable novel technical devices behind our implementation, which enable us to apply reflective reasoning to the complicated expressions that arise in our verification tasks (Section~\ref{sec:extensible}). We then try to distill the reusable engineering lessons we have learned about implementing and optimizing reflective procedures at this scale (Section~\ref{sec:lessons}). Next comes an evaluation of the automation achieved by our procedures and an analysis of their performance characteristics compared to non-reflective verification (Section~\ref{sec:evaluation}), where our overall conclusion is that we improve asymptotic performance substantially, though well-abstracted programs and specifications may not lead to large enough invariants to exhibit the scaling improvements clearly. We wrap up with a discussion of related work (Section~\ref{sec:related-work}). Our techniques, and many of the components that we built, are implemented in the MirrorShard library, available for download at: \begin{center} \url{https://github.com/gmalecha/mirror-shard/} \end{center} \section{A Primer on Proof By Computational Reflection} \label{sec:reflection} In this section, we discuss the idea of reflection~\cite{ReflectionTACS97}. For the sake of simplicity, we shall present reflection using a stripped-down example: here, we are interested in reducing the proof of equalities like $f~a~(g~b) = f~c~(g~(h~a))$ to the proof of $a = c$ and $b = h~a$. That is, we are computationally discharging equalities between terms in the (multi-sorted) algebra generated by an arbitrary signature, generating new proof obligations for equalities between subterms that do not follow from basic properties of equality. The first step of reflection is to \emph{encode the syntax of proof goals in a datatype defined within your proof assistant's logic}.
In the case of Coq, this logic is \emph{Gallina}, a dependently typed lambda calculus with inductive definitions. Thus, we start by defining a datatype \coqe{expr} to represent terms: in this syntaxified representation, an expression is just a function symbol applied to a list of expressions (see Listing~\ref{lst:reif}). We break the circularity using 0-argument functions to represent variables and constants. This is important because it is imperative that the \coqe{expr} type has a decidable equality so we can avoid generating obligations such as \coqe{x = x}. We achieve this by representing functions as indices into an environment of functions. To make the meaning of our syntax formal, we define a denotation function \coqe{denote} that maps an expression \coqe{e} supposedly of the type represented by \coqe{ty}, to a value of type \coqe{option (nth types ty)}. Note that this denotation function is partial, because our data type admits the encoding of ill-typed terms. (We will return to this particular design choice in Section~\ref{sec:lessons}.) The denotation function is parameterized by an environment of types and function signatures (of type \coqe{sig} indexed by the type environment), and will perform dynamic type-checking to ensure that the Gallina term that it produces is well-typed. Thus, where \coqe{ltb} is a Boolean-valued less-than test for natural numbers, the term \coqe{ltb (x + y) z} can be represented using the following environments and term. \vspace{0.2cm} \begin{coq} Let types := [nat; bool]. Let functions := [sig [0;0] 0 plus; sig [0;0] 1 ltb; sig [] 0 x; sig [] 0 y; sig [] 0 z] Func 1 [Func 0 [Func 2 []; Func 3 []]; Func 4 []] : expr \end{coq} Using this representation, we can \emph{implement a (heuristic) decision procedure} in Gallina. The procedure considers the head symbols of two expressions. If they are the same, it proceeds recursively on their arguments. Otherwise it accumulates a ``proof obligation'' (i.e., a pair of expressions whose denotations must be proven equal). \begin{coq} Fixpoint f_eq (a b: expr) (ty: nat) : list (nat*expr*expr) := match a , b with | Func f1 args1 , Func f2 args2 => if f1 == f2 then union (map3 f_eq args1 args2 (domainOf f1)) else [(ty, a, b)] end. \end{coq} Finally, we prove the procedure sound. That is, we prove that if all of the constraints are satisfied then the denotations of the original terms are equal. \begin{coq} Theorem f_eq_correct : forall a b ty, Forall (fun (t,x,y) => denote x t = denote y t) (f_eq a b ty) -> denote a ty = denote b ty. \end{coq} \begin{figure} \begin{lstlisting}[frame=none,caption={Representing multi-sorted expressions},label=lst:reif] Inductive expr := | Func: nat -> list expr -> expr. Variable types : list Type. Fixpoint ftype (domain: list nat) (range: nat) : Type := match dom with | [] => nth range types | t::dom => nth t types -> ftype dom range end. Record sig := {dom: list nat; rng: nat; val: ftype dom rng}. Variable functions : list sig. Fixpoint denote (e: expr) (ty: nat): option (nth ty types) := ... \end{lstlisting} \end{figure} Applying this lemma makes it possible to replace several proof steps (one proof step for every common head symbol between the lefthand-side and righthand-side expressions) by a single proof step, plus a \emph{computation}. 
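To make the procedure concrete, consider the equality \coqe{ltb (x+y) z = ltb (x+y) w} used as the running example below, and suppose (purely for illustration) that the function environment above is extended with \coqe{w} as a constant at index 5. Hand-running \coqe{f_eq} on the two reified sides cancels every matching head symbol and leaves a single residual obligation:

\begin{coq}
Let a := Func 1 [Func 0 [Func 2 []; Func 3 []]; Func 4 []]. (* ltb (x + y) z *)
Let b := Func 1 [Func 0 [Func 2 []; Func 3 []]; Func 5 []]. (* ltb (x + y) w *)
(* Heads 1, 0, 2, and 3 match pairwise, so only the mismatched leaves
   survive, tagged with their expected type index (0, i.e. nat):        *)
(* f_eq a b 1 = [(0, Func 4 [], Func 5 [])]  -- the obligation relating z and w *)
\end{coq}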
That is, we come to the final crucial element of a proof by reflection: \emph{a goal is proved by appealing to a theorem and then proving its hypothesis by ``running'' the hypothesis to reduce it to a normal form, which should then be much easier to prove than the original goal.} For example, suppose that we want to prove \coqe{ltb (x+y) z = ltb (x+y) w}. We can apply the aforementioned lemma, using suitable values \coqe{a} and \coqe{b} of type \coqe{expr}, and Coq will check that \coqe{denote a 1} (resp. \coqe{denote b 1}) is convertible to the lefthand side (resp. righthand side) of the goal, according to the lambda calculus reduction rules of Gallina. Here we take advantage of the fact that, in Coq's logic, reduction-equivalent terms may always be used interchangeably, with no need to include explicit proof steps as justification. \paragraph{Reification} In the above discussion, we side-stepped a difficulty: we have to automate the construction of the syntactic representation (i.e., terms of type \coqe{expr}) from the terms that appear in the goal. While this operation, called \emph{reification}, is conceptually the dual of \coqe{denote}, it must be performed at the meta level using special-purpose tactics. We shall return to this problem in Section~\ref{sec:lessons}. \section{An Extensible Verification Architecture} \label{sec:extensible} Compared to past work on proof by reflection, our verification architecture achieves its powerful form of extensibility through three novel technical devices: \begin{description} \item[Extensible Syntax] enables us to encode terms with arbitrary Coq constants and quantifiers, while retaining the ability to compute on constants embedded in terms. (Section~\ref{sec:representation}) \item[Composable Soundness] enables us to combine soundness proofs about procedures that reason about different logical domains. (Section~\ref{sec:composition}) \item[Integration with Unification] enables flexible integration of reflective procedures with traditional Ltac-based automation that uses unification variables that are not formalized in Coq's core logic. (Section~\ref{sec:unification}) \end{description} \subsection{An Extensible Syntax with Binders} \label{sec:representation} Our reified syntax for assertions of separation logic~\cite{SeplogLICS02} (summarized in Listing~\ref{lst:syntax}) is similar in spirit to the generic syntax described in Section~\ref{sec:reflection} but incorporates several new forms for dealing with binders, manipulating proofs, and performing domain-specific reasoning. The four key differences are: \begin{figure} \begin{lstlisting}[frame=none,caption={Representing separation logic expressions with binders},label=lst:syntax] Record type := { Impl : Type ; Eq : Impl -> Impl -> bool }. Inductive tvar := tvProp : tvar | tvType : nat -> tvar. Inductive expr (ts : list type) : Type := | Const: forall (ty : tvar), tvarD ty ts -> expr ts | Var: nat -> expr ts | UVar : nat -> expr ts | Func: nat -> list (expr ts) -> expr ts | Equal : expr ts -> expr ts -> expr ts. Inductive sexpr (ts : list type) : Type := | Star : sexpr ts -> sexpr ts -> sexpr ts | Emp : sexpr ts | Pred : nat -> list (expr ts) -> sexpr ts | Inj : expr ts -> sexpr ts | Exists : tvar -> sexpr ts -> sexpr ts. Fixpoint exprD (ts : list type) (fs : list (signature ts)) (vars uvars : list { t : tvar & tvarD ts t }) (e : expr ts) (t : tvar) : option (tvarD ts t) := ....
Fixpoint sexprD (ts : list type) (fs : list (signature ts)) (ps : list (psignature ts)) (vars uvars : list { t : tvar & tvarD ts t }) (e : sexpr ts) : hprop := .... \end{lstlisting} \vspace{-0.3cm} \end{figure} \begin{description} \item[A distinguished encoding for the type of logical propositions] (\coqe{tvProp}) simplifies representing logical properties and enables us to represent polymorphic equality (\coqe{Equal}), even though our encoding does not support polymorphic functions in general. \item[An expression type family] indexed by a type environment, which gives for each type both its \emph{Coq representation} and a compatible \emph{equality testing function}. The \coqe{Const} constructor is used to inject terms that our unification algorithm should consider comparing for equality by calling type-specific procedures, in contrast to the syntactic equality check used for the rest of the constructors. \item[Binders and local variables] are represented by the constructors \coqe{Exists} of \coqe{sexpr} and \coqe{Var} of \coqe{expr}. Our encoding is similar to the \emph{locally nameless} technique for lambda terms, in that we maintain distinct representations of global/free variables (nullary function applications via \coqe{Func}) and local/bound variables (via \coqe{Var}). \item[Unification variables] (represented with \coqe{UVar}) are represented explicitly so that our procedures may deduce and substitute values for them. Informally, a \coqe{Var} expression supports universal-quantifier reasoning, while a \coqe{UVar} expression supports existential-quantifier reasoning. We must prove any theorem considering all possible \coqe{Var} values, but we are allowed to choose specific \coqe{UVar} values that make the theorem true. \end{description} To support our new features, the denotation functions \coqe{exprD} and \coqe{sexprD} have several new parameters for variables, unification variables, and separation logic predicates. To see how these components fit together, we show a simple heap assertion for a cons cell where the first value is a unification variable from the context ($\mathsf{?a}$) and the second (occurring 4 bytes later) is an existentially quantified value ($v$): \vspace{-0.05cm} $$ \exists v, \mathsf{p} \mapsto \mathsf{?a} * (\mathsf{p} + 4) \mapsto v $$ \vspace{-0.06cm} This term can be represented as: \vspace{-0.06cm} \begin{coq} Let types := [(word, eq_word)]. Let funcs := [([],tvType 0,p); ([tvType 0;tvType 0],tvType 0,+)]. Let preds := [([tvType 0; tvType 0], $\mapsto$)]. Let vars := []. (* p is represented as a function *) Let uvars := [(tvType 0, ?a)]. Exists (tvType 0) (Star (Pred 0 [Func 0 [] ; UVar 0]) (Pred 0 [Func 1 [Func 0 []; Const 4]; Var 0])) \end{coq} \subsection{Achieving Compositionality} \label{sec:composition} To be useful, hints must be both \textbf{self-contained}, packaging together the hint and its soundness proof, and \textbf{compositional}, enabling us to combine separately defined hints in a meaningful way. Here, our type-family representation introduces problems. The most na\"ive prover type, with one environment of types fixed for all provers, will never compose with provers using different types. We solve the problem using a sort of universal quantification over type environments that imposes constraints on the presence of specific identifiers. \paragraph{A Constraint Formulation} One way of representing the constraints is propositionally, with explicit logical assertions. 
We can encode constraints using partial environments and say that an environment ($e$) satisfies a constraint ($C$), written $C \models e$, when all mappings in the constraint are consistent with the environment. Using this formulation, we can define two provers, one for lists and the other for machine words. Each constraint is a list of optional types, where the presence of a type forces the final environment to contain that type in that list position. \begin{coq} Let C1 := [None; Some (list word)]. Definition prover_1 : forall ts, C1 $\models$ ts -> expr ts -> bool. Let C2 := [Some word]. Definition prover_2 : forall ts, C2 $\models$ ts -> expr ts -> bool. \end{coq} Since \coqe{C1} and \coqe{C2} are compatible, these provers can be composed into a new prover that accepts an environment that satisfies both \coqe{C1} and \coqe{C2}. The difficulty of this formulation arises in type-checking prover implementations, where it is generally necessary to use \emph{casts} justified by appealing to the consistency proof. For example, suppose that \coqe{prover_2} is determining whether a number is a multiple of 4. An ideal formulation of the soundness of this prover would be: \begin{coq} Theorem prover2_sound' : forall ts fs vars uvars e v (pf : C2 $\models$ ts), FC2 $\models$ fs -> prover_2 ts pf e = true -> exprD ts fs vars uvars (tvType 0) e = Some v -> v mod 4 = 0. \end{coq} where \coqe{FC2} is \coqe{prover_2}'s constraint on the function environment. Unfortunately, this soundness statement is not well typed. The problem stems from \coqe{v}. In line 3, \coqe{v} must have type \coqe{tvarD ts (tvType 0)}, while in line 4 \coqe{v} must have type \coqe{word}. The core problem lies in the intensional nature of Coq's type theory. Coq distinguishes between two notions of equality. \begin{description} \item[Definitionally equal ($\equiv$)] terms are identical after reduction. This notion of equality is part of Coq's core logic and requires no extra work to apply during type-checking. \item[Provably equal ($=$)] terms are defined by a binary inductive predicate \coqe{x = y}. This type encodes an explicit proof (in Coq's logic) that \coqe{x} and \coqe{y} are equal. To use this type of equality, we must perform a cast using the proof. \end{description} For a concrete example, we return to the problem above. From the meaning of $\models$ we can prove \coqe{tvarD ts (tvType 0) = word}, but the two are not definitionally equal under Coq's reduction rules, since \coqe{ts} is a variable and not a concrete environment; the reduction to determine equality gets stuck examining the structure of \coqe{ts}. We can solve this problem using the following cast: \begin{coq} Theorem prover2_sound : forall ts fs vars uvars e v (pf : C2 $\models$ ts), FC2 $\models$ fs -> prover_2 ts pf e = true -> exprD ts fs vars uvars (tvType 0) e = Some v -> (cast v (GetConsistent pf 0)) mod 4 = 0. \end{coq} where \coqe{GetConsistent} takes the consistency proof and the index and returns a proof of \coqe{tvarD ts (tvType 0) = word} (i.e. by looking up the index in the constraint). The cast is present for a reason: removing it produces an ill-typed term. Therefore, it would be unsound to include a reduction rule that removes useful casts. The problem is that proving triviality of casts (i.e., that they convert between \emph{definitionally} equal terms) is extra work with no counterpart in pencil-and-paper reasoning. 
A proof step like this one must be preceded by another proof step that rearranges the proof context to a form that is \emph{well-typed both before and after the cast is removed}, which can be surprisingly subtle and case-specific. Furthermore, justifying this step (``removing a cast between definitionally equal terms has no effect'') is not possible in Coq's core logic without appealing to axioms. \vspace{0.45cm} \paragraph{The Computational Formulation} \label{sec:computational} While Coq's reduction mechanism does not handle the above formulation well, we can give an alternative formulation that enjoys better computational properties. In this formulation we achieve constrained quantification over environments not by starting from an arbitrary environment and asserting a constraint over it, but rather by \emph{starting with an arbitrary environment and performing a computation on it to make it constraint-compliant by construction}. The heart of the technique is a recursive function \coqe{applyC}: a call \coqe{applyC c e} ``instruments'' the environment \coqe{e} so that it satisfies the constraint \coqe{c}. \begin{coq} Fixpoint applyC (c: constraint T) (e: list T) : list T := match c with | nil => e | None :: c' => hd d e :: applyC c' (tl e) | Some v :: c' => v :: applyC c' (tl e) end. \end{coq} (Here \coqe{hd} is the list head function, applied to a fixed default element \coqe{d}.) Reformulating the previous theorem leads to the following: \begin{coq} Definition prover_2 : forall ts, let ts' := applyC C2 ts in expr ts' -> bool. Theorem prover2_sound : forall ts fs vars uvars e v, let ts' := applyC C2 ts in let fs' := applyC FC2 fs in exprD ts' fs' vars uvars (tvType 0) e = Some v -> prover_2 ts' e = true -> v mod 4 = 0. \end{coq} Now, \coqe{tvarD (applyC C2 ts) (tvType 0)} is definitionally equal to \coqe{word} since the environment has been reformed by \coqe{applyC} so that it is manifestly a series of ``cons'' operations, containing the proper type in the proper position. In particular, reduction tells us \coqe{applyC C2 ts $\equiv$ word :: tl ts}. In general, we may now extract any constant index occurring in the constraint from the updated environment, without any need for dependent casts. In fact, this formulation gives us much more: it enables \emph{computational} composition. When two constraint environments are compatible, i.e. they do not specify different values for any index, applying \coqe{applyC} to them commutes \emph{definitionally}. $$ \mathsf{applyC}\, C1\, (\mathsf{applyC}\, C2\, e) \equiv \mathsf{applyC}\, C2\, (\mathsf{applyC}\, C1\, e) $$ This feature makes composition of two provers trivial. If we wish to compose two provers, say \coqe{p1} and \coqe{p2}, with different, but compatible, environments \coqe{TC1} and \coqe{TC2}, then we can pre-compose each function by applying the other's type constraint to produce two provers on the same environment. Concretely: \begin{coq} (fun ts => p1 (applyC TC2 ts)) : forall ts, expr (applyC TC1 (applyC TC2 ts)) -> bool (fun ts => p2 (applyC TC1 ts)) : forall ts, expr (applyC TC2 (applyC TC1 ts)) -> bool \end{coq} Since these types are definitionally equal, we can treat them identically, applying both functions to the exact same term. That is, any composition operation can be written free of both \emph{explicit proofs} and \emph{explicit casts}. Using this representation, we can package together reusable, self-contained verification hints using Coq's dependent records.
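Before giving that record, here is a small sanity check of the commutation claim above (a sketch only: we assume \coqe{constraint T} unfolds to \coqe{list (option T)}, reuse \coqe{C1} and \coqe{C2} from the constraint formulation, and take \coqe{d} to be any fixed default type). Coq accepts the following by computation alone, with no rewriting and no casts:

\begin{coq}
Goal forall ts : list Type,
  applyC C1 (applyC C2 ts) = applyC C2 (applyC C1 ts).
Proof.
  (* both sides reduce to word :: list word :: tl (tl ts) *)
  intros; reflexivity.
Qed.
\end{coq}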
\begin{coq} Record HintDatabase := { Types : constraint type ; Funcs : forall ts, constraint (signature (applyC Types ts)) ; Preds : forall ts, constraint (psignature (applyC Types ts)) ; Hints : forall ts, HintsT (applyC Types ts) ; Hints_correct : forall ts fs ps, HintsT_correct (Hints ts) (applyC (Funcs ts) fs) (applyC (Preds ts) ps) }. \end{coq} The first three fields express the constraints on the type, function, and heap predicate environments. The fourth field contains a record that packages together the three different types of hints that our system uses (discussed in Section~\ref{sec:procedures}). Note that the \emph{type} of \coqe{Hints} is dependent on the \emph{value} of the \coqe{Types} field. The final field encapsulates the soundness proofs for the hints. When applying our reflective procedures, our tactics use the first three fields to seed the environments used to reify terms, the hints to compute the results, and the soundness proof to justify reasoning with the hints. Because we use a shallow embedding of constraints, we do not need to write a proof that two packages compose. Unfortunately, this means that we cannot write a Gallina function that combines two hint databases. Rather, we use Ltac (a dynamically typed language) to construct the term and turn the type checker loose on it. If the result type-checks, then the environments are consistent and the hints compose; otherwise, the programmer gets an error message about compatibility. \subsection{Interfacing with Unification Variables} \label{sec:unification} Traditional proofs in Coq are done in ``proof mode'' using tactics that manipulate a goal that is displayed to resemble a standard sequent calculus. Universally quantified variables are displayed in a ``proof context'' above a double line, while the goal is displayed below the line. In order to integrate well with existing tactic-based and interactive proof techniques, our reflective procedures must fit naturally into this view. \begin{figure} \begin{lstlisting}[gobble=6,language=coq] p,q,r : word $\mathrm{(a)}$ ?a : word ============================== p $\mapsto$ q * exists x, q $\mapsto$ x ===> p $\mapsto$ ?a * exists y, ?a $\mapsto$ y * exists z, r $\mapsto$ z \end{lstlisting} \vspace{-0.2cm} \begin{lstlisting}[gobble=6,language=coq] p,q,r : word $\mathrm{(b)}$ ?a : word ============================== forall x, exists z, ?a = q /\ (* unification equation *) (p $\mapsto$ q * q $\mapsto$ x ===> p $\mapsto$ q * q $\mapsto$ x * r $\mapsto$ z) \end{lstlisting} \vspace{-0.2cm} \begin{lstlisting}[gobble=6,language=coq] p,q,r : word $\mathrm{(c)}$ x : word (* from [forall x] *) ?b : word (* from [exists z] *) ============================== p $\mapsto$ q * q $\mapsto$ x ===> p $\mapsto$ q * q $\mapsto$ x * r $\mapsto$ ?b \end{lstlisting} \caption{Representation of variables as they pass through our verification procedures: (a) initial goal; (b) direct output of the unification procedure; (c) after simplification with Ltac} \label{fig:unification-variables} \end{figure} Figure~\ref{fig:unification-variables} demonstrates how our reflective procedures manipulate binders and unification variables. The implementation of these procedures is complicated by the manipulation of de Bruijn indices but is otherwise mostly standard. It is the phrasing of the soundness theorems that we focus on here.
We begin with an illustrative example, Figure \ref{fig:unification-variables}.(a), which shows a simple heap implication (denoted by $\Longrightarrow$) with three internal quantifiers and a unification variable (\coqe{?a}). Unlike in normal Coq output, we include unification variables explicitly in proof contexts, i.e. we can pick any term of type \coqe{word} for \coqe{?a} as long as it mentions only globals and variables that occur above it. Our procedures take as inputs goals, like the one in Figure \ref{fig:unification-variables}.(a), after they are reified as terms in the logic. For explanatory purposes, here we consider a simple procedure that only performs unification, attempting to learn the values of both bona fide Coq unification variables (e.g., \coqe{?a}) and variables that are quantified existentially in the conclusion of the implication (e.g., \coqe{y} and \coqe{z}). A key question is how the unification procedure, a pure function in the logic, can \emph{cause side effects} to resolve unification variables in the original Coq proof context. As in proof by reflection in general, our procedures may only announce results by replacing one logical formula with another that has been proven to imply the original. The result of the unification procedure is shown in Figure \ref{fig:unification-variables}.(b). Four different sorts of variables have been handled in four different ways. First, a variable \emph{existentially quantified on the lefthand side of the original implication} (e.g., \coqe{x} here) is returned via a top-level universal quantification. Second, a \emph{normal Coq unification variable} (e.g., \coqe{?a} here) is asserted to be equal to whatever replacement has been inferred for it. Finally, there are two cases for a variable \emph{existentially quantified on the righthand side of the implication}. Either no unification was found for it (e.g., \coqe{z} in this example), in which case it gets a top-level existential quantifier in the new goal; or some unification was found (e.g., \coqe{y} in this example) and the variable is simply removed by substituting for it everywhere it appears. A unification is represented as a map from unification variable to syntactic expression. As for our other syntactic representations, our soundness proofs ascribe a denotation to unifications, this time as a conjunction of provable equalities (\coqe{substD}). Using this denotation function in a premise, we can prove that \emph{syntactic} instantiation, by \coqe{instantiate}, preserves the \emph{semantic} meaning of terms. \begin{coq} Theorem substD_instantiate : forall funcs U G e t sub, substD funcs U G sub -> exprD funcs U G (instantiate sub e) t = exprD funcs U G e t. \end{coq} The final step occurs in Ltac. We simplify the goal by moving variables ``above the line'' into the proof context. All \coqe{forall} quantifiers lead to normal Coq variables (e.g., \coqe{x}), and all \coqe{exists} quantifiers lead to Coq unification variables (e.g., \coqe{?a}). More importantly, we remove the unification equations like \coqe{?a = q} by first performing the \emph{side effect} of setting \coqe{?a} equal to \coqe{q} and then proving the equation trivially by reflexivity. These side effects are possible in Ltac but not Gallina, and our strategy for generating initial output goals is designed to be very telegraphic in suggesting side effects. 
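A minimal sketch of this last Ltac step (the tactic name and exact goal patterns are illustrative, not the library's actual code) makes the intended side effects explicit: universal quantifiers become context variables, existential quantifiers become fresh unification variables, and each unification equation is discharged by \coqe{reflexivity}, which instantiates the corresponding Coq unification variable as a side effect.

\begin{coq}
Ltac simplify_output :=
  repeat match goal with
         | |- forall x, _ => intro     (* e.g. the [forall x] in Fig. 2(b) *)
         | |- exists z, _ => eexists   (* [exists z] becomes the evar ?b   *)
         | |- _ = _ /\ _  => split; [ reflexivity | ]  (* solves [?a = q]  *)
         end.
\end{coq}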
\section{Reflective Procedures} \label{sec:procedures} The techniques described in the previous sections enable extensible reflective verification with rich formulas. In this section, we show how to apply MirrorShard, the Coq library we have built with those techniques, to create reflective automation for the Bedrock~\cite{BedrockPLDI11} library, which supports program verification in separation logic. Our development leverages MirrorShard to build the two core verification components: symbolic execution and separation logic cancellation. The procedure is illustrated in Figure~\ref{fig:strategy} and described in the rest of the section. \begin{figure} \centering \scalebox{0.4}{\includegraphics{progression.pdf}} \caption{The high-level verification strategy applied to a simple program that increments a memory cell} \label{fig:strategy} \end{figure} \paragraph{Symbolic Execution} First, symbolic execution takes a precondition and a sequence of instructions and computes a post-condition. Symbolic execution starts by using pure facts in the context to refine abstract predicates. In the example, the procedure finds that the linked list predicate can be unfolded since the head pointer is not null. This information is encoded in our first type of hint: \begin{description} \item[Refinement Hints] are stylized Coq theorems that express predicated heap implications. These theorems are reified into Coq as inhabitants of a record type with fields for a list of universally quantified variables, a list of pure premises (\coqe{expr}s), and the expressions on each side of the separation logic implication. Due to their first-order nature, these hints can be constructed completely automatically (using Ltac programs) from actual Coq theorems that have a particular syntactic form. \end{description} Pure facts, such as the side conditions on refinement hints (e.g., ``pointer is not null''), are discharged by our second type of hint: \begin{description} \item[Base Theory Provers] are verified Coq functions taking the place of Coq tactics (since we cannot call Ltac from Gallina code), proving \coqe{expr}-encoded proof goals with arbitrary algorithms that can be coded and verified in Gallina. We have developed four provers: \begin{itemize} \item A \textbf{reflexivity prover}, which proves statements of the form $e = e$. \item An \textbf{assumption prover}, which maintains a list of known facts and attempts to find the goal as a syntactic match to one of these facts. \item A prover for reasoning about \textbf{linear arithmetic on width-32 bitvectors} to prove equalities and inequalities. This prover makes inferences by combining hypotheses representing expressions $e_1 = e_2 + k$ for constants $k$. (This last form of reasoning is especially applicable to common patterns of pointer arithmetic.) \item A prover oriented toward \textbf{array bounds checks}, which understands that array writes preserve length. \end{itemize} We support composing provers in a simple disjunctive style, i.e. a proposition is provable if either of two provers can prove it. \end{description} After predicate refinement, symbolic execution begins interpreting instructions. Total arithmetic and logical instructions are trivial to model by converting the transfer functions into their syntaxified forms. 
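As a rough, hypothetical sketch of what ``syntaxified'' means here (it is not MirrorShard's actual definition; function index 1 is assumed to denote word addition, as in the example environment of Section~\ref{sec:representation}), the symbolic transfer function for an addition simply rebuilds the sum as a reified expression over the operands' current symbolic values:

\begin{coq}
(* Hypothetical helper: the symbolic result of adding two symbolic words. *)
Definition sym_add (ts : list type) (a b : expr ts) : expr ts :=
  Func 1 [a; b].  (* denotes the sum of the denotations of a and b *)
\end{coq}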
Instructions that access memory (both reading and writing) require more care, but our use of separation logic enables an effective algorithm based on our third type of hint: \begin{description} \item[Memory Evaluators] are verified functions that reason about reads from and writes to heaps satisfying a separation logic assertion. This approach enables the symbolic evaluator to interpret memory operations in terms of many different data structure predicates, without the need to expose individual points-to assertions algebraically. In addition to a composition operator that combines two memory evaluators into one, we have implemented memory evaluators for 32-bit points-to, arrays (of both words and bytes), and local variable stack frames. \end{description} In the example, the memory evaluator uses the provers to inspect the separation logic formula and determines that the value read from \coqe{p} is \coqe{x} (because a subformula \coqe{p =*> x} appears) and the value written is therefore \coqe{x+1}, which is constructed syntactically when interpreting the addition. \paragraph{Cancellation} Cancellation proves that the strongest post-condition, computed by symbolic execution, implies the specification's post-condition. The algorithm begins with backward refinement, which uses the same type of refinement hints but refines in the conclusion of the heap implication~\footnote{Forward and backward refinement are both thin wrappers around a common, general, unification-based procedure for determining when quantified-equality hints apply to a goal.}. In the example, an analogous hint refines the \coqe{list} predicate, exposing the first cell. In the conclusion, existential variables are introduced as new unification variables that the rest of cancellation will attempt to instantiate. The core of the cancellation algorithm uses the cancellative properties of separation logic to prove the implication. Since cancellation leverages unification to resolve unification variables and does not backtrack, the order of considering predicates to cancel matters. We use a simple heuristic based on a lexicographic ordering of syntactic expressions where unification variables have the highest values and are thus unified last. This ordering, for example, will attempt to unify \coqe{p $\mapsto$ ?a} with \coqe{p $\mapsto$ v} before it tries to unify it with \coqe{?b $\mapsto$ ?c}. The Bedrock-specific instantiation of MirrorShard, including all of the examples, is available online: \begin{center} \url{https://github.com/gmalecha/bedrock-mirror-shard} \end{center} \section{Lessons Learned: Engineering Reflective Proof Procedures in Coq} \label{sec:lessons} In this section we highlight a variety of design choices that arise in the development of reflective decision procedures. While the details are Coq-specific, the ideas generalize and shed light on interesting design decisions both for users and developers of proof assistants. \subsection{Term representation} General dependent types provide many representation alternatives that are not available in most programming languages. The first implementation choice is whether the type of terms should guarantee that every term is well-typed. Such a representation allows us to make the denotation function (\coqe{exprD}) total, simplifying theorem statements and avoiding the need to prove that functions preserve the well-typedness of terms. The cost of this convenience is indexing terms by additional environments. 
In our setting, we would need to parameterize \coqe{expr} by the environments of functions, variables, and unification variables in addition to the expression type, leading to a type like: \begin{coq} Inductive dexpr (ts : list type) (fs : list signature) (uvars vars : list tvar) : tvar -> Type := ... \end{coq} This representation moves the hard work from the soundness proofs to the computational operations that manipulate terms. For example, with dependent types, we weaken terms by structural recursion: \begin{coq} Fixpoint dexpr_weaken ts fs u u' g g' t (e : dexpr ts fs u g t) : dexpr ts fs (u ++ u') (g ++ g') t := ... \end{coq} Only the variable cases are interesting, essentially needing to justify that valid references into \coqe{u} (respectively \coqe{g}) are the same as references into \coqe{u ++ u'} (respectively \coqe{g ++ g'}). Using a non-dependent representation, this function becomes a no-op since environments are extended at the end. Our proofs appeal to the following lemma that relates the meanings of expressions to their meanings under weakened environments. \begin{coq} Theorem exprD_weaken : forall ts fs u g t e v, exprD ts fs u g e t = Some v -> exprD ts fs (u ++ u') (g ++ g') e t = Some v. \end{coq} \begin{figure} \centering \scalebox{0.6}{\includegraphics{term-rep.pdf}} \caption{Verification times for two term encodings} \label{fig:perf} \end{figure} This ease of implementation also translates to performance improvements for procedures like cancellation, as shown in Figure~\ref{fig:perf}. The first bar uses the dependent representation while the second uses the minimally dependent representation described in Section~\ref{sec:representation}. We believe that the large difference between the bars is due to the need to evaluate proof terms to reduce dependent casts, though Coq provides no profiling tools for verifying this hypothesis. \subsection{Efficient Computation} Analogous to the choice of term representation is the choice of function implementation. We may implement functions with dependent types, making their properties manifest directly; or we may choose simple types and then prove properties after the fact. The most common manifestation of this choice is for equality-testing functions. In the dependent style, it is common to write the type of an equality function as: \coqe!forall x y, {x = y} + {x <> y}! (which is a particular case of dependent sum type carrying proofs). The non-dependent version (\coqe{eqb}) returns a Boolean, and provides a separate proof: \coqe{forall x y, eqb x y = true $\leftrightarrow$ x = y}. From a computational perspective, the latter is much more efficient under call-by-value reduction, since in the former the proofs that are constructed must be reduced completely. Changing our algorithms to use the non-dependent version of equality checks resulted in a 40\% reduction in the proof generation and checking time. (Note also that in the context of code extraction from Coq, the difference between the solutions disappears.) While this fact may be well-known for seasoned implementers of reflective decision procedures, there is little guidance from Coq's standard library toward this choice. In adapting the proofs, we have found it easy to recover the proof behavior of the dependent equality using Coq's dependent type classes~\cite{sozeau08typeclasses}. Traditional type classes carry additional information about \textit{types}, e.g. an equality decider. Dependent types enable type classes to carry additional information about \textit{values}. 
For example, a type class indexed by a function can carry a proof about that function: \begin{coq} Class EqOk (T : Type) (f : T -> T -> bool) : Type := { eq_ok : forall x y, f x y = true $\leftrightarrow$ x = y }. \end{coq} Proofs and automation can now reference the symbol \coqe{eq_ok} and Coq's type class resolution will attempt to find an appropriate instance. This approach is similar to the development in the math classes project~\cite{math-classes} and is the core principle underlying recent work on ExtLib~\footnote{\url{https://github.com/coq-ext-lib/coq-ext-lib}}. \subsection{Reification} Most sources gloss over the (mostly) uninteresting problem of reifying terms. While not particularly glamorous, the reification process can dramatically affect verification time. Our first version of reification for pure expressions, separation logic formulas, and object-language commands used Ltac. However, an initial performance evaluation showed that reification was a major performance bottleneck (taking almost 50\% of total verification time). First, some of this overhead can be attributed to Ltac itself: the language is dynamically typed and built for writing backtracking tactics for proof search, rather than building actual terms. Second, we had to circumvent the lack of support for manipulating open terms (i.e., terms with free variables) caused by embedded binders. The trick requires copious use of second-order pattern matching, which is considerably more time-intensive and results in code that is more difficult to read and maintain. Third, we had to split reification into two passes: first, to gather the type environment, and then to build the reified terms indexed by this environment. To address the performance problem, we implemented a second version of our reification as an OCaml plugin. This alleviates all of the previously mentioned problems: we use OCaml data structures (which are more efficient than their Ltac counterparts) to build environments and terms; we manipulate open terms, which makes reification more direct; we make a single pass on the Coq term to build the reification environment and the reified terms, rather than two in Ltac. This plugin does not need to be trusted more than any Ltac tactic since it constructs terms that are fed into the Coq kernel. Using this plugin dramatically reduced the time spent on reification. Whereas Ltac reification had taken approximately 32 minutes for a test suite of 10 examples, OCaml reification takes approximately 22 seconds (an 88X speedup). Processing of a single file will typically perform hundreds of reifications, so these figures are quite reasonable. The reason Ltac reification is so slow goes beyond simple interpretive overhead. The explanation is a Coq feature (misfeature?) dealing with building terms in Ltac. To illustrate, consider two different Ltac expressions that build a natural number by repeatedly applying the successor function \coqe{S} to the zero constant \coqe{O}. First, there is the simple version \coqe{S (S (...O...))}, which just builds the term directly. For reasonably sized numbers, this expression evaluates instantly. Then, there is the expanded form \coqe{let n := O in let n := (S n) in ...(S n)...}, which binds an Ltac-level variable for each intermediate term. This expression evaluates \emph{in time quadratic in the term size}. The underlying problem is that \emph{Coq re-typechecks all parts of each new term that is constructed in Ltac}.
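For instance, the natural recursive way to build such a term in Ltac (a sketch; the tactic name is ours) rebinds an Ltac variable at every step and therefore exhibits exactly this quadratic re-typechecking behavior:

\begin{coq}
Ltac build_nat n :=
  match n with
  | O => constr:(O)
  | S ?m => let r := build_nat m in
            constr:(S r)  (* the whole of r is re-typechecked here *)
  end.
\end{coq}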
This is not an unreasonable-sounding requirement for a dynamically typed language. Ltac-bound variables appearing in Gallina terms have their contents substituted in explicitly. Reification naturally builds terms step-by-step through a recursive process. The overhead of repeated typechecking can become overwhelming in such cases. In contrast, all Coq-level type checking in OCaml is done explicitly. This allows dramatic speed-ups by only typechecking the final term once. \subsection{Engineering Proof Terms} \label{sec:eng-terms} While Coq proofs are often thought of as tactic scripts, the final product of Coq proving is \emph{proof terms} in a core type theory. Tactics merely provide a more convenient way to construct these terms (which often make heavy use of dependency). A proof that looks straightforward at the tactic level may produce proof terms that take substantially longer to check than the proof script took to run. The central issue in term engineering is the statement of the correctness theorem. Two broad strategies are used commonly. The first, more traditional approach uses an equality proof to separate the computation from its meaning: \begin{coq} Theorem cancel_correct_with_eq : forall ts fs ps uvars pures l r, AllProvable ts fs ps uvars pures -> (* premises (1) *) forall l' r', cancel l r = (l',r') -> (* computation (2) *) sexprD uvars l' ===> sexprD uvars r' -> (* denote (3) *) sexprD uvars l ===> sexprD uvars r. (* denote (4) *) \end{coq} The benefit, and curse, of this formulation lies in the quantification over \coqe{l'} and \coqe{r'}, which together act as the result of the cancellation procedure. To apply the above theorem, our proof must directly record the values of \coqe{l'} and \coqe{r'}. Thus, when Coq checks the proof it knows exactly what type line (3) has, regardless of the unification procedure used to justify the equation in line (2). Representing and type checking these embedded terms in the final proof, however, can be expensive if they are large. While our representation does not necessarily look large on the surface, the dependency of terms on the type environment requires that the type environment be repeated syntactically at every constructor. Even paring down the type environment to contain only the type (eliding the equality function and its proof) does not shrink the term enough. This is a limitation of Coq's type checker, but not a theoretical one. Adapting Coq to use bi-directional type checking~\cite{pierce00bidirectional} could solve this problem. To circumvent the embedded term problem, we replace the quantifier with a \coqe{let} declaration scoped over the relevant premises. \begin{coq} Theorem cancel_correct : forall ts fs ps uvars pures l r, AllProvable ts fs ps uvars pures -> (* premises (1) *) (let (l',r') := cancel l r in (* computation (2) *) sexprD uvars l' ===> sexprD uvars r') -> (* denote (3) *) sexprD uvars l ===> sexprD uvars r. (* denote (4) *) \end{coq} The drawback to this approach is that na\"ive uses of this theorem do not record the result of the reduction. During proof checking, Coq will lazily evaluate this term, leaving large, partially reduced terms unevaluated at key places, requiring them to be reduced multiple times during subsequent proof checking. While Coq provides only limited methods for specifying reduction strategies in proof terms, in practice knowing the result is enough to make proof checking reasonably efficient. In general, we can save the result by performing cut elimination with an explicit type annotation. 
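To make the pattern concrete on a toy goal (the definitions below are ours and only sketch our reading of the technique; \coqe{big} stands in for an expensive computation), a beta-redex whose binder is annotated with the already-reduced statement records that result syntactically in the proof term:
\begin{coq}
Definition big (n : nat) : nat := Nat.pow 2 n.
Lemma consume : big 10 = 1024 -> True.
Proof. intros _; exact I. Qed.
Goal True.
Proof.
  (* the cut: the annotation [1024 = 1024] stores the reduced form *)
  exact ((fun pf : 1024 = 1024 => consume pf) eq_refl).
Qed.
\end{coq}
Because the reduced value \coqe{1024} appears explicitly in the term, the conversion of \coqe{big 10} is forced once, at the application \coqe{consume pf}.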
We revisit this problem in Section~\ref{sec:eng-reduction}, discussing our solution for making this phrasing efficient. \subsection{Engineering Reduction} \label{sec:eng-reduction} The workhorse of proof by reflection is the computation step, and fast evaluation of terms is essential to making reflective proofs efficient. Unfortunately, as we demonstrated in Section~\ref{sec:eng-terms}, standard formulations of reflective theorems do not work well in our setting. Our formulation requires fast \emph{delimited} evaluation, i.e. evaluation keeping certain symbols opaque. Since Coq's logic admits reduction under binders, evaluation strategies are able to handle opaque terms. However, to maintain abstraction by not unfolding user-defined logical symbols, we need the ability to engineer the handling of particular identifiers. Throughout our development we evaluated several reduction mechanisms, two of which are noteworthy. \paragraph{Delimited cbv} Coq's full-beta, call-by-value reduction mechanism is reasonably fast and supports the abstraction that we need to avoid the reduction of certain terms. As an illustration, consider the goal after an application of \coqe{cancel_correct}. \begin{coq} let (l',r') := cancel funcs l r in sexprD funcs uvars l' ===> sexprD funcs uvars r' \end{coq} We would like to reduce the above proposition to a heap implication using the standard separation logic connectives, e.g. $*$ and \coqe{emp}. Since we know the definition of \coqe{cancel}, we can customize Coq's \coqe{cbv} tactic to leave certain constants, like $*$, opaque by specifying a whitelist of identifiers that should be reduced. Unfortunately, specifying this list modularly is not possible. The customization available to \coqe{cbv} requires an explicit whitelist (or blacklist) of identifiers, with no facility for supporting dynamically constructed lists or using wildcards to include all values from a module. The whitelist required to reduce the above goal contains roughly 450 symbols, is sensitive to any refactoring (including adding additional provers), and is very difficult to debug. Missing symbols cause evaluation to get stuck, producing enormous, partially evaluated terms that can chew through 8~GB of RAM in under a minute. Nevertheless, once the whitelist is complete, evaluation is fast. We post-process the resulting term by \emph{folding} named definitions that may have occurred in the reflective code and in the user function environment. That is, we substitute an identifier for its associated definition, to simplify the term. It is this final point that makes a blacklist unattractive. Though the list would be considerably shorter, we would not know which terms to refold or how to refold them at the end of the reduction. \paragraph{vm\_compute} \coqe{vm_compute}~\cite{VmComputeICFP02} is an even faster Coq reduction mechanism based on compiling Gallina terms to the OCaml virtual machine, executing them there, and translating the results back to Gallina. The price for this speed is reduced flexibility in two ways: \begin{enumerate} \item Neither whitelists nor blacklists are supported. All identifiers are unfolded if definitions for them exist. \item \coqe{vm_compute} fails if it encounters Coq unification variables. \end{enumerate} On the surface, both limitations appear to be show-stoppers. Without a whitelist (or blacklist), separation logic abstractions (in addition to simple functions like \coqe{plus}) will be torn apart, revealing symbolic memories and 32-bit words.
Further, the goals fed to our tactics routinely contain several unification variables introduced by program-specific Ltac code. Our solution to these problems relies on building an anonymous function that explicitly abstracts over terms that should not be reduced. This allows us to reduce only in the anonymous function and leave the dangerous subterms alone. Consider the following simplified example, where we want to reduce the left-hand side to the right-hand side: $$ 2 * 9 \Longrightarrow 9 + 9 + 0 $$ Na\"ively using \coqe{vm_compute} produces $18$. It is easy to abstract $9$, but $+$ does not occur syntactically in the term. To expose it, we use a special form of $*$ that takes $+$ as an argument: \begin{coq} Fixpoint mult' (plus' : nat -> nat -> nat) (n m : nat) : nat := match n with | 0 => 0 | S n' => plus' m (mult' plus' n' m) end. \end{coq} Using this definition, we can engineer the following reduction: $$ (\mathsf{fun}\, p\, x \Rightarrow mult'\, p\, 2\, x) \overset{vm}{\Longrightarrow} (\mathsf{fun}\, p\,x \Rightarrow p\, x\, (p\, x\, 0)) $$ which differs from our target reduction by a single, cheap $\beta$-reduction when applied to $+$ and 9. This technique works to make many terms opaque while still avoiding the problems of embedding the intermediate syntactic representation in the proof term (see Section~\ref{sec:eng-terms}); however, there are several limitations. First, we cannot use it to limit reduction in types since abstracting by a type will often produce an ill-typed term. Second, the abstractions are manifest in the resulting proof term, making it larger than it would be using delimited \coqe{cbv}. Finally, producing the abstraction using Ltac can be expensive since each abstraction must perform a linear walk over the term. Because we need to abstract all terms in the function and separation predicate environments, it is not uncommon to be blacklisting 30 or more symbols, each of which requires a linear pass over the term. To improve efficiency, we packaged this functionality together with \coqe{vm_compute} into a tactic called \coqe{evm_compute} in a Coq plugin available online~\footnote{\url{https://github.com/braibant/evm_compute}}. Unlike \coqe{cbv}, our tactic supports dynamic blacklists by accepting Coq lists of identifiers. \paragraph{Comparison} \begin{figure} \centering \scalebox{0.6}{\includegraphics{reduce-scale.pdf}} \caption{Performance measurements for different reduction strategies. Number of conjuncts proxies complexity.} \label{fig:perf-reduce} \end{figure} To make the performance difference concrete, Figure~\ref{fig:perf-reduce} shows the reduction and proof checking times for delimited \coqe{cbv}, \coqe{evm_compute}, and \coqe{vm_compute}. We use the number of conjuncts as a proxy for the amount of computation since the cancellation algorithm is O($n^2$) in this case. First, note that \coqe{cbv} is considerably slower than the virtual machine-based strategies for large problems. The further slow-down during checking \coqe{cbv} proofs is due to the customized reduction strategy not being recorded in the proof term. This causes the proof checker to fall back on lazy evaluation, which is considerably slower. The virtual-machine based strategies are considerably faster with much better scaling properties. The overhead of blacklisting is roughly constant, becoming negligible for problems with more than 64 conjuncts. This behavior justifies the efficiency of the lightly dependent design that MirrorShard advocates.
However, better facilities for customizing reduction would still be beneficial. One promising idea is to use a delimiting function such as: \begin{coq} Definition block (T : Type) (v : T) : T := v. \end{coq} While such a term would have no effect on the logical meaning of a statement, certain reduction strategies could treat occurrences of \coqe{block x} opaquely, not unfolding \coqe{x}. This would enable blocking reduction inside types and avoid the need to write functions like \coqe{mult'} in Section~\ref{sec:eng-reduction} that abstract their dependencies to make them visible at the top level. Reduction strategies like \coqe{vm_compute} could then be parameterized by a set of these blocking functions. \section{Evaluation} \label{sec:evaluation} In this section we evaluate reflective proof techniques, comparing them to the standard Ltac-style verify-and-check approach to mechanized verification. We begin with a brief discussion of the automation level of our verification framework, illustrated by our test suite, before focusing on two grounds for comparison with Ltac-based verification methodologies: performance and debuggability. \paragraph{Usability \& Automation} \begin{figure} \begin{tabular}{l|r|r|r|r|r} File & Program & Invar. & Tactics & Other & Overhead \\ \hline LinkedList & 42 & 26 & 27 & 31 & 2.0 \\ Malloc & 43 & 16 & 112 & 94 & 5.2 \\ ListSet & 50 & 31 & 23 & 46 & 2.0 \\ TreeSet & 108 & 40 & 25 & 45 & 1.0 \\ Queue & 53 & 22 & 80 & 93 & 3.7 \\ Memoize & 26 & 13 & 56 & 50 & 4.6 \\ \end{tabular} \caption{Case study verifications, with data on annotation burden, in lines of code} \label{fig:annotations} \end{figure} In addition to the example excerpted in Figure~\ref{treeset}, we have carried out a number of other library module verifications, to validate the usefulness and extensibility of MirrorShard. Figure~\ref{fig:annotations} shows some statistics of our six largest case studies. In order, the columns of Figure \ref{fig:annotations} count the executable part of the module being verified, the function specifications and invariants asserted in code, the Ltac tactic proof scripts (including commands to register hints), all the remaining lines, and finally the ratio of verification lines to program lines. The lines that we account for under ``Other'' are almost all definitions of data structure representation predicates and statements of theorems about them. Our case studies exercise reasoning about a variety of user-defined abstract predicates. With the exception of a small set of obligations about words (mostly pertaining to memory access), the correctness side conditions (such as theorems about lists and sets) are verified by Ltac proof search. Our case studies are: LinkedList, consisting of the classic functions is-empty, length, reverse, and concatenate (the latter two performed in-place with mutation); Malloc, a na\"ive memory allocator, based on an unsorted free list with no coalescing, used by all the later case studies; ListSet and TreeSet, implementations of a common finite set interface specified with mathematical sets, respectively using unsorted lists and binary search trees; Queue, a standard FIFO queue specified mathematically using bags; and Memoize, a higher-order function that memoizes Bedrock code that implements a mathematical function. The last of these requires interesting interplay between our automation and custom Ltac code to handle higher-order proof obligations related to first-class code pointers.
The proof overhead is slightly lower than with the case studies used for the old fully Ltac-based Bedrock~\cite{BedrockPLDI11}. The decrease arises mostly from our modularization of hint databases. Our ability to verify the same examples demonstrates that we have achieved a similar level of automation and integration with Ltac. Our procedures have also been used in a larger case study~\cite{MtisSubmitted} that has built a verified cooperative threading library and then verified a Web server running on top of the library. The thread library includes about 400 lines of implementation code and 3000 additional lines for its verification, while the Web server has 200 lines of implementation and 500 more for the proof, which establishes that representation invariants are maintained for key data structures. \paragraph{Performance} Beyond expressive power, a crucial benchmark for verification tools is performance. Long-running tactics (or tools) cut the programmer out of the loop, making iterative development difficult. \begin{figure} \centering \scalebox{0.54}{\includegraphics{refinement-cancellation.pdf}} \caption{Performance comparison to non-reflective procedures} \label{fig:performance} \end{figure} Figure~\ref{fig:performance} uses a microbenchmark to compare the performance of our reflective procedures based on MirrorShard to those Chlipala developed for his initial version of Bedrock~\cite{BedrockPLDI11}. The background for this task is an abstract predicate $\mathsf{sll}$ for singly linked lists, along with two \emph{theorems} that we use as refinement hints: {\small $$\lceil p = 0 \rceil \Longrightarrow \mathsf{sll}([], p)$$ $$\lceil p \neq 0 \rceil * \exists p'. \; p \mapsto x, p' * \mathsf{sll}(\ell, p') \Longrightarrow \mathsf{sll}(x :: \ell, p)$$ } From these theorems, we can derive variants for concrete list lengths. For readability, we leave out side conditions on nullness or non-nullness of pointer variables, which appear in our actual benchmark theorem statements. \begin{eqnarray*} \mathsf{emp} &\Longrightarrow& \mathsf{sll}([], p_0) \\ p_0 \mapsto x_0, p_1 &\Longrightarrow& \mathsf{sll}(x_0 :: [], p_0) \\ p_0 \mapsto x_0, p_1 * p_1 \mapsto x_1, p_2 &\Longrightarrow& \mathsf{sll}(x_0 :: x_1 :: [], p_0) \end{eqnarray*} \noindent ...and so on, generalizing to an arbitrary number of list cells. If the length of the list is $n$, solving this problem requires $n+1$ refinements, with $n$ refinements via the theorem for non-empty lists and the final refinement using the empty-list theorem. In the process, we introduce $n$ unification variables and $n$ pure facts (that none of the intermediate pointers are equal to 0). Proving this family of theorems using the Ltac automation from the old Bedrock system~\cite{BedrockPLDI11} is painfully slow both to find a proof (Ltac) and check it (Ltac-Qed). Our experiments time out for a list of length 32, while our new reflective automation (Refl) finishes in under a second. We also see that the reflective tactic spends only slightly longer on proof search than checking, while with the old Ltac approach we see proof search running for at least 10 times longer. It is now faster to \emph{find} proofs than it had been to \emph{check} them. This straightforward result is the ``good news'' arising from our experiments. We achieve asymptotically better performance scaling than the Ltac-based alternative, and the constants are low enough that the performance gap becomes clear even for relatively small microbenchmarks.
The ``bad news'' arising from our experiments is that we see no clear change in overall performance for our full case studies like in Figure \ref{fig:annotations}. Our experimental set-up is quite conservative, since our new Bedrock system involves a number of complexities not found in the original. For instance, we added support for higher-order quantification in assertions; we switched the machine word representation from natural numbers to size-32 bitvectors; and we introduced the possibility for programs to \emph{crash} by accessing invalid memory addresses, creating many new crash-safety proof obligations for each program. Seen from this perspective, one might consider it a very promising sign that we hold overall verification performance at approximately the same level. It is probably also true that programs making good use of data structure encapsulation will tend to feature relatively small assertions that do not provide much opportunity to show off the asymptotic scaling of proof procedures. \begin{figure} \center \scalebox{0.4}{\includegraphics{process.pdf}} \vspace{0.2cm} \scalebox{0.63}{\includegraphics{phase-breakdown.pdf}} \caption{Verification process and the breakdown of verification time} \label{fig:process} \end{figure} Figure~\ref{fig:process} shows how our reflective proofs fit into the overall verification. The flexibility afforded to us by this method is, in some sense, its downfall. Two-thirds of verification time is spent in Ltac, and pushing more reasoning into Gallina procedures is likely to reduce verification time drastically. We expect that our general techniques to support quantifiers and integrate pure provers should streamline further development of similar procedures. As we experiment with more programs to verify, we expect both to improve the performance of our pure provers, by introducing more efficient Gallina data structures; and to add new procedures to discharge obligations in new mathematical theories. \paragraph{Debuggability} A crucial benefit of reflective proofs over their Ltac counterparts beyond performance is the ability to reason about the correctness of the proof-generating procedure. Ltac programs have complicated backtracking semantics that can make them difficult to write and even more difficult to debug. For example, the backtracking severely complicated debugging our Ltac-based reification code, since a typo in a single case would cause an exponential backtrack through the algorithm. In addition, debugging tools are difficult, and tactics that compute terms must be hand-coded in continuation-passing style to get reasonable debugging support via \coqe{idtac}, Ltac's equivalent of \coqe{printf} debugging. On the other hand, even with a minimally dependent term representation, coding in Gallina enables us to use Coq's type checker to get shallow ``sanity'' properties. Our soundness theorems allow us to prove the deeper properties that we are relying on. During development we found ourselves frequently fixing bugs related to de Bruijn indices and binders, up \emph{until} the point when the proofs about the components were completed. The proving process contributed considerably to the development process, pointing out bugs that our initial test cases did not cover. \section{Related Work} \label{sec:related-work} Our work is part of a recent trend to improve the automation available in proof assistants, which have traditionally supported only very manual proof styles. Researchers have proposed several alternative approaches. 
Proof by reflection~\cite{ReflectionTACS97} is a well-established technique in the communities of Coq and other closely related proof assistants. Gr\'egoire and Mahboubi built a reflective tactic to simplify terms using the operators of any \emph{ring} algebraic structure~\cite{RingTPHOLs05}, and Braibant and Pous built a reflective implementation of rewriting modulo associativity and commutativity of user-specified operators~\cite{AcCPP11}. These past projects consider self-contained, well-defined problems in the style of classical decision problems. In contrast, our work considers \emph{open-ended, extensible} procedures more along the lines of those commonly used for automated program verification. Such an expansion of scope raises the new issues that we have described, like supporting quantifiers, an interface with a proof assistant's unification engine, and modular combination of verified decision procedures over different theories. The last of these has been considered by Lescuyer~\cite{lescuyer11these}, who developed an SMT implementation in Coq. The theory composition that he achieves is more integrated than the simple composite provers that we implemented, but the approach does not share the computational composition that enables us to achieve lightweight extension. The Ssreflect Coq library~\cite{SsreflectJFR10} employs a \emph{small-scale reflection} style where many predicates are coded as functional programs returning Booleans, sidestepping concerns of decidability. The approach of Gonthier et al.~\cite{AdHocICFP11} uses Coq's canonical structures mechanism as a clever means of building proof-generating procedures, retaining most of the usual relative advantages and disadvantages of proof generation versus verification of proof procedures like ours. Other recent work has proposed Mtac~\cite{mtac}, a new style of proof automation in Coq. Like reflective proofs, Mtac proof procedures are implemented in Gallina; however, in order to provide the types of operations necessary for making tactic development simple, these ``tactics'' have a monadic type. The monad supports non-termination, failure, and syntactic matching of patterns against terms. The last of these features makes it impossible to reason about Mtac procedures inside of Coq, since syntactic matching breaks the substitution property of equality, i.e. $x = y \rightarrow f x = f y$. Execution of these programs is done at type-elaboration time through a special \texttt{run} expression that exists outside of Gallina. Several projects~\cite{CvcPDPAR05,Smt3CPP11} have studied translation of SMT-solver proof traces into forms acceptable to proof assistants, and some of these projects~\cite{Smt1CPP11,Smt2CPP11} are based on reflective Coq tactics. In the latter case, one verifies a \emph{proof checker} rather than the prover itself. Compared to our approach, there are non-obvious performance trade-offs. Verifying the prover removes the need for potentially expensive proof generation and checking, but the proof-generating approach is compatible with using efficient low-level languages and optimizing compilers to implement the provers. Verifying the prover helps avoid completeness bugs, where a tool may sometimes generate invalid proof traces; but proof checkers are generally easier to verify than provers.
Lescuyer and Conchon~\cite{SatFroCos09} built a reflective SAT solver tactic for Coq, and Nanevski et al.~\cite{HttPOPL10} and Oe et al.~\cite{VersatVMCAI12} have verified efficient low-level code for a part of an SMT solver and a full SAT solver, respectively. None of this past work supports modular extension with new \emph{provers} rather than just \emph{proof checkers}, and none supports a rich formula language including quantifiers and user-specified predicates with associated axioms that should be applied automatically. A few past projects have proved the correctness of non-extensible separation logic proof procedures. Marti and Affeldt~\cite{AffeldtCS08} verified a simplification of Smallfoot~\cite{SmallfootFMCO05} using Coq. Stewart et al. have done Coq verification of a Smallfoot-style verification tool VeriSmall~\cite{VeriSmallCPP11} that relies on a novel verified heap theorem prover VeriStar~\cite{VeriStarICFP12}. The prior work of this kind has considered none of functional correctness verification (as opposed to just memory safety), extension with abstract predicates, or higher-order programs or specifications. Many standalone tools do efficient, automated analysis of large low-level code bases for memory safety, using separation logic, outside of the context of proof assistants. Examples include Smallfoot~\cite{SmallfootFMCO05}, SpaceInvader~\cite{SpaceInvaderPOPL09}, and SLAyer~\cite{SLAyerCAV11}. Xisa~\cite{XisaPOPL08} bears a special relationship to our new work, as it is extensible with new predicate definitions in separation logic. Several other proof assistant libraries provide support for separation logic proofs, including the tactic libraries of Appel~\cite{AppelTactics} and McCreight~\cite{PtslTPHOLs09}, Holfoot~\cite{HolfootTPHOLs09}, Ynot~\cite{YnotICFP09}, and Charge!~\cite{ChargeITP12}. Some of the libraries in this latter category provide proof automation comparable to that of the standalone tools. Our work described in this paper is the first to verify such automation formally, rather than merely constructing it to output program-specific proofs. One disadvantage of all such approaches is greater performance overhead compared to standalone tools, though traditional proof techniques can be applied directly in places where sophisticated, custom reasoning is necessary. \section{Conclusion} \label{sec:conclusions} We have built a core reflective proof framework that supports user extensions via reflected lemmas and custom proof procedures that are packaged into reusable strategies. Our framework allows bi-directional communication with Coq's unification variables, supporting both the instantiation of existing unification variables and the construction of new unification variables. To justify the framework's applicability, we have instantiated it to reason about a combination of higher-order and separation logic including support for user-defined abstract predicates. In addition to the benefit of user extension for handling abstract predicates, our reflective tactics scale much better performance-wise than Bedrock's original tactic-based verification procedure~\cite{BedrockPLDI11}, while producing at least comparable performance for the realistic case studies that we have experimented with. We succeeded in isolating two large chunks of separation logic-based verification engines. Looking forward, it seems that the ripest areas for performance improvements are the non-reflective portions. 
Building a larger library of (reusable) base theory provers in our framework would reduce the number of goals passed back to Ltac and enable us to apply more refinements reflectively. It would also be interesting to see how the framework can be extended to capture fragments of higher-order logic that are common during verification. This would enable us, in simple cases, to avoid some of the ping-ponging which we believe is the source of much of our overhead. While full reflective verification would be ideal, the ability of our framework to integrate nicely with more manual proofs enables us to choose to invest time in automation only when there will be a comparable payoff, i.e. where similar obligations crop up repeatedly. Highly specialized reasoning can still be proved manually or semi-automatically. \bibliographystyle{abbrvnat}
\section{Introduction} \label{sec:introduction} Whilst the Standard Model (SM) seems to have survived in good health the first round of tests at the LHC, at least three different types of observations represent clear evidence for new physics. These are: neutrino oscillations, which require neutrino masses; the matter-antimatter asymmetry of the Universe, which remains quantitatively unexplained within the SM; and the existence of dark matter (DM), for which the SM has no candidate. To claim completeness, a particle physics model must account at least for the first two of these. As regards DM, as is well known, all the undisputed experimental evidence for its existence is so far related only to its gravitational effects. Thus, given that particle physics models are generally written down in the approximation of neglecting gravity, failing to explain DM is not necessarily a signal of incompleteness, and can conceivably be a consequence of the working approximation. One of the simplest extensions of the SM that can account for neutrino masses and naturally explain their tiny values is the {\sl standard} (type I) seesaw~\cite{Minkowski:1977sc,Yanagida:1979as,Glashow,GellMann:1980vs,Mohapatra:1980yp,Schechter:1980gr}: three singlet right-handed (RH) neutrinos with large Majorana masses are added to the SM particle spectrum, providing neutrino masses that, unlike the masses of all other fermions, are suppressed by the Majorana mass scale. Quite elegantly, the seesaw mechanism automatically embeds a solution to the baryon asymmetry problem by means of the leptogenesis mechanism~\cite{Fukugita:2002hu,Davidson:2008bu,Fong:2013wr}: in the early Universe, the out of equilibrium decays of the heavy RH neutrinos can dynamically produce a lepton asymmetry, which is partially converted into a baryon asymmetry due to fast sphaleron processes. Unfortunately, both the required large suppression of neutrino masses and the viability of leptogenesis hint at a very large Majorana mass scale, which puts direct tests of the seesaw via production of the heavy neutrinos out of the reach of foreseeable experiments. In particular, while the light neutrino masses could also be suppressed (admittedly in a less elegant way) by very small couplings to not-so-heavy singlet states, this also implies that any type of production process has vanishingly small rates. On the other hand, for a non-degenerate spectrum of heavy neutrinos successful leptogenesis necessarily requires a Majorana mass scale $M \gsim 10^9$~GeV~\cite{Davidson:2002qv}. Thus, while within the seesaw TeV scale neutrino mass generation remains an open possibility, TeV scale leptogenesis is not successful, and observable RH neutrino production is impossible. From the phenomenological point of view, the subset of models for neutrino masses that can satisfy simultaneously the three requirements of \begin{itemize} \item[(i)\ ] generating neutrino masses at the TeV scale, \item[(ii)\;] being testable at the LHC via direct production of new states, \item[(iii)\,] allowing for successful leptogenesis at temperatures $\mathcal O$(TeV), \end{itemize} can be considered of utmost interest. Unfortunately, the difficulties encountered in the seesaw model in satisfying these three requirements are rather generic in model building and, to our knowledge, this subset is almost empty\footnote{See, however, refs.~\cite{Canetti:2012vf,Drewes:2012ma}.}.
In this paper we describe a set of relatively simple variations of the type I seesaw extended by the addition of different types of scalars (one at a time) with the same quantum numbers as the SM fermions, and with masses of ${\mathcal O}$(TeV). The role of these new states is basically that of allowing the mechanism of neutrino mass generation to be decoupled from the mechanism governing leptogenesis and from the RH neutrino production processes. In our scenario, the requirement (i) is satisfied in the usual way by assuming sufficiently small Yukawa couplings for the RH neutrinos; (ii) can be fulfilled because the new scalars are gauge non-singlets. Their production is then possible via SM gauge interactions, and in turn they can bridge the production of RH neutrinos. Finally, sizeable CP asymmetries in the decays of RH neutrinos to the new scalars make it possible to satisfy (iii) with all masses at the TeV scale. \section{Generalities} \label{sec:generalities} The relevant new parameters appearing in the type I seesaw Lagrangian: \begin{equation} \label{eq:seesaw} - {\mathcal L}_{\rm seesaw} = \frac{1}{2}M_{i}\overline{N}_{i}N_{i}^{c}+ \lambda_{\alpha i}\overline{\ell}_{\alpha}N_{i} \, \epsilon H^*\, \end{equation} are the masses $M_i$ of the RH neutrinos $N_i$ (we assume three of them) and their Yukawa couplings $\lambda_{\alpha i}$ to the SM lepton doublets $\ell_\alpha$ and to the Higgs doublet $H$ ($\epsilon = i\tau_2$ is the $SU(2)$ antisymmetric tensor). Without loss of generality we have chosen the usual basis in which the RH neutrino mass matrix is diagonal with real and positive eigenvalues, and it is also understood that the matrix $\lambda_{\alpha i}$ corresponds to the basis in which the matrix of Yukawa couplings for the $SU(2)$ lepton singlets $e_\alpha$ is also diagonal $h_{\alpha\alpha} \overline{\ell}_{\alpha} e_\alpha H$. The matrix $\lambda$ can be expressed in terms of the heavy RH and light neutrino mass eigenvalues $M^D={\rm diag}(M_1,M_2,M_3)$ and $m^D_\nu={\rm diag}(m_{\nu_1},m_{\nu_2},m_{\nu_3})$ and of the neutrino mixing matrix $U_\nu$ as~\cite{Casas:2001sr} \begin{equation} \lambda=\frac{1}{v}\, U_\nu^\dagger \, \sqrt{m^D_\nu}\, R\, \sqrt{M^D}\,, \label{eq:yukawa_CI} \end{equation} where $v=\langle H\rangle$ is the Higgs vacuum expectation value (VEV) and $R$ is a complex orthogonal matrix satisfying $R^TR=RR^T=1$. Taking the light neutrino masses at a common scale $m_\nu \sim 0.1\,$eV, assuming a RH neutrino mass scale ${\mathcal O}$(1\,TeV) and given that the modulus of the entries in $U_\nu$ is bounded to be $\leq 1$, we can write the order of magnitude relation: \begin{equation} |\lambda| \sim 10^{-6} \, \sqrt{\frac{M_N}{1 {\rm TeV}}} \sqrt{\frac{m_\nu}{0.1 {\rm eV}}}\, |R|\,. \label{eq:numasses} \end{equation} If the entries in $R$ remain $\lsim {\mathcal O}(1)$, then the seesaw Yukawa couplings are far too small to produce $N$ with observable rates and condition (ii) above is not satisfied. Strictly speaking, the entries of the complex orthogonal matrix $R$ are not bounded in modulus, and the possibility of having couplings $\lambda\gg {\mathcal O}(10^{-6})$ with $M_N\sim {\mathcal O}(1)\,$TeV, together with acceptable values for the light neutrino masses cannot be excluded. This, however, requires fine-tuned cancellations in the neutrino mass matrix which, in the absence of some enforcing symmetry principle, are highly unnatural.
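Barring such cancellations, the natural size of the couplings follows directly from \eqn{eq:yukawa_CI}; as an illustrative check (taking $|R|\sim 1$, $v \simeq 174\,$GeV, $m_\nu \sim 0.1\,$eV and $M_N \sim 1\,$TeV) one finds \begin{equation} |\lambda| \sim \frac{\sqrt{m_\nu\, M_N}}{v} \simeq \frac{\sqrt{(0.1\,{\rm eV})\,(10^{3}\,{\rm GeV})}}{174\,{\rm GeV}} \simeq 2\times 10^{-6}\,, \end{equation} in line with the order of magnitude quoted in \eqn{eq:numasses}.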
As regards leptogenesis, for a hierarchical RH neutrino spectrum ($M_1 \ll M_{2,3}$) the CP asymmetry in $N_1$ decays reads \begin{equation} \epsilon_{1} = -\frac{3}{16\pi}\frac{1}{(\lambda^\dagger \lambda)_{11}} \sum_{j\neq 1} {\rm Im}\left[(\lambda^\dagger \lambda)^2_{j1}\right] \frac{M_1}{M_j}\,. \label{eq-08:epsilon_1} \end{equation} Using for the Yukawa couplings the parameterization in~\eqn{eq:yukawa_CI} and the orthogonality condition ${\displaystyle \sum_i} R^2_{1i}=1$, one obtains the Davidson-Ibarra (DI) bound~\cite{Davidson:2002qv} \begin{equation} |\epsilon_{1}| \leq \epsilon^{DI} = \frac{3}{16\pi}\frac{M_1}{v^2}\frac{\Delta m_{atm}^2}{m_{\nu_1}+m_{\nu_3}}, \label{eq:DI} \end{equation} where $m_{\nu_3}$ ($m_{\nu_1}$) is the heaviest (lightest) light neutrino mass. The cosmic baryon asymmetry generated in $N_1$ decays can be approximated as \begin{equation} \label{eq:YB} Y_{\Delta B} = \, Y^{eq}_{N_1}\cdot c_S \cdot \epsilon_1 \, \eta_{1\,\rm eff}\,, \end{equation} where $Y^{eq}_{N_1}\sim 4\times 10^{-3}$ is the ratio between the equilibrium number density of RH neutrinos at $T\gg M_1$ and the entropy density, $\eta_{1\,\rm eff}\leq 1$ is the efficiency for preserving the asymmetry generated in $N_1$ decays, and $c_S$ is a factor related to sphaleron $L\to B$ conversion (in the SM $c_S \sim 1/3$). Experimentally $Y_{\Delta B}^{CMB}=(8.79 \pm 0.44) \times 10^{-11}$~\cite{Komatsu:2010fb}. Thus to obtain $Y_{\Delta B} \simeq Y_{\Delta B}^{CMB}$ a value \begin{equation} \label{eq:etaeff} \epsilon_1 \cdot \eta_{1\,\rm eff} \sim 6\cdot 10^{-8} \end{equation} is required. From \eqn{eq:DI} and \eqn{eq:etaeff} we have \begin{equation} M_1 \gsim \frac{2.5 \times 10^8}{\eta_{1\, \rm eff}} \, \left(\frac{m_{\nu_1}+m_{\nu_3}}{0.1\,{\rm eV}}\right) \,{\rm GeV}, \label{eq:leptobound} \end{equation} thus the leptogenesis scale lies well above the TeV scale and (iii) is not satisfied.\footnote{The derivation of the DI bound requires summing up the CP asymmetries over the lepton flavours, which is an incorrect procedure in the flavoured regimes (below $T \sim 10^{12}\,$GeV)~\cite{Barbieri:1999ma,Abada:2006ea,Nardi:2006fx}. Moreover the bound holds only for a hierarchical spectrum of RH neutrinos $M_1\ll M_2\ll M_3$ and when $N_1$ contributions to leptogenesis are dominant~\cite{Engelhard:2006yg}. However, detailed numerical analyses indicate that while the limit~\eqn{eq:leptobound} could indeed get relaxed, for example by flavour effects in generic~\cite{Blanchet:2008pw} as well as in specific~\cite{Racker:2012vw} scenarios, the leptogenesis scale still remains bounded to lie well above the TeV scale. One can get around this conclusion if the CP asymmetries are resonantly enhanced~\cite{Pilaftsis:2003gt,Pilaftsis:2004xx,Pilaftsis:2005rv}. This, however, requires two almost degenerate RH neutrino masses.} \section{Extensions of the Type I seesaw} \label{sec:extensions} A way around the difficulties in satisfying the three conditions (i)-(iii) can be obtained by equipping the RH neutrinos with new (complex) couplings to the SM fermions. This makes it possible to decouple the size of the CP asymmetries and the rates of $N$ production from the constraints implied by the light neutrino masses \eqn{eq:numasses} and \eqn{eq:DI}. Since the RH neutrinos are SM gauge singlets, the form of the new couplings is restricted by gauge invariance to involve only new scalars with the same quantum numbers as the SM fermions (which we generically denote as $\psi$).
The form of the additional couplings is: \begin{equation} \label{eq:newcoupling} -{\mathcal L_{\tilde \psi}}= \eta_{m i} \bar \psi_{L m} N_i \> \tilde\psi + \sum_{\psi'\,\psi''} y_{m n} \bar{\psi}'_{L m} \psi''_{R n} \> \tilde\psi\,\ +\ {\rm h.c.} \end{equation} where $\psi_L,\,\psi_L'$ denote the SM left-handed (LH) fermion fields $\ell,\,e^c,\,Q,\,d^c,\,u^c$, ($N^c=N^c_L$ will denote the LH $SU(2)$ singlet neutrino) while the SM RH fields are $\psi''_R = \ell^c,\,e,\,Q^c,\,d,\,u$ (and $N=N_R$). In the above $\tilde\psi$ denote scalars that must match the gauge quantum numbers of $\psi_L$ in the first term, and $\eta_{m i}$ and $y_{m n}$ are matrices of Yukawa couplings.\footnote{We use $i,j$ to denote the generation indices for the RH neutrinos, $\alpha, \beta$ for leptons in the basis specified in \eqn{eq:seesaw} and $m,n$ for generic states when their identity (or basis) is unspecified. It is understood that $\eta$ in the first term is different for different types of scalar $\tilde\psi$, while $y$ within the sum in the second term is different also for different $\bar\psi' \psi''$ fermion bilinears.} In order to keep easily in mind the gauge representations of the new states, we borrow the usual supersymmetric notation and denote the relevant scalars with a tilde: $\tilde\psi = \tilde \ell\,,\,\tilde e,\,\tilde Q,\, \tilde d,\,\tilde u$. The effect of the couplings in the first term in~\eqn{eq:newcoupling} is threefold: 1. They can bridge the production of RH neutrino by means of $\tilde \psi$ exchange which, being gauge non-singlets, have sizeable couplings to the SM gauge bosons. 2. They open a new decay channel $N\to \bar \psi \tilde\psi$ for which the associated CP violating asymmetries receive contributions from self energy loops involving both $\lambda$ and $\eta$ (see Figure~\ref{fig:CP_violation_diagrams}). 3. They contribute via new self energy diagrams to the CP asymmetries in $N\to \bar \ell H$ decays (see Figure~\ref{fig:CP_violation_diagrams}). The important point is that since the couplings $\eta$ are not related to light neutrino masses, they can be sufficiently large to allow for $N$ production with observable rates and for large enhancements of the CP asymmetries. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{cp_violation_diagrams} \caption{The CP asymmetries in $N_i\to \chi_m \tilde\chi$ decays from one-loop self-energy and vertex diagrams with $\chi^{(\prime)}_m= \ell_{\alpha},\,(\psi_{m}) $ and $\tilde{\chi}^{(\prime)}= H,\,(\tilde{\psi}) $. 
\label{fig:CP_violation_diagrams} } \end{center} \end{figure} Assuming $M_j>M_{1}>M_{\tilde\psi}$ ($j=2,3$) and summing over final state flavours, the self-energy and vertex contributions to the CP asymmetries in $N_1\to \bar \ell H, \bar \psi\tilde\psi$ decays are: \begin{eqnarray} \label{eq:self} \epsilon_{1\chi}^{S} & = & \frac{\kappa_\chi}{16 \pi D_1}\sum_{j\neq 1} \sum_{\chi'}\kappa_{\chi'}\,{\rm Im} \left[\left(\xi_{\chi'}^{\dagger}\xi_{\chi'}\right)_{1j} \left(\xi_\chi^{\dagger}\xi_\chi\right)_{1j} \right] f^{S}\left(\frac{M_{j}^{2}}{M_{1}^{2}}\right)\,, \\ \label{eq:vertex} \epsilon_{1\chi}^{V} & = & \frac{\kappa_\chi}{8 \pi D_1} \sum_{j\neq 1} \,{\rm Im}\left[\xi_\chi^{\dagger}\xi_\chi \right]^2_{1j} f^{V}\left(\frac{M_{j}^{2}}{M_{1}^{2}}\right)\,, \end{eqnarray} where $D_1=16\pi \Gamma_1/M_1$ with $\Gamma_1$ the total $N_1$ decay width, $\chi,\chi'=\{\ell,\psi\}$ denote the SM fermions in the final states and in the loops, $\xi_{\chi},\,\xi_{\chi'}=\{\lambda,\eta\}$ and $\kappa_{\chi},\kappa_{\chi'}$ are the corresponding Yukawa couplings and gauge multiplicities. The self energy and vertex loop functions are respectively: \begin{eqnarray} f^{S} & = & \frac{\sqrt{x}}{1-x},\qquad\qquad f^{V}=\sqrt{x}\left[1-(1+x)\ln\frac{1+x}{x}\right]. \end{eqnarray} We will see below that loops involving the new couplings $\eta$ can always dominate, but that in spite of the enhancement from these new loops, $\epsilon_{1\ell}$ remains too small to make leptogenesis succeed. In contrast, the CP asymmetries for decays into $\psi\,\tilde\psi $ can have quite large values. We then assume $\epsilon_{1\psi}\gg \epsilon_{1\ell}$ and, for simplicity, we set $\lambda\to 0$ in the expressions for the CP asymmetries. Once a particular new scalar is introduced, besides the coupling $\eta$ to the RH neutrinos other couplings with SM fermion bilinears are generally possible, and these are collectively represented by the second term in \eqn{eq:newcoupling}. Clearly, we need to ensure that this second term will not contain dangerous $B$ and/or $L$ violating interactions. Table~\ref{tab:1} lists the possible scalars, their couplings to SM fermions allowed by gauge invariance and, when they can be consistently given, the assignments that render the Lagrangian~\eqn{eq:newcoupling} $L$ and $B$ conserving. The last two columns give the amount of $L$ and $B$ violation of the $ \bar \psi_L N\tilde\psi$ term, taking conventionally $L(N)=0$. Let us now analyze the different possibilities. (1) $\tilde \ell$ in the first row is a (down-type) second Higgs, so we can consistently assign $B=L=0$ to it. Neutrino mass models with an extra Higgs doublet have interesting properties, and have been studied for example in ~\cite{Ibarra:2011gn,Ibarra:2011zz}, although with no special emphasis on leptogenesis. The possibility of having $\tilde \ell$ at the TeV scale is, however, rather dangerous because in the diagonal mass basis for the quarks the new couplings to quarks bilinears (see Table~\ref{tab:1}) will generally be non diagonal, and this can induce FCNC at the tree level~\cite{Georgi:1978ri}. Experimental limits then require that either $M_{\tilde \ell}$ is very large, or that its couplings are sufficiently small~\cite{Branco:2011iw}, which implies that a TeV-scale $\tilde \ell$ does not represent a favourable possibility. (2) $\tilde e$ is a lepton since, in order to conserve lepton number in the interactions with the SM fermions, we have to assign $L=+2$ to it. 
The new couplings between $N$ and $\tilde e$ are well suited to break the relation between the size of the CP asymmetries and the light neutrino mass matrix, and if they are sufficiently large they can enhance the CP-violating loop corrections and render leptogenesis viable. In principle, $\tilde e$ can be pair produced at the LHC via electroweak processes, and if its $\eta$ couplings are particularly large it could also bridge the $N$ production. We will discuss these signatures in Sec.~\ref{sec:lhc}. (3) $\tilde Q$ is a leptoquark with $L=-1$ and $B=+1/3$, as follows from requiring $B$ and $L$ conservation in its interactions with the SM fermions. Then, while $L$ is violated in $N \to Q \tilde Q^*$ decays, $B$ is not, and the model conserves (perturbatively) $B$. Since $\tilde Q$ is a coloured particle, it can be produced with large rates at colliders~\cite{Grifols:1981aq,Hewett:1987yg,Blumlein:1996qp,Kramer:1997hh,Kramer:2004df}, for example via gluon fusion $gg \to \tilde Q \tilde Q^*$, and it can bridge RH neutrino production at the observable level if its $\eta$ couplings are sufficiently large. Thus, a TeV-scale $\tilde Q$ represents a very interesting possibility. We will discuss the related signatures in Sec.~\ref{sec:lhc}. (4) The scalar $\tilde u$ can couple to the SM fermions in a $B$ and $L$ conserving way by assigning $L(\tilde u)=0$ and $B(\tilde u)=-2/3$. As regards its couplings to the RH neutrinos, by assigning conventionally $L(N)=0$ we have that $\bar u N^c \tilde u$ is $L$ conserving, so that only the seesaw couplings $\lambda\,\bar \ell N H$ violate $L$. This implies that any $L$ violating quantity (like the leptogenesis $CP$ asymmetries) must vanish in the limit $\lambda \to 0$, and leads us to conclude that adding $\tilde u$ cannot enhance the generation of lepton asymmetries. Moreover, $\bar u N^c \tilde u$ violates $B$ by one unit. Below the TeV scale, after integrating out the $N$'s the dimension 7 operator $\frac{1}{M_{\tilde u}^2 M_N}\, \left(\bar d^cd\right)\,\left(\bar L u\right)\, H$ arises. For moderately small $y$ and $\eta$ couplings the contributions of this operator to proton decay are under control, since after integrating out the Higgs, it gives rise at the GeV scale to a dimension 9 operator $\frac{1}{M_{\tilde u}^2 M_N M^2_H}\, \left(\bar d^cd\right)\,\left(\bar L u\right)\, (\bar L \mu)$ which is sufficiently suppressed to keep the rates for the decays $p\to \pi^+ \mu^+ \mu^- \nu_e$ and $p\to \pi^+ e^\pm\mu^\mp\nu_\mu$ below current limits. However, $SU(2)\times U(1)$ spontaneous symmetry breaking induces a mixing between the RH and light neutrinos, which is of order $\sqrt{m_\nu/M_N}$ and gives rise to the dimension 6 operator $\frac{1}{M_{\tilde u}^2}\,\sqrt{\frac{m_\nu}{M_N}} \left(\bar d^cd\right)\,\left(\bar \nu u\right)$. This operator induces the decays $p,n\to \pi \nu$ and, taking $m_{\nu}\sim 10^{-2}\,$eV and $M_{\tilde u}\sim M_{N}\sim 1\,$TeV, this results in a nucleon lifetime: \begin{eqnarray} \label{eq:nucleonlifetime} \tau_{N\to \pi \nu} &\sim& 10^{32} \left(\frac{10^{-19}} {y_{dd\tilde u}\, \eta_{Nu\tilde u}}\right)^2\, {\rm yrs.}\,. \end{eqnarray} To satisfy the experimental limits~\cite{Beringer:1900zz} $\tau_{p\to \pi \nu} > 0.25\times 10^{32}\,$yrs. and $\tau_{n\to \pi \nu} > 1.12\times 10^{32}\,$yrs., the required suppression of the couplings $y$ and $\eta$ is so extreme that we prefer to discard the possibility of a $\tilde u$ of TeV mass.
(5) The scalar $\tilde d$ can be coupled in a gauge invariant way both to quark-quark and to quark-lepton bilinears, and thus there is no possible assignment that conserves $B$ and $L$. As a consequence, such a scalar can mediate proton decay via unsuppressed dimension 6 operators. Thus the possibility of a TeV scale $\tilde d$ must be excluded. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Scalar field & Couplings & $B$ & $L$ & $\Delta B$ & $\Delta L$ \\ \hline \phantom{$\Big|$}$\tilde \ell $ & $\bar \ell e\,(\epsilon \tilde \ell^*),\ \, \bar Q d\, (\epsilon \tilde \ell^*),\ \, \bar Q u\, \tilde \ell $ & $0$ & $0$ &$0$ & $-1$ \\ \hline \phantom{$\Big|$}$\tilde e $ & $\bar \ell (\epsilon \ell^c)\,\tilde e $ &0 &+2 &$0$ & $+1$ \\ \hline \phantom{$\Big|$}$\tilde Q$ & $\bar \ell d\, (\epsilon \tilde Q^*) $&$+1/3$ &$-1$ &$0$ & $-1$ \\ \hline \phantom{$\Big|$}$\tilde u$ &$\overline{d^c} d\, \tilde u $ &$-2/3$ & 0 &$ -1 $ & $ 0 $ \\ \hline \phantom{$\Big|$}$\tilde d $ &$\bar \ell (\epsilon Q^c)\,\tilde d,\ \overline{Q^c}(\epsilon Q)\,\tilde d,\ \bar u e^c\,\tilde d,\ \overline{u^c}d\,\tilde d $ & $-$ & $-$ &$ - $ & $ - $ \\ \hline \end{tabular} \end{center} \medskip \caption{ The five types of scalars that can be coupled to the RH neutrinos and to one type of SM fermions ($\epsilon=i\tau_2$ is the $SU(2)$ antisymmetric tensor). The third and fourth columns list the assignments that render these couplings $B$ and $L$ conserving. $\tilde \ell$ is a (down-type) second Higgs, $\tilde e$ is a lepton, $\tilde Q$ is a leptoquark, $\tilde u$ is a baryon. For $\tilde d$ no $B$ and $L$ conserving assignments are possible. The last two columns give the amount of $L$ and $B$ violation in the couplings to RH neutrinos, taking conventionally $L(N)=0$. \label{tab:1} } \end{table} \section{Viable TeV scale Leptogenesis} \label{sec:leptogenesis} We have seen that the two types of scalars $\tilde \psi = \tilde e,\, \tilde Q$ (and marginally also $\tilde\ell$) allow for phenomenologically viable extensions of the Type I seesaw. We will now study whether in such extensions the three conditions (i)-(iii) listed in the introduction can be satisfied. We assume for the moment that leptogenesis is driven by the dynamics of the lightest RH neutrino, with $M_1\ll M_{2,3}$. To allow for successful leptogenesis, the RH neutrino couplings to leptons ($\lambda$) and to the new scalars ($\eta$) should satisfy the following requirements: (i)\ {\it Out of equilibrium $N_1$ dynamics.} The $N_1$ couplings to $\ell_\alpha$ ($\lambda_{\alpha 1}$) and to $\psi_m$ ($\eta_{m 1}$) must be sufficiently small to ensure that $N_1$ decays and scatterings are out-of-equilibrium at $T\sim M_1$. The Universe expansion rate is: \begin{equation} \label{eq:Hubble} H(T) = 1.66\, \sqrt{g_*}\,\frac{T^2}{M_p} = H_1 \cdot r_H(T)\,, \end{equation} where $g_*$ is the total number of relativistic degrees of freedom (d.o.f.) and $M_p$ is the Planck mass. In the second equality we have introduced $H_1=1.4\times 10^{-12}\,$GeV that is the Hubble rate evaluated at $T=1\,$TeV with only the SM degrees of freedom $g_*^{SM} = 106.75$, while $r_H(T) = \sqrt{1+g_*^{NP}/g_*^{SM}} \left(T/1\, {\rm TeV}\right)^2$ with $g_*^{NP}$ the additional d.o.f. corresponding to the new states is, in the temperature range we are interested in, an ${\mathcal O}(1)$ correction. 
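For reference, the value of $H_1$ quoted above follows from evaluating \eqn{eq:Hubble} at $T=1\,$TeV with $g_*=g_*^{SM}=106.75$ and $M_p\simeq 1.22\times 10^{19}\,$GeV: \begin{equation} H_1 = 1.66\,\sqrt{106.75}\;\frac{(10^{3}\,{\rm GeV})^2}{1.22\times 10^{19}\,{\rm GeV}} \simeq 1.4\times 10^{-12}\,{\rm GeV}\,. \end{equation}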
Assuming for example that $M_1 > M_{\tilde\psi}$, out-of-equilibrium $N_1$ decays require \begin{equation} \label{eq:outofeq} \Gamma_1 = \frac{M_1}{16 \pi} \left(\kappa_\ell (\lambda^\dagger\lambda)_{11} + \kappa_\psi (\eta^\dagger\eta)_{11} \right) \lsim H_1 \,, \end{equation} which, at temperatures $T\sim M_1\sim 1\,$TeV, gives: \begin{equation} \label{eq:lam1} D_1 = \kappa_\ell(\lambda^\dagger\lambda)_{11} + \kappa_\psi (\eta^\dagger\eta)_{11} \lsim 7 \cdot 10^{-14} \,. \end{equation} This clearly excludes the possibility of producing $N_1$ at colliders. (ii)\ {\it Out of equilibrium $N_{2,3}$ washouts.} Because of \eqn{eq:lam1}, only $N_{j}$ ($j=2,3$) could eventually be produced, and it is then desirable to have their couplings to other particles as large as possible. Since these couplings enter the loops responsible for the CP asymmetries, large values will also enhance the particle asymmetries generated in $N_1$ decays. On the other hand, $N_{j}$ production requires that $M_{j}$ cannot be much larger than 1 TeV. Together with the assumed large values of the couplings, this condition could result, at $T \sim M_1$, in too large washouts from off shell $N_{j}$ exchange. Examples are the following $s$-channel processes: \begin{eqnarray} \label{eq:first} {\mathcal O}\left(|\lambda_{\alpha j}|^2\cdot|\lambda_{\beta j}|^2\right): && \qquad \bar \ell_\alpha H \ \leftrightarrow\ \ell_\beta H^*\,, \\ \label{eq:second} {\mathcal O}\left(|\eta_{m j}|^2\cdot |\lambda_{\alpha j}|^2\right): && \qquad \bar \psi_m \tilde\psi \ \leftrightarrow\ \ell_\alpha H^*,\ (\bar \ell_\alpha H)\,, \\ \label{eq:third} {\mathcal O}\left(|\eta_{m j}|^2\cdot |\eta_{n j}|^2\right): &&\qquad \bar \psi_m \tilde\psi \ \leftrightarrow\ \psi_n \tilde\psi^* \,. \end{eqnarray} Other processes that are not directly related to washouts, but are relevant in the following discussion, are the $B$ and $L$ conserving reactions induced by the second term in~\eqn{eq:newcoupling}, which involve $\tilde \psi$ and a pair of the SM fermions: \begin{equation} \label{eq:fourth} \tilde \psi \leftrightarrow \psi' \bar{\psi''}\,. \end{equation} At $T\sim 1\,$TeV all the SM Yukawa reactions are in equilibrium, which means that the chemical potentials of all the particles are related. It is then sufficient that the asymmetry of any one of the SM states is washed out to zero, to drive to zero all the asymmetries in the global charges. The condition that the $N_j$ mediated washouts $\gamma_w$ are out of equilibrium reads: \begin{equation} \label{eq:washouts} \gamma_w \sim \frac{1}{\pi^3} \frac{T^3}{M_j^2} \, |\xi_{m j}|^2 \cdot |\xi'_{n j}|^2 \lsim 17 \frac{T^2}{M_p}\,, \end{equation} where $\xi$ and $\xi'$ denote either $\lambda$ or $\eta$, see \eqns{eq:first}{eq:third}, and we have neglected for simplicity the gauge multiplicity factors $\kappa_{\ell,\psi}$. This yields \begin{equation} \label{eq:lam2} |\xi_{m j}|\cdot |\xi'_{n j}| \lsim 1.6 \cdot 10^{-7} \, \frac{M_j}{M_1} \left(\frac{M_1}{1\,{\rm TeV}}\right)^{1/2}\,. \end{equation} The constraint $|\lambda|\lsim 10^{-6}$ from the light neutrino masses~\eqn{eq:numasses} implies that the first set of processes~\eqn{eq:first} is easily out of equilibrium. After setting $|\lambda_{\alpha j}|\lsim 10^{-6}$ the second set of processes~\eqn{eq:second} is also out of equilibrium as long as $|\eta_{m j}|\lsim 10^{-1}$, which is still large enough to allow for $N_j$ production with observable rates.
However, to have the third set of processes \eqn{eq:third} out of equilibrium we would need to require $|\eta_{m j}|\lsim 4\cdot 10^{-4}$, pushing again $N_j$ production rates well below observability. We will argue below that in equilibrium rates for the processes in \eqn{eq:third} do not imply the erasure of global asymmetries, and therefore, if the values of $\lambda_{\alpha j}$ satisfy the constraints from neutrino masses~\eqn{eq:numasses}, successful leptogenesis can proceed even if $\eta_{m j} \sim {\mathcal O}(1)$, which on the other hand allows for observable $N_j$ production. \section{Equilibrium conditions} \label{sec:equilibrium} Because of intergenerational mixing, at $T\sim 1\,$TeV quark flavours are treated symmetrically by the network of chemical equilibrium conditions, so that there is just one chemical potential for each type of quark: \begin{equation} \label{eq:Qud} \mu_{Q_m}=\mu_Q\,, \qquad \mu_{u_m}=\mu_u\,, \qquad \mu_{d_m}=\mu_d\,. \end{equation} As regards the leptons, chemical potentials are generally different for different flavours~\cite{Barbieri:1999ma,Abada:2006fw,Nardi:2006fx}. However, if $\eta_{\alpha j} \sim {\mathcal O}(1)$ the reactions~\eqn{eq:third} are in chemical equilibrium, implying \begin{equation} \label{eq:equilibrium} \mu_{\psi_\alpha}+\mu_{\psi_\beta} = 2\,\mu_{\tilde \psi}\,. \end{equation} Therefore, when $\tilde\psi=\tilde e$ (or $\tilde \ell$) it follows that $\mu_{\tilde e}=\mu_{e_\alpha}$ (or $\mu_{\tilde \ell}=\mu_{\ell_\alpha}$) for each $\alpha$. Charged leptons Yukawa equilibrium in turn implies $\mu_{\ell_\alpha}-\mu_{e_\alpha}=\mu_H$ so that in both cases of $\tilde e$ and $\tilde \ell$, lepton flavour equilibration~\cite{AristizabalSierra:2009mq} is enforced and we can set $\mu_{e_\alpha}=\mu_e$ and $ \mu_{\ell_\alpha}=\mu_\ell$. Note that, by itself, condition~\eqn{eq:equilibrium} does not imply $\mu_\psi=\mu_{\tilde\psi}=0$. In fact, although with the assignments given in Table~\ref{tab:1} reactions~\eqn{eq:third} appear to violate global $L$ number, it is possible to preserve particle asymmetries even when they are in thermal equilibrium. A simple way to illustrate this is the following: in type I seesaw leptogenesis there are always enough conditions to express all particle asymmetries in terms of the (non-vanishing) asymmetries in the anomaly free flavour charges $Y_{\Delta_\alpha} = B/3-L_\alpha$. One can then interpret, for example, the effect of putting into thermal equilibrium the $\Delta L =2$ scatterings~\eqn{eq:first} as imposing three new chemical equilibrium conditions without introducing any new chemical potential. This implies that the homogeneous system of conditions becomes overconstrained, and $Y_{\Delta_\alpha} =0$ is the only solution. In the present case, however, while \eqn{eq:equilibrium} gives new equilibrium conditions, we also have one additional chemical potential $\mu_{\tilde \psi}$, so that the system is not overconstrained. More in detail, when $\tilde\psi = \tilde Q$, \eqn{eq:equilibrium} gives a single new condition, and the new chemical potential is $\mu_{\tilde Q}$. The constraining conditions can then be solved in terms of non-vanishing $Y_{\Delta_\alpha}$. When $\tilde \psi = \tilde e$ (or $\tilde \ell$), then \eqn{eq:equilibrium} represents three additional conditions. 
Two are satisfied by equating $Y_{\Delta_e}=Y_{\Delta_\mu}=Y_{\Delta_\tau}=(1/3)\, Y_{\Delta B-L}$ (this kills all dynamical flavour effects~\cite{AristizabalSierra:2009mq}) and the third one can also be satisfied while keeping $Y_{\Delta B-L}$ non-vanishing, thanks to the additional variable $\mu_{\tilde e}$ (or $\mu_{\tilde \ell}$). Clearly, it is possible to have $\mu_\psi=\mu_{\tilde \psi}\neq 0$ only if there are no other conditions involving $\mu_{\tilde \psi}$ that need to be satisfied. In particular, we must require that, besides the reactions \eqn{eq:first} and \eqn{eq:second}, also the reactions in \eqn{eq:fourth} are out of equilibrium. The rates of these reactions, which are induced by the second term in~\eqn{eq:newcoupling}, depend on the size of the couplings $y$ and can be estimated in analogy with the electron Yukawa coupling rates~\cite{Cline:1993bd} as $\gamma_y \sim 10^{-2}\, |y|^2\,T$. They remain out of equilibrium if: \begin{equation} \label{eq:ycoupling} |y|\lsim 4\times 10^{-7}\left(\frac{T}{1\,\rm TeV}\right)^{1/2}\,. \end{equation} The reason why there is no conflict between having the reactions that enforce \eqn{eq:equilibrium} in equilibrium and preserving nonvanishing particle density asymmetries, in spite of the $B$ and $L$ assignments given in Table~\ref{tab:1}, is that for $\tilde e,\,\tilde\ell,\, \tilde Q,$ these assignments have been fixed by requiring that the coupling to the SM fermions conserve $B$ and $L$. However, if \eqn{eq:ycoupling} is fulfilled at the time the $N_1$'s decay, then in the effective Lagrangian appropriate to this temperature regime one must set $y\to 0$~\cite{Fong:2010bv,Fong:2010qh}. Once this is done, one can formally obtain a $B$ and $L$ conserving Lagrangian simply by assigning to $\tilde \psi$ the same $B$ and $L$ numbers as the fermion $\psi$ (conventionally setting $L(N)=0$). From this point of view, out-of-equilibrium $N_1\to \psi \tilde\psi^*$ decays yield asymmetries which are only constrained to satisfy $\mu_\psi=\mu_{\tilde \psi}$ by the fast ${\mathcal O}(\eta^4)$ scatterings mediated by $N_{2,3}$, but there is no global asymmetry in the $L$ (or $B$) quantum numbers as defined in this way. Leptogenesis can still proceed because at lower temperatures $\tilde \psi$ will eventually decay into SM fermions, violating the $L$ number defined in the $y\to 0$ limit, so that in the end a $B-L$ asymmetry results. As regards the usual leptogenesis processes $N_1 \leftrightarrow \bar \ell\,H,\> \ell\, H^*$, for a rather subtle reason they play a fundamental role when the initial $N_1$ abundance is vanishing. The equilibrium condition \eqn{eq:equilibrium} implies $\mu_\psi - \mu_{\tilde \psi}=0$. However, $\mu_\psi - \mu_{\tilde \psi}$ is precisely the number-density factor that weights the washout rates from the inverse decays $\psi + \tilde\psi^* \to N_1$ and $ \bar\psi + \tilde\psi \to N_1$. Therefore there is no washout from these inverse decays. The equilibrium condition in fact implies precisely that a scarcity of $\psi$ with respect to $\bar\psi$ is exactly compensated by an excess of $\tilde \psi^*$ with respect to $\tilde\psi$, so that both processes proceed at the same rate. If the initial $N_1$ abundance is vanishing, such a situation can prevent the generation of any asymmetry.
This is easily understood by writing the Boltzmann equations with no washout term: \begin{eqnarray} \label{eq:BE0W1} \dot Y_{N_1} &=& \left( y_{N_1}-1\right) \gamma_{D}, \\ \label{eq:BE0W2} \dot Y_{\Delta_{B-L}} &=& {\mathcal Q}_{B-L}^\psi \, \epsilon_{1\psi}\,\left( y_{N_1}-1\right) \gamma_{D}\,, \end{eqnarray} where $\gamma_D$ is the thermally averaged decay rate, $y_{N_1} = {Y_{N_1}}/{Y^{eq}_{N_1}}$, ${\mathcal Q}_{B-L}^\psi$ is the $B-L$ charge carried by the $(\psi,\,\tilde\psi^*)$ final state, and the time derivative is $\dot Y= (sHz)\,d Y/dz$, with $s$ the entropy density and $z=M_1/T$. After plugging the first equation into the second one and integrating, we obtain that at the final time $z_f\gg 1$: \begin{equation} \label{eq:BEint} Y_{\Delta_{B-L}}(z_f) = {\mathcal Q}_{B-L}^\psi\, \epsilon_{1\psi}\, Y_{N_1}(z_i)\,, \end{equation} where we have used $Y_{N_1}(z_f)=0$ and we have assumed no initial asymmetries $ Y_{\Delta {\mathcal Q}}(z_i)=0$ at $z_i\ll 1$. As anticipated, if $ Y_{N_1}(z_i)=0$ then the final asymmetry vanishes. This is the consequence of a perfect balance between the opposite-sign asymmetries generated first in $N_1$ production, and later on in $N_1$ decays~\cite{Abada:2006ea,Nardi:2007jp}. However, the $L$ and $B$ number asymmetries are related by fast electroweak sphaleron interactions, so that any type of additional washouts in $L$ or in $B$ must be accounted for in the Boltzmann equation \eqn{eq:BE0W2} and, if present, this would be sufficient to spoil the previous cancellation. In fact, we know that for a neutrino mass scale of the order of the atmospheric or solar mass square differences, the rates of lepton number violating inverse decays $\bar \ell H,\> \ell H^*\to N_1$ are likely to be comparable with the Universe expansion rate, and thus non-negligible. Their effect must then be included in the Boltzmann equations, and this suffices to spoil the cancellation between the asymmetries in $N_1$ production and decay, allowing for successful leptogenesis even when $Y_{N_1}(z_i)=0$. In conclusion, all the conditions that we have discussed above are satisfied if: \begin{equation} \label{eq:values} |\lambda_{\alpha 1}|,\, |\eta_{m 1}|,\, |y| \lsim 10^{-7},\quad |\lambda_{\alpha 2}|,\, |\lambda_{\alpha 3}| \lsim 10^{-6},\,\quad |\eta_{m 2}|,\, |\eta_{m 3}| \sim 10^{-1}\,. \end{equation} In particular, with these figures we obtain for the CP asymmetries $ \epsilon_{1\ell} \sim 10^{-8}$ and $\epsilon_{1\psi} \sim 10^{-3}$, which shows that the asymmetries in the global charges carried by the fermions $\psi$ can indeed be quite large. Depending on the mass ordering between the RH neutrinos $N_i$ and the new scalars $\tilde \psi$, two different realizations of leptogenesis become possible. We now discuss them, focusing for definiteness on the case $\tilde \psi = \tilde Q$. \begin{itemize} \itemsep 4pt \item {$\mathbf{ M_{\tilde Q} < M_1 < M_{2,3}}$.} In this case $N_1\to Q \tilde Q^*$ decays generate the two asymmetries $Y_{\Delta Q}$ and $Y_{\Delta \tilde Q}$ (with $Y_{\Delta \tilde Q} = 2 Y_{\Delta Q}$ from $\mu_Q=\mu_{\tilde Q}$ equilibration). Later, the decay $\tilde Q^* \to \ell \bar d $ induced by the second term in~\eqn{eq:newcoupling} occurs. Regardless of the particular $L(\tilde Q)$ assignment, the decay chain $N_1\to Q \tilde Q^* \to Q\, \ell\, \bar d$ always implies $\Delta L \neq 0$ and a lepton number asymmetry is generated. Leptogenesis then proceeds in the standard way.
Note, however, that if $|y|\lsim 10^{-8}$, then $\tilde Q$ decays occur after sphalerons are switched off, and thus the lepton asymmetry cannot trigger leptogenesis. \item {$\mathbf{ M_1 < M_{\tilde Q} < M_{2,3}}$.} The advantage of this possibility is that the lightest RH neutrino $N_1$ can be produced via $\tilde Q$ decays even if it is weakly coupled. An asymmetry in $Y_{\tilde Q}$ is first generated in the decays $N_2\to Q \tilde Q^*$ (that must occur out of equilibrium, implying that $N_2$ cannot be produced). After $\tilde Q$ is produced, it will decay via the two channels $\tilde Q \to \bar \ell d $ and $\tilde Q \to N_1 Q$. The first decay feeds the $Y_{\tilde Q}$ asymmetry into $Y_{\Delta L}$, proportionally to its branching ratio, triggering leptogenesis. The second channel allows for $N_1$ production even if the corresponding couplings are tiny. In fact, in order not to suppress either the leptogenesis efficiency or $N_1$ production, we have to require that the two branching ratios are not too hierarchical in size. Then the out-of-equilibrium condition $|y| \lsim 4\times 10^{-7}$ (see \eqn{eq:ycoupling}) implies that the couplings $\eta_{\alpha 1}$ must also be rather small. \end{itemize} \section{Possible signals at the LHC} \label{sec:lhc} In the previous sections we have seen that the two types of scalars $\tilde \psi = \tilde e,\, \tilde Q$ allow for phenomenologically viable extensions of the Type I seesaw, and lead to viable leptogenesis with scalar and RH neutrino masses in the TeV range. This opens up the possibility of testing these scenarios at the LHC. A scalar $\tilde e$ with a mass of order TeV can be pair produced at the LHC via the process $pp\rightarrow \tilde e \tilde e^*$ mediated by a photon or a $Z$ boson (the same is of course true also for the scalar $SU(2)$ doublet $\tilde \ell$). The corresponding cross sections are shown in Fig.~\ref{fig:lhc} for the two center-of-mass energies $\sqrt{s}=8, 14$ TeV. Unfortunately, as can be seen from the figure, the cross sections are too small to lead to observable rates at LHC8 with the accumulated ${\mathcal L\sim 20}$ fb$^{-1}$, while detecting a signal at LHC14 with an accumulated luminosity of ${\mathcal L\sim 100}$ fb$^{-1}$ is only marginally possible. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{gluglu} \caption{Tree-level contributions to $pp\rightarrow \tilde Q\tilde Q$ via gluon fusion. \label{fig:gluglu}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{qqbar} \caption{Tree-level contributions to $pp \rightarrow \tilde Q\tilde Q$ via quark-antiquark annihilation. In the first diagram $q$ stands for $Q,u$ or $d$. \label{fig:qqbar}} \end{center} \end{figure} The scalar leptoquark $\tilde Q$, being a coloured particle, has larger production cross sections. The dominant production mechanism is via pair production $pp\rightarrow \tilde Q \tilde Q^*$ which can proceed via gluon fusion (see Fig.~\ref{fig:gluglu}) or via quark-antiquark annihilation (see Fig.~\ref{fig:qqbar})~\cite{Grifols:1981aq,Hewett:1987yg,Blumlein:1996qp,Kramer:1997hh,Kramer:2004df}. The gluon fusion channel is, as usual, proportional to $\alpha_s^2$. Production via quark-antiquark annihilation receives contributions from three different types of diagrams, which can interfere only for some specific initial/final state configurations.
The amplitude for $s$-channel gluon exchange depicted in the first diagram in Fig.~\ref{fig:qqbar} is ${\cal O}(\alpha_s)$ and corresponds to the dominant contribution. The amplitude for $t$-channel $N$ exchange in the second diagram is ${\cal O}(\eta^2)\lsim 10^{-2}$ and thus subdominant. This is the only channel that allows production of pairs of leptoquarks carrying an overall nonvanishing charge, as {\it e.g.} in $pp\rightarrow \tilde u\tilde d^*$. The last diagram in Fig.~\ref{fig:qqbar} is the $t$-channel $\ell$ exchange amplitude which is ${\cal O}(y^2)$ and thus, due to the out-of-equilibrium condition eq.~(\ref{eq:ycoupling}), negligibly small. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{qN} \caption{Tree-level contributions to single $N$ production via quark-gluon coannihilation. \label{fig:qN}} \end{center} \end{figure} Compared with standard leptoquark models, the presence of the couplings between the leptoquarks and the RH neutrinos yields some distinctive characteristics. The most striking one is that single leptoquark production proceeds dominantly via associated production of a heavy RH neutrino $N$, as shown in Fig.~\ref{fig:qN}. This is because single production in association with a SM lepton is strongly suppressed by the smallness of the $y$ Yukawa couplings. It is also interesting to note that $N$ exchange opens up the possibility of the $L$-violating production processes $pp\rightarrow \tilde Q \tilde Q$ (see Fig.~\ref{fig:qq}) for which the cross section is \begin{eqnarray} \sigma_{Q_m^{a}Q_n^{b} \to\tilde{Q}^{a}\tilde{Q}^{b}}\left(\hat s\right)& = & \frac{2}{\hat{s}}\Bigg\{ \sum_{i}\frac{\left|\eta_{m i}\right|^{2}\left |\eta_{n i}\right|^{2} n_{i}^{2}}{32\pi} \frac{\beta}{n_{i}^{4}/\hat{x}+2 \left(1+\beta^{2}\right)n_{i}^{2}+\left(1-\beta^{2}\right)^{2} \hat{x}}\nonumber \\ & & +\frac{1}{64\pi}\sum_{j<i} \frac{{\rm Re} \left(\eta_{m j}\eta_{n j}\eta_{m i}^{*}\eta_{n i}^{*} \right)n_{j}n_{i}}{n_{j}^{2}-n_{i}^{2}}\left[L_{i}\left(\hat x\right)- L_{j}\left(\hat x\right)\right] \\ & & +\frac{\delta_{ab}}{384\pi} \sum_{i,j}\frac{{\rm Re}\left(\eta_{m j}\eta_{n j} \eta_{m i}^{*}\eta_{n i}^{*} \right)n_{j}n_{i}}{n_{j}^{2}+n_{i}^{2}+2\left(1+\beta^{2}\right)\hat{x}} \left[L_{j}\left(\hat x\right)+L_{i}\left(\hat x\right)\right]\Bigg\}\,,\nonumber \end{eqnarray} where $a,b$ are $SU(2)$ indices, $\hat{x}=\frac{\hat s}{4M_{\tilde{Q}}^{2}}$ with $\hat{s}$ the partonic center of mass energy, $n_{i}=\frac{M_{i}}{M_{\tilde{Q}}}$, $\beta=\sqrt{1-\frac{4M_{\tilde{Q}}^{2}}{\hat s}}$ and $L_{i}\left(\hat x\right) = \log\frac{n_{i}^{2}+\left(1+\beta\right)^{2} \hat{x}}{n_{i}^{2}+\left(1-\beta\right)^{2}\hat{x}}$. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{qq} \caption{Tree-level contributions to $L$-violating production $Q Q \rightarrow \tilde Q\tilde Q$. \label{fig:qq}} \end{center} \end{figure} In Fig.~\ref{fig:lhc} we plot the cross sections for the different production mechanisms both for LHC8 (left) and for LHC14 (right), adopting for illustrative purposes the value $\eta_{\alpha 2}=0.1$ for the Yukawa couplings and $M_2=2$ TeV for the second heaviest RH neutrino mass. The cross sections have been computed with the CTEQ6L1 parton distribution functions \cite{CTEQ6}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{color_prod_8TeV} \includegraphics[width=0.49\textwidth]{color_prod_14TeV} \caption{Production cross section for leptoquarks and scalar leptons at LHC8 (left) and LHC14 (right).
The line with black squares in the left panel gives the CMS 95\% CL exclusion limits (with 19.6 fb$^{-1}$~\cite{CMS:zva}) for leptoquark searches through the process $pp\rightarrow \tilde{Q}\tilde{Q}^{*} \to\mu\mu jj$ assuming leptoquarks decay with 100\% branching fraction to muon+jet. The yellow band shows our estimated sensitivity for leptoquarks decaying 100\% into third generation fermions $pp\to\tilde{Q}\tilde{Q}^{*}\to\tau\tau bb$ (see text for details). \label{fig:lhc}} \end{center} \end{figure} As we see from the figure, QCD-mediated $\tilde Q\tilde Q^*$ pair production is the dominant mechanism, while the $L$-violating $\tilde Q\tilde Q$ production rate remains between two and three orders of magnitude below. Nevertheless, we can expect that the background for this second process will be much smaller. This could open up the possibility of differentiating this scenario from standard leptoquark models, by observing its specific lepton number violating signatures. The detection of these signals will depend on the dominant decay modes of the leptoquarks. For $M_1> M_{\tilde Q}$ the dominant decay channels are $\tilde{Q}\to\bar{\ell}d$ and $\tilde{Q}\to Q\nu$, with respective decay widths \begin{eqnarray} \Gamma_{\tilde{Q}\to \ell_{\alpha}d_{m}} & = & \frac{\left|y_{\alpha m}\right|^{2}}{16\pi}M_{\tilde{Q}},\label{eq:lqdec} \\ \Gamma_{\tilde{Q}\to Q_{m}\nu_{\alpha}} & = & \frac{M_{\tilde{Q}}v^{2}}{16\pi}\left|\left(\eta M^{-1}\lambda^{T}\right)_{m \alpha}\right|^{2}\sim {\cal O} \left(\frac{|\eta_{m 2}|^2}{16\pi} \frac{m_\nu}{M_2}\,M_{\tilde Q}\right). \end{eqnarray} Three-body decays $\tilde Q \to Q \ell H^*$ mediated by $N$ exchange are much more suppressed and can be neglected. The decay $\tilde{Q}\to Q\nu$ proceeds via the mixing of the light neutrino states with the RH neutrinos, and therefore it is suppressed by the factor $m_\nu/M$. On the other hand $\tilde{Q}\to\bar{\ell}d$ is mediated by the $y$ Yukawa couplings which must be strongly suppressed to satisfy the out-of-equilibrium condition eq.~(\ref{eq:ycoupling}), and thus the two decay channels might well have comparable rates. Note that for the isospin $-\frac{1}{2}$ component $\tilde d$, both decays lead to a final state where the outgoing lepton is a $\nu$. Consequently, in either $\tilde d \tilde d^*$ or $\tilde d \tilde d$ production the final state will contain two jets plus missing energy. Such a final state suffers from very large QCD backgrounds which render these processes undetectable. For the isospin $+\frac{1}{2}$ component $\tilde u$ the first decay leads to the usual leptoquark signal with a charged lepton and a jet. This is the dominant decay mode {\it e.g.} for $|y|\sim 10^{-7}$, $|\eta|\leq 0.2$ and $M_2\sim {\mathcal O}(\mathrm{TeV})$. In this case the $L$-violating production $pp\rightarrow \tilde u \tilde u$ would lead to a clean signature with two jets and two same-sign leptons in the final state. At present the strongest constraints from LHC experiments on leptoquarks (that we keep denoting generically by $\tilde Q$) come from searches for the process $pp\rightarrow \tilde Q \tilde Q^*$ followed by the decay $\tilde Q\rightarrow l q$ (where $l$ denotes a charged lepton and $q$ a generic quark) which results in two jets and an $l^+l^-$ pair in the final state \cite{CMS:zva,Chatrchyan:2012vza,Chatrchyan:2012sv,ATLAS:2013oea,ATLAS:2012aq}. The most up-to-date searches at LHC8, with 19.6 fb$^{-1}$ of integrated luminosity, have been reported by the CMS collaboration \cite{CMS:zva}.
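Before turning to these limits, it is useful to get a feeling for the numbers entering the $\tilde u$ decay pattern. The short script below evaluates the two partial widths in eq.~(\ref{eq:lqdec}) and the corresponding decay length for an illustrative benchmark; the light-neutrino mass used there is our own illustrative choice and not a value fixed by the text.
\begin{verbatim}
# Illustrative benchmark for the widths in eq. (lqdec); GeV units,
# with hbar*c = 1.973e-14 GeV cm used to convert a width into a length.
import math

y, eta = 1e-7, 0.1        # couplings as in eq. (values)
MQ, M2 = 1e3, 2e3         # leptoquark and N_2 masses
mnu    = 5e-11            # ~0.05 eV light-neutrino mass (illustrative choice)

G_ld  = abs(y)**2 * MQ / (16 * math.pi)                    # Gamma(Q~ -> l d)
G_Qnu = abs(eta)**2 / (16 * math.pi) * (mnu / M2) * MQ     # Gamma(Q~ -> Q nu) estimate
ctau  = 1.973e-14 / G_ld                                   # decay length in cm

print("Gamma(l d)  = %.1e GeV" % G_ld)    # ~ 2e-13 GeV
print("Gamma(Q nu) = %.1e GeV" % G_Qnu)   # ~ 5e-15 GeV: the two channels are not wildly hierarchical
print("c tau       = %.2f cm" % ctau)     # ~ 0.1 cm, the displaced-vertex estimate quoted below
\end{verbatim}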
We depict in Fig.~\ref{fig:lhc} the 95\% CL exclusion plot for leptoquark pair production assuming 100\% decays into $\mu$+jet. This bound applies directly to our scenario for $pp\rightarrow \tilde u\tilde u^*$, if $\tilde u$ decays dominantly through this mode. We see from the figure that the CMS bound already rules out masses $M_{\tilde Q}\lesssim 850$ GeV. Similar bounds are expected for decays into $e$+jet. Nevertheless, the LHC8 bounds still allow for the possibility of observing at LHC14, with an integrated luminosity ${\mathcal L\sim 100}$ fb$^{-1}$, a few same-sign dilepton events from the $L$-violating production mode. For leptoquarks decaying dominantly into third generation fermions $\tau+b$, the LHC8 bounds will be somewhat weaker. In fact, comparison of present bounds from LHC7 searches for leptoquarks decaying into first and/or second generation fermions \cite{Chatrchyan:2012vza} with the bounds for leptoquarks decaying into $\tau+b$ shows that the corresponding limits get relaxed by about a factor of 10. For illustration, in the left panel in Fig.~\ref{fig:lhc} we plot, for the present scenario, the limits on the leptoquark masses obtained by rescaling the CMS bounds from $\mu$+jet by a factor 10. Thus, the yellow band spans the estimated exclusion region for $pp\rightarrow \tilde u\tilde u^*$ from LHC8 leptoquark searches, for all final states with two charged leptons and two jets. From this exercise we can conclude that $M_{\tilde Q}\gtrsim 500$ GeV could still be allowed at 95\% CL if $\tilde u$ decays dominantly into $\tau$+b. In this case, somewhat larger $L$-violating rates could be allowed, although the observability of the lepton number violating signals at LHC14 will crucially depend on the efficiency for $\tau$-charge reconstruction. Finally, let us add two comments about possible differences between the signatures of our scenario and those of standard leptoquark models. In the first place, if the $\tilde u\rightarrow l^+ d$ decay mode dominates, but the $y$ Yukawa couplings are sufficiently small, this decay may produce a displaced vertex. From eq.~(\ref{eq:lqdec}) we can estimate the $\tilde u$ decay length as $c\tau=0.1 \left(\frac{10^{-7}}{|y|}\right)^2\frac{1\, \rm TeV}{M_{\tilde Q}}$~cm. The presence of such a displaced vertex can modify the applicability of usual leptoquark searches to this scenario. Secondly, if $M_{\tilde{Q}}>M_{1}$ the decay mode $\tilde{Q}\to N_{1}Q$ becomes allowed. The decay width reads \begin{eqnarray} \Gamma_{\tilde{Q}\to N_{1}Q_{m}} & = & \frac{\left|\eta_{m 1}\right|^{2}M_{\tilde{Q}}}{16\pi} \left(1-\frac{M_{1}^{2}}{M_{\tilde{Q}}^{2}}\right)^{2}. \end{eqnarray} Depending on the value of the ratio of Yukawa couplings $|\eta_{m 1}|^2/|y_{m n}|^2$, the clean leptoquark signatures with two leptons and two jets in the final state could get overshadowed. In this case, dedicated searches for final states which include the decay products of $N_1$ would be needed. Exploring this possibility in detail goes, however, beyond the scope of this paper. \section{Constraints from FCNC} \label{sec:fcnc} On general grounds, one expects that the $\tilde\psi$ couplings $\eta$ to the RH neutrinos and $y$ to SM fermion bilinears (see~\eqn{eq:newcoupling}) will have generic flavour structures, and could therefore generate dangerous contributions to FCNC processes. Let us note that the couplings $\eta$ involve the two heavy states $N$ and $\tilde \psi$, and thus can give contributions to rare processes only via loop diagrams.
Loop suppression is important in this case, because for example $\eta_{m2},\eta_{m3}$ can have particularly large values (see~\eqn{eq:values}). In contrast, the couplings $y$ involve just one heavy state $\tilde \psi$, and thus can contribute via tree-level diagrams. However, the values for the $y$'s are already constrained by the out-of-equilibrium condition to be $\lsim 10^{-7}$. Their contributions to FCNC processes are thus strongly suppressed. We will now estimate more quantitatively, for the two types of new scalars $\tilde Q$ and $\tilde e$, the limits on $\eta$ and $y$ implied by the most relevant FCNC processes. \\ 1. $\tilde \psi={\tilde Q}$:\ Through a loop involving $N$ and $\tilde Q$, the $\eta$ couplings can contribute to radiative decays of quarks. At the quark level, the most dangerous transition is $s \to d\gamma$, which induces e.g. the radiative decay $K^+\to \pi^+\gamma$, bounded by Br$(K^+\to \pi^+\gamma) < 2.9\times 10^{-9}$~\cite{Beringer:1900zz}. We estimate the leading contribution of the loop involving the RH neutrinos $N_j$ as~\cite{Dimopoulos:1995ju,Sutter:1995kp}: \begin{equation} {\rm Br} (K^+\rightarrow \pi^+ \gamma)= \tau_{K}\, \frac{\alpha |\eta_{2j}\eta^*_{1j}|^2}{(8 \pi)^4}\frac{m_{K}^5} {M_{\tilde{Q}}^4}f\left(\frac{M_{j}^2}{M_{\tilde{Q}}^2} \right) \sim 4.3 \times 10^{-7}\> |\eta_{2j}\eta^*_{1j}|^2\,, \label{eq:Ktopi} \end{equation} where $\tau_{K}=1.2 \times 10^{-8}\,$s is the $K^+$ lifetime, the loop function is \begin{equation} f(r)=\frac{1}{12(1-r)^4}\left(2 r^3+3 r^2-6r +1-6r^2\log r\right) \sim \frac{1}{6r} \end{equation} (the approximation holds for $r\gg 1$), and we have taken $M_{j}= 2\,$TeV and $M_{\tilde{Q}}=1\,$TeV. The estimate \eqn{eq:Ktopi} translates into \begin{equation} \sqrt{|\eta_{2j}\eta^*_{1j}|}\lsim 0.29\,, \end{equation} which is not in conflict with the numbers given in \eqn{eq:values}. As regards FCNC decays mediated by tree-level diagrams involving the coupling $y$, semileptonic lepton flavour violating (LFV) $K$ decays provide the strongest constraints, e.g. Br$(K^+\rightarrow \pi^+\mu^+e^-)< 1.3 \times 10^{-11}$~\cite{Beringer:1900zz}. We estimate the branching ratio for this process by comparing it with the three-body semileptonic decay {\rm Br}$(K^+\rightarrow \pi^0\mu^+\nu_\mu) = 3.4\% $: \begin{equation} {\rm Br}(K^+\rightarrow \pi^+\mu^+e^-) = \frac{|y_{22}\,y^*_{11}|^2}{g^4} \frac{M_W^4}{M_{\tilde Q}^4}\, {\rm Br}(K^+\rightarrow \pi^0\mu^+\nu_\mu) \sim 7.8\times 10^{-6}\,|y_{22}\,y^*_{11}|^2\,, \end{equation} where again we have taken $M_{\tilde{Q}}=1\,$TeV. This yields \begin{equation} \sqrt{|y_{22}\,y^*_{11}|} \lsim 3.6\times 10^{-2}\,, \end{equation} which is much less constraining than what is required to satisfy the out-of-equilibrium condition. An analogous limit can also be derived for $|y_{12}\,y^*_{21}|$. Other FCNC $K,B$ and $D$ decays yield limits which are even less constraining. \\ 2. $\tilde \psi={\tilde e}$:\ Through a loop involving $N$ and $\tilde e$, the $\eta$ couplings of $\tilde e$ can contribute to $\mu \to e\gamma$, for which a tight limit has been recently obtained by the MEG collaboration, Br$(\mu\to e\gamma)< 5.7\times 10^{-13}$~\cite{Adam:2013mnn}.
We estimate the leading contribution as \begin{equation} {\rm Br }(\mu^+\rightarrow e^+ \gamma)=\tau_\mu \frac{\alpha |\eta_{2j}\eta_{1j}^*|^2}{(8 \pi)^4}\frac{m_{\mu}^5}{M_{\tilde{e}}^4}f\left(\frac{M_ {j}^2}{M_{\tilde{e}}^2}\right) \sim 2.5\times 10^{-8}\> |\eta_{2j}\eta_{1j}^*|^2\,, \end{equation} where $\tau_{\mu}=2.2 \times 10^{-6}\,$s, $M_{j}= 2\,$TeV and $M_{\tilde{e}}=1\,$TeV. We obtain \begin{equation} \sqrt{|\eta_{2j}\eta_{1j}^*|} \lsim 0.07\, \end{equation} which, roughly speaking, is also within the range suggested in~\eqn{eq:values}. As regards the $y$ couplings, since $\tilde e$ is an $SU(2)$ singlet it can only mediate LFV decays such as $\mu^+ \to \nu_\tau \bar \nu_e e^+$, in which LFV occurs in the undetected neutrino flavours. Thus these processes cannot yield useful constraints. Loose limits, at best at the level of several percent, could still be obtained from measurements of the $\mu$-decay parameters, given that the couplings to the scalar mediator $\tilde e$ are not of the $V-A$ type. However, \eqn{eq:values} shows that they are certainly satisfied. \section{Discussion and Conclusions} \label{sec:conclusions} The SM equipped with the type I seesaw mechanism can account for the suppression of neutrino masses and, through leptogenesis, for the baryon asymmetry of the Universe. However, it should be recalled that from the theoretical point of view it suffers from a serious fine-tuning problem related to the sensitivity of the Higgs mass to loop contributions that are quadratic in the large mass scale of the RH neutrinos~\cite{Casas:2004gh}. From the experimental point of view it is quite unpleasant that the type I seesaw evades the possibility of direct tests in laboratory experiments. Lowering the seesaw scale down to the TeV scale solves the theoretical fine-tuning problem, since RH neutrino loop effects become small and are completely under control. However, this does not suffice to render the model testable, because the RH neutrino Yukawa couplings become too tiny to allow for their production. Also, leptogenesis is no longer viable at such a low scale, implying that a very desirable feature of the model is lost. In this paper we have shown that by introducing new scalars that couple to the RH neutrinos and one other species of SM fermion, we can realize scenarios which satisfy the three conditions of (i) generating neutrino masses at the TeV scale; (ii) being testable at the LHC via direct production of new states; (iii) allowing for successful leptogenesis at the TeV scale. In particular, we have shown that the theoretically most favourable possibilities are a scalar leptoquark transforming under $SU(3)\times SU(2)$ as $\tilde Q \sim (3,2)$ and a scalar lepton $\tilde e \sim (1,1)$ with $L=+2$. These two possibilities do not introduce perturbative $B$ violation and thus do not affect nucleon stability, and new FCNC contributions remain generically under control. As regards leptogenesis, it can be realized thanks to some new subtle effects, like the presence of the new chemical potential of the scalars, and also thanks to sufficiently small washout rates. We have shown that in both these cases leptogenesis at the TeV scale can be successful. As regards direct production of new states, we have found that $\tilde e$ pair production could be marginally observable at the LHC at 14 TeV. On the other hand, the larger production rates for the coloured scalar $\tilde Q$ could make it observable already at the LHC at 8 TeV.
We have also pointed out two novel features that allow our $\tilde Q$ scenario to be experimentally distinguished from a standard leptoquark model: the $L$-violating production processes $pp\to \ell\ell jj$ mediated by RH neutrinos, and the possibility of a displaced vertex in the decay $\tilde u \to \ell^+ d$, implied by the tiny value of the coupling $|y| \sim 10^{-7}$ required for successful leptogenesis. \section*{Acknowledgments} We would like to thank D. Aristizabal Sierra for helpful discussions. CSF would also like to thank the CNYITP at Stony Brook University for the hospitality while the final part of this work was being completed. This work is supported by USA-NSF grant PHY-09-6739, by CUR Generalitat de Catalunya grant 2009SGR502, by MICINN grant FPA2010-20807 and consolider-ingenio 2010 program CSD-2008-0037 and by EU grant FP7 ITN INVISIBLES (Marie Curie Actions PITN-GA-2011-289442). \vspace{2truecm}
\subsection{Band nesting} In semiconductors, the band gap plays an important role in optical absorption. It defines the threshold above which electromagnetic radiation is absorbed through the promotion of an electron from the valence band to the conduction band. But the largest absorption is usually not at the band gap edge; it is often associated with a van Hove singularity (VHS) in the electronic structure. These correspond to singularities in the density of states; if at a given point of the reciprocal space there are VHS both in the conduction and the valence band, there will be a singularity of the optical conductivity. Yet, this coincidence normally happens only at high symmetry points, and there are very few of these in the Brillouin Zone (BZ). A particular case is the extended van Hove singularity (EVHS), a single-band saddle point with a flat band along one of the directions.\cite{gofron_observation_1994} The optical conductivity of a material can be written as \begin{equation} \sigma_1(\omega) = \kappa_2(\omega)\omega\epsilon_0 \, , \nonumber \end{equation} where $\kappa_2(\omega)$ is the imaginary part of the relative electric permittivity, $\omega$ is the frequency of the incoming electromagnetic radiation, and $\epsilon_0$ is the vacuum permittivity. In the optical dipole approximation we can write: \begin{equation} \kappa_2(\omega) = A(\omega)\sum_{v,c}\int_{BZ} \dfrac{d^2\bf{k}}{(2\pi)^2}|d_{vc}|^2\delta\left( E_c - E_v - \hslash\omega \right) \, , \label{kappa2} \end{equation} Here the sum is over the occupied states in the valence band ($v$) and the unoccupied states in the conduction band ($c$), with energies $E_v$ and $E_c$, and implicitly includes the sum over spins; $A(\omega) = 4\pi^2e^2/(m^2\omega^2)$, where $e$ is the electric charge and $m$ the carrier mass, and $d_{vc}$ is the dipole matrix element. The integral in (\ref{kappa2}) is evaluated over the entire 2D BZ. If we consider cuts $S(E)$ of constant energy $E$, $E = \hslash\omega = E_c - E_v$, in the bandstructure, we can write: \begin{equation} d^2{\bf k} = dS\dfrac{d\left(E_c - E_v \right) }{|\nabla_k \left(E_c - E_v \right)|} \, , \nonumber \end{equation} and the integral in (\ref{kappa2}) can be rewritten as: \begin{equation} \kappa_2(\omega) = A(\omega)\sum_{v,c} \dfrac{1}{(2\pi)^2}\int_{S(\omega)}\dfrac{dS}{|\nabla_k \left(E_c - E_v \right)|}|d_{vc}|^2 \, . \nonumber \end{equation} Notice that the strong peaks in the optical conductivity will come from regions in the spectrum where $|\nabla_k \left(E_c - E_v \right)|\approx 0$. If $d_{vc}$ varies slowly over these regions (so that a gradient expansion is justified) we can write: \begin{equation} \kappa_2(\omega) \approx A(\omega)\sum_{v,c} |d_{vc}|^2 \rho_{vc}(\omega) \, , \nonumber \end{equation} where \begin{equation} \rho_{vc}(\omega) = \dfrac{1}{(2\pi)^2}\int_{S(\omega)}\dfrac{dS}{|\nabla_k \left(E_c - E_v \right)|} \, , \nonumber \end{equation} is the joint density of states (JDOS). The points where $\nabla_k \left(E_c - E_v \right) = 0$ are called critical points (CP) and they can be of several types. If $\nabla_k E_c = \nabla_k E_v = 0$ we have either a maximum, a minimum or a saddle point in each band; this usually occurs only at high symmetry points. These points often receive more attention, because they are easy to pinpoint by visual inspection of the bandstructure, and give rise to singularities in the DOS.
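In practice, the JDOS defined above is most conveniently evaluated numerically by histogramming $E_c-E_v$ over a dense $\mathbf{k}$-grid, rather than by tracing the constant-energy contours $S(\omega)$ explicitly. A minimal sketch of such an evaluation for a generic two-band model is given below; the model dispersions are placeholders, not those of any specific compound, and all prefactors are omitted.
\begin{verbatim}
# Sketch: JDOS rho_vc(omega) from band energies on a uniform 2D k-grid
# (histogram of E_c - E_v; equivalent to the surface integral up to grid broadening).
import numpy as np

N = 400
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, N, endpoint=False),
                     np.linspace(-np.pi, np.pi, N, endpoint=False))

# placeholder two-band dispersions (eV); replace with DFT eigenvalues
Ev = -1.0 - 0.5 * (np.cos(kx) + np.cos(ky))
Ec =  1.0 + 0.3 * np.cos(kx) * np.cos(ky)

diff = (Ec - Ev).ravel()
hist, edges = np.histogram(diff, bins=300)
jdos = hist / (N * N * (edges[1] - edges[0]))   # per k-point per eV; 1/(2 pi)^2 omitted

omega = 0.5 * (edges[:-1] + edges[1:])
print("JDOS peaks near E =", omega[jdos.argmax()], "eV")
\end{verbatim}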
On the other hand, the condition $\nabla_k \left(E_c - E_v \right) = 0$ with $|\nabla_kE_c| \approx |\nabla_kE_v| > 0$, that is {\it band nesting}, gives rise to singularities of the JDOS, and therefore to high optical conductivity. Notice that this condition differs from an EVHS \cite{gofron_observation_1994} in that the latter refers to saddle points in one band, with a flat band in one of the directions, while here it is determined by the ``topographic'' difference between the conduction and valence bands. In the case of two-dimensional materials, a saddle point of $E_c-E_v$ gives rise to a divergence of the optical conductivity, whereas in 3D materials it merely gives rise to an edge with $(E-E_0)^{1/2}$ dependence, in first approximation.\cite{bassani-book} \section{Method} We performed a series of DFT calculations for the STMDC family using the open source code {\sc Quantum ESPRESSO}.\cite{Giannozzi2009} We used norm-conserving, fully relativistic pseudopotentials with nonlinear core-correction and spin-orbit information to describe the ion cores.\cite{PSEUDO} The exchange-correlation energy was described by the generalized gradient approximation (GGA), in the scheme proposed by Perdew, Burke and Ernzerhof\cite{Perdew1996} (PBE). The integrations over the Brillouin-zone (BZ) were performed using the scheme proposed by Monkhorst and Pack\cite{Monkhorst1968,dft} for all calculations except those of the density of states, for which the tetrahedron method\cite{Blochl1994} was used instead. We calculated the optical conductivity directly from the bandstructure.\cite{epsFR} It is well known that GGA underestimates the band gap,\cite{komsa-PRB-86-241201} and hence the optical conductivity shows the peaks displaced towards lower energies relative to actual experiments. However, their shapes and intensities are expected to be correct. We stress the importance of including spin-orbit coupling, and thus of performing fully relativistic, non-collinear calculations \cite{zhu_giant_2011,ramasubramaniam_tunable_2011}. Significant spin-orbit splittings in the range 50~meV to 530~meV can be obtained in these crystals and can be measured using current spectroscopic techniques. Still, spin-orbit interaction is ignored in most DFT calculations \cite{ataca_stable_2012,bhattacharyya_semiconductor-metal_2012,ding_first_2011,kuc_influence_2011}. In our case, even for light transition metals, such as Ti, we can have a spin-orbit splitting of the order of 40~meV, which can be easily measured. The trigonal prismatic (T) geometry does not have inversion symmetry, and has a considerable spin-orbit splitting, especially around the high-symmetry point K. The octahedral structure (O) has inversion symmetry, and therefore no spin-orbit splitting can be observed ($E(k,\uparrow)=E(k,\downarrow)$). This results from the inversion symmetry of the energy bands in the reciprocal space, which implies that $E(k,\uparrow)=E(-k,\uparrow)$ and $E(k,\downarrow)=E(-k,\downarrow)$, while time reversal symmetry (preservation of the Kramers degeneracy) requires that $E(k,\uparrow)=E(-k,\downarrow)$. \section{Results} \begin{figure*}[t] \centering \includegraphics[width=1.4\columnwidth]{bands-fig-combined6.ps} \caption{(Color online) Band structures and DOS of TiS$_2$ and ZrS$_2$ (group 4A sulphides), MoS$_2$ and WS$_2$ (group 6A sulphides) and PdS$_2$ and PtS$_2$ (group 8 sulphides).
The arrows indicate the transitions corresponding to the first prominent peaks in the optical conductivity.} \label{fig:bands} \end{figure*} \subsection{Bandstructure calculations} Calculations of the electronic structure were performed for all 2D $MX_2$ with $X=$S, Se, for both the trigonal prismatic and octahedral structures. Amongst these, we found eleven to be semiconductors. Unless otherwise stated, we will only show results for the lowest energy structures for each compound, which are the T structure for Mo$X_2$ and W$X_2$ and the O structure for Ti$X_2$, Zr$X_2$, Pt$X_2$ and Pd$X_2$. However, the same analysis can be extended to the metastable structures as well. The electronic bandstructures and density of states (DOS) of TiS$_2$, ZrS$_2$, MoS$_2$, WS$_2$, PtS$_2$ and PdS$_2$ are shown in Fig. \ref{fig:bands}. It is useful to compare the results for dichalcogenides with $M$ belonging to the same group of the periodic table, which usually have the same lowest energy structure type and have similar features in the bandstructure close to the gap. The same can be said of $M$S$_2$ and $M$Se$_2$ for the same transition metal. However, T and O structures, even of the same material, are very different. Nevertheless, all of them present Van Hove singularities of $E_c$, $E_v$ or both, including saddle points which give rise to sharp peaks in the DOS. We start by analyzing the bandstructure of WS$_2$, one of the most studied STMDCs. At the K point, where the direct gap is smallest, the Van Hove singularities are the minimum of $E_c$ and maximum of $E_v$, and therefore only give rise to steps in the DOS. These steps are low compared to the sharp peaks originating from the very flat bands near the conduction band minimum between the M and the $\Gamma$ points (see point marked as G in Fig. \ref{fig:bands}), which is not a high symmetry point. Still, the singularity of the DOS itself is not sufficient to explain the high absorption peak that can be seen in the optical conductivity (see Fig. \ref{fig:sigma}). \begin{figure}[h] \centering \includegraphics[keepaspectratio=true]{sigma-fig-combined.ps} \caption{(Color online) Real part of the optical conductivity of 2D transition metal disulphides.} \label{fig:sigma} \end{figure} \begin{figure}[h] \centering \includegraphics{grad-fig-combined} \caption{(Color online) Difference $E_c - E_v$ and the modulus of its gradient for monolayer WS$_2$, TiS$_2$ and ZrS$_2$ along the high-symmetry path.\cite{noteG} $E_{v1}$ indicates the highest occupied band, while $E_{v2}$ indicates the energy of the second highest occupied band. $a$ is the lattice constant.} \label{fig:grad} \end{figure} In order to identify the origin of the largest peak at low energy (at 2.56 eV), we analyze the energy difference between the lowest unoccupied band and the highest occupied band, $E_c-E_{v1}$ (the index of $E_c$ will be omitted for simplicity), together with its gradient, along the high symmetry lines of the Brillouin Zone (Fig. \ref{fig:grad}). We find the gradient to be very low between the $\Gamma$ and the $\Lambda$ points (corresponding to the transitions signaled in Fig. \ref{fig:bands}), which give rise to the first large optical conductivity peak at 2.56~eV. It is also small near the right arrow of Fig. \ref{fig:bands}, at around 2.7~eV. We define the regions where this band nesting occurs using the criterion $|\nabla_k \left(E_c - E_v \right)| \ll 1$~eV/($2\pi/a$) (where $2\pi/a$ is the modulus of the reciprocal lattice vector). We explored the whole BZ to find the extent of the band nesting.
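Maps of this kind can be produced directly from the band energies interpolated on a uniform $\mathbf{k}$-grid; a minimal sketch is given below, using a finite-difference gradient and the 1~eV/($2\pi/a$) threshold quoted above (the input file name is only a placeholder for eigenvalues exported from the DFT code).
\begin{verbatim}
# Sketch: locate band-nesting regions |grad_k (E_c - E_v)| < 1 eV/(2 pi / a)
# from eigenvalues E_c(k), E_v(k) stored on an N x N grid covering the BZ.
import numpy as np

data = np.load("bands_on_grid.npz")     # placeholder file with arrays "Ec", "Ev" (eV)
Ec, Ev = data["Ec"], data["Ev"]
N = Ec.shape[0]
dk = 1.0 / N                            # grid spacing in units of 2 pi / a

diff = Ec - Ev
gy, gx = np.gradient(diff, dk, dk)      # eV per (2 pi / a)
grad = np.hypot(gx, gy)

nesting = grad < 1.0                    # criterion used in the text
print("nested fraction of the BZ: %.1f%%" % (100.0 * nesting.mean()))
\end{verbatim}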
Figure \ref{fig:map-Grad-T-WS2} shows $|\nabla_k \left(E_c - E_{v1} \right)|$ for WS$_2$. The large white areas close to $\Lambda$ are the areas where band nesting occurs for these two bands. The band nesting can also be observed for other bands immediately below or above, as for example for the transitions between the second highest band and the conduction band ($E_c-E_{v2}$), also illustrated in Fig. \ref{fig:bands}. For example, the 2.96~eV peak in the optical conductivity results mostly from contributions of other bands. The bandstructures of the other trigonal prismatic compounds, WSe$_2$, MoS$_2$ and MoSe$_2$, display similar band nesting. The band nesting is also present in the bandstructure of octahedral polytype compounds. Figure \ref{fig:bands} shows the bandstructure and DOS of the O-TiS$_2$ single layer. This material exists in the bulk in the octahedral form, and was predicted to be an energetically stable semi-metal \cite{ataca_stable_2012}. However, our calculations show it to be an indirect band gap semiconductor, with a small gap. Experimentally, the bulk form of TiS$_2$ is a very narrow band gap semiconductor\cite{kukkonen_transport_1981,chen_angle-resolved_1980} ($E_g \approx 0.3$~eV). Our calculated value is probably underestimated due to the semilocal approximation used for the exchange and correlation energy functional. We also note that, since there is no spin-orbit splitting, all the bands shown are degenerate, and so contribute doubly to the DOS. Following the same reasoning we used for the trigonal prismatic materials and analyzing the energy gradients (Fig. \ref{fig:grad}), we notice that $|\nabla_k \left(E_c - E_v \right)| \ll 1$~eV/($2\pi/a$) in the regions corresponding to the arrows of Fig. \ref{fig:bands}. There is another band below, very close in energy to the highest occupied band, which is also plotted in Fig. \ref{fig:bands}. Since it has transition energies very close to the ones from the highest occupied band, it mostly reinforces the peaks due to the band nesting. All three transitions have similar energies, the strongest being near M at 1.5~eV; the others contribute to the large broadening of the peak in the optical conductivity (Fig. \ref{fig:sigma}). We analyze the extent of this band nesting over the BZ by plotting $|\nabla_k \left(E_c - E_{v1} \right)|$ for TiS$_2$ (Figure \ref{fig:map-Grad-O-TiS2}). The white regions correspond to values less than 1~eV/($2\pi/a$). It can be seen that band nesting extends significantly beyond the high symmetry lines. The larger the area, the more intense the absorption peak is expected to be. \begin{figure}[h] \centering \includegraphics[scale=0.2,keepaspectratio=true]{./map-Grad-O-TiS2} \caption{(Color online) Map on the BZ of $|\nabla_k \left(E_c - E_{v1} \right)|$ for TiS$_2$. $a$ is the lattice constant.} \label{fig:map-Grad-O-TiS2} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.2,keepaspectratio=true]{./map-Grad-T-WS2} \caption{(Color online) Map on the BZ of $|\nabla_k \left(E_c - E_{v1} \right)|$ for WS$_2$. $a$ is the lattice constant. Along the $\Gamma$-M line, $\nabla_k \left(E_c - E_{v1} \right)$ is undefined due to band crossing.} \label{fig:map-Grad-T-WS2} \end{figure} Another member of this family, ZrS$_2$, behaves in a similar way. ZrS$_2$ has the same octahedral structure and the same number of valence electrons as TiS$_2$. But in this case, the gap is much wider (Fig. \ref{fig:bands}). The transitions marked by the arrows in Fig.
\ref{fig:bands} correspond to regions where the gradient of $E_c-E_{v1}$ is small (Fig. \ref{fig:grad}). Hence, the absorption is very high at these energies, as can be seen in Fig. \ref{fig:sigma}. There we have two very close peaks, forming a very broad peak. They correspond to a transition at the M point with an energy $E=2.0$~eV, and the transition indicated by the letter A with an energy $E=2.2$~eV. The transitions at B ($E=1.88$~eV) also give some contribution to the broadening of the peak in the optical conductivity. The transition at M is even stronger than for TiS$_2$. Both TiS$_2$ and ZrS$_2$ have absorption at energies lower than those corresponding to these transitions, but the intensity is almost an order of magnitude smaller. It is interesting to note that TiS$_2$ and ZrS$_2$ have a larger optical conductivity than the corresponding systems based on W or Mo. We have verified all these results for all the members of the 2D STMDC family, which include WS$_2$, WSe$_2$, MoS$_2$ and MoSe$_2$ in the trigonal form, and TiS$_2$, ZrS$_2$, ZrSe$_2$, PdS$_2$, PdSe$_2$, PtS$_2$ and PtSe$_2$ in the octahedral form, and the band nesting is qualitatively the same. The only variation that we find is quantitative, namely, the intensity of the optical response changes from system to system (Fig. \ref{fig:sigma} shows that the high peaks near the absorption edge are about half as high for PtS$_2$ and PdS$_2$ as for TiS$_2$, for example). However, band nesting is present for all members of this family of 2D materials. \section{Summary} In conclusion, we have shown that all 2D STMDC display band nesting in large regions of the Brillouin Zone. This feature of their bandstructure leads to a large optical response and peaks in the optical conductivity. The octahedral compounds TiS$_2$ and ZrS$_2$ are amongst those with the largest band nesting regions. The trigonal prismatic systems, which lack inversion symmetry, also have a strong non-linear optical response. This result indicates that, despite being atomically thin, these materials present strong photon-electron coupling. The existence of large electron-photon interaction in 2D opens up exciting opportunities for basic research as well as for applications in photonics and opto-electronics. \vspace*{0.2cm} \begin{acknowledgments} We gratefully acknowledge JJ Woo and MC Costa and the computer resources from TACC and GRC. RMR is thankful for the financial support by FEDER through the COMPETE Program and by the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Project PEST-C/FIS/UI607/2011 and grant nr. SFRH/BSAB/1249/2012. We acknowledge the NRF-CRP award "Novel 2D materials with tailored properties: beyond graphene" (R-144-000-295-281). \end{acknowledgments}
\section{Introduction} In this paper, we derive estimates which compare the running maximum of a martingale with its quadratic variation. Given real numbers $x_n, h_n$, $ n\in\N$ we write \[ x^*_n:= \max_{k\leq n} | x_k |,\quad\quad [x]_n:= x_0^2+\sum _{k=0}^{n-1} (x_{k+1}-x_k)^2,\quad\quad (h\cdot x)_n:= \sum_{k=0}^{n-1} h_k(x_{k+1}-x_k) . \] We will derive pathwise versions of the famous Burkholder--Davis--Gundy inequalities. \begin{theorem}\label{BDGThm} For $1\leq p <\infty$, there exist constants $a_p,b_p<\infty$ such that the following holds: for every $N\in\N$ and every martingale $(X_k)_{k=0}^N$\def\theequation{BDG} \begin{equation} \label{BDGIneq \E[X]_N^{p/2}\leq a_p\E \bigl[ \bigl(X_N^*\bigr)^p \bigr], \quad\quad\E \bigl[ \bigl(X_N^*\bigr)^p \bigr] \leq b_p \E[X]_N^{p/2}. \end{equation} \end{theorem} For $p\in(1, \infty)$ this was established by Burkholder \cite {Bu66}. Under additional assumptions, Burkholder and Gundy \cite {BuGu70} obtain a version for $p\in(0,1]$, while the case $p=1$ of \eqref{BDGIneq} without restrictions is due to Davis \cite{Da70}.\def\theequation{\arabic{section}.\arabic{equation}}\setcounter{equation}{0} For a modern account see, for instance, \cite{ChTe03}. \begin{Trajectorial*} The novelty of this note is that the above martingale inequalities are established as consequences of \emph{deterministic} counterparts. We postpone the general statements and first state the \emph{trajectorial version} of Davis' inequality. \end{Trajectorial*} \begin{theorem}\label{PathDavisThm} Let $x_0, \ldots, x_N$ be real numbers and set\footnote{Throughout this paper we use the convention $0/0=0$.} $h_n:= \frac{x_n}{\sqrt {[x]_n + (x_n^*)^2}}, n\leq N$. Then \begin{equation} \label{PathDavis} \sqrt{[x]}_N \leq3 x^*_N - (h\cdot x)_N,\quad\quad x^*_N \leq6 \sqrt{[x]}_N + 2 (h\cdot x)_N. \end{equation} \end{theorem} While the proof of Theorem~\ref{PathDavisThm} is not trivial, we emphasize that the inequalities in \eqref{PathDavis} are completely elementary in nature. The significance of the result lies in the fact that it implies Davis' inequalities: indeed, if $(X_n)_{n=0}^N$ is a martingale, we may apply \eqref {PathDavis} to each trajectory of $X$ and obtain a bounded and adapted process $H$. The decisive observation is that, by the martingale property, \begin{equation} \label{Int=0} \E\bigl[ (H\cdot X)_N\bigr]= 0, \end{equation} so Davis' inequalities (with $a_1=3, b_1=6$) follow from \eqref {PathDavis} by taking expectations. We recall that the BDG inequalities also apply if $X=(X_t)_{t}$ is a cadlag local martingale, and that this follows from a straightforward limiting procedure. Moreover, the inequalities are considerably simpler to prove for \emph{continuous} local martingales (see, for example, \cite{RoWi00}); in this case, they also hold for $p\in(0,1)$, as proved by Burkholder and Gundy \cite{BuGu70}. The problem of finding the optimal values of the constants $a_p, b_p$ is delicate, and has been open for 47 years and counting; we refer to Osekowski \cite{Os10} for a discussion of the current state of research. The rest of this paper is organized as follows. In Section~\ref{history}, we discuss the history of the pathwise approach to martingale inequalities. In Section~\ref{Heuristics for the pathwise hedging approach}, we explain the intuition behind the hedging strategy $h=(h_k)_k$ used in the pathwise version of Davis' inequality. 
In Section~\ref{Davis inequality for continuous local martingales}, we give a short proof of one Davis' inequality for continuous martingales; notably, this argument leads to a better constant compared to the previous literature (to the best of our knowledge). In Section~\ref{Davis inequality}, we establish Theorem~\ref{PathDavisThm}. In Section~\ref{Pathwise Burkholder-Gundy Inequality}, we use Theorem~\ref{PathDavisThm} to derive trajectorial versions of the BDG-inequalities in the $p>1$ case; these also lead to their corresponding classical probabilistic counterpart, thus concluding a fully analytic derivation of Theorem~\ref{BDGThm}. \section{History of the trajectorial approach} \label{history}\setcounter{equation}{0} The inspiration of the pathwise approach to martingale inequalities used in this paper comes from mathematical finance, more specifically, the theory of model-independent pricing. The starting point of the field is the paper \cite{Ho98a} of Hobson, which introduces the idea to study option-prices by means of \emph{semi-static hedging}; we explain the concepts using the inequality \begin{equation} \label{SemiStatic} \sqrt{[x]}_N \leq3 x^*_N - (h\cdot x)_N \end{equation} appearing in Theorem~\ref{PathDavisThm}. If the process $x=(x_n)_{n=0}^{N}$ describes the price evolution of a financial asset, the functions $\Phi(x)=\sqrt{[x]}_N$ and $ \Psi (x)=3 x^*_N$ have the natural financial interpretation of being exotic options; specifically, here $\Phi$ is an option on realized variance, while $\Psi$ is a look-back option. The seller of the option $\Phi$ pays the buyer the amount $\Phi (x_0,\ldots,x_N)$ after the option's expiration at time $N$, and $(h\cdot x)_N$ corresponds to the gains or losses accumulated while trading in $x$ according to the portfolio $h=(h_k)_k$. The decisive observation of Hobson is that inequalities of the type \eqref{SemiStatic} can be used to derive \emph{robust bounds} on the relation of the prices of $\Phi$ and $\Psi$: independently of the market model, one should never trade the option $\Phi$ at a price higher than the price of $\Psi$, since the payoff $\Phi$ can be \emph {super-hedged} using the option $\Psi$ plus self-financing trading. Here the \emph{hedge} $3 x^*_N - (h\cdot x)_N$ is designated \emph{semi-static}: it is made up of a static part -- the option $3 x^*_N$ which is purchased at time $0$ and kept during the entire time range -- plus a dynamic part which corresponds to the trading in the underlying asset according to the strategy $h$. Since the publication of \cite{Ho98a} a considerable amount of literature on the topic has evolved (e.g., \cite{Ro93,BrHoRo01a,HoPe02,CoHoPe08,DaObRa10,CoOb11a,CoOb11b,CoWa11,HoNe12,HoKl12}); we refer in particular to the survey by Hobson \cite{Ho11} for a very readable introduction to this area. The most important tool in model-independent finance is the Skorokhod-embedding approach; an extensive overview is given by Ob{\l}{\'o}j in \cite{Ob04}. Starting with the papers \cite{GaHeTo11,BeHePe12} the field has also been linked to the theory of optimal transport, leading to a formal development of the connection between martingale theory and robust hedging (\cite{DoSo12,AcBePeSc13,DoSo13}). 
A benefit for the theory of martingale inequalities is the following guiding principle: \emph{Every martingale inequality which compares expectations of two functionals has a deterministic counterpart.} In fact, recently Bouchard and Nutz \cite{BoNu13} have coined this into a rigorous theorem in the discrete time setup, see also \cite{BeNu14}. This idea served as a motivation to derive the Doob-maximal inequalities from deterministic, discrete-time inequalities in \cite{AcBePeScTe12}.\hskip.2pt\footnote{Notably, much of the approach of \cite {AcBePeScTe12} was already developed earlier by Ob{\l}{\'o}j and Yor \cite{ObYo06}.} In the present article, we aim to extend the approach to the case of the Burkholder--Davis--Gundy inequalities. \section{Heuristics for the pathwise hedging approach} \label{Heuristics for the pathwise hedging approach}\setcounter{equation}{0} The aim of this section is to explain the basic intuition which lies behind the choice of the integrand in the pathwise Davis inequalities. Arguments are simpler in the case of Brownian motion, which we will now consider. We focus on one of the two inequalities; according to the pathwise hedging approach, we should be looking for a strategy $H$ and a constant $a$ such that $\sqrt{t} \leq a B^*_t + (H\cdot B)_t$. Indeed, a reasonable ansatz to find a super hedging strategy is to search for a function $f(b, b^*,t)$ such that \begin{equation} \label{BDG1Ansatz} \sqrt{t} \leq a B_t^* + \bigl(f\bigl(B, B^*,t\bigr) \cdot B \bigr)_t,\quad\quad t\geq0 . \end{equation} To make an educated guess for the function $f$, we argue on a \emph {purely heuristic} level and consider paths which evolve in a very particular way. Assume first that the path $(B_t(\omega)_t)_{t\geq0}$ stays infinitesimally close to the value $b$ for all $t\geq t_0$: we picture BM as a random walk on a time grid with size $\mathrm{d}t$, making alternating up and down steps of height $\sqrt{\mathrm{d}t}$. Thus, we assume that $B$ evolves in the form \begin{equation} \label{DownUp} B_{t_0+2n \mathrm{d}t} = b,\quad\quad B_{t_0+(2n+1) \,\mathrm{d}t}= b+\sqrt{\mathrm{d}t},\quad\quad n\geq0, \end{equation} where necessarily $b$ lies between $-B_{t_0}^*$ and $B_{t_0}^*$. The left-hand side of \eqref{BDG1Ansatz} is of course increasing, so we have to ensure the same behavior on the right side. A little calculation reveals that this means that $f$ should have the form \begin{equation} \label{heu1} f\bigl( B, B^*, t\bigr)\approx-\frac{ B}{\sqrt t} \quad\quad\mbox{as } t\to \infty; \end{equation} to see this, set $H_t:=f(B_t, B_t^*,t)$ and compare the value $\sqrt {t+2\,\mathrm{d}t}-\sqrt{t} \approx \mathrm{d}t/\sqrt{t} $ with \begin{eqnarray*} (H\cdot B)_{t+2\,\mathrm{d}t}-(H\cdot B)_t & \approx& f\bigl(t,b,b^* \bigr) \,\mathrm{d}B_t+f \bigl(t+\mathrm{d}t,b+\sqrt{\mathrm{d}t},b^* \bigr)\, \mathrm{d}B_{t+\mathrm{d}t} \\ & \approx& f\bigl(t,b,b^*\bigr) \sqrt{\mathrm{d}t}+f \bigl(t+\mathrm{d}t,b+\sqrt{\mathrm{d}t},b^* \bigr) (- \sqrt{\mathrm{d}t}) \\ & \approx&- \bigl[f \bigl(t,b+\sqrt{\mathrm{d}t}, b^* \bigr)-f\bigl(t,b, b^*\bigr) \bigr] \sqrt{\mathrm{d}t} + \mathrm{O}\bigl(\mathrm{d}t^{3/2}\bigr) \\ & \approx&- f_b \,\mathrm{d}t. \end{eqnarray*} To assure that both sides of \eqref{BDG1Ansatz} grow at the same speed, we thus need to require $\mathrm{d}t/\sqrt{t}\approx-f_b \,\mathrm{d}t$ which leads to \eqref{heu1}. Next, we consider a path which exhibits a different kind of extreme evolution: assume that $B_t(\omega)\approx M t$ for some number $M>0$. 
Simply setting $f( B, B^*, t)\approx- B/\sqrt t$ would lead to $ (f(B, B^*,t) \cdot B )_t \approx- 2M^2 t^{3/2}/3$. Taking $t$ sufficiently large, this (negative) quantity would eventually exceed $aB_t^* \approx a M t$ in absolute value, independently of the choice of $a$, and thus \eqref{BDG1Ansatz} would fail. So, this argument suggests choosing a function which is bounded (at least for fixed $(t,B^*)$). Moreover, dealing with a bounded integrand would conveniently allow us to follow the explanation after Theorem~\ref{PathDavisThm} and obtain Davis' inequalities from the pathwise Davis' inequalities. Thus, we could consider the function \begin{equation} \label{ContStrat} f\bigl(B,B^*, t\bigr)=-\frac{B_t}{\sqrt t\vee B^*_t}. \end{equation} Thanks to the additional term $a B_t^*$ in \eqref{BDG1Ansatz}, it is not a problem if $f( B, B^*, t)\approx- B/\sqrt t$ is violated for ``small'' values of $t$; and, if $\sqrt{t}$ is large compared to $B^*$, $f(B,B^*, t)\approx -B_t/\sqrt t$ holds, thus satisfying \eqref{heu1}. Another similar possibility would be to use the function \begin{equation} \label{ContStrat2} f\bigl(B,B^*, t\bigr)=-\frac{B_t}{\sqrt{ t + (B^*_t)^2}}, \end{equation} as in Theorem~\ref{PathDavisThm}; the latter turns out to lead to easier computations in the discrete-time case. We choose, however, $f$ given by \eqref{ContStrat} when dealing with continuous martingales, since this allows us to obtain Davis' inequality with a better constant than the values we could find in the literature. \section{Davis inequality for continuous local martingales} \label{Davis inequality for continuous local martingales}\setcounter{equation}{0} We now derive one pathwise Davis' inequality for continuous local martingales; integrating it yields the corresponding standard Davis' inequality. We notice that Theorem~\ref{thm4.1} provides the constant $3/2$, which is smaller than the optimal constant for general cadlag martingales (which is known to be $\sqrt{3}$, see \cite{bur02best}). We do not address here the opposite pathwise Davis'\vadjust{\goodbreak} inequality for continuous local martingales since we are only interested in Theorem \ref{thm4.1} for illustrative purposes (as mentioned before, the Davis inequality for cadlag local martingales follows from the case of martingales in discrete time).\vspace*{-1pt} \begin{theorem}\label{thm4.1} If $M$ is a continuous local martingale such that $M_0=0$ then\vspace*{-1pt} \begin{equation} \label{BDG1bt} \sqrt{[M]_t} \leq\frac{3}{2} M_t^*- \biggl(\frac{M_t}{\sqrt{ [M]_t } \vee M^*_t}\cdot M_t \biggr)_t \quad\quad\mbox{for all } t\geq0 . \end{equation} \end{theorem} \begin{pf} By the Dambis--Dubins--Schwarz time change result, it is enough to consider the case where $M$ is a Brownian Motion, which we will denote by $B$. From Ito's formula applied to the semi-martingales $B^2_t$ and $\sqrt t\vee B_t^*$ we find\vspace*{-1pt} \[ \mathrm{d}\frac{B_t^2}{\sqrt t\vee B_t^*}=-\frac{ B_t^2}{t \vee B_t^{*2} } \,\mathrm{d} \bigl(\sqrt t\vee B^*_t \bigr) + \frac{1}{\sqrt t\vee B^*_t} ( 2B_t\, \mathrm{d}B_t+ \mathrm{d}t ). \] We may thus replace the integral in \eqref{BDG1bt} and arrive at the equivalent formulation\vspace*{-1pt} \begin{equation} \label{BDG1tbE} \frac{B_t^2}{\sqrt t\vee B_t^*}+ \int_0^t \frac{ B_s^2}{s \vee B_s^{*2} }\, \mathrm{d} \bigl(\sqrt s\vee B^*_s \bigr) - \int _0^t \frac{1}{\sqrt s\vee B^*_s} \,\mathrm{d}s \leq3 B_t^*-2\sqrt t.
\end{equation} Inequality \eqref{BDG1tbE} gets stronger if we replace each occurrence of $B$ by $B^*$; thus, setting $f(t)= \sqrt t, g(t)= B_t^*$, it is enough to prove the following claim: Let $f, g\dvtx \R^+\to\R^+$ be continuous increasing functions such that $f(0)=g(0)=0$ and $(f\vee g)(a)>0$ if $a>0$. Then, for all $a>0$\vspace*{-1pt} \begin{equation} \label{TechCore} \biggl(\frac{g^2}{f\vee g} \biggr) (a)+ \int_0^a \frac{g^2}{f^2\vee g^2} \,\mathrm{d}(f\vee g) - \int_0^a \frac{1}{f\vee g} \,\mathrm{d}f^2\leq(3 g- 2 f) (a). \end{equation} To show this, observe that, by a change of variables $\int\frac{g^2}{f^2\vee g^2} \,\mathrm{d}(f\vee g)= -\int g^2 \,\mathrm{d}\frac{f\vee g}{f^2\vee g^2}$. Hence, integrating by parts on the interval $(\varepsilon,a)$ and taking the limit $\varepsilon\to0$, we see that the left hand side of \eqref{TechCore} equals\vspace*{-1pt} \[ \int_0^a \frac{\mathrm{d}g^2-\mathrm{d}f^2}{f\vee g }. \] By a change of variables and applying trivial inequalities we obtain \[ \int_0^a \frac{ \mathrm{d}g^2}{f\vee g}= \int _0^a 1_{\{g>0\}} \frac {\mathrm{d}g^2}{f\vee g} \leq \int_0^a \frac{ 1_{\{g>0\}} \,\mathrm{d}g^2}{g} =2g(a) ,\quad\quad \int _0^a \frac{\mathrm{d}f^2}{f\vee g} \geq \int _0^a \frac{\mathrm{d}f^2}{f(\cdot)\vee g(a)} . \] If $f(a)\leq g(a)$, the last integral equals $f^2(a)/g(a)$; otherwise there exists some $b\in[0,a)$ such that $f(b)=g(a)$, and then evaluating separately the integral on $(0,b)$ and on $[b,a)$ we obtain that \[ \int_0^a \frac{\mathrm{d}f^2}{f(\cdot) \vee g(a)}= \frac{f^2(b)}{g(a)} +2\bigl(f(a)-f(b)\bigr)= 2f(a)-g(a) . \] Since $2y-x^2/y\leq3y-2x$ holds for $y>0$, either way \eqref {TechCore} follows.\vadjust{\goodbreak} \end{pf} \section{Davis inequality} \label{Davis inequality}\setcounter{equation}{0} In this section, we prove Theorem~\ref{PathDavisThm}; in fact, we will establish that\footnote{Inequality (\ref{1half}) slightly improves on Inequality (\ref{PathDavis}) by replacing the constant $3$ with the smaller $1+\sqrt{2}$.} \begin{eqnarray} \label{1half} \sqrt{[x]_n} & \leq&(\sqrt{2}+1) x^*_n+(-h \cdot x)_n, \\ \label{2half} x^*_n & \leq&6 \sqrt{[x]_n}+(2h \cdot x)_n , \end{eqnarray} where the dynamic hedging strategy is defined by $ h_n=\frac {x_n}{\sqrt{[x]_n + (x_n^*)^2}}$ as in Theorem~\ref{PathDavisThm}. To prove \eqref{1half}, \eqref{2half} we introduce the convention, used throughout the paper, that any sequence $(y_i)_{i\geq0}$ is defined to be $0$ at time $i=-1$, and we define the auxiliary functions $f,g$ for $m>0, q\geq0$, $|x|\leq m$ by\vspace*{-1pt} \begin{eqnarray} \label{firstf} f(x,m,q)&:=& -2\sqrt{q}+ \sqrt{m^2+q}- \frac{m^2-x^2}{2\sqrt{m^2+q}}, \\ \label{firstg} g(x,m,q)&:=& -2m+ \sqrt{m^2+q}+\frac{m^2-x^2}{2\sqrt{m^2+q}} \end{eqnarray} and continuously extend them to $(x,m,q)=(0,0,0)$ by setting $f(0,0,0)=g(0,0,0)=0$. 
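For later use we record two elementary bounds on these auxiliary functions; they are precisely the estimates needed at the beginning of the proof of Theorem~\ref{PathDavisThm} below. For $m>0$, $|x|\leq m$ and $q\geq0$, one checks directly from \eqref{firstf} and \eqref{firstg} that
\[
f(x,m,q)\geq-2\sqrt{q}+\frac{m^2+2q}{2\sqrt{m^2+q}}\geq-2\sqrt{q}+\frac{m}{2},
\quad\quad
g(x,m,q)\geq-2m+\sqrt{m^2+q}\geq-2m+\sqrt{q},
\]
using that $0\leq m^2-x^2\leq m^2$ and $m\sqrt{m^2+q}\leq m^2+q\leq m^2+2q$; moreover $f(x_0,|x_0|,x_0^2)=g(x_0,|x_0|,x_0^2)=(\sqrt{2}-2)|x_0|\leq0$.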
We will need the following lemma, whose proof is a somewhat tedious exercise in calculus.\vspace*{-1pt} \begin{lemma} \label{DavisIneqfn} For $d\in\R, |x| \leq m, q\geq0, m\geq0$ we have, with $c=\sqrt{2}-1$, \begin{eqnarray} \label{ImportantIneq2} f\bigl(x+d, m \vee|x+d| , q+d^2\bigr) - f(x,m,q) & \leq&\frac{xd}{\sqrt {m^2+q}}+ \bigl( \sqrt{q+d^2} -\sqrt{q} \bigr), \\[-0.5pt] \label{ImportantIneq1} g\bigl(x+d, m \vee|x+d| , q+d^2\bigr) - g(x,m,q) & \leq&- \frac{xd}{\sqrt {m^2+q}}+ c \bigl( \bigl(m\vee|x+d|\bigr) -m \bigr).\quad\quad \end{eqnarray} \end{lemma} Before proving Lemma~\ref{DavisIneqfn} we explain why it implies \eqref{1half} and \eqref{2half}.\vspace*{-1pt} \begin{pf*}{Proof of Theorem~\ref{PathDavisThm}} Since $f(x_0,|x_0|,x_0^2)\leq0$, \eqref{ImportantIneq2} gives\vspace*{-1pt} \begin{eqnarray*} -2 \sqrt{[x]_n} + x_n^*/2 &\leq& f\bigl(x_n, x_n^*, [x]_n\bigr) \leq\sum_{k=0}^{n-1} f\bigl(x_{k+1}, x^*_{k+1}, [x]_{k+1}\bigr)-f \bigl(x_k, x_k^*, [x]_k\bigr) \\[-0.5pt] &\leq&(h\cdot x)_n + \sqrt{[x]_n}, \end{eqnarray*} which implies \eqref{1half}; and since $g(x_0,|x_0|,x_0^2)\leq0$, we get \eqref{2half} from \eqref{ImportantIneq1} as follows\vspace*{-1pt} \begin{eqnarray*} -2 x_n^* + \sqrt{[x]_n} &\leq& g\bigl(x_n, x_n^*, [x]_n\bigr) \leq\sum_{k=0}^{n-1} g\bigl(x_{k+1}, x^*_{k+1}, [x]_{k+1}\bigr)-g \bigl(x_k, x_k^*, [x]_k\bigr) \\[-0.5pt] &\leq&-(h\cdot x)_n + c x^*_n.\vadjust{\goodbreak} \end{eqnarray*} \upqed\end{pf*} Now we prove Lemma~\ref{DavisIneqfn}. \begin{pf*}{Proof of Inequality \eqref{ImportantIneq1}} It is enough to consider the case $m>0$, as the one where $m=0$ then follows by continuity. Then, we can assume that $m=1$ through normalization. Define $h(x,q,d)$ to be the LHS minus the RHS of \eqref {ImportantIneq1}; since $h(x,q,d)=h(-x,q,-d)$, it is sufficient to deal with the case $d\geq0$. \noindent\textit{Case} I $[1\geq|x+d|]$: Here we have to show that \begin{equation} \label{D1} h= \sqrt{1+q+d^2}+ \frac{1-(x+d)^2}{2\sqrt{1+q+d^2}}-\sqrt {1+q}- \frac{1-x^2}{2\sqrt{1+q}}+\frac{xd}{\sqrt{1+q}}\leq0. \end{equation} Since $h_{xx} \geq0$, $h$ is convex, so it is sufficient to treat the boundary cases $x=-1$ and $x=1-d$. To simplify notation, we set $r= \sqrt{1+q}$; notice that $r\geq1$ and $0\leq d\leq2$. \noindent\textit{Sub-case} I.A $[1\geq|x+d|, x=-1]$: Then \eqref{D1} follows from \begin{eqnarray*} &&\sqrt{r^2+d^2}+\frac{1-(d-1)^2}{2\sqrt{r^2+d^2}}-r-\frac{d}r \leq 0 \\ &&\quad\Leftarrow\quad r^2+d^2+d-d^2/2 \leq (r+d/r) \sqrt{r^2+d^2} \\ &&\quad\Leftarrow\quad r^4+d^4/4 +d^2 +r^2d^2+d^3+2dr^2 \leq r^4 +2dr^2+d^2+r^2d^2+2d^3+d^4 /r^2 \\ &&\quad\Leftarrow\quad d^4/4 \leq d^3+ d^4/r^2, \end{eqnarray*} which is true since $0\leq d\leq2$. \noindent\textit{Sub-case} I.B $[1\geq|x+d|, x=1-d]$: Here \eqref {D1} amounts to \begin{eqnarray*} &&\sqrt{r^2+d^2}-r-\frac{1-(1-d)^2}{2r} +\frac{(1-d)d}{r} \leq 0 \\ &&\quad\Leftarrow\quad\sqrt{r^2+d^2} \leq r + d^2/2r \\ &&\quad\Leftarrow\quad r^2+d^2 \leq r^2 + d^2 + d^4/4r^2. \end{eqnarray*} \noindent\textit{Case} II $[1\leq|x+d|]$: Since $|x| \leq1$ and $d\geq0$, we find that $|x+d|\geq1$ implies $x+d=|x+d| \geq1$. In this case $h$ equals \begin{equation} -(2+c) (x+d-1)+ \sqrt{(x+d)^2+q+d^2}-\sqrt{1+q}- \frac{1-x^2}{2\sqrt {1+q}} + \frac{xd}{\sqrt{1+q}}. \end{equation} Since $s\mapsto\sqrt{s^2+1}$ is convex, $h \leq0$ holds iff it holds for all $x$ on the boundary. 
Moreover if $-1 \leq1-d=x \leq1$, then we already know that $h \leq0$ from the corresponding sub-case $1 \geq |x+d|$; so we only need to show that $h \leq0$ for $x=1, q,d\geq0$ and for $x=-1, q \geq0, d \geq2$, respectively. \noindent\textit{Sub-case} II.A [$1\leq|x+d|, x=1$]: We have to show that, for all $q,d\geq0$, \[ h(1,q,d)= -(2+c)d + \sqrt{(1+d)^2+q+d^2}- \sqrt{1+q}+ \frac{d}{\sqrt {1+q}}\leq0. \] Since $(1+d)^2+d^2=2(d+1/2)^2+1/2$ and $s\mapsto\sqrt{1+s^2}$ is convex, it follows that $h(1,q,d)$ is convex in $d$; hence, the inequality has to be checked only for $d=0$ and for $d \to\infty$. The first case is trivial, and in the latter, after dividing both sides by $d$, we arrive at $-(2+c)+\sqrt{2} + 1/\sqrt{1+q}\leq0$, which holds by our choice of $c$ and the fact that $q\geq0$. \noindent\textit{Sub-case} II.B [$1\leq x+d, x=-1$]: We have to show that, for all $q \geq0, d \geq2$, \[ h(-1,q,d)=-(2+c) (d-2)+ \sqrt{(-1+d)^2+q+d^2}- \sqrt{1+q}- \frac {d}{\sqrt{1+q}} \leq0. \] As above, by convexity in $d$ it suffices to consider the cases $d=2$ and $d \to\infty$. The first one amounts to $\sqrt{5+q}\leq\sqrt {1+q} + 2/\sqrt{1+q} $, which is easily proved taking the squares. The second one, after dividing by $d$, amounts to $-(2+c)+ \sqrt {2}-1/\sqrt{1+q}\leq0$, which holds since $-(2+c)+ \sqrt{2}\leq0 $ by our choice of $c$. \end{pf*} \begin{pf*}{Proof of Inequality \eqref{ImportantIneq2}} As before, we can assume w.l.o.g. that $m=1$ and $d\geq0$. Define $k(x,q,d)$ to be the LHS minus the RHS of \eqref{ImportantIneq2}. \noindent \textit{Case} I [$1\geq|x+d|$]: In this case, $k$ equals \[ \sqrt{1+q+d^2}- \frac{1-(x+d)^2}{2\sqrt{1+q+d^2}} -\sqrt{1+q}+\frac{1-x^2}{2\sqrt{1+q}} - \frac{xd}{\sqrt{1+q}}- 3 \bigl( \sqrt{q+d^2} -\sqrt{q} \bigr) . \] Let us first isolate the terms that depend on $x$. Define $k_0:=(1+q+d^2)^{-1/2}-(1+q)^{-1/2}$, and $k_2:=k- k_0(x+d)^2/2$, so that \[ k_2=\sqrt{1+q+d^2} -\sqrt{1+q}- \frac{1}{2\sqrt{1+q+d^2}} + \frac{1+d^2}{2\sqrt{1+q}} - 3 \bigl( \sqrt{q+d^2} -\sqrt{q} \bigr) . \] Notice that we can write \[ k_0=\int_0^{d^2} k_{1}(s)\, \mathrm{d}s \quad\quad\mbox{for } k_1(s):= \frac{\mathrm{d}}{\mathrm{d}s} (1+q+s)^{-1/2}, \] and similarly $k_2=\int_0^{d^2} k_{3}(s,d^2) \, \mathrm{d}s$ for \begin{eqnarray} k_3\bigl(s,d^2\bigr)&:= & \frac{\mathrm{d}}{\mathrm{d}s} \biggl( \sqrt{1+q+s} -\frac{1-s+d^2}{2\sqrt{1+q+s}}-3\sqrt{q+s} \biggr) \\ &= & \frac{1}{2\sqrt{1+q+s}}+ \frac{2(1+q+s)+1-s+d^2}{4(1+q+s)^{3/2}} -\frac{3}{2\sqrt{q+s}}. \end{eqnarray} Since the $(k_i)_{i}$ do not depend on $x$ and $k_0\leq0$, $\max_{x}k = k_2+k_0\min_{x} (x+d)^2/2$. Since $\min_{-1\leq x \leq1} (x+d)^2$ equals $0$ if $0\leq d \leq1$ and equals $(-1+d)^2$ if $1\leq d$, to show $k\leq0$ we are lead to study the following two sub-cases. \noindent\textit{Sub-case} I.A [$1\geq|x+d|, d\leq1$]: In this case, $k=k_2$; to show that $k_2\leq0$ it is enough to show $k_3\leq0$. Since $0\leq s\leq d^2\leq1$ we get $-s+d^2\leq1$, and so trivially \begin{equation} k_3\leq\frac{2(1+q+s)+1+1}{4(1+q+s)^{3/2}}-\frac{2}{2\sqrt{q+s}}. \end{equation} So, calling $y:=q+s$, it is enough to prove that for all $y\geq0$ \begin{equation} \label{poleq1} \frac{2y+4}{4(1+y)^{3/2}} - \frac{2}{2\sqrt{y}} \leq0, \quad\quad\mbox{i.e.}\quad\quad \sqrt{y}(y+2) \leq(1+y)^{3/2} 2, \end{equation} which is seen to be true by taking squares and bringing everything on the RHS to obtain a polynomial whose coefficients are all positive. 
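Explicitly, taking squares in the second formulation of \eqref{poleq1} and bringing everything to the right-hand side gives
\[
4(1+y)^{3}-y(y+2)^{2}=3y^{3}+8y^{2}+8y+4\geq0 \quad\quad\mbox{for } y\geq0 .
\]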
\noindent\textit{Sub-case} I.B [$1\geq|x+d|, d\geq1$]: In this case $k = k_2+k_0(1-d)^2/2$, so it is enough to show that $k_3+k_1(1-d)^2/2 \leq0$. Since from $1\geq|x+d|, |x|\leq1$ it follows that $d\leq2$, computations entirely similar\footnote{Use that in this case $0\leq s\leq d^2\leq4$ implies $-s+d^2-(d-1)^2\leq 3$.} to the other sub-case establish the desired result. \noindent\textit{Case} II [$1\leq|x+d|$]: In this case, $x+d=|x+d|\geq1$ and $k$ equals \[ \sqrt{(x+d)^2+q+d^2} -\sqrt{1+q}+\frac{1-x^2}{2\sqrt{1+q}} - \frac{xd}{\sqrt{1+q}}- 3 \bigl( \sqrt{q+d^2} -\sqrt{q} \bigr) . \] Since trivially $\mathrm{d}k/\mathrm{d}x\leq0$, to show $k\leq0$ we can assume that $x=1-d$, in which case we can write $k$ as $k=\int_0^{d^2} \tilde{k}(s) \,\mathrm{d}s$ for \begin{eqnarray} \tilde{k}(s)&:= & \frac{\mathrm{d}}{\mathrm{d}s} \biggl( \sqrt{1+q+s} + \frac{s}{2\sqrt {1+q}}-3\sqrt{q+s} \biggr) \\ &= & \frac{1}{2\sqrt{1+q+s}}+ \frac{1}{2\sqrt{1+q}}-\frac{3}{2\sqrt{q+s}}. \end{eqnarray} Since $1-d=x\in[-1,1]$ we have $d^2\leq4$, and so to get $k\leq0$ it suffices to show that $ \tilde{k} \leq0$ for $s\leq4$. This holds since \[ \tilde{k}\leq\frac{1}{2\sqrt{1+q}}-\frac{2}{2\sqrt{q+s}} \leq0 \quad\quad\mbox{for } s\leq4 . \] \upqed\end{pf*} \section{Pathwise Burkholder--Gundy inequality} \label{Pathwise Burkholder-Gundy Inequality}\setcounter{equation}{0} Garsia has given a simple proof of the fact that the BDG inequalities for general $p\geq1$ are a consequence of Davis inequality ($p=1$) and of the famous lemma by Garsia and Neveu; in this section we revisit his proof and turn it into pathwise discrete-time arguments. Garsia's proof (for which we refer to \cite{Me76}, Chapter~3, Theorems 30 and 32 or to \cite{Ch75}) works similarly to how the Doob $L^p$-inequalities for $p>1$ follow by writing $x^p$ as an integral, applying the (weak) Doob $L^1$-inequality, using Fubini's\vadjust{\goodbreak} theorem, and finally applying H\" older's inequality (see for example \cite{ReYo99}). The difference is that for the BDG inequalities one needs to use a different integral expression for $x^p$, and so one has to consider Davis' inequalities not on the time interval $[0,T]$ but on $[\tau,T]$, where $\tau$ is a stopping time. In the pathwise setting, by the guiding principle stated in Section~\ref{history}, if $L$ is a functional of a martingale $X$ and $\tau$ is a stopping time, a statement of the type $\E[L|F_\tau]\leq0$ will have to be turned into one of the type $L+ (H\cdot X)_T- (H\cdot X)_{\tau}\leq0$; moreover, since there will be no expectations involved, H\"older's inequality will have to be replaced by Young's inequality. We will need to consider discrete time stochastic integrals for which the initial time is different from $0$; given $i<n$ and real numbers $(h_j)_{i\leq j \leq n-1}$ and $(x_j)_{i\leq j \leq n}$, we define \begin{equation} \label{intfromi} (h\cdot x)_i^n:=\sum _{j=i}^{n-1} h_j (x_{j+1}-x_j). \end{equation} Moreover if, for $i\leq j \leq n-1$, $h_j$ is a \emph{function} from $\R^{j+1}$ to $\R$, given real numbers $(x_j)_{0\leq j \leq n}$ we define $(h\cdot x)_i^n$ as \[ \sum_{j=i}^{n-1} h_j(x_0, \ldots, x_j) (x_{j+1}-x_j). \] Either way, we set $(h\cdot x)_i^n:=0$ if $n=i$. 
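For instance, for the constant sequence $h_j\equiv1$ the sum telescopes and $(h\cdot x)_i^n=x_n-x_i$; more generally, if the $h_j$ are defined for all $0\leq j\leq n-1$, then $(h\cdot x)_i^n=(h\cdot x)_0^n-(h\cdot x)_0^i$.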
We now deduce pathwise Davis' inequalities on $\{i,i+1,\ldots,n \}$ from the ones on $\{0,1,\ldots,n \}$ by a simple time shift.\vspace*{-1pt} \begin{lemma} \label{condDavis} Assume that $\alpha,\beta>0$ and $h_n, k_n\dvtx \R^{n+1} \to\R, n\geq 0$ satisfy \begin{equation} \label{1dav} \sqrt{[x]_n} \leq\alpha x^*_n+(h \cdot x)_n,\quad\quad x^*_n \leq\beta \sqrt{[x]_n} + (k\cdot x)_n \end{equation} for every sequence $(x_n)_{n\geq0}$. Define, for $i\geq0$, $n \geq i$, the functions $f^{(i)}_n, g^{(i)}_n\dvtx \R^{n+1} \to\R$ by \[ {f}^{(i)}_n\bigl((x_j)_{0\leq j\leq n}\bigr) := {h}_{n-i}\bigl((x_{l}-x_{i-1})_{i\leq l \leq n}\bigr),\quad\quad {g}^{(i)}_n\bigl((x_j)_{j\leq n}\bigr) := {k}_{n-i}\bigl((x_{l}-x_{i-1})_{i\leq l \leq n}\bigr). \] Then we have, for $n\geq i\geq0$, \[ \sqrt{[x]_n}-\sqrt{[x]_{i-1}} \leq2 \alpha x^*_n + \bigl(f^{(i)}\cdot x\bigr)^n_i,\quad\quad x^*_n -x^*_{i-1}\leq\beta\sqrt{[x]_n} + \bigl(g^{(i)}\cdot x\bigr)^n_i. \] \end{lemma} \begin{pf}Fix $n\geq i\geq0$, $(x_n)_{n\geq0}$ and let $y^{(i)}_j:=x_{j+i}-x_{i-1}$. Applying \eqref{1dav} to $(y^{(i)}_j)_{j\geq0}$ we find \begin{eqnarray*} \sqrt{[x]_n}-\sqrt{[x]_{i-1}}&\leq&\sqrt{[x]_n-[x]_{i-1}} =\sqrt {\bigl[y^{(i)}\bigr]_{n-i}} \leq \alpha \bigl(y^{(i)}\bigr)^*_{n-i} +\bigl(h \cdot y^{(i)} \bigr)_{n-i} \\ &\leq&\alpha2 x^*_n+\bigl(f^{(i)} \cdot x \bigr)_{i}^n, \end{eqnarray*} and (respectively) \[ x^*_n - x_{i-1}^* \leq\bigl(y^{(i)} \bigr)^*_{n-i} \leq\beta\sqrt {\bigl[y^{(i)} \bigr]_{n-i}}+\bigl(k \cdot y^{(i)}\bigr)_{n-i} \leq \beta\sqrt {[x]_n}+\bigl(g^{(i)} \cdot x\bigr)_{i}^n . \] \upqed\end{pf} Here follows the pathwise version of Garsia--Neveu's lemma. \begin{lemma} \label{GarsiaNeveu} Let $p>1$, $ c_n\in\R, (x_j)_{j\leq n}, (h^{(i)}_n)_{i\leq n} \in\R ^{n+1}$, and assume that $0=a_{-1} \leq a_0 \leq\cdots\leq a_n <\infty$ and \[ a_n -a_{i-1} \leq c_n + \bigl(h^{(i)} \cdot x\bigr)_i^n \quad\quad\mbox{for } n\geq i\geq 0 . \] Then, if we set \[ w_j:=\sum_{i=0}^j p \bigl(a^{p-1}_i - a^{p-1}_{i-1} \bigr) h^{(i)}_j,\quad\quad j\leq n , \] we have that \begin{eqnarray} \label{gnlc1} a_n^p& \leq& pc_n a_{n}^{p-1}+ (w\cdot x)_n , \\ \label{gnlc2} a_n^p& \leq&(p-1)^{p-1} c_n^p + (pw\cdot x)_n . \end{eqnarray} \end{lemma} \begin{pf} From $a^p_n = p(p-1)\int_0^{a_n} s^{p-2} (a_n-s) \,\mathrm{d}s = p\sum_{i=0}^{n} \int^{a_i}_{a_{i-1}} (p-1) s^{p-2} (a_n-s) \,\mathrm{d}s$ and $a_n-s\leq a_n-a_{i-1}$ on $s\in[a_{i-1},a_i]$, we find \eqref{gnlc1} by writing \begin{eqnarray*} a_n^p &\leq& p \sum_{i=0}^{n} \bigl(a^{p-1}_i - a^{p-1}_{i-1}\bigr) (a_n-a_{i-1}) \\ &\leq& p \sum_{i=0}^{n} \bigl(a^{p-1}_i - a^{p-1}_{i-1}\bigr) \bigl[c_n+ \bigl(h^{(i)} \cdot x \bigr)_i^n \bigr] \\ &= & pc_n a_{n}^{p-1} + p \sum _{i=0}^{n} \sum_{j=i}^{n-1} \bigl(a^{p-1}_i - a^{p-1}_{i-1}\bigr) h_j^{(i)} (x_{j+1}-x_j) \\ &= & pc_n a_{n}^{p-1} + \sum _{j=0}^{n-1} \Biggl( \sum_{i=0}^{j} p \bigl(a^{p-1}_i - a^{p-1}_{i-1}\bigr) h_j^{(i)} \Biggr) (x_{j+1}-x_j) = pc_n a_{n}^{p-1} + (w\cdot x)_n. \end{eqnarray*} We then obtain (\ref{gnlc2}) from (\ref{gnlc1}) by applying Young's inequality $ab\leq C_\epsilon a^p/p+\epsilon b^q/q$ (where $C_\epsilon^{-1}=p(\epsilon q)^{p-1}$ and $1/p+1/q=1$) with $\epsilon=1/p$, $a=c_n, b=a_{n}^{p-1}$. \end{pf} Finally, from Theorem~\ref{PathDavisThm}, Lemma~\ref{condDavis} and Lemma~\ref{GarsiaNeveu}, we obtain the following discrete-time pathwise BDG inequalities for $p>1$. We recall that, by convention, $x_{-1}=x^*_{-1}=[x]_{-1}=0$ and $0/0=0$, and in particular the integrand $f_n^{(i)}$ is well defined. 
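To indicate where the constant $c_p$ in the next theorem comes from: the first inequality in \eqref{PathBG} is obtained by applying Lemma~\ref{GarsiaNeveu} with $a_n=\sqrt{[x]_n}$, $c_n=2(\sqrt{2}+1) x^*_n$ and $h^{(i)}=-f^{(i)}$, where $f^{(i)}$ is as in the statement below and the hypothesis of the lemma holds by Lemma~\ref{condDavis} and \eqref{1half}; the second inequality follows by taking $a_n=x^*_n$, $c_n=6\sqrt{[x]_n}$ and $h^{(i)}=2f^{(i)}$, using \eqref{2half}. Since $2(\sqrt{2}+1)\leq6$, both resulting constants are dominated by $c_p=6^p(p-1)^{p-1}$.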
\begin{theorem}\label{PathBGThm} Let $x_0, \ldots, x_N$ be real numbers, $c_p:= 6^p(p-1)^{p-1}$ for $p>1$, and define \[ h_n:= \sum_{i=0}^n p^2 \Bigl(\sqrt{[x]_i^{p-1}} -\sqrt {[x]_{i-1}^{p-1}} \Bigr) f^{(i)}_n,\quad\quad g_n:= \sum_{i=0}^n p^2 \bigl(\bigl(x^*_i\bigr)^{p-1} - \bigl(x^*_{i-1}\bigr)^{p-1} \bigr) f^{(i)}_n, \] where \[ f^{(i)}_n:=\frac{x_n-x_{i-1}}{\sqrt{[x]_n-[x]_{i-1} + \max_{i\leq k\leq n}(x_{k}-x_{i-1})^2}}. \] Then \begin{equation} \label{PathBG} \sqrt{[x]_N^{p}} \leq c_p \bigl(x^*_N\bigr)^p - (h\cdot x)_N,\quad\quad \bigl(x^*_N\bigr)^p \leq c_p \sqrt{[x]_N^{p}} + 2(g\cdot x)_N. \end{equation} \end{theorem} We notice that Theorem~\ref{PathBGThm} yields \eqref{BDGIneq}; indeed, given a finite constant $N$ and a martingale $(X_n)_{n=0}^N$, trivially $\sqrt{[X]_N}$ and $X_N^*$ are in $L^p(\P)$ iff $X_n$ is in $L^p(\P)$ for every $n\leq N$, and in this case the adapted integrands $(H_n)_{n=0}^{N-1}$ and $(G_n)_{n=0}^{N-1}$ which we obtain applying Theorem~\ref{PathBGThm} to the paths of $X$ are in $L^q(\P)$ for every $n$ (for $q=p/(p-1)$), thus $H\cdot X$ and $G\cdot X$ are martingales and so \[ \E\bigl[ (H\cdot X)_N\bigr]= 0= \E\bigl[ (G\cdot X)_N\bigr], \] and the Burkholder--Davis--Gundy inequalities for $p>1$ (with $a_p=b_p=6^p(p-1)^{p-1}$) follow from \eqref{PathBG} by taking expectations, completing the proof of Theorem~\ref{BDGThm}. \section*{Acknowledgements} The authors thank Harald Oberhauser for comments on an earlier version of this paper. The first author thanks the Austrian Science Fund for support through project p21209.
\section{Introduction}\label{section: Introduction} The \textit{chiral de Rham complex} (\textit{CDR}) was introduced by Malikov-Schechtman-Vaintrob in \cite{MSV99}. It is a sheaf of vertex superalgebras containing the usual de Rham complex. Gorbounov-Malikov-Schechtman generalized this notion by introducing the sheaf of \textit{chiral differential operators}, and studied the sheaf in a series of papers \cite{GMS00,GMS03,GMS04}. Another construction of the CDR was given by means of formal loop spaces in \cite{KV04} by Kapranov-Vasserot. Moreover, the CDR was studied in relation to elliptic genera and mirror symmetry in \cite{BL02,Bo01,BL00}. Recently, Lian-Linshaw introduced a new equivariant cohomology theory in \cite{LL}, and studied in detail the CDR in the $C^\infty$-setting. The CDR was also investigated in terms of \textit{SUSY vertex algebras} in \cite{BZHS08,EHKZ13,Hel09,HK07,HZ10,HZ11}. Let $G$ be a compact connected Lie group with complexified Lie algebra $\mathfrak{g}$ and let $M$ be a $G$-manifold. Then the algebra $\Omega(M)$ of differential forms on $M$ has a canonical $G$-action. Together with the Lie derivatives and the interior products, this action of $G$ makes $\Omega(M)$ a $G^*$-\textit{algebra}. It is well known that the equivariant cohomology of $M$ is computed as the equivariant cohomology of the $G^*$-algebra $\Omega(M)$ (see \cite{GS99}). As a vertex-algebraic analogue of this equivariant cohomology theory, the \textit{chiral equivariant cohomology} was introduced by Lian-Linshaw in \cite{LL}. This cohomology was defined for $O(\mathfrak{sg})$-\textit{algebras}, a vertex-algebraic analogue of $G^*$-algebras. The key to the construction is the fact that the \textit{semi-infinite Weil complex} $\mathcal{W}(\mathfrak{g})$ introduced by Feigin-Frenkel in \cite{FF91} has an $O(\mathfrak{sg})$-algebra structure. This fact was proved by Lian-Linshaw together with the fact that the space $\mathcal{Q}(M)$ of global sections of the CDR of $M$ has an $O(\mathfrak{sg})$-algebra structure. Later in \cite{LLS1}, Lian-Linshaw-Song introduced the notion of $\mathfrak{sg}[t]$-\textit{modules} as an analogue of $\mathfrak{g}$-differential complexes, that is, complexes with a compatible action of the Lie superalgebra $\mathfrak{sg}=\mathfrak{g}\ltimes_{\mathrm{ad}} \mathfrak{g}$. As in the classical equivariant cohomology theory, the construction of the chiral equivariant cohomology was generalized to the case of $\mathfrak{sg}[t]$-\textit{modules}. Moreover in \cite{LLS1}, Lian-Linshaw-Song also introduced a ``small'' CDR as a subcomplex of the CDR, and pointed out that the space of global sections of the subcomplex is an $\mathfrak{sg}[t]$-module. The small CDR itself is trivial since its vertex superalgebra structure is commutative. However, when one considers the corresponding chiral equivariant cohomology, the resulting vertex superalgebra structure is very complicated in general. Such vertex superalgebras were studied in \cite{LLS1,LLS2}. An interesting example of $\mathfrak{g}$-differential complexes was considered in \cite{Gin99} by Ginzburg. This example comes from the space of multi-vector fields of a Poisson manifold with an action of $\mathfrak{g}$, and the corresponding equivariant cohomology is called the \textit{equivariant Poisson cohomology}. He also pointed out that one can define the same kind of equivariant cohomology for Lie algebroids. In this paper, we construct $\mathfrak{sg}[t]$-modules from Lie algebroids with an action of $\mathfrak{g}$, generalizing the small CDR of Lian-Linshaw-Song.
We then define the chiral equivariant Lie algebroid cohomology. The CDR as well as the small version constructed by Lian-Linshaw on smooth manifolds is actually not a sheaf but a presheaf with a property, called a \textit{weak sheaf} by Lian-Linshaw-Song. For this reason, one has a little ambiguity in the choice of morphisms or the gluing properties. Therefore we introduce the notion of VSA-inductive sheaves, which generalize that of sheaves of vertex superalgebras, and formulate the morphisms and the gluing properties. We construct a VSA-inductive sheaf associated with a Lie algebroid and we obtain vertex-algebraic analogue of the Lie algebroid complex. Moreover we prove that the complex above has an $\mathfrak{sg}[t]$-module structure when the Lie algebroid has an action of $\mathfrak{g}$. This leads us to the definition of the chiral equivariant Lie algebroid cohomology. When the Lie algebroid is a tangent bundle, we recover the CDR of Lian-Linshaw-Song. In the classical equivariant cohomology theory, an important role is played by special complexes called $W^*$-\textit{modules}. They have remarkable properties, which make it easy to compute their equivariant cohomologies (see \cite{GS99}). Motivated by this fact, we introduce the notion of chiral $W^*$-modules and prove that they have some properties analogous to those of $W^*$-modules. Moreover we prove that the complexes obtained from the VSA-inductive sheaves associated with a type of Lie algebroid containing the \textit{cotangent Lie algebroids} of Poisson-Lie groups have chiral $W^*$-module structures. We then compute their chiral equivariant Lie algebroid cohomologies by using the properties of chiral $W^*$-modules mentioned above. The article is organized as follows: in Section 2 we recall some basics of vertex superalgebras and the chiral equivariant cohomology. In Section 3, we introduce the chiral $W^*$-modules and a chiral Cartan model for $\mathfrak{sg[t]}$-modules. We mainly consider this chiral Cartan model when $\mathfrak{g}$ is commutative. The result in this section will be used in the last part of Section 6. In Section 4, we introduce the notion of VSA-inductive sheaves. Then we establish some gluing properties. Moreover we construct VSA-inductive sheaves from presheaves of degree-weight-graded vertex superalgebras with some properties. In Section 5, after recalling the notion of Lie algebroids and the Lie algebroid cohomology, we first construct an important VSA-inductive sheaf on $\mathbb{R}^m$ and its small version, which we denote respectively by $\Omega_\mathrm{ch}(\mathbb{R}^{m|r})$ and $\Omega^{\gamma c}_\mathrm{ch}(\mathbb{R}^{m|r})$. Using the gluing property proved in Section 4, we glue the small ones $\Omega^{\gamma c}_\mathrm{ch}(\mathbb{R}^{m|r})$ into the global VSA-inductive sheaf associated with an arbitrary vector bundle. Next for a Lie algebroid, we construct a differential on the VSA-inductive sheaf associated with the Lie algebroid, using the vertex operators of the bigger one $\Omega_\mathrm{ch}(\mathbb{R}^{m|r})$. Thus we obtain a vertex-algebraic analogue of the Lie algebroid complex. Moreover in Section 6, we equip this complex with an $\mathfrak{sg}[t]$-module structure, when the Lie algebroid has an action of a Lie algebra $\mathfrak{g}$. For this construction, we also need the vertex operators of the bigger VSA-inductive sheaf. Then we introduce the chiral equivariant Lie algebroid cohomology. 
In the last part of Section 6, we compute this cohomology for an important Lie algebroid called a \textit{transformation Lie algebroid}. In particular, we compute that cohomology for the cotangent Lie algebroids associated with \textit{Poisson-Lie groups}. \vspace{10pt} Throughout this paper, $\mathbb{K}$ is the field of real numbers $\mathbb{R}$ or that of complex numbers $\mathbb{C}$, and we will work over $\mathbb{K}$. We assume that a grading on a super vector space is compatible with the super vector space structure. \section{Preliminaries}\label{section: Preliminaries} \subsection{Vertex Superalgebras} We first recall basic definitions and facts concerning vertex superalgebras, which was introduced in \cite{Bor86}. We will follow the formalism and results in \cite{FBZ04,Kac01,LL04}. A \textit{vertex superalgebra} is a quadruple $(V, \mathbf{1},T,Y)$ consisting of a super vector space $V,$ an even vector $\mathbf{1} \in V$, called the vacuum vector, an even linear operator $T : V \to V$, called the translation operator, and an even linear operation $ Y=Y(\, \cdot \, , z) : V \to (\mathrm{End} V)[[z^{\pm1}]], $ taking each $A \in V$ to a field on $V$, called the vertex operator, $$ Y(A,z)= \sum_{n \in \mathbb{Z}} A_{(n)} z^{-n-1}, $$ such that \begin{enumerate}[$\bullet$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{30pt} \setlength{\itemindent}{0pt} \item (vacuum axiom)\\ $Y(\mathbf{1} ,z) = \mathrm{id}_V$;\\ $ Y(A,z)\mathbf{1} \in V[[z]], $ and $ Y(A,z)\mathbf{1} | _{z=0} = A $ \quad for any $A \in V$; \item (translation axiom)\\ $T \mathbf{1} = 0$;\\ $ [T,Y(A,z)] = \frac{\mathit{d}}{\mathit{d}z} Y(A,z) $ \quad for any $A \in V$; \item (locality axiom)\\ For any $A, B \in V,$ $Y(A,z)$ and $Y(B,z)$ are mutually local. \end{enumerate} A vertex superalgebra $(V, \mathbf{1}, T, Y)$ is said to be \textit{$\mathbb{Z}$-graded} when the super vector space $V$ is given a $\mathbb{Z}$-grading $V = \oplus_{n \in \mathbb{Z}} V[n]$ such that $\mathbf{1}$ is a vector of weight $0$, $T$ is a homogeneous linear operator of weight $1$, and if $A \in V[n]$, then the field $Y(A,z)$ is homogeneous of conformal dimension $n$. We refer to such a grading as a \textit{weight-grading} on the vertex superalgebra. Note that $\mathrm{Im}\,Y\subset(\mathrm{End}\,V)[[z^{\pm1}]]$ has a canonical structure of vertex superalgebra and that $Y: V\to \mathrm{Im}\,Y$ is an isomorphism of vertex superalgebras. We will often identify these two vertex superalgebras. For a vertex superalgebra $(V, \mathbf{1}, T, Y)$, the data of $Y$ is equivalent to that of bilinear maps $$ (n): V\times V\to V, \quad (A, B)\mapsto A_{(n)}B. $$ Therefore vertex algebras are also written as $(V, \mathbf{1}, T, (n); n\in \mathbb{Z})$. Moreover the translation operator $T$ and the vertex operator $Y(A, z)$ are often denoted by $\partial$ and by $A(z)$, respectively. A purely even vertex superalgebra is called simply a \textit{vertex algebra}. We use the following notation for the \textit{operator product expansion (OPE)} of the mutually local fields $A(z)$ and $B(z)$: $$ A(z)B(w)\sim \sum_{n\ge 0}C_n(w)(z-w)^{-n-1}, $$ where $C_n(w)$ are some fields. Note that the OPE formula gives the commutation relations among the coefficients of $A(z)$ and $B(w)$. (See \cite{FBZ04} and \cite{Kac01} for details.) 
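More precisely, in a vertex superalgebra the fields $C_n(w)$ appearing in the OPE are $C_n(w)=Y(A_{(n)}B,w)$, and the OPE above is equivalent to the commutator formula
$$
[A_{(m)},B_{(k)}]=\sum_{n\ge0}\binom{m}{n}(A_{(n)}B)_{(m+k-n)}, \quad\quad m, k\in\mathbb{Z},
$$
where $[\ ,\ ]$ denotes the supercommutator. Commutation relations of exactly this form will appear repeatedly below, for example in Lemma~\ref{lem: generalized commutant}.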
The following examples of vertex superalgebras will be used for the construction of some kinds of cohomology later. \begin{example}[affine vertex superalgebras]\label{ex:affine va Let $\mathfrak{g}$ be a Lie superalgebra with a supersymmetric invariant bilinear form $B$. Let $\Hat{\mathfrak{g}}=\mathfrak{g} [t^{\pm1}]\oplus \mathbb{K}K$ be the affine Lie algebra associated with $(\mathfrak{g}, B).$ Set $$ N(\mathfrak{g}, B):=U(\hat{\mathfrak{g}})\otimes_{U(\mathfrak{g}[t]\oplus \mathbb{K}K)}\mathbb{K}_1, $$ where $\mathbb{K}_1$ is the one-dimensional $\mathfrak{g}[t]\oplus \mathbb{K}K$-module on which $\mathfrak{g}[t]$ acts by zero and $K$ by $1.$ This $\Hat{\mathfrak{g}}$-module $N(\mathfrak{g}, B)$ has a $\mathbb{Z}_{\ge 0}$-graded vertex superalgebra structure called the \textit{affine vertex superalgebra} associated with $\mathfrak{g}$ and $B$. Note that the operator $a_{(n)}$ has weight $-n$, where $a_{(n)}$ stands for the operator on $N(\mathfrak{g}, B)$ corresponding to $at^n\in \hat{\mathfrak{g}}$. The Lie superalgebra $\mathfrak{g}$ can be seen as a subspace of $N(\mathfrak{g}, B)$ by the injection $\mathfrak{g}\to N(\mathfrak{g}, B),\ a\mapsto a_{(-1)}\mathbf{1}$. We denote by $O(\mathfrak{g}, B)$ the corresponding vertex superalgebra $\mathrm{Im}(Y)=Y(N(\mathfrak{g}, B))\subset (\mathrm{End}\,N(\mathfrak{g}, B))[[z^{\pm1}]]$. \end{example \begin{example}[$\beta \gamma$-systems]\label{ex:beta_gamma-systems Let $V$ be a finite-dimensional vector space. Let $\mathfrak{h}(V)$ $=(V[t^{\pm1}]\oplus V^*[t^{\pm1}]dt)\oplus \mathbb{K}\mathbf{\tau}$ be the \textit{Heisenberg Lie algebra} associated with $V$. Set $$ \mathcal{S}(V):=U(\mathfrak{h}(V))\otimes_{U(V[t]\oplus V^*[t]dt\oplus \mathbb{K}\mathbf{\tau})}\mathbb{K}_1, $$ where $\mathbb{K}_1$ is the one-dimensional $(V[t]\oplus V^*[t]dt\oplus \mathbb{K}\mathbf{\mathbf{\tau}})$-module in which $V[t]\oplus V^*[t]dt$ acts by zero and $\mathbf{\tau}$ by $1.$ We denote by $\beta^{v}_n,\gamma^{\phi}_n$ the elements $v\otimes t^n, \phi \otimes t^{n-1}dt \in \mathfrak{h}(V)$, respectively. The $\mathfrak{h}(V)$-module $\mathcal{S}(V)$ has a $\mathbb{Z}_{\ge 0}$-graded vertex algebra structure called the $\beta \gamma$-\textit{system} associated with $V$. We sometimes denote $\beta^v_n$ and $\gamma^\phi_n$ by $\beta^v_{(n)}$ and $\gamma^\phi_{(n-1)}$, respectively. \end{example \begin{example}[$bc$-systems]\label{ex:bc-systems Let $V$ be a finite-dimensional vector space. We regard $V[t^{\pm1}]\oplus V^*[t^{\pm1}]dt$ as an odd abelian Lie algebra. Consider the one-dimensional central extension $ \mathfrak{j}(V)=(V[t^{\pm1}]\oplus V^*[t^{\pm1}]dt)\oplus \mathbb{K}\mathbf{\tau} $ of that odd abelian Lie algebra with bracket \begin{multline*} [v_1\otimes f_1+\phi_1 \otimes g_1dt,v_2\otimes f_2+\phi_2 \otimes g_2dt] \\ =(\langle v_1,\phi_2\rangle\mathrm{Res}_{t=0}f_1g_2dt +\langle v_2,\phi_1\rangle\mathrm{Res}_{t=0}f_2g_1dt)\mathbf{\tau}. \end{multline*} Set $$ \mathcal{E}(V):=U(\mathfrak{j}(V))\otimes_{U(V[t]\oplus V^*[t]dt\oplus \mathbb{K}\mathbf{\tau})}\mathbb{K}_1, $$ where $\mathbb{K}_1$ is the one-dimensional $(V[t]\oplus V^*[t]dt\oplus \mathbb{K}\mathbf{\tau})$-module in which $V[t]\oplus V^*[t]dt$ acts by zero and $\mathbf{\tau}$ by $1.$ We denote by $b^{v}_n, c^{\phi}_n$ the elements $v\otimes t^n, \phi \otimes t^{n-1}dt,$ respectively. The $\mathfrak{j}(V)$-module $\mathcal{E}(V)$ has a $\mathbb{Z}_{\ge 0}$-graded vertex superalgebra structure called the $bc$-\textit{system} associated with $V$. 
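For instance, for $v\in V$ and $\phi\in V^*$ the bracket above gives $[b^{v}_m, c^{\phi}_k]=\langle v,\phi\rangle\mathrm{Res}_{t=0}t^{m+k-1}dt\,\mathbf{\tau}=\langle v,\phi\rangle\delta_{m+k,0}\mathbf{\tau}$, so that in $\mathcal{E}(V)$ the operators $b^{v}_m$ and $c^{\phi}_k$ satisfy the usual anticommutation relations of a $bc$-system.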
We sometimes denote $b^v_n$ and $c^\phi_n$ by $b^v_{(n)}$ and $c^\phi_{(n-1)}$, respectively. \end{example \begin{example}[semi-infinite Weil algebras]\label{ex:semi-infinite_Weil_algebras For a vector space $V$, the tensor product vertex superalgebra $$ \mathcal{W}(V):=\mathcal{E}(V)\otimes \mathcal{S}(V), $$ is called the \textit{semi-infinite Weil algebra} associated with $V$ (\cite{FF91}). \end{example We recall some graded structures on vertex superalgebras. A vertex superalgebra $V$ is \textit{degree-graded} if it is given a $\mathbb{Z}$-grading $V=\bigoplus_{p\in\mathbb{Z}}V^p$ such that $ A_{(n)}B\in V^{p+q} $ for all $A\in V^p, B\in V^q$, $n\in\mathbb{Z}$ and $\mathbf{1}\in V^0$. Recall that a $\mathbb{Z}$-grading $V=\bigoplus_{n\in \mathbb{Z}}V[n]$ on a vertex superalgebra $V$ is called a weight-grading if $ A_{(k)}B\in V[n+m-k-1] $ for all $A\in V[n]$, $B\in V[m]$ and $k\in \mathbb{Z}$, and $\mathbf{1}\in V[0]$. A vertex superalgebra $V$ is \textit{degree-weight-graded} if $V$ is both degree and weight-graded and the gradings are compatible, that is, $V=\bigoplus_{p, n\in \mathbb{Z}}V^p[n]$, where $V^p[n]=V^p\cap V[n]$. \begin{example}\label{ex: another grading on N(g, 0)} We can define a degree-weight-grading on the affine vertex superalgeba $N(\mathfrak{g}, B)$ when $\mathfrak{g}$ has two compatible $\mathbb{Z}$-grading and the invariant bilinear form $B$ is $0$. We call the one grading on $\mathfrak{g}$ the weight-grading and the other the degree-grading. Then $N(\mathfrak{g}, 0)$ becomes a degree-weight-graded vertex superalgebra if we give the weight-grading by $ \mathrm{wt}\, a^1_{(n_1)}\dots a^r_{(n_r)}\mathbf{1} :=\sum_{i=1}^r(-n_i+\mathrm{wt}_\mathfrak{g}a^i), $ and the degree-grading by $ \mathrm{deg}\, a^1_{(n_1)}\dots a^r_{(n_r)}\mathbf{1} :=\sum_{i=1}^r\mathrm{deg}_\mathfrak{g}a^i, $ for degree-weight-homogeneous elements $a^1, \dots, a^r \in \mathfrak{g}$ and $n_1, \dots, n_r\in \mathbb{Z}_{<0}$. We call this grading the degree-weight-grading on the vertex superalgebra $N(\mathfrak{g}, 0)$ associated with the grading on $\mathfrak{g}$. \end{example} In the sequel, we will always assume that an action of a degree-weight-graded vertex superalgebra on a degree-weight-graded super vector space is compatible with the gradings. We give some lemmas used in Section \ref{section: Chiral Lie Algebroid Cohomology} and \ref{section: Chiral Equivariant Lie Algebroid Cohomology}. Recall the notion of vertex superalgebra derivation. A \textit{derivation} on a vertex superalgebra $V$ with parity $\bar{i}$ is an endomorphism $d$ on $V$ with parity $\bar{i}$ such that $ [d, Y(A, z)]=Y(d\cdot A, z) $ for any $A\in V$. \begin{lemma}\label{lem: odd derivation is 0 if so on generators} Let $V$ be a vertex superalgebra generated by a subset $S\subset V$. Let $D$ be an odd derivation on the vertex superalgebra $V$ such that $D^2|_S=0$. Then $D^2=0$ hold on $V$. \end{lemma} \begin{proof} Since the operator $[D,D]=2D^2$ is also a derivation, our assertion holds. \end{proof} \begin{lemma}\label{lem: sufficient condition for B-linearilty} Let $f: V\to W$ be a morphism of vertex superalgebras. Let $N$ be a non-negative integer. Suppose $V$ is generated by a subset $S\subset V$. Let $(A_{(n)})_{0\le n \le N}$ and $(B_{(n)})_{0\le n \le N}$ be linear maps on $V$ and $W$, respectively. 
Suppose the following hold: \begin{gather}\label{eq: B-commutation relation} [A_{(n)},v_{(k)}]=\sum_{i\ge0}\binom{n}{i}(A_{(i)}v)_{(n+k-i)}, \\ \label{eq: B'-commutation relation} [B_{(n)},f(v)_{(k)}]=\sum_{i\ge0}\binom{n}{i}(B_{(i)}f(v))_{(n+k-i)}, \end{gather} for all $v\in S$, $0\le n\le N$, and $k\in\mathbb{Z}$ Then if $f\circ A_{(n)}=B_{(n)}\circ f$ on $S$ for all $0\le n\le N$, then $f\circ A_{(n)}=B_{(n)}\circ f$ holds on $V$ for any $0\le n\le N$. \end{lemma} \begin{proof} It suffices to show that $(f\circ A_{(n)})(v)=(B_{(n)}\circ f)(v)$ for $v\in V$ of the form $s^1_{(n_1)}\cdots s^r_{(n_r)}\mathbf{1}$ with $s^1, \dots, s^r\in S$ and $n_1, \dots, n_r\in\mathbb{Z}$. The assertion is proved by induction on $r$. Note that $A_{(n)}\mathbf{1}=0$ and $B_{(n)}\mathbf{1}=0$ are proved by induction on $n\in\mathbb{N}$ with \eqref{eq: B-commutation relation} and \eqref{eq: B'-commutation relation}. \end{proof} \begin{lemma}\label{lem: generalized commutant} Let $V$ be a vertex superalgebra. Let $\mathcal{A}=(A_{(m)}^\lambda)_{m\ge0, \lambda\in\Lambda}$ be a family of $\mathbb{Z}/2\mathbb{Z}$-homogeneous linear maps on $V$ such that $$ [A_{(m)}^\lambda, v_{(k)}]=\sum_{i\ge0}\binom{m}{i}(A_{(i)}^\lambda v)_{(m+k-i)}, $$ for all $m\ge0$, $\lambda\in\Lambda$, $k\in\mathbb{Z}$ and $v\in V$. Then $V^\mathcal{A}:=\{v\in V \bigm| A_{(m)}^\lambda v=0\ \textrm{for all}\ m\ge0\ \textrm{and}\ \lambda\in\Lambda\}$ is a subalgebra of $V$. \end{lemma} \begin{proof} The equality $A_{(m)}^\lambda \mathbf{1}=0$ is proved by induction on $n$. The subspace $V^\mathcal{A}$ is closed under the $n$-th product due to the assumption. \end{proof} \begin{lemma}\label{lem: tensor commutant} Let $V$ and $W$ be vertex superalgebras. Let $\mathcal{A}=(A_{(m)}^\lambda)_{m\ge0, \lambda\in\Lambda}$ be a family of $\mathbb{Z}/2\mathbb{Z}$-homogeneous linear maps on $V$ and $\mathcal{B}=(B_{(m)}^\lambda)_{m\ge0, \lambda\in\Lambda}$ a family of $\mathbb{Z}/2\mathbb{Z}$-homogeneous linear maps on $W$ such that \begin{gather*} [A_{(m)}^\lambda, v_{(k)}]=\sum_{i\ge0}\binom{m}{i}(A_{(i)}^\lambda v)_{(m+k-i)}, \\ [B_{(m)}^\lambda, w_{(k)}]=\sum_{i\ge0}\binom{m}{i}(B_{(i)}^\lambda w)_{(m+k-i)}, \end{gather*} for all $m\ge0$, $\lambda\in\Lambda$, $k\in\mathbb{Z}$, $v\in V$ and $w\in W$. Then the family of $\mathbb{Z}/2\mathbb{Z}$-homogeneous linear maps on $V\otimes W$, $(A_{(m)}^\lambda\otimes \mathrm{id}+\mathrm{id}\otimes B_{(m)}^\lambda)_{m\ge0, \lambda\in\Lambda}$, satisfies the relation $$ [A_{(m)}^\lambda\otimes \mathrm{id}+\mathrm{id}\otimes B_{(m)}^\lambda, x_{(k)}]=\sum_{i\ge0}\binom{m}{i}\bigl((A_{(m)}^\lambda\otimes \mathrm{id}+\mathrm{id}\otimes B_{(m)}^\lambda) x\bigr)_{(m+k-i)}, $$ for any $m\ge0$, $\lambda\in\Lambda$, $k\in\mathbb{Z}$, $x\in V\otimes W$. \end{lemma} \begin{proof} The assertion is proved by direct computations. \end{proof} \subsection{Chiral Equivariant Cohomology}\label{subsection: Chiral Equivariant Cohomology} We next recall the definition of the chiral equivariant cohomology. We refer the reader to \cite{LL,LLS1} and partly to \cite{FF91}, for more details. Let $\mathfrak{g}$ be a Lie algebra. The Lie superalgebra $\mathfrak{sg}$ is defined by $$ \mathfrak{sg}:=\mathfrak{g}\ltimes\mathfrak{g}_{-1}, $$ where $\mathfrak{g}_{-1}$ is the adjoint representation of $\mathfrak{g}$. Let $O(\mathfrak{sg}, 0)$ be the affine vertex superalgebra associated with the Lie superalgebra $\mathfrak{sg}$ with an invariant bilinear form $0$. 
The Lie superalgebra derivation $$ \mathfrak{sg} \to \mathfrak{sg}, \quad (\xi, \eta)\mapsto (\eta, 0), $$ induces a vertex superalgebra derivation $$ \mathbf{d}: O(\mathfrak{sg}, 0)\to O(\mathfrak{sg}, 0),\quad (\xi, \eta)(z)\mapsto (\eta, 0)(z). $$ This makes $O(\mathfrak{sg}):=(O(\mathfrak{sg},0), \mathbf{d})$ a differential degree-weight-graded vertex algebra, that is, a degree-weight-graded vertex superalgebra with a square-zero odd vertex superalgera derivation of degree $1$, where the gradings are given by $\mathrm{deg} ((\xi, 0)(z))=0, \mathrm{deg} ((0, \eta)(z))=-1$ and $\mathrm{wt} ((\xi, \eta)(z))=1$. Recall the notion of $O(\mathfrak{sg})$-algebras from \cite{LL}. An $O(\mathfrak{sg})$-\textit{algebra} is a differential degree-weight-graded vertex superalgebra $(\mathcal{A}, d)$ equipped with a morphism of differential degree-weight-graded vertex superalgebras $\Phi_\mathcal{A}: O(\mathfrak{sg})\to (\mathcal{A}, d)$. Next we recall from \cite{LLS1} the notion of $\mathfrak{sg}[t]$-modules containing that of $O(\mathfrak{sg})$-algebras. Recall the Lie superalgebra $\mathfrak{sg}[t]=\mathfrak{sg}\otimes\mathbb{K}[t]$ has a differential $d$ defined by $$ d: \mathfrak{sg}[t] \to \mathfrak{sg}[t], \quad (\xi, \eta)t^n\mapsto (\eta, 0)t^n. $$ An $\mathfrak{sg}[t]$-\textit{module} is a degree-weight-graded complex $(\mathcal{A}, d_{\mathcal{A}})$ equipped with a Lie superalgebra morphism $$ \rho: \mathfrak{sg}[t]\to \mathrm{End} (\mathcal{A}), \quad (\xi, \eta)t^n\mapsto \rho((\xi, \eta)t^n)=L_{\xi, (n)}+\iota_{\eta, (n)}, $$ such that for all $x\in \mathfrak{sg}[t]$ we have \begin{enumerate}[$\bullet$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{30pt} \setlength{\itemindent}{0pt} \item $\rho(dx)=[d_\mathcal{A},\rho(x)]$; \item $\rho(x)$ has degree $0$ whenever $x$ is even in $\mathfrak{sg}[t]$, and degree $-1$ whenever $x$ is odd, and has weight $-n$ if $x\in \mathfrak{sg}t^n$. \end{enumerate} In this paper, we assume that the differential has odd parity and that the action of $\mathfrak{sg}[t]$ on $\mathcal{A}$ with the discrete topology is continuous, that is, for any $v\in\mathcal{A}$, $t^k\mathfrak{sg}[t]\cdot v=0$ for some sufficiently large $k\in\mathbb{N}$. Moreover we call an $\mathfrak{sg}[t]$-module a \textit{ differential} $\mathfrak{sg}[t]$-\textit{module}, emphasizing its differential. For a differential $\mathfrak{sg}[t]$-module $(\mathcal{A}, d_\mathcal{A})$, we often write $L_{\xi, (n)}$ and $\iota_{\xi, (n)}$ as $L_{\xi, (n)}^\mathcal{A}$ and $\iota_{\xi, (n)}^\mathcal{A}$, respectively. We will recall the notion of the chiral equivariant cohomology. Consider the semi-infinite Weil algebra $\mathcal{W}=\mathcal{W}(\mathfrak{g})$ associated with a finite-dimensional Lie algebra $\mathfrak{g}$ (see Example \ref{ex:semi-infinite_Weil_algebras}). Let $(\xi_i)_i$ be a basis of $\mathfrak{g}$ with the dual basis $(\xi^*_i)_i$ for $\mathfrak{g}^*$. Recall that the vertex superalgebra $\mathcal{W}(\mathfrak{g})$ is degree-weight-graded. The weight and degree-grading come from the diagonalizable operator ${\omega_{\mathcal{W}}}_{(1)}=(\omega_{\mathcal{E}}+\omega_{\mathcal{S}})_{(1)}$ and the operator ${j_{bc}}_{(0)}+2{j_{\beta\gamma}}_{(0)}$, respectively. 
Here $\omega_{\mathcal{S}}:=\sum_{i=1}^{\dim \mathfrak{g}}\beta^{x^i}_{-1}\partial\gamma^{x^{*}_i}_{0}\mathbf{1}$, $\omega_{\mathcal{E}}:=-\sum_{i=1}^{\dim \mathfrak{g}}b^{x^i}_{-1}\partial c^{x^{*}_i}_0\mathbf{1}$ and $j_{bc}:=-\sum_{i=1}^{\dim \mathfrak{g}}b^{\xi_i}_{-1}c^{\xi^*_i}_0\mathbf{1}$, $j_{\beta\gamma}:=\sum_{i=1}^{\dim \mathfrak{g}}\beta^{\xi_i}_{-1}\gamma^{\xi^*_i}_0\mathbf{1}$. Set \begin{gather} D:=J+K, \\ \quad J:=-\sum_{i, j=1}^{\dim \mathfrak{g}}\beta^{[\xi_i,\xi_j]}_{-1}\gamma^{\xi^*_j}_0 c^{\xi^*_i}_0\mathbf{1}-\frac{1}{2}\sum_{i, j=1}^{\dim \mathfrak{g}}c^{\xi^*_i}_0c^{\xi^*_j}_0b^{[\xi_i,\xi_j]}_{-1}\mathbf{1}, \quad K:=\sum_{i=1}^{\dim \mathfrak{g}}\gamma^{\xi^*_i}_0b^{\xi_i}_{-1}\mathbf{1}. \end{gather} Then the operator $D_{(0)}$ is a differential on $\mathcal{W}$. The differential degree-weight-graded vertex superalgebra $(\mathcal{W}^\bullet, d_\mathcal{W})$ is called the \textit{semi-infinite Weil complex}, where $d_\mathcal{W}=D_{(0)}$ (\cite{FF91}). Set \begin{gather} \Theta_\mathcal{W}^\xi:=\Theta_\mathcal{E}^\xi+\Theta_\mathcal{S}^\xi, \quad\\ \Theta_\mathcal{E}^\xi:=\sum_{i=1}^{\dim \mathfrak{g}}b^{[\xi,\xi_i]}_{-1}c^{\xi^*_i}_0\mathbf{1}, \quad \Theta_\mathcal{S}^\xi:=-\sum_{i=1}^{\dim \mathfrak{g}}\beta^{[\xi,\xi_i]}_{-1}\gamma^{\xi^*_i}_0\mathbf{1}, \end{gather} for $\xi\in \mathfrak{g}$. The following theorem is proved in \cite[theorem 5.11]{LL04}. \begin{theorem}[Lian-Linshaw] The vertex superalgebra morphism $O(\mathfrak{sg})\to(\mathcal{W}(\mathfrak{g}), D_{(0)}),$ $(\xi, \eta)(z)$ $\mapsto\Theta_{\mathcal{W}}^\xi(z)+b^\eta(z)$ defines an $O(\mathfrak{sg})$-algebra structure on $\mathcal{W}(\mathfrak{g})$. \end{theorem} We will use the following relations proved in \cite[Lemma 5.12]{LL}. \begin{lemma}[Lian-Linshaw] Let $\eta, \xi$ be elements of $\mathfrak{g}$ and $\xi^*$ an element of $\mathfrak{g}^*$. Then the following hold: \begin{gather} \Theta_\mathcal{W}^\xi(z)c^{\xi^*}(w)\sim c^{\mathrm{ad}^*\xi\cdot\xi^*}(w)(z-w)^{-1}, \\ D_{(0)}c^{\xi^*}_0\mathbf{1}=-\frac{1}{2}\sum_{i=1}^{\dim{\mathfrak{g}}}c^{\mathrm{ad}^*\xi_i\cdot\xi^*}_0c^{\xi^*_i}_0\mathbf{1}+\gamma^{\xi^*}_0\mathbf{1}, \\ D_{(0)}\gamma^{\xi^*}_0\mathbf{1}=\sum_{i=1}^{\dim{\mathfrak{g}}}\gamma^{\mathrm{ad}^*\xi_i\cdot\xi^*}_0c^{\xi^*_i}_0\mathbf{1}, \end{gather} where the first formula stands for the OPE of the fields $\Theta_\mathcal{W}^\xi(z)$ and $c^{\xi^*}(z)$, and $\mathrm{ad}^*$ is the coadjoint action of $\mathfrak{g}$ on $\mathfrak{g}^*$. \end{lemma} Recall the notion of the chiral horizontal, invariant and basic subspaces from \cite{LL,LLS1}. Let $(\mathcal{A}, d)$ be a differential $\mathfrak{sg}[t]$-module. The \textit{chiral horizontal, invariant} and \textit{basic subspaces} of $\mathcal{A}$ are respectively \begin{align*} \mathcal{A}_{hor}&:=\bigl\{a\in \mathcal{A} \bigm| \iota_{\eta, (n)}a=0 \ \text{for all}\ \eta\in\mathfrak{g}, n\ge0 \bigr\}, \\ \mathcal{A}_{inv}&:=\bigl\{a\in \mathcal{A} \bigm| L_{\xi, (n)}a=0\ \text{for all}\ \xi\in\mathfrak{g}, n\ge0 \bigr\}, \ \text{and} \\ \mathcal{A}_{bas}&:=\mathcal{A}_{hor}\cap\mathcal{A}_{inv}. \end{align*} Note that if $(\mathcal{A}, d)$ is a differential $\mathfrak{sg}[t]$-module then the subspaces $\mathcal{A}_{hor}$ and $\mathcal{A}_{bas}$ are subcomplexes of $(\mathcal{A}, d)$. We then recall the definitions of the chiral basic and equivariant cohomologies. Let $G$ be a compact connected Lie group. Set $\mathfrak{g}=\mathrm{Lie}(G)^{\mathbb{K}}$. Let $(\mathcal{A}, d)$ be a differential $\mathfrak{sg}[t]$-module. 
Its \textit{chiral basic cohomology} $\mathbf{H}_{bas}{(\mathcal{A})}$ is the cohomology of the complex $(\mathcal{A}_{bas}, d|_{\mathcal{A}_{bas}})$. The \textit{chiral equivariant cohomology} $\mathbf{H}_{G}{(\mathcal{A})}$ of $(\mathcal{A}, d)$ is the chiral basic cohomology of the tensor product $ (\mathcal{W}(\mathfrak{g})\otimes \mathcal{A}, d_\mathcal{W}\otimes1+1\otimes d_\mathcal{A}). $ \section{Chiral $W^*$-Modules}\label{section: Chiral $W^*$-Modules} \subsection{Definition of Chiral $W^*$-Modules and the Chiral Cartan Model} Let $\mathfrak{g}$ be a finite-dimensional Lie algebra. We denote by $\langle c, \gamma \rangle$ or $\mathcal{W}'$ the subalgebra of the semi-infinite Weil algebra $\mathcal{W}=\mathcal{W}(\mathfrak{g})$ generated by $c^{\xi^*}_0\mathbf{1}$, $\gamma^{\xi^*}_0\mathbf{1}$ with $\xi^*\in\mathfrak{g}^*$. Note that $\mathcal{W}'$ is preserved by the differential $d_\mathcal{W}$. Therefore we have a subcomplex $(\mathcal{W}', d_{\mathcal{W}'})$, where $d_{\mathcal{W}'}:=d_{\mathcal{W}}|_{\mathcal{W}'}$. Note that $(\mathcal{W}', d_{\mathcal{W}'})$ is acyclic. This follows from the same argument as that for the proof of the acyclicity of $(\mathcal{W}, d_\mathcal{W})$ in \cite[Proposition 5]{Akm93}. (See Section \ref{subsection: Chiral Equivariant Cohomology} for the definition of the semi-infinite Weil complex $(\mathcal{W}(\mathfrak{g}), d_\mathcal{W})$.) We denote by $\delta(z-w)_-$ the formal distribution $\sum_{n\ge 0}z^{-n-1}w^n$. \begin{definition}\label{df: chiral W^*-modules} A \textbf{chiral} $W^*$\textbf{-module} (with respect to $\mathfrak{g}$) is a differential $\mathfrak{sg}[t]$-module $(\mathcal{A}, d_{\mathcal{A}})$ given a module structure over the vertex superalgebra $\langle c, \gamma \rangle$ $$ Y^\mathcal{A}(\ , z): \langle c, \gamma \rangle \to (\mathrm{End} \mathcal{A})[[z^{\pm1}]], $$ such that \begin{enumerate}[$(1)$\ ] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{35pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \item $[d_\mathcal{A}, Y^\mathcal{A}(x, z)]= Y^\mathcal{A}(d_{\mathcal{W}'}x, z)$, \quad for all $x \in \langle c, \gamma \rangle$, \item $[L_\xi^\mathcal{A}(z)_{-},Y^\mathcal{A}(c^{\xi^*}_0\mathbf{1}, w)]=Y^\mathcal{A}(c^{\mathrm{ad}^*\xi \cdot\xi^*}_0\mathbf{1}, w)\delta(z-w)_{-}$, for all elements $\xi$ of $\mathfrak{g}$ and all elements $\xi^*$ of $\mathfrak{g}^*$, \item $[\iota_\xi^\mathcal{A}(z)_{-},Y^\mathcal{A}(c^{\xi^*}_0\mathbf{1}, w)]=\langle {\xi^*}, \xi \rangle \delta(z-w)_{-}$, for all $\xi\in \mathfrak{g}$ and all ${\xi^*} \in \mathfrak{g}^*$, \end{enumerate} where $L_{\xi}^\mathcal{A}(z)_{-}:=\sum_{n\ge0}L_{\xi, (n)}^\mathcal{A}z^{-n-1}$ and $\iota_\xi^\mathcal{A}(z)_{-}:=\sum_{n\ge0}\iota_{\xi, (n)}^\mathcal{A}z^{-n-1}$ for an element $\xi$ of $\mathfrak{g}$. \end{definition} For a $\langle c, \gamma \rangle$-module $(\mathcal{A}, Y^\mathcal{A})$, we often use the following notation: \begin{align*} Y^\mathcal{A}(c^{\xi^*}_0\mathbf{1}, z)&=c^{\xi^*, \mathcal{A}}(z)=\sum_{n\in\mathbb{Z}}c^{\xi^*, \mathcal{A}}_{(n)}z^{-n-1}, \\ Y^\mathcal{A}(\gamma^{\xi^*}_0\mathbf{1}, z)&=\gamma^{\xi^*, \mathcal{A}}(z)=\sum_{n\in\mathbb{Z}}\gamma^{\xi^*, \mathcal{A}}_{(n)}z^{-n-1}. \end{align*} for an element $\xi^*$ of $\mathfrak{g}^*$. 
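A basic example is the semi-infinite Weil complex itself: taking $Y^{\mathcal{W}}$ to be the restriction to $\langle c, \gamma \rangle$ of the vertex operator of $\mathcal{W}(\mathfrak{g})$, one checks that condition (1) holds since $d_{\mathcal{W}}=D_{(0)}$ is a derivation preserving $\mathcal{W}'$, condition (2) follows from the OPE $\Theta_\mathcal{W}^\xi(z)c^{\xi^*}(w)\sim c^{\mathrm{ad}^*\xi\cdot\xi^*}(w)(z-w)^{-1}$ recalled in the lemma of Lian-Linshaw above, and condition (3) follows from the defining commutation relations of the $bc$-system $\mathcal{E}(\mathfrak{g})$. Hence $(\mathcal{W}(\mathfrak{g}), d_{\mathcal{W}}, Y^{\mathcal{W}})$ is a chiral $W^*$-module.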
Note that the formula (2) in the preceding definition is equivalent to the commutation relations $[L_{\xi, (m)}^\mathcal{A},c_{(n)}^{\xi^*, \mathcal{A}}]=c_{(m+n)}^{\mathrm{ad}^*\xi\cdot\xi^*, \mathcal{A}}$ for all $m\in\mathbb{Z}_{\ge0}$ and $n\in\mathbb{Z}$. Similarly the formula (3) is equivalent to the relations $[\iota_{\xi, (m)}^\mathcal{A},c_{(n)}^{\xi^*, \mathcal{A}}]=\langle {\xi^*}, \xi \rangle\delta_{m+n, -1}$ for all $m\in\mathbb{Z}_{\ge0}$ and $ n\in\mathbb{Z}$. Let $(\xi _i)_i$ be a basis of $\mathfrak{g}$ and $(\xi^*_i)_i$ the dual basis for $\mathfrak{g}^*$. Let $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ be a chiral $W^*$-module and $(\mathcal{B}, d_\mathcal{B})$ a differential $\mathfrak{sg}[t]$-module. Set $$ \Phi=\Phi_{\mathcal{A}, \mathcal{B}}:=\mathrm{exp}(\phi(0)_{\ge 0}) \in GL(\mathcal{A}\otimes\mathcal{B}), $$ where $\phi(0)_{\ge 0}:= \sum_{i=1}^{\dim \mathfrak{g}}\sum_{n\ge 0}c^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes\iota^\mathcal{B}_{\xi_i, (n)}$. Set \begin{gather} C(\mathcal{A}; \mathcal{B}):=\Phi((\mathcal{A}\otimes\mathcal{B})_{bas}), \\ d=d_{\mathcal{A}, \mathcal{B}}:=\Phi\circ(d_\mathcal{A}\otimes1 +1 \otimes d_\mathcal{B})\circ\Phi^{-1}|_{C(\mathcal{A}; \mathcal{B})}. \end{gather} Note that $C(\mathcal{A}; \mathcal{B})$ is degree-weight-graded as a subspace of the degree-weight-graded super vector space $\mathcal{A}\otimes\mathcal{B}$ since $\Phi$ preserves the degree and weight-gradings. Then we have the following. \begin{lemma}\label{lem: CHIRAL CARTAN MODEL} Let $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ be a chiral $W^*$-module and $(\mathcal{B}, d_\mathcal{B})$ a differential $\mathfrak{sg}[t]$-module. Then the map $\Phi=\Phi_{\mathcal{A}, \mathcal{B}}$ restricted to the chiral basic subspace is an isomorphism of degree-weight-graded complexes: $$ \Phi: ((\mathcal{A}\otimes\mathcal{B})_{bas}, (d_\mathcal{A}\otimes1 +1\otimes d_\mathcal{B})|_{(\mathcal{A}\otimes\mathcal{B})_{bas}})\to (C(\mathcal{A}; \mathcal{B}), d_{\mathcal{A}, \mathcal{B}}). $$ \end{lemma} We call the complex $(C(\mathcal{A}; \mathcal{B}), d_{\mathcal{A}, \mathcal{B}})$ the \textbf{chiral Cartan model} for the differential $\mathfrak{sg}[t]$-module $(\mathcal{B}, d_\mathcal{B})$ with respect to the chiral $W^*$-module $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$. The following proposition is a variation of \cite[Theorem 4.6]{LL}. \begin{proposition}\label{prop: chiral Cartan model for chiral W^*-modules} Let $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ be a chiral $W^*$-module and $(\mathcal{B}, d_\mathcal{B})$ a differential $\mathfrak{sg}[t]$-module. 
Then the following equalities hold in $\mathrm{End} (\mathcal{A}\otimes\mathcal{B})$: \begin{multline}\label{eq: the transformation of the differential} \Phi\circ(d_\mathcal{A}\otimes 1+ 1\otimes d_\mathcal{B})\circ \Phi^{-1}\\ =d_\mathcal{A}\otimes 1+ 1\otimes d_\mathcal{B} - \sum_{i=1}^{\dim \mathfrak{g}}\sum_{n\ge0}\gamma^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes\iota^\mathcal{B}_{\xi_i, (n)}+\sum_{i=1}^{\dim \mathfrak{g}}\sum_{n\ge0}c^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes L^\mathcal{B}_{\xi_i, (n)}\\ +\sum_{i, j=1}^{\dim \mathfrak{g}}\sum_{m,n\ge0}c_{(n)}^{\xi^*_i, \mathcal{A}}c_{(-n-m-2)}^{\xi^*_j, \mathcal{A}}\otimes\iota_{[\xi_i,\xi_j], (m)}^\mathcal{B}, \end{multline} and \begin{multline}\label{eq: the transformation of L} \Phi\circ(L^\mathcal{A}_{\xi, (n)}\otimes 1+1\otimes L^\mathcal{B}_{\xi, (n)})\circ\Phi^{-1} \\ \quad =L^\mathcal{A}_{\xi, (n)}\otimes 1+ 1\otimes L^\mathcal{B}_{\xi, (n)} +\sum_{i=1}^{\dim \mathfrak{g}}\sum_{0\le k<n}c^{\xi^*_i, \mathcal{A}}_{(n-k-1)}\otimes \iota^\mathcal{B}_{[\xi,\xi_i], (k)}, \end{multline} \begin{equation}\label{eq: the transformation of iota} \Phi \circ (\iota^\mathcal{A}_{\xi, (n)}\otimes 1 + 1\otimes \iota^\mathcal{B}_{\xi, (n)})\circ \Phi^{-1}=\iota^\mathcal{A}_{\xi, (n)}\otimes 1, \end{equation} for all $n \ge 0$, $\xi\in \mathfrak{g}$. \end{proposition} \begin{proof} The assertion is proved by direct computations of $\mathrm{ad}(\phi(0)_{\ge0})^l$, where $\mathrm{ad}(\phi(0)_{\ge0})=[\phi(0)_{\ge0}, \ \ ]$. Note that we have $\mathrm{ad}(\phi(0)_{\ge0})^3(d_\mathcal{A}\otimes 1+ 1\otimes d_\mathcal{B})=0$, $\mathrm{ad}(\phi(0)_{\ge0})^2 (L^\mathcal{A}_{\xi, (n)}\otimes 1+1\otimes L^\mathcal{B}_{\xi, (n)})=0$ and $(\mathrm{ad}(\phi(0)_{\ge0})^2(\iota^\mathcal{A}_{\xi, (n)}\otimes 1 + 1\otimes \iota^\mathcal{B}_{\xi, (n)})=0$. \end{proof} By Proposition \ref{prop: chiral Cartan model for chiral W^*-modules}, we have $ C(\mathcal{A}; \mathcal{B})=(\mathcal{A}_{hor}\otimes\mathcal{B})^{\Phi\mathfrak{g}[t]\Phi^{-1}}, $ where the right-hand side stands for the invariant subspace under the modified action of $\mathfrak{g}[t]$: $$ \mathfrak{g}[t]\to \mathrm{End}(\mathcal{A}\otimes\mathcal{B})\stackrel{\mathrm{Ad}(\Phi)}{\longrightarrow} \mathrm{End}(\mathcal{A}\otimes\mathcal{B}). $$ \subsection{Commutative Cases} Consider the case when the Lie group $G=T$ is commutative. Set $\mathfrak{t}=\mathrm{Lie}(T)^\mathbb{K}$. The \textit{small chiral Cartan model} for $O(\mathfrak{st})$-algebras was introduced in \cite{LL}. By Lemma \ref{lem: CHIRAL CARTAN MODEL}, we can also define the small chiral Cartan model for any differential $\mathfrak{st}[t]$-module. Note that the action of $\mathfrak{t}[t]$ on $\mathcal{W}(\mathfrak{t})$ is trivial since $\mathfrak{t}$ is commutative. Let $(\mathcal{A}, d_\mathcal{A})$ be a differential $\mathfrak{st}[t]$-module. Set $$ \mathcal{C}(\mathcal{A}):=\langle\gamma\rangle\otimes\mathcal{A}_{inv}\subset C(\mathcal{W}(\mathcal{\mathfrak{t}}); \mathcal{A}), $$ where we denote by $\langle \gamma \rangle$ the subalgebra of $\mathcal{W}'$ generated by $\gamma^{\xi^*}_0\mathbf{1}$ with $\xi^*\in\mathfrak{g}$. Since $\mathfrak{t}$ is commutative, $d_\mathcal{W}|_{\langle \gamma \rangle}=0$ and $[\iota^{\mathcal{A}}_\xi(z)_-,L^{\mathcal{A}}_\eta(w)_- ]=0$ for any $\xi, \eta\in\mathfrak{t}$. Therefore it follows that $\mathcal{C}(\mathcal{A})$ is preserved by the differential $d_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}$. 
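Concretely, apply \eqref{eq: the transformation of the differential} with $\mathcal{W}(\mathfrak{t})$ as the chiral $W^*$-module: since $\mathfrak{t}$ is commutative the last term vanishes, $d_\mathcal{W}|_{\langle \gamma \rangle}=0$, and the operators $L^{\mathcal{A}}_{\xi_i, (n)}$ with $n\ge0$ act by zero on $\mathcal{A}_{inv}$, so on $\mathcal{C}(\mathcal{A})=\langle\gamma\rangle\otimes\mathcal{A}_{inv}$ the differential takes the chiral Cartan form
$$
d_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}|_{\mathcal{C}(\mathcal{A})}
=1\otimes d_{\mathcal{A}}-\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}\gamma^{\xi^*_i, \mathcal{W}}_{(-n-1)}\otimes\iota^{\mathcal{A}}_{\xi_i, (n)},
$$
where $(\xi_i)_i$ is a basis of $\mathfrak{t}$ with dual basis $(\xi^*_i)_i$.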
We can prove the following lemma by the same argument as in \cite[Theorem 6.4]{LL}, where the case when $\mathcal{A}$ is an $O(\mathfrak{st})$-algebra is considered. \begin{proposition}\label{prop: small Cartan model} The inclusion $$ (\mathcal{C}(\mathcal{A}), d_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}|_{\mathcal{C}(\mathcal{A})})\to(C(\mathcal{W}(\mathcal{\mathfrak{t}}); \mathcal{A}), d_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}), $$ is a quasi-isomorphism. \end{proposition} We call $(\mathcal{C}(\mathcal{A}), d_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}|_{\mathcal{C}(\mathcal{A})})$ the \textbf{small chiral Cartan model} for the differential $\mathfrak{st}[t]$-module $(\mathcal{A}, d_\mathcal{A})$. The following lemma is proved by an argument similar to that in \cite[Lemma 2.6]{LLS1}, where the case when $\mathcal{A}$ is contained in an $O(\mathfrak{st})$-algebra is considered. \begin{lemma}\label{lem: small Cartan model} $$ \Phi_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}^{-1}(\mathcal{C}(\mathcal{A}))=(\langle c, \gamma\rangle\otimes\mathcal{A}_{inv})_{hor}. $$ \end{lemma} Let $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ be a chiral $W^*$-module. Set $$ C'(\mathcal{A}, \mathcal{W}(\mathfrak{t})):=(\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}\circ\tau\circ\Phi_{\mathcal{W}(\mathfrak{t}), \mathcal{A}}^{-1})(\mathcal{C}(\mathcal{A})) \subset C(\mathcal{A}, \mathcal{W}(\mathfrak{t})), $$ where $\tau: \mathcal{W}(\mathfrak{t})\otimes\mathcal{A}\to\mathcal{A}\otimes\mathcal{W}(\mathfrak{t})$ is the switching map. Then the following proposition follows from Proposition \ref{prop: small Cartan model}. \begin{proposition}\label{prop: small Cartan model for chiral W^*-modules} There exists a canonical isomorphism $$ H(C'(\mathcal{A}; \mathcal{W}(\mathfrak{t})), d'_{\mathcal{A}, \mathcal{W}(\mathfrak{t})})\cong \mathbf{H}_{T}{\mathcal{(A)}}, $$ where $d'_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}:=d_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}|_{C'(\mathcal{A}, \mathcal{W}(\mathfrak{t}))}$. \end{proposition} The following lemma leads us to the next important proposition. \begin{lemma} $$ C'(\mathcal{A}, \mathcal{W}(\mathfrak{t}))=\mathcal{A}_{bas}\otimes\langle c, \gamma\rangle. $$ \end{lemma} \begin{proof} The operators $c^{\xi^*, \mathcal{A}}_{(n)}$ and $L_{\xi, (k)}^\mathcal{A}$ commute with each other since $\mathfrak{t}$ is commutative. Therefore the operators $c^{\xi^*, \mathcal{A}}_{(n)}$ preserve the subspace $\mathcal{A}^{\mathfrak{t}[t]}$. Thus we have $\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}(\mathcal{A}^{\mathfrak{t}[t]}\otimes \langle c, \gamma\rangle)$ $\subset \mathcal{A}^{\mathfrak{t}[t]}\otimes \langle c, \gamma\rangle$. Therefore the subspace $$ \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}\bigl(\big( \mathcal{A}^{\mathfrak{t}[t]}\otimes \langle c, \gamma\rangle\big)_{hor}\bigr)=\Bigl(\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}\bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes \langle c, \gamma\rangle\bigr)\Bigr)^{\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})} \mathfrak{t}_{-1}[t] \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}}, $$ is contained in $\bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes\langle c, \gamma \rangle \bigr)^{\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})} \mathfrak{t}_{-1}[t] \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}}$. 
By the formula \eqref{eq: the transformation of iota}, we have \begin{equation}\label{eq: rewrite A_bas c gamma} \bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes\langle c, \gamma \rangle \bigr)^{\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})} \mathfrak{t}_{-1}[t] \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}}=\mathcal{A}_{bas}\otimes \langle c, \gamma \rangle. \end{equation} Thus we have \begin{equation}\label{eq: C'(A; W) is contained in A_bas c gamma} \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}\bigl(\big( \mathcal{A}^{\mathfrak{t}[t]}\otimes \langle c, \gamma\rangle\big)_{hor}\bigr)\subset\mathcal{A}_{bas}\otimes \langle c, \gamma \rangle. \end{equation} On the other hand, we have \begin{align*} \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}(\mathcal{A}_{bas}\otimes \langle c, \gamma \rangle)&=\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}\Bigl(\bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes\langle c, \gamma \rangle \bigr)^{\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})} \mathfrak{t}_{-1}[t] \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}}\Bigr) \\ &=\Bigl(\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}\bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes\langle c, \gamma \rangle \bigr)\Bigr)^{\mathfrak{t}_{-1}[t]} \\ &\subset \bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes\langle c, \gamma \rangle \bigr)^{\mathfrak{t}_{-1}[t]}=\bigl(\mathcal{A}^{\mathfrak{t}[t]}\otimes\langle c, \gamma \rangle \bigr)_{hor}. \end{align*} The first equality follows from \eqref{eq: rewrite A_bas c gamma} and the inclusion follows from the formula $\Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}^{-1}=\mathrm{exp}\bigl(-\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}c^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes\iota^\mathcal{W}_{\xi_i, (n)}\bigr)$. Together with \eqref{eq: C'(A; W) is contained in A_bas c gamma}, we have $$ \Phi_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}\bigl(\big( \mathcal{A}^{\mathfrak{t}[t]}\otimes \langle c, \gamma\rangle\big)_{hor}\bigr)=\mathcal{A}_{bas}\otimes \langle c, \gamma \rangle. $$ By Lemma \ref{lem: small Cartan model}, the left-hand side is equal to $C'(\mathcal{A}, \mathcal{W}(\mathfrak{t}))$. This completes the proof. \end{proof} The following proposition is a chiral analogue of \cite[Theorem 4.3.1]{GS99}. \begin{proposition} Let $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ be a chiral $W^*$-module. The inclusion \begin{equation}\label{eq: basic into basic otimes W'} (\mathcal{A}_{bas}, d_\mathcal{A})\to (C'(\mathcal{A}; \mathcal{W}(\mathfrak{t})), d'_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}) , \quad a\mapsto a\otimes \mathbf{1}, \end{equation} is a quasi-isomorphism. \end{proposition} \begin{proof} Set \begin{gather*} d:=d'_{\mathcal{A}, \mathcal{W}(\mathfrak{t})}=d_1+d_2, \\ d_1:=1\otimes d_{\mathcal{W'}}=\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}1\otimes\gamma^{\xi^*_i, \mathcal{W}}_{(-n-1)}b^{\xi_i, \mathcal{W}}_{(n)}, \\ d_2:=d_\mathcal{A}\otimes1 - \sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}\gamma^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes b^{\xi_i, \mathcal{W}}_{(n)}. \end{gather*} Since $(\mathcal{W}', d_{\mathcal{W}'})$ is acyclic, we have \begin{equation}\label{eq: the cohomology of (A_bas otimes W', d_1)} H^i(\mathcal{A}_{bas}\otimes \langle c, \gamma \rangle, d_1)= \begin{cases} \mathcal{A}_{bas}\otimes\mathbb{K}\mathbf{1}, & \text{when}\ i=0,\\ 0, & \text{otherwise}. \end{cases} \end{equation} For $i, j \ge 0$, we set $ C^i:=\mathcal{A}_{bas}\otimes {\mathcal{W}'}^i, C_j:=\bigoplus_{0\le i \le j}C^i, $ and $C_{-1}:=0$. 
Note that $d_1$ has degree $1$ with respect to this grading $C'(\mathcal{A}; \mathcal{W}(\mathfrak{t}))=\bigoplus_{i\ge 0}C^i$ and $d_2$ preserves the filtration $C'(\mathcal{A}; \mathcal{W}(\mathfrak{t}))=\bigcup_{j\ge 0}C_j$. First we prove that for any $j\ge0$ if $\mu\in C_j$ and $d\mu=0$ then there exist an element $\nu\in C_{j-1}$ and $a\in \mathcal{A}_{bas}$ such that $\mu=d\nu+a\otimes \mathbf{1}$ and $d_\mathcal{A}a=0$. This implies that the map induced by the inclusion \eqref{eq: basic into basic otimes W'} on the cohomology is surjective. We use induction on $j$. Assume $j=0$. Let $\mu\in C_0$ with $d\mu=0$. Since $C_0=C^0=\mathcal{A}_{bas}\otimes\mathbb{K}\mathbf{1}$, we have an element $a\in\mathcal{A}_{bas}$ such that $\mu=a\otimes\mathbf{1}$. Considering the degree in the formula $0=d\mu=d_1\mu+d_2\mu$, we have $d_1\mu=0$. Therefore $d_2\mu=0$. From $d_2\mu=d_\mathcal{A}a\otimes\mathbf{1}$, we see $d_\mathcal{A}a=0$. Thus the proof for $j=0$ is completed. Next we assume $j>0$. Let $\mu\in C_j$ with $d\mu=0$. We can write $\mu$ as $ \mu=\mu_j+\mu_{j-1}+\dots+\mu_0 $ for some $\mu_i\in C^i$ with $i=0, \dots, j$. Since $d_1\mu_j$ is the component of $d\mu$ with the maximum degree, we have $d_1\mu_j=0$. By \eqref{eq: the cohomology of (A_bas otimes W', d_1)} and $j\neq0$, we have an element $\nu_{j-1}\in C^{j-1}$ such that $\mu_j=d_1\nu_{j-1}$. Therefore we have \begin{align*} \mu&=d_1\nu_{j-1}+ (\text{terms in}\ C_{j-1}) \\ &=(d\nu_{j-1}-d_2\nu_{j-1})+(\text{terms in}\ C_{j-1}) \\ &=d\nu_{j-1}+\mu', \end{align*} where $\mu'$ is some element of $C_{j-1}$. From $d\mu=0$, we have $d\mu'=0$. By the induction hypothesis, we have an element $\nu'\in C_{j-2}$ and $a'\in \mathcal{A}_{bas}$ such that $\mu'=d\nu'+a'\otimes\mathbf{1}$ and $d_\mathcal{A}a'=0$. Therefore we have $\mu=d\nu_{j-1}+\mu'=d(\nu_{j-1}+\nu')+a'\otimes\mathbf{1}$. It remains to show that the map induced by \eqref{eq: basic into basic otimes W'} on the cohomology is injective. It suffices to show that if $a$ is an element of $\mathcal{A}_{bas}$ such that $d_\mathcal{A}a=0$ and $a\otimes\mathbf{1}=d\nu$ for some $\nu\in \mathcal{A}_{bas}\otimes\mathcal{W}'$ then there exists an element $b$ of $\mathcal{A}_{bas}$ such that $a=d_\mathcal{A}b$. Denote by $\mathcal{W}'^{(i, j)}$ the subspace of $\mathcal{W}'$ of degree $i$ and $j$ with respect to the operators ${j_{bc}}(0)_{\ge0}:=\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}c^{\xi^*_i}_{(-n-1)}b^{\xi_i}_{(n)}$ and ${j_{\beta\gamma}}(0)_{\ge0}:=\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}\gamma^{\xi^*_i}_{(-n-1)}\beta^{\xi_i}_{(n)}$, respectively. Note that $\mathcal{W}'^{(0, 0)}=\mathbb{K}\mathbf{1}$ and $\mathcal{W}'=\bigoplus_{i, j \in \mathbb{N}}\mathcal{W}'^{(i, j)}$. Set $ D^{(i, j)}:= \mathcal{A}_{bas}\otimes \mathcal{W}'^{(i, j)}. $ Then we have the following homogeneous operators: \begin{equation}\label{eq: degree of d_1, d_A, d_3} \begin{aligned} d_1: D^{(i, j)}\to D^{(i-1, j+1)}, \\ d_\mathcal{A}\otimes1: D^{(i, j)}\to D^{(i, j)}, \\ d_3: D^{(i, j)}\to D^{(i-1, j)}, \end{aligned} \end{equation} where $d_3:=-\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge 0}\gamma^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes b^{\xi_i, \mathcal{W}}_{(n)}$. Let $a$ be an element of $\mathcal{A}_{bas}$ such that $d_\mathcal{A}a=0$ and $a\otimes \mathbf{1}=d\nu$, where $\nu$ is an element of $\mathcal{A}_{bas}\otimes\mathcal{W}'$. We can write $\nu$ as $ \nu=\sum_{i, j\ge0}\nu^{(i, j)}, $ where $\nu^{(i, j)}$ is an element of $D^{(i, j)}$ and the elements $\nu^{(i, j)}$ are $0$ for all but finitely many $(i, j)$.
From $a\otimes\mathbf{1} =d\nu$ and \eqref{eq: degree of d_1, d_A, d_3}, we have \begin{equation}\label{eq: D^(0, 0) term} a\otimes\mathbf{1}=(d_\mathcal{A}\otimes1)\nu^{(0, 0)}+d_3\nu^{(1, 0)}, \end{equation} and \begin{equation}\label{eq: D^(i, j) term} 0=(d_\mathcal{A}\otimes1)\nu^{(i, j)}+d_3\nu^{(i+1, j)}+d_1\nu^{(i+1, j-1)}, \end{equation} for all $i, j\ge0$ with $i>0$ or $j>0$, where we set $\nu^{(i, -1)}:=0$. From \eqref{eq: D^(i, j) term} with $j=0$, we have \begin{equation}\label{eq: relations in D^(i, 0)} 0=(d_\mathcal{A}\otimes1)\nu^{(i, 0)}+d_3\nu^{(i+1, 0)}, \end{equation} for all $i>0$. Note that we have \begin{equation}\label{eq: rewrite d_3} d_3=\Bigl[d_\mathcal{A}\otimes1,\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge0}c^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes b^{\xi_i, \mathcal{W}}_{(n)}\Bigr]. \end{equation} Indeed by the definition of chiral $W^*$-modules and the commutativity of $\mathfrak{t}$, we have $ \gamma^{\xi^*_i, \mathcal{A}}_{(-n-1)}=[d_\mathcal{A},c^{\xi^*_i, \mathcal{A}}_{(-n-1)}] $ for any $n\ge0$ and $i=1, \dots, \dim \mathfrak{t}$. The formula \eqref{eq: rewrite d_3} follows from this relation and the definition of $d_3$. We set $S:=\sum_{i=1}^{\dim \mathfrak{t}}\sum_{n\ge0}c^{\xi^*_i, \mathcal{A}}_{(-n-1)}\otimes b^{\xi_i, \mathcal{W}}_{(n)}$ and write $S^{[l]}:=S^l/l!$ for $l\ge1$. Note that we have $[S, [d_\mathcal{A}\otimes1,S]]=0$. Then we have \begin{equation}\label{eq: d_3v^(1, 0) is exact} d_3\nu^{(1, 0)}=(d_\mathcal{A}\otimes1)\sum_{i=1}^N S^{[i]}\nu^{(i, 0)}-S^{[N]}(d_\mathcal{A}\otimes1)\nu^{(N, 0)}, \end{equation} for any $N>0$. This is proved by induction on $N$ with formulae \eqref{eq: relations in D^(i, 0)}, \eqref{eq: rewrite d_3} and $[S, [d_\mathcal{A}\otimes1,S]]=0$. There exists a natural number $N>0$ such that $\nu^{(N, 0)}=0$. Therefore from the formulae \eqref{eq: D^(0, 0) term} and \eqref{eq: d_3v^(1, 0) is exact}, we have $ a\otimes\mathbf{1}=(d_\mathcal{A}\otimes1)\Bigl(\nu^{(0, 0)}+\sum_{i=1}^N S^{[i]}\nu^{(i, 0)}\Bigr). $ Notice that $S^{[l]}\nu^{(l, 0)}$ belongs to $D^{(0, 0)}$ since $S$ maps $D^{(i, j)}$ into $D^{(i-1, j)}$. Therefore we have $ \nu^{(0, 0)}+\sum_{i=1}^N S^{[i]}\nu^{(i, 0)}=b\otimes\mathbf{1}, $ for some $b\in \mathcal{A}_{bas}$. Thus we have $a=d_\mathcal{A}b$. This completes the proof. \end{proof} By the preceding proposition and Proposition \ref{prop: small Cartan model for chiral W^*-modules}, we have the following theorem. \begin{theorem}\label{thm: CHIRAL BASIC=CHIRAL EQUIVARIANT} Let $G$ be a compact connected Lie group with Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)^\mathbb{K}$. Let $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ be a chiral $W^*$-module with respect to $\mathfrak{g}$. Assume that $G$ is commutative. Then there exists a canonical isomorphism \begin{equation}\label{eq: ch basic = ch equiv} \mathbf{H}_{bas}{(\mathcal{A})}\cong\mathbf{H}_{G}{(\mathcal{A})}. \end{equation} \end{theorem} \subsection{More on Chiral $W^*$-Modules} Let $\mathfrak{g}$ be a finite-dimensional Lie algebra. Let $(\xi_i)_i$ be a basis of $\mathfrak{g}$ and $(\xi^*_i)_i$ the dual basis for $\mathfrak{g}^*$. Denote by $\langle c \rangle$ the subalgebra of $\mathcal{W}'$ generated by $c^{\xi^*}_0\mathbf{1}$ with $\xi^*\in\mathfrak{g}^*$. \begin{lemma}\label{lem: sufficient condition for d-compatibility} Let $(\mathcal{A}, d_\mathcal{A})$ be a differential $\mathfrak{sg}[t]$-module and $Y^\mathcal{A}$ a $\mathcal{W}'$-module structure on $\mathcal{A}$.
Assume \begin{equation}\label{eq: d-compatibility for c} Y^\mathcal{A}(d_{\mathcal{W}'}c^{\xi^*}_0\mathbf{1}, z)=[d_\mathcal{A},c^{\xi^*, \mathcal{A}}(z)], \end{equation} for all $\xi^*\in\mathfrak{g}^*$. Then the following holds: \begin{equation}\label{eq: d-compatibility for x} Y^\mathcal{A}(d_{\mathcal{W}'}x, z)=[d_\mathcal{A},Y^\mathcal{A}(x, z)], \end{equation} for any $x\in \mathcal{W}'$. \end{lemma} \begin{proof} Let $S\subset \mathcal{W}'$ be a subset. Using the fact that $d_{{\mathcal{W}'}}$ commutes with the translation operator, we can check by induction that if \eqref{eq: d-compatibility for x} holds for all $x\in S$ then \eqref{eq: d-compatibility for x} holds for all $x\in \langle S \rangle$. Therefore it suffices to show that \eqref{eq: d-compatibility for x} holds for $x=\gamma^{\xi^*}_0\mathbf{1}$ with $\xi^*\in\mathfrak{g}^*$. Note that \eqref{eq: d-compatibility for x} holds for all $x\in \langle c \rangle$ by the assumption \eqref{eq: d-compatibility for c}. Let $\xi^*\in\mathfrak{g}^*$. From the formula \begin{equation}\label{eq: formula for d_W c} \gamma^{\xi^*}_0\mathbf{1}=d_{\mathcal{W}'}c^{\xi^*}_0\mathbf{1}+\frac{1}{2}\sum_{i=1}^{\dim \mathfrak{g}}c^{\mathrm{ad}^*\xi_i\cdot\xi^*}_0c^{\xi^*_i}_0\mathbf{1}, \end{equation} we have $$ Y^\mathcal{A}(d_{\mathcal{W}'}\gamma^{\xi^*}_0\mathbf{1}, z)=\frac{1}{2}\sum_{i=1}^{\dim \mathfrak{g}}Y^\mathcal{A}(d_{\mathcal{W}'}c^{\mathrm{ad}^*\xi_i\cdot\xi^*}_0c^{\xi^*_i}_0\mathbf{1}, z). $$ Since \eqref{eq: d-compatibility for x} holds for all $x\in \langle c \rangle$, the right-hand side equals $$ \Bigl[d_\mathcal{A},\frac{1}{2}\sum_{i=1}^{\dim \mathfrak{g}}Y^\mathcal{A}(c^{\mathrm{ad}^*\xi_i\cdot\xi^*}_0c^{\xi^*_i}_0\mathbf{1}, z)\Bigr]. $$ From \eqref{eq: d-compatibility for c} and \eqref{eq: formula for d_W c}, we can see this is equal to $[d_\mathcal{A},\gamma^{\xi^*, \mathcal{A}}(z)]$. \end{proof} The following proposition is useful for checking that a differential $\mathfrak{sg}[t]$-module with a $\mathcal{W}'$-module structure is a chiral $W^*$-module. \begin{proposition}\label{prop: sufficient condition for chiral W^*-modules} Let $(\mathcal{A}, d_\mathcal{A})$ be a differential $\mathfrak{sg}[t]$-module and $Y^\mathcal{A}$ a $\mathcal{W}'$-module structure on $\mathcal{A}$. Assume the following: \begin{gather} \label{assump: [iota,gamma]=0} [\iota^\mathcal{A}_\xi(z)_{-}, \gamma^{\xi^*, \mathcal{A}}(w)]=0, \\ \label{assump: Y(dc)=[d,Y(c)]} Y^\mathcal{A}(d_{\mathcal{W}'}c^{\xi^*}_0\mathbf{1}, z)=[d_\mathcal{A},c^{\xi^*, \mathcal{A}}(z)], \\ \label{assump: [iota,c]=delta} [\iota^\mathcal{A}_\xi(z)_{-}, c^{\xi^*, \mathcal{A}}(w)]=\langle \xi^*, \xi \rangle \delta(z-w)_{-}, \end{gather} for all $\xi\in\mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$. Then the triple $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ is a chiral $W^*$-module. \end{proposition} \begin{proof} By Lemma \ref{lem: sufficient condition for d-compatibility}, it suffices to check $$ [L_\xi^\mathcal{A}(z)_{-},c^{\xi^*, \mathcal{A}}(w)]=c^{\mathrm{ad}^*\xi\cdot\xi^*, \mathcal{A}}(w)\delta(z-w)_{-}, $$ for all $\xi\in\mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$. Let $\xi\in\mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$. Applying $\mathrm{ad}(d_\mathcal{A})$ to both sides of \eqref{assump: [iota,c]=delta}, we have $$ [d_\mathcal{A},[\iota^\mathcal{A}_\xi(z)_{-},c^{\xi^*, \mathcal{A}}(w)]]=0.
$$ Therefore from \eqref{assump: Y(dc)=[d,Y(c)]} and $[d_\mathcal{A},\iota^\mathcal{A}_\xi(z)_{-}]=L_\xi^\mathcal{A}(z)_{-}$, we have $$ [L_\xi^\mathcal{A}(z)_{-},c^{\xi^*, \mathcal{A}}(w)]=[\iota_\xi^\mathcal{A}(z)_{-},Y^\mathcal{A}(d_{\mathcal{W}'}c^{\xi^*}_0\mathbf{1}, w)]. $$ By \eqref{assump: [iota,c]=delta}, \eqref{assump: [iota,gamma]=0} and the formula $d_{\mathcal{W}'}c^{\xi^*}_0\mathbf{1}=\gamma^{\xi^*}_0\mathbf{1}-1/2\sum_{i=1}^{\dim \mathfrak{g}}c^{\mathrm{ad}^*\xi_i\cdot\xi^*}_0c^{\xi^*_i}_0\mathbf{1}$, we can see the right-hand side equals $$ -\frac{1}{2}\sum_{i=1}^{\dim \mathfrak{g}}\langle \mathrm{ad}^* \xi_i \cdot\xi^*, \xi \rangle \delta(z-w)_{-}c^{\xi^*_i}(w)+ \frac{1}{2}\sum_{i=1}^{\dim \mathfrak{g}}c^{\mathrm{ad}^*\xi_i\cdot\xi^*}(w)\langle \xi^*_i, \xi\rangle \delta(z-w)_{-}. $$ This is equal to $c^{\mathrm{ad}^*\xi\cdot\xi^*, \mathcal{A}}(w)\delta(z-w)_{-}$. \end{proof} The following is useful when we equip a differential $\mathfrak{sg}[t]$-module with a chiral $W^*$-module structure. \begin{proposition}\label{prop: construction of chiral W^*-modules} Let $(\mathcal{A}, d_\mathcal{A})$ be a differential $\mathfrak{sg}[t]$-module. Suppose given a module structure of the vertex superalgebra $\langle c \rangle$ on $\mathcal{A}$ $$ Y^\mathcal{A}_0: \langle c \rangle \to (\mathrm{End} \mathcal{A})[[z^{\pm1}]], $$ such that \begin{equation}\label{eq: [c,[d,c]]=0} [ c^{\xi^*, \mathcal{A}}(z),[d_\mathcal{A}, c^{\eta^*, \mathcal{A}}(w)]]=0, \end{equation} for all $\xi^*, \eta^*\in \mathfrak{g}^*$. Then there exists a unique $\langle c, \gamma \rangle$-module structure on $\mathcal{A}$, $$ Y^\mathcal{A}: \langle c, \gamma \rangle \to (\mathrm{End} \mathcal{A})[[z^{\pm1}]], $$ such that $Y^\mathcal{A}|_{\langle c \rangle}=Y^\mathcal{A}_0$ and $[d_\mathcal{A}, Y^\mathcal{A}(x, z)]=Y^\mathcal{A}(d_{\mathcal{W}'}x, z)$ for all $x\in \mathcal{W}'$. Moreover if the operation $Y^\mathcal{A}$ satisfies \begin{align} [\iota^\mathcal{A}_\xi(z)_{-}, c^{\xi^*, \mathcal{A}}(w)]&=\langle \xi^*, \xi \rangle \delta(z-w)_{-},\\ \label{eq: curvature is horizontal in prop: construction of chiral W^*-modules} [\iota^\mathcal{A}_\xi(z)_{-},\gamma^{\xi^*, \mathcal{A}}(w)]&=0, \end{align} for all $\xi \in \mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$, then the triple $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ is a chiral $W^*$-module. \end{proposition} \begin{proof} The uniqueness of the operation $Y^\mathcal{A}$ follows from the formula $d_{\mathcal{W}'} c^j_0\mathbf{1}=\gamma^j_0\mathbf{1}-1/2\sum_{i, k=1}^{\dim \mathfrak{g}}\Gamma_{i k}^j c^{i}_0c^{k}_0\mathbf{1}$, where $\Gamma_{i j}^k$ are the structure constants of the Lie algebra $\mathfrak{g}$, that is, $[\xi_i, \xi_j]=\sum_{k=1}^{\dim \mathfrak{g}}\Gamma_{i j}^k\xi_k$ for $i, j=1, \dots, \dim \mathfrak{g}$. We check the existence of such a $\mathcal{W}'$-module structure $Y^\mathcal{A}$. We set \begin{equation}\label{eq: df of gamma(z)} \gamma^{\xi^*_j, \mathcal{A}}(z):=[d_\mathcal{A}, c^{\xi^*_j, \mathcal{A}}(z)]+\frac{1}{2}\sum_{i, k=1}^{\dim \mathfrak{g}}\Gamma_{i k}^j \,{\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,c^{\xi^*_i, \mathcal{A}}(z) c^{\xi^*_k, \mathcal{A}}(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}}, \end{equation} for $j=1, \dots, \dim \mathfrak{g}$. Note that these operators have even parity. We check $[c^{\xi^*_i, \mathcal{A}}(z), \gamma^{\xi^*_j, \mathcal{A}}(w)]=0$ and $[\gamma^{\xi^*_i, \mathcal{A}}(z),\gamma^{\xi^*_j, \mathcal{A}}(w)]=0$ for $i, j=1, \dots, \dim \mathfrak{g}$.
This implies the existence of a $\mathcal{W}'$-module structure $Y^\mathcal{A}$ such that $Y^\mathcal{A}|_{\langle c \rangle}=Y^\mathcal{A}_0$. Applying $\mathrm{ad}( d_\mathcal{A})$ to both sides of \eqref{eq: [c,[d,c]]=0}, we have \begin{equation}\label{eq: [[d,c],[d,c]]=0} \bigl[[d_\mathcal{A},c^{\xi^*, \mathcal{A}}(z)],[d_\mathcal{A},c^{\eta^*, \mathcal{A}}(w)]\bigr]=0, \end{equation} for all $\xi^*, \eta^*\in\mathfrak{g}^*$. We have $ [\gamma^{\xi^*_i, \mathcal{A}}(z),\gamma^{\xi^*_{j}, \mathcal{A}}(w)]=0 $ by \eqref{eq: [c,[d,c]]=0} and \eqref{eq: [[d,c],[d,c]]=0}. The formula $[c^{\xi^*_i, \mathcal{A}}(z),\gamma^{\xi^*_j, \mathcal{A}}(w)]=0$ follows directly from \eqref{eq: [c,[d,c]]=0}. Thus we have a $\mathcal{W}'$-module structure $Y^\mathcal{A}$ such that $Y^\mathcal{A}|_{\langle c \rangle}=Y^\mathcal{A}_0$. It remains to check $[d_\mathcal{A},Y^\mathcal{A}(x, z)]=Y^\mathcal{A}(d_{\mathcal{W}'} x, z)$ for all $x\in \mathcal{W}'$. By the construction of $Y^\mathcal{A}$, this holds for $x=c^{\xi^*}_0\mathbf{1}$ with $\xi^*\in\mathfrak{g}^*$. Therefore by Lemma \ref{lem: sufficient condition for d-compatibility}, it holds for all $x\in\mathcal{W}'$. This proves the existence part. The latter half of our assertion follows from Proposition \ref{prop: sufficient condition for chiral W^*-modules}. \end{proof} \begin{remark} The condition \eqref{eq: curvature is horizontal in prop: construction of chiral W^*-modules} in Proposition \ref{prop: construction of chiral W^*-modules} can be replaced by the following condition: \begin{equation} [L^\mathcal{A}_\xi(z)_-, c^{\xi^*, \mathcal{A}}(w)]= c^{\mathrm{ad}^*\xi\cdot\xi^*, \mathcal{A}}(w)\delta(z-w)_{-}, \end{equation} for all $\xi\in\mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$. \end{remark} \section{VSA-Inductive Sheaves}\label{section: VSA-inductive Sheaves} In this section, we introduce VSA-inductive sheaves. In the next section, we will construct a vertex-algebraic analogue of the Lie algebroid complex as a VSA-inductive sheaf. \subsection{Ind-Objects} Let $\mathcal{C}$ be a category. Recall the category $\mathrm{Ind} (\mathcal{C})$ of ind-objects of $\mathcal{C}$ introduced by Grothendieck (\cite{AGV72}). An \textit{inductive system} of $\mathcal{C}$ is a functor $$ X: A \to \mathcal{C}, \quad \mathrm{Ob}(A)\ni \alpha \mapsto X(\alpha)=X_\alpha \in \mathrm{Ob}(\mathcal{C}), $$ from a small filtered category $A$ to $\mathcal{C}$. An inductive system $X: A \to \mathcal{C}$ is also written as $(X_\alpha)_{\alpha\in A}$. An \textit{ind-object} associated to an inductive system $(X_\alpha)_{\alpha\in A}$ is a symbol $``\displaystyle\varinjlim_{\alpha\in A}" X_\alpha$. The objects of the category $\mathrm{Ind}(\mathcal{C})$ are the ind-objects of $\mathcal{C}$. We often denote the ind-object $``\displaystyle\varinjlim_{\alpha\in A}" X_\alpha$ by the corresponding functor $X$, writing $X=``\displaystyle\varinjlim_{\alpha\in A}" X_\alpha$. The morphisms of $\mathrm{Ind}(\mathcal{C})$ are defined by $$ \mathrm{Hom}_{\mathrm{Ind}({\mathcal{C}})}(``\varinjlim_{\alpha\in A}" X_\alpha, ``\varinjlim_{\beta\in B}" Y_\beta):=\varprojlim_{\alpha\in A}\varinjlim_{\beta\in B}\mathrm{Hom}_{\mathcal{C}}(X_\alpha, Y_\beta), $$ where the limits on the right-hand side stand for those in the category of sets.
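To fix ideas, consider a simple example. Take $\mathcal{C}$ to be the category of vector spaces over $\mathbb{K}$ and $A=(\mathbb{N}, \le)$, regarded as a small filtered category. The spaces $\mathbb{K}[x]_{\le n}$ of polynomials of degree at most $n$, together with the inclusions $\mathbb{K}[x]_{\le n}\hookrightarrow\mathbb{K}[x]_{\le m}$ for $n\le m$, form an inductive system, and $``\displaystyle\varinjlim_{n\in\mathbb{N}}"\,\mathbb{K}[x]_{\le n}$ is the associated ind-object. By the above description of the morphisms, a morphism from this ind-object to an ind-object $``\displaystyle\varinjlim_{\beta\in B}" Y_\beta$ is a family, compatible with the inclusions, of equivalence classes of linear maps $\mathbb{K}[x]_{\le n}\to Y_{j(n)}$ with $n\in\mathbb{N}$.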
A morphism of ind-objects $F: (X_\alpha)_{\alpha\in A}\to (Y_\beta)_{\beta\in B}$ is written as $ F=\bigl([F_\alpha^{j(\alpha)}]\bigr)_{\alpha\in A}, $ where $j: \mathrm {Ob}(A) \to \mathrm{Ob}(B)$ is a map and $[F_\alpha^{j(\alpha)}]$ is the equivalence class of a morphism $F_\alpha^{j(\alpha)}: X_\alpha \to Y_{j(\alpha)}$ in $\displaystyle\varinjlim_{\beta\in B} \mathrm{Hom}_{\mathcal{C}}(X_\alpha, Y_\beta)$. The composition is defined by $ F\circ G := \bigl([F_{j_G(\alpha)}^{j_F(j_G(\alpha))}\circ G_\alpha^{j_G(\alpha)}]\bigr)_{\alpha\in A} $ for morphisms of ind-objects $F=\bigl([F_\beta^{j_F(\beta)}]\bigr)_{\beta\in B}: (Y_\beta)_{\beta \in B}\to (Z_\gamma)_{\gamma\in \Gamma}$ and $G=\bigl([G_\alpha^{j_G(\alpha)}]\bigr)_{\alpha\in A}: (X_\alpha)_{\alpha\in A}\to (Y_\beta)_{\beta\in B}$. For a small filtered category $A$, we will use the notation $f_{\alpha' \alpha}$ to express a morphism $f\in \mathrm{Hom}_A(\alpha, \alpha')$, emphasizing the source and the target. Let $\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})$ be the category of presheaves on a topological space $X$ of super vector spaces over $\mathbb{K}$. Consider the category of ind-objects of the category $\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})$, $\mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))$. There exists a functor \begin{equation}\label{eq: functor IndPresh to Presh} \underrightarrow{\mathrm{Lim}}\, : \mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_\mathbb{K}))\to \mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}), \end{equation} sending an object $\mathcal{F}=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha$ to the inductive limit presheaf $\underrightarrow{\mathrm{Lim}}\, \mathcal{F}:=\varinjlim_{\alpha\in A}\mathcal{F}_\alpha$, but not its sheafification, and sending a morphism $F=( [F_\alpha^{j(\alpha)}])_{\alpha\in A}: ``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha \to ``\displaystyle\varinjlim_{\beta\in B}"\mathcal{G}_\beta$ to the morphism of presheaves $\underrightarrow{\mathrm{Lim}}\, F$ defined by \begin{equation}\label{eq: DEFINITION OF injlimF} \underrightarrow{\mathrm{Lim}}\, F(U): \varinjlim_{\alpha\in A}\mathcal{F}_\alpha(U) \to \varinjlim_{\beta \in B}\mathcal{G}_\beta(U), \quad [x_{\alpha}] \mapsto [F_{\alpha}^{j(\alpha)}x_{\alpha}], \end{equation} for each open subset $U \subset X$. Set \begin{multline}\label{df: bilinear morphisms of ind-objects} \mathrm{Bilin}_{\mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_\mathbb{K}))}\bigl(``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha, ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta; ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma\bigr) \\ :=\varprojlim_{(\alpha, \beta)\in A\times B} \Biggl(\varinjlim_{\gamma\in\Gamma}\mathrm{Bilin}_{\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})}(\mathcal{F}_\alpha, \mathcal{G}_\beta; \mathcal{H}_\gamma)\Biggr), \end{multline} for ind-objects $``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha, ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta, ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma$ of the category $\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})$, where $\mathrm{Bilin}_{\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})}(\mathcal{F}_\alpha, \mathcal{G}_\beta; \mathcal{H}_\gamma)$ is the set of all bilinear morphisms of presheaves from $\mathcal{F}_\alpha\times \mathcal{G}_\beta$ to $\mathcal{H}_\gamma$.
We call an element $F$ of the set \eqref{df: bilinear morphisms of ind-objects} a \textit{bilinear morphism} of ind-objects and write it as $$ F: ``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha\times ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta\to ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma. $$ Each ind-object $``\displaystyle\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma$ gives rise to a canonical contravariant functor \begin{align*} &\mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))\times \mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))\to \mathit{Set}, \\ &\Bigl(``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha, ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta \bigr)\mapsto \mathrm{Bilin}_{\mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))}\bigl(``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha, ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta; ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma\bigr), \end{align*} where $\mathit{Set}$ is the category of sets. Similarly, each pair of ind-objects $\Bigl(``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha,$ $ ``\displaystyle\varinjlim_{\beta\in B}"\mathcal{G}_\beta \bigr)$ induces a canonical covariant functor \begin{align*} \mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})) &\to \mathit{Set}, \\ ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma &\mapsto \mathrm{Bilin}_{\mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))}\bigl(``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha, ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta; ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma\bigr). \end{align*} Moreover, a bilinear morphism of ind-objects $$ F=\bigl([F_{(\alpha, \beta)}^{j(\alpha, \beta)}]\bigr)_{(\alpha, \beta)\in A\times B}: ``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha\times ``\varinjlim_{\beta\in B}"\mathcal{G}_\beta\to ``\varinjlim_{\gamma\in\Gamma}"\mathcal{H}_\gamma, $$ induces a bilinear morphism of presheaves $$ \underrightarrow{\mathrm{Lim}}\, F: \varinjlim_{\alpha\in A}\mathcal{F}_\alpha\times \varinjlim_{\beta\in B}\mathcal{G}_\beta\to \varinjlim_{\gamma\in\Gamma}\mathcal{H}_\gamma, $$ in the same way as in \eqref{eq: DEFINITION OF injlimF}. Let $\mathit{Sh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})$ be the full subcategory of $\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})$ consisting of sheaves on $X$ of super vector spaces over $\mathbb{K}$. Then the corresponding category of ind-objects $\mathrm{Ind}(\mathit{Sh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))$ is a full subcategory of $\mathrm{Ind}(\mathit{Presh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))$. Note that the category $\mathrm{Ind}(\mathit{Sh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))$ is bigger than the category of \textit{ind-sheaves} introduced by Kashiwara and Schapira (\cite{KS99}). The latter is the category of ind-objects of the category of sheaves with compact supports. \subsection{Definition of VSA-Inductive Sheaves} We denote by $\mathbb{K}_X$ the presheaf on a topological space $X$ of constant $\mathbb{K}$-valued functions. We regard $\mathbb{K}_X$ as an inductive system indexed by a set with one element and denote by $``\displaystyle\varinjlim"\mathbb{K}_X$ the corresponding ind-object.
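Note that, since a morphism of presheaves from $\mathbb{K}_X$ to a presheaf $\mathcal{G}$ of super vector spaces is determined by the image of the constant function $1$ over $X$, a morphism of ind-objects $``\varinjlim"\mathbb{K}_X\to``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha$ amounts to an element of $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(X)$; over each open subset $U\subset X$ it then singles out an element of $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(U)$. This is how the vacuum vectors will be specified in the definition below.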
\begin{definition} A \textbf{vertex superalgebra inductive sheaf (VSA-inductive sheaf)} on a topological space $X$ is a quadruple $\bigl( \mathcal{F}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ consisting of \begin{enumerate}[$\bullet$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{30pt} \setlength{\itemindent}{5pt} \item an ind-object of $\mathit{Sh}_X(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})$, $ \mathcal{F}=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha, $ \item an even morphism of ind-objects $\underline{\mathbf{1}}: ``\varinjlim"\mathbb{K}_X\to \mathcal{F}$, \item an even morphism of ind-objects $\underline{T}: \mathcal{F}\to\mathcal{F}$, \item even bilinear morphisms of ind-objects $\underline{(n)}: \mathcal{F}\times \mathcal{F} \to \mathcal{F}$ with $n\in\mathbb{Z}$, \end{enumerate} such that the map $\mathcal{F}(f)$ is even and injective for any morphism $f$ in $A$ and the quadruples $ \bigl(\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(U), \mathbf{1}, T, (n); n\in\mathbb{Z}\bigr) $ are vertex superalgebras for all open subsets $U\subset X$, where $\mathbf{1}=\mathbf{1}(U):=\bigl(\displaystyle\underrightarrow{\mathrm{Lim}}\, {\underline{\mathbf{1}}}(U)\bigr)(1)$, $T=T(U):=\displaystyle\underrightarrow{\mathrm{Lim}}\, \underline{T}(U)$, $(n)=(n)(U):=\displaystyle\underrightarrow{\mathrm{Lim}}\, \underline{(n)}(U)$. \end{definition} Let $\mathcal{V}_1=\bigl( \mathcal{F}_1, \underline {\mathbf{1}}_1, \underline{T}_1, \underline{(n)}_1; n\in \mathbb{Z}\bigr)$ and $\mathcal{V}_2=\bigl( \mathcal{F}_2, \underline {\mathbf{1}}_2, \underline{T}_2, \underline{(n)}_2; n\in \mathbb{Z}\bigr)$ be VSA-inductive sheaves on the same topological space $X$. We call a morphism of ind-objects $\Phi=([\Phi_\alpha^{j(\alpha)}])_{\alpha\in \mathrm{Ob}(A)}: \mathcal{F}_1\to \mathcal{F}_2$ a \textbf{base-preserving morphism} of VSA-inductive sheaves from $\mathcal{V}_1$ to $\mathcal{V}_2$ if $\Phi$ satisfies $\Phi\circ\underline{\mathbf{1}}_1=\underline{\mathbf{1}}_2$, $\Phi\circ\underline{T}_1=\underline{T}_2\circ\Phi$ and $\Phi\circ\underline{(n)}_1=\underline{(n)}_2\circ(\Phi\times\Phi)$ for all $n\in\mathbb{Z}$, where $\Phi\times\Phi$ is the morphism of ind-objects given by $\Phi\times\Phi:=\bigl([\Phi_\alpha^{j(\alpha)}\times\Phi_{\alpha'}^{j(\alpha')}]\bigr)_{(\alpha, \alpha')\in\mathrm{Ob}(A)\times\mathrm{Ob}(A)}$. \begin{remark}\label{rem: VSA-inductive sheaves form a category} Let $X$ be a topological space. VSA-inductive sheaves on $X$ form a category whose morphisms are the base-preserving morphisms of VSA-inductive sheaves. This category is a subcategory of $\mathrm{Ind}(\mathit{Sh}_{X}(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))$ and hence of $\mathrm{Ind}(\mathit{Presh}_{X}(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}))$. \end{remark} \begin{notation} Denote by $\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X$ the category of VSA-inductive sheaves on a topological space $X$ obtained in Remark \ref{rem: VSA-inductive sheaves form a category}. \end{notation} \begin{lemma}\label{lem: PRESHEAF associated with A VSA-inductive sheaf} Let $\mathcal{V}=\bigl( \mathcal{F}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ be a VSA-inductive sheaf on a topological space $X$.
Then the assignment $$ U\mapsto \bigl(\underrightarrow{\mathrm{Lim}}\, \mathcal{F}(U), \mathbf{1}, T, (n); n\in\mathbb{Z}\bigr), $$ with the restriction maps of the presheaf $\underrightarrow{\mathrm{Lim}}\, \mathcal{F}$ defines a presheaf on $X$ of vertex superalgebras. \end{lemma} \begin{proof} We must check that the restriction maps are morphisms of vertex superalgebras. This follows since $\displaystyle\underrightarrow{\mathrm{Lim}}\, {\underline{\mathbf{1}}}$, $\displaystyle\underrightarrow{\mathrm{Lim}}\, \underline{T}$, $\displaystyle\underrightarrow{\mathrm{Lim}}\, \underline{(n)}$ are morphisms of presheaves. \end{proof} Let $\mathcal{V}$ and $\mathcal{V}'$ be VSA-inductive sheaves. We write a morphism $\Phi$ as $\Phi: \mathcal{V}\to\mathcal{V}'$ even when $\Phi$ is not a morphism of VSA-inductive sheaves but simply a morphism of the underlying ind-objects of sheaves. When we say that $\Phi: \mathcal{V}\to\mathcal{V}'$ is a morphism of ind-objects, we mean $\Phi$ is a morphism between the underlying ind-objects of sheaves. \begin{lemma} Let $\Phi: \mathcal{V}_1\to \mathcal{V}_2$ be a base-preserving morphism of VSA-inductive sheaves on the same topological space. Then the map $\underrightarrow{\mathrm{Lim}}\, \Phi: \underrightarrow{\mathrm{Lim}}\, {\mathcal{V}_1} \to \underrightarrow{\mathrm{Lim}}\, {\mathcal{V}_2}$ is a morphism of presheaves of vertex superalgebras. \end{lemma} \begin{proof} This follows directly from the definition of the morphisms. \end{proof} \begin{remark} When we restrict the functor given in \eqref{eq: functor IndPresh to Presh} $$ \underrightarrow{\mathrm{Lim}}\, : \mathrm{Ind}(\mathit{Presh}_{X}(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})) \to \mathit{Presh}_X(\mathit{Vec}_{\mathbb{K}}^{\mathrm{super}}), $$ to the subcategory $\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X$, we have a functor \begin{equation}\label{eq: functor from VSA-ISh} \mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X \to \mathit{Presh}_X(\mathit{VSA_{\mathbb{K}}}), \end{equation} where $\mathit{Presh}_X(\mathit{VSA_{\mathbb{K}}})$ is the category of presheaves on $X$ of vertex superalgebras over $\mathbb{K}$. \end{remark} We also denote by $\underrightarrow{\mathrm{Lim}}\, $ the functor \eqref{eq: functor from VSA-ISh}. \begin{remark}\label{rem: LOCAL CALCULATION OF THE PRESHEAF associated with A Vsa IndSh} Let $\mathcal{V}=\bigl( \mathcal{F}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ be a VSA-inductive sheaf. Let $U\subset X$ be an open subset and $U=\bigcup_{\lambda\in\Lambda}U_{\lambda}$ an open covering. Then the map induced by restriction maps $$ \underrightarrow{\mathrm{Lim}}\, {\mathcal{V}}(U)\to \prod_{\lambda \in\Lambda}\underrightarrow{\mathrm{Lim}}\, {\mathcal{V}}(U_\lambda), $$ is injective since the $\mathcal{F}_\alpha$ are sheaves, where $\mathcal{F}= ``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha$, and the map $\mathcal{F}(f)$ is injective for any morphism $f$ in $A$. \end{remark} \begin{definition}\label{df: grading operator on a VSA-inductive sheaf} Let $\mathcal{V}=\bigl( \mathcal{F}=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_{\alpha}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ be a VSA-inductive sheaf on a topological space $X$.
\begin{enumerate}[(i)] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \renewcommand{\makelabel}{\upshape} \item A \textbf{Hamiltonian} or a \textbf{weight-grading operator} on $\mathcal{V}$ is an even morphism $\underline{H}: \mathcal{F}\to\mathcal{F}$ of ind-objects such that there exists a family $(H^\alpha_\alpha: \mathcal{F}_\alpha\to\mathcal{F}_\alpha)_{\alpha\in \mathrm{Ob}(A)}$ of even morphisms of sheaves satisfying the following conditions: \begin{enumerate}[$(1)$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{35pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \item $\underline{H}=([H_\alpha^\alpha])_{\alpha\in\mathrm{Ob}(A)}$. \item Each $H_\alpha^\alpha$ is a diagonalizable operator on $\mathcal{F}_\alpha$, namely, the operator $H_\alpha^\alpha(U): \mathcal{F}_\alpha(U)\to\mathcal{F}_\alpha(U)$ is diagonalizable for any open subset $U\subset X$. \item For all $n\in \mathbb{Z}$, \begin{equation}\label{eq: Hamiltonian} \underline{(n)}\circ(\mathrm{id}\times\underline{H}+\underline{H}\times\mathrm{id})=(\underline{H}-(-n-1))\circ\underline{(n)}. \end{equation} \end{enumerate} \item A \textbf{degree-grading operator} on $\mathcal{V}$ is an even morphism $\underline{J}: \mathcal{F}\to\mathcal{F}$ of ind-objects such that there exists a family $(J^\alpha_\alpha: \mathcal{F}_\alpha\to\mathcal{F}_\alpha)_{\alpha\in \mathrm{Ob}(A)}$ of even morphisms of sheaves satisfying the following conditions: \begin{enumerate}[$(1)$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{35pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \item $\underline{J}=([J_\alpha^\alpha])_{\alpha\in\mathrm{Ob}(A)}$. \item Each $J_\alpha^\alpha$ is a diagonalizable operator on $\mathcal{F}_\alpha$. \item For all $n\in \mathbb{Z}$, \begin{equation}\label{eq: degree-grading operator} \underline{(n)}\circ(\mathrm{id}\times\underline{J}+\underline{J}\times\mathrm{id})=\underline{J}\circ\underline{(n)}. \end{equation} \end{enumerate} \end{enumerate} \end{definition} \begin{remark} In the above definition, the families $(H^\alpha_\alpha)_{\alpha\in\mathrm{Ob}(A)}$ and $(J^\alpha_\alpha)_{\alpha\in\mathrm{Ob}(A)}$ are unique by the relation $(1)$ and the injectivity of the morphisms in the inductive system $\mathcal{F}$. \end{remark} \begin{definition} \begin{enumerate}[(i)] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{24pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \renewcommand{\makelabel}{\upshape} \item A $\mathbb{Z}$-\textbf{graded VSA-inductive sheaf} is a VSA-inductive sheaf equipped with a Hamiltonian with only integral eigenvalues. \item A \textbf{degree-graded VSA-inductive sheaf} is a VSA-inductive sheaf equipped with a degree-grading operator with only integral eigenvalues.
\item A VSA-inductive sheaf $\mathcal{V}=\bigl( \mathcal{F}=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_{\alpha}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ is said to be \textbf{degree-weight-graded} if $\mathcal{V}$ is equipped with a Hamiltonian $\underline{H}=([H_\alpha^\alpha])_{\alpha\in\mathrm{Ob}(A)}$ and a degree-grading operator $\underline{J}=([J_\alpha^\alpha])_{\alpha\in\mathrm{Ob}(A)}$ such that for each $\alpha\in\mathrm{Ob}(A)$, $\mathcal{F}_\alpha$ $=\bigoplus_{n, l\in\mathbb{Z}} \mathcal{F}_\alpha^l[n]$, where $\mathcal{F}_\alpha^l[n]:=\mathcal{F}_\alpha[n]\cap\mathcal{F}_\alpha^l$. Here $\mathcal{F}_\alpha[n]$ and $\mathcal{F}_\alpha^l$ are the subsheaves $\mathrm{Ker}(H_\alpha^\alpha-n)$ and $\mathrm{Ker}(J_\alpha^\alpha-l)$, respectively. \end{enumerate} \end{definition} \begin{lemma} If $(\mathcal{V}, \underline{H})$ is a $\mathbb{Z}$-graded VSA-inductive sheaf, then $(\underrightarrow{\mathrm{Lim}}\, \mathcal{V}, \underrightarrow{\mathrm{Lim}}\, \underline{H})$ is a presheaf of $\mathbb{Z}$-graded vertex superalgebras. The same type of assertion holds in the degree-graded case and in the degree-weight-graded case. \end{lemma} \begin{proof} We will prove the assertion in the $\mathbb{Z}$-graded case. The others are proved similarly. Let $\mathcal{V}=\bigl( \mathcal{F}=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_{\alpha}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ be a $\mathbb{Z}$-graded VSA-inductive sheaf with a Hamiltonian $\underline{H}=([H_\alpha^\alpha])_{\alpha\in\mathrm{Ob}(A)}$. Let $f_{\alpha, \alpha'}: \alpha'\to\alpha$ be a morphism in $A$. Then the corresponding morphism $\mathcal{F}(f_{\alpha, \alpha'}): \mathcal{F}_{\alpha'}\to\mathcal{F}_\alpha$ preserves the grading. Indeed, from the injectivity of the morphisms in the inductive system $\mathcal{F}$ and the relation $H_\alpha^\alpha\circ\mathcal{F}(f_{\alpha, \alpha'})\sim H_{\alpha'}^{\alpha'}$ in $\varinjlim_{\beta\in A}\mathrm{Hom}(\mathcal{F}_{\alpha'}, \mathcal{F}_\beta)$, we have $H_\alpha^\alpha\circ\mathcal{F}(f_{\alpha, \alpha'})=\mathcal{F}(f_{\alpha, \alpha'})\circ H_{\alpha'}^{\alpha'}$. Thus we have $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha$ $=\varinjlim_{\alpha\in A}(\bigoplus_{n\in\mathbb{Z}}\mathcal{F}_\alpha[n])$ $=\bigoplus_{n\in\mathbb{Z}}(\varinjlim_{\alpha\in A}\mathcal{F}_\alpha[n])$. Therefore $\underrightarrow{\mathrm{Lim}}\, \underline{H}$ is a diagonalizable operator with only integral eigenvalues on the presheaf $\underrightarrow{\mathrm{Lim}}\, \mathcal{F}=\varinjlim_{\alpha\in A}\mathcal{F}_\alpha$. From the relation \eqref{eq: Hamiltonian}, the operator $\underrightarrow{\mathrm{Lim}}\, \underline{H}$ is a Hamiltonian on $\underrightarrow{\mathrm{Lim}}\, \mathcal{V}$. \end{proof} \begin{notation} Denote by $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X$ the category of degree-weight-graded VSA-inductive sheaves on a topological space $X$. Its morphisms are morphisms of VSA-inductive sheaves on $X$ commuting with the Hamiltonians and the degree-grading operators. \end{notation} \subsection{Gluing Inductive Sheaves} Let $X$ be a topological space and $A$ a small filtered category.
We consider the subcategory of $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X$ whose objects are degree-weight-graded VSA-inductive sheaves $\mathcal{V}=\bigl( \mathcal{F}, \underline{\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ such that $\mathcal{F}$ is a functor from $A$ and whose morphisms are morphisms $\Phi: \mathcal{F} \to \mathcal{G}$ of degree-weight-graded VSA-inductive sheaves such that there exists a family of morphisms of sheaves $(\Phi_\alpha^\alpha: \mathcal{F}_\alpha \to \mathcal{G}_\alpha)_{\alpha\in A}$ satisfying $\Phi=\bigl([\Phi_\alpha^\alpha]\bigr)_{\alpha\in A}$. We denote this category by $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^A$. We will often call a morphism of this category a \textbf{strict morphism}. If two degree-weight-graded VSA-inductive sheaves are isomorphic via a strict isomorphism, by which we mean an isomorphism in $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^A$, then we say they are \textbf{strictly isomorphic}. We also call a morphism of ind-objects $\Phi: \mathcal{F} \to \mathcal{G}$, not necessarily a morphism of VSA-inductive sheaves, \textit{strict} if the same condition above is satisfied. \begin{remark}\label{rem: restriction of VSA-inductive sheaves} Let $\mathcal{V}=\bigl(\mathcal{F}, \underline{\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z} \bigr)$ be a VSA-inductive sheaf on $X$. Let $U\subset X$ be an open subset. Consider the inductive system \begin{align*} \mathcal{F}|_U: A &\to \mathit{Sh}_U(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}}), \\ \text{objects}:\quad \alpha &\mapsto \mathcal{F}_\alpha|_U, \\ \text{morphisms}:\quad f &\mapsto \mathcal{F}(f)|_U, \end{align*} obtained by restricting $\mathcal{F}$ to $U$. The corresponding ind-object $``\displaystyle\varinjlim_{\alpha\in A}"(\mathcal{F}|_U)_\alpha$ is a VSA-inductive sheaf on $U$ with $\underline{\mathbf{1}}, \underline{T}, \underline{(n)}$ restricted to $U$. \end{remark} For a VSA-inductive sheaf $\mathcal{V}$ on $X$ and an open subset $U$ of $X$, we denote by $\mathcal{V}|_U$ the VSA-inductive sheaf on $U$ given in Remark \ref{rem: restriction of VSA-inductive sheaves} and call it the \textbf{restriction} of the VSA-inductive sheaf $\mathcal{V}$. Let us glue VSA-inductive sheaves. Let $X=\bigcup_{\lambda \in \Lambda}U_\lambda$ be an open covering of $X$ and $(\mathcal{V}^\lambda)_{\lambda\in\Lambda}$ a family of degree-weight-graded VSA-inductive sheaves, where $\mathcal{V}^\lambda=\bigl(\mathcal{F}^\lambda, \underline{\mathbf{1}}^\lambda, \underline{T}^\lambda, \underline{(n)}^\lambda; n\in\mathbb{Z} \bigr)$ is an object of $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_{U_\lambda}^A$. Let $\underline{H}^\lambda$ and $\underline{J}^\lambda$ be the Hamiltonian and the degree-grading operator on $\mathcal{V}^\lambda$, respectively.
Suppose given a family of strict isomorphisms of degree-weight-graded VSA-inductive sheaves $(\vartheta_{\lambda \mu}: \mathcal{V}^\mu|_{U_\mu \cap U_\lambda} \to \mathcal{V}^\lambda|_{U_\lambda \cap U_\mu})_{\lambda, \mu \in \Lambda}$ satisfying the following condition: \begin{align*} (0)\quad \vartheta_{\lambda \lambda}=\mathrm{id},\quad \text{and}\quad (\vartheta_{\lambda \mu}|_{U_\lambda\cap U_\mu\cap U_\nu})\circ(\vartheta_{\mu \nu}|_{U_\mu\cap U_\nu\cap U_\lambda})=(\vartheta_{\lambda \nu}|_{U_\nu\cap U_\lambda\cap U_\mu}), \quad\\ \text{for all}\quad \lambda, \mu, \nu \in \Lambda.\quad\quad\quad\quad \end{align*} We will often omit the subscripts such as $|_{U_\lambda\cap U_\mu\cap U_\nu}$ in the sequel. In addition to the condition $(0)$, we assume the following conditions: \begin{enumerate}[$(1)$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{35pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \item For any $\alpha, \alpha' \in \mathrm{Ob}(A)$, we have $\mathrm{Hom}_A(\alpha, \alpha')\neq\emptyset$ or $\mathrm{Hom}_A(\alpha', \alpha)\neq\emptyset$, \item There exist an $\alpha_0\in \mathrm{Ob}(A)$ and sheaf morphisms $\underline{\mathbf{1}}^{\lambda, \alpha_0}: \mathbb{K}_X\to \mathcal{F}^\lambda_{\alpha_0}$ with $\lambda\in\Lambda$ such that $$ \underline{\mathbf{1}}^\lambda=[\underline{\mathbf{1}}^{\lambda, \alpha_0}], $$ for all $\lambda\in\Lambda$. (Note that the element $\alpha_0$ can be taken independently of $\lambda\in\Lambda$.) \item There exist a map $j_T: \mathrm{Ob}(A)\to \mathrm{Ob}(A)$ and sheaf morphisms $\underline{T}^{\lambda, j_T(\alpha)}_\alpha: \mathcal{F}^\lambda_\alpha \to \mathcal{F}^\lambda_{j_T(\alpha)}$ with $\alpha\in \mathrm{Ob}(A)$ and $\lambda \in \Lambda$ such that $$ \underline{T}^\lambda=\bigl([\underline{T}^{\lambda, j_T(\alpha)}_\alpha]\bigr)_{\alpha\in \mathrm{Ob}(A)}, $$ for all $\lambda \in \Lambda$. (Note that the map $j_T$ can be taken independently of $\lambda\in \Lambda$.) \item For each $n\in \mathbb{Z}$, there exist a map $j_{(n)}: \mathrm{Ob}(A)\times\mathrm{Ob}(A)\to\mathrm{Ob}(A)$ and bilinear sheaf morphisms $\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}: \mathcal{F}^\lambda_\alpha\times \mathcal{F}^\lambda_{\alpha'}\to \mathcal{F}^\lambda_{j_{(n)}(\alpha, \alpha')}$ with $(\alpha, \alpha')\in \mathrm{Ob}(A)\times \mathrm{Ob}(A)$ and $\lambda\in\Lambda$ such that $$ \underline{(n)}^\lambda=\bigl([\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}]\bigr)_{(\alpha, \alpha')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)}, $$ for all $\lambda\in\Lambda$. (Note that the map $j_{(n)}$ can be taken independently of $\lambda \in \Lambda$.) \item The degree-grading and the weight-grading on each $\mathcal{F}_\alpha^\lambda$ are bounded from above and below uniformly with respect to $\lambda$. Moreover the weight-grading on $\mathcal{F}_\alpha^\lambda$ is bounded from below uniformly with respect to $\alpha$ as well as $\lambda$. In other words, there exist an integer $N$ and natural numbers $N_\alpha, L_\alpha$ with $\alpha\in\mathrm{Ob}(A)$ such that $\mathcal{F}_\alpha^\lambda=\bigoplus_{N\le n \le N_\alpha}\bigoplus_{|l|\le L_\alpha}(\mathcal{F}_\alpha^\lambda)^l[n]$, where $(\mathcal{F}_\alpha^\lambda)^l[n]$ is the subsheaf of degree $l$ and weight $n$.
\end{enumerate} The uniqueness in the following proposition means that if $\bigl(\mathcal{V}, (\Phi^\lambda: \mathcal{V}|_{U_\lambda}\to\mathcal{V}^\lambda)_{\lambda\in\Lambda}\bigr)$ and $\bigl(\mathcal{V}', (\Phi'^\lambda: \mathcal{V}'|_{U_\lambda}\to\mathcal{V}^\lambda)_{\lambda\in\Lambda}\bigr)$ are the pairs as in the proposition below then there exists a strict isomorphism of degree-weight-graded VSA-inductive sheaves $F: \mathcal{V}\to \mathcal{V}'$ such that $\Phi'^\lambda\circ F|_{U_\lambda}=\Phi^\lambda$ for all $\lambda\in \Lambda$. \begin{proposition}\label{prop: GLUING VsaIndShs} Under the above assumptions, there exists an object $\mathcal{V}$ of $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^A$ and strict isomorphisms $(\Phi^\lambda: \mathcal{V}|_{U_\lambda}\to \mathcal{V}^\lambda)_{\lambda\in\Lambda}$ of degree-weight-graded VSA-inductive sheaves such that $(\Phi^\lambda|_{U_\lambda\cap U_\mu}) \circ (\Phi^\mu|_{U_\mu\cap U_\lambda})^{-1}=\vartheta_{\lambda \mu}$ for all $\lambda, \mu \in \Lambda$. Moreover such a pair is unique up to strict isomorphisms. \end{proposition} \begin{proof} First we see the existence part. Since $\vartheta_{\lambda \mu}$ is strict, we can write $\vartheta_{\lambda \mu}$ as $ \vartheta_{\lambda \mu}=([\vartheta_{\lambda\mu, \alpha}^\alpha])_{\alpha\in\mathrm{Ob}(A)}, $ where $\vartheta_{\lambda\mu, \alpha}^\alpha$ is a sheaf morphism from $\mathcal{F}^\mu_\alpha|_{U_\mu\cap U_\lambda}$ to $\mathcal{F}^\lambda_\alpha|_{U_\lambda\cap U_\mu}$. For each $\lambda\in \Lambda$ we have $ \mathrm{id}=\vartheta_{\lambda\lambda}=([\vartheta_{\lambda\lambda, \alpha}^\alpha])_{\alpha\in\mathrm{Ob}(A)} $ and therefore $ \mathrm{id}\sim\vartheta_{\lambda\lambda, \alpha}^\alpha $ for all $\alpha\in\mathrm{Ob}(A)$, where $\sim$ means that the two sheaf morphisms are equivalent in $\displaystyle\varinjlim_{\alpha'}\mathrm{Hom}(\mathcal{F}^\lambda_{\alpha}, \mathcal{F}^\lambda_{\alpha'})$. By the injectivity of the morphisms of the inductive system $\mathcal{F}^\lambda$, we have \begin{equation}\label{eq: id=theta_lambda-lambda} \mathrm{id}=\vartheta_{\lambda\lambda, \alpha}^\alpha. \end{equation} Similarly we have $ \vartheta_{\lambda\mu, \alpha}^\alpha\circ\vartheta_{\mu\nu, \alpha}^\alpha=\vartheta_{\lambda\nu, \alpha}^\alpha $ for all $\alpha\in\mathrm{Ob}(A)$ and $\lambda, \mu, \nu\in\Lambda$ by the assumption. Therefore for each $\alpha\in \mathrm{Ob}(A)$, we can glue the sheaves $(\mathcal{F}^\lambda_\alpha)_{\lambda\in\Lambda}$ with the sheaf morphisms $(\vartheta_{\lambda\mu, \alpha}^\alpha)_{\lambda, \mu\in\Lambda}$. Denote the resulting sheaf on $X$ by $\mathcal{F}_\alpha$. For each $f_{\alpha\alpha'}: \alpha'\to \alpha$, we will glue the morphisms $(\mathcal{F}^\lambda(f_{\alpha \alpha'}))_{\lambda\in\Lambda}$ to obtain a sheaf morphism $\mathcal{F}_{\alpha'}\to\mathcal{F}_\alpha$. By the definition of morphisms of ind-objects, we have $$ \vartheta_{\lambda\mu, \alpha}^\alpha\circ\mathcal{F}^\mu(f_{\alpha \alpha'})\sim\vartheta_{\lambda\mu, \alpha'}^{\alpha'}, $$ where $\sim$ means that the two sheaf morphisms are equivalent in the inductive limit $\displaystyle\varinjlim_{\alpha\in A}\mathrm{Hom}(\mathcal{F}^\mu_{\alpha'}|_{U_\mu\cap U_\lambda}, \mathcal{F}^\lambda_\alpha|_{U_\lambda\cap U_\mu})$. Therefore we have $\vartheta_{\lambda\mu, \alpha}^\alpha\circ\mathcal{F}^\mu(f_{\alpha \alpha'})=\mathcal{F}^\lambda(f_{\alpha\alpha'})\circ\vartheta_{\lambda\mu, \alpha'}^{\alpha'}$ by the injectivity of the morphisms in the inductive systems. 
Thus we can glue the morphisms $(\mathcal{F}^\lambda(f_{\alpha \alpha'}))_{\lambda\in\Lambda}$ to obtain the sheaf morphism $\mathcal{F}_{\alpha'}\to\mathcal{F}_\alpha$, which we denote by $\mathcal{F}(f_{\alpha \alpha'})$. Note that $\mathcal{F}(f_{\alpha \alpha'})$ is injective since $\mathcal{F}^\lambda(f_{\alpha \alpha'})$ are injective for all $\lambda\in\Lambda$. By the construction, the assignment \begin{align*} \mathcal{F}&: A\to \mathit{Sh}_X(Vec_\mathbb{K}^\mathrm{super}), \\ \text{objects}&: \alpha\mapsto \mathcal{F}_\alpha, \\ \text{morphisms}&: f_{\alpha \alpha'}\mapsto \mathcal{F}(f_{\alpha \alpha'}), \end{align*} defines a functor. Thus we have an inductive system $\mathcal{F}$ in $\mathit{Sh}_X(Vec_\mathbb{K}^{\mathrm{super}})$, therefore the corresponding ind-object $``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha$. By the assumption, for each $n\in \mathbb{Z}$, we have a map $j_{(n)}: \mathrm{Ob}(A)\times\mathrm{Ob}(A)\to\mathrm{Ob}(A)$ and bilinear sheaf morphisms $\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}: \mathcal{F}^\lambda_\alpha\times \mathcal{F}^\lambda_{\alpha'}\to \mathcal{F}^\lambda_{j_{(n)}(\alpha, \alpha')}$ with $(\alpha, \alpha')\in \mathrm{Ob}(A)\times \mathrm{Ob}(A)$ and $\lambda\in\Lambda$ such that $$ \underline{(n)}^\lambda=\bigl([\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}]\bigr)_{(\alpha, \alpha')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)}, $$ for all $\lambda\in\Lambda$. Fix $n\in\mathbb{Z}$. For each $(\alpha, \alpha')\in\mathrm{Ob}(A)\times\mathrm{Ob}(A)$, we will glue morphisms $\bigl(\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}\bigr)_{\lambda\in\Lambda}$ to get a bilinear sheaf morphism $\mathcal{F}_\alpha\times\mathcal{F}_{\alpha'}\to\mathcal{F}_{j_{(n)}(\alpha, \alpha')}$. It suffices to check that the morphisms $\bigl(\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}\bigr)_{\lambda\in\Lambda}$ commute with the gluing maps. Since each $\vartheta_{\lambda\mu}$ is a morphism of VSA-inductive sheaves, we have $$ \underline{(n)}^\lambda \circ (\vartheta_{\lambda\mu}\times\vartheta_{\lambda\mu})=\vartheta_{\lambda\mu}\circ\underline{(n)}^\mu, $$ and therefore $$ \underline{(n)}^{\lambda, j_{(n)}(\alpha, \alpha')}_{(\alpha, \alpha')}\circ(\vartheta_{\lambda\mu, \alpha}^\alpha\times \vartheta_{\lambda\mu, \alpha'}^{\alpha'})\sim \vartheta_{\lambda\mu, j_{(n)}(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}\circ\underline{(n)}^{\mu, j_{(n)}(\alpha, \alpha')}_{(\alpha, \alpha')}, $$ for all $(\alpha, \alpha')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)$. By the same argument for proving \eqref{eq: id=theta_lambda-lambda}, we have $$ \underline{(n)}^{\lambda, j_{(n)}(\alpha, \alpha')}_{(\alpha, \alpha')}\circ(\vartheta_{\lambda\mu, \alpha}^\alpha\times \vartheta_{\lambda\mu, \alpha'}^{\alpha'})= \vartheta_{\lambda\mu, j_{(n)}(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}\circ\underline{(n)}^{\mu, j_{(n)}(\alpha, \alpha')}_{(\alpha, \alpha')}, $$ for all $(\alpha, \alpha')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)$. Thus we can glue the morphisms $\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}$ with ${\lambda\in\Lambda}$. We denote by $\underline{(n)}_{(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}$ the resulting bilinear morphism of sheaves. 
We claim that $\underline{(n)}:=\bigl([\underline{(n)}_{(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}]\bigr)_{(\alpha, \alpha')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)}$ is a bilinear morphism of ind-objects. We must check \begin{equation}\label{eq: (n) f*f sim (n)} \underline{(n)}_{(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}\circ(\mathcal{F}(f_{\alpha \Tilde{\alpha}})\times \mathcal{F}(f_{\alpha' \Tilde{\alpha}'})) \sim \underline{(n)}_{(\Tilde{\alpha}, \Tilde{\alpha}')}^{j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}, \end{equation} for any object $(\Tilde{\alpha}, \Tilde{\alpha}')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)$ and morphism $f_{\alpha \Tilde{\alpha}}\times f_{\alpha' \Tilde{\alpha}'}$. Let $(\Tilde{\alpha}, \Tilde{\alpha}')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)$ be an arbitrary object and $f_{\alpha \Tilde{\alpha}}\times f_{\alpha' \Tilde{\alpha}'}$ a morphism. For each $\lambda\in\Lambda$, we have $$ \underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}\circ(\mathcal{F}^\lambda(f_{\alpha \Tilde{\alpha}})\times \mathcal{F}^\lambda(f_{\alpha' \Tilde{\alpha}'})) \sim \underline{(n)}_{(\Tilde{\alpha}, \Tilde{\alpha}')}^{\lambda, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}. $$ Therefore for each $\lambda\in\Lambda$, we have an object $\alpha''(\lambda)\in\mathrm{Ob}(A)$ and morphisms $f_{\alpha''(\lambda)\, j_{(n)}(\alpha, \alpha')}: j_{(n)}(\alpha, \alpha') \to \alpha''(\lambda)$, $f_{\alpha''(\lambda)\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}: j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}') \to \alpha''(\lambda)$ in $A$ such that \begin{multline}\label{eq: f (n) f*f = f (n) in lambda} \mathcal{F}^\lambda(f_{\alpha''(\lambda)\, j_{(n)}(\alpha, \alpha')})\circ\underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}\circ(\mathcal{F}^\lambda(f_{\alpha \Tilde{\alpha}})\times \mathcal{F}^\lambda(f_{\alpha' \Tilde{\alpha}'}))\\ = \mathcal{F}^\lambda(f_{\alpha''(\lambda)\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')})\circ\underline{(n)}_{(\Tilde{\alpha}, \Tilde{\alpha}')}^{\lambda, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}. \end{multline} By the assumption of this proposition, we have \begin{equation*} \mathrm{Hom}_A(j_{(n)}(\alpha, \alpha'), j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}'))\neq\emptyset\quad \text{or} \quad \mathrm{Hom}_A(j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}'), j_{(n)}(\alpha, \alpha'))\neq\emptyset. \end{equation*} When $\mathrm{Hom}_A(j_{(n)}(\alpha, \alpha'), j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}'))\neq\emptyset$, we have a morphism $f_{j_{(n)}(\alpha, \alpha')\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}$ in this set. The right-hand side of \eqref{eq: f (n) f*f = f (n) in lambda} is equivalent to $$ \mathcal{F}^\lambda(f_{\alpha''(\lambda)\, j_{(n)}(\alpha, \alpha')})\circ\mathcal{F}^\lambda(f_{j_{(n)}(\alpha, \alpha')\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')})\circ\underline{(n)}_{(\Tilde{\alpha}, \Tilde{\alpha}')}^{\lambda, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}, $$ in $\varinjlim_{\beta\in A}\mathrm{Bilin}_{\mathit{Presh}_{U_\lambda}(\mathit{Vec}^{\mathrm{super}}_{\mathbb{K}})}(\mathcal{F}_{\Tilde{\alpha}}^\lambda, \mathcal{F}_{\Tilde{\alpha}'}^\lambda; \mathcal{F}_\beta^\lambda)$.
By the injectivity of the morphism of the inductive system $\mathcal{F}^\lambda$, we have $$ \underline{(n)}_{(\alpha, \alpha')}^{\lambda, j_{(n)}(\alpha, \alpha')}\circ(\mathcal{F}^\lambda(f_{\alpha \Tilde{\alpha}})\times \mathcal{F}^\lambda(f_{\alpha' \Tilde{\alpha}'})) =\mathcal{F}^\lambda(f_{j_{(n)}(\alpha, \alpha')\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')})\circ\underline{(n)}_{(\Tilde{\alpha}, \Tilde{\alpha}')}^{\lambda, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}. $$ We have this equality for any $\lambda\in\Lambda$. Note that $f_{j_{(n)}(\alpha, \alpha')\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}$ does not depend on $\lambda\in\Lambda$. Therefore we have the following relation for glued morphisms: $$ \underline{(n)}_{(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}\circ(\mathcal{F}(f_{\alpha \Tilde{\alpha}})\times \mathcal{F}(f_{\alpha' \Tilde{\alpha}'})) =\mathcal{F}(f_{j_{(n)}(\alpha, \alpha')\, j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')})\circ\underline{(n)}_{(\Tilde{\alpha}, \Tilde{\alpha}')}^{j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}')}. $$ This means \eqref{eq: (n) f*f sim (n)}. When $\mathrm{Hom}_A(j_{(n)}(\Tilde{\alpha}, \Tilde{\alpha}'), j_{(n)}(\alpha, \alpha'))\neq\emptyset$, we can also obtain \eqref{eq: (n) f*f sim (n)} in a similar way. Thus we have a bilinear morphism of ind-objects $\underline{(n)}=\bigl([\underline{(n)}_{(\alpha, \alpha')}^{j_{(n)}(\alpha, \alpha')}]\bigr)_{(\alpha, \alpha')\in \mathrm{Ob}(A)\times\mathrm{Ob}(A)}: \mathcal{F}\times\mathcal{F}\to\mathcal{F}$. In a similar way, we obtain morphisms of ind-objects $ \underline{T}=\bigl([\underline{T}_\alpha^{j_T(\alpha)}]\bigr)_{\alpha\in\mathrm{Ob}(A)}: \mathcal{F}\to\mathcal{F}, $ $\underline{\mathbf{1}}=[\underline{\mathbf{1}}^{\alpha_0}]: ``\displaystyle\varinjlim"\mathbb{K}_X\to\mathcal{F}, $ $\underline{H}=([H_\alpha^{\alpha}])_{\alpha\in\mathrm{Ob}(A)}: \mathcal{F}\to\mathcal{F}$ and $\underline{J}=([J_\alpha^{\alpha}])_{\alpha\in\mathrm{Ob}(A)}: \mathcal{F}\to\mathcal{F}$ from the morphisms $\underline{T}^\lambda=\bigl([\underline{T}_\alpha^{\lambda, j_T(\alpha)}]\bigr)_{\alpha\in\mathrm{Ob}(A)}$, $\underline{\mathbf{1}}^\lambda=[\underline{\mathbf{1}}^{\lambda, \alpha_0}]$, $\underline{H}^\lambda=([H_\alpha^{\lambda, \alpha}])_{\alpha\in\mathrm{Ob}(A)}$ and $\underline{J}^\lambda=([J_\alpha^{\lambda, \alpha}])_{\alpha\in\mathrm{Ob}(A)}$ with $\lambda\in\Lambda$, respectively. Since the gluing maps commutes with the Hamiltonians and the degree-grading operators, we can glue the sheaves $(\mathcal{F}_\alpha^\lambda)^l[n]$ with $\lambda\in\Lambda$. Denote the resulting sheaf on $X$ by $\mathcal{F}_\alpha^l[n]$. By the assumption (5), $(\mathcal{F}_\alpha)^l[n]=0$ for all but finitely many $l$ and $n$. Therefore the presheaf $\bigoplus_{n, l\in\mathbb{Z}}(\mathcal{F}_\alpha)^l[n]$ is a sheaf. Thus the sheaf $\mathcal{F}_\alpha$ is canonically isomorphic to the sheaf $\bigoplus_{n, l\in\mathbb{Z}}(\mathcal{F}_\alpha)^l[n]$ since they are locally isomorphic. This grading comes from the operators $H_\alpha^\alpha$ and $J_\alpha^\alpha$. In other words, the operators $H_\alpha^\alpha$ and $J_\alpha^\alpha$ are diagonalizable. The relation \eqref{eq: Hamiltonian} for $\underline{H}$ and the relation \eqref{eq: degree-grading operator} for $\underline{J}$ follow from the fact that the corresponding relations hold locally. We must check the quadruple $\mathcal{V}:=(\mathcal{F}, \underline{\mathbf{1}}, \underline{T}, \underline{(n)}; n\in\mathbb{Z})$ is a VSA-inductive sheaf on $X$. 
Let $V$ be an arbitrary open subset of $X$. It suffices to show that the quadruple $(\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V), \mathbf{1}, T, (n); n\in\mathbb{Z})$ is a vertex superalgebra, where $\mathbf{1}:=\underrightarrow{\mathrm{Lim}}\, \underline{\mathbf{1}}(V)(1)$, $T:=\underrightarrow{\mathrm{Lim}}\, \underline{T}(V)$ and $(n):=\underrightarrow{\mathrm{Lim}}\, \underline{(n)}(V)$. The map induced by the restriction maps and the isomorphisms $\mathcal{F}_\alpha|_{U_\lambda}\cong\mathcal{F}^\lambda_\alpha$, \begin{equation}\label{eq: inclusion into the product VSA} \varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)\to\prod_{\lambda\in\Lambda}\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V\cap U_\lambda)\cong\prod_{\lambda\in\Lambda}\varinjlim_{\alpha\in A}\mathcal{F}_\alpha^\lambda(V\cap U_\lambda), \end{equation} is injective since the morphisms $\mathcal{F}^\lambda(f_{\alpha\alpha'})$ are all injective. Via this map, we regard $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)$ as a subspace of $\prod_{\lambda\in\Lambda}\varinjlim_{\alpha\in A}\mathcal{F}_\alpha^\lambda(V\cap U_\lambda)$. Then by the construction, $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)$ is preserved by $\prod_{\lambda\in\Lambda}T^\lambda$ and $\prod_{\lambda\in\Lambda}(n)^\lambda$ with $n\in\mathbb{Z}$, where $T^\lambda$ and $(n)^\lambda$ are the translation operator and the $n$-th product of $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha^\lambda(V\cap U_\lambda)$, respectively. Moreover $(\mathbf{1}^\lambda)_{\lambda\in\Lambda}\in \varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)$, where $\mathbf{1}^\lambda$ is the vacuum vector of $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha^\lambda(V\cap U_\lambda)$. Note that $T=(\prod_{\lambda\in\Lambda}T^\lambda)|_{\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)}$, $(n)=(\prod_{\lambda\in\Lambda}(n)^\lambda)|_{\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)}$ and $\mathbf{1}=(\mathbf{1}^\lambda)_{\lambda\in\Lambda}$. Moreover by the assumption (5), the weight-grading on $\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)$ is bounded from below. Therefore the formal distribution $\sum_{n\in\mathbb{Z}}v_{(n)}z^{-n-1}$ is a field for any $v\in\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V)$. Thus $(\varinjlim_{\alpha\in A}\mathcal{F}_\alpha(V), \mathbf{1}, T, (n); n\in \mathbb{Z})$ is a vertex superalgebra. Therefore the quadruple $\mathcal{V}=(\mathcal{F}, \underline{\mathbf{1}}, \underline{T}, \underline{(n)}; n\in\mathbb{Z})$ with $\underline{H}$ and $\underline{J}$ is an object of the category $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^A$. It remains to construct a strict isomorphism $\mathcal{V}|_{U_\lambda}\cong\mathcal{V}^\lambda$ for each $\lambda\in \Lambda$. Let $\lambda\in\Lambda$. We set $$ \Phi^\lambda:=\bigl([\Phi^{\lambda, \alpha}_\alpha]\bigr)_{\alpha\in \mathrm{Ob}(A)}, $$ where $\Phi^{\lambda, \alpha}_\alpha$ is the usual sheaf isomorphism from $\mathcal{F}_\alpha|_{U_\lambda}$ to $\mathcal{F}_\alpha^\lambda$, which preserves the degree-weight-grading. This defines a morphism of ind-objects, or equivalently, $$ \Phi_\alpha^{\lambda, \alpha}\circ\mathcal{F}(f_{\alpha \alpha'})\sim\Phi_{\alpha'}^{\lambda, \alpha'}, $$ for any object $\alpha, \alpha'$ of $A$ and morphism $f_{\alpha \alpha'}: \alpha'\to \alpha$ in $A$.
Indeed by the construction of $\mathcal{F}(f_{\alpha \alpha'})$, we have $$ \Phi_\alpha^{\lambda, \alpha}\circ (\mathcal{F}(f_{\alpha \alpha'})|_{U_\lambda})=\mathcal{F}^\lambda(f_{\alpha \alpha'})\circ\Phi_{\alpha'}^{\lambda, \alpha'}, $$ for each object $\alpha, \alpha'$ of $A$ and morphism $f_{\alpha \alpha'}: \alpha'\to \alpha$ in $A$. The relation $(\Phi^\lambda|_{U_\lambda\cap U_\mu}) \circ (\Phi^\mu|_{U_\mu\cap U_\lambda})^{-1}=\vartheta_{\lambda \mu}$ holds since $(\Phi^{\lambda, \alpha}_\alpha|_{U_\lambda\cap U_\mu}) \circ (\Phi^{\mu, \alpha}_\alpha|_{U_\mu\cap U_\lambda})^{-1}=\vartheta_{\lambda \mu, \alpha}^\alpha$ for all $\alpha\in \mathrm{Ob}(A)$. Moreover $\Phi^\lambda$ is a strict isomorphism of VSA-inductive sheaves. In other words, $\Phi^\lambda$ commutes with $\underline{(n)}$, $\underline{T}$ and $\underline{\mathbf{1}}$ and in addition the strict inverse morphism $(\Phi^\lambda)^{-1}$ exists. This follows from the construction of the operators $\underline{(n)}$, $\underline{T}$, $\underline{\mathbf{1}}$ and from the definition of $\Phi^\lambda$. Moreover $\Phi^\lambda$ commutes with the Hamiltonians and the degree-grading operators since each $\Phi_\alpha^{\lambda, \alpha}$ does. Thus the existence part is proved. The uniqueness part is proved by the argument as in usual sheaf cases as well as the arguments used above with the fact that the morphisms $\mathcal{F}^\lambda(f_{\alpha'\alpha})$ are injective for all $f_{\alpha'\alpha}\in\mathrm{Hom}_A(\alpha, \alpha')$. \end{proof} We can also glue morphisms. Let $X=\bigcup_{\lambda \in \Lambda}U_\lambda$ be as before. Let $\bigl((\mathcal{V}^\lambda)_{\lambda\in\Lambda},$ $ (\vartheta_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$, $ \bigl((\mathcal{V}'^\lambda)_{\lambda\in\Lambda},$ $(\vartheta'_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$ be families of degree-weight-graded VSA-inductive sheaves with strict isomorphisms as in Proposition \ref{prop: GLUING VsaIndShs}. In other words, $\mathcal{V}^\lambda=\bigl(\mathcal{F}^\lambda, \underline{\mathbf{1}}^\lambda, \underline{T}^\lambda,$ $ \underline{(n)}^\lambda; n\in\mathbb{Z} \bigr)$ and $\mathcal{V}'^\lambda=\bigl(\mathcal{F}'^\lambda, \underline{\mathbf{1}'}^\lambda, \underline{T}'^\lambda, \underline{(n)}'^\lambda; n\in\mathbb{Z} \bigr)$ are objects of $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_{U_\lambda}^A$, and $\vartheta_{\lambda \mu}: \mathcal{F}^\mu|_{U_\mu \cap U_\lambda} \to \mathcal{F}^\lambda|_{U_\lambda \cap U_\mu}$ and $\vartheta'_{\lambda \mu}: \mathcal{F}'^\mu|_{U_\mu \cap U_\lambda} \to \mathcal{F}'^\lambda|_{U_\lambda \cap U_\mu}$ are strict isomorphisms of degree-weight-graded VSA-inductive sheaves on $U_\lambda \cap U_\mu$ such that the conditions $(0)$-$(5)$ hold. Let $\mathcal{V}$ with $(\Phi^\lambda)_{\lambda \in \Lambda}$ and $\mathcal{V}'$ with $(\Phi'^\lambda)_{\lambda\in\Lambda}$ be objects of $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^A$ with strict isomorphisms obtained by gluing $\bigl((\mathcal{V}^\lambda)_{\lambda\in\Lambda}, (\vartheta_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$ and $\bigl((\mathcal{V}'^\lambda)_{\lambda\in\Lambda},$ $ (\vartheta'_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$, respectively. 
Suppose given a family of morphisms of ind-objects of sheaves $(F^\lambda: \mathcal{V}^\lambda \to \mathcal{V}'^\lambda)_{\lambda\in\Lambda}$ such that $\vartheta'_{\lambda \mu}\circ (F^\mu|_{U_\mu\cap U_\lambda}) = (F^\lambda|_{U_\lambda\cap U_\mu})\circ \vartheta_{\lambda \mu}$ for all $\lambda, \mu \in \Lambda$. We assume that there exist a map $j_F: \mathrm{Ob}(A)\to\mathrm{Ob}(A)$ and sheaf morphisms $F^{\lambda, j_F(\alpha)}_\alpha$ with $\alpha\in \mathrm{Ob}(A)$ and $\lambda\in \Lambda$ such that $ F^\lambda=\bigl( [F^{\lambda, j_F(\alpha)}_\alpha] \bigr)_{\alpha\in \mathrm{Ob}(A)} $ for all $\lambda \in \Lambda$. \begin{proposition}\label{prop: GLUING MORPHISMS OF VsaIndSh} In the above situation, there exists a unique strict morphism $F: \mathcal{V}\to \mathcal{V}'$ of ind-objects of sheaves on $X$ such that $\Phi'^\lambda\circ F|_{U_\lambda}=F^\lambda\circ \Phi^\lambda$ for any $\lambda\in\Lambda$. Moreover if the morphisms $F^\lambda$ are all morphisms of VSA-inductive sheaves, the resulting morphism of ind-objects $F$ is also a morphism of VSA-inductive sheaves. \end{proposition} \begin{proof} We can construct $F: \mathcal{V}\to \mathcal{V}'$, following the same argument as for the construction of $\underline{(n)}$ in Proposition \ref{prop: GLUING VsaIndShs}. The uniqueness part is proved by the same argument as in usual sheaf cases as well as the same arguments as in the proof of Proposition \ref{prop: GLUING VsaIndShs}. The latter half of this proposition is also checked by arguments similar to those above. \end{proof} Consider three families of degree-weight-graded VSA-inductive sheaves with strict isomorphisms as in Proposition \ref{prop: GLUING VsaIndShs}, $\bigl((\mathcal{V}^\lambda)_{\lambda\in\Lambda}, (\vartheta_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$, $\bigl((\mathcal{V}'^\lambda)_{\lambda\in\Lambda},$ $ (\vartheta'_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$ and $\bigl((\mathcal{V}''^\lambda)_{\lambda\in\Lambda},$ $ (\vartheta''_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$. Suppose given families of morphisms of ind-objects of sheaves $(F^\lambda: \mathcal{V}^\lambda \to \mathcal{V}'^\lambda)_{\lambda\in\Lambda}$ and $(F'^\lambda: \mathcal{V}'^\lambda \to \mathcal{V}''^\lambda)_{\lambda\in\Lambda}$ as in Proposition \ref{prop: GLUING MORPHISMS OF VsaIndSh}. In other words, $\vartheta'_{\lambda \mu}\circ (F^\mu|_{U_\mu\cap U_\lambda}) = (F^\lambda|_{U_\lambda\cap U_\mu})\circ \vartheta_{\lambda \mu}$ and $\vartheta''_{\lambda \mu}\circ (F'^\mu|_{U_\mu\cap U_\lambda}) = (F'^\lambda|_{U_\lambda\cap U_\mu})\circ \vartheta'_{\lambda \mu}$ hold for all $\lambda, \mu \in \Lambda$, and moreover, there exist maps $j_F: \mathrm{Ob}(A)\to\mathrm{Ob}(A)$, $j_{F'}: \mathrm{Ob}(A)\to\mathrm{Ob}(A)$ and sheaf morphisms $F^{\lambda, j_F(\alpha)}_\alpha$, $F'^{\lambda, j_{F'}(\alpha)}_\alpha$ with $\alpha\in \mathrm{Ob}(A)$ and $\lambda\in \Lambda$ such that $ F^\lambda=\bigl( [F^{\lambda, j_F(\alpha)}_\alpha] \bigr)_{\alpha\in \mathrm{Ob}(A)} $ and $ F'^\lambda=\bigl( [F'^{\lambda, j_{F'}(\alpha)}_\alpha] \bigr)_{\alpha\in \mathrm{Ob}(A)} $ for all $\lambda \in \Lambda$. Let $F: \mathcal{V}\to \mathcal{V}'$, $F': \mathcal{V}'\to\mathcal{V}''$ and $G: \mathcal{V}\to\mathcal{V}''$ be the morphisms obtained by gluing $(F^\lambda)_{\lambda\in\Lambda}$, $(F'^\lambda)_{\lambda\in\Lambda}$ and $(F'^\lambda\circ F^\lambda)_{\lambda\in\Lambda}$, respectively. 
\begin{proposition}\label{prop: FUNCTORIALITY OF GLUING} In the above situation, the composite $F'\circ F$ agrees with the morphism $G$. \end{proposition} \begin{proof} This proposition is a direct corollary of Proposition \ref{prop: GLUING MORPHISMS OF VsaIndSh}. \end{proof} \begin{remark}\label{rem: Isomorphic as VSA-inductive sheaves if locally isomorphic} Two VSA-inductive sheaves are strictly isomorphic if they are strictly isomorphic locally via strict isomorphisms which coincide on the overlaps of their domains. \end{remark} \begin{remark}\label{rem: morphisms coincides if so locally} Two morphisms of ind-objects between VSA-inductive sheaves coincide with each other if they coincide locally. \end{remark} \subsection{From Presheaves to VSA-Inductive Sheaves}\label{subsection: From Presheaves to VSA-Inductive Sheaves} We construct VSA-inductive sheaves from presheaves of vertex superalgebras with some properties. We denote by $\mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$ the full subcategory of the category of presheaves on $X$ of degree-weight-graded vertex superalgebras over $\mathbb{K}$ whose objects are presheaves $\Tilde{\mathcal{V}}$ of degree-weight-graded vertex superalgebras on $X$ such that the weight-grading on $\Tilde{\mathcal{V}}(U)$ is bounded from below uniformly with respect to open subsets $U\subset X$, and such that the subpresheaf $\Tilde{\mathcal{V}}[n]$ defined by the assignment $ U\mapsto \Tilde{\mathcal{V}}(U)[n] $ is a sheaf of super vector spaces for any $n \in\mathbb{Z}$. Let $\Tilde{\mathcal{V}}$ be an object of $\mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$. We set $$ \Tilde{\mathcal{V}}[\le N]:=\bigoplus_{n\le N}\Tilde{\mathcal{V}}[n], $$ for $N\in \mathbb{N}$. These are sheaves by the assumptions. Consider the canonical inductive system of sheaves $(\Tilde{\mathcal{V}}[\le N])_{N\in \mathbb{N}}$. Then the corresponding ind-object $``\displaystyle\varinjlim_{N\in \mathbb{N}}"\Tilde{\mathcal{V}}[\le N]$ has a canonical VSA-inductive sheaf structure induced by the morphisms $\underline{\mathbf{1}}^0: \mathbb{K}_X\to \Tilde{\mathcal{V}}[\le 0]$ defined by $\mathbb{K}\to \Gamma(\Tilde{\mathcal{V}}[\le 0]), 1\mapsto \mathbf{1}$, $T_N: \Tilde{\mathcal{V}}[\le N]\to\Tilde{\mathcal{V}}[\le N+1]$, and $(n)_{N, M}: \Tilde{\mathcal{V}}[\le N]\times \Tilde{\mathcal{V}}[\le M]\to \Tilde{\mathcal{V}}[\le N+M-n-1]$, where $\mathbf{1}$, $T_N$, $(n)_{N, M}$ come from the vertex superalgebra structure on $\Tilde{\mathcal{V}}$. Note that the VSA-inductive sheaf $``\displaystyle\varinjlim_{N\in \mathbb{N}}"\Tilde{\mathcal{V}}[\le N]$ has a canonical degree-weight-graded structure. We refer to this degree-weight-graded VSA-inductive sheaf as the degree-weight-graded VSA-inductive sheaf associated with $\Tilde{\mathcal{V}}$. \begin{lemma}\label{lem: MAKE VsaIndSh from PRESHEAVES OF Z-GRADED Vsa} There exists a canonical functor $$ \mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh} \to \textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^\mathbb{N}, $$ sending an object $\Tilde{\mathcal{V}}$ of $\mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$ to the degree-weight-graded VSA-inductive sheaf associated with $\Tilde{\mathcal{V}}$.
\end{lemma} \begin{proof} For a morphism $\Tilde{F}: \Tilde{\mathcal{V}}\to\Tilde{\mathcal{W}}$ in $\mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$, we assign the morphism $F:=\bigl([\Tilde{F}|_{\Tilde{\mathcal{V}}[\le N]}]\bigr)_{N\ge 0}$ in $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X^\mathbb{N}$. The functoriality follows directly from the definition. \end{proof} \begin{remark} The composite of the functor $\underrightarrow{\mathrm{Lim}}\, $ given in \eqref{eq: functor from VSA-ISh} and the one given in Lemma \ref{lem: MAKE VsaIndSh from PRESHEAVES OF Z-GRADED Vsa} is the identity functor. \end{remark} \begin{remark}\label{rem: homogeneous morphism of presheaves induces that of ind-objects} Let $\Tilde{\mathcal{V}}, \Tilde{\mathcal{W}}$ be objects of the category $\mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$ and $\mathcal{V}, \mathcal{W}$ the VSA-inductive sheaves associated with $\Tilde{\mathcal{V}}, \Tilde{\mathcal{W}}$, respectively. Let $\Tilde{F}: \Tilde{\mathcal{V}}\to\Tilde{\mathcal{W}}$ be a homogeneous linear morphism of degree $d$. In other words, $\Tilde{F}(U): \Tilde{\mathcal{V}}(U)\to \Tilde{\mathcal{W}}(U)$ is a homogeneous linear map of degree $d$ for any open subset $U\subset X$. Then $\Tilde{F}$ induces a morphism $F$ of ind-objects: $F:=\bigl([\Tilde{F}|_{\Tilde{\mathcal{V}}[\le N]}]\bigr)_{N\ge 0}: \mathcal{V}\to \mathcal{W}$. Moreover the corresponding morphism $\underrightarrow{\mathrm{Lim}}\, F$ of presheaves is nothing but the morphism $\Tilde{F}: \Tilde{\mathcal{V}}=\underrightarrow{\mathrm{Lim}}\, \mathcal{V}\to\Tilde{\mathcal{W}}=\underrightarrow{\mathrm{Lim}}\, \mathcal{W}$. \end{remark} \begin{remark} The functor given in Lemma \ref{lem: MAKE VsaIndSh from PRESHEAVES OF Z-GRADED Vsa} commutes with the restriction. More precisely, if $U\subset X$ is an open subset and $\mathcal{V}$ is the degree-weight-graded VSA-inductive sheaf associated with an object $\Tilde{\mathcal{V}}$ of $\mathit{Presh}_X(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$, then we have $\mathcal{V}|_U=\mathcal{V}_U$, where $\mathcal{V}_U$ stands for the VSA-inductive sheaf associated with the presheaf $\Tilde{\mathcal{V}}|_U$. Here we denote by $\Tilde{\mathcal{V}}|_U$ the presheaf, not its sheafification, obtained by restricting $\Tilde{\mathcal{V}}$ to $U$. \end{remark} \subsection{More on VSA-Inductive Sheaves} Let $\varphi: X\to Y$ be a continuous map between topological spaces. Consider the functor induced by the push-forward functor $\varphi_*$ of presheaves: \begin{align} \label{eq: push-forward functor of ind-objects} \varphi_*&: \mathrm{Ind}(\mathit{Presh}_X(Vec_\mathbb{K}^\mathrm{super}))\longrightarrow \mathrm{Ind}(\mathit{Presh}_Y(Vec_\mathbb{K}^\mathrm{super})), \\ \text{objects}&:\quad\quad\quad \mathcal{F}=``\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha\longmapsto \varphi_*\mathcal{F}:=``\varinjlim_{\alpha\in A}"\varphi_*\mathcal{F}_\alpha, \\ \label{df: push-forward of morphism of ind-objects} \text{morphisms}&: F=\bigl([F_\alpha^{j(\alpha)}]\bigr)_{\alpha\in \mathrm{Ob}(A)}\longmapsto \varphi_*F:=\bigl([\varphi_*F_\alpha^{j(\alpha)}]\bigr)_{\alpha\in \mathrm{Ob}(A)}. \end{align} The push-forward of a bilinear morphism of ind-objects is given in a way similar to that in \eqref{df: push-forward of morphism of ind-objects}.
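For concreteness, we recall the section-wise description of the push-forward, with $\mathcal{G}$ denoting, only here, an arbitrary presheaf on $X$:
$$
(\varphi_*\mathcal{G})(V)=\mathcal{G}(\varphi^{-1}(V)), \qquad V\subset Y\ \text{open}.
$$
Thus, for a morphism $F=\bigl([F_\alpha^{j(\alpha)}]\bigr)_{\alpha\in \mathrm{Ob}(A)}$ as above, the component $\varphi_*F_\alpha^{j(\alpha)}$ acts on sections over $V$ simply as $F_\alpha^{j(\alpha)}(\varphi^{-1}(V))$, and the same section-wise description applies to bilinear morphisms.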
\begin{lemma} Let $\varphi: X\to Y$ be a continuous map between topological spaces and $\mathcal{V}=\bigl( \mathcal{F}, \underline{\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ a VSA-inductive sheaf on $X$. The quadruple $\varphi_*\mathcal{V}:=\bigl( \varphi_*\mathcal{F}, \varphi_*\underline{\mathbf{1}}, \varphi_*\underline{T}, \varphi_*\underline{(n)}; n\in \mathbb{Z}\bigr)$ is a VSA-inductive sheaf on $Y$. Moreover, if $F$ is a morphism in $\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X$, then $\varphi_*F$ is a morphism in $\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_Y$. \end{lemma} \begin{proof} We can immediately see that $\varphi_*\mathcal{V}$ is a VSA-inductive sheaf. The latter half of this lemma follows from the functoriality of the push-forward functor of presheaves. \end{proof} By the above lemma, we can restrict the functor \eqref{eq: push-forward functor of ind-objects} to obtain a functor $\varphi_*: \mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X\to\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_Y$ sending an object $\mathcal{V}$ to $\varphi_*\mathcal{V}$ and a morphism $F$ to $\varphi_*F$. We call the VSA-inductive sheaf $\varphi_*\mathcal{V}$ the \textbf{push-forward} of $\mathcal{V}$. \begin{remark} The push-forward functor commutes with the functor $\underrightarrow{\mathrm{Lim}}\, $ as well as the one given in Lemma \ref{lem: MAKE VsaIndSh from PRESHEAVES OF Z-GRADED Vsa}. \end{remark} Let $\mathcal{V}_1=\bigl( \mathcal{F}_1, \underline{\mathbf{1}}_1, \underline{T}_1, \underline{(n)}_1; n\in \mathbb{Z}\bigr)$ and $\mathcal{V}_2=\bigl( \mathcal{F}_2, \underline{\mathbf{1}}_2, \underline{T}_2, \underline{(n)}_2;$ $n\in \mathbb{Z}\bigr)$ be VSA-inductive sheaves on topological spaces $X_1$ and $X_2$, respectively. A \textbf{morphism} of vertex superalgebra inductive sheaves from $\mathcal{V}_1$ to $\mathcal{V}_2$ is by definition a pair $(\varphi, \Phi)$ of a continuous map $\varphi: X_2 \to X_1$ and a base-preserving morphism $\Phi: \mathcal{V}_1 \to \varphi_*\mathcal{V}_2$ of VSA-inductive sheaves on $X_1$. \begin{remark}\label{rem: ALL VSA-inductive sheaves form a category} VSA-inductive sheaves form a category with morphisms defined above, whose composition is defined by $ (\varphi', \Phi')\circ(\varphi, \Phi):=(\varphi\circ\varphi', \varphi_*\Phi'\circ\Phi) $ for morphisms of VSA-inductive sheaves $(\varphi, \Phi): \mathcal{V}_1\to \mathcal{V}_2$ and $(\varphi', \Phi'): \mathcal{V}_2\to \mathcal{V}_3$. \end{remark} \begin{notation} Denote by $\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}$ the category of VSA-inductive sheaves obtained in Remark \ref{rem: ALL VSA-inductive sheaves form a category}. \end{notation} \begin{remark} If $\varphi: X\to Y$ is a continuous map and $(\mathcal{V}, \underline{H}, \underline{J})$ is a degree-weight-graded VSA-inductive sheaf on $X$, then $(\varphi_*\mathcal{V}, \varphi_*\underline{H}, \varphi_*\underline{J})$ is a degree-weight-graded VSA-inductive sheaf on $Y$. In a similar way as above, the category of degree-weight-graded VSA-inductive sheaves is defined. \end{remark} \begin{notation} Let us denote by $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}$ the category of degree-weight-graded VSA-inductive sheaves. \end{notation} Let $\mathcal{V}$ be a VSA-inductive sheaf and $\mathcal{F}=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_{\alpha}$ the underlying ind-object.
Then we have two canonical ind-objects of sheaves, $\mathcal{F}_{\bar{0}}:=``\displaystyle\varinjlim_{\alpha\in A}"(\mathcal{F}_\alpha)_{\bar{0}}$ and $\mathcal{F}_{\bar{1}}:=``\displaystyle\varinjlim_{\alpha\in A}"(\mathcal{F}_\alpha)_{\bar{1}}$. In addition, suppose that $\mathcal{V}$ is degree-graded. Then we have ind-objects of sheaves, $\mathcal{F}^l:=``\displaystyle\varinjlim_{\alpha\in A}"\mathcal{F}_\alpha^l$, where $\mathcal{F}_\alpha^l$ is the subsheaf of degree $l\in\mathbb{Z}$. \begin{definition} Let $\mathcal{V}=\bigl( \mathcal{F}, \underline {\mathbf{1}}, \underline{T}, \underline{(n)}; n\in \mathbb{Z}\bigr)$ be a degree-weight-graded VSA-inductive sheaf. A \textbf{differential} on $\mathcal{V}$ is an odd morphism of ind-objects, $D: \mathcal{F}\to\mathcal{F}$ such that \begin{equation}\label{eq: deg 1 wt 0 condition of differential on VSA-inductive sheaf} [\underline{H}, D]=0, \quad [\underline{J}, D]=D, \end{equation} \begin{equation}\label{eq: square 0 condition of differential on VSA-inductive sheaf} D^2=0, \end{equation} \begin{equation}\label{eq: derivation condition of differential on VSA-inductive sheaf} D\circ\underline{(n)}-(-1)^{i}\underline{(n)}\circ(\mathrm{id}\times D)=\underline{(n)}\circ (D\times\mathrm{id}), \end{equation} on $\mathcal{F}_{\bar{i}}\times\mathcal{F}$ for all $n\in\mathbb{Z}$ and $i=0, 1$, and \begin{equation}\label{eq: degree derivation condition of differential on VSA-inductive sheaf} D\circ\underline{(n)}-(-1)^l\underline{(n)}\circ(\mathrm{id}\times D)=\underline{(n)}\circ (D\times\mathrm{id}), \end{equation} on $\mathcal{F}^l\times\mathcal{F}$ for all $l\in\mathbb{Z}$. Here $\underline{H}$ is the Hamiltonian of $\mathcal{V}$ and $\underline{J}$ is the degree-grading operator of $\mathcal{V}$. \end{definition} By a \textbf{differential degree-weight-graded VSA-inductive sheaf}, we mean a degree-weight-graded VSA-inductive sheaf given a differential. \begin{remark}\label{rem: presheaf of differential VSA-inductive sheaf} Let $(\mathcal{V}, D)$ be a differential degree-weight-graded VSA-inductive sheaf. Then $(\underrightarrow{\mathrm{Lim}}\, \mathcal{V},$ $\underrightarrow{\mathrm{Lim}}\, D)$ is a presheaf of differential degree-weight-graded vertex superalgebras. \end{remark} \begin{notation} We denote by $\textit{Diff-}\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_X$ the category of differential degree-weight-graded VSA-inductive sheaves on a topological space $X$, whose morphisms are morphisms of degree-weight-graded VSA-inductive sheaves on $X$ commuting with the differentials. \end{notation} \begin{remark} Let $\varphi: X\to Y$ be a continuous map between topological spaces and $(\mathcal{V}, D)$ a differential degree-weight-graded VSA-inductive sheaf on $X$. Then the pair $(\varphi_*\mathcal{V}, \varphi_*D)$ is a differential degree-weight-graded VSA-inductive sheaf on $Y$. \end{remark} \begin{notation} We denote by $\textit{Diff-}\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}$ the category of differential degree-weight-graded VSA-inductive sheaves, whose morphisms are morphisms of degree-weight-graded VSA-inductive sheaves which commute with the differentials. \end{notation} \section{Chiral Lie Algebroid Cohomology}\label{section: Chiral Lie Algebroid Cohomology} In this section, we will construct VSA-inductive sheaves associated with vector bundles. For that purpose, we will construct VSA-inductive sheaves on an affine space. 
Then we will glue them to obtain a VSA-inductive sheaf on a manifold, using the facts proved in the preceding section. By a manifold, we will mean a $C^\infty$-manifold. Let $M$ be a manifold. We simply denote by $TM$ the tangent bundle tensored by $\mathbb{K}$, $TM\otimes_{\mathbb{R}}\mathbb{K}$. We use a similar notation for the cotangent bundle $T^*M$, the sheaf of functions $C^\infty=C^\infty_M$ and the sheaf of vector fields $\mathscr{X}=\mathscr{X}_M$. By a vector bundle we will mean a real or a complex vector bundle, according as $\mathbb{K}$ is $\mathbb{R}$ or $\mathbb{C}$. \subsection{Lie Algebroids}\label{subsection: Lie Algebroids} In this subsection, we recall the notion of Lie algebroids. We refer the reader to \cite{DZ05,HM90,Mac05,Pra67,Vai97,Vai91} for more details. Let $M$ be a manifold. A \textit{Lie algebroid} on $M$ is a vector bundle $A$ together with a vector bundle map $a: A \to TM$ over $M$, called the anchor map of $A$, and a $\mathbb{K}$-linear Lie bracket $[\,,]$ on $\Gamma(A)$ such that $ [X,fY]=f[X,Y]+a(X)(f)Y $ for all $X, Y\in \Gamma(A), f\in C^\infty(M).$ When $\mathbb{K}$ is $\mathbb{R}$, the corresponding Lie algebroids are called \textit{real Lie algebroids}. Similarly, when $\mathbb{K}=\mathbb{C}$, the corresponding Lie algebroids are called \textit{complex Lie algebroids}. \begin{example}[tangent bundles]\label{ex: tangent bundle Lie algebroid} The tangent bundle $TM$ of a manifold $M$, with bracket the Lie bracket of vector fields and with anchor the identity of $TM$, is a Lie algebroid on $M.$ \end{example} \begin{example}[transformation Lie algebroids]\label{ex: transformation Lie algebroid} Let $ \rho: \mathfrak{g} \to \mathscr{X}(M) $ be an infinitesimal action of a Lie algebra $\mathfrak{g}$ on a manifold $M.$ Then there is a natural Lie algebroid structure on the trivial vector bundle $M \times \mathfrak{g}$ with the anchor map $ a_\rho(m, \xi):=\rho(\xi)(m) $ for $(m, \xi)\in M\times \mathfrak{g}$ and the bracket on $\Gamma(M\times \mathfrak{g})\cong C^\infty(M, \mathfrak{g})$ $$ [X,Y]_\rho:=[X,Y]_{\mathfrak{g}}+a_\rho(X)Y-a_\rho(Y)X, $$ for $X, Y\in C^\infty(M, \mathfrak{g}).$ This Lie algebroid $(M\times \mathfrak{g}, a_\rho, [\,,]_\rho)$ is called the \textit{transformation Lie algebroid} associated with $\rho$. \end{example} \begin{example}[cotangent Lie algebroids]\label{ex: cotangent Lie algebroid} Let $(M, \Pi)$ be a Poisson manifold with the Poisson bivector field $\Pi.$ Consider the map $$ \Pi^\sharp: T^*M \to TM,\ \Pi^\sharp(\alpha)(\beta):=\Pi(\alpha, \beta)\ \text{for}\ \alpha, \beta \in \Gamma(T^*M). $$ Then, with $\Pi^\sharp$ as the anchor map, the cotangent bundle $T^*M$ becomes a Lie algebroid on $M$, where the Lie bracket on $\Gamma(T^*M)$ is given by $$ [\alpha,\beta]:=d(\Pi(\alpha, \beta))+i_{\Pi^\sharp\alpha}d\beta-i_{\Pi^\sharp\beta}d\alpha, $$ for $\alpha, \beta \in \Gamma(T^*M).$ The Lie algebroid $(T^*M, \Pi^\sharp, [\,,])$ is called the \textit{cotangent Lie algebroid} of $(M, \Pi).$ \end{example} Let us recall the notion of a Lie algebroid representation. Let $(A, a, [\,,])$ be a Lie algebroid on a manifold $M$ and $E$ a vector bundle on $M$.
An $A$-\textit{connection} on $E$ is a map $ \nabla: \Gamma(A) \times \Gamma(E) \to \Gamma(E) $ such that \begin{enumerate}[$\bullet$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{30pt} \setlength{\itemindent}{10pt} \item $\nabla_{X+Y}s=\nabla_X s+\nabla_Y s,$ \item $\nabla_X(s+s')=\nabla_X s+\nabla_X s',$ \item $\nabla_{fX}s=f\nabla_X s,$ \item $\nabla_X(fs)=f\nabla_X s +a(X)(f)s,$ \end{enumerate} for all $X, Y\in \Gamma(A),\ s, s'\in \Gamma(E),$ and $f\in C^\infty(M).$ An $A$-connection $\nabla$ on $E$ is said to be \textit{flat} if $ \nabla_{[X,Y]}s=\nabla_X(\nabla_Y s)-\nabla_Y(\nabla_X s) $ for all $X, Y\in \Gamma(A)$ and $s\in \Gamma(E).$ A flat $A$-connection on $E$ is also called a \textit{representation} of $A$ on $E$. \begin{example}[trivial representations]\label{ex: trivial representation of Lie algebroid} Let $A$ be a Lie algebroid on $M$ and $V$ a vector space. The \textit{trivial representation} of $A$ on $M \times V$ is given by $ \nabla_X f:=a(X)(f) $ for $X\in \Gamma(A)$ and $f: M \to V.$ \end{example} Let us now recall the definition of the Lie algebroid cohomology. For a Lie algebroid $(A, a, [\,,])$ on a manifold $M$ and a representation $(E, \nabla)$ of $A$ on a vector bundle $E$, consider a complex $\Omega^\bullet(A; E):=\Gamma((\wedge^\bullet A^*)\otimes E)$ and a differential $d_{\text{Lie}}^E: \Omega^\bullet(A; E) \to \Omega^{\bullet+1}(A; E)$ defined by \begin{multline*} (d_{\text{Lie}}^E \omega)(X_1, \dots, X_{n+1}) = \sum_{i=1}^{n+1}(-1)^{i+1}\nabla_{X_i}(\omega(X_1, \dots, \check{X_i}, \dots, X_{n+1})) \\ +\sum_{1\le i<j\le n+1}(-1)^{i+j}\omega([X_i,X_j], X_1, \dots, \check{X_i}, \dots, \check{X_j}, \dots, X_{n+1}), \end{multline*} for $\omega \in \Omega^n(A; E)$ and $X_1, \dots, X_{n+1}\in \Gamma(A).$ The cohomology space $H^\bullet(A; E)$ of the complex $(\Omega^\bullet(A; E), d_{\text{Lie}}^E)$ is called the \textit{Lie algebroid cohomology} with coefficients in $E$. Any $X\in \Gamma(A)$ induces the \textit{Lie derivative} $L_X: \Omega^\bullet(A; E) \to \Omega^\bullet(A; E)$ and the \textit{interior product} $\iota_X: \Omega^\bullet(A; E) \to \Omega^{\bullet-1}(A; E):$ \begin{gather*} (L_X\omega)(X_1, \dots, X_n)=\nabla_X(\omega(X_1, \dots, X_n)) -\sum_{i=1}^{n}\omega(X_1, \dots, [X,X_i], \dots, X_n), \\ (\iota_X\omega)(X_1, \dots, X_{n-1})=\omega(X, X_1, \dots, X_{n-1}), \end{gather*} where $\omega \in \Omega^n(A; E)$ and $X_1, \dots, X_n \in \Gamma(A).$ The Lie derivatives are derivations of degree $0$ and the interior products are derivations of degree $-1$. They satisfy the Cartan relations: \begin{gather*} [d_{\text{Lie}}^E,\iota_X]=L_X, \\ [L_X,L_Y]=L_{[X,Y]}, \\ [L_X,\iota_Y]=\iota_{[X,Y]}, \\ [\iota_X,\iota_Y]=0, \end{gather*} for all $X, Y\in \Gamma(A).$ When $(E, \nabla)$ is the trivial representation on the trivial line bundle, we simply denote by $(\Omega^\bullet(A), d_{\text{Lie}})$ and $H^\bullet(A)$ the corresponding complex and cohomology, respectively. \subsection{VSA-Inductive Sheaves on $\mathbb{R}^m$} In this subsection, we define an important object of the category $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_{\mathbb{R}^m}^\mathbb{N}$, which we will denote by $\Omega_\mathrm{ch}(\mathbb{R}^{m|r})$.
To this end, we first construct an object, denoted by $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})$, of the category $\mathit{Presh}_{\mathbb{R}^m}(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$, following the argument in \cite{LL}. Fix natural numbers $m$ and $r$. We consider the supermanifold $\mathbb{R}^{m|r}=\bigl(\mathbb{R}^m,$ $ C^\infty_{\mathbb{R}^m}\otimes_{\mathbb{K}} \bigwedge_{\mathbb{K}}(\theta^1, \dots, \theta^r)\bigr)$. We denote the structure sheaf $C^\infty_{\mathbb{R}^m}\otimes_\mathbb{K} \bigwedge_\mathbb{K}(\theta^1, \dots, \theta^r)$ by $C^\infty_{\mathbb{R}^{m|r}}$. Let $U$ be an open subset of $\mathbb{R}^m$. Consider the supercommutative superalgebra of functions on $U\subset \mathbb{R}^{m|r}$, $ C^{\infty}_{\mathbb{R}^{m|r}}(U)=C^\infty_{\mathbb{R}^m}(U)\otimes_{\mathbb{K}}\bigwedge\nolimits_{\mathbb{K}}(\theta^1, \dots, \theta^r), $ even derivations $\partial/\partial x^1, \dots, \partial/\partial x^m$ and odd derivations $\partial/\partial \theta^1, \dots, \partial/\partial \theta^r$ on it. Here $(x^1, \dots, x^m, \theta^1, \dots, \theta^r)$ is a standard supercoordinate on $U$. We consider the supercommutative Lie superalgebra $$ D^{m|r}(U):=\mathrm{Span}_\mathbb{K}\{ \partial/\partial x^i, \partial/\partial \theta^j | i=1, \dots, m,\ j=1, \dots, r\}, $$ acting on $C^{\infty}_{\mathbb{R}^{m|r}}(U)$ naturally, and set $$ \Lambda^{m|r}(U):= D^{m|r}(U) \ltimes C^{\infty}_{\mathbb{R}^{m|r}}(U). $$ We put $$ \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U):=N\bigl(\Lambda^{m|r}(U), 0\bigr)/\mathcal{I}^{m|r}(U), $$ where $\mathcal{I}^{m|r}(U)$ is the ideal of the affine vertex superalgebra $N(\Lambda^{m|r}(U), 0) \cong O(\Lambda^{m|r}(U), 0)$ (see Example \ref{ex:affine va}) generated by \begin{gather*} \frac{d}{dz}f(z)-\sum_{i=1}^m\,{\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,{\frac{d}{dz}x^i(z) \frac{\partial f}{\partial x^i}(z)}\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}}\,-\sum_{j=1}^r\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,\frac{d}{dz}\theta^j(z) \frac{\partial f}{\partial \theta^j}(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}, \\ (fg)(z)-\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,f(z)g(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}, \ 1(z)-\mathrm{id}, \end{gather*} where $f, g\in C^\infty_{\mathbb{R}^{m|r}}(U)\subset\Lambda^{m|r}(U)$. For $A\in N\bigl(\Lambda^{m|r}(U), 0\bigr)$, we denote by $\overline{A}$ the corresponding element in $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)$. The Lie superalgebra $\Lambda^{m|r}(U)$ has compatible two gradings when we give $D^{m|r}(U)$, $C^{\infty}_{\mathbb{R}^{m|r}}(U)$ weight $0, -1$, respectively, and if we give $\partial/\partial x^i$, $\partial/\partial\theta^j$ and $C^\infty_{\mathbb{R}^m}(U)\otimes\bigwedge^l(\theta^1, \dots, \theta^r)$ degree $0$, $-1$ and $l$, respectively. This induces a degree-weight-grading on the vertex superalgebra $N\bigl(\Lambda^{m|r}(U), 0\bigr)$ (see Example \ref{ex: another grading on N(g, 0)}). Then the ideal $\mathcal{I}^{m|r}(U)$ is a homogeneous ideal. Therefore the vertex superalgebra $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)$ becomes a degree-weight-graded vertex superalgebra. 
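As a small illustration of the defining relations (the choice of $f$ below is ours and serves only as an example, assuming $m\ge 2$), take $f=x^1x^2$. Since $\partial f/\partial x^1=x^2$, $\partial f/\partial x^2=x^1$ and $\partial f/\partial \theta^j=0$, the first family of relations says that in $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)$ we have
$$
\frac{d}{dz}(x^1x^2)(z)=\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,\frac{d}{dz}x^1(z)\,x^2(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,+\,\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,\frac{d}{dz}x^2(z)\,x^1(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,,
$$
while the second relation identifies $(x^1x^2)(z)$ with the normally ordered product of $x^1(z)$ and $x^2(z)$. In other words, the fields attached to functions are forced to obey the usual Leibniz and chain rules.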
We set $$ \Gamma_{m}(U):=N\bigl(D_m(U)\ltimes C^\infty_{\mathbb{R}^m}(U), 0\bigr)/\mathcal{I}_m(U), $$ where $D_m(U)$ is the commutative Lie algebra $\mathrm{Span}_{\mathbb{K}}\{ \partial/\partial x^i | i=1, \dots, m\}$ and $\mathcal{I}_m(U)$ is the ideal of the affine vertex algebra $N\bigl(D_m(U)\ltimes C^\infty_{\mathbb{R}^m}(U), 0\bigr)$ generated by \begin{gather}\label{eq: relations for Gamma_m(U)} \frac{d}{dz}f(z)-\sum_{i=1}^m\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,\frac{d}{dz}x^i(z)\frac{\partial f}{\partial x^i}(z) \,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,, \\ (fg)(z)-\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,f(z)g(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}, \ 1(z)-\mathrm{id}, \end{gather} where $f, g\in C^\infty_{\mathbb{R}^m}(U)$. Regard $D_m(U)\ltimes C^\infty_{\mathbb{R}^m}$ as a degree-weight-graded Lie subalgebra of $\Lambda^{m|r}(U)$. Then $\Gamma_m(U)$ becomes a degree-weight-graded vertex algebra (the degree-grading is trivial). For $A\in N\bigl(D_m(U)\ltimes C^\infty_{\mathbb{R}^m}(U), 0\bigr)$, we denote by $\overline{A}$ the corresponding element in $\Gamma_m(U)$. \begin{remark}\label{rem: generators of Gamma_m(U)} $\Gamma_m(U)$ is spanned by the vectors of the form $$ \overline{\partial/\partial x^{i_1}}_{(n_1)}\dots\overline{\partial/\partial x^{i_k}}_{(n_k)}\overline{x^{i'_1}}_{(n'_1)}\dots\overline{x^{i'_{k'}}}_{(n'_{k'})}\overline{f}_{(-1)}\mathbf{1}, $$ with $n_1, \dots, n_k\le-1,\ n'_1, \dots, n'_{k'}\le-2,$ $\ i_1, \dots, i_k, i'_1, \dots, i'_{k'}\in \{ 1, \dots, m\}$ and $f\in C^\infty_{\mathbb{R}^m}(U)$. This follows from the relations \eqref{eq: relations for Gamma_m(U)} by induction. \end{remark} We can rewrite the vertex superalgebra $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)$, using the vertex superalgebras $\Gamma_m(U)$ and $\mathcal{E}(W_r)$. Here $\mathcal{E}(W_r)$ is the $bc$-system associated with $W_r=\mathrm{Span}_{\mathbb{K}}\{ \theta^j |\ j=1, \dots, r\}$ regarded as an even vector space (see Example \ref{ex:bc-systems} for the definition of $bc$-systems and the notation used in the following proof). The $\mathbb{Z}$-graded vertex superalgebra $ \Gamma_m(U)\otimes\mathcal{E}(W_r)$ is degree-weight-graded when the degree-grading is given by the operator $j_{bc, (0)}$, where $j_{bc}=-\sum_{j=1}^r b^{\partial/\partial\theta^j}_{-1}c^{\theta^j}_{0}\mathbf{1}$. \begin{lemma}\label{lem: omega_ch=beta-gamma_bc} There exists a canonical isomorphism of degree-weight-graded vertex superalgebras $$ \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)\cong \Gamma_m(U)\otimes \mathcal{E}(W_r). $$ \end{lemma} \begin{proof} The assertion follows by the same argument as in \cite{LL}, where the case when $m=r$ is considered. 
The canonical isomorphism is induced by the linear map $\alpha: \Lambda^{m|r}(U) \to \Gamma_m(U)\otimes \mathcal{E}(W_r)$ defined by \begin{align*} \alpha(\partial/\partial x^i)&:= \overline{\partial/\partial x^i}_{(-1)}\mathbf{1} \otimes \mathbf{1},\ i=1, \dots, m, \\ \alpha(\partial/\partial \theta^j)&:= \mathbf{1} \otimes b^{\partial/\partial\theta^{j}}_{-1}\mathbf{1},\ j=1, \dots, r, \\ \alpha(f\otimes \theta^{j_1}\dotsm\theta^{j_l})&:=\overline{f}_{(-1)}\mathbf{1} \otimes c^{\theta^{j_1}}_0\dotsm c^{\theta^{j_l}}_0\mathbf{1}, \\ &\text{for}\ f\otimes \theta^{j_1}\cdots\theta^{j_l}\in C^\infty_{\mathbb{R}^m}(U)\otimes \bigwedge(\theta^1, \dots, \theta^r), \end{align*} where we regard $(\partial/\partial\theta^{1}, \dots, \partial/\partial\theta^{r})$ as the basis dual to $(\theta^{1}, \dots, \theta^{r})$ for $W_r^*$. \end{proof} We identify $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)$ with $\Gamma_m(U)\otimes \mathcal{E}(W_r)$ via this isomorphism. We denote by $b^j_n$ and $c^j_n$ the elements $b^{\partial/\partial\theta^{j}}_{n}=b^{\partial/\partial\theta^{j}}\otimes t^n$ and $c^{\theta^{j}}_n=\theta^{j}\otimes t^{n-1}dt$ in $\mathfrak{j}(W_r)$ (see Example \ref{ex:bc-systems} for the definition of the Lie superalgebra $\mathfrak{j}(W_r)$). Consider the vector space $V_m(U):=\oplus_{i=1}^m\mathbb{K}x^i\subset C^\infty(U)$. We regard $(\partial/\partial x^1,$ $\dots,$ $\partial/\partial x^m)$ as the basis dual to $(x^1, \dots, x^m)$ and identify the Lie algebra $D_m(U)=\mathrm{Span}_\mathbb{K}\{ \partial/\partial x^1, \dots, \partial/\partial x^m \}$ with the dual vector space $V_m(U)^*$. Consider the Heisenberg Lie algebra associated with $D_m(U)$: $$ \mathfrak{h}_m(U):=\mathfrak{h}(D_m(U))=(D_m(U)[t^{\pm1}]\oplus V_m(U)[t^{\pm1}]dt)\oplus \mathbb{K}\mathbf{\tau}, $$ with commutation relations $ [\beta^{\partial/\partial x^i}_p,\gamma^{x^{i'}}_q]=\delta_{p, -q}\frac{\partial}{\partial x^i}(x^{i'})\mathbf{\tau}, $ where $\beta^{\partial/\partial x^i}_p$ and $\gamma^{x^{i'}}_q$ stand for, respectively, $\partial/\partial x^i\otimes t^p$ and $x^{i'}\otimes t^{q-1}dt$ as in Example \ref{ex:beta_gamma-systems}. Consider the Heisenberg Lie algebra associated with the vector space $\mathbb{K}^m$: $$ \mathfrak{h}_m:=\mathfrak{h}(\mathbb{K}^m)=\mathbb{K}^m[t^{\pm1}]\oplus (\mathbb{K}^m)^*[t^{\pm1}]dt\oplus\mathbb{K}\mathbf{\tau}. $$ Let $(e_1, \dots, e_m)$ be the standard basis of $\mathbb{K}^m$ and $(\phi_1, \dots, \phi_m)$ the dual basis. We denote by $\beta^i_p$ and $\gamma^j_q$ the elements $e_i\otimes t^p$ and $\phi_j \otimes t^{q-1}dt$. We identify $\mathfrak{h}_m(U)$ with $\mathfrak{h}_m$ through the isomorphism induced by $ V_m(U)\to \mathbb{K}^m,\ x^i\mapsto e_i,\ i=1, \dots, m. $ We also denote $\beta^{\partial/\partial x^i}_p$ and $\gamma^{x^{i'}}_q$ by $\beta^i_p$ and $\gamma^{i'}_q$, respectively. We emphasize that the Lie algebra $\mathfrak{h}_m$ does not depend on the open subset $U$. Let ${\mathfrak{h}_m(U)}_{\ge0}\subset \mathfrak{h}_m(U)$ be the Lie subalgebra generated by $\mathbf{\tau}$ and $\beta^i_p, \gamma^i_q$ with $i=1, \dots, m$ and $p, q \ge0$. We make $C^\infty_{\mathbb{R}^m}(U)$ a ${\mathfrak{h}_m(U)}_{\ge0}$-module by setting \begin{gather*} \beta^i_p\cdot f=\gamma^i_p\cdot f=0,\ p>0, \\ \beta^i_0\cdot f=\frac{\partial}{\partial x^i}(f),\ \gamma^i_0\cdot f=x^i f,\ \mathbf{\tau} \cdot f=f, \end{gather*} where $i=1, \dots, m$ and $f\in C^\infty_{\mathbb{R}^m}(U)$.
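As a quick consistency check (a side remark, not used in what follows), these formulas are compatible with the commutation relations of $\mathfrak{h}_m(U)$: for $f\in C^\infty_{\mathbb{R}^m}(U)$ we have
$$
[\beta^i_0,\gamma^{i'}_0]\cdot f=\frac{\partial}{\partial x^i}(x^{i'}f)-x^{i'}\frac{\partial f}{\partial x^i}=\delta_{i, i'}f,
$$
which agrees with the action of $[\beta^i_0,\gamma^{i'}_0]=\delta_{i, i'}\mathbf{\tau}$, since $\mathbf{\tau}$ acts as the identity; the remaining brackets among the generators $\beta^i_p, \gamma^{i'}_q$ with $p, q\ge0$ vanish, and both sides of the corresponding identities act by zero.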
As in \cite{LL}, we consider the $\mathfrak{h}_m(U)$-module $$ \Tilde{\Gamma}_m(U):=U(\mathfrak{h}_m(U))\otimes_{U({\mathfrak{h}_m(U)}_{\ge0})}C^\infty_{\mathbb{R}^m}(U). $$ Following \cite{LL}, we define the action of the commutative Lie algebra $C^\infty_{\mathbb{R}^m}(U)[t^{\pm1}]$ on $\Tilde{\Gamma}_m(U)\cong U(\mathfrak{h}_m(U)_{< 0})\otimes C^\infty_{\mathbb{R}^m}(U)$ as follows, where $\mathfrak{h}_m(U)_{<0}$ stands for the Lie subalgebra of $\mathfrak{h}_m(U)$ generated by $\beta^i_p, \gamma^i_p$ with $p<0$ and $i=1, \dots, m$. We define the action of $ft^k\in C^\infty_{\mathbb{R}^m}(U)[t^{\pm1}]$, denoted by $f_{(k)}$, using the PBW monomial basis of $U(\mathfrak{h}_m(U)_{< 0})$ for the basis $\beta^i_p, \gamma^i_p,\ p<0, i=1, \dots, m$ with an order such that $\beta^i_p>\gamma^{i'}_{q}$ for any $i, i'=1, \dots, m$ and $p, q<0$. First, for $ft^k\in C^\infty_{\mathbb{R}^m}(U)[t^{\pm1}]$ with $k\ge -1$ we set $$ f_{(k)}(1\otimes g):=\delta_{k, -1}(1\otimes fg), \quad g\in C^\infty_{\mathbb{R}^m}(U), $$ and for $ft^k\in C^\infty_{\mathbb{R}^m}(U)[t^{\pm1}]$ with $k < -1$ we inductively define $f_{(k)}$ on $\mathbb{K}1\otimes C^\infty_{\mathbb{R}^m}(U)$ by setting $$ f_{(k)}(1\otimes g):=\frac{1}{k+1}\sum_{i=1}^m\sum_{q<0}q\gamma^i_q\Bigl(\frac{\partial f}{\partial x^i}\Bigr)_{(k-q)}(1\otimes g), \quad g\in C^\infty_{\mathbb{R}^m}(U). $$ Next we define $f_{(k)}(P\otimes g)$ for $ft^k\in C^\infty_{\mathbb{R}^m}(U)[t^{\pm1}]$, a PBW monomial $P$ of positive length and $g\in C^\infty_{\mathbb{R}^m}(U)$. The PBW monomial $P$ is of the form $\gamma^i_p P'$ or $\beta^i_p P'$ with a PBW monomial $P'$ of length less than that of $P$. We set $$ f_{(k)}(P\otimes g):= \begin{cases} \gamma^i_pf_{(k)}(P'\otimes g), & \text{if}\ P=\gamma^i_p P', \\ \beta^i_pf_{(k)}(P'\otimes g)-\bigl(\frac{\partial f}{\partial x^i}\bigl)_{(k+p)}(P'\otimes g), & \text{if}\ P=\beta^i_p P'. \end{cases} $$ Thus we get operators $f_{(k)}$ on $\Tilde{\Gamma}_m(U)$. We put $$ \gamma^i(z):=\sum_{n\in\mathbb{Z}}\gamma^i_n z^{-n}, \quad \beta^i(z):=\sum_{n\in\mathbb{Z}}\beta^i_n z^{-n-1}, $$ for $i=1, \dots, m$ and $$ f(z):=\sum_{k\in\mathbb{Z}}f_{(k)}z^{-k-1}, $$ for $f \in C^\infty_{\mathbb{R}^m}(U)$. Note that when $f=x^i$, we have $x^i(z)=\gamma^i(z)$, or equivalently, $x^i_{(k)}=\gamma^i_{k+1}$ for all $k\in \mathbb{Z}$. For $f\in C^\infty_{\mathbb{R}^m}(U)$, we also use the notation $$ f(z)=\sum_{n\in\mathbb{Z}}f_n z^{-n}, $$ or equivalently, $f_{(k)}=f_{k+1}$ for $k\in \mathbb{Z}$ and simply denote by $f$ the element $1\otimes f\in \Tilde{\Gamma}(U)$. Since $\beta^i(z)$ and $\gamma^i(z)$ come from the action of the Heisenberg Lie algebra $\mathfrak{h}_m(U)$, we have \begin{gather} \label{eq: [beta(z),beta(w)]=0} [\beta^i(z),\beta^{i'}(w)]=0, \\ [\gamma^i(z),\gamma^{i'}(w)]=0, \quad [\beta^i(z),\gamma^{i'}(w)]=\delta_{i, i'}\delta(z-w), \notag \end{gather} for $i, i'=1, \dots, m$. The following lemmas are proved in \cite[Section 2.5]{LL}. \begin{lemma}[Lian-Linshaw] \label{lem: [beta(z),f(w)]=df/dx(w)delta(z-w)} The following hold: \begin{equation}\label{eq: [beta(z),f(w)]=df/dx(w)delta(z-w)} [\gamma^i(z),f(w)]=0, \quad [\beta^i(z),f(w)]=\frac{\partial f}{\partial x^i}(w)\delta(z-w). \end{equation} for all $f \in C^\infty_{\mathbb{R}^m}(U)$ and $i=1, \dots, m$. 
\end{lemma} \begin{lemma}[Lian-Linshaw]\label{lem: relations for f(z)} The following hold: \begin{gather} \label{eq: [f(z),g(z)]=0} [f(z),g(w)]=0, \\ \label{eq: df(z)=df/dx(z)dx(z)} \frac{d}{dz}f(z)=\sum_{i=1}^m\frac{d}{dz}\gamma^i(z)\frac{\partial f}{\partial x^i}(z), \end{gather} for $f, g \in C^\infty_{\mathbb{R}^m}(U)$ and \begin{equation} \label{eq: 1(z)=1} 1(z)=\mathrm{id}. \end{equation} \end{lemma} As a corollary of the above two lemmas, we have the following proposition, which corresponds to \cite[Corollary 2.21]{LL}. \begin{proposition}\label{prop: VERTEX ALGEBRA STRUCTURE ON GAMMA-TILDE} $\Tilde{\Gamma}_m(U)$ has a unique vertex algebra structure such that \begin{enumerate}[$\bullet$] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{30pt} \setlength{\itemindent}{0pt} \item the vacuum vector is $\mathbf{1} :=1\otimes 1,$ \item the translation operator is $T:=\mathrm{Res}_{z=0}\sum_{i=1}^m\,{\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,\beta^i(z)\partial_z\gamma^i(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}}$, \item the vertex operators satisfy $Y(1\otimes f, z)=f(z)$ for $f\in C^\infty_{\mathbb{R}^m}(U)$ and $Y(\beta^i_{-1}\otimes1, z)=\beta^i(z)$ for $i=1, \dots, m.$ \end{enumerate} \end{proposition} \begin{proof} We will apply the existence theorem of Frenkel-Kac-Radul-Wang (see \cite[Proposition 3.1]{FKRW95}). The relations $[T,f(z)]=\partial_zf(z)$ and $[T,\beta^i(z)]$ $=\partial_z\beta^i(z)$ are checked by direct computations with \eqref{eq: [beta(z),f(w)]=df/dx(w)delta(z-w)}. By Lemmas \ref{lem: [beta(z),f(w)]=df/dx(w)delta(z-w)} and \ref{lem: relations for f(z)}, the other conditions in the existence theorem are satisfied. \end{proof} By Proposition \ref{prop: VERTEX ALGEBRA STRUCTURE ON GAMMA-TILDE} and $f_{(-1)}g=fg$, we have \begin{equation}\label{eq: (fg)(z)=f(z)g(z)} (fg)(z)=f(z)g(z), \end{equation} for $f, g\in C^\infty_{\mathbb{R}^m}(U)$. By OPEs \eqref{eq: [beta(z),beta(w)]=0}, \eqref{eq: [f(z),g(z)]=0} and \eqref{eq: (fg)(z)=f(z)g(z)}, we have a vertex algebra morphism $ N\bigl(D_m(U)\ltimes C^\infty_{\mathbb{R}^m}(U), 0\bigr)\to \Tilde{\Gamma}_m(U) $ sending $f$ and $\partial/\partial x^i$ to $f=1\otimes f$ and $\beta^i_{-1}\mathbf{1}=\beta^i_{-1}\otimes1$, respectively. Moreover by the relations \eqref{eq: df(z)=df/dx(z)dx(z)}, \eqref{eq: 1(z)=1} and \eqref{eq: (fg)(z)=f(z)g(z)}, this morphism factors through the ideal $\mathcal{I}_m(U)$, and hence we have a morphism from $\Gamma_m(U)$. We can see this map is bijective, by taking into consideration the form of the basis of $\Tilde{\Gamma}_m(U)$ and Lemma \ref{rem: generators of Gamma_m(U)}. Thus we have the following. \begin{proposition}\label{prop: GAMMA=GAMMA-TILDE} The linear map \begin{align*} D_m(U)\ltimes C^\infty_{\mathbb{R}^m}(U)&\to \Tilde{\Gamma}_m(U), \\ f &\to 1\otimes f, \\ \partial/\partial x^i &\to \beta^i_{-1}\otimes1, \end{align*} induces an isomorphism of vertex algebras $$ \Gamma_m(U) \xrightarrow{\cong} \Tilde{\Gamma}_m(U). $$ \end{proposition} We identify $\Gamma_m(U)$ with $\Tilde{\Gamma}_m(U)$ through this isomorphism. Let $\mathfrak{h}_{m, \ge0}$ and $\mathfrak{h}_{m, <0}$ be the Lie subalgebras of $\mathfrak{h}_m$ generated by $\beta^i_p, \gamma^i_p$ with $p\ge 0,\ i=1, \dots, m$, and $\beta^i_p, \gamma^i_p$ with $p<0,\ i=1, \dots, m$, respectively. 
By the Poincar\'{e}-Birkhoff-Witt theorem, the decomposition $\mathfrak{h}_m=\mathfrak{h}_{m, <0}\oplus \mathfrak{h}_{m, \ge0}$ induces an isomorphism of vector spaces $$ \Tilde{\Gamma}_m(U)\cong U(\mathfrak{h}_{m, <0})\otimes C^\infty_{\mathbb{R}^m}(U). $$ Using the isomorphism in Proposition \ref{prop: GAMMA=GAMMA-TILDE}, we have an isomorphism \begin{equation}\label{eq: basis of Gamma} \Gamma_m(U)\cong U(\mathfrak{h}_{m, <0})\otimes C^\infty_{\mathbb{R}^m}(U). \end{equation} Therefore, by Lemma \ref{lem: omega_ch=beta-gamma_bc} we have \begin{equation}\label{eq: omega_ch=beta-gamma_bc-C_infty} \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)\cong U(\mathfrak{h}_{m, <0})\otimes C^\infty_{\mathbb{R}^m}(U)\otimes \mathcal{E}(W_r). \end{equation} By restricting this isomorphism to the space of weight $n\in \mathbb{N}$, $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)[n]$, we get an isomorphism \begin{equation}\label{eq: omega_ch=beta-gamma_bc-C_infty[n]} \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)[n]\cong (U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]\otimes C^\infty_{\mathbb{R}^m}(U), \end{equation} where $(U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]$ stands for the weight $n$ space of $U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r)$ with respect to the grading induced by the $\mathbb{Z}_{\ge0}$-grading on $\mathcal{E}(W_r)$ and the grading on $U(\mathfrak{h}_{m, <0})$ given by $\mathrm{wt}\,\beta^i_p=\mathrm{wt}\,\gamma^i_p=-p$. Note that $(U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]$ is finite-dimensional. For two open subsets $V\subset U\subset \mathbb{R}^m$, we define the restriction map $ \mathrm{Res}^{m|r}_{V, U}: \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)\to \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(V) $ as follows. The restriction map $C^\infty_{\mathbb{R}^{m|r}}(U)\to C^\infty_{\mathbb{R}^{m|r}}(V)$ induces a Lie superalgebra morphism $ \Lambda^{m|r}(U)\to\Lambda^{m|r}(V) $ and this morphism induces a morphism $ N(\Lambda^{m|r}(U), 0) \to N(\Lambda^{m|r}(V), 0) $ of vertex superalgebras. Since this vertex superalgebra morphism is induced by an algebra morphism, namely, a morphism preserving the product and the unit, the generators of the ideal $\mathcal{I}^{m|r}(U)$ are mapped into $\mathcal{I}^{m|r}(V)$. Therefore we have a vertex superalgebra morphism from $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)$ to $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(V)$. We define the restriction map $$ \mathrm{Res}^{m|r}_{V, U}: \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U)\to \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(V), $$ as this vertex superalgebra morphism. Note that this restriction map preserves the degree-weight-grading. The assignment $$ U\mapsto\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})(U), $$ and the maps $\mathrm{Res}^{m|r}_{V, U}$ define a presheaf of degree-weight-graded vertex superalgebras. We denote this presheaf by $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})$. We claim that this presheaf is an object of the category $\mathit{Presh}_{\mathbb{R}^m}(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$ (see Section \ref{subsection: From Presheaves to VSA-Inductive Sheaves} for the definition of this category).
It suffices to check the presheaves $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})[n]$ are sheaves for all $n\in \mathbb{N}$, where the presheaf $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})[n]$ is defined by the assignment $$ U\mapsto \Gamma(U, \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r}))[n], $$ and the maps $\mathrm{Res}^{m|r}_{V, U}\big|_{\Gamma(U, \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r}))[n]}$. The restriction map $ \mathrm{Res}^{m|r}_{V, U}\big|_{\Gamma(U, \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r}))[n]} $ becomes $$ \mathrm{id}\otimes\mathrm{Res}_{V, U}\!\!: \!(U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]\,\otimes C^\infty_{\mathbb{R}^m}(U)\!\to\! (U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]\,\otimes C^\infty_{\mathbb{R}^m}(V), $$ through the isomorphism \eqref{eq: omega_ch=beta-gamma_bc-C_infty[n]}, where $\mathrm{Res}_{V, U}$ stands for the restriction map of $C^\infty_{\mathbb{R}^m}$. Therefore we have an isomorphism of presheaves $$ \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})[n] \cong (U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]\,\otimes_\mathbb{K} C^\infty_{\mathbb{R}^m}. $$ Since $C^\infty_{\mathbb{R}^m}$ is a sheaf and $(U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]$ is finite-dimensional, the presheaf $(U(\mathfrak{h}_{m, <0})\otimes \mathcal{E}(W_r))[n]\,\otimes C^\infty_{\mathbb{R}^m}$ is a sheaf of super vector spaces, and therefore $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})[n]$ is also a sheaf. Thus $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})$ is an object of the category $\mathit{Presh}_{\mathbb{R}^m}(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$. \begin{notation} We denote by $\Omega_\mathrm{ch}(\mathbb{R}^{m|r})$ the VSA-inductive sheaf associated with the object $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})$ of $\mathit{Presh}_{\mathbb{R}^m}(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$. (See Lemma \ref{lem: MAKE VsaIndSh from PRESHEAVES OF Z-GRADED Vsa}.) \end{notation} \begin{remark}\label{rem: SUPERFUNCTIONS INCLUDED IN OMEGA_CHIRAL} By the isomorphism \eqref{eq: omega_ch=beta-gamma_bc-C_infty[n]} with $n=0$, there exists an isomorphism, $$ \Gamma(U, \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r}))[0]\cong C^\infty_{\mathbb{R}^m}(U)\otimes \bigwedge(c^1_0, \dots, c^r_0), $$ for any open subset $U\subset \mathbb{R}^m$. Therefore the following isomorphism of sheaves exists: \begin{equation}\label{eq: WEIGHT ZERO SPACE IS CLASSICAL} \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})[0] \cong C^\infty_{\mathbb{R}^m}\otimes\bigwedge(\theta^1, \dots, \theta^r). \end{equation} Here $\theta^j$ is identified with $c^j_0=c^{\theta^j}_0 $ for $j=1, \dots, r$. \end{remark} We identify $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})[0]$ with $C^\infty_{\mathbb{R}^m}\otimes\bigwedge(\theta^1, \dots, \theta^r)$ via the isomorphism \eqref{eq: WEIGHT ZERO SPACE IS CLASSICAL}. \subsection{VSA-Inductive Sheaves for Vector Bundles} Let $E$ be a vector bundle of rank $r$ on a manifold $M$ of dimension $m$. 
Let $\mathcal{U}=\bigl(U_\lambda \bigr)_{\lambda \in \Lambda}$ be an arbitrary family of open subsets $U_\lambda$ in $M$ with a chart $\mathbf{x}_\lambda=(x_\lambda^1, \dots, x_\lambda^m)$ of $U_\lambda$, and a frame $\mathbf{e}_\lambda=(e^1_\lambda, \dots, e^r_\lambda)$ of $E|_{U_\lambda}$ such that $\bigl\{ (U_\lambda, \mathbf{x}_\lambda)\bigr\}_{\lambda \in \Lambda}$ is an atlas on $M$ and is contained in the $C^\infty$-structure of $M$. We call such a family $\mathcal{U}$ a \textbf{framed covering} of $E$. Let $\bigl(f^E_{\lambda \mu}=(f^{E, j j'}_{\lambda \mu})_{1\le j, j'\le r}\bigr)_{\lambda, \mu\in\Lambda}$ be the transition functions of $E$ associated with $(\mathbf{e}_\lambda)_{\lambda\in\Lambda}$. In other words, we have $e^j_\mu=\sum_{j'=1}^r f^{E, j' j}_{\lambda \mu}e^{j'}_\lambda$. We denote by $\mathbf{c_\lambda}=(c_\lambda^1, \dots, c_\lambda^r)$ the frame dual to $\mathbf{e}_\lambda=(e^1_\lambda, \dots, e^r_\lambda)$ for $E^*|_{U_\lambda}$. We set $$ \Omega_\mathrm{ch}(E; \mathcal{U})_\lambda:=(\mathbf{x_\lambda}^{-1})_*(\Omega_\mathrm{ch}(\mathbb{R}^{m|r})|_{\mathbf{x_\lambda}(U_\lambda)}). $$ Note that $\Omega_\mathrm{ch}(E; \mathcal{U})_\lambda$ is nothing but the VSA-inductive sheaf associated with the object $(\mathbf{x_\lambda}^{-1})_*(\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})|_{\mathbf{x_\lambda}(U_\lambda)})$ of $\mathit{Presh}_{U_\lambda}(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$, where $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})|_{\mathbf{x_\lambda}(U_\lambda)}$ is the presheaf restricted to $\mathbf{x_\lambda}(U_\lambda)$ but not its sheafification. Notice that $\underrightarrow{\mathrm{Lim}}\,\Omega_\mathrm{ch}(E; \mathcal{U})_\lambda=(\mathbf{x_\lambda}^{-1})_*(\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})|_{\mathbf{x_\lambda}(U_\lambda)})$. We consider the following two subpresheaves of $\underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})$: \begin{align} \underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})&: U \mapsto \Gamma(U, \underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})):=\bigl\langle C^\infty_{\mathbb{R}^m}(U)\bigr\rangle \otimes \langle c^1_0\mathbf{1}, \dots, c^r_0\mathbf{1}\rangle, \\ \underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})&: U \mapsto \Gamma(U, \underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})):=\bigl\langle C^\infty_{\mathbb{R}^m}(U)\bigr\rangle \otimes \mathcal{E}(W_r), \end{align} where $\bigl\langle C^\infty_{\mathbb{R}^m}(U)\bigr\rangle \subset\Gamma_m(U)$ and $\langle c^1_0\mathbf{1}, \dots, c^r_0\mathbf{1}\rangle\subset\mathcal{E}(W_r)$ stand for the subalgebras generated by $C^\infty_{\mathbb{R}^m}(U)$ and $\{c^1_0\mathbf{1}, \dots, c^r_0\mathbf{1}\}$, respectively. The presheaves $\underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})[n]$ and $\underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})[n]$ are sheaves for all $n\in\mathbb{N}$ since we have an isomorphism $\bigl\langle C^\infty_{\mathbb{R}^m}(U)\bigr\rangle\cong U\bigl(\langle\gamma^i_p\ |\ p<0, i=1, \dots, m\rangle\bigr) \otimes C^\infty_{\mathbb{R}^m}(U)$ from the isomorphism in Proposition \ref{prop: GAMMA=GAMMA-TILDE}, where $ U\bigl(\langle\gamma^i_p\ |\ p<0, i=1, \dots, m\rangle\bigr)$ is the universal enveloping algebra of the commutative Lie subalgebra of $\mathfrak{h}_m$ generated by $\gamma^i_p$ with $p<0, i=1, \dots, m$.
Therefore the presheaves $\underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})$ and $\underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})$ are objects of the category $\mathit{Presh}_{\mathbb{R}^m}(\textit{DegWt-VSA}_\mathbb{K})_\mathrm{bdw, sh}$. Thus we have VSA-inductive sheaves $\Omega^{\gamma c}_\mathrm{ch}(\mathbb{R}^{m|r})$ and $\Omega^{\gamma bc}_{\mathrm{ch}}(\mathbb{R}^{m|r})$, where $\Omega^{\gamma c}_\mathrm{ch}(\mathbb{R}^{m|r})$ and $\Omega^{\gamma bc}_{\mathrm{ch}}(\mathbb{R}^{m|r})$ are the VSA-inductive sheaves associated with $\underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})$ and $\underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})$, respectively. Note that the vertex superalgebra $\Gamma(U, \underrightarrow{\mathrm{Lim}}\, \Omega^{\gamma c}_\mathrm{ch}(\mathbb{R}^{m|r}))=\Gamma(U, \underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r}))$ is generated by the weight $0$ space for any open subset $U\subset \mathbb{R}^m$. Set $$ \Gamma_{m}'(U):=N\bigl(C^\infty_{\mathbb{R}^m}(U), 0\bigr)/\mathcal{I}'_m(U), $$ where $\mathcal{I}'_m(U)$ is the ideal of the affine vertex algebra $N\bigl(C^\infty_{\mathbb{R}^m}(U), 0\bigr)$ generated by \begin{gather*} \frac{d}{dz}f(z)-\sum_{i=1}^m\,{\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,\frac{d}{dz}x^i(z)\frac{\partial f}{\partial x^i}(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}}, \\ (fg)(z)-\,{\baselineskip0pt\lineskip0.3pt\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}\,f(z)g(z)\,\vcenter{\hbox{$\cdot$}\hbox{$\cdot$}}}, \quad 1(z)-\mathrm{id}, \end{gather*} with $f, g\in C^\infty_{\mathbb{R}^m}(U)$. \begin{remark} The canonical morphism, $ \Gamma_m'(U)\to\Gamma_m(U), $ induced by the inclusion of Lie algebras $C^\infty_{\mathbb{R}^m}(U)\to D_m(U)\ltimes C^\infty_{\mathbb{R}^m}(U)$ is injective. This follows from the form of the basis of $\Gamma_m(U)$. Therefore there exists an isomorphism of degree-weight-graded vertex algebras from $\Gamma_m'(U)$ to $\bigl\langle C^\infty_{\mathbb{R}^m}(U)\bigr\rangle \subset \Gamma_m(U)$. \end{remark} We set \begin{align} \Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_\lambda&:=(\mathbf{x_\lambda}^{-1})_*(\Omega^{\gamma c}_\mathrm{ch}(\mathbb{R}^{m|r})|_{\mathbf{x_\lambda}(U_\lambda)}), \\ \Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_\lambda&:=(\mathbf{x_\lambda}^{-1})_*(\Omega^{\gamma bc}_{\mathrm{ch}}(\mathbb{R}^{m|r})|_{\mathbf{x_\lambda}(U_\lambda)}). \end{align} We will glue $\bigl(\Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_\lambda\bigr)_{\lambda\in\Lambda}$ and $\bigl(\Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_\lambda\bigr)_{\lambda\in\Lambda}$. Fix $U_\lambda$ with $\mathbf{x}_\lambda, \mathbf{e}_\lambda$ and $U_{\Tilde{\lambda}}$ with $\mathbf{x}_{\Tilde{\lambda}}, \mathbf{e}_{\Tilde{\lambda}}$ such that $U_\lambda\cap U_{\Tilde{\lambda}}\neq \emptyset$. Set $U:=\mathbf{x}_\lambda(U_\lambda\cap U_{\Tilde{\lambda}})$, $\Tilde{U}:=\mathbf{x}_{\Tilde{\lambda}}(U_{\Tilde{\lambda}}\cap U_\lambda)$ and $\varphi=\varphi_{\Tilde{\lambda} \lambda}:= (\mathbf{x}_{\Tilde{\lambda}}|_{U_{\Tilde{\lambda}}\cap U_\lambda})\circ(\mathbf{x}_\lambda|_{U_\lambda\cap U_{\Tilde{\lambda}}})^{-1}: U\to \Tilde{U}$. 
We construct strict isomorphisms of VSA-inductive sheaves \begin{gather*} \Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_{\Tilde{\lambda}}\big|_{U_{\Tilde{\lambda}}\cap U_\lambda} \to \Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_\lambda\big|_{U_\lambda\cap U_{\Tilde{\lambda}}}, \\ \Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_{\Tilde{\lambda}}\big|_{U_{\Tilde{\lambda}}\cap U_\lambda} \to \Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_\lambda\big|_{U_\lambda\cap U_{\Tilde{\lambda}}}, \end{gather*} motivated by gluing vector fields on the supermanifold $\Pi E=\bigr(M, \bigwedge(E^*)\bigr)$. Let $V\subset U_\lambda\cap U_{\Tilde{\lambda}}$ be an open subset. We denote by $\Tilde{\beta}^i_n$, $\Tilde{\gamma}^i_n$, $\Tilde{b}^j_n$ and $\Tilde{c}^j_n$ the operators $\beta^i_n$, $\gamma^i_n$, $b^j_n$ and $c^j_n$ on $\Gamma\Bigl(\mathbf{x}_{\Tilde{\lambda}}(V), \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})\Bigr)$, respectively. We define elements of the vertex superalgebra $\Gamma\Bigl(\mathbf{x}_\lambda(V), \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})\Bigr)$ as follows: \begin{gather} \label{df: phi^*f} \varphi^*\Tilde{f}:=\Tilde{f}, \\ \label{df: phi^*b} \varphi^*\Tilde{b}^j:=\sum_{1\le j'\le r}f^{E, j' j}_{\lambda \Tilde{\lambda}}b^{j'}_{-1}\mathbf{1}, \\ \label{df: phi^*c} \varphi^*\Tilde{c}^j:=\sum_{1\le j'\le r}f^{E, j j'}_{\Tilde{\lambda} \lambda}c^{j'}_0\mathbf{1}, \end{gather} for $\Tilde{f}\in C^\infty(V)$, $i=1, \dots, m$ and $j=1, \dots, r$. Here we use a usual notation for functions, identifying $C^\infty_{\mathbb{R}^m}\bigl(\mathbf{x}_\lambda(V)\bigr)$ and $C^\infty_{\mathbb{R}^m}\bigl(\mathbf{x}_{\Tilde{\lambda}}(V)\bigr)$ with $C^\infty(V)$ via $\mathbf{x}_\lambda$ and $\mathbf{x}_{\Tilde{\lambda}}$, respectively. \begin{lemma}\label{lem: OPEs of OMEGA_CHIRAL} The following OPEs hold: \begin{gather} (\varphi^*\Tilde{f})(z)(\varphi^*\Tilde{g})(w)\sim 0, \\ (\varphi^*\Tilde{b}^j)(z)(\varphi^*\Tilde{b}^{j'})(w)\sim 0, \\ (\varphi^*\Tilde{c}^j)(z)(\varphi^*\Tilde{c}^{j'})(w)\sim 0, \\ (\varphi^*\Tilde{b}^j)(z)(\varphi^*\Tilde{c}^{j'})(w)\sim \frac{\delta_{j, j'}}{z-w}, \\ (\varphi^*\Tilde{f})(z)(\varphi^*\Tilde{b}^j)(w)\sim0, \\ (\varphi^*\Tilde{f})(z)(\varphi^*\Tilde{c}^j)(w)\sim0. \end{gather} \end{lemma} \begin{proof} These OPEs are checked by direct computations. \end{proof} \begin{remark} When we consider the transformation rules of vector fields, it is natural to set $$ \varphi^*\Tilde{\beta}^i:=\sum_{1\le i'\le m}\beta^{i'}_{-1}\frac{\partial x^{i'}}{\partial \Tilde{x}^i}+\sum_{1\le j, k, l\le r}\frac{\partial f^{E, j k}_{\lambda \Tilde{\lambda}}}{\partial \Tilde{x}^i}f^{E, k l}_{\Tilde{\lambda} \lambda}c^l_0b^j_{-1}\mathbf{1}. $$ But the required relation $(\varphi^*\Tilde{\beta}^i)(z)(\varphi^*\Tilde{\beta}^{i'})(w) \sim 0$ does not hold in general. 
\end{remark} By Lemma \ref{lem: OPEs of OMEGA_CHIRAL}, we have morphisms of degree-weight-graded vertex superalgebras \begin{align} \underrightarrow{\vartheta'_{\lambda \Tilde{\lambda}}}(V): \Gamma\bigl(\mathbf{x}_{\Tilde{\lambda}}(V), \underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})\bigr) &\to \Gamma\bigl(\mathbf{x}_\lambda(V), \underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r})\bigr), \\ \Tilde{f} &\mapsto \varphi^*\Tilde{f}, \notag \\ \Tilde{c}^j_0\mathbf{1} &\mapsto \varphi^*\Tilde{c}^j, \notag \\ \intertext{and,} \underrightarrow{\vartheta''_{\lambda \Tilde{\lambda}}}(V): \Gamma\bigl(\mathbf{x}_{\Tilde{\lambda}}(V), \underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})\bigr) &\to \Gamma\bigl(\mathbf{x}_\lambda(V), \underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})\bigr), \\ \Tilde{f} &\mapsto \varphi^*\Tilde{f}, \notag \\ \Tilde{c}^j_0\mathbf{1} &\mapsto \varphi^*\Tilde{c}^j, \notag \\ \Tilde{b}^j_0\mathbf{1} &\mapsto \varphi^*\Tilde{b}^j. \notag \end{align} These morphisms $\Bigl(\underrightarrow{\vartheta'_{\lambda \Tilde{\lambda}}}(V)\Bigr)_V$ and $\Bigl(\underrightarrow{\vartheta''_{\lambda \Tilde{\lambda}}}(V)\Bigr)_V$ form morphisms of presheaves of degree-weight-graded vertex superalgebras $\underrightarrow{\vartheta'_{\lambda \Tilde{\lambda}}}$ and $\underrightarrow{\vartheta''_{\lambda \Tilde{\lambda}}}$, respectively. Therefore by Lemma \ref{lem: MAKE VsaIndSh from PRESHEAVES OF Z-GRADED Vsa} we have strict morphisms of VSA-inductive sheaves \begin{gather} \vartheta'_{\lambda \Tilde{\lambda}}: \Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_{\Tilde{\lambda}}\big|_{U_{\Tilde{\lambda}}\cap U_\lambda} \to \Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_\lambda\big|_{ U_\lambda\cap U_{\Tilde{\lambda}}}, \\ \vartheta''_{\lambda \Tilde{\lambda}}: \Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_{\Tilde{\lambda}}\big|_{U_{\Tilde{\lambda}}\cap U_\lambda} \to \Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_\lambda\big|_{U_\lambda\cap U_{\Tilde{\lambda}}}. \end{gather} Thus we have families of strict morphisms $(\vartheta'_{\lambda \mu})_{\lambda, \mu \in \Lambda}$ and $(\vartheta''_{\lambda \mu})_{\lambda, \mu \in \Lambda}$. These morphisms satisfy the following. \begin{lemma} The following hold: \begin{gather*} \vartheta'_{\lambda \lambda}= \mathrm{id}, \quad \vartheta'_{\lambda \mu}\circ\vartheta'_{\mu \nu}=\vartheta'_{\lambda \nu}, \\ \vartheta''_{\lambda \lambda}=\mathrm{id}, \quad \vartheta''_{\lambda \mu}\circ\vartheta''_{\mu \nu}=\vartheta''_{\lambda \nu}, \end{gather*} for all $\lambda, \mu, \nu\in \Lambda$. \end{lemma} \begin{proof} It suffices to check \begin{gather*} \underrightarrow{\vartheta'_{\lambda \lambda}}= \mathrm{id}, \quad \underrightarrow{\vartheta'_{\lambda \mu}}\circ\underrightarrow{\vartheta'_{\mu \nu}}=\underrightarrow{\vartheta'_{\lambda \nu}}, \\ \underrightarrow{\vartheta''_{\lambda \lambda}}=\mathrm{id}, \quad \underrightarrow{\vartheta''_{\lambda \mu}}\circ\underrightarrow{\vartheta''_{\mu \nu}}=\underrightarrow{\vartheta''_{\lambda \nu}}, \end{gather*} for all $\lambda, \mu, \nu\in \Lambda$. We can see that these relations hold on the generators from \eqref{df: phi^*f}, \eqref{df: phi^*b} and \eqref{df: phi^*c}. Thus we get the above relations. 
\end{proof} By the construction, $\bigl((\Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_\lambda)_{\lambda\in\Lambda}, (\vartheta'_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$ satisfies the assumptions (1)-(5) in Proposition \ref{prop: GLUING VsaIndShs}. Therefore by Proposition \ref{prop: GLUING VsaIndShs}, we can glue the degree-weight-graded VSA-inductive sheaves $\bigl((\Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})_\lambda)_{\lambda\in\Lambda}, (\vartheta'_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$ to a degree-weight-graded VSA-inductive sheaf on $M$. We denote by $\Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})$ the resulting one. In the same way, we obtain a degree-weight-graded VSA-inductive sheaf on $M$, which we denote by $\Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})$, by gluing $\bigl((\Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})_\lambda)_{\lambda\in\Lambda},$ $(\vartheta''_{\lambda \mu})_{\lambda, \mu \in \Lambda}\bigr)$. \begin{remark}\label{rem: we can take any framed covering} By Remark \ref{rem: Isomorphic as VSA-inductive sheaves if locally isomorphic}, the objects $\Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})$ and $\Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})$ of the category $\textit{DegWt-}\mathit{VSA_{\mathbb{K}}}\textit{-IndSh}_M^\mathbb{N}$ \textit{do not} depend on the choice of framed coverings and are unique up to isomorphism. \end{remark} Therefore we simply denote by $\Omega^{\gamma c}_\mathrm{ch}(E)$ and $\Omega^{\gamma bc}_{\mathrm{ch}}(E)$ the VSA-inductive sheaves $\Omega^{\gamma c}_\mathrm{ch}(E; \mathcal{U})$ and $\Omega^{\gamma bc}_{\mathrm{ch}}(E; \mathcal{U})$, respectively. \begin{remark} From the gluing construction and the isomorphism \eqref{eq: WEIGHT ZERO SPACE IS CLASSICAL}, the sheaf $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(E)}[0]$ is canonically isomorphic to the sheaf of local sections of $\bigwedge E^*$, denoted by $\underline{\bigwedge E^*}$. \end{remark} \begin{remark} Let $E$ be the tangent bundle $TM$ of a manifold $M$. The presheaf associated with the VSA-inductive sheaf $\Omega^{\gamma c}_\mathrm{ch}(TM; \mathcal{U}_M)$ coincides with the small CDR for $M$ constructed in \cite{LL}, denoted by $\mathcal{Q}'_M$, where $\mathcal{U}_M=(U_\lambda)_{\lambda\in\Lambda}$ is the framed covering consisting of all open subsets $U_\lambda$ with a chart $\mathbf{x}_\lambda$ of $U_\lambda$ and the standard frame $(\partial/\partial x_\lambda^i)_{i=1, \dots, m}$. \end{remark} \subsection{Chiral Lie Algebroid Complex} We consider the case when the vector bundle $E$ is a Lie algebroid, and construct a differential on $\Omega^{\gamma c}_\mathrm{ch}(E)$. Let $(A, a, [ , ])$ be a Lie algebroid on a manifold $M$. We set $m:=\dim M$ and $r:=\mathrm{rank}\, A$. Let $\mathcal{U}=(U_\lambda)_{\lambda\in\Lambda}$ be a framed covering of $A$. As before we denote by $\mathbf{x}_\lambda$ and $\mathbf{e}_\lambda$ the chart and the frame associated with $U_\lambda$, respectively. First we define a differential on each $\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda$. Then we will glue them. Fix $\lambda \in \Lambda$. We write the anchor map and the bracket as $$ a(e_\lambda^j)=\sum_{\substack{1\le i\le m}}f^{\lambda, {i j}}\frac{\partial}{\partial x_\lambda^i}, $$ and $$ [e_\lambda^j,e_\lambda^k]=\sum_{1\le l \le r}\Gamma^{\lambda, j k}_l e_\lambda^l, $$ for $j, k=1, \dots, r$, where $f^{\lambda, {i j}}, \Gamma^{\lambda, j k}_l \in C^\infty(U_\lambda)$. Let $V$ be an open subset of $U_\lambda$.
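For the reader's convenience we recall the shape of the classical Lie algebroid differential that the construction below chiralizes; the sign and evaluation conventions displayed here are the ones we assume. On $\Gamma(U_\lambda, \bigwedge^k A^*)$ it is given by
\begin{align*}
(d_{\mathrm{Lie}}\,\omega)(X_0, \dots, X_k)
&=\sum_{i=0}^{k}(-1)^{i}\, a(X_i)\bigl(\omega(X_0, \dots, \widehat{X_i}, \dots, X_k)\bigr)\\
&\quad+\sum_{0\le i<j\le k}(-1)^{i+j}\,\omega\bigl([X_i, X_j], X_0, \dots, \widehat{X_i}, \dots, \widehat{X_j}, \dots, X_k\bigr),
\end{align*}
for sections $X_0, \dots, X_k$ of $A$. In particular, for $g\in C^\infty(U_\lambda)$ and the frame $\mathbf{c}_\lambda=(c_\lambda^1, \dots, c_\lambda^r)$ of $A^*|_{U_\lambda}$ dual to $\mathbf{e}_\lambda$,
$$
d_{\mathrm{Lie}}\, g=\sum_{\substack{1\le i\le m \\ 1\le j\le r}} f^{\lambda, i j}\,\frac{\partial g}{\partial x_\lambda^i}\, c_\lambda^j, \qquad
d_{\mathrm{Lie}}\, c_\lambda^l=-\frac{1}{2}\sum_{1\le j, k\le r}\Gamma^{\lambda, j k}_{l}\, c_\lambda^j\wedge c_\lambda^k .
$$
These are the formulas that the vertex operator defined next reproduces on the weight $0$ subspace.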
Motivated by the differential $d_{\mathrm{Lie}}$ for the Lie algebroid cohomology, we define an odd element $Q^\lambda(V)$ of degree $1$ and weight $1$ in $\Gamma\Bigl(V, \underrightarrow{\mathrm{Lim}}\, {\Omega_\mathrm{ch}(A; \mathcal{U})_\lambda}\Bigr)=\Gamma\Bigl(\mathbf{x}_\lambda(V), \underrightarrow{\Omega_{\mathrm{ch}}}(\mathbb{R}^{m|r})\Bigr)$ by setting $$ Q^\lambda(V):=\sum_{\substack{1\le i \le m \\ 1\le j \le r}}\beta^i_{-1}f^{\lambda, {i j}} c^j_{0}\mathbf{1} -\frac{1}{2}\sum_{1\le j, k, l \le r}\Gamma^{\lambda, j k}_{l} c^j_0 c^k_0 b^l_{-1}\mathbf{1}. $$ Notice that the corresponding vertex operator $\underrightarrow{D^\lambda_{\mathrm{Lie}}}(V):=Q^\lambda(V)_{(0)}$ preserves the subspace $\Gamma\Bigl(V, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda}\Bigr)=\Gamma\Bigl( \mathbf{x}_\lambda(V), \underrightarrow{\Omega^{\gamma c}_{\mathrm{ch}}}(\mathbb{R}^{m|r}) \Bigr)$ and moreover coincides with the differential $d_{\mathrm{Lie}}$ on the weight $0$ space identified with $\Gamma(V, \bigwedge E^*)$. \begin{lemma}\label{lem: D^2=0 PRESHEAF VERSION} $$ (\underrightarrow{D^\lambda_{\mathrm{Lie}}}(V))^2=0. $$ \end{lemma} \begin{proof} It suffices to check $ (\underrightarrow{D^\lambda_{\mathrm{Lie}}}(V))^2=0 $ on the weight $0$ subspace by Lemma \ref{lem: odd derivation is 0 if so on generators}. This follows from the fact that the differential $\underrightarrow{D^\lambda_{\mathrm{Lie}}}(V)$ on the weight $0$ subspace is nothing but the differential $d_{\mathrm{Lie}}$ for the Lie algebroid cohomology. \end{proof} We can see that the linear maps $\underrightarrow{D^\lambda_{\mathrm{Lie}}}(V)$ with open subsets $V\subset U_\lambda$ define a linear endomorphism $\underrightarrow{D^\lambda_{\mathrm{Lie}}}$ of the presheaf $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda}$ by the definition of the restriction maps. Thus we have a differential $\underrightarrow{D^\lambda_{\mathrm{Lie}}}$ on $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda}$. \begin{lemma}\label{lem: THE DIFFERENTIAL IS GLUED PRESHEAF VERSION} The following holds: $$ \underrightarrow{D^\lambda_{\mathrm{Lie}}}|_{U_\lambda\cap\mu}\circ\underrightarrow{\vartheta'_{\lambda \mu}}=\underrightarrow{\vartheta'_{\lambda \mu}}\circ \underrightarrow{D^\mu_{\mathrm{Lie}}}|_{U_\mu\cap U_\lambda}, $$ for any $\lambda, \mu \in \Lambda$. \end{lemma} \begin{proof} By Lemma \ref{lem: sufficient condition for B-linearilty} with $f=\underrightarrow{\vartheta'_{\lambda \mu}}$ and $N=0$, it suffices to check the relation on the weight $0$ subspace. This follows from the fact that $\underrightarrow{\vartheta'_{\lambda \mu}}$ and $\underrightarrow{D^\lambda_{\mathrm{Lie}}}$ coincide with the gluing map of the sheaf of sections $\underline{\bigwedge A^*}$ and the differential for the Lie algebroid cohomology, respectively. \end{proof} The morphism $\underrightarrow{D^\lambda_{\mathrm{Lie}}}$ consists of homogeneous maps. By Remark \ref{rem: homogeneous morphism of presheaves induces that of ind-objects}, the operator $\underrightarrow{D^\lambda_{\mathrm{Lie}}}$ induces a strict morphism of ind-objects of sheaves $D^\lambda_{\mathrm{Lie}}\!: \Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda\!\to \Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda$. 
\begin{lemma}\label{lem: THE DIFFERENTIAL IS GLUED} The following holds: $$ D^\lambda_{\mathrm{Lie}}|_{U_\lambda\cap U_\mu}\circ\vartheta'_{\lambda \mu}=\vartheta'_{\lambda \mu}\circ D^\mu_{\mathrm{Lie}}|_{U_\mu\cap U_\lambda}, $$ for any $\lambda, \mu \in \Lambda$. \end{lemma} \begin{proof} This relation follows from Lemma \ref{lem: THE DIFFERENTIAL IS GLUED PRESHEAF VERSION}. \end{proof} By Lemma \ref{lem: THE DIFFERENTIAL IS GLUED}, we can glue the strict morphisms $(D^\lambda_{\mathrm{Lie}})_{\lambda\in\Lambda}$ into a morphism $D_{\mathrm{Lie}}$ on $\Omega^{\gamma c}_\mathrm{ch}(A)$. Note that by Remark \ref{rem: morphisms coincides if so locally}, the morphism $D_{\mathrm{Lie}}$ is independent of the choice of framed coverings. \begin{lemma} The morphism $D_{\mathrm{Lie}}$ is a differential on $\Omega^{\gamma c}_\mathrm{ch}(A)$. \end{lemma} \begin{proof} By Remark \ref{rem: morphisms coincides if so locally}, it suffices to check the relations \eqref{eq: deg 1 wt 0 condition of differential on VSA-inductive sheaf}, \eqref{eq: square 0 condition of differential on VSA-inductive sheaf}, \eqref{eq: derivation condition of differential on VSA-inductive sheaf} and \eqref{eq: degree derivation condition of differential on VSA-inductive sheaf} locally. This follows from the fact that $\underrightarrow{D_{\mathrm{Lie}}^\lambda}$ is a differential. \end{proof} Let $\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)$ be the space of all global sections of the presheaf $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}$ obtained by applying the functor $\underrightarrow{\mathrm{Lim}}\, $ to the degree-weight-graded VSA-inductive sheaf $\Omega^{\gamma c}_\mathrm{ch}(A)$. By the lemma above and Remark \ref{rem: presheaf of differential VSA-inductive sheaf}, we have the following. \begin{theorem} The pair $\Bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr), \underrightarrow{\mathrm{Lim}}\, {D_{\mathrm{Lie}}}(M)\Bigr)$ is a differential degree-weight-graded vertex superalgebra. \end{theorem} The theorem above leads us to the following definition. \begin{definition} Let $A$ be a Lie algebroid on a manifold $M$. Its \textbf{chiral Lie algebroid cohomology}, denoted by $H_{\mathrm{ch}}(A)$, is the cohomology of the complex $\Bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr), \underrightarrow{\mathrm{Lim}}\, {D_{\mathrm{Lie}}}(M)\Bigr)$. \end{definition} \begin{remark} From the construction, the chiral Lie algebroid cohomology $H_{\mathrm{ch}}(A)$ is a $\mathbb{Z}_{\ge 0}$-graded vertex superalgebra and the subspace of weight $0$, $H_{\mathrm{ch}}(A)[0]$, coincides with the classical Lie algebroid cohomology. \end{remark} \section{Chiral Equivariant Lie Algebroid Cohomology}\label{section: Chiral Equivariant Lie Algebroid Cohomology} \subsection{Definition of Chiral Equivariant Lie Algebroid Cohomology} Let $(A, a, [ , ])$ be a Lie algebroid on a manifold $M$. We will first equip $\Bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr),$ $\underrightarrow{\mathrm{Lim}}\, {D_{\mathrm{Lie}}}(M)\Bigr)$ with a differential $\mathfrak{s}\Gamma(M, A)[t]$-module structure, where $\Gamma(M, A)$ is the Lie algebra of global sections of $A$. Set $m:=\dim M$ and $r:=\mathrm{rank}\, A$. Let $\mathcal{U}=(U_\lambda)_{\lambda\in\Lambda}$ be a framed covering of $A$.
As before we denote by $\mathbf{x}_\lambda$ and $\mathbf{e}_\lambda$ the chart and the frame associated with $U_\lambda$, respectively. Let $X\in \Gamma(M, A)$ be a global section. Fix $\lambda\in\Lambda$. We can write $X$ on $U_\lambda$ as $X|_{U_\lambda}=\sum_{j=1}^r f^{\lambda, j} e_\lambda^j$, where $f^{\lambda, j}$ is a function on $U_\lambda$. We set $$ \iota_{X}(V):=\sum_{j=1}^r f^{\lambda, j} b^j_{-1}\mathbf{1} \in \Gamma\Bigl(\mathbf{x}_\lambda (V), \underrightarrow{\Omega^{\gamma bc}_{\mathrm{ch}}}(\mathbb{R}^{m|r})\Bigr)=\Gamma(V, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma bc}_{\mathrm{ch}}(A; \mathcal{U}}_{\lambda})), $$ for an open subset $V\subset U_\lambda$. For each $n\in\mathbb{Z}$, the corresponding vertex operators $\iota_X(V)_{(n)}$ with open subsets $V\subset U_\lambda$ form a morphism $\underrightarrow{\iota_{X, (n)}^\lambda}$ on the presheaf $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma bc}_{\mathrm{ch}}(A; \mathcal{U}}_{\lambda})$. Note that for $n\ge0$, the morphism $\underrightarrow{\iota_{X, (n)}^\lambda}$ preserves the subpresheaf $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_{\lambda}}$. Let $n\ge0$. Since the morphism $\underrightarrow{\iota_{X, (n)}^\lambda}$ is homogeneous of weight $-n$, it induces a strict morphism of ind-objects $\iota_{X, (n)}^\lambda: \Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_{\lambda}\to \Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_{\lambda}$ by Remark \ref{rem: homogeneous morphism of presheaves induces that of ind-objects}. \begin{lemma} Let $n\in \mathbb{Z}_{\ge0}$. Then $$ \iota_{X, (n)}^\lambda|_{U_\lambda\cap\mu}\circ\vartheta'_{\lambda \mu}=\vartheta'_{\lambda \mu}\circ \iota_{X, (n)}^\mu|_{U_\mu\cap U_\lambda}, $$ hold for all $\lambda, \mu\in\Lambda$. \end{lemma} \begin{proof} It suffices to show $\underrightarrow{\iota_{X, (n)}^\lambda}\big|_{U_\lambda\cap\mu}\circ\underrightarrow{\vartheta'_{\lambda \mu}}=\underrightarrow{\vartheta'_{\lambda \mu}}\circ \underrightarrow{\iota_{X, (n)}^\mu}\big|_{U_\mu\cap U_\lambda}$ for each $\lambda, \mu\in\Lambda$. By Lemma \ref{lem: sufficient condition for B-linearilty}, it suffices to check the relation on the weight $0$ subspace. This follows from the fact that the operator $\underrightarrow{\iota_{X, (0)}^\lambda}$ coincides with the inner product for $X$ on the weight $0$ subspace and the fact that the operator $\underrightarrow{\iota_{X, (n)}^\lambda}$ has weight $n$. \end{proof} Thus for $n\ge0$, we have a morphism of ind-objects $\iota_{X, (n)}: \Omega^{\gamma c}_\mathrm{ch}(A)\to\Omega^{\gamma c}_\mathrm{ch}(A)$ by gluing the strict morphisms $(\iota_{X, (n)}^\lambda)_{\lambda\in\Lambda}$, which is independent of the choice of framed covering as in the case of $D_{\mathrm{Lie}}$. We set $$ L_{X, (n)}:= [D_{\mathrm{Lie}},\iota_{X, (n)}], $$ for $n \ge 0$. \begin{lemma}\label{lem: CHIRAL CARTAN RELATIONS} Let $X, Y$ be global sections of $A$ and $n, k$ non-negative integers. Then the following relations hold: \begin{enumerate}[(i)] \setlength{\topsep}{1pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{1pt} \setlength{\parsep}{0pt} \setlength{\leftmargin}{20pt} \setlength{\rightmargin}{0pt} \setlength{\listparindent}{0pt} \setlength{\labelsep}{3pt} \setlength{\labelwidth}{15pt} \setlength{\itemindent}{0pt} \renewcommand{\makelabel}{\upshape} \item $[L_{X, (n)},\iota_{Y, (k)}]=\iota_{[X,Y], (n+k)}$, \item $[L_{X, (n)},L_{Y, (k)}]=L_{[X,Y], (n+k)}$, \item $[D_{\mathrm{Lie}},L_{X, (n)}+\iota_{Y, (k)}]=L_{Y, (k)}$. 
\end{enumerate} \end{lemma} \begin{proof} By Remark \ref{rem: morphisms coincides if so locally}, it suffices to check the relations locally. Since the operators $D_{\mathrm{Lie}}^\lambda=D_{\mathrm{Lie}}|_{U_\lambda}$, $\iota_{Y, (k)}^\lambda=\iota_{Y, (k)}|_{U_\lambda}$ and $L_{X, (n)}^\lambda:=L_{X, (n)}|_{U_\lambda}$ come from morphisms of presheaves, it suffices to check that the corresponding morphisms $\underrightarrow{D_{\mathrm{Lie}}^\lambda}=\underrightarrow{\mathrm{Lim}}\, D_{\mathrm{Lie}}^\lambda$, $\underrightarrow{\iota_{Y, (k)}^\lambda}=\underrightarrow{\mathrm{Lim}}\, \iota_{Y, (k)}^\lambda$ and $\underrightarrow{L_{X, (n)}^\lambda}:=\underrightarrow{\mathrm{Lim}}\, L_{X, (n)}^\lambda$ on $\underrightarrow{\mathrm{Lim}}\, \Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})_\lambda$ satisfy the same relations. This is done by direct computations of OPEs. \end{proof} \begin{theorem} Let $\mathfrak{g}$ be a Lie algebra. Suppose we are given a morphism of Lie algebras $ x^A: \mathfrak{g}\to \Gamma(M, A), \quad \xi\mapsto x^A_\xi. $ Then the assignment $$ \mathfrak{sg}[t]\ni(\xi, \eta)t^n\mapsto \underrightarrow{\mathrm{Lim}}\, {L_{x^A_\xi, (n)}}(M)+\underrightarrow{\mathrm{Lim}}\, {\iota_{x^A_\eta, (n)}}(M)\in\mathrm{End}\bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)\bigr), $$ defines a differential $\mathfrak{sg}[t]$-module structure on the complex $\Bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr),$ $ \underrightarrow{\mathrm{Lim}}\, {D_{\mathrm{Lie}}}(M)\Bigr)$. \end{theorem} \begin{proof} By the above lemma, it suffices to check that the operator $\underrightarrow{\mathrm{Lim}}\, {\iota_{X, (n)}}(M)$ has degree $-1$ and weight $-n$ for any $n\in\mathbb{N}$ and $X\in\Gamma(M, A)$. For that purpose, it suffices to show that $[\underline{J}, \iota_{X, (n)}]=-\iota_{X, (n)}$ and $[\underline{H}, \iota_{X, (n)}]=-n\iota_{X, (n)}$ for any $n\in\mathbb{N}$ and $X\in\Gamma(M, A)$, where $\underline{J}$ and $\underline{H}$ are the degree-grading operator and the Hamiltonian of $\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)$. These relations follow from the fact that the same relations hold locally. Note that the continuity of the action of $\mathfrak{sg}[t]$ follows from the fact that the weight-grading on $\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)$ is bounded from below. \end{proof} This leads us to the following definitions. (See Section \ref{subsection: Chiral Equivariant Cohomology} for the definitions of differential $\mathfrak{sg}[t]$-modules, the chiral basic cohomology and the chiral equivariant cohomology.) \begin{definition} Let $G$ be a compact connected Lie group and $\mathfrak{g}$ the Lie algebra $\mathrm{Lie}(G)^\mathbb{K}$. Let $A$ be a Lie algebroid on a manifold $M$ with a Lie algebra morphism $\mathfrak{g}\to \Gamma(M, A)$. The \textbf{chiral basic Lie algebroid cohomology} of $A$, denoted by $H_{\mathrm{ch}, bas}(A)$, is the chiral basic cohomology of the differential $\mathfrak{sg}[t]$-module $\Bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr), \underrightarrow{\mathrm{Lim}}\, {D_{\mathrm{Lie}}}(M)\Bigr)$.
The \textbf{chiral equivariant Lie algebroid cohomology} of $A$, denoted by $H_{\mathrm{ch}, G}(A)$, is the chiral equivariant cohomology of the differential $\mathfrak{sg}[t]$-module $\Bigl(\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr), \underrightarrow{\mathrm{Lim}}\, {D_{\mathrm{Lie}}}(M)\Bigr)$. \end{definition} \begin{remark} The chiral basic Lie algebroid cohomology $H_{\mathrm{ch}, bas}(A)$ and the chiral equivariant Lie algebroid cohomology $H_{\mathrm{ch}, G}(A)$ are $\mathbb{Z}_{\ge0}$-graded vertex superalgebras. Indeed, $[\underrightarrow{\mathrm{Lim}}\, {L_{x^A_\xi, (n)}}(M), v_{(k)}]=\sum_{i\ge0}\binom{n}{i}(\underrightarrow{\mathrm{Lim}}\, {L_{x^A_\xi, (i)}}(M)v)_{(n+k-i)}$ and $[\underrightarrow{\mathrm{Lim}}\, {\iota_{x^A_\xi, (n)}}(M), v_{(k)}]=\sum_{i\ge0}\binom{n}{i}(\underrightarrow{\mathrm{Lim}}\, {\iota_{x^A_\xi, (i)}}(M)v)_{(n+k-i)}$ hold for any $n\ge0$, $k\in\mathbb{Z}$, $v\in\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)$ and $\xi\in\mathfrak{g}$ since the same relations hold locally. Then the assertion follows from Lemma \ref{lem: generalized commutant} and Lemma \ref{lem: tensor commutant} together with the fact that $\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)$ is $\mathbb{Z}_{\ge0}$-graded. Moreover $H_{\mathrm{ch}, bas}(A)[0]$ and $H_{\mathrm{ch}, G}(A)[0]$ coincide with the classical basic Lie algebroid cohomology and the classical equivariant Lie algebroid cohomology, respectively. This follows from the fact that $\Gamma\bigl(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)}\bigr)$ is $\mathbb{Z}_{\ge0}$-graded and the fact that $\underrightarrow{\mathrm{Lim}}\, {L_{x^A_\xi, (0)}}(M)$ and $\underrightarrow{\mathrm{Lim}}\, {\iota_{x^A_\xi, (0)}}(M)$ coincides with the classical Lie derivative and the classical interior product, respectively. \end{remark} \subsection{Transformation Lie Algebroid Cases} Let $M$ be a manifold with an infinitesimal action of a finite-dimensional Lie algebra $\mathfrak{g}$: $$ x^M: \mathfrak{g}\to \mathscr{X}(M), \quad \xi\mapsto x^M_\xi. $$ Let $A=M\times \mathfrak{g}$ be the corresponding transformation Lie algebroid (see Example \ref{ex: transformation Lie algebroid}). Let $(\xi_j)_j$ be a basis of $\mathfrak{g}$, $(\xi^*_j)_j$ the dual basis for $\mathfrak{g}^*$ and $(\Gamma^k_{i j})_{i, j, k}$ the structure constants, that is, constants satisfying $[\xi_i,\xi_j]=\sum_{k=1}^{\dim \mathfrak{g}}\Gamma^k_{i j}\xi_k$ for each $i, j=1, \dots, \dim\mathfrak{g}$. Let $\mathcal{U}=(U_\lambda)_{\lambda\in\Lambda}$ be a framed covering of $A$ consisting of open subsets with the constant frame $(\xi_j)_j$ as the frame on them. We can use the VSA-inductive sheaf $\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})$ for computations of the cohomologies $H_{\mathrm{ch}}(A)$, $H_{\mathrm{ch}, bas}(A)$ and $H_{\mathrm{ch}, G}(A)$, since they do not depend on the choice of framed coverings. By the construction of $\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})$, we see $\underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})}=\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}\otimes\langle c \rangle$, where $C^{\infty, \gamma c}_{\mathrm{ch}, M}:=\Omega^{\gamma c}_\mathrm{ch}(M\times \{0\})$, and $\langle c \rangle$ is the subalgebra of $\mathcal{E}(\mathfrak{g})$ generated by $c^{\xi^*}_0\mathbf{1}$ with $\xi^*\in\mathfrak{g}^*$. 
Therefore by the construction, the differential for the corresponding chiral Lie algebroid cohomology $H_{\mathrm{ch}}(A)$ is nothing but the differential for the continuous Lie algebra cohomology with coefficients in the $\mathfrak{g}[t]$-module $\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)$. The action of the element $\xi_j t^n\in\mathfrak{g}[t]$ on $\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)$ is given by the operator $$ \sum_{k\ge0}\sum_{i=1}^{\dim M}f^{i j}_{-k-n} \beta^i_k, $$ on each space $\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(U_\lambda)$ of local sections, where the vector field $x^M_{\xi_j}|_{U_\lambda}$ is written as $\sum_{i=1}^{\dim M}f^{i j} \partial/\partial x^i$ with $f^{i j}\in C^\infty(U_\lambda)$. Thus we have $$ H_{\mathrm{ch}}(A)=H\bigl(\mathfrak{g}[t]; \underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)\bigr). $$ Consider the Lie algebra morphism \begin{equation}\label{eq: canonical Lie algebra morphism of transformation Lie algebroids} \mathfrak{g}\to \Gamma(M, A)=C^\infty(M)\otimes \mathfrak{g}, \quad \xi \mapsto 1\otimes \xi. \end{equation} We will compute the corresponding chiral equivariant cohomology of $\mathcal{A}:=\Gamma(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})})$, namely, $H_{\mathrm{ch}, G}(A)$. Notice that \begin{equation}\label{eq: iota for transformation Lie algebroids} \iota^\mathcal{A}_{\xi, (n)}=b^{\xi}_n, \end{equation} for all $\xi\in\mathfrak{g}$ and $n\ge0$. Then the chiral basic cohomology $H_{\mathrm{ch}, bas}(A)$ of $\mathcal{A}=\Gamma(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})})$ is as follows: \begin{equation}\label{eq: chiral basic cohomology of transformation Lie algebroid} H_{\mathrm{ch}, bas}^i(A)= \begin{cases} \underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)^{\mathfrak{g}[t]}, & \text{when $i=0$,} \\ 0, & \text{otherwise.} \end{cases} \end{equation} Indeed, from \eqref{eq: iota for transformation Lie algebroids} we have $\bigl(\Gamma(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})})\bigr)_{hor}=\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)$ and therefore $$ \bigl(\Gamma(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})})\bigr)_{bas}=\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)^{\mathfrak{g}[t]}. $$ Equip $\mathcal{A}=\Gamma(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A; \mathcal{U})})=\underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)\otimes\langle c \rangle$ with the $\langle c \rangle$-module structure given by the left multiplication. We claim that this $\langle c \rangle$-module structure induces a chiral $W^*$-module structure. Proposition \ref{prop: construction of chiral W^*-modules} will be applied. Let $d_\mathcal{A}$ be the differential for $\mathcal{A}$, that is, that for the Lie algebra cohomology $H\bigl(\mathfrak{g}[t], \underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)\bigr)$. 
From the definition of $d_\mathcal{A}$ and the commutation relations in $\mathcal{E}(\mathfrak{g})$, we have \begin{align} [d_\mathcal{A},c^{\xi^*_l, \mathcal{A}}(z)]&=\Bigl[-\frac{1}{2}\sum_{i, j , k =1}^{\dim \mathfrak{g}}\sum_{\substack{s, t\le0,\\ \notag s+t+u=0}}\Gamma_{i j}^kc^{\xi^*_i}_sc^{\xi^*_j}_tb^{\xi_k}_u,\ c^{\xi^*_l, \mathcal{A}}(z)\Bigr] \\ \notag &=-\frac{1}{2}\sum_{i, j , k =1}^{\dim \mathfrak{g}}\sum_{\substack{s, t\le0,\\ s+t+u=0}}\Gamma_{i j}^kc^{\xi^*_i}_sc^{\xi^*_j}_t\langle \xi^*_l, \xi_k\rangle z^u \\ \notag &=-\frac{1}{2}\sum_{i, j=1}^{\dim \mathfrak{g}}\sum_{\substack{s, t\le0,\\ s+t+u=0}}\Gamma_{i j}^lc^{\xi^*_i}_sc^{\xi^*_j}_t z^u \\ &=-\frac{1}{2}\sum_{i, j=1}^{\dim \mathfrak{g}}\Gamma_{i j}^lc^{\xi^*_i, \mathcal{A}}(z)c^{\xi^*_j, \mathcal{A}}(z).\label{eq: the formula of [d_A,c] for translation Lie algebroid} \end{align} Therefore we have $[c^{\xi^*, \mathcal{A}}(z),[d_\mathcal{A}, c^{\eta^*, \mathcal{A}}(w)]]=0$ for all $\xi^*, \eta^*\in\mathfrak{g}$. By Lemma \ref{prop: construction of chiral W^*-modules}, we obtain a $\langle c, \gamma \rangle$-module structure $Y^\mathcal{A}$ on $\mathcal{A}$ by extending the above $\langle c \rangle$-module structure. From the definition of $\gamma^{\xi^*_l, \mathcal{A}}(z)$ (see the proof of Proposition \ref{prop: construction of chiral W^*-modules}) and \eqref{eq: the formula of [d_A,c] for translation Lie algebroid}, we have $$ \gamma^{\xi^*_l, \mathcal{A}}(z)=[d_\mathcal{A},c^{\xi^*_l, \mathcal{A}}(z)]+\frac{1}{2}\sum_{i, j=1}^{\dim \mathfrak{g}}\Gamma_{i j}^lc^{\xi^*_i, \mathcal{A}}(z)c^{\xi^*_j, \mathcal{A}}(z)=0, $$ for all $l=1, \dots, \dim \mathfrak{g}$. Therefore $[\iota_\xi^\mathcal{A}(z)_-,\gamma^{\xi^*, \mathcal{A}}(x)]=0$ for all $\xi\in\mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$. Recall that $\iota^\mathcal{A}_{\xi, (n)}=b^{\xi}_n$ for all $\xi\in\mathfrak{g}$ and $n\ge0$. From this, we have $[\iota_\xi^\mathcal{A}(z)_-,c^{\xi^*, \mathcal{A}}(w)]=\langle\xi^*, \xi\rangle\delta(z-w)_-$ for all $\xi\in\mathfrak{g}$ and $\xi^*\in\mathfrak{g}^*$. Thus we can apply Proposition \ref{prop: construction of chiral W^*-modules} and we see that the triple $(\mathcal{A}, d_\mathcal{A}, Y^\mathcal{A})$ is a chiral $W^*$-module. We have proved the following. \begin{theorem} For a transformation Lie algebroid $A=M\times\mathfrak{g}$ with the Lie algebra morphism \eqref{eq: canonical Lie algebra morphism of transformation Lie algebroids}, the differential $\mathfrak{sg}[t]$-module $\Gamma(M, \underrightarrow{\mathrm{Lim}}\, {\Omega^{\gamma c}_\mathrm{ch}(A)})$ has a canonical structure of a chiral $W^*$-module. \end{theorem} Therefore by Theorem \ref{thm: CHIRAL BASIC=CHIRAL EQUIVARIANT} and \eqref{eq: chiral basic cohomology of transformation Lie algebroid}, we have the following. \begin{corollary}\label{prop: chiral equivariant Lie algebroid cohomology for tramsf. Lie algebroids when comm} Let $G$ be a compact connected Lie group, $\mathfrak{g}$ the Lie algebra $\mathrm{Lie}(G)^\mathbb{K}$ and $A=M\times\mathfrak{g}$ a transformation Lie algebroid. Assume that $G$ is commutative. Then the following holds: \begin{equation} H_{\mathrm{ch}, G}^{i}(A)= \begin{cases} \underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, M}}(M)^{\mathfrak{g}[t]}, & \text{when $i=0$,} \\ 0, & \text{otherwise.} \end{cases} \end{equation} \end{corollary} We consider a special case. Let $(G, \Pi)$ be a compact connected Poisson-Lie group with the Lie algebra $\mathfrak{g}$ and $(G^*, \Pi^*)$ the dual Poisson-Lie group of $G$. 
Recall the Lie algebra morphism $$ \mathfrak{g}\to\Gamma(G^*, T^*G^*), \quad \xi\mapsto \xi^l, $$ where we denote by $\xi^l$ the left invariant $1$-form on $G^*$ whose value at $e$ is $\xi\in\mathfrak{g}$. Consider the corresponding chiral equivariant cohomology. By \cite[Proposition 5.25]{Lu90}, we have an isomorphism of Lie algebroids $$ T^*G^*\cong G^*\times \mathfrak{g}, $$ using the left invariant one-forms on $G^*$. Here we equip the trivial bundle $G^*\times \mathfrak{g}$ with the transformation Lie algebroid structure defined by the infinitesimal left dressing action. The following is a chiral analogue of \cite[Corollary 4.20]{Gin99}. \begin{proposition} In the above setting, assume that $(G, \Pi)$ is commutative. Then the following holds: \begin{equation} H_{\mathrm{ch}, G}^{i}(T^*G^*)= \begin{cases} \underrightarrow{\mathrm{Lim}}\, {C^{\infty, \gamma c}_{\mathrm{ch}, G^*}}(G^*), & \text{when $i=0$,} \\ 0, & \text{otherwise.} \end{cases} \end{equation} \end{proposition} \begin{proof} The infinitesimal left dressing action is trivial since $G$ is commutative. Therefore our assertion follows from Corollary \ref{prop: chiral equivariant Lie algebroid cohomology for tramsf. Lie algebroids when comm}. \end{proof} \section*{Acknowledgments} The author wishes to express his sincere gratitude to his advisor Professor Atsushi Matsuo for helpful advice and continuous encouragement during the course of this work. He is also thankful to Professor Hiroshi Yamauchi (Tokyo Women's Christian University) for advice and encouragement.
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\renewcommand{\H}{\mathcal{H}}
\def\Or[#1]{{\text{O}}\left({#1}\right)}
\def\dotl[#1,#2]{\left\langle #1, #2 \right\rangle}
\def\dotlb[#1,#2]{[ #1, #2 ]}
\def\dotp[#1,#2]{(#1) \cdot (#2)}
\def\aff[#1,#2]{\hat{#1}(#2)}
\def\>{\rangle}
\def\<{\langle}
\def\weight[#1,#2,#3]{\{(#1),#2,#3\}}
\def\ads[#1]{$\text{AdS}_{#1}$}
\newcommand{\vect}[1]{{\bf{#1}}}
\linespread{1.3}
\begin{document}
\unitlength = 1mm
\ \\
\begin{center}
{ \LARGE \textsc{Higher Spin de Sitter Holography from Functional Determinants} }
\vspace{0.8cm}

Dionysios Anninos$^1$, Frederik Denef$^{2}$, George Konstantinidis$^1$ and Edgar Shaghoulian$^1$
\vspace{1cm}
\vspace{0.5cm}

$^1$ {\it Stanford Institute for Theoretical Physics, Stanford University}\\
$^2$ {\it Institute for Theoretical Physics, University of Leuven}\\
\vspace{1.0cm}
\end{center}
\begin{abstract}
We discuss further aspects of the higher spin dS/CFT correspondence. Using a recent result of Dunne and Kirsten, it is shown how to numerically compute the partition function of the free $Sp(N)$ model for a large class of $SO(3)$ preserving deformations of the flat/round metric on $\mathbb{R}^3/S^3$ and the source of the spin-zero single-trace operator dual to the bulk scalar. We interpret this partition function as a Hartle-Hawking wavefunctional. It has a local maximum about the pure de Sitter vacuum. Restricting to $SO(3)$ preserving deformations, other local maxima (which exceed the one near the de Sitter vacuum) can peak at inhomogeneous and anisotropic values of the late time metric and scalar profile. Numerical experiments suggest the remarkable observation that, upon fixing a certain average of the bulk scalar profile at $\mathcal{I}^+$, the wavefunction becomes normalizable in all the other (infinite) directions of the deformation. We elucidate the meaning of double trace deformations in the context of dS/CFT as a change of basis and as a convolution. Finally, we discuss possible extensions of higher spin de Sitter holography by coupling the free theory to a Chern-Simons term.
\end{abstract}
\pagebreak
\setcounter{page}{1}
\pagestyle{plain}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
A natural object to consider when studying an asymptotically (approximately) de Sitter spacetime, such as the universe during the inflationary era, is the wavefunction \cite{Hartle:1983ai} as a function of small fluctuations of the bulk fields.
For a free massless scalar $\phi$ (such as the inflaton in slow roll inflation) in the Bunch-Davies vacuum state\footnote{It might be more appropriate to call it the Bunch-Davies-Hartle-Hawking-Euclidean-Schomblond-Spindel-Chernikov-Tagirov-Mottola-Allen-Sasaki-Tanaka-Yamamoto-Critchley-Dowker-Candelas-Raine-Boerner-Duerr vacuum state.} $| E \rangle$ \cite{Bunch:1978yq,Mottola:1984ar,Allen:1985ux,Chernikov:1968zm,Schomblond:1976xc,Sasaki:1994yt,Dowker:1975tf,Candelas:1975du,Boerner:1969ff} in a fixed four-dimensional de Sitter background: \begin{equation}\label{massless} ds^2 = \frac{\ell^2}{\eta^2} \left( - d\eta^2 + d\vec{x}^2 \right)~, \quad \eta \in (-\infty,0)~, \end{equation} one finds the late time Hartle-Hawking Gaussian wavefunctional at $\eta = \eta_c \to 0$ \cite{Maldacena:2002vr}: \begin{equation} \lim_{\eta_c \to 0} | \Psi_{HH} \left[ \varphi(\vec{x}),\eta_c \right] | \sim \exp \left[{- \frac{{\ell^2}}{2} \int \frac{d^{3} k}{(2\pi)^3} \; k^3 \; \varphi_{\vec{k}} \; \varphi_{-\vec{k}} } \right]~, \end{equation} where $\varphi_{\vec{k}}$ are the Fourier components of the late time profile $\varphi(\vec{x})$. Such a Gaussian wavefunction gives rise to the scale-invariant fluctuations of the cosmic background radiation. Understanding the behavior of such a wavefunction for a large range of values for its arguments, which include the metric, and with the inclusion of quantum corrections is a basic problem in quantum cosmology. The perturbative Bunch-Davies wavefunction (\ref{massless}) was noticed to be a simple analytic continuation \cite{Maldacena:2002vr} of the partition function of a free massless field in a fixed Euclidean anti-de Sitter space. These observations, coupled with the correspondence between anti-de Sitter space and conformal field theory, motivate the proposal that at late times (or large spatial volume) $\Psi_{HH}$ is computed by a statistical (and hence Euclidean) conformal field theory, in what has come to be known as the dS/CFT conjecture \cite{Maldacena:2002vr,Strominger:2001pn,Witten:2001kn}.\footnote{See \cite{Anninos:2012qw} for a discussion of several aspects of de Sitter space. Other proposals include \cite{Alishahiha:2004md,Dong:2010pm,Dong:2011uf,McFadden:2009fg,Banks:2011qf,Banks:2008ep,Freivogel:2006xu,Anninos:2011kh,Roberts:2012jw,Harlow:2012dd,Garriga:2008ks,Parikh:2004wh,Anninos:2009yc,Anninos:2010gh,Anninos:2011zn,Anninos:2011af,Hertog:2011ky}.} In its weakest form dS/CFT conjectures that the Taylor coefficients of the logarithm of the late time Hartle-Hawking wavefunctional expanded about the empty de Sitter vacuum at large $N \sim \left( \ell/\ell_{p} \right)^{\#}$ are the correlation functions of such a non-unitary CFT. Namely, at some late time cutoff $\eta = \eta_c \to 0$ we have: \begin{equation}\label{pertdscft} \log \Psi_{HH}[\varphi(\vec{x}),\eta_c] = \sum_{n=1}^{\infty} \frac{1}{n !} \left( \int d^3 {x}_1 \ldots \int d^3 {x}_n \; \varphi(\vec{x}_1) \ldots \varphi(\vec{x}_n) \; \langle \mathcal{O}(\vec{x}_1) \ldots \mathcal{O}(\vec{x}_n) \rangle_{CFT} \right)~. \end{equation} The correlators $\langle \mathcal{O}(\vec{x}_1) \ldots \mathcal{O}(\vec{x}_n) \rangle_{CFT}$, where the operator $\mathcal{O}$ has been rescaled by an appropriate $\eta_c$ dependent factor, compute late time bulk correlation functions with future boundary conditions \cite{Anninos:2011jp}. 
The bulk late time profiles $\varphi(\vec{x})$ are taken to be infinitesimal such that $\Psi_{HH}[\varphi(\vec{x}),\eta_c]$ is merely a generating function of late time correlators about $\varphi(\vec{x})=0$. In its strongest form, the claim is that the CFT is a non-perturbative definition of $\Psi_{HH}$ for finite deviations away from the pure de Sitter vacuum and at finite $N$. Particularly, $\Psi_{HH}$ is computed by the partition function of the putative CFT with sources turned on. The single-trace operators are dual to the bulk de Sitter fields. Abstractly speaking, if we could write down a complete basis $\mathcal{B}$ (which may include information about topology, geometry and matter) for the Hilbert space of our theory, $\Psi_{HH}$ would be computing the overlap of the Hartle-Hawking state $| E \rangle$ with a particular state $|\beta\rangle \in \mathcal{B}$. One could also consider computing the partition function with sources for more general multi-trace operators turned on. As we shall see this computes the overlap of $| E \rangle$ with states which are not sharp eigenstates of the field operator or its conjugate momentum. More dramatically, if there are single-trace operators which are irrelevant they may correspond to an exit from the de Sitter phase (see for instance \cite{Bzowski:2012ih}). de Sitter space arises as a non-linear classical solution to four-dimensional higher spin gravity \cite{Vasiliev:1990en,Vasiliev:1999ba,Iazeolla:2007wt,Vasiliev:1986td,Vasiliev:1992av,Vasiliev:1995dn}. This theory has a tower of light particles with increasing spin, including a spinless bulk scalar with mass $m^2\ell^2 = +2$ and a spin-two graviton. The scalar potential has a minimum about the pure de Sitter solution and the kinetic terms of the higher spin particles carry the right signs and are canonical. Hence the de Sitter vacuum is perturbatively stable and free of tachyons and ghosts in this theory. Beyond perturbation theory, the late time Hartle-Hawking wavefunctional of asymptotically de Sitter space in higher spin gravity (at least when the topology at $\mathcal{I}^+$ is sufficiently simple) is conjectured to be computed by the partition function of the $Sp(N)$ model \cite{arXiv:1108.5735} (see also \cite{Das:2012dt,Ng:2012xp}), with $N \sim (\ell/\ell_p)^2~$. This model comes in two flavors, either a free theory of $N$ anti-commuting scalar fields transforming as vectors under the $Sp(N)$ symmetry or as a critical theory obtained from the $Sp(N)$ model by a double trace deformation \cite{LeClair:2007iy}. The partition function of the critical model (at least at large $N$) as a functional of the sources of the single-trace operators computes the wavefunction in the ordinary field basis. On the other hand, the free theory computes the wavefunction in a slightly modified basis. The quantum mechanical analogue of this basis is given by eigenstates of the Hermitian operator $\hat{\varsigma} = \left( \beta \hat{x} - \alpha \hat{p} \right)$ with $\alpha$, $\beta \in \mathbb{R}$. Given the wavefunction in the coordinate basis $\psi(x)$ we can compute in the $\hat{\varsigma}$-basis by performing the transform: \begin{equation}\label{tranqminv} \psi(\varsigma) = \frac{1}{\sqrt{2\pi\alpha}}\int dx \; e^{-\frac{i}{\alpha} \; \left( {\frac{{\beta}{x^2}}{2} - \varsigma { x } } \right)} \psi(x)~, \quad \psi(x) = \frac{1}{\sqrt{2\pi \alpha}}\; e^{\frac{ i \beta x^2 }{2 \alpha } }\int \; d\varsigma \; e^{-\frac{i \varsigma x}{ \alpha}} \psi(\varsigma). 
\end{equation} Normalizability in the $\hat{x}$-basis implies normalizability in the $\hat{\varsigma}$-basis and vice-versa. However, a node-less $\psi(x)$ will not necessarily give a node-less $\psi(\varsigma)$. \begin{comment} From this expression, we can further relate the wavefunction in momentum space to the wavefunction in $\varsigma$ space as: \begin{equation}\label{tranqminv} \psi(p) =\frac{sgn(\alpha)+i sgn(\beta)}{2 sgn(\alpha)\sqrt{\pi \beta}} \int \; d\varsigma \; e^{\frac{-i(\varsigma + \alpha p)^2}{2\alpha\beta}} \; \psi(\varsigma)~. \end{equation} \end{comment} The free $Sp(N)$ model computes the late time Hartle-Hawking wavefunction of the bulk scalar in the eigenbasis of the late time operator \cite{Anninos:2012ft}: \begin{equation}\label{convention2} {\sqrt{N} \; \eta_c^2} \; \hat{\sigma} = \hat{\phi} - \eta_c^3 \; \hat{\pi}_{\phi} ~, \quad \quad \hat{\pi}_\phi \equiv - \frac{i}{\sqrt{\det{g_{ij}}}} \frac{\delta}{\delta \phi}~, \end{equation} where $\hat{\phi}$ is the bulk scalar field operator and $\hat{\pi}_{\phi}$ is the field momentum density operator. We have taken $|\eta_c| \ll 1 $ as a late time cutoff and the combination $g_{ij}/\eta_c^2$ represents the spatial metric at this time in Fefferman-Graham gauge. It is somewhat remarkable that there exists a basis for which the wavefunctional is computed by a free dual theory given that in the ordinary field basis it is computed by a strongly coupled theory. The partition function of the free $Sp(N)$ theory on an $\mathbb{R}^3$ topology is an explicit resummation of the correlation functions and there are no non-perturbative phenomena. The remaining coordinates of the wavefunctional, which are late time profiles of the higher spin fields including the bulk graviton, are computed in the ordinary field basis for both the free and critical models. The wavefunctionals for the bulk scalar in the two different bases are related by a functional version of the transform (\ref{tranqminv}). As shown in \cite{Anninos:2012ft}, in the large $N$ limit performing the transform amounts to finding the saddles of a functional equation. One of these saddles (not necessarily the dominant one) gives the field basis wavefunction whose perturbative expansion agrees with that computed perturbatively in the bulk about the pure de Sitter solution. In most of what follows, we will perform computations in the $\hat{\sigma}$-basis, which amounts to the calculation of a functional determinant. This is a considerably hard object to compute and we will limit ourselves to situations where we have conditioned all bulk fields except the metric $g_{ij}$ and the scalar $\phi$ to have vanishing late time profiles. We should emphasize that this amounts to a very sharp conditioning and ultimately the situation could be altered by allowing higher spin deformations.\footnote{It might be worth mentioning that non-linear bulk solutions exist in higher spin gravity for which {\it only} the bulk metric and scalar are turned on \cite{Sezgin:2005hf}.} The main body of this paper is dedicated to the study of the partition function of the $Sp(N)$ model for a large class of $SO(3)$ preserving deformations of the bulk scalar and graviton. That is to say we study the late time wavefunction for bulk graviton and scalar configurations with late time profile on an $\mathbb{R}^3$ topology: \begin{equation} ds^2 = dr^2 + f(r)^2 \; r^2 \; d\Omega^2~, \quad \phi = \phi(r)~. 
\end{equation} where $d\Omega^2_2$ is the round metric on an $S^2$ whose $SO(3)$ symmetry is the one preserved. We also study the analogous problem on an $S^3$ topology, which amounts to a simple conformal transformation of the $\mathbb{R}^3$ case, where the late time profiles take the form: \begin{equation} ds^2 = d\psi^2 + f(\psi)^2 \; \sin^2\psi \; d\Omega^2~, \quad \phi = \phi(\psi)~. \end{equation} This allows us to examine the behavior of the wavefunction of higher spin de Sitter space for inhomogeneous and anisotropic deformations which extend the rather uniform and homogenous deformations studied in \cite{Anninos:2012ft}. In \cite{Anninos:2012ft} it was found that the wavefunction in the $\hat{\sigma}$-basis diverges as a function of uniform mode of the bulk scalar on the round metric on $S^3$. One of the questions we would like to understand is whether such non-normalizabilities persist for less `global' late time configurations such as the above $SO(3)$ deformations. The above metric deformations are conformally equivalent to the flat/round metric on $\mathbb{R}^3/S^3$. We find particularly striking numerical evidence, discussed in section \ref{secconj}, that upon fixing the uniform mode of the bulk scalar on $S^3$ (in the conformal frame where it is endowed with the standard metric) all other directions of a general $SO(3)$ late time deformation are normalizable. We have also analyzed a different geometric deformation, this time homogeneous but anisotropic, which does not keep the metric in the same conformal class. This is a squashing deformation $\alpha$ of the round metric on $S^3$ expressed as an $S^1$ fiber over an $S^2$: \begin{equation}\label{squashedsphere} ds^2 = \frac{1}{4} \left( d\theta^2 + \cos^2\theta d\phi^2 + \frac{1}{1+\alpha}\left( d\psi + \sin\theta d\phi \right)^2 \right)~, \end{equation} along with a uniform late time profile for the bulk scalar. When $\alpha = 0 $ the metric (\ref{squashedsphere}) reduces to the standard metric on $S^3$. Once again, so long as the scalar profile is kept fixed we find that the wavefunction is bounded in the $\alpha$ direction. We begin by briefly reviewing the $Sp(N)$ model in section \ref{sectwo}. In section \ref{secfour}, using technology developed in \cite{Dunne:2006ct}, we compute the wavefunction (in the $\hat{\sigma}$-basis) for some Gaussian radial deformations of the bulk scalar. This amounts to computing a functional determinant of a scalar field with a radially dependent mass term. In section \ref{balloonsec} we compute the wavefunction for a radial deformation of the geometry, in the presence of a radial mass deformation, that takes it from the flat metric on $\mathbb{R}^3$ to a general form $ds^2 = dr^2 + r^2 f(r)^2 d\Omega^2_2\;$. In section \ref{secconj} we compute the wavefunction for several harmonics on the three-sphere and linear combinations thereof and note that the wavefunction seems to diverge only when the zero harmonic becomes large and negative. We then discuss the behavior of the wavefunction on a squashed sphere with a uniform profile of the late time bulk scalar in section \ref{secthree}. In section \ref{secthreehalf} we make some general remarks about double trace deformations. We end by speculating on possible extensions of higher spin holography in section \ref{secsix}. Most of our calculations can carry over to the $O(N)$ model and its AdS$_4$ dual in higher spin gravity \cite{Klebanov:2002ja,Sezgin:2002rt,hep-th/0103247,arXiv:0912.3462}. 
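As a simple numerical illustration of the quantum-mechanical change of basis (\ref{tranqminv}) quoted above, one can check on a finite grid that the two integral transforms are mutually inverse. The short script below is meant only as such a sanity check: the values of $\alpha$ and $\beta$, the Gaussian test state and the grid are illustrative choices of ours and play no role in the bulk theory.
\begin{verbatim}
import numpy as np

# Illustrative parameters (our choice): varsigma-hat = beta*x-hat - alpha*p-hat, alpha > 0.
alpha, beta = 0.7, 1.3
x = np.linspace(-10.0, 10.0, 2001)   # position grid
s = np.linspace(-10.0, 10.0, 2001)   # varsigma grid
dx, ds = x[1] - x[0], s[1] - s[0]

psi_x = np.pi**(-0.25) * np.exp(-x**2 / 2.0)   # Gaussian test state in the x-basis

# Forward: psi(s) = (2 pi alpha)^(-1/2) int dx exp[-(i/alpha)(beta x^2/2 - s x)] psi(x)
kernel = np.exp(-1j / alpha * (beta * x[None, :]**2 / 2.0 - s[:, None] * x[None, :]))
psi_s = kernel.dot(psi_x) * dx / np.sqrt(2.0 * np.pi * alpha)

# Inverse: psi(x) = (2 pi alpha)^(-1/2) e^{i beta x^2/(2 alpha)} int ds e^{-i s x/alpha} psi(s)
kernel_inv = np.exp(-1j * s[None, :] * x[:, None] / alpha)
psi_x_back = (np.exp(1j * beta * x**2 / (2.0 * alpha))
              * kernel_inv.dot(psi_s) * ds / np.sqrt(2.0 * np.pi * alpha))

# Deviation from the original state: small, limited only by the finite grid.
print(np.max(np.abs(psi_x_back - psi_x)))
\end{verbatim}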
\section{Wavefunctionals and the free $Sp(N)$ model}\label{sectwo} We wish to study the Hartle-Hawking wavefunctional of an asymptotically de Sitter higher spin gravity for deformations of the bulk scalar and graviton away from the pure de Sitter solution. We will restrict the topology of space to be $\mathbb{R}^3$ or $S^3$ and allow only deformations that decay sufficiently fast at infinity. In this section, we remind the reader of the $Sp(N)$ theory and discuss how to compute its functional determinant for certain $SO(3)$ invariant radial deformations. One motivation to do so is to understand the behavior of the wavefunction of higher spin de Sitter space for mass deformations that are more `localized' than those studied in \cite{Anninos:2012ft} which were uniform over the entire $S^3$. It is also worth noting that computing analogous pieces of the Hartle-Hawking wavefunction for a simple toy model of Einstein gravity coupled to a scalar field with a simple potential would require significant numerical work even in the classical limit, let alone at finite $N$. One would have to find a complex solution of the Euclidean equations of motion that caps off smoothly in the interior and has the prescribed boundary values at large volume, and compute its on-shell action. \subsection{Wavefunctional}\label{wf} Recall that the action of the free $Sp(N)$ model on a curved background $g_{ij}$ with a source, $m(x^i)$, turned on for the $J^{(0)} = \Omega_{AB} \chi^A \chi^B \equiv \chi \cdot \chi$ operator (dual to the bulk scalar) is given by: \begin{equation} S = \frac{1}{2} \int d^3 x \sqrt{g} \; \Omega_{AB} \left( \partial_i \chi^A \partial_j \chi^B g^{ij} + \frac{R[g]}{8} \chi^A \chi^B + m(x^i) \chi^A \chi^B \right)~, \quad \{ A, B\} = 1,2,\ldots,N~, \end{equation} where $N$ should be even. The fields $\chi^A$ are anti-commuting scalars that transform as $Sp(N)$ vectors and $\Omega_{AB}$ is the symplectic form. Notice that due to the presence of the conformal coupling, the action is invariant under a local Weyl transformation of the metric: $g_{ij} \to e^{2W(x^i)} g_{ij}$, so long as we also rescale the source as $m(x^i) \to e^{-2W(x^i)} m(x^i)$. From the bulk point of view this amounts to performing a coordinate transformation $\eta = e^{-W(x^i)} \eta$, as can be seen by studying the Starobinski-Fefferman-Graham expansion \cite{Starobinsky:1982mr,fg,Anninos:2010zf} near $\eta = 0$: \begin{equation}\label{fg} \frac{ds^2}{\ell^2} = -\frac{d\eta^2}{\eta^2} + \frac{1}{\eta^2} \; g_{ij}(x^i) dx^i dx^j +\ldots~, \quad \phi = \eta \; \nu({x^i}) + \eta^2 \; \mu(x^i) + \ldots \; . \end{equation} Notice that $\mu(x^i) \equiv \sqrt{N} m(x^i)$ is {\it not} the coefficient of the slowest falling power of $\eta$ for bulk scalar. At the linearized level, the $\nu(x^i)$ profile is dual to the vev of the $J^{(0)}$ operator in the presence of an infinitesimal $m(x^i)$ source. Computing the late time Hartle-Hawking wavefunctional in the $\hat{\sigma}$-basis with $\sigma = m(x^i)$ amounts to computing the partition function of the $Sp(N)$ theory with finite sources turned on. Given that it is a Gaussian theory, we can integrate out the anti-commuting $\chi^A$ fields and find:\ \begin{equation} \lim_{\eta_c\to 0} \Psi_{HH} \left[g_{ij}, m(x^i), \eta_c \right] = Z_{free} [g_{ij}, m(x^i)] = \left( \det \left[ - \nabla_g^2 + \frac{R[g]}{8} + m(x^i) \right] \right)^{N/2}~, \end{equation} where: \begin{equation} Z_{free} [g_{ij}, m(x^i)] \equiv \prod_{A=1}^N \int \mathcal{D} \chi^A \; e^{-S[\chi^A,m(x^i)]}~. 
\end{equation} In the case of a metric $g_{ij} = e^{2W(x^i)} \delta_{ij}$ that is conformally equivalent to the flat metric on $\mathbb{R}^3$, it is convenient to compute the functional determinant in the conformal frame where $g_{ij}$ is the flat metric. This amounts to rescaling the source to: \begin{equation} \hat{m}(x^i) = e^{2W(x^i)} m(x^i)~. \end{equation} We will use this fact in section \ref{balloonsec}. \subsection{Functional determinant for radial deformations}\label{radialwf} We have seen that for conformally flat metrics our problem reduces to computing a functional determinant: \begin{equation} \det \left[ - \nabla_{\mathbb{R}^3}^2 + \hat{m}(x^i) \right]~, \end{equation} where $\nabla_{\mathbb{R}^3}^2$ is the Laplacian of the flat metric on $\mathbb{R}^3$, namely $ds^2 = dr^2 + r^2 d\Omega^2_2$. The above object is badly divergent unless we regulate it somehow. We will regulate it using a heat kernel or zeta function approach, both of which give the same answer. In fact, this precise problem has been studied by Dunne and Kirsten in \cite{Dunne:2006ct} for functions $\hat{m}(x^i)$ which only depend on the radial coordinate, i.e. $\hat{m}(x^i) = \hat{m}(r)$, and which vanish sufficiently fast at infinity. It was shown that the zeta function regulated determinant is given by the following sum: \begin{equation}\label{dunne} \log \left( \frac{\det \left[ - \nabla^2 + \mu^2 + \hat{m}(r) \right] }{\det \left[ - \nabla^2 + \mu^2 \right] } \right) = \sum_{l=0}^\infty (2l+1) \left( \log T^{(l)}(\infty) - \frac{ \int_0^\infty dr \; r \; \hat{m}(r) }{2 l+1} \right)~. \end{equation} In the above, the factor $(2l +1)$ originates from the degeneracy of eigenfunctions on a two-sphere and $T^{(l)}(r)$ solves the equation: \begin{equation} - \frac{d^2}{dr^2} T^{(l)} (r) - 2 \left[ \frac{(1+l)}{r} + \mu \frac{ I_{3/2+l}(\mu r) }{ I_{1/2+l}(\mu r) } \right] \frac{d}{dr} T^{(l)} (r) + \hat{m}(r) T^{(l)}(r) = 0~, \end{equation} with boundary conditions $T^{(l)}(0) = 1$ and $d T^{(l)}(0) / dr = 0$. The parameter $\mu^2 \in \mathbb{R}$ is a constant mass parameter that we will set to zero. The derivation of the above formula employs the Gelfand-Yaglom theorem \cite{Gelfand:1959nq}, which expresses the regulated functional determinant of a one-dimensional Schr\"{o}dinger operator in terms of a single boundary value problem. The problem of computing the logarithm of a ratio of functional determinants for purely radial operators reduces to an infinite number of Gelfand-Yaglom problems, one for each $l$, whose solutions need to be summed (this is the first piece on the right hand side of (\ref{dunne})) and regularized (this is the second piece on the right hand side of (\ref{dunne})). The applicability of the formula requires $\hat{m}(r)$ to vanish faster than $r^{-2}$ at infinity, and these are the only types of deformations for which we will compute the wavefunction in the sections that follow. When implementing the above formula we must sum up to a certain cutoff $l = l_{max}$ which we take to be $l_{max} = 45$. A discussion of how the error decreases with $l_{max}$ is given in appendix \ref{r3s3}. \section{Simple examples of radial deformations}\label{secfour} The purpose of this section is to exploit the general formula (\ref{dunne}) for a simple set of radial functions. 
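As an illustration of how the regulated sum (\ref{dunne}) can be evaluated in practice, a minimal numerical sketch (in Python) is given below. The routine names, the series used to start the integration near $r=0$, the integration range and the tolerances are illustrative choices and are not claimed to be the implementation behind the published figures; the sketch assumes $\mu = 0$ and a profile $\hat{m}(r)$ that decays faster than $r^{-2}$.
\begin{verbatim}
# Sketch: zeta-regulated log-det ratio of the Dunne-Kirsten formula above,
# for a radial profile m(r) with mu = 0.  Assumes m(r) decays faster than
# 1/r^2; l_max, r_max and tolerances are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp, quad

def log_T_infinity(m, l, r_max=40.0, r0=1e-6):
    # Gelfand-Yaglom problem at mu = 0:
    #   T'' + 2(l+1)/r T' = m(r) T ,  T(0) = 1 ,  T'(0) = 0.
    # Start slightly away from r = 0 using the series T ~ 1 + m(0) r^2/(2(2l+3)).
    a = m(0.0) / (2.0 * (2 * l + 3))
    y0 = [1.0 + a * r0**2, 2.0 * a * r0]
    def rhs(r, y):
        T, dT = y
        return [dT, m(r) * T - 2.0 * (l + 1) / r * dT]
    sol = solve_ivp(rhs, (r0, r_max), y0, rtol=1e-10, atol=1e-12)
    # Track only |T(infinity)|; sign changes of T mark zeros of the determinant.
    return np.log(np.abs(sol.y[0, -1]))

def log_det_ratio(m, l_max=45, r_max=40.0):
    # Regulated sum: sum_l [ (2l+1) log T_l(infinity) - int_0^infty r m(r) dr ].
    I = quad(lambda r: r * m(r), 0.0, np.inf)[0]
    return sum((2 * l + 1) * log_T_infinity(m, l, r_max) - I
               for l in range(l_max + 1))
\end{verbatim}
Up to an $\hat{m}$-independent normalization, $\log |\Psi_{HH}|^2$ is then $N$ times the quantity returned by the last routine.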
By studying $\Psi_{HH}$ as a functional of $\hat{m}(r)$ we can identify some qualitative features already observed in \cite{Anninos:2012ft}, such as regions where the wavefunction oscillates and grows exponentially, as well as some new ones. Furthermore, we can study its dependence on more detailed features of the localized deformation. The zeroes of the wavefunction in the $\hat{\sigma}$-basis occur only when the effective potential $V_{eff}(r) = l(l+1)/r^2 +\hat{m}(r)$ of the differential operator $-\nabla^2_{\mathbb{R}^3} + \hat{m}(r)$ is negative for some range of $r$. If the effective potential were positive for all $r$ the operator could not have vanishing eigenvalues, and hence the wavefunction could not vanish. Thus, we expect all oscillations of the wavefunction in the $\hat{\sigma}$-basis to occur in directions where $\hat{m}(r)$ is negative for some range of $r$. Assessing the magnitude of the wavefunction as a functional of $\hat{m}(r)$ is a more complicated task. We observe that the wavefunctional acquires increasingly high local maxima between its oscillations only in regions where the quantity $I_{\hat m} \equiv \int_0^\infty dr \;r \; \hat{m}(r)$ appearing in (\ref{dunne}) becomes large and negative. It is important to note that because we are working with the flat metric on $\mathbb{R}^3$, which has no scale, our functional determinants will have an associated scaling symmetry given by $r \to r/\lambda$ and $\hat{m}(r) \to \hat{m}(r/\lambda)/\lambda^2$. We should thus fix the scaling when studying the functional determinant/wavefunction. \begin{figure} \begin{center} { \includegraphics[height=1.7in]{pimple_g.pdf} \includegraphics[height=1.7in]{dimple_g.pdf} \includegraphics[height=1.54in]{dbgauss_g.pdf} } \caption{Examples of the radial deformations (\ref{gauss}) on the left, (\ref{r2gauss}) in the middle and (\ref{dbgauss}) on the right. We have suppressed the polar coordinate $\theta$ of the $S^2$ but kept the azimuthal direction.} \label{gaussians} \end{center} \end{figure} \subsection{Single Gaussian} We first consider $\hat{m}(r)$ to be given by a general single Gaussian profile: \begin{equation}\label{gauss} \hat{m}(r) = A \; \frac{e^{-r^2/\lambda^2}}{\lambda^2}~, \quad \quad \int_0^\infty dr \; r \; \hat{m}(r) = \frac{A}{2}~. \end{equation} See the left panel of figure \ref{gaussians} for an illustration of this deformation. Using equation (\ref{dunne}) we can explore $|\Psi_{HH}(\lambda,A)|$, where from now on $\Psi_{HH}$ always denotes the late time wavefunctional. Using the scaling relation $r \to r/\widetilde{\lambda}$ and $\hat{m}(r) \to \hat{m}(r/\widetilde{\lambda})/\widetilde{\lambda}^2$ we can set $\lambda = 1$. In figure \ref{singlegauss} we show a plot of the functional determinant. We immediately notice the same qualitative feature that was present for the constant mass deformation on a round $S^3$ (displayed later on in figure \ref{sigmafig}). \begin{figure} \begin{center} { {\includegraphics[height=2.2in]{singleGaussian.pdf}}} \caption{Plot of $|\Psi_{HH}(\lambda,A)|^2$ for $N=2$ for the Gaussian profile (\ref{gauss}) with $\lambda=1$ using $l_{max} = 45$. The solid blue line is an interpolation of the numerically determined points (shown in red). The wavefunction grows and oscillates in the negative $A$ direction.}\label{singlegauss} \end{center} \end{figure} Namely, it oscillates and grows exponentially in the negative $A$ direction. 
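For concreteness, a scan of the kind displayed in figure \ref{singlegauss} can be assembled from the routine sketched at the beginning of this section. The snippet below is again schematic: it assumes the \texttt{log\_det\_ratio} sketch above is in scope, and the grid in $A$ is an arbitrary choice rather than the one used for the published figure.
\begin{verbatim}
# Sketch: scan of the single Gaussian profile (lambda = 1) over the amplitude A.
# Assumes the log_det_ratio routine sketched earlier is available.
import numpy as np

A_values = np.linspace(-6.0, 2.0, 81)
log_psi_sq = []
for A in A_values:
    profile = lambda r, A=A: A * np.exp(-r**2)        # Gaussian with lambda = 1
    log_psi_sq.append(2.0 * log_det_ratio(profile))   # log|Psi_HH|^2 for N = 2
\end{verbatim}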
This is somewhat expected since our deformation is qualitatively similar to the mass deformation on the flat metric on $\mathbb{R}^3$ one gets by the conformal transformation of a constant mass on $S^3$ (see appendix \ref{r3s3}). In particular, all oscillations occur for $A<0$ and the magnitude of the local maxima increases for increasing $|A|$ for fixed $\lambda$. \begin{figure} \begin{center} {\includegraphics[height=2.8in]{one_gauss_ring_a_is_5.pdf} \quad \includegraphics[width=2.8in]{gaussian_r2_Aslice_-022.pdf}}\caption{Left: Density plot of $|\Psi_{HH}(\lambda,a,A)|^2$ for $N=2$ for the profile (\ref{r2gauss}) as a function of A (vertical) and $\lambda$ (horizontal) for $a=5$ using $l_{max} = 45$. Again, the wavefunction grows and oscillates in the negative $A$ and positive $\lambda$ directions. Right: Plot of $|\Psi_{HH}(\lambda,a,A)|^2$ for the profile (\ref{r2gauss}) as a function of $\lambda$ for $A=-0.022$ and $a=5$.}\label{singlegaussii} \end{center} \end{figure} \subsection{Gaussian Ring} We can also study the functional determinant of a profile of the type: \begin{multline}\label{r2gauss} \hat{m}(r) = A \; e^{-(r-a)^2/\lambda^2} \; r^2~, \\ \int_0^\infty dr \; r \; \hat{m}(r) = \frac{A \; \lambda}{4} \left[2 e^{-{a^2}/{\lambda ^2}} \lambda \left(a^2+\lambda^2\right)+a \sqrt{\pi } \left(2 a^2+3 \lambda ^2\right) \left(1+\text{Erf}\left[\frac{a}{\lambda }\right]\right)\right], \end{multline} which describes a Gaussian-like ring peaked around $r \sim 1/2 \left(a+\sqrt{a^2+4 \lambda ^2}\right)$. $\text{Erf}[x]$ denotes the error function. See the middle panel of figure \ref{gaussians} for an illustration of this deformation. The factor of $r^2$ is included to ensure that the profile is continuously differentiable near the origin. Again, using the scaling relation $r \to r/\widetilde{\lambda}$ and $\hat{m}(r) \to \hat{m}(r/\widetilde{\lambda})/\widetilde{\lambda}^2$ we either fix the value of $\lambda$, $|a|$ or $|A|$. We show an example in figure \ref{singlegaussii} where we have fixed the value of $a$. \subsection{Double Gaussian} As a third example we consider a double Gaussian profile: \begin{equation}\label{dbgauss} \hat{m}(r) = r^2 \left( A_1 \; e^{-(r-a_1)^2/\lambda_1^2} + A_2 \; e^{-(r-a_2)^2/\lambda_2^2} \right)~. \end{equation} See the right panel of figure \ref{gaussians} for an illustration of this deformation. An example of $|\Psi_{HH}(\lambda_i,a_i,A_i)|^2$ with $a_1 = 0$ is shown in figure \ref{doublegauss}. Once again we observe a pattern of maxima encircled by regions where the wavefunction squared vanishes identically. Furthermore, the wavefunction grows for increasingly negative values of $A_1$ and $A_2$. \begin{figure} \begin{center} {\includegraphics[height=2.9in]{double_gaussian_no_r2.pdf} \quad \includegraphics[height=2.8in]{double_gaussian_lamlam.pdf} } \caption{Left: Density plot of $|\Psi_{HH}(\lambda_i,a_i,A_i)|^2$ for $N=2$ for the double Gaussian profile (\ref{dbgauss}) as a function of $A_1$ ($x$-axis) and $A_2$ ($y$-axis) for $a_1=0$, $a_2=5$, $\lambda_1=\lambda_2=1$ using $l_{max} = 45$. The wavefunction grows and oscillates for negative $A_1$ and $A_2$. 
Right: Density plot of $|\Psi_{HH}(\lambda_i,a_i,A_i)|^2$ for the double Gaussian profile (\ref{dbgauss}) as a function of $\lambda_1$ ($x$-axis) and $\lambda_2$ ($y$-axis) with $A_1 = -1$, $A_2 = -1/100$, $a_1 = 0$ and $a_2 = 5$.}\label{doublegauss} \end{center} \end{figure} \section{Radial deformations of flat $\mathbb{R}^3$ and pinching limits}\label{balloonsec} In this section, we introduce and study a class of $SO(3)$ preserving deformations of the flat metric on $\mathbb{R}^3$. We show that they are conformally equivalent to the flat metric on $\mathbb{R}^3$. Thus, the wavefunction can only depend on such deformations of the metric if we also turn on a radial mass $m_b(r)$. Turning on such a mass, we can then perform the symmetry transformation discussed at the end of section \ref{wf} to get a mass deformation $\hat{m}(r)$ on the flat metric on $\mathbb{R}^3$. We will pick a functional form that is related by the symmetry transformations of section \ref{wf} to a constant mass deformation on a deformed three-sphere geometry (see appendix \ref{r3s3} for details). Depending on the sign of a parameter, the deformed three-sphere will look either like a peanut or an inverse peanut, i.e. like bulbous pears inverted relative to one another and conjoined on their fatter ends. The partition function that we compute can then also be understood as the answer for the partition function on this deformed three-sphere with a constant mass deformation. The pinching limit is the limit in which the waist of the peanut-shaped geometry vanishes. We emphasize that how we perceive these deformations of the late time metric depends greatly on what we decide are natural constant time slices, since there always exists a conformal frame where the late time metric is the flat one. Ideally, it would be useful to analyze a qualitatively similar geometric deformation that would take the original geometry outside its conformal class, but in this section we restrict to conformally flat deformations since we are constrained by considering $SO(3)$ preserving deformations. Section \ref{secthree} will go beyond this restriction by considering a new conformal class. \subsection{Balloon Geometry} Consider the following class of $SO(3)$ preserving metrics defined on $\mathbb{R}^3$: \begin{equation}\label{evap} ds^2 = dr^2 + r^2 f_\zeta(r)^2 d\Omega^2_2~, \quad d\Omega^2_2 \equiv d\theta^2 + \sin^2\theta d\phi^2~, \end{equation} with $r \in [0,\infty)$. Consider a family of smooth functions $f_\zeta(r)$ with $\zeta\leq\zeta^* $ for positive $\zeta^*$ that tend to unity both at large $r$ and near $r=0$. We require that $f_{\zeta}(r)$ vanishes at some $r = r^*$ for the critical value $\zeta = \zeta^*$. We furthermore impose that \begin{equation}\label{noconical} \lim_{\zeta \to \zeta^*} \left. \frac{d^2}{dr^2} \left( r^2 f_\zeta(r)^2 \right) \right\vert_{r=r^*} = 2~. \end{equation} \begin{figure} \begin{center} { \includegraphics[height=2in]{balloon3DPlot.pdf} } \caption{The ``balloon'' deformation of $\mathbb{R}^3$, defined by (\ref{evap}) and (\ref{balloonfunction}), represented schematically for positive $\zeta$. }\label{schematic} \end{center} \end{figure} For positive finite $\zeta < \zeta^*$, the geometries described by (\ref{evap}) can be pictured as a two-sphere whose size, as a function of $r$, grows, then shrinks, and subsequently grows again. If $f_\zeta(r) = 1$ the geometry is of course nothing more than the flat metric on $\mathbb{R}^3$. 
As we approach $\zeta = \zeta^*$, the size of the two-sphere tends to vanish at $r = r^*$ eventually pinching the geometry into a warped three-sphere and a slightly deformed metric on $\mathbb{R}^3$. The choice (\ref{noconical}) ensures there are no conical singularities at the pinching point. It would be of interest in and of itself to study geometries with conical singularities (see for example \cite{Bordag:1996fw}). As a concrete example, we will take: \begin{equation}\label{balloonfunction} f_\zeta(r)^2 = 1 - \zeta \left( r^2 + \frac{1}{\zeta^*} (\gamma r)^4 \right) e^{- (r - a)^2/\lambda^2} ~. \end{equation} The parameters $a$ and $\lambda$ are chosen and $\gamma$ is tuned to obey the condition (\ref{noconical}). Though $\zeta^*$ is not an independent parameter it is useful to isolate in the expression. A schematic representation of this deformation, for positive $\zeta$, is presented in figure \ref{schematic}. \subsection{Conformal Flatness of Balloon Geometry} It is important to note that the geometry (\ref{evap}) is conformally flat. This can be shown in a straightforward fashion. Consider a coordinate transformation $r = g(x)$. It immediately follows that if the following ordinary differential equation: \begin{equation}\label{conformal_ode} x \frac{d g(x)}{dx} = g(x) f_\zeta(g(x))~, \end{equation} has a smooth solution for $g(x)$ whose derivative is positive for all $x > 0$ then our metric becomes: \begin{equation}\label{conformal} ds^2 = \left( \frac{d g(x)}{dx} \right)^2 \left( dx^2 + x^2 d\Omega_2^2 \right)~. \end{equation} Though we cannot solve the non-linear o.d.e analytically, we can easily evaluate it numerically and confirm for several cases that $g(x)$ satisfies the necessary requirements. Hence, our metric (\ref{evap}) is indeed conformally equivalent to the flat metric on $\mathbb{R}^3$. In figure \ref{gexample} we give a numerical example of this. \begin{figure} \begin{center} {\includegraphics[width=3in]{g_of_r_plot_90pct.pdf} \quad \includegraphics[width=3in]{g_prime_of_r_plot_90pct.pdf}} \caption{Plot of $g(x)$ (left) and $dg(x)/dx$ (right) as obtained by numerically solving equation \ref{conformal_ode} ($\zeta^*=1741.51$, $\zeta=9\zeta^*/10$, $\gamma=1.36612$, $a=-95$, $\lambda=30$).}\label{gexample} \end{center} \end{figure} This result is already of some interest even for the case of ordinary Einstein gravity. It informs us that upon conditioning that all other fields vanish at late times, the absolute value of the late time Hartle-Hawking wavefunction, $|\Psi_{HH}[g_{ij}]|$, is independent of any radial $SO(3)$ preserving deformation of the late time metric. Indeed, from the bulk perspective a smooth conformal transformation of the late time three-metric can be induced by a time diffeomorphism that preserves the Starobinski-Fefferman-Graham form. From a holographic perspective, the late time wavefunction is computed by the partition function of a three-dimensional conformal field theory and thus only depends on the conformal metric (recall there are no conformal anomalies in three dimensions). \subsection{Wavefunctions and balloon geometries} We now examine what happens to the functional determinant as we vary the waist parameter $\zeta$ for an example. We will also turn on a mass that would correspond to a uniform mass $m$ on the deformed three-sphere (as discussed further in appendix \ref{balloonapp}), which upon the conformal transformation discussed there becomes: \begin{equation} m_b(r) = \left(\frac{2}{1+r^2}\right)^2 m~. 
\end{equation} This is the mass deformation on the balloon geometry. The final deformation $\hat{m}(x)$, to be used in the Dunne-Kirsten formula, is obtained by performing a conformal rescaling of the balloon geometry to the flat metric on $\mathbb{R}^3$: $ds^2=dx^2+x^2\, d\Omega_2^2$. This requires a conformal rescaling of $m_b(r)$ to: \begin{equation} \hat{m}(x)=\left(\frac{dg(x)}{dx}\right)^2 \left(\frac{2}{1+r(x)^2}\right)^2 m~. \end{equation} Thus, we will study the functional determinant as a function of $m$ and $\zeta$. In figures \ref{balloonfigi} and \ref{balloonfigii} we display our numerical results. As expected, at $m = 0$, nothing changes as we vary $\zeta$ since the balloon geometries are conformally flat. However, when we turn on $m \neq 0$ the wavefunction becomes sensitive to changes in $\zeta$. Interestingly, decreasing the girth of the throat while keeping everything else fixed is favored, at least near $m = 0$. Thus, though suppressed exponentially with respect to the local maximum of the wavefunction at $m = 0$, the wavefunction does not vanish in the pinching limit. It is tempting to speculate that such pieces of the wavefunction might be connected to the fragmentation picture of \cite{Bousso:1998bn}. \begin{figure} \begin{center} \includegraphics[width=3in]{balloon_density.pdf} \end{center} \caption{Density plot of $|\Psi_{HH}(\zeta,m)|^2$ for $N=2$ as a function of the pinching parameter $\zeta$ (vertical axis) and an overall mass deformation $m$ (horizontal axis) using $l_{max}=45$, $\zeta^*=1741.51$, $\gamma=1.36612$, $a=-95$ and $\lambda=30$.} \begin{center}\label{balloonfigi} {\includegraphics[width=3in]{s3balloonall.pdf} \quad \includegraphics[width=3in]{s3balloonzoom.pdf}} \caption{Left: $|\Psi_{HH}(\zeta,m)|^2$ for $N=2$ as a function of $m$ for $\zeta=-\zeta^*/2$ (red dots), $\zeta=0$ (blue squares), $\zeta=\zeta^*/2$ (green diamonds) and $\zeta=9 \zeta^*/10$ (black triangles) using $l_{max}=45$, $\zeta^*=1741.51$, $\gamma=1.36612$, $a=-95$ and $\lambda=30$. Right: Same as left but with a different plot range.}\label{balloonfigii} \end{center} \end{figure} \section{Spherical harmonics and a conjecture}\label{secconj} In this section, we present numerical evidence that when mapping the problem back to the three-sphere (using the discussion in appendix \ref{r3s3}), all profiles give a normalizable wavefunction upon fixing their average value over the whole three-sphere. Thus, it is conceivable that the only divergence of the wavefunction occurs precisely for large and negative values of a uniform profile over the whole three-sphere \cite{Anninos:2012ft}; a single direction in an infinite dimensional configuration space! For instance, as we shall show below, by mapping the Gaussian profile (\ref{gauss}) to the three-sphere and removing the zero mode from its expansion in terms of three-sphere harmonics, the resultant profile produces a normalizable wavefunction as a function of its amplitude. \subsection{Three-sphere harmonics} We now study some examples of $SO(3)$ invariant deformations which correspond to harmonics of the three-sphere conformally mapped back to $\mathbb{R}^3$. These harmonics are the eigenfunctions of the Klein-Gordon operator on the three-sphere with metric $ds^2 = d\psi^2 + \sin^2\psi \; d\Omega_2^2$. The $k^{th}$ harmonic (independent of the $S^2$ coordinates) is given by: \begin{equation}\label{sphericalharmonic} F_k(\psi) = c_k ~\text{csc}(\psi) ~\text{sin}[\,(1+k) \psi \,]~, \quad k = 0,1,2,\ldots~. 
\end{equation} As explained in appendix \ref{r3s3}, to evaluate the partition function for this deformation using the method of Dunne and Kirsten we must first perform the coordinate transformation $\psi(r) = 2 \cot^{-1}(r^{-1})$, and then scale the deformation by the inverse of the conformal factor that maps the three-sphere metric to the metric on $\mathbb{R}^3$. The final radial deformation which is to be used in the Dunne-Kirsten formula is: \begin{equation}\label{sphericalharmonicR3} \hat m_k(r) = c_k \frac{2 ~\text{sin}[\,2(1+k) \text{tan}^{-1}r\,]}{ \left(r+r^3\right)}~. \end{equation} Taking $k=0$ corresponds to the zero mode which has been previously studied in \cite{Anninos:2012ft} and was found to be oscillatory and divergent as the coefficient $c_0$ goes to large negative values. In figure \ref{harmonicsPlots} we plot the partition functions for higher harmonics as a function of the coefficient $c_k$. We notice that they are all well-behaved and normalizable, at least in the range we have explored. This motivates us to consider deformations which are linear combinations of spherical harmonics. Having looked at a deformation that is the linear combination of the zero mode with the first harmonic, as well as a deformation that is the linear combination of the first harmonic with the second harmonic, we notice that the partition function is not divergent so long as the coefficient of the zero mode is kept fixed. \begin{figure}[p!] \begin{center} { \includegraphics[height=1.7in]{Harmonic1.pdf} \;\; \quad \includegraphics[height=1.7in]{logHarmonic1_s.pdf} } \caption{Left: Plot of $|\Psi_{HH} (c_1) |^2$ for $N=2$ for the first harmonic mapped to $\mathbb{R}^3$ given in (\ref{sphericalharmonicR3}) using $l_{max} = 45$. Right: Plot of $\log |\Psi_{HH} (c_1) |^2$ for $N=2$ using $l_{max} = 45$.\newline\newline}\label{harmonicsPlotsSingle} { \includegraphics[height=1.27in]{logHarmonic0.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic1.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic2.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic3.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic4.pdf} \;\;\includegraphics[height=1.27in]{logHarmonic5.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic6.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic7.pdf} \;\; \includegraphics[height=1.27in]{logHarmonic8.pdf} } \caption{Plot of $\log |\Psi_{HH} (c_k) |$ for $N=2$ for the first nine spherical harmonics mapped to $\mathbb{R}^3$ given in (\ref{sphericalharmonicR3}) using $l_{max} = 45$. Notice that only the zeroth harmonic is non-normalizable in the negative $c_0$ direction.}\label{harmonicsPlots} \end{center} \end{figure} Postponing a more systematic study to the future, here we simply consider a modified version of the single Gaussian deformation given in (\ref{gauss}). Previously, we had found that as the overall coefficient of the profile becomes large and negative, the partition function diverges. The new profile we will study is obtained by mapping the Gaussian profile to the three-sphere, subtracting off its zero mode, and mapping it back to $\mathbb{R}^3$. Notice that the Gaussian profile mapped to the three-sphere is constructed from an infinite number of harmonics, and here we are subtracting the piece that seems problematic from our analysis of harmonics and finite linear combinations thereof. 
Since $F_0(\psi)=c_0$ is simply a constant, the condition to be met is: \begin{equation}\label{zeromodecondition} \int_0^\pi d\psi \left(\frac{r(\psi)^2+1}{2}\right)^2 \hat m(r(\psi))~ \sin^2 \psi = 0~, \end{equation} where $r(\psi)=\text{tan}(\frac{\psi}{2})$ and $\hat m(r)=A e^{-r^2}-4 a_1/(r^2+1)^2$. In this case the integral can be done explicitly and we can solve for the coefficient $a_1$ analytically. The final form of the single Gaussian radial deformation orthogonal to the zero mode of the three-sphere becomes: \begin{equation}\label{normalizedGaussian} \hat m(r)=A ~e^{-r^2}-\frac{8 A \left(1-e \sqrt{\pi }~ \text{Erfc}[1]\right)}{\sqrt{\pi } \left(1+r^2\right)^2}~. \end{equation} \begin{figure} \begin{center} { \includegraphics[height=1.7in]{normalizedGaussian.pdf} \includegraphics[height=1.7in]{normalizedGaussianLog.pdf}} \caption{$|\Psi_{HH}(A)|^2$ (left) and $\text{log}|\Psi_{HH}(A)|$ (right) for $N=2$ as a function of $A$, the overall size of the radial deformation in (\ref{normalizedGaussian}) which is constructed to be orthogonal to the zero mode of the three-sphere ($l_{max}=45$).}\label{normGauss} \end{center} \end{figure} We plot the functional determinant as a function of $A$ in figure \ref{normGauss}. Interestingly, the partition function is once again well-behaved for large values of $A$. An analogous analysis for the balloon geometries, where we fix the zero-mode on the conformally related three-sphere, also results in the boundedness of the partition function in the $\zeta$ direction ($\zeta$ defined in (\ref{balloonfunction})). The above results motivate the conjecture: \newline\newline {\it The partition function of any $SO(3)$ symmetric ``radial'' deformation for which the three-sphere zero mode harmonic is fixed is bounded}. \newline Notice that the three-sphere is chosen as the geometry in the conformal class on which to fix the uniform profile. For example, in the case of the peanut geometries one can show that fixing the uniform profile on the peanut is \emph{not} sufficient to ensure that the partition function is bounded (though fixing some non-uniform profile would suffice). This is simply because keeping the uniform mass profile fixed to some negative value while taking $\zeta$ large and negative, which corresponds to the conformally related sphere getting fatter at the waist, implies that the uniform profile on the sphere is getting large and negative. Consistent with the rest of our observations, we see that the partition function is unbounded in this direction. Furthermore, the next section will provide evidence for a more general conjecture that would extend beyond the conformal class of the sphere. \section{Three-sphere squashed and massed}\label{secthree} In this section, we would like to briefly revisit and extend some of the observations of \cite{Anninos:2012ft} for a constant mass deformation on an $S^3$. There, it was observed that in the $\hat{\sigma}$-basis the wavefunction on an $S^3$ with a uniform mass deformation oscillated and diverged at large negative $m_{S^3}$. We explore in this section a new direction, namely the squashing parameter of the round metric on the three-sphere (in the presence of a non-zero uniform scalar profile) and its effect on the zeroes and maxima. Unlike the previous $SO(3)$ preserving deformations which were inhomogeneous, this $SO(3) \times U(1)$ preserving deformation is homogeneous yet anisotropic. Furthermore, the squashed three-sphere is not conformal to the ordinary round three-sphere. 
In fact, squashed spheres with different values of the squashing parameter belong to distinct conformal classes. Part of our motivation is to provide further evidence that the zeroes of the wavefunction are extended and that the local maxima of the wavefunction (other than the pure de Sitter one) will no longer necessarily peak about homogeneous and isotropic geometries. Additionally, and in the same spirit as the observations made in section \ref{secconj}, we find that upon fixing the value of the uniform profile over the whole squashed three-sphere the wavefunction is normalizable in the squashing direction. \subsection{Squashed and massed} Consider turning on a constant mass $m_{S^3}$ for the free $Sp(N)$ model on the round metric on an $S^3$ whose radius $a$ is fixed to one unless otherwise specified. The partition function is given by \cite{Anninos:2012ft}: \begin{equation}\label{s3z} N^{-1} \log Z_{free} [m_{S^3}] = \frac{1}{16} \left({\log} \; 4-\frac{3 {\zeta}(3)}{\pi ^2}\right) -\frac{\pi}{8} \int^{m_{S^3}}_0 d \sigma \sqrt{1 - 4\sigma} \cot \left(\frac{\pi}{2}\sqrt{1-4\sigma} \right)~. \end{equation} The analyticity of $Z_{free}[m_{S^3}]$ in the complex $m_{S^3}$-plane is ensured by the uniform convergence of the (regularized) infinite product of analytic eigenvalues which defines it. For physical applications we restrict to real $m_{S^3}$ and note that this implies that the Taylor series expanded about any $m_{S^3} = m^0_{S^3}$ converges to the partition function above. Furthermore, the partition function has a zero if and only if one of the product eigenvalues in the functional determinant vanishes, which can only happen for $m_{S^3} < 0$. As shown in figure \ref{sigmafig}, this wavefunction grows exponentially and oscillates in the negative $m_{S^3}$ direction.\footnote{Divergences of the Hartle-Hawking wavefunctional have been discussed in other circumstances such as Einstein gravity coupled to a scalar field with a quadratic scalar potential and vanishing cosmological constant \cite{Hawking:1983hj} or the wavefunction of dS$_3$ on a toroidal boundary \cite{Castro:2012gc}. Their physical interpretation remains to be understood. For example, it might be a result of very sharp conditioning or an indication of an instability.} It is worth noting that the $m_{S^3}$ dependent part of the phase of the wavefunction vanishes. The real part of $\log Z_{free}/N$ goes as $|m_{S^3}|$ for large negative $m_{S^3}$ and $-(m_{S^3})^{3/2}$ for large positive $m_{S^3}$. \begin{figure} \begin{center} {\includegraphics{psihhsigma.pdf}} \end{center} \caption{Plot of $|\Psi_{HH}[{m_{S^3}}]|^2$ given by expression (\ref{s3z}) for $N = 2$.}\label{sigmafig} \end{figure} It is worth studying what happens to the zeroes and local maxima of $Z_{free}$ in the presence of an additional deformation. A computationally convenient deformation is to squash the round metric on $S^3$ into that of a squashed sphere, which is a homogeneous yet anisotropic geometry. In this sense, this deformation is complementary to the inhomogeneous deformations we have been studying so far. We review the metric and eigenvalues of the squashed sphere with squashing parameter $\rho$ in appendix \ref{squashed}. Our method of regularization is a straightforward extension of heat kernel techniques used in section 3.2 of \cite{Anninos:2012ft}, and details can be found therein. 
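Before turning to the squashed case, we note that the profile in figure \ref{sigmafig} follows directly from (\ref{s3z}) by numerical quadrature. A minimal sketch is given below; the numerical choices are illustrative and are not those of the original computation. Note that the integrand has simple poles at $m_{S^3} = -3/4, -15/4, \ldots$, precisely where an eigenvalue $n(n+2)+3/4+m_{S^3}$ of the conformally coupled operator on the unit $S^3$ vanishes and $Z_{free}$ has a zero; the naive quadrature below should therefore only be trusted for $m_{S^3}$ above the first zero, and a principal value treatment is required beyond it.
\begin{verbatim}
# Sketch: N^{-1} log Z_free[m] on the round S^3, evaluated by direct quadrature
# of the formula above.  Only reliable for m above the first zero at m = -3/4;
# the integrand has simple poles at m = -3/4, -15/4, ... (zeros of Z_free).
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

C0 = (np.log(4.0) - 3.0 * zeta(3) / np.pi**2) / 16.0

def integrand(s):
    u = np.sqrt(complex(1.0 - 4.0 * s))
    return (u / np.tan(0.5 * np.pi * u)).real   # real for real s

def logZ_over_N(m):
    return C0 - 0.125 * np.pi * quad(integrand, 0.0, m)[0]
\end{verbatim}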
In figure \ref{sqM} we present a plot of the wavefunction as a function of the mass $m_{S^3}$ and the squashing parameter $\rho$ (the round metric on $S^3$ occurs at $\rho = 0$). We find that the local maxima are in general pushed away from $\rho = 0$ and the zeroes of the wavefunction are extended to enclose the local maxima. The fact that the zeroes are extended (codimension one in the $(\rho, m_{S^3})$ plane) is perhaps not surprising given that the zeroes in figure \ref{sigmafig} arise from $\Psi_{HH}[{m_{S^3}}]$ (which is purely real) changing sign. This feature should not disappear, at least for small perturbations to the equation which determines $\Psi_{HH}[{m_{S^3}}]$. Furthermore, the maxima of figure \ref{sigmafig} are pushed even higher when squashing is allowed. This can be seen in the right hand side of figure \ref{sqM}, where one notices that the second local maximum of $\Psi_{HH}[{m_{S^3}}]$ at $\rho = 0$ is pushed further up to some $\rho >0$. This observation strongly suggests that away from the origin, and under such extreme conditioning of the late time profiles of the bulk fields, the wavefunction peaks in regions where the metric and perhaps even the higher spin fields are highly excited. On the other hand, the perturbative de Sitter saddle centered at the origin remains a true local maximum. \begin{figure} \begin{center} {\includegraphics[width=2.8in]{SqM.pdf} \; \includegraphics[width=3.5in]{SquashedS3peak2.pdf}} \caption{Left: Density plot of $|\Psi_{HH}[\rho,m_{S^3}]|^2$ for $N=2$. The fainter peak centered at the origin reproduces perturbation theory in the empty de Sitter vacuum. Horizontal lines are lines of constant $m_{S^3}$ and vertical lines are lines of constant $\rho$. Right: Plot of $|\Psi_{HH}[\rho,-2.25]|^2$ for $N=2$. Notice that it peaks away from $\rho = 0$.}\label{sqM} \end{center} \end{figure} \begin{figure} \begin{center} {\includegraphics[width=3.1in]{sqS3m22m77b.pdf} \quad \includegraphics[width=3.1in]{sqs3rho4p5.pdf} } \caption{Left: Density plot of $|\Psi_{HH}[\rho,m_{S^3}]|^2$ for $N=2$ for a slightly larger range. Notice that the local de Sitter maximum visible in figure \ref{sqM} is already too faint to be seen. Horizontal lines are lines of constant $m_{S^3}$ and vertical lines are lines of constant $\rho$. Right: Plot of $2 \log |\Psi_{HH}[\rho,-4.5]|$ for $N=2$.}\label{sqMfull} \end{center} \end{figure} \section{Basis change, critical $Sp(N)$ model, double trace deformations} \label{secthreehalf} In this section we further discuss the transformation to the field basis and clarify the role of double trace deformations. We would like to comment on the transform to the bulk field basis, which gives us the wavefunctional as a functional of $\nu \equiv \sqrt{N} \widetilde{\sigma}$ (with $\nu$ defined in (\ref{fg}) and $\widetilde{\sigma}$ being related to the source of the single-trace operator dual to the bulk scalar), in the large $N$ limit. As discussed in \cite{Anninos:2012ft}, one has to consider the free theory deformed by a relevant double trace operator $f (\chi \cdot \chi)^2/(8N)$. We also keep a source $- i f \widetilde{\sigma}$ turned on for the single-trace $\chi \cdot \chi$ operator. The parameter $f \in \mathbb{C}$ has units of energy. 
Performing a Hubbard-Stratonovich transformation by introducing an auxiliary scalar field $\sigma$ we find: \begin{equation} S^{(f)} = \frac{1}{2} \int d^3x \sqrt{g} \left( \Omega_{AB} \left( \partial_i \chi^A \partial_j \chi^B g^{ij} - i f \widetilde{\sigma} \chi^A \chi^B\right) - \left( \frac{N \sigma^2}{f} - \sigma \; \Omega_{AB} \chi^A \chi^B \right) \right)~. \end{equation} Integrating out the $\chi^A$ fields, the partition function becomes: \begin{equation}\label{critz} Z^{(f)}_{crit} [\widetilde{\sigma}] = e^{-\frac{N f}{2} \int d^3 x \sqrt{g} \; \widetilde{\sigma}^2 }\int \mathcal{D} \sigma \; \exp \left[ N \int d^3 x \sqrt{g} \left( \frac{ \sigma^2}{2 f} + i \widetilde{\sigma} {\sigma} \right) \right] Z_{free} [\sigma]~, \end{equation} where we have transformed variables $\sigma \to ( \sigma - i f \widetilde{\sigma})$. Having interpreted $Z_{free}[\sigma] = \langle \sigma | E \rangle = \Psi_{HH}[\sigma]$, we see that the above expression is a Fourier type transform of the $\hat{\sigma}$-basis wavefunction. In order for this to become the actual basis changing transform (\ref{tranqminv}) to the eigenbasis of the field operator, the constant $f$ must be taken to infinity. The $f \to \infty$ limit (where we are keeping the size of the three-sphere fixed) corresponds to sending the ultraviolet cutoff (of the infrared fixed point theory) to infinity, in the same sense as \cite{Wilson:1972cf}, or roughly speaking it corresponds to the late time limit in the bulk.\footnote{Another way to think about this is keeping $f$ fixed and taking the large size limit of the three-sphere that the CFT lives on. This is because the dimensionless quantity is $|f| a$ where $a$ is the size of the sphere. Indeed at late times the three-sphere grows large.} We begin by performing a perturbative analysis for infinitesimal deformations of the free $Sp(N)$ theory at large $N$ on an $\mathbb{R}^3$. \subsection{Perturbative analysis on $\mathbb{R}^3$} For the sake of simplicity, we will put the theory on the flat metric on $\mathbb{R}^3$, akin to studying perturbations in a small piece of $\mathcal{I}^+$.\footnote{The parallel story in anti-de Sitter space has been studied extensively \cite{Witten:2001ua,Mueck:2002gm,Gubser:2002zh,Gubser:2002vv}. From the bulk perspective in planar AdS$_4$: $ds^2 = \ell_A^2 (dz^2 + d\vec{x}^2)/z^2$, at least perturbatively about the empty AdS$_4$ vacuum, the finite $f_A \in \mathbb{R}$ double trace deformed theory computes correlation functions of the bulk scalar quantized with mixed boundary conditions. Near the boundary $z \to 0$ of AdS$_4$, the bulk scalar with mass $m^2\ell_{A}^2 = -2$ behaves as $\phi(z,\vec{x}) \sim \alpha(\vec{x}) z^2 + \beta(\vec{x}) z$ with $\alpha(\vec{x}) = f_A \beta(\vec{x})$. This boundary condition is different from the conformally invariant one which sets either $\alpha(\vec{x})$ or $\beta(\vec{x})$ to zero, corresponding to the free or critical $O(N)$ models respectively. In de Sitter space we would consider a wavefunction of a scalar of mass $m^2\ell^2 = +2$ computed by imposing future boundary conditions \cite{Maldacena:2002vr,Harlow:2011ke,Anninos:2011jp} $\phi(\eta,\vec{x}) \sim \alpha(\vec{x}) \eta^2 + \beta(\vec{x}) \eta$ (with $\alpha(\vec{x}) = f_D \beta(\vec{x})$) and Bunch-Davies conditions $\phi \sim e^{i k \eta}$ for $k |\eta| \gg 1$. 
At the level of perturbation theory this is computed by continuing the Euclidean AdS$_4$ partition function by $z \to -i\eta$, $\ell_{A} \to i \ell$ and $f _A \to i f_D$ (where $f_D \in \mathbb{R}$ for the `normalizable' profile of the scalar field to be real).} For reasons that will be clear momentarily we choose $f= i |f|$ to be pure imaginary. We are interested in setting up a perturbative expansion about the $\sigma \sim 0$ Gaussian peak of $Z_{free}[\sigma]$. From (\ref{critz}) we can compute the two point function of $\mathcal{O} \equiv \chi \cdot \chi$ at large $N$ in the double trace deformed $Sp(N)$ theory on an $\mathbb{R}^3$. We do this by taking two variational derivatives of the logarithm of (\ref{critz}) with respect to $\widetilde{\sigma}(x^i)$ and evaluating at $\widetilde{\sigma}(x^i)=0$: \begin{equation} \langle \mathcal{O}_{\vec{k}} \mathcal{O}_{-\vec{k}} \rangle_f = {N} \left( \frac{|f|^2 G(k)}{N+2 i |f| G(k)} \right)~, \quad G(k) \equiv \langle \mathcal{O}_{\vec{k}} \mathcal{O}_{-\vec{k}} \rangle_{f=0} = - \frac{N}{k}~, \end{equation} where $k$ is the magnitude of the momentum. For $k \gg |f|$ this becomes the two-point function of the free $Sp(N)$ model, whereas for $k \ll |f|$ this becomes the two-point function of the critical $Sp(N)$ model. Expanding for large $|f|$ we find: \begin{equation}\label{2pf} \langle \mathcal{O}_{\vec{k}} \mathcal{O}_{-\vec{k}} \rangle_f = - i \frac{N |f|}{2} - \frac{N}{4} k + \ldots \end{equation} Notice that the real part of the two-point function (\ref{2pf}) is negative. Recalling (\ref{pertdscft}), this is the Gaussian suppression of the Hartle-Hawking wavefunction near the de Sitter vacuum. Also, the local momentum independent term has become a phase for pure imaginary $f$. We can compare this to the bulk Hartle-Hawking wavefunction for a free $m^2\ell^2 = +2$ scalar in planar coordinates computed in (\ref{planarwf}) of the appendix. This allows us to (roughly) identify $|f|^{-1}$ with the late time cutoff $|\eta_c|$ at large $|f|$. In a similar way we can compute the rest of the perturbative correlators of the critical $Sp(N)$ theory \cite{LeClair:2007iy}. Beyond such perturbative analyses, we must resort to a saddle point approximation which we now proceed to. \subsection{Large $N$ saddles for uniform $S^3$ profiles} We now put the theory on the round metric on $S^3$. At large $N$, we can evaluate (\ref{critz}) by solving the saddle point equation (for $\sigma = \sigma(\widetilde{\sigma})$ and $\widetilde{\sigma}$ uniform over the whole three-sphere): \begin{equation}\label{saddle} \frac{16\pi \sigma}{f} + 16 \pi i \widetilde{\sigma} = \sqrt{1 - 4 \sigma} \cot \left( \frac{\pi}{2} \sqrt{1 - 4 \sigma} \right)~. \end{equation} For a given solution $\Sigma_i$ of (\ref{saddle}), we can evaluate $Z_{crit}[\widetilde{\sigma},\Sigma_i]$. For example, there is a solution $\Sigma_0$ where $\sigma \sim 0$ when $\widetilde{\sigma} \sim 0$ (with $f \to \infty$). It is the piece of the wavefunctional evaluated from this saddle near $\widetilde{\sigma} = 0$ that reproduces the dS invariant perturbation theory in the Bunch-Davies state (about the pure de Sitter vacuum). In figure \ref{saddlesfig} we plot $Z_{crit}$ for the first few $\Sigma_i$ at large $f = i |f|$. These have $\widetilde{\sigma} = 0$ near the subsequent zeroes of $\cot \left( \frac{\pi}{2} \sqrt{1 - 4 \sigma} \right)$ in (\ref{saddle}). 
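To illustrate how the saddles $\Sigma_i$ can be located in practice, the sketch below (with seeds, tolerances and the value of $|f|$ chosen purely for illustration) tracks solutions of (\ref{saddle}) for $f = i|f|$ as $\widetilde{\sigma}$ is switched on:
\begin{verbatim}
# Sketch: locate complex saddles sigma(sigma_t) of the saddle point equation
#   16 pi sigma / f + 16 pi i sigma_t = sqrt(1-4 sigma) cot(pi sqrt(1-4 sigma)/2)
# for f = i|f|.  At sigma_t = 0 the saddles sit at sigma = 0, -2, -6, ...
# (zeros of the cotangent).  All numerical choices here are illustrative.
import numpy as np
from scipy.optimize import root

def residual(sigma, sigma_t, f):
    u = np.sqrt(1.0 - 4.0 * sigma + 0j)
    return (u / np.tan(0.5 * np.pi * u)
            - 16.0 * np.pi * sigma / f - 16.0j * np.pi * sigma_t)

def find_saddle(sigma_t, seed, f=1e6j):
    def F(x):
        val = residual(x[0] + 1j * x[1], sigma_t, f)
        return [val.real, val.imag]
    sol = root(F, [seed.real, seed.imag], tol=1e-12)
    return sol.x[0] + 1j * sol.x[1]

# Seeds for Sigma_0, Sigma_1, Sigma_2 at sigma_t = 0:
seeds = [0.0 + 0j, -2.0 + 0j, -6.0 + 0j]
saddles = [find_saddle(0.01, s) for s in seeds]
\end{verbatim}
Evaluating $Z_{crit}[\widetilde{\sigma},\Sigma_i]$ on a given saddle additionally requires the on-shell value of the exponent in (\ref{critz}) together with $Z_{free}[\Sigma_i]$, which we do not sketch here.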
Notice that for all large $N$ saddles $Z_{crit}[\widetilde{\sigma}]$ is peaked at $\widetilde{\sigma} = 0$ but the saddles coming from the more negative $\sigma$ peaks contribute more near $\widetilde{\sigma} = 0$. Away from the large $N$ limit we must compute the integral in (\ref{critz}) without resorting to a saddle point approximation. If we restrict to uniform $\sigma$ and $\widetilde{\sigma}$, we note that as we increase $f=i|f|$ more and more of the growing negative $\sigma$ peaks in $Z_{free}[\sigma]$ contribute to the integral before it is cutoff by the rapid oscillations due to the $i N \sigma^2/|f|$ piece. One can check that the integral grows for large and negative $\widetilde{\sigma}$ upon fixing $|f|$. \begin{figure} \begin{center} {\includegraphics[width=2.1in]{psihh1.pdf} \includegraphics[width=2.1in]{psihh2.pdf} \includegraphics[width=2.1in]{psihh3.pdf}} \caption{Plots of the $N$th root of $|\Psi_{HH}[\widetilde{\sigma},\Sigma_0]|$, $|\Psi_{HH}[\widetilde{\sigma},\Sigma_1]|$, $|\Psi_{HH}[\widetilde{\sigma},\Sigma_2]|$ at large $f$ from left to right. Notice that the higher saddles dominate near $\widetilde{\sigma}=0$ but fall off faster for large $\widetilde{\sigma}$.}\label{saddlesfig} \end{center} \end{figure} Some of the saddles in the large $N$ limit should correspond to classical (complex) bulk solutions with a uniform late time profile of the scalar on the round metric of the three-sphere. Some of these solutions labelled by a continuous parameter, which only involve the bulk metric and scalar, were found by Sezgin and Sundell in \cite{Sezgin:2005hf}. It is worth noting that though it may sound confusing that there are bulk solutions that have only the metric and scalar turned on (since all higher spin fields interact on an equal footing), this is natural from the CFT since turning on an $SO(4)$ symmetric source for $J^{(0)} = \chi \cdot \chi$ need not source the traceless higher spin currents due to symmetry reasons. The metric is non-vanishing since in effect we have also turned on a source for it by having the round metric on $S^3$ at the boundary. At finite $N$, all these saddles mix quantum mechanically. Each $\Psi_{HH}[\widetilde{\sigma},\Sigma_i]$ comes with a phase, so one should be careful when summing contributions from different saddles. Finally, it would be extremely interesting to understand the Lorentzian cosmologies associated to the wavefunction using the ideas developed in \cite{Hartle:2008ng,Hartle:2007gi} (see also \cite{Sarangi:2005cs}). In order to have a classical cosmology (or an ensemble of such cosmologies) we must ensure that the wavefunction takes a WKB form with a phase oscillating much more rapidly than its absolute value. At least in the large $N$ limit, and for large $|f|$, this is ensured by the first term in (\ref{critz}). \subsection{Double trace deformations as convolutions} We can also consider keeping $f$ finite and real. This defines a double-trace deformed field theory, in and of its own right, whose partition function $Z^{(f)} [{\sigma}']$ can be computed in the large $N$ limit. We also keep a uniform (on $S^3$) source ${\sigma}'$ turned on for the single-trace $\chi \cdot \chi$ operator. This partition function is no longer computing overlaps between the Bunch-Davies vacuum and some late time field configuration which is an eigenstate of the field operator $\hat{\phi}$. 
Given that $Z_{free}[\sigma] = \langle \sigma | E \rangle = \Psi_{HH}[\sigma]$, we see that $Z^{(f)} [{\sigma}']$ is computing instead a convolution of the wavefunction in the $\hat{\sigma}$-basis:\footnote{Though we will not do so here, we can study higher multitrace deformations of the $Sp(N)$ theory and arrive at similar expressions. Thus, the fact that multi-trace operators are irrelevant seems far less threatening than having low-spin, single-trace operators which are irrelevant and correspond to bulk tachyons.} \begin{equation} Z^{(f)} [{\sigma}'] = \int \mathcal{D} \sigma \exp \left[ N \int d\Omega_3 \; \frac{\left({\sigma}' - \sigma \right)^2}{2f} \right] \Psi_{HH}[\sigma]~. \end{equation} We can also view $Z^{(f)} [{\sigma}']$ as computing the overlap of the Hartle-Hawking state with the state: \begin{equation} | f \rangle \equiv \int \mathcal{D} \sigma \exp \left[ N \int d\Omega_3 \; \frac{\left({\sigma}' - \sigma \right)^2}{2f} \right] | \sigma \rangle~. \end{equation} Notice that though the integral itself is convergent for finite $f$, the resulting function $Z^{(f)} [{\sigma}']$ will grow exponentially at large negative ${\sigma}'$. One could also consider more generally a complex valued $f \in \mathbb{C}$ which would correspond to a kind of windowed Fourier transform. \subsection{Euclidean AdS$_4$ with an $S^3$ boundary} The situation can be contrasted with the case of Euclidean AdS$_4$ (with an $S^3$ boundary) \cite{Gubser:2002vv}. The partition function $Z^{O(N)}_{crit}[{\sigma}_A]$ of the critical $O(N)$ model on an $S^3$, dual to Euclidean AdS$_4$ in higher spin gravity, is obtained again from the free $O(N)$ model by a double trace deformation in the limit $f_A \to + \infty$: \begin{equation}\label{critzon} Z^{O(N)}_{crit} [{\sigma}_A] = \lim_{f_A \to\infty} e^{\frac{N f_A}{2} \int d\Omega_3 {\sigma}_A^2 }\int \mathcal{D} \sigma \; \exp \left[ N \int d\Omega_3 \left( {\frac{ \sigma^2}{2 f_A} - {{\sigma}_A {\sigma}} }\right) \right] Z^{O(N)}_{free} [\sigma]~, \quad f_A \in \mathbb{R}~. \end{equation} The first term in front of the integral is local in the limit $f_A \to +\infty$ and we can remove it by adding a counterterm. To ensure convergence of the integral we must choose an appropriate contour, which in this case is given by $\sigma$ running along the imaginary axis (see for example \cite{Anninos:2012ft}). $Z^{O(N)}_{free}[\sigma]$ is the partition function of the free $O(N)$ model and is related to the free $Sp(N)$ partition function by $N \to -N$. Note that $Z^{O(N)}_{free}$ has poles precisely at the values where the wavefunction in the $\hat{\sigma}$-basis (\ref{s3z}) vanishes. At large $N$ the integral (\ref{critzon}) can be evaluated by a saddle point approximation. In figure \ref{oncrit} we display $Z^{O(N)}_{crit} [{\sigma}_A]$ for the case of a uniform source $f_A {\sigma}_A$ over the whole $S^3$. \begin{figure} \begin{center} {\includegraphics[width=3.3in]{critON.pdf} } \caption{Plot of the $N^{th}$ root of the finite part of $Z^{O(N)}_{crit}[{\sigma}_A]$ for a uniform source ${\sigma}_A$ over the whole three-sphere. We have normalized such that $Z^{O(N)}_{crit}[{\sigma}_A] = 1$ at ${\sigma}_A=0$.}\label{oncrit} \end{center} \end{figure} In principle, this plot should be reproducible by computing the regularized on-shell Vasiliev action on the asymptotically Euclidean AdS$_4$ Sezgin-Sundell solution \cite{Sezgin:2005hf}. 
\section{Extensions of higher spin de Sitter holography?}\label{secsix} So far, our discussion was restricted to the minimal bosonic higher spin theory. A natural question that arises, particularly given possible interpretational issues of the wavefunction such as its (non)-normalizability, is whether this theory is part of a larger framework. We briefly discuss possible extensions of higher spin de Sitter holography, inspired by the analogous situation in anti-de Sitter space. \subsection{AdS$_4$} There exist parity violating deformations of the bulk equations of motion which deform the original Vasiliev equations (in anti-de Sitter space) to a one-parameter family \cite{Vasiliev:1992av,Vasiliev:1995dn,Vasiliev:1999ba}. It was proposed that the dual description is given by coupling the theory to a Chern-Simons theory with level $k$, at least for simple enough topologies. The new parameter is given by the 't Hooft coupling $\lambda = N/k$ which is small when the dual is higher spin gravity. Such a theory was shown in the infinite $N$ limit with fixed small $\lambda$ \cite{Aharony:2011jz,Chang:2012kt,Giombi:2011kc} to have a spectrum of single-trace operators which is precisely that of the free $U(N)$ model, namely a tower of higher spin currents which are conserved (up to $\mathcal{O}(1/\sqrt{N})$ corrections), in accordance with a bulk higher spin theory. As discussed in \cite{Chang:2012kt,Giombi:2012ms} in the context of anti-de Sitter higher spin gravity, one can also endow the bulk higher spin fields with $U(M)$ Chan-Paton factors, such that the fields all lie in the adjoint representation of the $U(M)$. In the dual theory, this corresponds to adding a $U(M)$ flavor symmetry to the $U(N)$ model. If the flavor symmetry is weakly gauged, which can be achieved through the procedure described in \cite{Witten:2003ya}, then one can form single-trace operators of the form $\text{Tr} \left( A B A B \ldots A B\right)$. The fields $A$ and $B$ transform in the $(\square,\overline{\square})$ and $(\overline{\square},\square)$ of the $U(N) \times U(M)$ gauge group respectively. As $M/N$ increases the `glue' between each $\text{Tr} AB$ (which are dual to higher spin fields in the bulk) becomes stronger. From the bulk point of view, it was suggested that the higher spin fields, now endowed with additional $U(M)$ interactions, form bound states with binding energy that increases as we crank up $M/N$. These bound states would correspond to the $\text{Tr} \left( A B A B \ldots A B\right)$ operators. An appropriately supersymmetrized version of this story \cite{Chang:2012kt,Giombi:2012ms} was conjectured to connect the higher spin (supersymmetric) theory to the ABJ model \cite{Aharony:2008gk} where such long string operators are dual to bulk strings. \subsection{dS$_4$} It is convenient in our discussion for the bulk to contain a spin-one gauge field in its spectrum. We thus consider the non-minimal higher spin model with even and odd spins whose dual (at least at the level of correlation functions on $\mathbb{R}^3$) is a free/critical $U(N)$ theory with $N$ anti-commuting scalars transforming as $U(N)$ vectors. We refer to this as the $\widetilde{U}(N)$ model. Given that the parity violating deformations of the Vasiliev equations in AdS retain the original field content and reality conditions, it seems natural that they are present for the de Sitter theory as well. 
Thus, one might consider that such theories are dual to parity violating extensions of the $\widetilde{U}(N)$ theory obtained by adding a level $k$ Chern-Simons term to the free anti-commuting complex scalars. The action of this theory is \begin{equation}\label{csds} S_{CSM} = - \frac{i k}{4\pi} \int d^3 x \left( \epsilon_{ijk} \; A^a_i \partial_{j} A^a_{k} + \frac{1}{3}\epsilon_{ijk} f^{abc} A^a_i A^b_j A^c_k \right) + \int d^3 x \; \nabla_i \chi^{A} \nabla^i \bar{\chi}^{\bar{A}}~. \end{equation} The fields $A_i^a$ are (possibly complexified) $U(N)$ gauge fields, the $\chi^{A}$ fields are anti-commuting complex scalars transforming in the fundamental of the $U(N)$ gauge symmetry, and the $\nabla_i$ derivative is covariant with respect to $U(N)$ gauge transformations. Though classically this theory is conformally invariant, this need not be the case when we include loops, as the $\beta$-function might be non-vanishing. The ordinary $U(N)$ Chern-Simons theory coupled to a vector of charged scalars has two exactly marginal deformations at infinite $N$ as in \cite{Aharony:2011jz,Maldacena:2011jn,Maldacena:2012sf}. These are the 't Hooft coupling $\lambda \equiv N/k$ and the coupling constant $\lambda_6$ of the triple trace interaction $(\chi^A \bar{\chi}^{\bar{A}})^3$. Though the Chern-Simons-$\widetilde{U}(N)$ theory (\ref{csds}) is non-unitary, it is conceivable that it also has a vanishing $\beta$-function at large $N$ \cite{workinprogress}. This theory would also have an unchanged spectrum of single-trace operators at large $N$ and small $\lambda$ by the same arguments as those in \cite{Aharony:2011jz,Girardello:2002pp}.\footnote{At finite $k$, the partition function on a non-trivial topology such as $\mathcal{M} = S^1 \times \text{Riemm}_g$ will grow as \cite{Banerjee:2012gh} (see also \cite{Radicevic:2012in}): $Z_{CFT} \sim \exp\left({(g-1)N^2 \log{k}+\mathcal{O}(N)}\right)$ where $g$ is the genus of $\mathcal{M}$. This drastically favors higher topologies if interpreted as a probability, but it is unclear whether and how one should compare topologies and what the correct normalization for $\Psi_{HH}$ is. It is interesting that at finite $k$ one might also encounter monopole operators. In ABJM, such operators are dual to D0-branes in the bulk. It is unclear how they should be understood in the context of de Sitter space and higher spin gravity. For instance, they have a conformal weight that goes like $k$, which might suggest taking $k$ to be complex or imaginary \cite{Witten:2010cx} in the de Sitter case. It is also worth noting that the potentially infinite wealth of topological data at $\mathcal{I}^+$ might be at odds with the finiteness of de Sitter entropy \cite{Banks:2005bm}. In Einstein gravity adding too much topology at $\mathcal{I}^+$ often results in bulk singularities \cite{Andersson:2002nr}. The issue of topology in the context of dS/CFT is further discussed in \cite{foam}.} One can also consider adding $U(M)$ Chan-Paton factors to the bulk higher spin de Sitter theory. This merely requires tensoring the $*$ algebra with that of $M\times M$ matrices. Once again, this will not affect the reality conditions on the higher spin fields and they will all transform in the adjoint of the $U(M)$. This corresponds to adding a $U(M)$ flavor symmetry to the $\widetilde{U}(N)$ vector model, which can be weakly gauged (see appendix \ref{u1appendix} for a discussion). 
The single-trace operators $\text{Tr} \left( A B A B \ldots A B\right)$ have increasingly large real conformal weights. From the point of view of dS/CFT this would imply that the bulk theory has a tower of tachyonic bulk fields since the conformal weight of a bulk field goes as $\Delta_\pm \sim 3/2 \pm \sqrt{9/4-m^2\ell^2}$. One might suspect that these will be the continuations of the higher spin bound states previously discussed for the anti-de Sitter case. Thus we see that even though the fundamental constituents (i.e. the higher spin particles) of such an extension of higher spin de Sitter gravity are not pathological (at least at the level of perturbation theory), they may form configurations which resemble tachyonic fields in de Sitter space. It is of interest to understand whether the late time behavior of such a theory can ever be asymptotically de Sitter \cite{workinprogress}. From the CFT point of view these are highly irrelevant operators which are {\it not} conserved currents. For there to be a late time de Sitter phase, one would require that turning on such irrelevant deformations can flow the theory to a UV fixed point. \section*{Acknowledgements} It is a great pleasure to thank Tarek Anous, Shamik Banerjee, Gerald Dunne, Daniel Harlow, Jim Hartle, Tom Hartman, Sean Hartnoll, Simeon Hellerman, Thomas Hertog, Diego Hofman, Juan Maldacena, Alex Maloney, Steve Shenker, Eva Silverstein, Andy Strominger and Lenny Susskind for useful discussions. The authors are also grateful to the ``Cosmology and Complexity'' conference in Hydra for their hospitality while this work was in progress. This work has been partially funded by DOE grant DE-FG02-91ER40654 and by a grant of the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
\section{Introduction} Refraction of light is a fundamental optical phenomenon. Significant progress in the fabrication of nanoscale structures has led to the creation of optical metamaterials, which allow us to manipulate the way light refracts. For example, at the interface of negative-index metamaterials, the angle of refraction turns out to be negative \cite{2001_Shelby,2007_Lezec}. In~\cite{2011_Yu}, an array of optically thin resonators with subwavelength separation was used to modulate the phase of incident light along the interface. It was demonstrated that, depending on the designed phase gradient, the refraction angle can be controlled at will for any incident angle. The authors of~\cite{2002_Enoch} showed that a metamaterial with a near-zero refractive index acts as an antenna with an extremely high directivity --- a source embedded in a slab of such a metamaterial emits waves whose refraction at the interfaces with the surrounding media concentrates the outgoing energy in a narrow cone. A phenomenon that is the reverse of this directive emission was predicted by Feng in~\cite{2012_Feng}. Feng showed that the direction of the incoming energy flow bends towards the interface normal for any incident angle when p-polarized (transverse magnetic) light enters an $\varepsilon$-near-zero metamaterial (a metamaterial with a vanishingly small real part of the permittivity). He clarified that such all-angle collimation of the incident light (in the original paper, the term ``omnidirectional bending'' was used) is a result of material losses. In our work we show that similar all-angle collimation can be realized with s-polarized (transverse electric) incident light at the interface of metamaterials with a vanishingly small real part of the permeability, so-called $\mu$-near-zero metamaterials. Note, however, that the theoretical framework we use is different from the one employed by Feng~\cite{2012_Feng}: To obtain our result we apply the theory of inhomogeneous waves which is commonly used to describe refraction of light in lossy media~\cite{2004_Fedorov,1983_Chen}. Such an approach allows us to generalize the result by Feng~\cite{2012_Feng} and shed some light on its polarization dependence. We show that all-angle collimation of incident light in $\varepsilon$-near-zero and $\mu$-near-zero metamaterials is a manifestation of the same phenomenon, which takes place under different polarization conditions. A vanishingly small real part of the permeability can be found in negative-index metamaterials near the permeability resonances that are used to achieve a negative index of refraction. To confirm our idea about all-angle collimation of incident light in $\mu$-near-zero metamaterials, we calculate the transmission angle of the Poynting vector at the interface with the negative-index metamaterial recently reported by Garc{\'\i}a-Meca {\it et~al.}~\cite{2011_Garcia-Meca}. \section{Inhomogeneous waves in a lossy metamaterial} \label{sec:2} Since all-angle collimation of incident light is a consequence of losses in a medium~\cite{2012_Feng} and the propagation of light in lossy media differs from that in lossless media, we first summarize the basic features of light waves in lossy media~\cite{2004_Fedorov,1983_Chen,Fedorov}. Unlike the case of light waves in lossless media, the equiamplitude and equiphase planes of light waves in lossy media are not parallel, and such waves are called {\it inhomogeneous waves}. 
The summarized results in this section will be used in the following sections to determine the direction of the Poynting vector of the wave transmitted through a metamaterial. We consider an interface between two isotropic media (see Fig.~\ref{fig:fig1}). The first medium is a lossless dielectric with a real refractive index $n_0$ and the second medium is a lossy metamaterial with complex permittivity $\varepsilon=\varepsilon'-i\varepsilon''$ and permeability $\mu=\mu'-i\mu''$. An incident plane wave with a real wave vector $\vc{k}_0$ comes from the first medium. The incident angle $\theta_0$ is the angle between $\vc{k}_0$ and the unit vector normal to the interface $\uv{q}$, which points toward the second medium. The complex electric and magnetic fields, $\vc{E}$ and $\vc{H}$, respectively, of the transmitted wave are written as \begin{equation} \label{eq:EH} \vc{E} = \vc{e}\exp[i(\omega t-\vc{k}\cdot\vc{r})], \qquad \vc{H} = \vc{h}\exp[i(\omega t-\vc{k}\cdot\vc{r})], \end{equation} where $\vc{e}$ and $\vc{h}$ are complex amplitude vectors, and $\vc{k}$ and $\vc{r}$ are a wave vector and a position vector, respectively, with $\omega$ and $t$ being the wave frequency and time. Since the second medium is lossy, the wave vector of the transmitted wave is complex: $\vc{k}=\vc{k}'-i\vc{k}''$, where $\vc{k}'$ and $\vc{k}''$ are the real phase and attenuation vectors, respectively, and they are written as \begin{equation} \label{eq:k} \vc{k}' = \vc{p} + q'\uv{q}, \qquad \vc{k}'' = q''\uv{q}. \end{equation} In Eq.~\eqref{eq:k} the phase vector $\vc{k}'$ is decomposed as a sum of two vectors, namely, $\vc{p}$ and $q'\uv{q}$ (see Fig.~\ref{fig:fig1}). Vectors $\vc{p}=[\uv{q}\times[\vc{k}'\times\uv{q}]]$ and $q'\uv{q}$ are, respectively, parallel and perpendicular to the interface. The attenuation vector $\vc{k}''$ is always normal to the interface. Thus vectors $\vc{k}'$ and $\vc{k}''$ are not parallel (the only exception is the case of normal incidence, $\vc{p}=0$) and the transmitted wave is inhomogeneous as mentioned earlier. The normal components of vectors $\vc{k}'$ and $\vc{k}''$ have magnitudes $q'=(\vc{k}'\cdot\uv{q})$ and $q''=(\vc{k}''\cdot\uv{q})$, respectively, which are given by~\cite{Fedorov} \begin{equation} \label{eq:q} q' = \sgn\{\xi''\}\frac{\omega}{c_0}\sqrt{(|\xi|+\xi')/2}, \qquad q'' =\frac{\omega}{c_0}\sqrt{(|\xi|-\xi')/2}, \end{equation} where $\xi'=\varepsilon'\mu'-\varepsilon''\mu''-n_0^2\sin^2\theta_0$, $\xi''=\varepsilon'\mu''+\varepsilon''\mu'$, $|\xi|=\sqrt{\xi'^2+\xi''^2}$, and $\sgn\{\xi''\}$ represents the sign of $\xi''$. Equation~\eqref{eq:q} says that the normal component $q'$ of the phase vector $\vc{k}'$ is positive (i.~e., $\vc{k}'$ is directed away from the interface) if $\xi''>0$, and negative (i.~e., $\vc{k}'$ is directed towards the interface) if $\xi''<0$. The latter case corresponds to negative refraction~\cite{Fedorov}. \begin{figure}[t] \centering \includegraphics{fig1} \caption{\label{fig:fig1}% Refraction of the wave at the interface of a lossy material. The incident plane wave has a real wave vector $\vc{k}_0$, while the wave vector $\vc{k}$ of the transmitted wave is complex due to material losses: $\vc{k}=\vc{k}'-i\vc{k}''$. In general, the phase $\vc{k}'$ and attenuation $\vc{k}''$ vectors are not parallel, and the transmitted wave is inhomogeneous.} \end{figure} The refractive index $m'$ and the attenuation coefficient $m''$ are formally defined by $|\vc{k}'|=m'\frac{\omega}{c_0}$ and $|\vc{k}''|=m''\frac{\omega}{c_0}$.
In practice they can be found from the following equations: \begin{equation} \label{eq:m} m' = \sqrt{(|\xi|+\xi'+2n_0^2\sin^2\theta_0)/2}, \qquad m'' = \sqrt{(|\xi|-\xi')/2}. \end{equation} Note that, according to Eq.~\eqref{eq:m}, both $m'$ and $m''$ depend on the incident angle $\theta_0$. The transmission angle $\theta$ is the angle between $\vc{k}'$ and $\uv{q}$, and can be found from the Snell's law, $m'\sin\theta=n_0\sin\theta_0$. \section{Poynting vector} In this section we derive the equation for the Poynting vector of the wave transmitted into a metamaterial. For this purpose we first consider the decomposition of the complex amplitude vectors $\vc{e}$ and $\vc{h}$ of the transmitted wave into the s- and p-polarized modes (TE and TM modes, respectively). Since the plane of incidence is a plane spanned by vectors $\vc{k}_0$ and $\uv{q}$, the vector $\vc{s}=[\vc{k}_0\times\uv{q}]=[\vc{k}\times\uv{q}]$ is normal to the plane of incidence (see Fig.~\ref{fig:fig1}). Using vector $\vc{s}$, we can decompose the complex electric vector amplitude $\vc{e}$ into the s- and p-polarized components: $\vc{e}=\vc{e}_s+\vc{e}_p$, where $\vc{e}_s=\vc{s}^{-2}(\vc{e}\cdot\vc{s})\vc{s}$ and $\vc{e}_p=\vc{s}^{-2}[\vc{s}\times[\vc{e}\times\vc{s}]]= \vc{s}^{-2}(\vc{e}\cdot\uv{q})[\vc{s}\times\vc{k}]$, or \begin{equation} \label{eq:e} \vc{e} = A_s\vc{s}+A_p[\vc{s}\times\vc{k}], \end{equation} where $A_s=\vc{s}^{-2}(\vc{e}\cdot\vc{s})$ and $A_p=\vc{s}^{-2}(\vc{e}\cdot\uv{q})$ are the complex amplitudes of the s- and p-polarized components, respectively. Amplitudes $A_s$ and $A_p$ of the transmitted wave are connected with the corresponding amplitudes of the incident wave by Fresnel coefficients \cite{2004_Fedorov,1983_Chen}, i.~e., $A_p=0$ if the incident wave is s-polarized, and $A_s=0$ if it is p-polarized. To find the decomposition of the complex magnetic vector amplitude $\vc{h}$, we substitute Eq.~\eqref{eq:e} into the identity $\vc{h}=(\mu_0\mu\omega)^{-1}[\vc{k}\times\vc{e}]$, and obtain \begin{equation} \label{eq:h} \vc{h} = \varepsilon_0\varepsilon\omega A_p\vc{s} - \frac{A_s}{\mu_0\mu\omega}[\vc{s}\times\vc{k}]. \end{equation} Now, with the help of Eq.~\eqref{eq:EH}, we write a complex time-averaged Poynting vector as $\vc{S}=\frac{1}{2}[\vc{E}\times\vc{H}^*]= \frac{1}{2}[\vc{e}\times\vc{h}^*]\exp[-2(\vc{k}''\cdot\vc{r})]$, where "$^*$" means complex conjugate. Substituting Eqs.~\eqref{eq:e} and \eqref{eq:h} into the last expression, we find that vector $\vc{S}$ can be written as a sum of three components, $\vc{S}=\frac{1}{2}(\vc{S}_s+\vc{S}_p+\vc{S}_{sp})\exp[-2(\vc{k}''\cdot\vc{r})]$, where \begin{subequations} \label{eq:S} \begin{align} \vc{S}_s & = \frac{|A_s|^2}{\mu_0\mu^*\omega} [\vc{s}\times[\vc{k}^*\times\vc{s}]], \\ \vc{S}_p & = \varepsilon_0\varepsilon^*\omega|A_p|^2[\vc{s}\times[\vc{k}\times\vc{s}]], \\ \vc{S}_{sp} & = -\frac{A_s^*A_p}{\mu_0\mu^*\omega} [[\vc{k}\times\vc{s}]\times[\vc{k}^*\times\vc{s}]]. \label{eq:Ssp} \end{align} \end{subequations} The s-polarized component $\vc{S}_s$ depends only on $A_s$, while the p-polarized component $\vc{S}_p$ depends only on $A_p$. The cross-polarized component $\vc{S}_{sp}$ depends on both $A_s$ and $A_p$. According to Eq.~\eqref{eq:Ssp}, the cross-polarized component $\vc{S}_{sp}$ exists only in lossy media where the wave vector $\vc{k}$ is complex. A real time-averaged Poynting vector $\vc{P}$ corresponds to the real part of $\vc{S}$. 
Expanding the vector products and taking the real parts of Eq.~\eqref{eq:S}, we find that $\vc{P}=\frac{1}{2}(\vc{P}_s+\vc{P}_p+\vc{P}_{sp})\exp[-2(\vc{k}''\cdot\vc{r})]$, where \begin{subequations} \label{eq:P} \begin{align} \vc{P}_s & = \frac{\vc{s}^2|A_s|^2}{\mu_0|\mu|^2\omega} (\mu'\vc{k}'+\mu''\vc{k}''), \\ \vc{P}_p & = \varepsilon_0\omega\vc{s}^2|A_p|^2(\varepsilon'\vc{k}'+\varepsilon''\vc{k}''), \\ \vc{P}_{sp} & = \frac{2q''\vc{s}^2}{\mu_0|\mu|^2\omega} (\mu'\Im\{A_s^*A_p\}-\mu''\Re\{A_s^*A_p\})\vc{s} \end{align} \end{subequations} with $\Re\{\}$ and $\Im\{\}$ being the real and imaginary parts of the corresponding expressions. Equation~$\eqref{eq:P}$ says that the s- and p-polarized components of the Poynting vector are proportional to the sum of vectors $\vc{k}'$ and $\vc{k}''$, while the cross-polarized component $\vc{P}_{sp}$ is proportional to vector $\vc{s}$ and thus normal to the plane of incidence. The component $\vc{P}_{sp}$ is responsible for the transversal shift of the transmitted light beam. A similar shift named Imbert-Fedorov shift takes place for the reflected light beam for the case of total internal reflection \cite{1972_Imbert,2013_Fedorov}. Despite the fact that these two shifts look similar, there are several different viewpoints on the component $\vc{P}_{sp}$: The authors of~\cite{1981_Halevi} argue that "there is no mechanism for energy transport in the direction perpendicular to the plane of incidence" and set $\vc{P}_{sp}$ equal to zero, based on the fact that "the Poynting vector is defined only up to an arbitrary, additive, solenoidal vector". Fedorov in his book \cite{2004_Fedorov} believes that this component is real and responsible for the light pressure in the direction perpendicular to the plane of incidence. The authors of~\cite{2011_Dmitruk} came to the conclusion that the appearance of $\vc{P}_{sp}$ is caused by excitation of surface electric polariton mode or surface magnetic mode by the resonant or non-resonant manner. In this work, however, we only consider s- ($A_p=0$) or p-polarized ($A_s=0$) incident wave for which $\vc{P}_{sp}=0$. By inspecting Eq.~\eqref{eq:P}, we find that both $\vc{P}_s$ and $\vc{P}_p$ are parallel to $\vc{k}''$ if $\mu'$ or $\varepsilon'$ is equal to zero, respectively. Meanwhile, for any incident angle, the attenuation vector $\vc{k}''$ is always normal to the interface [see Eq.~\eqref{eq:k}]. Therefore, we conclude that, at the interface of a material with a vanishingly small real part of permeability ($\mu'=0$), the s-polarized incident wave gives rise to the transmitted wave whose energy flow is directed normally to the interface, irrespective of the incident angle. A similar argument holds for the p-polarized incident wave at the interface of a material with a vanishingly small real part of permittivity ($\varepsilon'=0$), as shown by Feng~\cite{2012_Feng}. In conclusion we would like to note that it is possible to express the Poynting vector in Eq.~\eqref{eq:P} in terms of the s- and p-polarized components of the magnetic field $\vc{H}$. The easiest way to do so is to apply the usual electromagnetic duality by replacing $\varepsilon_0$ and $\varepsilon$ by $\mu_0$ and $\mu$, and using the corresponding amplitudes $B_s$ and $B_p$ ($\vc{h} = B_s\vc{s}+B_p[\vc{s}\times\vc{k}]$) instead of $A_s$ and $A_p$. 
After such procedures we will find that all-angle collimation in $\varepsilon$-near-zero and $\mu$-near-zero metamaterials happens for s- and p-polarized magnetic field $\vc{H}$, respectively, which is opposite to the case of $\vc{E}$. However, this is not surprising, since vectors $\vc{E}$ and $\vc{H}$ are perpendicular in the incoming wave, and, for example, s-polarization in terms of $\vc{E}$ corresponds to p-polarization in terms of $\vc{H}$. \section{Transmission angles} \begin{figure}[t] \centering \includegraphics[width=7cm]{fig2} \caption{\label{fig:fig2}% (a) Variation of the real $\mu'$ and imaginary $\mu''$ parts of the permeability $\mu$ as a function of wavelength $\lambda$. (b) and (c) The transmission angles $\psi_s$ and $\psi_p$ for the s- and p-polarized components of the Poynting vector as functions of wavelength $\lambda$ and incident angle $\theta_0$. All above functions are calculated for the interface between vacuum and the metamaterial reported in~\cite{2011_Garcia-Meca}. The spectral region of negative refraction is located between the two dotted vertical lines.} \end{figure} Now we consider the transmission angles $\psi_s$ and $\psi_p$ for s- and p-polarized components of the Poynting vector. Angles $\psi_s$ and $\psi_p$ are defined as the angles between vectors $\vc{P}_s$ and $\vc{P}_p$, respectively, and the unit normal $\uv{q}$. Consider, for example, the angle $\psi_s$. We can find this angle from the equation, $\tan\psi_s=|[\vc{P}_s\times\uv{q}]|/(\vc{P}_s\cdot\uv{q})$. Using Eq.~\eqref{eq:P} and taking into account that $[\vc{k'}\times\uv{q}]=\vc{s}$ and $\vc{k}''\parallel\uv{q}$, we find that $\tan\psi_s=\mu'|\vc{s}|/(\mu'q'+\mu''q'')$. Similarly we obtain $\tan\psi_p=\varepsilon'|\vc{s}|/(\varepsilon'q'+\varepsilon''q'')$. This form of equations for $\psi_s$ and $\psi_p$ was previously obtained in~\cite{1981_Halevi}. Using the equations of $|\vc{s}|=m'\frac{\omega}{c_0}\sin\theta$, $q'=m'\frac{\omega}{c_0}\cos\theta$, and $q''=m''\frac{\omega}{c_0}$, we finally obtain \begin{equation} \label{eq:tsp} \tan\psi_s = \frac{\mu'm'\sin\theta}{\mu'm'\cos\theta+\mu''m''}, \qquad \tan\psi_p = \frac{\varepsilon'm'\sin\theta}{\varepsilon'm'\cos\theta+\varepsilon''m''}. \end{equation} Here we see another manifestation of all-angle collimation. Namely, in case of $\mu'=0$ or $\varepsilon'=0$ the corresponding transmission angle, $\psi_s$ or $\psi_p$, is equal to zero, irrespective of the incident angle. Moreover, Eq. \eqref{eq:tsp} says that the transmission angles $\psi_s$ and $\psi_p$ are not equal, which means that the direction of the energy flow in a lossy material is different for s- and p-polarized incident wave~\cite{1981_Halevi}. However, for any natural material this difference is negligibly small, since usually $\mu''/\mu'\ll1$ and $\varepsilon''/\varepsilon'\ll1$, which, according to Eq.~\eqref{eq:tsp}, means that $\psi_s\simeq\psi_p\simeq\theta$. Nevertheless, the difference between $\psi_s$ and $\psi_p$ can be significant in metamaterials where the above inequalities may not hold. To be more quantitative we calculate the values of $\psi_s$ and $\psi_p$ for the case in which the first medium is vacuum ($n_0=1$) and the second medium is the negative-index metamaterial reported in~\cite{2011_Garcia-Meca}. We retrieve the relevant parameters for the permittivity $\varepsilon$ and permeability $\mu$ of this metamaterial, performing the parameter fitting for the Drude model~\cite{2011_Fedorov}. 
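For concreteness, the quantities entering Eqs.~\eqref{eq:q}, \eqref{eq:m}, and \eqref{eq:tsp} can be evaluated numerically in a few lines; the sketch below uses illustrative placeholder parameters for a lossy $\mu$-near-zero medium rather than the values retrieved from~\cite{2011_Garcia-Meca}:
\begin{verbatim}
import numpy as np

def transmission_angles(e1, e2, m1, m2, theta0, n0=1.0):
    # Angles psi_s, psi_p (in radians) of the s- and p-polarized components
    # of the Poynting vector, for eps = e1 - i*e2 and mu = m1 - i*m2,
    # following Eqs. (eq:q), (eq:m), Snell's law, and Eq. (eq:tsp).
    s2 = (n0 * np.sin(theta0))**2
    xi1 = e1*m1 - e2*m2 - s2                      # xi'
    xi2 = e1*m2 + e2*m1                           # xi''
    xim = np.hypot(xi1, xi2)                      # |xi|
    mp = np.sqrt((xim + xi1 + 2.0*s2) / 2.0)      # refractive index m'
    ms = np.sqrt((xim - xi1) / 2.0)               # attenuation coefficient m''
    theta = np.arcsin(n0 * np.sin(theta0) / mp)   # m' sin(theta) = n0 sin(theta0)
    psi_s = np.arctan2(m1*mp*np.sin(theta), m1*mp*np.cos(theta) + m2*ms)
    psi_p = np.arctan2(e1*mp*np.sin(theta), e1*mp*np.cos(theta) + e2*ms)
    return psi_s, psi_p

# Illustrative (not fitted) parameters: eps = 2 - 0.5i, mu = 0 - 0.4i
for th0_deg in (10.0, 40.0, 70.0):
    ps, pp = transmission_angles(2.0, 0.5, 0.0, 0.4, np.deg2rad(th0_deg))
    print(th0_deg, np.rad2deg(ps), np.rad2deg(pp))  # psi_s stays at zero
\end{verbatim}
Since $\mu'=0$ in this example, the computed $\psi_s$ vanishes for every incident angle, in agreement with the discussion of the previous section, while $\psi_p$ varies with $\theta_0$.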
Figure \ref{fig:fig2}(a) shows the dependence of the real and imaginary parts of the retrieved permeability $\mu$ on the wavelength $\lambda$. We see that this dependence has a resonance feature and the real part $\mu'$ of the permeability is equal to zero at the wavelengths 732 and 768\,nm. At these wavelengths we expect all-angle collimation for incident s-polarized wave. Using the retrieved functions for $\varepsilon$ and $\mu$, we calculate $\psi_s$ and $\psi_p$ by Eq.~\eqref{eq:tsp} where values of $m'$, $m''$, and $\theta$ have been obtained using the equations in section \ref{sec:2}. Figures \ref{fig:fig2}(b) and \ref{fig:fig2}(c) show the dependencies of the transmission angles $\psi_s$ and $\psi_p$ on the wavelength $\lambda$ and the incident angle $\theta_0$. In spite of negative refraction, we consider $\psi_s$ and $\psi_p$ as angles between two vectors and set them positive. As expected, we see in Fig.~\ref{fig:fig2}(b) that, at wavelengths where $\mu'=0$, the transmission angle $\psi_s$ is zero for any incident angle. Therefore, the direction of energy flow in the second medium will be normal to the interface for any incident angle. By comparing Figs.~\ref{fig:fig2}(b) and \ref{fig:fig2}(c), we clearly see the difference between $\psi_s$ and $\psi_p$. This difference is more significant at the wavelength $\lambda=732$\,nm where we have all-angle collimation for the s-polarized component of the Poynting vector. We hope that this observation will encourage experimentalists to verify the difference between the transmission angles for s- and p-polarized incident light. Before proceeding to the conclusions, we would like to briefly discuss the case in which light propagates to the reverse direction, that is, from $\varepsilon$-near-zero or $\mu$-near-zero lossy metamaterial into an ordinary medium. One may naively think that we will have a directive emission similar to the one in~\cite{2002_Enoch}. However, a simple generalization of our result to the reverse problem leads to an unphysical solution: In this work we have assumed that a homogeneous (the attenuation vector equals zero) plane wave comes from a lossless medium to the interface of a lossy metamaterial. Therefore, the most natural way to formulate the reverse problem is to assume that a homogeneous damped (the nonzero attenuation vector is parallel to the wave vector) plane wave comes from a lossy metamaterial to the interface with a lossless medium. This assumption implies that somewhere deep in the metamaterial there is an embedded light source, whose radiation, close to the interface, can be approximated by homogeneous damped waves. Upon arrival to the interface, the attenuation vector of these homogeneous damped waves will have a nonzero tangential component (the only exception is the waves propagating along the normal to the interface). Since the tangential components of attenuation vectors are continuous across any interface~\cite{Fedorov}, the wave outgoing from the metamaterial will have nonzero attenuation vectors, that is, the wave transmitted into the lossless surrounding medium will be inhomogeneous. A closer examination shows that the attenuation vector of the outgoing wave will point towards the interface, which means that the amplitude of the outgoing wave will grow unlimitedly, which is unphysical. 
Therefore, we conclude that the radiation of a light source embedded in a lossy metamaterial cannot be represented by homogeneous damped waves; to find a correct solution one should consider the mechanism of light emission from such a source in more detail, which is beyond the scope of this work. \section{Conclusions} In conclusion, we have theoretically studied the transmission of light waves in $\mu$-near-zero metamaterials. Similar to the case of $\varepsilon$-near-zero metamaterials~\cite{2012_Feng}, we have found the effect of all-angle collimation of incident light in $\mu$-near-zero metamaterials for s-polarized incident waves. Thus we have provided a generalized framework, based on which we can show that all-angle collimation of incident light in $\varepsilon$-near-zero and $\mu$-near-zero metamaterials is a manifestation of the same phenomenon under different polarizations of the incident light. We have presented specific results for the negative-index metamaterial with a permeability resonance where the real part of the permeability becomes zero. Additionally, we have shown that the transmission angle of the Poynting vector depends on the polarization of the incident wave, and that this difference is very significant in the spectral region where all-angle collimation of incident light takes place. \section*{Acknowledgments} This work was supported by a Grant-in-Aid for Scientific Research from the Ministry of Education and Science of Japan. Part of the work by V.\,Yu.~Fedorov was also supported by the Japan Society for the Promotion of Science (JSPS). \end{document}
\section{Introduction} \label{1} The radio pulsar J1119--6127, at the center of the supernova remnant (SNR) G292.2--0.5, was discovered in the Parkes multibeam 1.4 GHz pulsar survey with a spin period $P$=408~ms, spin-down rate $\dot{P}$=4$\times$10$^{-12}$ s~s$^{-1}$, characteristic age $\tau_c$$\approx$1.9~kyr, and spin-down luminosity $\dot{E}$=2.3$\times$10$^{36}$~ergs~s$^{-1}$ (Camilo et al. 2000). Its surface dipole magnetic field strength is $B$=4.1$\times$10$^{13}$~G, close to the quantum critical field strength $B_{QED}$=4.4$\times$10$^{13}$~G, making PSR~J1119--6127 a high-magnetic field pulsar. It has also shown sporadic, or rotating radio transient-like behavior, preceded by large amplitude glitch-induced changes in the spin-down parameters (Weltevrede et al. 2011). The X-ray counterpart to the radio pulsar was first resolved with \textit{Chandra} in 2002, providing the evidence for a compact pulsar wind nebula (PWN) of $\sim$3$\arcsec$$\times$6$\arcsec$ in angular size (Gonzalez \& Safi-Harb 2003). A follow up study performed with \textit{Chandra} in 2004, combined with the 2002 observation, allowed for an imaging and spectroscopic study of the pulsar and PWN independently of each other (Safi-Harb \& Kumar 2008). The PWN showed elongated jet-like features extending at least $\sim$7$\arcsec$ north and south of the pulsar, and a longer southern jet becoming evident after accumulating $\sim$130~ks of combined \textit{Chandra} data. The pulsar spectrum was described by a two-component model consisting of a powerlaw (PL) with photon index $\Gamma$$\approx$1.9 and a thermal component, either a blackbody (BB) with temperature $kT$$\approx$0.21~keV or a neutron star atmospheric (NSA) model with $kT$$\approx$0.14~keV. The PWN spectrum was described by a nonthermal model with $\Gamma$=1.1--1.4 (Safi-Harb \& Kumar 2008). An \textit{XMM-Newton} study of the pulsar revealed pulsations with an unusually high pulsed fraction of 74$\pm$14\% in the 0.5--2.0 keV energy range (Gonzalez et al. 2005). The profile is single-peaked and phase-aligned with its radio counterpart. No evidence of pulsations was detected above 2.5 keV (Ng et al. 2012). Gamma-ray pulsations were reported from PSR~J1119--6127 using \textit{Fermi}, making it the source with the highest inferred $B$-field detected among $\gamma$-ray pulsars (Parent et al. 2011). On 2016 July 27 and 28, PSR J1119--6127 exhibited two short (0.02--0.04~s), energetic hard X-ray bursts detected by the \textit{Fermi} Gamma-ray Burst Monitor and \textit{Swift} Burst Alert Telescope (Younes et al. 2016; Kennea et al. 2016; G\"o\u{g}\"u\c{s} et al. 2016). Using the \textit{Swift} X-ray Telescope (XRT) and \textit{NuSTAR} data, Archibald et al. (2016) reported a large glitch ($\Delta$$\nu$/$\nu$=5.74(8)$\times$10$^{-6}$), pulsar flux increase by a factor $\gtrsim$160 (0.5--10~keV), and the appearance of strong X-ray pulsations above 2.5~keV for the first time. On the other hand, the pulsed radio emission from PSR~J1119--6127 disappeared after the onset of magnetar-like bursts, but reappeared two weeks later (Burgay et al. 2016a, 2016b; Majid et al. 2017). Using the \textit{Fermi} Large Area Telescope data obtained from 2016 July 27--August 12, Younes et al. (2016) reported the lack of detection of $\gamma$-ray pulsations and derived an upper limit, consistent with the measured pre-burst $\gamma$-ray flux of the pulsar. 
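For reference, the surface dipole field and spin-down luminosity quoted above follow from the measured $P$ and $\dot{P}$ through the standard dipole spin-down relations, $B$$\simeq$3.2$\times$10$^{19}$$(P\dot{P})^{1/2}$~G$\approx$4.1$\times$10$^{13}$~G and $\dot{E}$=4$\pi^{2}$$I$$\dot{P}$/$P^{3}$$\approx$2.3$\times$10$^{36}$~ergs~s$^{-1}$, assuming $P$ in seconds and a neutron star moment of inertia $I$=10$^{45}$~g~cm$^{2}$.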
\begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{f1a.eps} \includegraphics[width=0.5\textwidth]{f1b.eps} \includegraphics[width=0.5\textwidth]{f1c.eps} \includegraphics[width=0.5\textwidth]{f1d.eps} \caption{Broadband (0.5--7.0~keV) pre-burst (left) and post-burst (right) images of PSR~J1119--6127 and its PWN in logarithmic scale. The top and bottom panels show the raw, unsmoothed and exposure corrected, smoothed images in units of counts~pixel$^{-1}$ and photons~cm$^{-2}$~s$^{-1}$~arcsec$^{-2}$, respectively. The pre-burst image was obtained by combining the 2002 and 2004 observations. The post-burst image shows faint nebulosity near the saturated PSF wings of the bright pulsar that suggests jet and tori-like features, consistent with a PWN interpretation. North is up and East is to the left. } \end{figure*} We requested a \textit{Chandra} Director's Discretionary Time observation to study the pulsar and, particularly, the effect of the magnetar-like bursts on its surrounding nebula. The superior angular resolution of \textit{Chandra} is required to disentangle the emission of the pulsar and its compact PWN. Here, we present these results together with a reanalysis of the archived pre-burst \textit{Chandra} observations. The distance to the PSR~J1119--6127 is taken as 8.4 kpc from HI absorption measurements to the SNR (Caswell et al. 2004) and we scale all derived quantities in units of $d_{8.4}$=$D/8.4$~kpc, where $D$ is the actual distance to the pulsar. \begin{figure*}[th] \includegraphics[width=0.5\textwidth]{f2a.eps} \includegraphics[width=0.5\textwidth]{f2b.eps} \caption{Surface brightness profiles of PSR~J1119--6127 for different blur values. The broadband (0.5--7.0 keV) radial profiles are shown for one of the pre-burst (left) and post-burst (right) observations. } \end{figure*} \section{Observation and Data Reduction} \label{2} We obtained new observations of PSR~J1119$-$6127 on 2016 October 27 (ObsID 19690), three months after the reports of the magnetar-like bursts. We reprocessed all previous observations taken on 2004 November 2--3 (ObsID 6153), 2004 October 31--November 1 (ObsID 4676), and 2002 March 31 (ObsID 2833). The target was positioned at the aimpoint of the Advanced CCD Imaging Spectrometer (ACIS) in all four observations. The standard processing of the data was performed using the \textit{chandra\_repro} script in CIAO version 4.9\footnote{http://cxc.harvard.edu/ciao} (CALDB v.4.7.6) to create new level 2 event files. The resulting effective exposure times were 55.5~ks, 18.9~ks, 60.5~ks, and 56.8~ks for ObsIDs 19690, 6153, 4676, and 2833, respectively. Throughout this paper, we refer to the 2002 and 2004 \textit{Chandra} observations as `pre-burst' and the data taken in 2016 as `post-burst'. \section{Imaging analysis} \label{3.1} Figure~1 shows the broadband (0.5--7.0 keV) pre-burst (left) and post-burst (right) zoomed-in raw, unsmoothed (top) and exposure corrected, smoothed (bottom) images of the region around PSR~J1119--6127, centered on the pulsar coordinates at $\alpha_{J2000}$=11$^{h}$19$^{m}$14\fs260 and $\delta_{J2000}$=$-$61\degr27\arcmin49\farcs30. The bottom-panel images have been exposure-corrected using the CIAO task \textit{fluximage} with a binsize of 1 pixel and smoothed using a Gaussian function of radius 2 pixels. The pre-burst image shows a nebula of size $\sim$6$\arcsec$$\times$15$\arcsec$ in the north-south direction (Safi-Harb \& Kumar 2008) while the post-burst image shows small-scale fine structures around the pulsar. 
The post-burst PWN morphology along the southeast--northwest direction can be considered as an equatorial torus $\approx$10\farcs0$\times$2\farcs5 in size, running perpendicular to the small jet-like structure $\approx$1\farcs5$\times$3\farcs5 southwest of the pulsar. The overall PWN is tilted at an angle of $\sim$35$\degr$--40$\degr$ towards the west. A detailed spatially resolved imaging or spectroscopic study of the PWN structures is not possible with the current data due to the paucity of photon counts. To further constrain the morphology of the diffuse emission around the pulsar at small angular scales and to probe the pulsar extent, we used the \textit{Chandra} Ray Tracer (ChaRT\footnote{http://cxc.harvard.edu/ciao/PSFs/chart2/index.html}) and MARX\footnote{http://space.mit.edu/CXC/MARX/} (ver. 5.3.2) software packages. The point spread functions (PSFs) were simulated using the best-fit pulsar spectrum (see Section 4) and ChaRT for each observation. The ChaRT output was then supplied to MARX to produce the simulated event files and PSF images. Different values (0.25, 0.28, 0.30, 0.32, 0.35) of the \textit{AspectBlur} parameter (which accounts for the known uncertainty in the determination of the aspect solution) were used to search for an excess corresponding to the extended emission. We created broadband (0.5--7.0~keV) radial profiles up to 15$\arcsec$ by extracting net counts in circular annuli centered on the point source, with an annular background from 30$\arcsec$--40$\arcsec$, and rebinned the data to obtain better statistical precision. Figure~2 shows a comparison of the surface brightness profiles for the pre-burst (ObsID 4676) and post-burst (ObsID 19690) data with the profiles generated with ChaRT/MARX for different blur values. The data show some slight excess compared to the simulated profiles beyond a radius of $\approx$1\farcs5 for both the pre-burst and post-burst observations, although excess is clearly seen only beyond 11$\arcsec$ for the post-burst data. We performed spatial fitting on the 0.5--7.0~keV pulsar image to study the morphology of the extended emission quantitatively with \textit{Sherpa}\footnote{http://cxc.harvard.edu/sherpa/}. The ChaRT/MARX-generated PSF was loaded as a table model to be used as a convolution kernel for the point source emission. The multi-component source model in \textit{Sherpa} included a 2D Beta model (\textit{beta2d}) for describing the extended component of the source emission, a PSF-convolved 2D Gaussian model (\textit{gauss2d}) to describe the point source component, and a \textit{const2d} model to describe the constant background level contributing to the total emission. The best-fit parameters were determined using the C-statistic and the Nelder-Mead optimization method. We obtained a diffuse emission radius of 10\farcs3$\pm$1\farcs2 and 6\farcs2$\pm$0\farcs8 for the post-burst and pre-burst data, respectively, in comparison with the point source full-width at half maximum (FWHM) of 1\farcs2, suggesting an expansion of the nebula. \begin{figure*}[th] \includegraphics[angle=-90, width=0.5\textwidth]{f3a.eps} \includegraphics[angle=-90, width=0.5\textwidth]{f3b.eps} \caption{{Left}: Post-burst (black; PL fit) and pre-burst (red; BB+PL fit) X-ray spectra of PSR~J1119--6127. {Right}: Post-burst (black) and pre-burst (red) PWN spectra fitted with a PL model. The bottom panel shows the residuals in terms of sigmas.
The pre-burst data are combined together using the \textit{combine\_spectra} task in CIAO and rebinned for display purposes.} \end{figure*} \section{Spectral analysis} \label{4} The spectral analysis was performed using the X-ray spectral fitting package, XSPEC version 12.9.1, and restricted to 0.5--7.0~keV as these energy bands were not background dominated. The contributions from background point sources were removed prior to the extraction of spectra. All the spectra extracted were grouped by a minimum of 10 counts per bin and the errors were calculated at the 90\% confidence level. \subsection{Pulsar spectrum} For a point source observed on-axis with \textit{Chandra}, $\sim$90\% of the encircled energy lies within 1\farcs2 at 1.49~keV and within 2\farcs5 at 6.4~keV\footnote{http://cxc.harvard.edu/proposer/POG/html/chap6.html}. Guided by the radial profile plots (Figure~2), 90\% encircled energy fraction, and sherpa modelling, we here consider 1\farcs5 as the best extraction radius for the pulsar. We chose an annular ring of 3$\arcsec$--5$\arcsec$ centered on the pulsar as background, to minimize the contamination from the surrounding PWN. Due to the evidence of pulsar brightening in the post-burst data, we investigated the possibility of pileup using the CIAO task \textit{pileup\_map}\footnote{http://cxc.harvard.edu/ciao/ahelp/pileup\_map.html.}. We obtained an average of 0.2~photons per frame of 3.2~s in the centermost pixel of the post-burst pulsar image, which translates into a pileup fraction of $\sim$10\%. For a quantitative estimate of the post-burst pulsar spectrum, we used the \textit{jdpileup} model of the \textit{Chandra} spectral fitting software \textit{Sherpa} convolved with an absorbed PL and BB model, which gave a pileup fraction of 5.2\% and 21\% for the two models, respectively. The pre-burst data did not show any evidence of pileup. The post-burst pulsar spectrum was first fit with different one- and two-component models. A PL model yielded a better fit ($\chi^2_{\nu}$/dof=0.906/291) with $N_H$=1.7$\times$10$^{22}$~cm$^{-2}$ and $\Gamma$=1.8, while the BB model gave a low $N_H$=0.7$\times$10$^{22}$~cm$^{-2}$ and $kT$=1.0~keV for $\chi^2_{\nu}$/dof=1.437/291, with excess emission seen above $\sim$3~keV. The addition of a second component was statistically not required for a single PL model, but the fit improved when a PL component was added to the BB model with $N_H$=1.6$^{+0.3}_{-0.2}$$\times$10$^{22}$~cm$^{-2}$, $kT$=0.4$\pm$0.1~keV, and $\Gamma$=1.5$^{+0.3}_{-0.4}$ for $\chi^2_{\nu}$/dof=0.892/289. We next fitted the spectra by including a pileup component to the PL and BB models, and the results are shown in Table~1. The pre-burst spectrum of the pulsar was also explored with different models, and as elaborated in Safi-Harb \& Kumar (2008), a two-component BB+PL model best described the pulsar spectra (Table~1). Figure~3 (left) shows the best-fit post-burst (PL; black) and pre-burst (BB+PL; red) pulsar spectra. To better evaluate the contamination in the spectra from the surrounding PWN, the pulsar spectra were explored using larger extraction radii of 2\farcs0 and 2\farcs5 and we obtained similar spectral parameters as for the 1\farcs5 region. \subsection{PWN spectrum} We extracted an annular ring of 2$\arcsec$--10$\arcsec$ region for the overall PWN to determine any spectral variations in the PWN between the epochs, following the imaging analysis and sherpa modelling results. 
The background was extracted from a nearby source-free elliptical region\footnote{The spectral fits were explored with different annular and elliptical backgrounds and binning, and the spectral parameters were consistent within uncertainties to those shown in Table 1.}. All the extracted regions were simultaneously fit with a PL model by tying $N_H$ together. Figure~3 (right) and Table~1 show the best-fit model and spectral parameters for the PWN. The pre-burst spectral fit results obtained here are consistent with the results for the $\sim$6$\arcsec$$\times$15$\arcsec$ region presented in Safi-Harb \& Kumar (2008). \begin{table*}[ht] \caption{Spectral fits to the PSR J1119--6127 and its PWN} \center \begin{tabular}{l l l l l l l} \hline\hline Parameter & \multicolumn{3}{c}{Pulsar} & \multicolumn{2}{c}{PWN} \\ \cline{2-6} & Pre-burst & \multicolumn{2}{c}{Post-burst} & Pre-burst & Post-burst \\ \cline{3-4} & BB+PL & pileup*PL & pileup*BB & PL & PL \\ \hline $ N_{H}$ ($10^{22}$ cm$^{-2}$) & 1.6$^{+0.7}_{-0.5}$ & 1.8$\pm$0.2 & 1.1$\pm$0.1 & \multicolumn{2}{c}{1.8$^{+0.6}_{-0.5}$} \\ \cdashline{5-6} $\Gamma$ & 2.0$_{-0.9}^{+0.8}$ & 2.0$\pm$0.2 & \nodata & 1.2$\pm$0.8 & 2.2$\pm$0.5 \\ kT (keV) & 0.2$\pm$0.1 & \nodata & 0.7$\pm$0.1 & \nodata & \nodata\\ $F_{unabs}$ (PL)$^a$ & 7.4$_{-1.8}^{+1.0}\times$10$^{-14}$ & 5.7$_{-1.1}^{+1.4}\times$10$^{-12}$ & \nodata & 2.3$_{-1.5}^{+3.5}\times$10$^{-14}$ & 2.2$_{-0.9}^{+1.1}\times$10$^{-13}$ \\ $F_{unabs}$ (BB)$^a$ & 1.8$_{-0.8}^{+1.5}\times$10$^{-13}$ & \nodata & 1.1$^{+0.3}_{-0.1}\times$10$^{-11}$ & \nodata & \nodata \\ $\chi_{\nu}^2$/dof & 1.008/60 & 0.893/289 & 0.988/289 & \multicolumn{2}{c}{0.912/57} \\ \cdashline{5-6} Count rate$^b$ & (4.8$\pm$0.3)$\times$10$^{-3}$ & \multicolumn{2}{c}{(1.08$\pm$0.01)$\times$10$^{-1}$} & (8.3$\pm$2.3)$\times$10$^{-4}$ & (6.9$\pm$0.4)$\times$10$^{-3}$ \\ \cdashline{3-4} $L_{X}$$^c$ & 2.2$_{-1.1}^{+1.4}\times$10$^{33}$ & 4.8$_{-0.9}^{+1.2}\times$10$^{34}$ & 9.3$_{-0.8}^{+2.5}$$\times$10$^{34}$ & 1.9$_{-1.3}^{+1.9}\times$10$^{32}$ & 1.9$_{-0.8}^{+0.9}\times$10$^{33}$ \\ $\eta_X$=$L_{X}$/$\dot{E}$ & 0.001 & 0.02 & 0.04 & 0.0001 & 0.001 \\ \hline \end{tabular} \tablecomments{Galactic absorption is modeled with \textit{tbabs} in XSPEC (Wilms et al. 2000). The post-burst pileup fractions for the PL and BB models are 5.2\% and 21\%, respectively. The pre-burst and post-burst PWN spectra are simultaneously fit by tying their $N_H$ together. Errors are at 90\% confidence level. \\ $^a$ Unabsorbed flux (0.5--7.0 keV) in units of ergs~cm$^{-2}$~s$^{-1}$. \\ $^b$ Background subtracted count rates (0.5--7.0 keV) in units of counts~s$^{-1}$. \\ $^c$ X-ray luminosity (0.5--7.0 keV) in units of ergs s$^{-1}$ assuming isotropic emission at a distance of 8.4~kpc. \\ } \end{table*} \section{Discussion and conclusion} \label{4} The high-magnetic field ($B$$\gtrsim$$B_{QED}$) pulsars are believed to be an important class of objects for studying the relationship between magnetars and radio pulsars. Seven high-$B$ pulsars have been identified so far. These include the radio pulsars J1119--6127, J1718--3718, J1734--3333, J1814--1744, J1847--0130, the X-ray pulsar J1846--0258, and the rotating radio transient J1819--1458 (Ng \& Kaspi 2011). Based on the X-ray observations of PSR~J1718--3718, Kaspi \& McLaughlin (2005) suggested that high-$B$ pulsars may be quiescent magnetars. 
The first evidence for such a link was found when PSR~J1846$-$0258 in SNR Kes~75 showed magnetar-like bursts and radiative changes such as a flux increase and spectral softening in X-rays (Gavriil et al. 2008; Kumar \& Safi-Harb 2008). PSR~J1119--6127 is the first radio pulsar, and the second high-$B$ pulsar, to display a magnetar-like burst. Here, we present a discussion of the results from our pre-burst and post-burst study of PSR~J1119--6127 and its compact nebula. The \textit{Chandra} observations of PSR~J1119--6127, made three months after its magnetar-like bursts, can be described by a single PL ($\Gamma$=2.0$\pm$0.2) or BB ($kT$=0.7$\pm$0.1~keV) model with pileup, in contrast to its quiescent spectrum, which required a combination of PL and BB models. Here, we prefer a PL model over a BB model for the post-burst pulsar data for the following reasons. Firstly, the pileup fraction required to fit the spectrum with a BB model is much higher than for a PL model, suggesting that a BB fit is only possible if a large pileup fraction absorbs high-energy photons that are better fit with a PL tail. Secondly, the $N_H$ obtained from a BB model is much lower than that obtained for the pulsar and its PWN (Table~1), as well as from the SNR diffuse emission regions immediately surrounding the pulsar (Kumar et al. 2012). Thirdly, when the post-burst pulsar spectrum was fitted with a BB+PL model (although a second component was not statistically required), we find that $\sim$85\% of the total unabsorbed flux is dominated by the nonthermal component. These results, together with the fact that the pulsar's emission beyond 3~keV could not be described by a BB model alone, imply that the X-ray emission in the post-burst state is mainly magnetospheric in nature. Assuming isotropic emission, the pulsar's luminosity $L_{X,PSR}$=4$\pi$$d_{8.4}^2$$F_{unabs}$$\approx$4.8$\times$10$^{34}$~$d_{8.4}^2$~ergs~s$^{-1}$, implying an X-ray efficiency $\eta_{X, PSR}$=$L_{X, PSR}/\dot{E}$$\approx$0.02 in the 0.5--7.0~keV energy range. It is interesting to note that the pulsar's $\eta_X$ is less than 1 during its magnetar-like burst, indicating that its spin-down energy could still power the X-ray emission. Magnetar bursts are usually accompanied by dramatic changes in the persistent emission properties and spectral evolution, such as hardening/softening, changes in pulsed fraction, pulse profiles, flux, etc. (Rea \& Esposito 2011). The burst-induced radiative changes observed for PSR~J1119--6127 are very similar to those seen in magnetars, suggesting activity associated with the pulsar's high magnetic field. Such results have been found in the case of PSR~J1846--0258 as well, further implying that the high-$B$ pulsars could be powered by both rotational and magnetic energy (Camilo 2008). The pulsar's pre-burst data showed a compact PWN of size $\sim$6$\arcsec$$\times$15$\arcsec$, while we see a change in the PWN morphology with faint tori and jet-like features surrounding the pulsar in the post-burst data. In the 92 days that elapsed from the detection of the first burst to the \textit{Chandra} observation, the maximum distance the ejected particles could have traveled, assuming an 8.4 kpc distance and a speed-of-light velocity, is 1\farcs9. Therefore, the new extended feature of $\sim$10$\arcsec$ radius cannot be associated with the recent burst unless the distance to the pulsar is overestimated by at least an order of magnitude.
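(For reference, the 1\farcs9 bound quoted above is simply the light-travel distance $c\Delta t$$\approx$(3$\times$10$^{10}$~cm~s$^{-1}$)(92$\times$86400~s)$\approx$2.4$\times$10$^{17}$~cm divided by $D$$\approx$2.6$\times$10$^{22}$~cm for 8.4~kpc, i.e., $\approx$9.2$\times$10$^{-6}$~rad.)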
The PWN spectrum also showed a change in photon index from 1.2$\pm$0.8 to 2.2$\pm$0.5 following the burst, although not unusual since the X-ray spectra of most PWNe have $\Gamma$=1--2.5 (Kargaltsev \& Pavlov 2008). The post-burst X-ray luminosity of the PWN is 1.9$_{-0.8}^{+0.9}\times$10$^{33}$~ergs~s$^{-1}$ (0.5--7.0~keV), implying an X-ray efficiency $\eta_{X,PWN}$$\approx$0.001, consistent with the typical values of $\sim$10$^{-5}$ to 10$^{-1}$ observed in other rotation-powered pulsars with PWNe (Kargaltsev \& Pavlov 2008). The flux from the compact nebula has also increased by an order of magnitude in comparison with the pulsar's quiescent state, consistent with the pre-burst and post-burst surface brightness profiles (Figures~1 and 2). Although small-scale variabilities in PWN structures are seen in many nebulae, the changes as observed in PSR~J1119--6127 are difficult to interpret in terms of spin-down energy alone. \begin{table*}[th] \caption{Comparison of the X-ray properties of PWNe around high-$B$ pulsars and magnetars} \center \begin{tabular}{l l l l l l l l l l} \hline\hline Pulsar & P & $B$ & Distance & $\dot{E}$ & $\Gamma$ & $L_{X}$/$\dot{E}$ & Ref. \\ & (s) & (10$^{13}$~G) & (kpc) & (ergs~s$^{-1}$) & & & & \\ \hline PSR~J1119--6127$^a$ & 0.408 & 4.1 & 8.4 & 2.3$\times$10$^{36}$ & 2.0$\pm$0.2 & 0.001 & This work \\ PSR~J1819--1458 & 4.26 & 5.0 & 3.6 & 3.0$\times$10$^{32}$ & 3.7$\pm$0.3 & 0.15 & Rea et al. 2009; Camero-Arranz et al. 2013\\ Swift~J1834.9--0846 & 2.48 & 14 & 4.0 & 2.1$\times$10$^{34}$ & 2.2$\pm$0.2 & 0.1 & Younes et al. 2016 \\ PSR~J1846--0258 & 0.324 & 4.9 & 6.0 & 8.1$\times$10$^{36}$ & 1.93$\pm$0.03 & 0.02 & Kumar \& Safi-Harb 2008; Ng et al. 2008 \\ SGR~J1935+2154$^b$ & 3.24 & 22 & 11.7 & 1.7$\times$10$^{34}$ & 3.8$\pm$0.3 & 0.35 & Israel et al. 2016; Surnis et al. 2016\\ \hline \end{tabular} \tablecomments{$^a$ Post-burst X-ray efficiency is quoted here (pre-burst $\eta_X$$\sim$0.0001; Safi-Harb \& Kumar 2008).\\ $^b$ Diffuse emission could be either a dust scattering halo or a wind nebula. \\ } \end{table*} It has been proposed that magnetars can produce relativistic particle outflows during an outburst or from a steady flux of Alfv\'en waves powering a wind nebula (Thompson \& Duncan 1996; Harding 1999). Signatures of X-ray emission from a wind nebula have been reported in a few high-$B$ pulsars and magnetars such as PSR~J1846--0258, PSR~J1819--1458, Swift~J1834.9--0846, and SGR~J1935+2154 (Safi-Harb 2013). Table~2 summarizes the X-ray properties of PWNe observed around these highly magnetized neutron stars. PSR~J1846--0258, which features a very prominent X-ray nebula, showed small-scale variability in its PWN after its magnetar-like bursts in 2006 (Ng et al. 2008; Kumar \& Safi-Harb 2008). For the extended emission around the PSR~J1819--1458 (Rea et al. 2009; Camero-Arranz et al. 2013), Swift~J1834.9--0846 (Kargaltsev et al. 2012; Esposito et al. 2013), and SGR~J1935+2154 (Israel et al. 2016), the authors favor a PWN or a scattering halo origin. Bright X-ray sources with large column densities can lead to an extended dust scattering halo, with the halo brightness proportional to the source flux (Predehl \& Schmitt 1995). 
The diffuse emission region around Swift~J1834.9--0846 has been identified as an inner symmetric region ($\lesssim$50$\arcsec$) of scattering halo and an outer asymmetric region ($\sim$150$\arcsec$) of a possible magnetar wind nebula, since the emission remained fairly constant in flux and spectral shape across three years (Younes et al. 2012, 2016). A detailed investigation of a halo component associated with PSR~J1119--6127's magnetar-like burst would require modeling beyond the scope of this Letter. However, for a dust scattering halo, one would expect to find symmetric structures around the point source with a relatively steeper (softer) photon index than the source as the scattering cross-section varies with the inverse-square of the energy. The fact that the compact PWN has an asymmetric structure (Figure~1), with a hard spectral index (2.2$\pm$0.5) comparable to that of the pulsar (2.0$\pm$0.2) does support a PWN interpretation. We further note that the spectral index of PSR~J1119--6127 is also harder than the other magnetars associated with dust scattering haloes. Israel et al. (2016) suggests that the extended emission seen around SGR~1935+2154, with a $\Gamma$=3.8$\pm$0.3, could also be magnetically powered due to its unusually high X-ray efficiency. From Table~2, we see that the X-ray efficiency is much less than 1 for PSR~J1119--6127, but its photon index is comparable to the magnetically powered nebula around Swift~J1834.9--0846 ($\Gamma$=2.2$\pm$0.2). Despite some similarities in properties, none of other PWNe exhibited any notable change in morphology or spectrum as seen for PSR~J1119--6127. In summary, while the PWN in PSR~J1119--6127 can still be energetically powered by its spin-down power, the changes observed in the PWN spectrum point towards a new source of energy powered by the magnetar bursts. The change in PWN morphology could be somewhat related to the recent bursts, but the large scale changes must have happened over longer timescales and could perhaps be related to an earlier undetected burst. Unfortunately, the spacing between observations is insufficient to make firm conclusions about the timescale of these changes. We cannot rule out a dust scattering halo component for PSR J1119--6127, but require additional deeper observations at different epochs to separate any halo component and confirm the nature of the PWN emission and its morphology post-burst. \acknowledgments The authors are grateful to Patrick Slane and the \textit{Chandra} science team for making this DDT observation possible, as well as thank the \textit{Chandra} HelpDesk for assistance with data analysis. We thank the referee for a very careful reading that helped improve and clarify the manuscript. HB and MAM are supported by the NSF award number 1516512. SSH acknowledges support by the Natural Sciences and Engineering Research Council of Canada (through the Canada Research Chair and Discovery Grant programs) and the Canadian Space Agency. This research made use of NASA's ADS and HEASARC maintained at the Goddard Space Flight Center.
\section{Introduction} The `spiked model' is the simplest probabilistic model of a data matrix with a latent low-dimensional structure. Consider, to begin with, the case of a symmetric matrix. The data are written as the sum of a low-rank matrix (the signal) and Gaussian component (the noise): \begin{align} {\boldsymbol A} = \sum_{i=1}^k\lambda_i {\boldsymbol v}_i{\boldsymbol v}_i^{{\sf T}} + {\boldsymbol W}\, . \label{eq:SpikedDef} \end{align} Here $\lambda_1\ge \lambda_2\ge \dots\ge \lambda_k$ are non-random numbers, ${\boldsymbol v}_i\in\mathbb{R}^n$ are non-random vectors, and ${\boldsymbol W}\sim{\sf GOE}(n)$ is a matrix from the Gaussian Orthogonal Ensemble\footnote{Recall that this means that ${\boldsymbol W}={\boldsymbol W}^{{\sf T}}$, and the entries $(W_{ij})_{ i\le j\le n}$ are independent with $(W_{ii})_{i\le n}\sim_{iid}{\sf N}(0,2/n)$ and $(W_{ij})_{i<j\le n}\sim_{iid}{\sf N}(0,1/n)$.}. The asymmetric (rectangular) version of the same model is also of interest. In this case we observe ${\boldsymbol A}\in \mathbb{R}^{n\times d}$ given by \begin{align} {\boldsymbol A} = \sum_{i=1}^k\sqrt{\lambda_i} \, {\boldsymbol u}_i{\boldsymbol v}_i^{{\sf T}} + {\boldsymbol W}\, . \label{eq:SpikedDef-Rectangular} \end{align} where ${\boldsymbol W}$ is a noise matrix with entries $(W_{ij})_{i\le n,j\le d}\sim_{iid}{\sf N}(0,1/n)$. An important special case assumes ${\boldsymbol u}_i\sim{\sf N}(0,{\boldsymbol I}_n/n)$. In this case\footnote{For the formal analysis of this model, it will be convenient to consider the case of deterministic vectors ${\boldsymbol u}_i$, ${\boldsymbol v}_i$ satisfying suitable asymptotic conditions. However, these conditions hold almost surely, e.g. ${\boldsymbol u}_i\sim{\sf N}(0,{\boldsymbol I}_n/n)$.} the rows of ${\boldsymbol A}$ are i.i.d. samples from a high-dimensional Gaussian ${\boldsymbol a}_i\sim{\sf N}({\boldsymbol 0},{\boldsymbol \Sigma})$ where ${\boldsymbol \Sigma} = (\sum_{i=1}^k\lambda_i{\boldsymbol v}_i{\boldsymbol v}_i^{{\sf T}}+{\boldsymbol I}_d)/n$. Theoretical analysis of this spiked covariance model has led to a number of important statistical insights \cite{JohnstoneICM,johnstone2009consistency}. Within probability theory, the spiked model \eqref{eq:SpikedDef} is also known as `deformed GOE' or `deformed Wigner random matrix', and the behavior of its eigenvalues and eigenvectors has been studied in exquisite detail \cite{baik2005phase,baik2006eigenvalues,feral2007largest,capitaine2009largest,benaych2011eigenvalues,benaych2012singular,knowles2013isotropic}. The most basic phenomenon unveiled by this line of work is the so-called BBAP phase transition, first discovered in the physics literature \cite{hoyle2004principal}, and named after the authors of \cite{baik2005phase}. Let $k_*$ be the number of rank-one terms with $|\lambda_i|>1$. Then the spectrum of ${\boldsymbol A}$ is formed by a bulk of eigenvalues in the interval $[-2,2]$ (whose distribution follows Wigner's semicircle), plus $k_*$ outliers that are in one-to-one correspondence with the large rank-one terms in (\ref{eq:SpikedDef}). The eigenvectors associated to the outliers exhibit a significant correlation with the corresponding vectors ${\boldsymbol v}_i$. To simplify the discussion, in the rest of this introduction we will assume that $\lambda_i\ge 0$ for all $i$. The spiked model \eqref{eq:SpikedDef}, \eqref{eq:SpikedDef-Rectangular} and their generalizations have also been studied from a statistical perspective \cite{johnstone2001distribution,paul2007asymptotics}. 
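For the reader's convenience, we recall the quantitative form of this transition in the rank-one case: for ${\boldsymbol A}=\lambda\,{\boldsymbol v}_1{\boldsymbol v}_1^{{\sf T}}+{\boldsymbol W}$ with $\|{\boldsymbol v}_1\|_2=1$, ${\boldsymbol W}\sim{\sf GOE}(n)$ and $\lambda>1$, one has, almost surely as $n\to\infty$ \cite{benaych2011eigenvalues},
\begin{align*}
\lambda_{\max}({\boldsymbol A}) \to \lambda+\frac{1}{\lambda}\, ,\qquad \langle{\boldsymbol \varphi}_1,{\boldsymbol v}_1\rangle^2 \to 1-\frac{1}{\lambda^{2}}\, ,
\end{align*}
where ${\boldsymbol \varphi}_1$ denotes the leading eigenvector of ${\boldsymbol A}$; for $\lambda\le 1$ the largest eigenvalue converges to the bulk edge $2$ and the overlap vanishes asymptotically.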
A fundamental question in this context is to estimate the vectors ${\boldsymbol v}_i$ from a single realization of the matrix ${\boldsymbol A}$. It is fair to say that this question is relatively well understood when the vectors ${\boldsymbol v}_i$ are unstructured, e.g. they are a uniformly random orthonormal set (distributed according to the Haar measure). In this case, and in the high-dimensional limit $n,d \to\infty$, the best estimator of vector ${\boldsymbol v}_i$ is the $i$-th eigenvector of ${\boldsymbol A}$. Random matrix theory provides detailed information about its asymptotic properties. This paper is concerned with the case in which the vectors ${\boldsymbol v}_i$ are structured, e.g. they are sparse, or have bounded entries. This structure is not captured by spectral methods, and other approaches lead to significantly better estimators. This scenario is relevant for a broad range of applications, including sparse principal component analysis \cite{johnstone2009consistency,zou2006sparse,deshpande2014information}, non-negative principal component analysis \cite{lee1999learning,montanari2016non}, community detection under the stochastic block model \cite{deshpande2016asymptotic,abbe2017community,moore2017computer}, and so on. Understanding what are optimal ways of exploiting the structure of signals is ---to a large extent---an open problem. Significant progress has been achieved recently under the assumption that the vectors $(v_{1,j},\dots, v_{k,j})$ $\in \mathbb{R}^k$ (i.e., the $k$-dimensional vectors obtained by taking the $j$-th component of the vectors ${\boldsymbol v}_1,\dots,{\boldsymbol v}_k$) are approximately i.i.d. (across $j \in \{1,\dots, n\}$) with some common distribution $\mu_{{\boldsymbol U}}$ on $\mathbb{R}^k$. This is, for instance, the case if each ${\boldsymbol v}_\ell$ has i.i.d. components, and distinct vectors are independent (but mutual independence between ${\boldsymbol v}_1,\dots,{\boldsymbol v}_k$ is not required). Following heuristic derivations using statistical physics methods (see, e.g. \cite{lesieur2017constrained}), closed form expressions have been rigorously established for the Bayes-optimal estimation error in the limit $n\to\infty$ (with $\lambda_i$'s fixed). We refer to \cite{deshpande2014information,deshpande2016asymptotic} for special cases and to \cite{krzakala2016mutual,barbier2016mutual,lelarge2016fundamental,miolane2017fundamental} for an increasingly general theory. Unfortunately, there is no general algorithm that computes the Bayes-optimal estimator and is guaranteed to run in polynomial time. Markov Chain Monte Carlo can have exponentially large mixing time and is difficult to analyze \cite{gamerman2006markov}. Variational methods are non-convex and do not come with consistency guarantees \cite{blei2017variational}. Classical convex relaxations do not generally achieve the Bayes optimal error, since they incorporate limited prior information \cite{javanmard2016phase}. In the positive direction, approximate message passing (AMP) algorithms have been successfully applied to a number of low-rank matrix estimation problems \cite{rangan2018iterative,parker2014bilinear,montanari2016non,vila2015hyperspectral,kabashima2016phase}. In particular, AMP was proved to achieve the Bayes optimal estimation error in special cases of the model (\ref{eq:SpikedDef}), in the high-dimensional limit $n\to\infty$ \cite{deshpande2013finding,deshpande2014information}. 
In fact, a bold conjecture from statistical physics suggests that the estimation error achieved by AMP is the same that can be achieved by the optimal polynomial-time algorithm. An important feature of AMP is that it admits an exact characterization in the limit $n\to\infty$ that goes under the name of \emph{state evolution} \cite{DMM09,BM-MPCS-2011,bolthausen2014iterative}. There is however one notable case in which the state evolution analysis of AMP falls short of its goal: when AMP is initialized near an unstable fixed point. This is typically the case for the problem of estimating the vectors ${\boldsymbol v}_i$'s in the spiked model (\ref{eq:SpikedDef}). (We refer to the next section for a discussion of this point.) In order to overcome this problem, we propose a two-step algorithm: \begin{enumerate} \item We compute the principal eigenvectors ${\boldsymbol \varphi}_1,\dots,{\boldsymbol \varphi}_{k_*}$ of ${\boldsymbol A}$, which correspond to the outlier eigenvalues. \item We run AMP with an initialization that is correlated with these eigenvectors. \end{enumerate} Our main result (Theorem \ref{thm:Main}) is a general asymptotically exact analysis of this type of procedure. The analysis applies to a broad class of AMP algorithms, with initializations that are obtained by applying separable functions to the eigenvectors ${\boldsymbol \varphi}_1,\dots,{\boldsymbol \varphi}_{k_*}$ (under some technical conditions). Let us emphasize that \emph{our core technical result (state-evolution analysis) is completely general and applies beyond low-rank matrix estimation.} The rest of the paper is organized as follows. \begin{description} \item[Section \ref{sec:Example}] applies our main results to the problem of estimating a rank-one matrix in Gaussian noise (the case $k=1$ of the model (\ref{eq:SpikedDef})). We compute the asymptotic empirical distribution of our estimator. In particular, this characterizes the asymptotics of all sufficiently regular separable losses. We then illustrate how this state evolution analysis can be used to design specific AMP algorithms, depending on what prior knowledge we have about the entries of ${\boldsymbol v}_1$. In a first case study, we only know that ${\boldsymbol v}_1$ is sparse, and analyze an algorithm based on iterative soft thresholding. In the second, we assume that the empirical distribution of the entries of ${\boldsymbol v}_1$ is known, and develop a Bayes-AMP algorithm. The asymptotic estimation error achieved by Bayes-AMP coincides (in certain regimes) with the Bayes-optimal error (see Corollary \ref{coro:AMP-OPT}). When this is not the case, no polynomial-time algorithm is known that outperforms our method. \item[Section \ref{sec:Inference}] shows how AMP estimates can be used to construct confidence intervals and $p$-values. In particular, we prove that the resulting $p$-values are asymptotically valid on the nulls, which in turn can be used to establish asymptotic false discovery rate control using a Benjamini-Hochberg procedure. \item[Section \ref{sec:ExampleRectangular}] generalizes the analysis of Section \ref{sec:Example} to the case of rectangular matrices. This allows, in particular, to derive optimal AMP algorithms for the spiked covariance model. The theory for rectangular matrices is completely analogous to the one for symmetric ones, and indeed can be established via a reduction to symmetric matrices. 
\item[Section \ref{sec:Degenerate}] discusses a new phenomenon arising in case of degeneracies between the values $\lambda_1,\dots,\lambda_k$. For the sake of concreteness, we consider the case ${\boldsymbol A} = \lambda {\boldsymbol A}_0+{\boldsymbol W}$, where ${\boldsymbol A}_0$ is a rank-$k$ matrix obtained as follows. We partition $\{1,\dots,n\}$ in $q=k+1$ groups and set $A_{0,ij} = k/n$ if $i,j$ belong to the same group and $A_{0,ij}=-1/n$ otherwise. Due to its close connections with the stochastic block model of random graphs, we refer to this as to the `Gaussian block model'. It turns out that in such degenerate cases, the evolution of AMP estimates does not concentrate around a deterministic trajectory. Nevertheless, state evolution captures the asymptotic behavior of the algorithm in terms of a random initialization (whose distribution is entirely characterized) plus a deterministic evolution. \item[Section \ref{sec:Symmetric}] presents our general result in the case of a symmetric matrix ${\boldsymbol A}$ distributed according to the model (\ref{eq:SpikedDef}). Our theorems provide an asymptotic characterization of a general AMP algorithm in terms of a suitable state evolution recursion. A completely analogous result holds for rectangular matrices. The corresponding statement is presented in the supplementary material. \item[Section \ref{sec:ProofOutline}] provides an outline of the proofs of our main results. Earlier state evolution results do not allow to rigorously analyze AMP unless its initialization is independent from the data matrix ${\boldsymbol A}$. In particular, they do not allow to analyze the spectral initialization used in our algorithm. In order to overcome this challenge, we prove a technical lemma (Lemma \ref{lemma:Distr}) that specifies an approximate representation for the conditional distribution of ${\boldsymbol A}$ given its leading outlier eigenvectors and the corresponding eigenvalues. Namely, ${\boldsymbol A}$ can be approximated by a sum of rank-one matrices, corresponding to the outlier eigenvectors, plus a projection of a new random matrix ${\boldsymbol A}^{\mbox{\tiny\rm new}}$ independent of ${\boldsymbol A}$. We leverage this explicit independence to establish state evolution for our algorithm. \end{description} Complete proofs of the main results are deferred to the Appendices \ref{sec:ProofMain} and \ref{sec:ProofMainGeneral}. For the reader's convenience, we present separate proofs for the case of rank $k=1$, and then for the general case, which is technically more involved. The proofs concerning the examples in Section \ref{sec:Example} and \ref{sec:ExampleRectangular} are also presented in the appendices. As mentioned above, while several of our examples concern low-rank matrix estimation, the main result in Section \ref{sec:Symmetric} is significantly more general, and is potentially relevant to a broad range of applications in which AMP is run in conjunction with a spectral initialization. \section{Estimation of symmetric rank-one matrices} \label{sec:Example} In order to illustrate our main result (to be presented in Section \ref{sec:Symmetric}), we apply it to the problem of estimating a rank-one symmetric matrix in Gaussian noise. We will begin with a brief heuristic discussion of AMP and its application to rank-one matrix estimation. The reader is welcome to consult the substantial literature on AMP for further background \cite{BM-MPCS-2011,javanmard2013state,bayati2015universality,berthier2017state}. 
\subsection{Main ideas and heuristic justification}

Let ${\boldsymbol x}_{0}={\boldsymbol x}_0(n)\in \mathbb{R}^n$ be a sequence of signals indexed by the dimension $n$, satisfying the following conditions:
\begin{itemize}
\item[$(i)$] Their rescaled $\ell_2$-norms converge: $\lim_{n\to\infty}\|{\boldsymbol x}_0(n)\|_2/\sqrt{n}= 1$;
\item[$(ii)$] The empirical distribution of the entries of ${\boldsymbol x}_0(n)$ converges weakly to a probability distribution $\nu_{X_0}$ on $\mathbb{R}$, with unit second moment.
\end{itemize}
We then consider the following spiked model, for ${\boldsymbol W}\sim {\sf GOE}(n)$:
\begin{align}
{\boldsymbol A} = \frac{\lambda}{n} \, {\boldsymbol x}_0{\boldsymbol x}_0^{{\sf T}} + {\boldsymbol W}\, . \label{eq:SpikedDefSpecial}
\end{align}
Given one realization of the matrix ${\boldsymbol A}$, we would like to estimate the signal ${\boldsymbol x}_0$. Note that this matrix is of the form (\ref{eq:SpikedDef}) with $k=1$, $\lambda_1 = \lambda\|{\boldsymbol x}_0(n)\|_2^2/n\to \lambda$ and ${\boldsymbol v}_1 = {\boldsymbol x}_0(n)/\|{\boldsymbol x}_0(n)\|_2$.

In order to discuss informally the main ideas in AMP, assume for a moment that we are given an additional noisy observation of ${\boldsymbol x}_0$, call it ${\boldsymbol y}\in\mathbb{R}^n$, which is independent of ${\boldsymbol A}$ (i.e., independent of ${\boldsymbol W}$, since ${\boldsymbol x}_0$ is deterministic). More specifically, assume ${\boldsymbol y} \sim {\sf N}(\mu_0{\boldsymbol x}_0,\sigma_0^2{\boldsymbol I}_n)$. How can we denoise this observation, and incorporate the quadratic observation ${\boldsymbol A}$ in \eqref{eq:SpikedDefSpecial}?

A first idea would be to denoise ${\boldsymbol y}$, using an entry-wise scalar denoiser $f_0:\mathbb{R}\to\mathbb{R}$. We denote the vector obtained by applying $f_0$ component-wise by $f_0({\boldsymbol y})$. Of course, the choice of $f_0$ depends on our knowledge of ${\boldsymbol x}_0$. For instance if we know that ${\boldsymbol x}_0$ is sparse, then we could apply component-wise soft thresholding:
\begin{align}
f_0(y_i) = \eta\big(y_i; \tau\big)\, ,
\end{align}
where $\eta(x;\tau) = {\rm sign}(x)(|x|-\tau)_+$, and $\tau$ is a suitable threshold level. Classical theory guarantees the accuracy of such a denoiser \cite{DJ94a,DJ98}. However, $f_0({\boldsymbol y})$ does not exploit the observation ${\boldsymbol A}$ in any way. We could try to improve this estimate by multiplying $f_0({\boldsymbol y})$ by ${\boldsymbol A}$:
\begin{align}
{\boldsymbol x}^1 = {\boldsymbol A} f_0({\boldsymbol y}) = \frac{\lambda}{n}\<{\boldsymbol x}_0,f_0({\boldsymbol y})\>\, {\boldsymbol x}_0 + {\boldsymbol W} f_0({\boldsymbol y})\, . \label{eq:FirstIteration}
\end{align}
It is not hard to see that the second term is a centered Gaussian vector whose entries have variance close to $\|f_0({\boldsymbol y})\|^2/n \to \sigma_{1}^2\equiv \E \{ f_0(\mu_0 X_0 + \sigma_0 G )^2 \}$, while the first term is essentially deterministic by the law of large numbers. We thus obtain that ${\boldsymbol x}^1$ is approximately ${\sf N}(\mu_1{\boldsymbol x}_0, \sigma^2_1{\boldsymbol I}_n)$, where
\begin{align}
\mu_{1} = \lambda \E\{ X_0 f_0(\mu_0 X_0+\sigma_0 G) \}\, , \;\;\;\;\; \sigma_{1}^2& = \E \{ f_0(\mu_0 X_0 + \sigma_0 G )^2 \}\, .\label{eq:First-StateEvolution}
\end{align}
Here expectation is taken with respect to $X_0\sim\nu_{X_0}$ independent of $G\sim{\sf N}(0,1)$. This analysis also suggests how to design the function $f_0$: ideally, it should maximize the signal-to-noise ratio (SNR) $\mu^2_1/\sigma^2_1$.
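The one-step heuristic above is easy to check numerically. The following minimal sketch (not part of our analysis; the dimension, the sparse Gaussian choice of $\nu_{X_0}$, the parameters $\mu_0,\sigma_0,\tau$, and the use of soft thresholding for $f_0$ are all illustrative assumptions) generates the model \eqref{eq:SpikedDefSpecial}, computes ${\boldsymbol x}^1={\boldsymbol A} f_0({\boldsymbol y})$ as in \eqref{eq:FirstIteration}, and compares the empirical values of $\mu_1,\sigma_1$ with the predictions \eqref{eq:First-StateEvolution}, evaluated by Monte Carlo.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam, eps = 4000, 2.0, 0.1      # illustrative choices
mu0, sig0, tau = 1.0, 1.0, 1.0    # side observation y ~ N(mu0*x0, sig0^2 I); threshold tau

def soft(x, tau):                 # soft-thresholding denoiser eta(x; tau)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# signal with (asymptotically) unit empirical second moment: sparse Gaussian entries
x0 = rng.normal(0.0, 1.0 / np.sqrt(eps), n) * (rng.random(n) < eps)

# A = (lam/n) x0 x0^T + W, with W ~ GOE(n)
G = rng.normal(size=(n, n))
A = (lam / n) * np.outer(x0, x0) + (G + G.T) / np.sqrt(2 * n)

# independent side observation, then "denoise and multiply", Eq. (FirstIteration)
y = mu0 * x0 + sig0 * rng.normal(size=n)
x1 = A @ soft(y, tau)

# empirical decomposition x1 ~ mu1 * x0 + sig1 * noise
mu1_emp = x1 @ x0 / (x0 @ x0)
sig1_emp = np.linalg.norm(x1 - mu1_emp * x0) / np.sqrt(n)

# one-step state evolution, Eq. (First-StateEvolution), by Monte Carlo
X0 = rng.normal(0.0, 1.0 / np.sqrt(eps), 10**6) * (rng.random(10**6) < eps)
f0 = soft(mu0 * X0 + sig0 * rng.normal(size=10**6), tau)
mu1_se, sig1_se = lam * np.mean(X0 * f0), np.sqrt(np.mean(f0 ** 2))

print(f"mu1:    empirical {mu1_emp:.3f}   predicted {mu1_se:.3f}")
print(f"sigma1: empirical {sig1_emp:.3f}   predicted {sig1_se:.3f}")
\end{verbatim}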
Of course, the precise choice of $f_0$ depends on our prior knowledge of ${\boldsymbol x}_0$. For instance, if we know the law $\nu_{X_0}$, we can maximize this ratio by taking $f_0(y) = \E\{X_0|\mu_0 X_0 + \sigma_0 G =y\}$.

At this point it would be tempting to iterate the above procedure, and consider the non-linear power iteration
\begin{align}
{\boldsymbol x}_{\mbox{\tiny\rm PI}}^{t+1} = {\boldsymbol A}\, f_t({\boldsymbol x}^t_{\mbox{\tiny\rm PI}})\, ,\label{eq:PowerIteration}
\end{align}
for a certain sequence of functions $f_t:\mathbb{R}\to\mathbb{R}$. (As above, $f_t({\boldsymbol x}^t_{\mbox{\tiny\rm PI}})$ is the vector obtained by applying $f_t$ component-wise to ${\boldsymbol x}_{\mbox{\tiny\rm PI}}^t$, and we will use superscripts to indicate the iteration number.) While this approach has been studied in the literature \cite{journee2010generalized,yuan2013truncated,chen2018projected}, sharp results could only be established in a high SNR regime where $\lambda=\lambda(n)\to\infty$ at a sufficiently fast rate. Indeed, analyzing the recursion \eqref{eq:PowerIteration} is difficult because ${\boldsymbol A}$ is correlated with ${\boldsymbol x}_{\mbox{\tiny\rm PI}}^t$ (unlike in Eq.~\eqref{eq:FirstIteration}), and hence the simple calculation that yields Eq.~\eqref{eq:First-StateEvolution} is no longer valid. This problem is compounded by the fact that we do not have an additional observation ${\boldsymbol y}$ independent of ${\boldsymbol A}$, and instead we plan to use a spectral initialization ${\boldsymbol x}^0\propto {\boldsymbol \varphi}_1$ that depends on the top eigenvector ${\boldsymbol \varphi}_1$ of ${\boldsymbol A}$. As a consequence, even the first step of the analysis (given in Eq.~\eqref{eq:First-StateEvolution}) is no longer obvious.

Let us emphasize that these difficulties are not a limitation of the proof technique. For $t>1$, the iterates \eqref{eq:PowerIteration} are no longer Gaussian or centered around $\mu_t{\boldsymbol x}_0$, for some scaling factor $\mu_t$. This can be easily verified by considering, for instance, the function $f_t(x) = x^2$ (we refer to \cite{bayati2015universality} which carries out the calculation for such an example).

AMP solves the correlation problem in the nonlinear power iteration by modifying Eq.~\eqref{eq:PowerIteration}: namely, we subtract from ${\boldsymbol A} f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})$ the part that is correlated with the past iterates. Let ${\mathfrak S}_t\equiv\sigma(\{{\boldsymbol x}_{\mbox{\tiny\rm L}}^0,{\boldsymbol x}_{\mbox{\tiny\rm L}}^1,\dots,{\boldsymbol x}_{\mbox{\tiny\rm L}}^t\})$ be the $\sigma$-algebra generated by iterates up to time $t$. The correction that compensates for correlations is most conveniently explained by using the following Long AMP recursion, introduced in \cite{berthier2017state}:
\begin{align}
{\boldsymbol x}_{\mbox{\tiny\rm L}}^{t+1} &= {\boldsymbol A}\, f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})-\E\{{\boldsymbol W} f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})|{\mathfrak S}_t\} + \overline{\alpha}_{t}{\boldsymbol x}_0+\sum_{s=0}^t\alpha_{t,s} {\boldsymbol x}^s_{\mbox{\tiny\rm L}}\label{eq:LongAMP}\\
&= \sum_{s=0}^t\alpha_{t,s} {\boldsymbol x}^s_{\mbox{\tiny\rm L}} + \left( \overline{\alpha}_{t} + \tfrac{\lambda}{n}\<{\boldsymbol x}_0,f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})\> \right){\boldsymbol x}_0 + {\boldsymbol W}\, f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})-\E\{{\boldsymbol W} f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})|{\mathfrak S}_t\}\, .
\end{align}
where $(\overline{\alpha}_{t})_{0\le t}$, $(\alpha_{t,s})_{0\le s\le t}$ are suitable sequences of deterministic numbers. In words, the new vector ${\boldsymbol x}_{\mbox{\tiny\rm L}}^{t+1}$ is a linear combination of iterates up to time $t$, plus a term ${\boldsymbol x}_0 ( \overline{\alpha}_{t} + \lambda\<{\boldsymbol x}_0,f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})\>/n)$ that is essentially deterministic, plus a random term $({\boldsymbol W}\, f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})-\E\{{\boldsymbol W} f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})|{\mathfrak S}_t\})$ that is uncorrelated with the past. If the past iterates $({\boldsymbol x}_{\mbox{\tiny\rm L}}^s)_{0\le s\le t}$ are jointly Gaussian, then the first two components (linear and deterministic) are also jointly Gaussian with $({\boldsymbol x}_{\mbox{\tiny\rm L}}^s)_{0\le s\le t}$. Since the third (random) term is uncorrelated with the past iterates, it can be shown by induction that the sequence $({\boldsymbol x}_{\mbox{\tiny\rm L}}^t)_{0\le t\le T}$ is approximately Gaussian as $n\to\infty$, for any fixed $T$ (in the sense of finite dimensional marginals), and centered around ${\boldsymbol x}_0$, see \cite{berthier2017state}.

At first sight, this might appear as a mathematical trick, with no practical implications. Indeed, Eq.~\eqref{eq:LongAMP} does not provide an algorithm. We are explicitly using the true signal ${\boldsymbol x}_0$ which we are supposed to estimate, and the expectation $\E\{{\boldsymbol W} f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})|{\mathfrak S}_t\}$ is, at best, hard to compute. However, it turns out that (for a certain choice of the numbers $(\overline{\alpha}_{t})_{0\le t}$, $(\alpha_{t,s})_{0\le s\le t}$), the term subtracted from ${\boldsymbol A}\, f_t({\boldsymbol x}^t_{\mbox{\tiny\rm L}})$ in Eq.~\eqref{eq:LongAMP} can be approximated by ${\sf b}_t f_{t-1}({\boldsymbol x}_{\mbox{\tiny\rm L}}^{t-1})$ with a coefficient ${\sf b}_t$ that can be computed easily. We will not try to justify this approximation here (see, for instance, \cite{berthier2017state}). We will instead use the resulting algorithm (given below in Eq. \eqref{eq:AMPspecial0}) as the starting point of our analysis.

\subsection{General analysis}
\label{sec:GeneralRank1}

Motivated by the discussion in the previous section, we consider the following general algorithm for rank-one matrix estimation in the model \eqref{eq:SpikedDefSpecial}. In order to estimate ${\boldsymbol x}_0$, we compute the principal eigenvector of ${\boldsymbol A}$, to be denoted by ${\boldsymbol \varphi}_1$, and apply the following iteration, with initialization ${\boldsymbol x}^0= \sqrt{n}{\boldsymbol \varphi}_1$:
\begin{align}
{\boldsymbol x}^{t+1} &= {\boldsymbol A}\, f_t({\boldsymbol x}^t) -{\sf b}_t f_{t-1}({\boldsymbol x}^{t-1})\, ,\;\;\;\;\;{\sf b}_t = \frac{1}{n}\sum_{i=1}^n f_{t}'(x^t_i) \, .\label{eq:AMPspecial0}
\end{align}
Here $f_t({\boldsymbol x}) = (f_t(x_1),\dots,f_t(x_n))^{{\sf T}}$ is a separable function for each $t$. As mentioned above, we can think of this iteration as an approximation of Eq.~\eqref{eq:LongAMP} where all the terms except the first one have been estimated by $-{\sf b}_t f_{t-1}({\boldsymbol x}^{t-1})$. The fact that this is an accurate estimate for large $n$ is far from obvious, but can be established by induction over $t$ \cite{berthier2017state}.
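To make the preceding discussion concrete, the following minimal sketch implements the iteration \eqref{eq:AMPspecial0} with the spectral initialization ${\boldsymbol x}^0=\sqrt{n}\,{\boldsymbol \varphi}_1$. It is purely illustrative: the nonlinearity $f_t(x)=\tanh(\beta x)$ with a fixed $\beta$, the convention $f_{-1}\equiv{\boldsymbol 0}$ at the first step, and the Rademacher signal are assumptions of this sketch, not prescriptions of our analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, lam, beta, T = 3000, 1.8, 1.0, 10              # illustrative choices

f = lambda x: np.tanh(beta * x)                   # a Lipschitz nonlinearity f_t
fprime = lambda x: beta * (1.0 - np.tanh(beta * x) ** 2)

x0 = rng.choice([-1.0, 1.0], size=n)              # unit second moment signal
G = rng.normal(size=(n, n))
A = (lam / n) * np.outer(x0, x0) + (G + G.T) / np.sqrt(2 * n)

# spectral initialization: principal eigenvector of A, scaled to norm sqrt(n)
evals, evecs = np.linalg.eigh(A)
x = np.sqrt(n) * evecs[:, -1]

f_prev = np.zeros(n)                              # convention f_{-1}(x^{-1}) = 0 (assumption)
for t in range(T):
    b = np.mean(fprime(x))                        # Onsager coefficient b_t
    x, f_prev = A @ f(x) - b * f_prev, f(x)       # x^{t+1} = A f_t(x^t) - b_t f_{t-1}(x^{t-1})
    corr = abs(f_prev @ x0) / (np.linalg.norm(f_prev) * np.linalg.norm(x0))
    print(f"t = {t + 1:2d}   |<f_t(x^t), x0>| / (||f_t(x^t)|| ||x0||) = {corr:.3f}")
\end{verbatim}
Theorem \ref{thm:Rank1} below implies that, for large $n$, the printed correlations concentrate around $\mu_{t+1}/(\lambda\sigma_{t+1})$ computed from the state evolution recursion with this choice of $f_t$.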
Note that ${\boldsymbol x}_0$ can be estimated from the data ${\boldsymbol A}$ only up to an overall sign (since ${\boldsymbol x}_0$ and $-{\boldsymbol x}_0$ give rise to the same matrix ${\boldsymbol A}$ as per Eq.~(\ref{eq:SpikedDefSpecial})). In order to resolve this ambiguity, we will assume, without loss of generality, that $\<{\boldsymbol x}_0,{\boldsymbol \varphi}_1\>\ge 0$.

\begin{theorem}\label{thm:Rank1}
Consider the $k=1$ spiked matrix model of \myeqref{eq:SpikedDefSpecial}, with ${\boldsymbol x}_0(n)\in\mathbb{R}^n$ a sequence of vectors satisfying assumptions $(i)$, $(ii)$ above, and $\lambda>1$. Consider the AMP iteration in Eq.~\eqref{eq:AMPspecial0} with initialization ${\boldsymbol x}^0 = \sqrt{n} \, {\boldsymbol \varphi}_1$ (where, without loss of generality, $\<{\boldsymbol x}_0,{\boldsymbol \varphi}_1\> \ge 0$). Assume $f_t:\mathbb{R}\to\mathbb{R}$ to be Lipschitz continuous for each $t\in {\mathbb N}$. Let $(\mu_t, \sigma_t)_{t \geq 0}$ be defined via the recursion
\begin{align}
\mu_{t+1} & = \lambda \E[ X_0 f_t(\mu_t X_0+\sigma_t G)]\, , \label{eq:MutRecX0}\\
\sigma_{t+1}^2& = \E [f_t(\mu_t X_0 + \sigma_t G )^2]\, , \label{eq:SigtRecX0}
\end{align}
where $X_0\sim \nu_{X_0}$ and $G \sim {\sf N} (0,1)$ are independent, and the initial condition is $\mu_0= \sqrt{1-\lambda^{-2}}$, $\sigma_0 = 1/\lambda$. Then, for any function $\psi:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ with $|\psi({\boldsymbol x})-\psi({\boldsymbol y})|\le C(1+\|{\boldsymbol x}\|_2+\|{\boldsymbol y}\|_2)\|{\boldsymbol x} - {\boldsymbol y}\|_2$ for a universal constant $C>0$, the following holds almost surely for $t \geq 0$:
\begin{align}
\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n \psi (x_{0,i},x^t_i) = \E \left\{ \psi( X_0, \mu_t X_0 +\sigma_t G) \right\} \ . \label{eq:rank1_SE}
\end{align}
\end{theorem}

The proof of this theorem is presented in Appendix \ref{sec:ProofMain}. One peculiarity of our approach is that we do not commit to a specific choice of the nonlinearities $f_t$, and instead develop a sharp asymptotic characterization for any---sufficiently regular---nonlinearity. A poor choice of the functions $f_t$ might result in large estimation error, and yet Theorem \ref{thm:Rank1} will continue to hold. On the other hand, the state evolution characterization can be used to design optimal nonlinearities in a principled way.

Given Eqs.~\eqref{eq:MutRecX0} and \eqref{eq:SigtRecX0}, the general principle is quite transparent. The optimal nonlinearity is defined in terms of a scalar denoising problem. For $X_0\sim \nu_{X_0}$ and $G\sim{\sf N}(0,1)$ independent, consider the problem of estimating $X_0$ from the noisy observation $Y = \mu_t\, X_0+\sigma_tG$. At step $t$, $f_t$ should be constructed so as to maximize the ratio $\E[ X_0 f_t(\mu_t X_0+\sigma_t G)]/\E [f_t(\mu_t X_0 + \sigma_t G )^2]^{1/2}$. Two specific instantiations of this principle are given in Sections \ref{sec:SparseSpike} and \ref{sec:BayesAMP}.

\begin{remark}
The state evolution recursion of Eqs.~\eqref{eq:MutRecX0}, \eqref{eq:SigtRecX0} in Theorem \ref{thm:Rank1} was already derived by Fletcher and Rangan in \cite{rangan2018iterative}. However, as explained in \cite[Section 5.3]{rangan2018iterative}, their results only apply to cases in which AMP can be initialized in a way that $(i)$~has positive correlation with the spike ${\boldsymbol x}_0$ (and this correlation does not vanish as $n\to\infty$), and $(ii)$~is independent of ${\boldsymbol A}$.
Theorem \ref{thm:Rank1} analyzes an algorithm which does not require such an initialization, and hence applies more broadly.
\end{remark}

\subsection{The case of a sparse spike}
\label{sec:SparseSpike}

In some applications we might know that the spike ${\boldsymbol x}_0$ is sparse. We consider a simple model in which ${\boldsymbol x}_0$ is known to have at most $n{\varepsilon}$ nonzero entries for some ${\varepsilon}\in (0,1)$. Because of its importance, the use of nonlinear power iteration methods for this problem has been studied by several authors in the past \cite{journee2010generalized,yuan2013truncated,ma2013sparse}. However, none of these works obtains precise asymptotics in the moderate SNR regime (i.e., for $\lambda$, ${\varepsilon}$ of order one). In contrast, sharp results can be obtained by applying Theorem \ref{thm:Rank1}. Here we will limit ourselves to taking the first steps, deferring a more complete analysis to future work. We focus on the case of symmetric matrices for simplicity, cf. Eq.~\eqref{eq:SpikedDefSpecial}, but a generalization to rectangular matrices is straightforward along the lines of Section \ref{sec:ExampleRectangular}.

The sparsity assumption implies that the random variable $X_0$ entering the state evolution recursion in \myeqref{eq:MutRecX0} should satisfy $\nu_{X_0}(\{0\})\ge 1-{\varepsilon}$. Classical theory for the sparse sequence model \cite{DJ94a,DJ98} suggests taking $f_t$ to be the soft thresholding denoiser $f_t(x) = \eta(x;\tau_t)$, for $(\tau_t)_{t\ge 0}$ a well-chosen sequence of thresholds. The resulting algorithm reads
\begin{align}
{\boldsymbol x}^{t+1} &= {\boldsymbol A}\, \hat{\boldsymbol x}^t-{\sf b}_t \hat{\boldsymbol x}^{t-1}\, ,\;\;\;\; \hat{\boldsymbol x}^t = \eta({\boldsymbol x}^t;\tau_t)\, ,\label{eq:SparseAlg}\\
&\;\;\;\;\;{\sf b}_t = \frac{1}{n}\|\hat{\boldsymbol x}^t\|_0\, , \nonumber
\end{align}
where $\|{\boldsymbol v}\|_0$ is the number of non-zero entries of vector ${\boldsymbol v}$. The initialization is, as before, ${\boldsymbol x}^0 = \sqrt{n}{\boldsymbol \varphi}_1$. The algorithm alternates soft thresholding, to produce sparse estimates, and power iteration, with the crucial correction term $-{\sf b}_t \hat{\boldsymbol x}^{t-1}$.

Theorem \ref{thm:Rank1} can be directly applied to characterize the performance of this algorithm for any fixed distribution $\nu_{X_0}$ of the entries of ${\boldsymbol x}_0$. For instance, we obtain the following exact prediction for the asymptotic correlation between estimates $\hat{\boldsymbol x}^t$ and the signal ${\boldsymbol x}_0$:
\begin{align}
\lim_{n\to\infty}\frac{|\<\hat{\boldsymbol x}^t({\boldsymbol A}),{\boldsymbol x}_0\>|}{\|\hat{\boldsymbol x}^t({\boldsymbol A})\|_2\|{\boldsymbol x}_0\|_2} = \frac{\mu_{t+1}}{\lambda\sigma_{t+1}}\, .
\end{align}
For a given distribution $\nu_{X_0}$, it is easy to compute $\mu_t,\sigma_t$ using Eqs.~\eqref{eq:MutRecX0}, \eqref{eq:SigtRecX0} with $f_t({\boldsymbol x}^t) = \eta({\boldsymbol x}^t; \tau_t)$.

We can also use Theorem \ref{thm:Rank1} to characterize the minimax behavior over $n{\varepsilon}$-sparse vectors. We sketch the argument next: similar arguments were developed in \cite{DMM09,donoho2013accurate} in the context of compressed sensing. The basic idea is to lower bound the signal-to-noise ratio (SNR) $\mu^2_{t+1}/\sigma^2_{t+1}$ iteratively as a function of the SNR at the previous iteration, over the set of probability distributions ${\mathcal F}_{{\varepsilon}} = \{\nu_{X_0}: \; \nu_{X_0}(\{0\})\ge 1-{\varepsilon}, \, \int x^2\nu_{X_0}({\rm d} x) = 1\}$.
As shown in Appendix \ref{sec:Reduction}, it is sufficient to consider the extremal points of the set ${\mathcal F}_{{\varepsilon}}$, which are given by the three-points priors
\begin{equation}
\pi_{p,a_1,a_2} \equiv (1-{\varepsilon})\delta_0+{\varepsilon} p\delta_{a_1}+{\varepsilon}(1-p)\delta_{a_2}, \quad pa_1^2+(1-p)a_2^2 =1, \quad p\in [0,1].
\end{equation}
We then define the following SNR maps
\begin{align}
S_* (\gamma,\theta;\nu_{X_0})& \equiv \frac{[\E\{X_0\eta(\sqrt{\gamma}X_0+G;\theta)\}]^2}{\E\{\eta(\sqrt{\gamma}X_0+G;\theta)^2\}}\, , \label{eq:Sstar_def}\\
S(\gamma;\theta)&\equiv \inf\Big\{S_* (\gamma,\theta;\pi_{p,a_1,a_2}) :\;\; pa_1^2+(1-p)a_2^2 =1, \, p\in[0,1]\Big\}\, . \label{eq:Sdef}
\end{align}
The interpretation of these quantities is as follows: $\gamma\mapsto S_* (\gamma,\theta;\nu_{X_0})$ describes the evolution of the signal-to-noise ratio after one step of AMP, when the signal distribution is $\nu_{X_0}$; the map $\gamma\mapsto S(\gamma;\theta)$ is the same evolution, for the least favorable prior, which can be taken of the form $\pi_{p,a_1,a_2}$. Notice that the function $S_* (\gamma,\theta;\pi_{p,a_1,a_2})$ can be evaluated by performing a small number (six, to be precise) of Gaussian integrals. The function $S$ is defined by a two-dimensional optimization problem, which can be solved numerically quite efficiently.

We define the sequences $(\underline{\gamma}_t)_{t\ge 0}$, $(\theta_t)_{t\ge 0}$ by setting $\underline{\gamma}_0 = \lambda^2-1$, and then recursively
\begin{align}
\underline{\gamma}_{t+1} = \lambda^2S(\underline{\gamma}_t;\theta_t)\, ,\;\;\; \theta_t = \arg\max_{\theta\in [0,\infty]} S(\underline{\gamma}_t;\theta)\, . \label{eq:ugam_rec}
\end{align}
The next proposition provides the desired lower bound for the signal-to-noise ratio over the class of sparse vectors.

\begin{proposition}
Assume the setting of Theorem \ref{thm:Rank1}, and furthermore $\|{\boldsymbol x}_0(n)\|_0\le n{\varepsilon}$. Let $(\hat{\boldsymbol x}^t = \hat{\boldsymbol x}^t({\boldsymbol A}) )_{t\ge 0}$ be the sequence of estimates produced by the AMP iteration \myeqref{eq:SparseAlg} with initialization ${\boldsymbol x}^0 = \sqrt{n} {\boldsymbol \varphi}_1$, and thresholds $\tau_t=\theta_t\hat{\sigma}_t$, where $\hat{\sigma}_t$ is an estimator of $\sigma_t$ from the data ${\boldsymbol x}^0,\dots,{\boldsymbol x}^t$ such that $\hat{\sigma}_t \stackrel{\text{a.s.}}{\to} \sigma_t$. (For instance, take $\hat{\sigma}_t^2 \equiv \big\|f_{t-1}({\boldsymbol x}^{t-1})\big\|_2^2/n$ for $t \geq 1$. For $t=0$, take $\hat{\sigma}_0^2 \equiv 1/\hat{\lambda}^2$, where $\hat{\lambda}$ is given in \myeqref{eq:lambda_hat}.) Then for any fixed $t\ge 0$ we have, almost surely,
\begin{align}
\lim_{n\to\infty} \frac{|\<\hat{\boldsymbol x}^t({\boldsymbol A}),{\boldsymbol x}_0\>|}{\|\hat{\boldsymbol x}^t({\boldsymbol A})\|_2\|{\boldsymbol x}_0\|_2} = \frac{\mu_{t+1}}{\lambda\sigma_{t+1}}\ge \frac{\sqrt{\underline{\gamma}_{t+1}}}{\lambda}\, . \label{eq:sparse_spike_lb}
\end{align}
Here, $(\mu_{t+1}, \sigma_{t+1})$ are recursively defined as follows, starting from $\mu_0=\sqrt{1-\lambda^{-2}}$ and $\sigma_0^2= \lambda^{-2}$:
\begin{align}
\mu_{t+1} & = \lambda \E\{ X_0 \eta(\mu_t X_0+\sigma_t G; \, \theta_t \sigma_t)\} \, , \qquad \sigma_{t+1}^2 = \E \{ \eta(\mu_t X_0 + \sigma_t G; \, \theta_t \sigma_t )^2\} \,. \label{eq:SE_sparse_spike}
\end{align}
\label{prop:sparse_spike}
\end{proposition}
The proof of Proposition \ref{prop:sparse_spike} is given in Appendix \ref{app:sparse_spike}.
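As an illustration of Proposition \ref{prop:sparse_spike}, the following sketch runs the soft-thresholding AMP of Eq.~\eqref{eq:SparseAlg} and compares the empirical correlation with the state evolution prediction $\mu_{t+1}/(\lambda\sigma_{t+1})$ of Eq.~\eqref{eq:SE_sparse_spike}. It is only a sketch: it uses a fixed threshold parameter $\theta$ (rather than the optimized sequence $\theta_t$ of Eq.~\eqref{eq:ugam_rec}), a specific sparse prior, the true $\lambda$ in place of $\hat{\lambda}$, and the convention $\hat{\boldsymbol x}^{-1}={\boldsymbol 0}$; all of these are assumptions of the sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, lam, eps, theta, T = 4000, 2.0, 0.05, 1.5, 8   # illustrative choices

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sample_prior(size):
    # 0 w.p. 1-eps, +-1/sqrt(eps) w.p. eps/2 each: unit second moment, eps-sparse
    return rng.choice([0.0, 1.0, -1.0], size=size, p=[1 - eps, eps / 2, eps / 2]) / np.sqrt(eps)

x0 = sample_prior(n)
G = rng.normal(size=(n, n))
A = (lam / n) * np.outer(x0, x0) + (G + G.T) / np.sqrt(2 * n)

evals, evecs = np.linalg.eigh(A)
x = np.sqrt(n) * evecs[:, -1]
if x @ x0 < 0:                                    # resolve the global sign ambiguity
    x = -x

X0, Gs = sample_prior(10**6), rng.normal(size=10**6)   # for Monte Carlo state evolution
mu, sig = np.sqrt(1 - lam ** -2), 1 / lam              # (mu_0, sigma_0)
sig_hat, xhat_prev = 1 / lam, np.zeros(n)              # hat{sigma}_0; convention xhat^{-1} = 0
for t in range(T):
    xhat = soft(x, theta * sig_hat)               # xhat^t = eta(x^t; theta * hat{sigma}_t)
    b = np.count_nonzero(xhat) / n                # b_t = ||xhat^t||_0 / n
    x = A @ xhat - b * xhat_prev
    xhat_prev, sig_hat = xhat, np.linalg.norm(xhat) / np.sqrt(n)
    eta = soft(mu * X0 + sig * Gs, theta * sig)   # state evolution, Eq. (SE_sparse_spike)
    mu, sig = lam * np.mean(X0 * eta), np.sqrt(np.mean(eta ** 2))
    corr = abs(xhat @ x0) / (np.linalg.norm(xhat) * np.linalg.norm(x0))
    print(f"t = {t:2d}   empirical corr {corr:.3f}   SE prediction {mu / (lam * sig):.3f}")
\end{verbatim}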
The proposition reduces the analysis of algorithm \eqref{eq:SparseAlg} to the study of a one-dimensional recursion $\underline{\gamma}_{t+1} = \lambda^2 S(\underline{\gamma}_t;\theta_t)$, which is much simpler. We defer this analysis to future work. We emphasize that the AMP algorithm in \myeqref{eq:SparseAlg} with thresholds $\tau_t=\theta_t\hat{\sigma}_t$ does not require knowledge of either the sparsity level ${\varepsilon}$ or the SNR parameter $\lambda$---these quantities are only required to compute the sequence of lower bounds $(\underline{\gamma}_t)_{t\ge 0}$.

\subsection{Bayes-optimal estimation}
\label{sec:BayesAMP}

As a second application of Theorem \ref{thm:Rank1}, we consider the case in which the asymptotic empirical distribution $\nu_{X_0}$ of the entries of ${\boldsymbol x}_0$ is known. This case is of special interest because it provides a lower bound on the error achieved by any AMP algorithm. To simplify some of the formulas below, we assume here a slightly different normalization for the initialization, but otherwise we use the same algorithm as in the general case, namely
\begin{align}
{\boldsymbol x}^0 & = \ \sqrt{n\lambda^2 (\lambda^2-1)}\, {\boldsymbol \varphi}_1\, ,\label{eq:AMP_Bayes_In}\\
{\boldsymbol x}^{t+1} &= {\boldsymbol A}\, f_t({\boldsymbol x}^t) -{\sf b}_t f_{t-1}({\boldsymbol x}^{t-1})\, ,\;\;\;\;\;{\sf b}_t = \frac{1}{n}\sum_{i=1}^n f_{t}'(x^t_i) \, .\label{eq:AMPspecial000}
\end{align}
In order to define the optimal nonlinearity, consider again the scalar denoising problem of estimating $X_0$ from the noisy observation $Y = \sqrt{\gamma}\, X_0+G$ (note that $X_0, G\in\mathbb{R}$ are scalar random variables). The minimum mean square error is
\begin{align}
{\sf mmse}(\gamma) = \E\big\{\big[X_0-\E(X_0|\sqrt{\gamma}\, X_0+G)\big]^2\big\}\, .
\end{align}
With these notations, we can introduce the state evolution recursion
\begin{align}
\gamma_0 & = \lambda^2-1\, ,\label{eq:SEspecial_1}\\
\gamma_{t+1}&= \lambda^2\big\{1-{\sf mmse}(\gamma_t)\big\}\, . \label{eq:SEspecial_2}
\end{align}
These describe the evolution of the effective signal-to-noise ratio along the execution of the algorithm. The optimal non-linearity $f_t(\,\cdot\,)$ after $t$ iterations is the minimum mean square error denoiser for signal-to-noise ratio $\gamma_t$:
\begin{align}
f_t(y) &\equiv \lambda\, F( y;\gamma_t) \,,\label{eq:OptFspecial}\\
F(y;\gamma) & \equiv \E\{ X_0 \mid \gamma\, X_0+\sqrt{\gamma} \,G=y \} .\label{eq:Fdef}
\end{align}
After $t$ iterations, we produce an estimate of ${\boldsymbol x}_0$ by computing $\hat{\boldsymbol x}^t({\boldsymbol A})\equiv f_t({\boldsymbol x}^t)/\lambda = F({\boldsymbol x}^t; \gamma_t)$. We will refer to this choice as Bayes AMP.

\begin{remark}
Implementing the Bayes-AMP algorithm requires approximating the function $F(y;\gamma)$ of Eq.~\eqref{eq:Fdef}. This amounts to a one-dimensional integral and can be done very accurately by standard quadrature methods: a simple approach that works well in practice is to replace the measure $\nu_{X_0}$ by a combination of finitely many point masses. Analogously, the function ${\sf mmse}(\gamma)$ (which is needed to compute the sequence $\gamma_t$), can be computed by the same method\footnote{AMP does not require high accuracy in the approximations of the nonlinear functions $f_t$. As shown several times in the appendices (see, e.g., Appendix \ref{sec:ProofMain}) the algorithm is stable with respect to perturbations of $f_t$.}.
\end{remark}

We are now in a position to state the outcome of our analysis for Bayes AMP, whose proof is deferred to Appendix \ref{app:ProofRank1}.

\begin{theorem} \label{thm:Special}
Consider the spiked matrix model (\ref{eq:SpikedDefSpecial}), with ${\boldsymbol x}_0(n)\in\mathbb{R}^n$ a sequence of vectors satisfying assumptions $(i)$, $(ii)$ above, and $\lambda>1$. Let $({\boldsymbol x}^t)_{t\ge 0}$ be the sequence of iterates generated by the Bayes AMP algorithm defined in Eq.~(\ref{eq:AMPspecial000}), with initialization (\ref{eq:AMP_Bayes_In}), and optimal choice of the nonlinearity defined by Eq.~(\ref{eq:OptFspecial}). Assume $F(\,\cdot\, ;\gamma):\mathbb{R}\to\mathbb{R}$ to be Lipschitz continuous for any $\gamma\in (0,\lambda^2]$. Finally, define state evolution by Eqs. (\ref{eq:SEspecial_1}), (\ref{eq:SEspecial_2}).

Then, for any function $\psi:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ with $|\psi({\boldsymbol x})-\psi({\boldsymbol y})|\le C(1+\|{\boldsymbol x}\|_2+\|{\boldsymbol y}\|_2)\|{\boldsymbol x} - {\boldsymbol y}\|_2$ for a universal constant $C>0$, the following holds almost surely for $t \geq 0$:
\begin{align}
\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\psi(x_{0,i},x_i^t) = \E\big\{\psi\big(X_0,\gamma_t X_0+\gamma_t^{1/2} Z\big)\big\}\, , \label{eq:special_AMP_conv}
\end{align}
where expectation is taken with respect to $X_0\sim \nu_{X_0}$ and $Z\sim{\sf N}(0,1)$ mutually independent, and we assumed without loss of generality that $\<{\boldsymbol \varphi}_1,{\boldsymbol x}_0\>\ge 0$.

In particular, let $\gamma_{\mbox{\tiny\rm ALG}}(\lambda)$ denote the smallest strictly positive solution of the fixed point equation $\gamma = \lambda^2[1-{\sf mmse}(\gamma)]$. Then the AMP estimate $\hat{\boldsymbol x}^t({\boldsymbol A})= f_t({\boldsymbol x}^t)/\lambda$ achieves
\begin{align}
\lim_{t\to\infty}\lim_{n\to\infty}\frac{|\<\hat{\boldsymbol x}^t({\boldsymbol A}),{\boldsymbol x}_0\>|}{\|\hat{\boldsymbol x}^t({\boldsymbol A})\|_2\|{\boldsymbol x}_0\|_2} &= \frac{\sqrt{\gamma_{\mbox{\tiny\rm ALG}}(\lambda)}}{\lambda}\, ,\\
\lim_{t\to\infty}\lim_{n\to\infty}\frac{1}{n}\min_{s\in \{+1,-1\}} \| s \hat{\boldsymbol x}^t({\boldsymbol A})-{\boldsymbol x}_0\|_2^2&= 1- \frac{\gamma_{\mbox{\tiny\rm ALG}}(\lambda)}{\lambda^2}\, .
\end{align}
Finally, the algorithm has total complexity $O(n^2 \log n)$.
\end{theorem}

\begin{remark}\label{rem:LipExpectation}
The assumption that $F(\,\cdot\,;\gamma):\mathbb{R}\to\mathbb{R}$ is Lipschitz continuous is required in order to apply our general theory. Note that this is implied by either of the following: $(i)$ ${\rm supp}(\nu_{X_0})\subseteq [-M,M]$ for some constant $M$; $(ii)$ $\nu_{X_0}$ has a log-concave density.
\end{remark}

It is interesting to compare the above result with the Bayes optimal estimation accuracy. The following statement is a consequence of the results of \cite{lelarge2016fundamental} (see Appendix \ref{app:BayesOpt}).

\begin{proposition} \label{thm:BayesOpt}
Consider the spiked matrix model (\ref{eq:SpikedDefSpecial}), with ${\boldsymbol x}_0(n)\in\mathbb{R}^n$ a vector with i.i.d. entries distributed according to $\nu_{X_0}$, where $\nu_{X_0}$ has bounded support and $\int x^2 \nu_{X_0}({\rm d} x) = 1$.
Then there exists a countable set $D\subseteq \mathbb{R}_{\ge 0}$ such that, for $\lambda\in\mathbb{R}\setminus D$, the Bayes-optimal accuracy in the rank-one estimation problem is given by
\begin{align}
\lim_{n\to\infty} \, \sup_{\hat{\boldsymbol x}(\,\cdot\,)} \E\left\{\frac{\<\hat{\boldsymbol x}({\boldsymbol A}),{\boldsymbol x}_0\>^2}{\|\hat{\boldsymbol x}({\boldsymbol A})\|_2^2\|{\boldsymbol x}_0\|^2_2} \right\} = \frac{\gamma_{\mbox{\tiny\rm Bayes}}(\lambda)}{\lambda^2}\, ,\label{eq:BayesOpt}
\end{align}
where the supremum is over (possibly randomized) estimators, i.e., measurable functions $\hat{\boldsymbol x}:\mathbb{R}^{n\times n}\times [0,1]\to \mathbb{R}^n$, where $[0,1]$ is endowed with the uniform measure. Here $\gamma_{\mbox{\tiny\rm Bayes}}(\lambda)$ is the fixed point of the recursion (\ref{eq:SEspecial_2}) that minimizes the following free energy functional
\begin{align}
\Psi(\gamma,\lambda) = \frac{\lambda^2}{4}+\frac{\gamma^2}{4\lambda^2}-\frac{\gamma}{2}+{\rm I}(\gamma)\, ,\label{eq:FreeEnergy}
\end{align}
where ${\rm I}(\gamma) = \E\log\frac{{\rm d} p_{Y|X_0}}{{\rm d} p_{Y}}(Y,X_0)$ is the mutual information for the scalar channel $Y = \sqrt{\gamma}\, X_0+G$, with $X_0\sim \nu_{X_0}$ and $G\sim{\sf N}(0,1)$ mutually independent.
\end{proposition}

Together with this proposition, Theorem \ref{thm:Special} precisely characterizes the gap between Bayes-optimal estimation and message passing algorithms for rank-one matrix estimation. Simple calculus (together with the relation ${\rm I}'(\gamma)={\sf mmse}(\gamma)/2$ \cite{Guo05mutualinformation}) implies that the fixed points of the recursion (\ref{eq:SEspecial_2}) coincide with the stationary points of $\gamma\mapsto\Psi(\gamma,\lambda)$. We therefore have the following characterization of the Bayes optimality of Bayes-AMP.

\begin{corollary}\label{coro:AMP-OPT}
Under the setting of Theorem \ref{thm:Special} (in particular, $\lambda>1$), let the function $\Psi(\gamma,\lambda)$ be defined as in Eq.~\eqref{eq:FreeEnergy}. Then Bayes-AMP asymptotically achieves the Bayes-optimal error (and $\gamma_{\mbox{\tiny\rm ALG}}(\lambda)=\gamma_{\mbox{\tiny\rm Bayes}}(\lambda)$) if and only if the global minimum of $\gamma\mapsto \Psi(\gamma,\lambda)$ over $(0,\infty)$ is also the first stationary point of the same function (as $\gamma$ grows).
\end{corollary}

As illustrated in Section \ref{sec:TwoPoints}, this condition holds for some cases of interest, and hence message passing is asymptotically optimal for these cases.

\begin{remark}
In some applications, it is possible to construct an initialization ${\boldsymbol x}^0$ that is positively correlated with the signal ${\boldsymbol x}_0$ and independent of ${\boldsymbol A}$. If this is possible, then the spectral initialization is not required and Theorem \ref{thm:Special} follows immediately from \cite{BM-MPCS-2011}. For instance, if $\nu_{X_0}$ has positive mean, then it is sufficient to initialize ${\boldsymbol x}^0 = {\boldsymbol 1}$. This principle was exploited in \cite{deshpande2013finding,deshpande2014information,montanari2016non}. However, such a positively correlated initialization is not available in general: the spectral initialization analyzed here aims at overcoming this problem.
\end{remark}

\begin{remark}\label{rmk:OptBayes}
No polynomial-time algorithm is known that achieves estimation accuracy superior to the one guaranteed by Theorem \ref{thm:Special}.
In particular, it follows from the optimality of the posterior mean with respect to squared loss and the monotonicity of the function $\gamma\mapsto \lambda^2\{1-{\sf mmse}(\gamma)\}$ that Bayes AMP is optimal among AMP algorithms. That is, for any other sequence of nonlinearities $f_t(\,\cdot\,)$, we have
\begin{align}
\lim_{n\to\infty}\frac{\big|\<f_t({\boldsymbol x}^t),{\boldsymbol x}_0\>\big|}{\|f_t({\boldsymbol x}^t)\|_2\|{\boldsymbol x}_0\|_2}&= \frac{\mu_{t+1}}{\lambda \sigma_{t+1}} \le \frac{\sqrt{\gamma_{t+1}}}{\lambda}\, .
\end{align}
As a further example, \cite{javanmard2016phase} analyzes a semi-definite programming (SDP) algorithm for the special case of a two-points symmetric mixture $\nu_{X_0} = (1/2)\delta_{+1}+(1/2)\delta_{-1}$. Theorem \ref{thm:Special} implies that, in this case, message passing is Bayes optimal (since $\gamma_{\mbox{\tiny\rm ALG}}=\gamma_{\mbox{\tiny\rm Bayes}}$ follows from \cite{deshpande2016asymptotic}). In contrast, numerical simulations and non-rigorous calculations using the cavity method from statistical physics (see \cite{javanmard2016phase}) suggest that SDP is sub-optimal.
\end{remark}

\begin{remark}
A result analogous to Theorem \ref{thm:Special} for the symmetric two-points distribution $\nu_{X_0} = (1/2)\delta_{+1}+(1/2)\delta_{-1}$ is proved in \cite[Theorem 3]{mossel2016density} in the context of the stochastic block model of random graphs. Note, however, that the approach of \cite{mossel2016density} requires the graph to have average degree $d\to\infty$, $d=O(\log n)$.
\end{remark}

\subsection{An example: Two-points distributions}
\label{sec:TwoPoints}

Theorem \ref{thm:Special} is already interesting in very simple cases. Consider the two-points mixture
\begin{align}
\nu_{X_0} &= {\varepsilon}\, \delta_{a_+} +(1-{\varepsilon})\delta_{-a_-}\, ,\label{eq:EmpiricalSpecial}\\
& a_+ = \sqrt{\frac{1-{\varepsilon}}{{\varepsilon}}}\, ,\;\;\;\;\;\; a_- = \sqrt{\frac{{\varepsilon}}{1-{\varepsilon}}}\, .
\end{align}
Here the coefficients $a_+,a_-$ are chosen to ensure that $\int x\nu_{X_0}({\rm d} x) = 0$, $\int x^2\nu_{X_0}({\rm d} x) = 1$. The conditional expectation $F(y;\gamma)$ of Eq.~(\ref{eq:Fdef}) can be computed explicitly, yielding
\begin{align}
F(y;\gamma)&= \frac{{\varepsilon} a_{+}e^{a_+ y-\gamma a_+^2/2}-(1-{\varepsilon})a_-e^{- a_- y-\gamma a_-^2/2}}{{\varepsilon} e^{a_+ y-\gamma a_+^2/2}+(1-{\varepsilon})e^{-a_- y-\gamma a_-^2/2}}\, .\label{eq:F_2pts}
\end{align}
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{eps05.pdf}\hspace{-0.1cm}
\includegraphics[width=0.48\textwidth]{eps025.pdf}\\
\vspace{-1.75cm}
\includegraphics[width=0.48\textwidth]{eps005.pdf}\hspace{-0.1cm}
\includegraphics[width=0.48\textwidth]{eps0025.pdf}
\put(-310,205){${\varepsilon}=0.5$}
\put(-70,205){${\varepsilon}=0.25$}
\put(-310,25){${\varepsilon}=0.05$}
\put(-70,25){${\varepsilon}=0.025$}
\caption{Estimation in the single spiked model (\ref{eq:SpikedDefSpecial}) with entries of ${\boldsymbol x}_0$ following the two-points distribution of Eq.~(\ref{eq:EmpiricalSpecial}), and four different values of the sparsity ${\varepsilon}\in\{0.025,0.05,0.25,0.5\}$. Continuous thick blue line: asymptotic accuracy achieved by AMP (with spectral initialization). Red circles: numerical simulations with the AMP algorithm (for matrices of dimension $n=2000$ and $t=200$ iterations). Continuous thin blue line: Bayes optimal estimation accuracy. Dashed blue line: other fixed points of state evolution. Red line: accuracy achieved by principal component analysis.
Vertical dashed black lines: the thresholds $\lambda_{\mbox{\tiny\rm IT}}$ and $\lambda_{\mbox{\tiny\rm ALG}}$.}\label{fig:curvesAMP}
\end{figure}

Figure \ref{fig:curvesAMP} reports the results of numerical simulations with the AMP algorithm described in the previous section. We also plot $\gamma_*(\lambda)/\lambda^2$ as a function of $\lambda$, where $\gamma_*(\lambda)$ is the fixed point of the state-evolution equation (\ref{eq:SEspecial_2}). The figure shows plots for four values of ${\varepsilon}\in (0,1/2]$. The qualitative behavior depends on the value of ${\varepsilon}$.

For ${\varepsilon}$ close enough to $1/2$, Eq.~(\ref{eq:SEspecial_2}) only has one stable fixed point\footnote{This is proved formally in \cite{deshpande2016asymptotic} for ${\varepsilon}=1/2$ and holds by a continuity argument for ${\varepsilon}$ close enough to $1/2$. However, here we will limit ourselves to a heuristic discussion based on the numerical solution of Eq.~(\ref{eq:SEspecial_2}).} that is also the minimizer of the free energy functional (\ref{eq:FreeEnergy}). Hence $\gamma_{\mbox{\tiny\rm ALG}}(\lambda) = \gamma_{\mbox{\tiny\rm Bayes}}(\lambda)$ for all values of $\lambda$: message passing is always Bayes optimal.

For ${\varepsilon}$ small enough, there exists $\lambda_0({\varepsilon})<1$ such that Eq.~(\ref{eq:SEspecial_2}) has three fixed points for $\lambda \in (\lambda_{0}({\varepsilon}),1)$: $\gamma_0(\lambda)<\gamma_1(\lambda)<\gamma_2(\lambda)$, where $\gamma_0=0$ and $\gamma_2$ are stable and $\gamma_1$ is unstable. AMP is controlled by the smallest stable fixed point, and hence $\gamma_{\mbox{\tiny\rm ALG}}(\lambda) = 0$ for all $\lambda<1$. On the other hand, by minimizing the free energy (\ref{eq:FreeEnergy}) over these fixed points, we obtain that there exists $\lambda_{\mbox{\tiny\rm IT}}({\varepsilon}) \in (\lambda_{0}({\varepsilon}),1)$ such that $\gamma_{\mbox{\tiny\rm Bayes}}(\lambda) = 0$ for $\lambda<\lambda_{\mbox{\tiny\rm IT}}({\varepsilon})$ while $\gamma_{\mbox{\tiny\rm Bayes}}(\lambda) = \gamma_2(\lambda)$ for $\lambda>\lambda_{\mbox{\tiny\rm IT}}({\varepsilon})$. We conclude that AMP is asymptotically sub-optimal for $\lambda\in (\lambda_{\mbox{\tiny\rm IT}}({\varepsilon}),1)$, while it is asymptotically optimal for $\lambda\in [0,\lambda_{\mbox{\tiny\rm IT}}({\varepsilon}))$ and $\lambda\in (1,\infty)$.

\section{Confidence intervals, $p$-values, asymptotic FDR control}
\label{sec:Inference}

As an application of Theorem \ref{thm:Special}, we can construct confidence intervals that achieve a pre-assigned coverage level $(1-\alpha)$, where $\alpha\in (0,1)$. Indeed, Theorem \ref{thm:Special} informally states that the AMP iterates ${\boldsymbol x}^t$ are approximately Gaussian with mean (proportional to) the signal ${\boldsymbol x}_0$. This relation can be inverted to construct confidence intervals.

We begin by noting that we do not need to know the signal strength $\lambda$. Indeed, for $\lambda>1$, the latter can be estimated from the maximum eigenvalue of ${\boldsymbol A}$, $\lambda_{\max}({\boldsymbol A})$, via
\begin{align}
\hat{\lambda}({\boldsymbol A}) \equiv \frac{1}{2}\Big\{\lambda_{\max}({\boldsymbol A}) + \sqrt{\lambda_{\max}({\boldsymbol A})^2-4}\Big\}\, . \label{eq:lambda_hat}
\end{align}
This is a consistent estimator for $\lambda>1$, and can replace $\lambda$ in the iteration of Eq.~(\ref{eq:AMPspecial0}) and initialization (\ref{eq:AMP_Bayes_In}) as well as in the state evolution iteration of Eqs.~\eqref{eq:SEspecial_1} and \eqref{eq:SEspecial_2}.
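For completeness, a minimal sketch of the estimator \eqref{eq:lambda_hat} (the synthetic test instance and the truncation of $\lambda_{\max}({\boldsymbol A})$ at $2$, which only matters in the undetectable regime, are assumptions of this sketch):
\begin{verbatim}
import numpy as np

def lambda_hat(A):
    # estimate lambda from the top eigenvalue of A, Eq. (lambda_hat)
    lmax = max(np.linalg.eigvalsh(A)[-1], 2.0)
    return 0.5 * (lmax + np.sqrt(lmax ** 2 - 4.0))

# quick check on a synthetic instance (illustrative parameters)
rng = np.random.default_rng(3)
n, lam = 3000, 1.7
x0 = rng.choice([-1.0, 1.0], size=n)
G = rng.normal(size=(n, n))
A = (lam / n) * np.outer(x0, x0) + (G + G.T) / np.sqrt(2 * n)
print(f"true lambda = {lam},  estimated = {lambda_hat(A):.3f}")
\end{verbatim}
The estimator simply inverts the relation $\lambda_{\max}({\boldsymbol A})\to\lambda+1/\lambda$, valid for $\lambda>1$.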
We discuss two constructions of confidence intervals: the first one uses the Bayes AMP algorithm of Section \ref{sec:BayesAMP}, and the second instead uses the general algorithm of Section \ref{sec:GeneralRank1}. The optimality of Bayes AMP translates into shorter confidence intervals but also requires knowledge of the empirical distribution $\nu_{X_0}$.

\vspace{0.5cm}

\noindent{\bf Bayes-optimal construction.} In order to emphasize the fact that we use the estimated $\lambda$ both in the AMP iteration and in the state evolution recursion, we write $\overline{\boldsymbol x}^t$ for the Bayes AMP iterates and $\hat{\gamma}_t$ for the state evolution parameter, instead of ${\boldsymbol x}^t$ and $\gamma_t$. We then form the intervals:
\begin{align}
\hat{J}_i(\alpha;t) = \left[\frac{1}{\hat{\gamma}_t} \overline{x}^t_i- \frac{1}{\sqrt{\hat{\gamma}_t}} \Phi^{-1}\left(1- \frac{\alpha}{2} \right), \ \frac{1}{\hat{\gamma}_t} \overline{x}^t_i+\frac{1}{\sqrt{\hat{\gamma}_t}} \Phi^{-1} \left(1- \frac{\alpha}{2} \right) \right]\, . \label{eq:ConfInterval}
\end{align}
We can also define corresponding $p$-values by
\begin{align}
p_i = 2\left(1-\Phi\Big(\frac{1}{\sqrt{\hat{\gamma}_t}} |\overline{x}^t_i|\Big)\right)\, . \label{eq:pi_Bayes}
\end{align}

\vspace{0.5cm}

\noindent{\bf General construction (no prior knowledge).} Given a sequence of Lipschitz functions $f_t:\mathbb{R}\to \mathbb{R}$, we let ${\boldsymbol x}^t$ be the general AMP iterates as per Section \ref{sec:GeneralRank1}, cf. Eq.~(\ref{eq:AMPspecial0}). In order to form confidence intervals, we need to estimate the parameters $\mu_t$, $\sigma_t$. In view of Theorem \ref{thm:Rank1}, a possible choice is given by
\begin{align}
\hat{\sigma}_t^2 & \equiv \frac{1}{n}\big\|f_{t-1}({\boldsymbol x}^{t-1})\big\|_2^2\,,\\
\hat{\mu}_t^2 & \equiv \frac{1}{n}\big\|{\boldsymbol x}^t\big\|_2^2- \frac{1}{n}\big\|f_{t-1}({\boldsymbol x}^{t-1})\big\|_2^2\, .
\end{align}
We then construct confidence intervals and $p$-values
\begin{align}
\hat{J}_i(\alpha;t) &= \left[\frac{1}{\hat{\mu}_t} x^t_i- \frac{\hat{\sigma}_t}{\hat{\mu}_t} \Phi^{-1}\left(1-\frac{\alpha}{2}\right), \ \frac{1}{\hat{\mu}_t} x^t_i+\frac{\hat{\sigma}_t}{\hat{\mu}_t} \Phi^{-1}\left(1-\frac{\alpha}{2} \right)\right]\, ,\label{eq:ConfInterval_2}\\
p_i(t) &= 2\left(1-\Phi\Big(\frac{1}{\hat{\sigma}_t} |x^t_i|\Big)\right)\, . \label{eq:pi_def}
\end{align}

\begin{corollary}\label{coro:ConfInt}
Consider the spiked matrix model (\ref{eq:SpikedDefSpecial}), under the assumptions of Theorem \ref{thm:Rank1} (in the case of no prior knowledge) or Theorem \ref{thm:Special} (for the Bayes optimal construction). Defining the confidence intervals $\hat{J}_i(\alpha;t)$ as per Eqs.~(\ref{eq:ConfInterval}) and (\ref{eq:ConfInterval_2}), we have almost surely
\begin{align}
\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n {\mathbb I}\big(x_{0,i}\in \hat{J}_i(\alpha;t)\big) = 1-\alpha\, . \label{eq:AS-ConfInt}
\end{align}
Further assume that the fraction of non-zero entries in the spike is $\|{\boldsymbol x}_0(n)\|_0/n\to {\varepsilon}\in [0,1)$, and $\nu_{X_0}(\{0\}) = 1-{\varepsilon}$. Then the $p$-values constructed above are asymptotically valid for the nulls. Namely, let $i_0 = i_0(n)$ be any index such that $x_{0,i_0}(n) = 0$. Then, for any $\alpha\in [0,1]$, and any fixed $t\ge 0$,
\begin{align}
\lim_{n\to\infty} {\mathbb P}\big(p_{i_0(n)}(t)\le \alpha \big) = \alpha\, . \label{eq:ValidNull}
\end{align}
\end{corollary}
The proof of this result is presented in Appendix \ref{app:CoroConfInt}.
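The general construction is straightforward to implement; a minimal sketch follows (it assumes the AMP iterate ${\boldsymbol x}^t$ and the denoised previous iterate $f_{t-1}({\boldsymbol x}^{t-1})$ have already been computed, e.g.\ by the recursion \eqref{eq:AMPspecial0}; the small floor on $\hat{\mu}_t^2$ is a numerical safeguard added here):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def intervals_and_pvalues(x_t, f_prev, alpha=0.05):
    # confidence intervals, Eq. (ConfInterval_2), and p-values, Eq. (pi_def)
    n = len(x_t)
    sigma2 = np.sum(f_prev ** 2) / n                          # hat{sigma}_t^2
    mu = np.sqrt(max(np.sum(x_t ** 2) / n - sigma2, 1e-12))   # hat{mu}_t (floored)
    sigma = np.sqrt(sigma2)
    z = norm.ppf(1 - alpha / 2)
    lower = x_t / mu - z * sigma / mu
    upper = x_t / mu + z * sigma / mu
    pvals = 2 * (1 - norm.cdf(np.abs(x_t) / sigma))
    return lower, upper, pvals

# usage, given an AMP run producing x_t = x^t and f_prev = f_{t-1}(x^{t-1}):
#   lo, hi, p = intervals_and_pvalues(x_t, f_prev, alpha=0.05)
#   np.mean((x0 >= lo) & (x0 <= hi))   # empirical coverage, approaches 1 - alpha
\end{verbatim}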
Notice that, by dominated convergence, this corollary also implies validity of the confidence intervals on average, namely $\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n {\mathbb P}\big(x_{0,i}\in \hat{J}_i(\alpha;t)\big) = 1-\alpha$. As mentioned above, cf. Remark \ref{rmk:OptBayes}, the Bayes-optimal construction maximizes the ratio $\mu_t/\sigma_t$ and therefore minimizes the length of confidence intervals. This requires, however, additional knowledge of the empirical distribution $\nu_{X_0}$.

Corollary \ref{coro:ConfInt} allows us to control the probability of false positives when using the $p$-values $p_i$, see Eq.~\eqref{eq:ValidNull}. We might want to use these $p$-values to select a subset of variables $\hat{S}\subseteq [n]$ to be considered for further exploration. For such applications, it is common to aim for false discovery rate (FDR) control. The $p$-values $p_i$ guarantee asymptotic FDR control through a simple Benjamini-Hochberg procedure \cite{benjamini1995controlling}. For a threshold $s\in [0,1]$, we define the following estimator of the false discovery proportion \cite{efron2012large}:
\begin{align}
\widehat{\rm FDP}(s;t) \equiv \frac{ n s}{1\vee \Big( \sum_{i=1}^n{\mathbb I}_{\{p_i(t)\le s\}}\Big)}\, . \label{eq:FDP_def}
\end{align}
Using this notion, we define a threshold and a rejection set as follows. Fix $\alpha \in (0,1)$ and let
\begin{align}
s_*(\alpha;t) \equiv \inf\big\{\, s \in [0,1]: \; \widehat{\rm FDP}(s;t)\ge \alpha\, \big\}\, ,\;\;\;\; \hat{S}(\alpha;t) \equiv \big\{i\in [n]: \; p_i(t)<s_*(\alpha;t)\big\}\, . \label{eq:thresh_rej_set}
\end{align}
The false discovery rate for this procedure is defined as usual:
\begin{align}
{\rm FDR}(\alpha,t;n) \equiv \E\left\{\frac{ |\hat{S}(\alpha;t)\cap \{i:\, x_{0,i}=0\}| }{1\vee |\hat{S}(\alpha;t)| }\right\}\, .
\end{align}
Our next corollary shows that the above procedure is guaranteed to control FDR in an asymptotic sense. Its proof can be found in Appendix \ref{app:FDR}.

\begin{corollary} \label{corr:FDR}
Consider the spiked matrix model (\ref{eq:SpikedDefSpecial}), under the assumptions of Theorem \ref{thm:Rank1} (in the case of no prior knowledge) or Theorem \ref{thm:Special} (for the Bayes optimal construction). Further assume that the fraction of non-zero entries in the spike is $\|{\boldsymbol x}_0(n)\|_0/n \to {\varepsilon}\in [0,1)$, and $\nu_{X_0}(\{0\}) = 1-{\varepsilon}$. Then, for any fixed $t\ge 0$,
\begin{align}
\lim_{n\to\infty}{\rm FDR}(\alpha,t;n) = (1-{\varepsilon})\alpha\, . \label{eq:FDRbound}
\end{align}
\end{corollary}

\begin{remark} \label{rem:FDR}
The procedure defined by the threshold and rejection set in \myeqref{eq:thresh_rej_set} does not assume knowledge of the sparsity level ${\varepsilon}$. If ${\varepsilon}$ were known, an asymptotic false discovery rate of exactly $\alpha$ could be obtained by defining \cite{storey2002direct}
\[
\widehat{\rm FDP}(s;t) \equiv \frac{ n (1-{\varepsilon})s}{1\vee \Big( \sum_{i=1}^n{\mathbb I}_{\{p_i(t)\le s\}}\Big)}\, .
\]
With the threshold and rejection set defined as in \myeqref{eq:thresh_rej_set}, such a procedure would have an asymptotic FDR equal to $\alpha$, and higher power than the procedure using the estimator in \myeqref{eq:FDP_def}.
\end{remark}

\section{Estimation of rectangular rank-one matrices}
\label{sec:ExampleRectangular}

The algorithms and analysis developed in the previous sections can be generalized to rectangular matrices. We illustrate this by generalizing the rank-one result of Theorem \ref{thm:Rank1}. We consider a data matrix ${\boldsymbol A}\in\mathbb{R}^{n\times d}$ given by
\begin{align}
{\boldsymbol A} = \frac{\lambda}{n}{\boldsymbol u}_0{\boldsymbol x}_0^{{\sf T}}+{\boldsymbol W}\, ,\label{eq:RectangularSpikedRankOne}
\end{align}
where $({\boldsymbol W}_{ij})_{i\le n,j\le d}\sim_{iid}{\sf N}(0,1/n)$. To be definite, we will think of sequences of instances indexed by $n$ and assume $n,d\to\infty$ with aspect ratio $d(n)/n\to \alpha\in (0,\infty)$. We will make the following assumptions on the sequences of vectors ${\boldsymbol u}_0 = {\boldsymbol u}_0(n)$, ${\boldsymbol x}_0={\boldsymbol x}_0(n)$:
\begin{itemize}
\item[$(i)$] Their rescaled $\ell_2$-norms converge: $\lim_{n\to\infty}\|{\boldsymbol u}_0(n)\|_2/\sqrt{n}= 1$, $\lim_{n\to\infty}\|{\boldsymbol x}_0(n)\|_2/\sqrt{d(n)}= 1$;
\item[$(ii)$] The empirical distributions of the entries of ${\boldsymbol x}_0(n)$ and ${\boldsymbol u}_0(n)$ converge weakly to probability distributions $\nu_{X_0}$, $\nu_{U_0}$, on $\mathbb{R}$, with unit second moment.
\end{itemize}
In analogy with the symmetric case, we initialize the AMP iteration by using the principal right singular vector of ${\boldsymbol A}$, denoted by ${\boldsymbol \varphi}_1$ (which we assume to have unit norm). In the present case, the phase transition for the principal singular vector takes place at $\lambda^2\sqrt{\alpha}=1$ \cite{paul2007asymptotics,BaiSilverstein}. Namely, if $\lambda^2\sqrt{\alpha}>1$ then the normalized correlation $|\<{\boldsymbol x}_0,{\boldsymbol \varphi}_1\>|/\|{\boldsymbol x}_0\|_2$ stays bounded away from zero as $n,d\to\infty$.

Setting ${\boldsymbol x}^0= \sqrt{d}\,{\boldsymbol \varphi}_1$ and adopting the convention $g_{-1}({\boldsymbol u}^{-1}) ={\boldsymbol 0}$, we consider the following AMP iteration:
\begin{align}
{\boldsymbol u}^t & = {\boldsymbol A} f_t({\boldsymbol x}^t) - {\sf b}_t g_{t-1}({\boldsymbol u}^{t-1})\, ,\;\;\;\;\;\;\; {\sf b}_t =\frac{1}{n}\sum_{i=1}^df_t'(x^t_i)\, ,\\
{\boldsymbol x}^{t+1} & ={\boldsymbol A}^{{\sf T}} g_{t}({\boldsymbol u}^t)-{\sf c}_{t} f_t({\boldsymbol x}^t)\, ,\;\;\;\;\;\;\;\;\;\; {\sf c}_t =\frac{1}{n}\sum_{i=1}^ng_t'(u^t_i)\, .
\end{align}
The asymptotic characterization of this iteration is provided by the next theorem, which generalizes Theorem \ref{thm:Rank1} to the rectangular case.

\begin{theorem}\label{thm:Rank1-Rectangular}
Consider the $k=1$ spiked matrix model of Eq.~\eqref{eq:RectangularSpikedRankOne}, with $n,d\to\infty$, $d/n\to \alpha$. Assume ${\boldsymbol x}_0(n)\in\mathbb{R}^d$, ${\boldsymbol u}_0(n)\in\mathbb{R}^n$ to be two sequences of vectors satisfying assumptions $(i)$, $(ii)$ above, and $\lambda^2\sqrt{\alpha}>1$. Consider the AMP iteration defined above with initialization ${\boldsymbol x}^0 = \sqrt{d} \, {\boldsymbol \varphi}_1$ (where, without loss of generality, $\<{\boldsymbol x}_0,{\boldsymbol \varphi}_1\> \ge 0$). Assume $f_t,g_t:\mathbb{R}\to\mathbb{R}$ to be Lipschitz continuous for each $t\in {\mathbb N}$.
Let $(\mu_t, \sigma_t, {\overline{\mu}}_t, {\overline{\sigma}}_t)_{t \geq 0}$ be defined via the recursion
\begin{align}
\mu_{t+1} & = \lambda \E[ U_0\, g_t({\overline{\mu}}_t U_0+{\overline{\sigma}}_t G)]\, , \;\;\;\;\;\;\;\;\;\; \sigma_{t+1}^2 = \E [g_t({\overline{\mu}}_t U_0 + {\overline{\sigma}}_t G )^2]\, , \label{eq:Rect-X0}\\
{\overline{\mu}}_{t} & = \lambda\alpha\, \E[ X_0 \, f_t(\mu_t X_0+\sigma_t G)]\, , \;\;\;\;\;\;\;\;\;\; {\overline{\sigma}}_{t}^2 = \alpha\, \E [f_t(\mu_t X_0 + \sigma_t G )^2]\, , \label{eq:Rect-U0}
\end{align}
where $X_0\sim \nu_{X_0}$, $U_0\sim \nu_{U_0}$ and $G \sim {\sf N} (0,1)$ are independent, and the initial condition is
\begin{align}
\mu_0= \sqrt{\frac{1-\alpha^{-1}\lambda^{-4}}{1+\lambda^{-2}}}\, ,\;\;\;\;\; \sigma_0 = \sqrt{\frac{\lambda^{-2}+\alpha^{-1}\lambda^{-4}}{1+\lambda^{-2}}}\, .
\end{align}
(This is to be substituted in Eq.~\eqref{eq:Rect-U0} to yield ${\overline{\mu}}_0,{\overline{\sigma}}_0$.) Then, for any function $\psi:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ with $|\psi({\boldsymbol x})-\psi({\boldsymbol y})|\le C(1+\|{\boldsymbol x}\|_2+\|{\boldsymbol y}\|_2)\|{\boldsymbol x} - {\boldsymbol y}\|_2$ for a universal constant $C>0$, the following holds almost surely for $t \geq 0$:
\begin{align}
\lim_{n \to \infty} \frac{1}{d(n)} \sum_{i=1}^{d(n)} \psi (x_{0,i},x^t_i) = \E \left\{ \psi( X_0, \mu_t X_0 +\sigma_t G) \right\} \ , \label{eq:rank1_SE_Rect_A}\\
\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n \psi (u_{0,i},u^t_i) = \E \left\{ \psi( U_0, {\overline{\mu}}_t U_0 +{\overline{\sigma}}_t G) \right\} \ . \label{eq:rank1_SE_Rect_B}
\end{align}
\end{theorem}

As a special class of examples covered by this setting, we can consider the case in which we are given i.i.d. Gaussian samples $({\boldsymbol y}_i)_{i\le n}\sim{\sf N}({\boldsymbol 0},{\boldsymbol \Sigma})$, with covariance matrix ${\boldsymbol \Sigma} = \rho^2\tilde{\boldsymbol x}_0\tilde{\boldsymbol x}_0^{{\sf T}}+{\boldsymbol I}_d$ where $\tilde{\boldsymbol x}_0 = {\boldsymbol x}_0/\sqrt{d}$. Letting ${\boldsymbol A}$ be the matrix with $i$-th row equal to ${\boldsymbol y}_i/\sqrt{n}$, this takes the form of Eq.~\eqref{eq:RectangularSpikedRankOne}, with ${\boldsymbol u}_{0}\sim{\sf N}(0,{\boldsymbol I}_n)$, and $\lambda = \rho/\sqrt{\alpha}$. Notice that the sequence of random Gaussian vectors ${\boldsymbol u}_0(n)$, $n\ge 1$, satisfies conditions $(i)$, $(ii)$ above almost surely, with limit distribution $\nu_{U_0}$ equal to the standard Gaussian measure.

In this case, the optimal choice of the function $g_t$ in Eq.~\eqref{eq:Rect-X0} is of course linear: $g_t(u) = a_tu$ for some $a_t>0$. The value of the constant $a_t$ is immaterial, because it only amounts to a common rescaling of the $\mu_t,\sigma_t$, which can be compensated by a redefinition of $f_t$ in Eq.~\eqref{eq:Rect-U0}. We set $a_t = \lambda{\overline{\mu}}_t/({\overline{\mu}}_t^2+{\overline{\sigma}}_t^2)$. Substituting in Eq.~\eqref{eq:Rect-X0}, we obtain $\mu_{t+1} = \sigma_{t+1}^2=\gamma_{t+1}$, where
\begin{align}
\gamma_{t+1} = \frac{\lambda^2{\overline{\gamma}}_t}{1+{\overline{\gamma}}_t}\, ,\label{eq:GaussianCov-1}
\end{align}
where ${\overline{\gamma}}_t= {\overline{\mu}}_t^2/{\overline{\sigma}}_t^2$. Taking the ratio of the two equations in \eqref{eq:Rect-U0}, we obtain
\begin{align}
{\overline{\gamma}}_{t} = \lambda^2\alpha \frac{\E\{X_0 f_t(\gamma_t X_0+\sqrt{\gamma_t}G)\}^2}{\E\{f_t(\gamma_t X_0+\sqrt{\gamma_t}G)^2\}}\, .
\label{eq:GaussianCov-2}
\end{align}
We thus reduced the problem of covariance estimation in the spiked model ${\boldsymbol \Sigma} = \rho^2\tilde{\boldsymbol x}_0\tilde{\boldsymbol x}_0^{{\sf T}}+{\boldsymbol I}_d$, to the analysis of a one-dimensional recursion defined by Eqs.~\eqref{eq:GaussianCov-1}, \eqref{eq:GaussianCov-2}.

\section{Degenerate cases and non-concentration}
\label{sec:Degenerate}

The spectral initialization at unstable fixed points leads to a new phenomenon that is not captured by previous theory \cite{BM-MPCS-2011}: the evolution of empirical averages (e.g., estimation accuracy) does not always concentrate around a deterministic value. Our main result, Theorem \ref{thm:Main} below, provides a description of this phenomenon by establishing a state evolution limit that is \emph{dependent on the random initial condition}. The initial condition converges in distribution to a well-defined limit, which---together with state evolution---yields a complete characterization of the asymptotic behavior of the message passing algorithm.

The non-concentration phenomenon arises when the deterministic low-rank component in Eq.~(\ref{eq:SpikedDef}) has degenerate eigenvalues. This is unavoidable in cases in which the underlying low-rank model to be estimated has symmetries. Here we illustrate this phenomenon on a simple model that we will refer to as the Gaussian Block Model (GBM). For $q\ge 3$ a fixed integer, let ${\boldsymbol \sigma}= (\sigma_1,\dots,\sigma_n)$ be a vector of vertex labels with $\sigma_i\in\{1,\dots,q\}$ and consider the deterministic matrix ${\boldsymbol A}_0\in\mathbb{R}^{n\times n}$ (with ${\rm rank}({\boldsymbol A}_0) = q-1$) defined by:
\begin{align}
A_{0,ij}=
\begin{cases}
(q-1)/n & \mbox{if $\sigma_i = \sigma_j$}\\
-1/n & \mbox{otherwise.}\\
\end{cases}
\end{align}
We assume the vertex labeling to be perfectly balanced, i.e., $\sum_{i=1}^n{\boldsymbol 1}_{\sigma_i=\sigma}=n/q$ for $\sigma\in\{1,\dots,q\}$. While most of our discussion holds under an approximate balance condition, this assumption avoids some minor technical complications. Notice that ${\boldsymbol A}_0$ is an orthogonal projector on a subspace ${\mathcal V}_n\subseteq\mathbb{R}^{n}$ of dimension $q-1$. We observe the noisy matrix (with noise ${\boldsymbol W}\sim{\sf GOE}(n)$)
\begin{align}
{\boldsymbol A} = \lambda {\boldsymbol A}_0+{\boldsymbol W}\, ,\label{eq:BlockMatrix}
\end{align}
and would like to estimate ${\boldsymbol A}_0$ from these noisy observations. The matrix ${\boldsymbol A}$ takes the form of Eq.~(\ref{eq:SpikedDef}) with $k=q-1$, $\lambda_1=\dots=\lambda_k = \lambda$ and ${\boldsymbol v}_1$, \dots, ${\boldsymbol v}_k$ an orthonormal basis of the space ${\mathcal V}_n$. We will assume $\lambda >1$ so that $k_* = k$. In particular, for $q\ge 3$, the low-rank signal has degenerate eigenvalues.

We use the following AMP algorithm to estimate ${\boldsymbol A}_0$.
We compute the top $k$ eigenvectors of ${\boldsymbol A}$, denoted by ${\boldsymbol \varphi}_1,\dots,{\boldsymbol \varphi}_k\in\mathbb{R}^n$ and generate ${\boldsymbol x}^t\in\mathbb{R}^{n\times q}$ for $t \geq 0$, according to
\begin{align}
{\boldsymbol x}^0&= [\sqrt{n}{\boldsymbol \varphi}_1|\cdots|\sqrt{n}{\boldsymbol \varphi}_k|{\boldsymbol 0}]\, ,\label{eq:BlockSpectral}\\
{\boldsymbol x}^{t+1} & = {\boldsymbol A} f({\boldsymbol x}^t) - f({\boldsymbol x}^{t-1})\, {\sf B}_t^{{\sf T}}\, ,\label{eq:AMP-Block}
\end{align}
where the `Onsager coefficient' ${\sf B}_t\in \mathbb{R}^{q\times q}$ is a matrix given by
\begin{align}
{\sf B}_t = \frac{1}{n}\sum_{i=1}^n\frac{\partial f}{\partial {\boldsymbol x}}({\boldsymbol x}^t_i)\,.
\end{align}
Here $\frac{\partial f}{\partial {\boldsymbol x}}\in \mathbb{R}^{q\times q}$ denotes the Jacobian matrix of the function $f:\mathbb{R}^q\to\mathbb{R}^q$. Furthermore, the function $f$ is defined by letting, for $\sigma\in\{1,\dots,q\}$:
\begin{align}
f({\boldsymbol z})_\sigma = \lambda\left[\frac{q e^{z_{\sigma}}}{\sum_{\tau=1}^q e^{z_{\tau}}}-1\right]\, ,\label{eq:FunctionBlock}
\end{align}
and $f({\boldsymbol x})$ is defined for ${\boldsymbol x}\in\mathbb{R}^{n\times q}$ by applying the same function row by row. This choice of the function $f$ corresponds to Bayes-optimal estimation, as can be deduced from the state evolution analysis below; we will not discuss this point in detail here.

The output ${\boldsymbol x}^t$ after $t$ iterations of (\ref{eq:AMP-Block}) can be interpreted as an estimate of the labels ${\boldsymbol \sigma}$ in the following sense. Let ${\boldsymbol x}_0\in\mathbb{R}^{n\times q}$ be the matrix whose $i$-th row is ${\boldsymbol x}_{0,i}= {\boldsymbol P}^{\perp}{\boldsymbol e}_{\sigma_i}$, with ${\boldsymbol P}^{\perp}\in\mathbb{R}^{q\times q}$ the projector orthogonal to the all ones vector, and ${\boldsymbol e}_1,\dots,{\boldsymbol e}_q$ the canonical basis in $\mathbb{R}^q$. Note that ${\boldsymbol A}_0= (q/n){\boldsymbol x}_0{\boldsymbol x}_0^{{\sf T}}$. Then ${\boldsymbol x}^t$ is an estimator of ${\boldsymbol x}_0$ (up to a permutation of the label alphabet $\{1,\dots,q\}$). Let ${\sf S}_q$ be the group of $q\times q$ permutation matrices. We evaluate the estimator ${\boldsymbol x}^t$ via the overlap
\begin{align}
{\rm Overlap}_n(\lambda;t) \equiv \max_{{\boldsymbol \Pi}\in {\sf S}_q}\frac{\<{\boldsymbol x}^t,{\boldsymbol x}_0{\boldsymbol \Pi}\>}{\|{\boldsymbol x}^t\|_F\|{\boldsymbol x}_0\|_F}\, ,
\end{align}
where $\< \cdot, \cdot\>$ denotes the Frobenius inner product.

In Figure \ref{fig:Degenerate}, we plot the evolution of the overlap in two sets of numerical simulations, for $q=3$ and $q=4$. Each curve is obtained by running AMP (with spectral initialization) on a different realization of the random matrix ${\boldsymbol A}$. The non-concentration phenomenon is quite clear:
\begin{itemize}
\item For a fixed number of iterations $t$ and large $n$, the quantity ${\rm Overlap}_n(\lambda;t)$ has large fluctuations that do not seem to vanish as $n\to\infty$.
\item Despite this, the algorithm is effective in reconstructing the signal: after $t= 10$ iterations, the accuracy achieved is nearly independent of the initialization.
\end{itemize}

\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{block_k3_lam15.pdf}\hspace{-0.15cm} \includegraphics[width=0.48\textwidth]{block_k4_lam175.pdf} \put(-90,-10){$t$} \put(-340,-10){$t$} \caption{Estimation in the Gaussian Block Model of Eq.~(\ref{eq:BlockMatrix}) using the AMP algorithm with spectral initialization of Eqs.~(\ref{eq:BlockSpectral}), (\ref{eq:AMP-Block}). We plot the reconstruction accuracy (overlap) as a function of the number of iterations for $q=3$, $\lambda=1.5$, $n=6000$ (left frame), and $q=4$, $\lambda=1.75$, $n=8000$ (right frame). Each set of symbols corresponds to a different realization of the random matrix ${\boldsymbol A}$, and curves report the corresponding prediction of Theorem \ref{thm:Block}. Dashed black lines report the Bayes optimal accuracy as per \cite{barbier2016mutual,lelarge2016fundamental}.} \label{fig:Degenerate} \end{figure} The empirical data in Figure \ref{fig:Degenerate} are well described by the state evolution prediction that is shown as continuous curves in the same figure. In this case, state evolution operates on the pair of matrices ${\boldsymbol M}_t, {\boldsymbol Q}_t\in\mathbb{R}^{q\times q}$, which are updated according to \begin{align} {\boldsymbol M}_{t+1}& = \lambda\E\Big\{f\big(q{\boldsymbol M}_t{\boldsymbol e}_{\sigma} +{\boldsymbol Q}_t^{1/2}{\boldsymbol G}\big)\, {\boldsymbol e}_{\sigma}^{{\sf T}}{\boldsymbol P}^{\perp}\Big\}\,, \label{eq:SE_Block_1}\\ {\boldsymbol Q}_{t+1}& = \E\Big\{f\big(q{\boldsymbol M}_t{\boldsymbol e}_{\sigma} +{\boldsymbol Q}_t^{1/2}{\boldsymbol G}\big)\, f\big(q{\boldsymbol M}_t{\boldsymbol e}_{\sigma} +{\boldsymbol Q}_t^{1/2}{\boldsymbol G}\big)^{{\sf T}}\Big\}\, ,\label{eq:SE_Block_2} \end{align} where $f:\mathbb{R}^q\to\mathbb{R}^q$ is defined as per Eq.~(\ref{eq:FunctionBlock}), and expectation is with respect to $\sigma$ uniform in $\{1,\dots,q\}$ independent of ${\boldsymbol G}\sim{\sf N}(0,{\boldsymbol I}_q)$. Note that ${\boldsymbol Q}_t$ is symmetric and that ${\boldsymbol Q}_t{\boldsymbol 1}= {\boldsymbol M}_t{\boldsymbol 1} = {\boldsymbol 1}^{{\sf T}} {\boldsymbol M}_t = 0$ for all $t\ge 1$. The state evolution prediction for the present model is provided by the next theorem, which is proved in Appendix \ref{app:Block}. \begin{theorem}\label{thm:Block} Let ${\boldsymbol A}\in\mathbb{R}^{n\times n}$ be the random matrix of Eq.~(\ref{eq:BlockMatrix}) with $\lambda>1$, and let ${\boldsymbol \varphi}_1,\dots,{\boldsymbol \varphi}_k$ be its top $k$ eigenvectors. Denote by ${\boldsymbol x}^t$ the sequence of estimates produced by the AMP algorithm of Eq.~(\ref{eq:AMP-Block}) with the spectral initialization in Eq.~(\ref{eq:BlockSpectral}). Let $\{{\boldsymbol M}_t, {\boldsymbol Q}_t\}_{t\ge 0}$ be the state evolution iterates with initialization ${\boldsymbol M}_0= ({\boldsymbol x}^0)^{{\sf T}}{\boldsymbol x}_0/n$ and ${\boldsymbol Q}_0=\lambda^{-1}{\rm diag}(1,1,\dots,1,0)$. Then, for any function $\psi:\mathbb{R}^{2q}\to \mathbb{R}$ with $|\psi({\boldsymbol x})-\psi({\boldsymbol y})|\le C(1+\|{\boldsymbol x}\|_2+\|{\boldsymbol y}\|_2)\|{\boldsymbol x}-{\boldsymbol y}\|_2$, we have, almost surely \begin{align} \lim_{n\to\infty}\left|\frac{1}{n}\sum_{i=1}^n\psi({\boldsymbol x}_i^t,{\boldsymbol x}_{0,i}) - \E\big\{\psi(q{\boldsymbol M}_t{\boldsymbol e}_{\sigma}+{\boldsymbol Q}_t^{1/2}{\boldsymbol G},{\boldsymbol P}^{\perp}{\boldsymbol e}_{\sigma})\big\}\right|= 0\, , \label{eq:SE_Block} \end{align} where expectation is with respect to $\sigma$ uniform in $\{1,\dots,q\}$ independent of ${\boldsymbol G}\sim{\sf N}(0,{\boldsymbol I}_q)$. Further, as $n\to\infty$, ${\boldsymbol M}_0$ converges in distribution as \begin{align} {\boldsymbol M}_0 \stackrel{{\rm d}}{\Rightarrow} \sqrt{q^{-1}\left(1-\lambda^{-2}\right)}\, \left[\begin{matrix} {\boldsymbol O}^{{\sf T}}_{(q-1)\times q}\\ {\boldsymbol 0}_{1\times q} \end{matrix}\right]\, ,\label{eq:InitBlock} \end{align} where ${\boldsymbol O}\in \mathbb{R}^{q\times (q-1)}$ is a Haar-distributed matrix with orthonormal columns and column space orthogonal to ${\boldsymbol 1}$. \end{theorem} The continuous curves in Figure \ref{fig:Degenerate} are obtained as described in the last theorem. For each experiment we generate a random matrix ${\boldsymbol A}$ according to Eq.~(\ref{eq:BlockMatrix}), compute the spectral initialization of Eq.~(\ref{eq:BlockSpectral}) and set ${\boldsymbol M}_0= ({\boldsymbol x}^0)^{{\sf T}}{\boldsymbol x}_0/n$. We then compute the state evolution sequence $\{({\boldsymbol M}_t,{\boldsymbol Q}_t)\}_{t\ge 0}$ via Eqs.~(\ref{eq:SE_Block_1}), (\ref{eq:SE_Block_2}), and use Eq.~(\ref{eq:SE_Block}) to predict the evolution of the overlap. The variability in the initial condition ${\boldsymbol M}_0$ leads to a variability in the predicted trajectory $\{({\boldsymbol M}_t,{\boldsymbol Q}_t)\}_{t\ge 0}$ that matches well with the empirical data. Finally, as mentioned above, AMP converges to an accuracy that is roughly independent of the matrix realization for large $t$, and matches the Bayes optimal prediction of \cite{barbier2016mutual,lelarge2016fundamental}. While a full explanation of this phenomenon goes beyond the scope of the present paper, this behavior can also be explained by Theorem \ref{thm:Block}: the initialization ${\boldsymbol M}_0$ breaks the symmetry between the $q$ blocks uniformly, as per Eq.~(\ref{eq:InitBlock}). Once the symmetry is broken, the state evolution iteration of Eqs.~(\ref{eq:SE_Block_1}), (\ref{eq:SE_Block_2}) converges to a fixed point that is unique up to permutations. \section{Main result} \label{sec:Symmetric} \subsection{Notations and definitions} \label{sec:Notation} We say that a function $\psi:\mathbb{R}^d\to \mathbb{R}$ is pseudo-Lipschitz of order $k$ (and write $\psi\in {\rm PL}(k)$) if there exists a constant $L$ such that $|\psi({\boldsymbol x})-\psi({\boldsymbol y})|\le L(1+(\|{\boldsymbol x}\|/\sqrt{d})^{k-1}+(\|{\boldsymbol y}\|/\sqrt{d})^{k-1})\|{\boldsymbol x}-{\boldsymbol y}\|_2/\sqrt{d}$. Recall that a sequence of probability distributions $\nu_n$ on $\mathbb{R}^m$ \emph{converges weakly} to $\nu$ ($\nu_n\stackrel{w}{\Rightarrow} \nu$) if, for any bounded Lipschitz function $\psi:\mathbb{R}^m\to\mathbb{R}$, $\lim_{n\to\infty}\E \psi({\boldsymbol X}_n) = \E \psi({\boldsymbol X})$ where expectation is with respect to ${\boldsymbol X}_n\sim \nu_n$, ${\boldsymbol X}\sim \nu$. Given a (deterministic) sequence of matrices ${\boldsymbol Z}_n\in\mathbb{R}^{n \times d}$ indexed by $n$ (with $d\ge 1$ fixed), we say that the empirical distribution of ${\boldsymbol Z}_n$ converges weakly to a probability distribution $\nu$ on $\mathbb{R}^d$ if, letting ${\boldsymbol z}_{n,i}={\boldsymbol Z}_n^{{\sf T}}{\boldsymbol e}_i$ denote the $i$-th row of ${\boldsymbol Z}_n$, we have \begin{align} \frac{1}{n}\sum_{i=1}^n \delta_{{\boldsymbol z}_{n,i}} \stackrel{w}{\Rightarrow} \nu\, .
\end{align} Equivalently, $\lim_{n\to\infty}n^{-1}\sum_{i=1}^n\psi({\boldsymbol z}_{n,i}) = \E\psi({\boldsymbol z})$ for ${\boldsymbol z}\sim \nu$ and any bounded Lipschitz function $\psi$. We apply the same terminology if we are given $d$ vectors $({\boldsymbol z}^{(n)}_1,\dots, {\boldsymbol z}^{(n)}_d)$, where ${\boldsymbol z}^{(n)}_\ell\in\mathbb{R}^n$: in this case ${\boldsymbol Z}_n$ is the matrix with columns ${\boldsymbol z}^{(n)}_1,\dots, {\boldsymbol z}^{(n)}_d$. Given two probability measures $\mu$ (on the space ${\mathcal X}$) and $\nu$ (on the space ${\mathcal Y}$), a coupling $\rho$ of $\mu$ and $\nu$ is a probability distribution on ${\mathcal X}\times{\mathcal Y}$ whose first marginal coincides with $\mu$ and whose second marginal coincides with $\nu$. We denote the set of couplings of $\mu,\nu$ by ${\cal C}(\mu,\nu)$. For $k\ge 1$, the Wasserstein-$k$ ($W_k$) distance between two probability measures $\mu$, $\nu$ on $\mathbb{R}^d$ is defined by \begin{align} W_k(\mu,\nu) \equiv \inf_{\rho\in{\cal C}(\mu,\nu)}\E_{({\boldsymbol X},{\boldsymbol Y})\sim \rho}\big\{\|{\boldsymbol X}-{\boldsymbol Y}\|_2^k\big\}^{1/k}\, ,\label{eq:WassersteinDef} \end{align} where the infimum is over all the couplings of $\mu$ and $\nu$. A sequence of probability distributions $\nu_n$ on $\mathbb{R}^m$ \emph{converges in $W_k$} to $\nu$ ($\nu_n\towa{k} \nu$) if $\lim_{n\to\infty} W_k(\nu_n,\nu) = 0$. An equivalent definition is that, for any $\psi\in{\rm PL}(k)$, $\lim_{n\to\infty}\E \psi({\boldsymbol X}_n) = \E \psi({\boldsymbol X})$ where expectation is with respect to ${\boldsymbol X}_n\sim \nu_n$, ${\boldsymbol X}\sim \nu$ \cite[Theorem 6.9]{villani2008optimal}. Generalizing from the definitions introduced for weak convergence, given a sequence of matrices ${\boldsymbol Z}_n\in\mathbb{R}^{n\times d}$ indexed by $n$ (with $d\ge 1$ fixed), we say that the empirical distribution of ${\boldsymbol Z}_n$ converges in $W_k$ to $\nu$ (a probability distribution on $\mathbb{R}^d$), if, letting ${\boldsymbol z}_{n,i}={\boldsymbol Z}_n^{{\sf T}}{\boldsymbol e}_i$ denote the $i$-th row of ${\boldsymbol Z}_n$, \begin{align} \frac{1}{n}\sum_{i=1}^n \delta_{{\boldsymbol z}_{n,i}} \towa{k} \nu\, . \end{align} Equivalently, $\lim_{n\to\infty}n^{-1}\sum_{i=1}^n\psi({\boldsymbol z}_{n,i}) = \E\psi({\boldsymbol z})$ for any $\psi\in{\rm PL}(k)$ (where ${\boldsymbol z}\sim \nu$). Again the same terminology is used for $d$-tuples of vectors $({\boldsymbol z}^{(n)}_1,\dots, {\boldsymbol z}^{(n)}_d)$. We will typically use upper case bold symbols for matrices (e.g. ${\boldsymbol A}$, ${\boldsymbol B}$,\dots), lower case bold for vectors (e.g. ${\boldsymbol u}$, ${\boldsymbol v},\dots$) and lower case plain font for scalars (e.g. $x,y,\dots$). However, we will often denote random variables and random vectors using upper case. We often consider vectors (or matrices) whose elements are indexed by arbitrary finite sets. For instance, given finite sets $S_1, S_2$, ${\boldsymbol Q}\in \mathbb{R}^{S_1\times S_2}$ is a matrix ${\boldsymbol Q}=(Q_{i,j})_{i\in S_1,j\in S_2}$. When there is an obvious ordering of the elements of $S_1$, $S_2$, such a matrix is understood to be identified with a matrix in $\mathbb{R}^{n_1\times n_2}$, where $n_i = |S_i|$. For instance $\mathbb{R}^{[m]\times [n]}$ is identified with $\mathbb{R}^{m\times n}$. Given a vector ${\boldsymbol v}\in\mathbb{R}^m$ and a set $S\subseteq [m]$ we denote by ${\boldsymbol v}_S\in\mathbb{R}^S$ the subvector indexed by elements of $S$.
Analogously, for a matrix ${\boldsymbol M}\in\mathbb{R}^{m\times n}$, we let ${\boldsymbol M}_{R,S}\in \mathbb{R}^{R\times S}$ be the submatrix with row indices in $R$ and column indices in $S$. If the submatrix includes all the rows, we adopt the shorthand ${\boldsymbol M}_{[m],S}$. Finally, we adopt the convention that all vectors (including the rows of a matrix) are viewed as column vectors, unless explicitly transposed. \subsection{Statement of the result: Symmetric case} Recall the spiked model of Eq.~(\ref{eq:SpikedDef}), which we copy here for the reader's convenience: \begin{align} {\boldsymbol A} = \sum_{i=1}^k\lambda_i {\boldsymbol v}_i{\boldsymbol v}_i^{{\sf T}} + {\boldsymbol W} \equiv {\boldsymbol V}{\boldsymbol \Lambda}{\boldsymbol V}^{{\sf T}} +{\boldsymbol W}\, .\label{eq:LowRankPlusNoise} \end{align} Here ${\boldsymbol v}_i\in\mathbb{R}^n$ are non-random orthonormal vectors and ${\boldsymbol W}\sim{\sf GOE}(n)$. We denote by ${\boldsymbol \varphi}_1,\dots,{\boldsymbol \varphi}_n$ the eigenvectors of ${\boldsymbol A}$, with corresponding eigenvalues $z_1\ge z_2\ge \dots\ge z_n$. For a sequence of functions $f_t(\,\cdot\,,\,\cdot\,):\mathbb{R}^{q}\times\mathbb{R}\to \mathbb{R}^q$, we consider the AMP algorithm that produces a sequence of iterates ${\boldsymbol x}^t$ according to the recursion \begin{align} {\boldsymbol x}^{t+1} = {\boldsymbol A} f_t({\boldsymbol x}^t,{\boldsymbol y}) - f_{t-1}({\boldsymbol x}^{t-1},{\boldsymbol y})\, {\sf B}_t^{{\sf T}}\, . \label{eq:AMP} \end{align} Here ${\boldsymbol y}\in \mathbb{R}^n$ is a fixed vector, and it is understood that $f_t(\,\cdot\,,{\boldsymbol y})$ is applied row-by-row. Namely, denoting by ${\boldsymbol x}^t_i\in \mathbb{R}^q$ the $i$-th row of ${\boldsymbol x}^t$, the $i$-th row of $f_t({\boldsymbol x}^{t},{\boldsymbol y})$ is given by $f_t({\boldsymbol x}^t_i,y_i)$. The `Onsager coefficient' ${\sf B}_t\in \mathbb{R}^{q\times q}$ is a matrix given by \begin{align} {\sf B}_t = \frac{1}{n}\sum_{i=1}^n\frac{\partial f_t}{\partial {\boldsymbol x}}({\boldsymbol x}^t_i,y_i)\, ,\label{eq:OnsagerDef} \end{align} where $\frac{\partial f_t}{\partial {\boldsymbol x}}\in \mathbb{R}^{q\times q}$ denotes the Jacobian matrix of the function $f_t(\,\cdot,y):\mathbb{R}^q\to\mathbb{R}^q$. The algorithm is initialized with ${\boldsymbol x}^0 \in \mathbb{R}^{n \times q}$ and $f_{-1}({\boldsymbol x}^{-1},{\boldsymbol y}) \in \mathbb{R}^{n \times q}$ is taken to be the all-zeros matrix. \begin{remark} Notice that the present setting generalizes the one of Section \ref{sec:Example} in two directions (apart from the more general model for the matrix ${\boldsymbol A}$, cf. Eq.~\eqref{eq:LowRankPlusNoise}). First, the state of the algorithm is a matrix ${\boldsymbol x}^t\in\mathbb{R}^{n\times q}$ with $q$ an arbitrary fixed integer. While it is natural to take $q$ equal to the number of outliers in the spectrum of ${\boldsymbol A}$ (i.e. $q=k_*$ according to the notations introduced below), we believe that a more general choice of $q$ can be useful for certain applications. Further, the nonlinearity $f_t$ is a function not only of ${\boldsymbol x}^t$ but also of the independent vector ${\boldsymbol y}$ that can be regarded as side information: again, we believe this additional freedom will be useful for future applications of our main result. \end{remark} We will make the following assumptions: \begin{enumerate}[font={\bfseries},label={(A\arabic*)}] \item\label{ass:Spikes} The values $\lambda_i(n)$ have finite limits as $n\to \infty$, which we denote by $\lambda_i$.
Further, assume there exist $k_+$, $k_-$ such that $\lambda_1\ge \dots \ge\lambda_{k_+}>1>\lambda_{k_++1}$ and $\lambda_{k-k_-}>-1> \lambda_{k-k_-+1}\ge \dots\ge \lambda_k$. We let $S \equiv (1,\dots, k_+,k-k_-+1,\dots,k)$, $k_* = k_++k_-$ and $\hat{S} \equiv (1,\dots, k_+,n-k_-+1,\dots,n)$. Further, we let ${\boldsymbol \Lambda}_S$ denote the diagonal matrix with entries $({\boldsymbol \Lambda}_S)_{ii} = \lambda_i$, $i\in S$. \item\label{ass:SpikesInit} Setting $q\ge k_*$, we initialize the iteration (\ref{eq:AMP}) by setting ${\boldsymbol x}^0\in\mathbb{R}^{n\times q}$ equal to the matrix with first $k_*$ ordered columns given by $(\sqrt{n}{\boldsymbol \varphi}_i)_{i\in \hat{S}}$, and ${\boldsymbol 0}$ for the remaining $q-k_*$ columns. \item\label{ass:SpikesDistr} The joint empirical distribution of the vectors $(\sqrt{n}{\boldsymbol v}_\ell(n))_{\ell\in S}$ and ${\boldsymbol y}$ has a limit in the Wasserstein-$2$ metric. Namely, if we let $\tilde{\boldsymbol v}_i = (\sqrt{n}v_{\ell,i})_{\ell\in S}\in\mathbb{R}^{k_*}$, then there exists a random vector ${\boldsymbol U}$ taking values in $\mathbb{R}^{k_*}$ and a random variable $Y$, with joint law $\mu_{{\boldsymbol U},Y}$, such that \begin{align} \frac{1}{n}\sum_{i=1}^n\delta_{\tilde{\boldsymbol v}_i,y_i} \stackrel{W_2}{\Rightarrow} \mu_{{\boldsymbol U},Y}\, . \end{align} \item\label{ass:Lip} The functions $f_t(\,\cdot\,,\,\cdot\,):\mathbb{R}^q\times\mathbb{R}\to\mathbb{R}^q$ are Lipschitz continuous. \end{enumerate} State evolution operates on the pair of matrices ${\boldsymbol M}_t\in\mathbb{R}^{q\times k_*}$, ${\boldsymbol Q}_t\in\mathbb{R}^{q\times q}$, with ${\boldsymbol Q}_t\succeq {\boldsymbol 0}$, evolving according to \begin{align} {\boldsymbol M}_{t+1} &= \E\Big\{f_t({\boldsymbol M}_t{\boldsymbol U}+{\boldsymbol Q}_t^{1/2}{\boldsymbol G},Y){\boldsymbol U}^{{\sf T}}\Big\} {\boldsymbol \Lambda}_S\, , \label{eq:Mt_def} \\ {\boldsymbol Q}_{t+1} & = \E\Big\{f_t({\boldsymbol M}_t{\boldsymbol U}+{\boldsymbol Q}_t^{1/2}{\boldsymbol G},Y) f_t({\boldsymbol M}_t{\boldsymbol U}+{\boldsymbol Q}_t^{1/2}{\boldsymbol G},Y)^{{\sf T}}\Big\}\, , \label{eq:Qt_def} \end{align} where expectation is taken with respect to $({\boldsymbol U},Y)\sim\mu_{{\boldsymbol U},Y}$ independent of ${\boldsymbol G}\sim{\sf N}(0,{\boldsymbol I}_q)$. These recursions are initialized with ${\boldsymbol Q}_0, {\boldsymbol M}_0$, which will be specified in the statement of Theorem \ref{thm:Main} below. We denote by ${\mathcal R}({\boldsymbol \Lambda}) \subseteq \mathbb{R}^{S\times [k]}$ the set of orthogonal matrices ${\boldsymbol R}$ (with ${\boldsymbol R}\bR^{{\sf T}}= {\boldsymbol I}_{S}$) such that $R_{ij}=0$ if $\lambda_i\neq\lambda_j$ or if $j\not \in S$. Notice that the $k_*\times k_*$ submatrix ${\boldsymbol R}_{S,S}$ of ${\boldsymbol R}\in{\mathcal R}({\boldsymbol \Lambda})$ is a block-diagonal orthogonal matrix, with blocks in correspondence with the degenerate $\lambda_i$'s. As such, these matrices form a compact group, which we will denote by ${\mathcal R}_*({\boldsymbol \Lambda}) \subseteq \mathbb{R}^{k_*\times k_*}$. This group can be endowed with the Haar measure, which is just the product of Haar measures over the orthogonal group corresponding to each block. We define the Haar measure on ${\mathcal R}({\boldsymbol \Lambda})$ by adding $k-k_*$ columns equal to $0$ for column indices $j\in [k]\setminus S$.
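In practice, the recursion of Eqs.~\eqref{eq:Mt_def}, \eqref{eq:Qt_def} can be evaluated numerically by replacing the expectation over $({\boldsymbol U},Y,{\boldsymbol G})$ with a Monte Carlo average. The following Python sketch is purely illustrative; the function names and the sampling interface are ours and not part of the model.
\begin{verbatim}
import numpy as np

def psd_sqrt(Q):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(Q)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def se_step(M, Q, f, sample_UY, Lambda_S, n_mc=100000, seed=0):
    """One state evolution step, with the expectation replaced by Monte Carlo.

    f(Z, Y)      : maps (n_mc, q) iterates and (n_mc,) side info to (n_mc, q)
    sample_UY(m) : returns m samples (U, Y), with U of shape (m, k_star)
    Lambda_S     : (k_star, k_star) diagonal matrix of outlier eigenvalues
    """
    rng = np.random.default_rng(seed)
    q, k_star = M.shape
    U, Y = sample_UY(n_mc)
    G = rng.standard_normal((n_mc, q))
    Z = U @ M.T + G @ psd_sqrt(Q)            # rows: M U + Q^{1/2} G
    F = f(Z, Y)
    M_next = (F.T @ U / n_mc) @ Lambda_S     # E{ f(.) U^T } Lambda_S
    Q_next = F.T @ F / n_mc                  # E{ f(.) f(.)^T }
    return M_next, Q_next
\end{verbatim}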
\begin{theorem}\label{thm:Main} Let $({\boldsymbol x}^t)_{t \geq 0}$ be the AMP iterates generated by algorithm (\ref{eq:AMP}), under assumptions \ref{ass:Spikes} to \ref{ass:Lip}, for the spiked matrix model (\ref{eq:SpikedDef}). For $\eta_n\ge n^{-1/2+{\varepsilon}}$ (with ${\varepsilon}>0$ fixed) such that $\eta_n\to 0$ as $n\to\infty$, define the set of matrices \begin{align} {\mathcal{G}}_n({\boldsymbol \Lambda})\equiv \Big\{{\boldsymbol Q} \in \mathbb{R}^{S\times [k]}:\, \min_{{\boldsymbol R}\in{\mathcal R}({\boldsymbol \Lambda})}\|{\boldsymbol Q}- ({\boldsymbol I}-{\boldsymbol \Lambda}_{S}^{-2})^{1/2}{\boldsymbol R}\|_{F}\le \eta_n\Big\}\, . \end{align} Let ${\boldsymbol \Omega}\equiv {\boldsymbol \Phi}_{\hat{S}}^{{\sf T}}{\boldsymbol V}\in\mathbb{R}^{k_*\times k}$ where ${\boldsymbol \Phi}_{\hat{S}}\in\mathbb{R}^{n\times k_*}$ is the matrix with columns $({\boldsymbol \varphi}_i)_{i\in\hat{S}}$ and ${\boldsymbol V}\in \mathbb{R}^{n\times k}$ is the matrix with columns $({\boldsymbol v}_i)_{i\in [k]}$. Denote by ${\boldsymbol \Omega}_0\in \mathbb{R}^{S\times S}$ the submatrix corresponding to the $k_*$ columns of ${\boldsymbol \Omega}$ with index in $S$, and let $\tilde{\boldsymbol \Omega}_0 = ({\boldsymbol I}-{\boldsymbol \Omega}_0{\boldsymbol \Omega}_0^{{\sf T}})^{1/2}$. Then, for any pseudo-Lipschitz function $\psi:\mathbb{R}^{q+k_*+1}\to \mathbb{R}$, $\psi\in{\rm PL}(2)$, the following holds almost surely for $t \geq 0$: \begin{align} \lim_{n\to\infty}\left|\frac{1}{n}\sum_{i=1}^n\psi({\boldsymbol x}_i^t,\tilde{\boldsymbol v}_i,y_i) - \E\big\{\psi({\boldsymbol M}_t{\boldsymbol U}+{\boldsymbol Q}_t^{1/2}{\boldsymbol G},{\boldsymbol U},Y)\big\}\right|= 0\, . \label{eq:main_SE_result} \end{align} Here $\tilde{\boldsymbol v}_i = (\sqrt{n}v_{\ell,i})_{\ell\in S}\in\mathbb{R}^{k_*}$ and expectation is with respect to $({\boldsymbol U},Y)\sim \mu_{{\boldsymbol U},Y}$ independent of ${\boldsymbol G}\sim{\sf N}(0,{\boldsymbol I}_q)$. Finally, $({\boldsymbol M}_t,{\boldsymbol Q}_t)$ is the state evolution sequence specified by Eqs. \eqref{eq:Mt_def} and \eqref{eq:Qt_def} with initialization $({\boldsymbol M}_0)_{[k_*],[k_*]}={\boldsymbol \Omega}_0$, $({\boldsymbol M}_0)_{[q]\setminus [k_*],[k_*]}={\boldsymbol 0}$, $({\boldsymbol Q}_0)_{[k_*],[k_*]} =\tilde{\boldsymbol \Omega}_0^2$, and $({\boldsymbol Q}_{0})_{i,j} = 0$ if $(i,j)\not \in [k_*]\times [k_*]$. Further, ${\mathbb P}({\boldsymbol \Omega}\in {\mathcal{G}}_n({\boldsymbol \Lambda})) \ge 1-n^{-A}$ for any $A>0$ provided $n>n_0(A)$, and ${\boldsymbol \Omega}$ converges in distribution to $({\boldsymbol I}-{\boldsymbol \Lambda}_S^{-2})^{1/2}{\boldsymbol R}$, with ${\boldsymbol R}$ Haar distributed on ${\mathcal R}({\boldsymbol \Lambda})$. \end{theorem} The theorem is proved for the case of a rank one spike in Appendix \ref{sec:ProofMain}. The proof for the general case is given in Appendix \ref{sec:ProofMainGeneral}. In the following section, we provide a brief overview of the key steps in the proof. \begin{remark} Theorem \ref{thm:Main} focuses on the case of symmetric square matrices ${\boldsymbol A}$. However, a standard reduction (see, for instance, \cite[Section 6]{berthier2017state}) allows one to obtain a completely analogous statement for rectangular matrices, namely ${\boldsymbol A}\in\mathbb{R}^{n\times d}$ with \begin{align} {\boldsymbol A} = \sum_{i=1}^k\lambda_i{\boldsymbol u}_i{\boldsymbol v}_i^{{\sf T}} + {\boldsymbol W}\, , \end{align} where ${\boldsymbol W}$ is a noise matrix with independent entries $W_{ij}\sim{\sf N}(0,1/n)$.
We already considered the case $k=1$ of this model in Section \ref{sec:ExampleRectangular}. Given Theorem \ref{thm:Main}, the generalization to rectangular matrices with $k>1$ is straightforward: we provide a precise statement in Appendix \ref{app:RectangularGeneral}. Another generalization of interest would be to non-Gaussian matrices. It might be possible to address this by using the methods of \cite{bayati2015universality}. \end{remark} \section{Proof outline} \label{sec:ProofOutline} We first consider the rank one spiked model in Eq. \eqref{eq:SpikedDefSpecial}, and give an outline of the proof of Theorem \ref{thm:Rank1}. Letting ${\boldsymbol v} \equiv \frac{{\boldsymbol x}_0}{\sqrt{n}}$, Eq. \eqref{eq:SpikedDefSpecial} can be written as \begin{equation} {\boldsymbol A} = \lambda{\boldsymbol v}\bv^{{\sf T}}+{\boldsymbol W}. \label{eq:rank1_v} \end{equation} Recalling that $({\boldsymbol \varphi}_1, z_1)$ are the principal eigenvector and eigenvalue of ${\boldsymbol A}$, we write ${\boldsymbol A}$ as the sum of a rank one projection onto the space spanned by ${\boldsymbol \varphi}_1$, plus a matrix that is the restriction of ${\boldsymbol A}$ to the subspace orthogonal to ${\boldsymbol \varphi}_1$. That is, \begin{equation} {\boldsymbol A} = z_1 {\boldsymbol \varphi}_1 {\boldsymbol \varphi}_1^{{\sf T}} + {\boldsymbol P}^{\perp}\left (\lambda {\boldsymbol v} {\boldsymbol v}^{{\sf T}} + {\boldsymbol W} \right) {\boldsymbol P}^{\perp}\, , \end{equation} where ${\boldsymbol P}^{\perp}= {\boldsymbol I} - {\boldsymbol \varphi}_1 {\boldsymbol \varphi}_1^{\sf T}$ is the projector onto the space orthogonal to ${\boldsymbol \varphi}_1$. The proof of Theorem \ref{thm:Rank1} is based on an approximate representation of the conditional distribution of ${\boldsymbol A}$ given $({\boldsymbol \varphi}_1, z_1)$. To this end, we define the matrix \begin{align} \tilde{\boldsymbol A} = z_1 {\boldsymbol \varphi}_1 {\boldsymbol \varphi}_1^{{\sf T}} + {\boldsymbol P}^{\perp}\left (\lambda {\boldsymbol v} {\boldsymbol v}^{{\sf T}} +\tilde{\boldsymbol W} \right) {\boldsymbol P}^{\perp}\, , \end{align} where $\tilde{\boldsymbol W} \sim {\sf GOE}(n)$ is independent of ${\boldsymbol W}$. The proof is based on a key technical lemma (Lemma \ref{lemma:Distr}) which shows that for large enough $n$, the conditional distribution of ${\boldsymbol A}$ given $({\boldsymbol \varphi}_1, z_1)$ is close (in total variation distance) to that of $\tilde{\boldsymbol A}$ with high probability. Given $\tilde{\boldsymbol A}$, we consider a sequence of AMP iterates $(\tilde{\boldsymbol x}^t)_{t \geq 0}$ obtained by replacing ${\boldsymbol A}$ with $\tilde{\boldsymbol A}$ in Eq.~\eqref{eq:AMPspecial0}. That is, we set \begin{align} \tilde{\boldsymbol x}^0 & = \sqrt{n}\, {\rm sign}(\<{\boldsymbol x}_0,{\boldsymbol \varphi}_1\>) \, {\boldsymbol \varphi}_1, \, \qquad \tilde{\boldsymbol x}^{t+1} = \tilde{\boldsymbol A}\, f_t(\tilde{\boldsymbol x}^t)-{\sf b}_t f_{t-1}(\tilde{\boldsymbol x}^{t-1})\, .\label{eq:AMPspecial_tilde0} \end{align} Theorem \ref{thm:Rank1} is proved in three steps: \begin{enumerate} \item Using the conditional distribution lemma (Lemma \ref{lemma:Distr}), we show that for any PL(2) test function $\psi: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, almost surely \begin{equation} \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n \psi (x^t_i, x_{0,i}) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n \psi (\tilde{x}^t_i, x_{0,i}), \label{eq:AMP_orig_mod} \end{equation} whenever the limit on the right exists.
\item Step 1 allows us to establish Theorem \ref{thm:Rank1} by analyzing the modified AMP iteration in \myeqref{eq:AMPspecial_tilde0}. For the modified AMP, the initialization $\tilde{\boldsymbol x}^0$ is independent of $\tilde{\boldsymbol W}$. Consequently, adapting techniques from standard AMP analysis we show that the following holds almost surely for any PL(2) test function $\psi: \mathbb{R}^3 \to \mathbb{R}$: \begin{equation} \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n \psi (\tilde{x}^t_i, x_{0,i}, \sqrt{n} \varphi_{1,i}) = \E \left\{ \psi( \alpha_t X_0 + \beta_t L + \tau_t G_0, X_0, L) \right\}. \label{eq:mod_SE0} \end{equation} Here the random variables $(X_0, L, G_0)$ are jointly distributed as follows: $X_0 \sim \nu_{X_0}$ and $G_0 \sim {\sf N}(0,1)$ are independent, and $L= \sqrt{1-\lambda^{-2}} X_0+ \lambda^{-1} G_1$, where $G_1 \sim {\sf N}(0,1)$ is independent of both $X_0$ and $G_0$. It is shown in Corollary \ref{coro:ConvergenceEigenvectors} that (almost surely) the empirical distribution of $({\boldsymbol x}_0, \sqrt{n}{\boldsymbol \varphi}_{1})$ converges in $W_2$ to the distribution of $(X_0, L)$. The constants $(\alpha_t, \beta_t, \tau_t)$ in \myeqref{eq:mod_SE0} are iteratively defined using a suitable state evolution recursion given in Eqs. \eqref{eq:AlphaRec}--\eqref{eq:TauRec}. \item The proof of Theorem \ref{thm:Rank1} is completed by showing that for $t \geq 0$, \begin{equation} \E \left\{ \psi( \alpha_t X_0 + \beta_t L + \tau_t G_0, X_0, L) \right\} = \E \left\{ \psi( \mu_t X_0 + \sigma_t G, X_0 ) \right\}, \label{eq:SE_orig_mod} \end{equation} where $(\mu_t, \sigma_t)_{t \geq 0}$ are the state evolution parameters defined in the statement of Theorem \ref{thm:Rank1}. \end{enumerate} Combining Eqs. \eqref{eq:AMP_orig_mod}--\eqref{eq:SE_orig_mod} yields the claim of Theorem \ref{thm:Rank1}. The detailed proof of this theorem is given in Appendix \ref{sec:ProofMain}. \vspace{10pt} \emph{General case}: For the general spiked model \myeqref{eq:SpikedDef}, the proof of the state evolution result (\myeqref{eq:main_SE_result} of Theorem \ref{thm:Main}) is along similar lines. Here the modified matrix $\tilde{\boldsymbol A}$ is defined as \begin{align} \tilde{\boldsymbol A} &\equiv \sum_{i\in \hat{S}}z_i{\boldsymbol \varphi}_i{\boldsymbol \varphi}_i^{{\sf T}} + {\boldsymbol P}^{\perp} \left(\sum_{i=1}^k\lambda_i {\boldsymbol v}_i{\boldsymbol v}_i^{{\sf T}} \, + \, \tilde{\boldsymbol W} \right){\boldsymbol P}^{\perp} \,,\label{eq:SpikedModelModified0} \end{align} where ${\boldsymbol P}^{\perp}$ is the projector onto the orthogonal complement of the space spanned by $({\boldsymbol \varphi}_i)_{i\in\hat{S}}$, and $\tilde{\boldsymbol W}\sim{\sf GOE}(n)$ is independent of ${\boldsymbol W}$. (Recall that $\hat{S}$ contains the indices $i$ for which $|\lambda_i| >1$.) Lemma \ref{lemma:Distr} shows that with high probability the conditional distributions of ${\boldsymbol A}$ and $\tilde{\boldsymbol A}$ are close in total variation distance. 
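As a side remark, the construction of the surrogate matrix $\tilde{\boldsymbol A}$ in Eq.~\eqref{eq:SpikedModelModified0} is easy to mimic numerically. The Python sketch below is purely illustrative: for simplicity it assumes that all outliers sit at the top of the spectrum (i.e. $k_-=0$), it uses an assumed ${\sf GOE}(n)$ normalization, and all names are ours.
\begin{verbatim}
import numpy as np

def surrogate_matrix(A, lam, V, k_star, seed=0):
    """Assemble a surrogate matrix in the spirit of Eq. (SpikedModelModified0).

    A      : observed n x n symmetric matrix
    lam    : length-k array of spike strengths lambda_i
    V      : n x k matrix whose columns are the spike directions v_i
    k_star : number of outliers (assumed here to be the largest eigenvalues)
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    z, Phi = np.linalg.eigh(A)                # eigenvalues in ascending order
    z, Phi = z[::-1][:k_star], Phi[:, ::-1][:, :k_star]
    P_perp = np.eye(n) - Phi @ Phi.T          # projector orthogonal to the phi_i
    G = rng.normal(size=(n, n)) / np.sqrt(n)
    W_tilde = (G + G.T) / np.sqrt(2.0)        # fresh noise, independent of W
    signal = (V * lam) @ V.T                  # sum_i lambda_i v_i v_i^T
    return Phi @ np.diag(z) @ Phi.T + P_perp @ (signal + W_tilde) @ P_perp
\end{verbatim}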
We then consider iterates $(\tilde{\boldsymbol x}^t)_{t \geq 0}$ generated via the AMP iteration using $\tilde{\boldsymbol A}$: \begin{align} \tilde{\boldsymbol x}^0 & = \sqrt{n} \, [{\boldsymbol \varphi}_1|\cdots|{\boldsymbol \varphi}_{k_*}|{\boldsymbol 0}|\cdots|{\boldsymbol 0}] \, ,\\ \tilde{\boldsymbol x}^{t+1} &= \tilde{\boldsymbol A}\, f_t(\tilde{\boldsymbol x}^t,{\boldsymbol y}) - f_{t-1}(\tilde{\boldsymbol x}^{t-1},{\boldsymbol y})\, {\sf B}_t^{{\sf T}} \, .\label{eq:AMPspecial_tilde_gen0} \end{align} Using Lemma \ref{lemma:Distr}, we first show that once the state evolution result \myeqref{eq:main_SE_result} holds for $\tilde{\boldsymbol x}^t$, it also holds for ${\boldsymbol x}^t$. The result for $\tilde{\boldsymbol x}^t$ is then shown in two steps, which are analogous to Eqs. \eqref{eq:mod_SE0} and \eqref{eq:SE_orig_mod} for the rank one case. \section*{Acknowledgements} We thank Leo Miolane for pointing out a gap in an earlier proof of Proposition \ref{thm:BayesOpt}. A.~M. was partially supported by grants NSF CCF-1714305 and NSF IIS-1741162. R.~V. was partially supported by a Marie Curie Career Integration Grant (Grant Agreement No. 631489).
\section{Introduction} \label{intro} Meson production experiments with photon and electron beams have been very active at facilities worldwide. A main interest is to gain information on the properties of nucleon resonances ($N^*$) such as their pole positions and electromagnetic transition form factors, using data complementary to those from $\pi N$ scattering. Even so-called over-complete measurements are planned in order to determine the amplitudes, and thus the $N^*$ properties, with significantly reduced model dependence. Much progress in this direction has been made, particularly with the free-proton target experiments, and is summarized in this workshop~\cite{volker}. To gain a complete picture of the $N^*$ properties, we need not only proton-target data but also neutron-target data. Measurements of neutron-target data, including polarization observables, have also been active recently. For example, unpolarized differential cross sections ($d\sigma/d\Omega_\pi$) and the polarization observable $E$ for $\gamma n\to\pi^0n$ have been measured at MAMI~\cite{mami1,mami2}, those for $\gamma n\to\pi^-p$ at JLab~\cite{clas1,clas2}, and those for $\gamma n\to\eta n$ at MAMI~\cite{mami3,mami4,mami5}. More new (preliminary) results have been presented in this workshop~\cite{nstar}. The primary interest in the neutron-target data lies in the electromagnetic neutron-to-$N^*$ transition form factors. These $\gamma^{(*)}n\to N^*$ form factors are combined with the $\gamma^{(*)}p\to N^*$ form factors to give the isospin structure of the $\gamma^{(*)}N\to N^*$ form factors, which are interesting quantities for understanding hadron structure. The isospin decomposition is also necessary when we apply the form factors to calculations of neutrino-induced meson production~\cite{nuDCC}. In addition to these primary interests, the actual data have also brought unexpected surprises; $\gamma n\to\eta n$ cross section data~\cite{mami3,graal,cbelsa} revealed a narrow peak at $W\sim 1.68$~GeV ($W$: the meson-baryon invariant mass) which had not been found in the $\pi N$ and $\gamma p$ reaction data. The deuteron has primarily been used in experiments to extract the neutron-target data. Conventionally, a certain set of kinematical cuts is applied to the deuteron data to supposedly isolate the quasi-free samples. However, this procedure has always been accompanied by the concern that the kinematical cuts and/or nuclear effects such as final state interactions (FSI) could distort the extracted neutron data away from the true free neutron-target observables. The purpose of this work is to critically examine, within a theoretical model, the effects of the FSI and kinematical cuts on (un)polarized $\gamma n\to\pi N$ cross sections extracted from $\gamma d\to\pi NN$ cross sections. \section{Model} \label{sec:model} This study will be based on a $\gamma d\to\pi NN$ reaction model that consists of the impulse [Fig.~\ref{fig:diag}(left)], $NN$ rescattering [Fig.~\ref{fig:diag}(center)], and $\pi N$ rescattering [Fig.~\ref{fig:diag}(right)] mechanisms. \begin{figure} \includegraphics[width=1\textwidth]{diagram} \caption{ Diagrammatic representation of reaction mechanisms considered in this work for $\gamma d\to\pi N N$: (left) impulse, (center) $NN$ rescattering, (right) $\pi N$ rescattering mechanisms. } \label{fig:diag} \end{figure} The model must incorporate realistic elementary amplitudes for the subprocesses involved.
Regarding $\gamma N \to \pi N$ and $\pi N \to \pi N$ amplitudes, we employ those generated with a dynamical coupled-channels (DCC) model~\cite{knls13,knls16}. The DCC model takes account of coupled-channels relevant to the nucleon resonance region, and has been developed through analyzing $\sim 27,000$ data points of $\pi N, \gamma N \to \pi N, \eta N, K\Lambda, K\Sigma$ from the thresholds up to $W\ \raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}\ 2.1$~GeV. As for the deuteron wave function and the $NN\to NN$ amplitudes, we employ those generated with the CD-Bonn potential~\cite{cdbonn}. \section{Results \small{(all numerical results are preliminary)}} \label{sec:result} First we confront parameter-free model predictions for $\gamma d\to \pi NN$ cross sections with data to examine the soundness of the model and FSI effects. Here, 'parameter-free' means that we did not adjust any model parameters using $\gamma d\to \pi NN$ cross section data. The comparisons are presented in Fig.~\ref{fig:1}. \begin{figure} \includegraphics[width=0.52\textwidth]{gd-pi0pn_305} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gd-pimpp_310} \caption{ The pion angular distribution for $\gamma d\to \pi^0 pn$ (left) and $\gamma d\to \pi^- pp$ (right). The photon energy in the laboratory frame is indicated in the figure. The curves are obtained with the impulse approximation (green dotted), the impulse and $NN$ rescattering mechanisms (blue dashed), and the full model (red solid) that also includes the $\pi N$ rescattering mechanism. The data are from Ref.~\cite{gd-pi0pn_data} (magenta squares) and Ref.~\cite{gd-pi0pn_data2} (black diamonds) for $\gamma d\to \pi^0 pn$, and from Ref.~\cite{gd-pimpp_data} for $\gamma d\to \pi^- pp$. } \label{fig:1} \end{figure} The agreement with data is reasonably good overall. For $\gamma d\to \pi^0 pn$, a large reduction of the cross sections due to the $NN$ rescattering is essential for the good agreement with the data. This reduction is mainly caused by the orthogonality between the deuteron wave function and the $^3S_1$ partial wave in the final $NN$ state. The $\pi N$ rescattering effect is rather moderate. Meanwhile, for $\gamma d\to \pi^- pp$ where the orthogonality mentioned above does not come into play, the FSI effects are rather small. We mention that the DCC-based deuteron reaction model predicts $\gamma d\to \eta pn$ cross sections that are in excellent agreement with data~\cite{etaN}. We also present in Fig.~\ref{fig:2} the polarization asymmetry $E$ defined by $E = (\sigma_{+-}-\sigma_{++})/(\sigma_{+-}+\sigma_{++})$, where $\sigma_{++}$ ($\sigma_{+-}$) is the $\gamma d\to\pi NN$ cross section for which the photon circular polarization and the deuteron spin orientation are parallel (antiparallel). \begin{figure}[b] \includegraphics[width=0.52\textwidth]{gd-pi0pn-E_500} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gd-pimpp-E_500} \caption{ The polarization asymmetry $E$ for $\gamma d\to \pi^0 pn$ (left) and $\gamma d\to \pi^- pp$ (right). The other features are the same as those in Fig.~\ref{fig:1}. } \label{fig:2} \end{figure} Comparing Figs.~\ref{fig:1} and \ref{fig:2}, we find that the FSI effects on the $\gamma d\to\pi^0pn$ cross sections are significantly canceled in the ratio $E$. Now we extract cross sections (including polarization observables) for quasi-free $\gamma n\to\pi N$ from those for $\gamma d\to\pi NN$ in a conventional manner. 
A set of kinematical cuts is applied to the $\gamma d\to\pi NN$ cross sections generated with the DCC-based model, and then further correction is made for the Fermi motion (Fermi-unsmearing). By comparing the extracted quasi-free cross sections with the corresponding free ones, which are calculated with the same elementary amplitudes used in the $\gamma d\to\pi NN$ model, we address the questions on how the extracted cross sections could be distorted by the FSI and kinematical cuts. We employ the kinematical cuts that have been used in recent experimental analyses to extract $\gamma n\to\pi^-p$ from $\gamma d\to\pi^-pp$, as summarized in Table~\ref{tab:1}. \begin{table}[t] \begin{center} \begin{tabular}[t]{c|cc}\hline & $d\sigma/d\Omega_\pi$~\cite{clas2} & $E$~\cite{clas1} \\\hline $\pi^-$ momentum (GeV)& $>0.1$ & $>0.4$ \\ Faster proton momentum (GeV)& $>0.36$ & $>0.4$ \\ Slower proton momentum (GeV)& $<0.2$ & $<0.1$ \\ $\Delta\phi=|\phi_{\pi^-} - \phi_{\rm faster\ proton}|$ & - & $160^\circ < \Delta\phi < 200^\circ$ \\\hline \end{tabular} \end{center} \caption{The kinematical cuts used in the experimental analyses~\cite{clas1,clas2} for extracting unpolarized cross sections ($d\sigma/d\Omega_\pi$) and the polarization asymmetry $E$ for $\gamma n\to \pi^-p$ from those for $\gamma d\to \pi^-pp$. The azimuthal angle difference between $\pi^-$ and the faster proton is denoted by $\Delta\phi$. } \label{tab:1} \end{table} We also use the same cuts to extract $\gamma n\to\pi^0 n$ from $\gamma d\to\pi^0pn$. For the Fermi-unsmearing, we follow the procedure described in Appendix~B of Ref.~\cite{tarasov2016}. We show in Fig.~\ref{fig:3} the quasi-free $\gamma n\to \pi^0 n$ cross sections extracted from theoretical $\gamma d\to\pi^0pn$ cross section, using the kinematical cuts of Table~\ref{tab:1} and the Fermi unsmearing. The green triangles (blue circles) [red squares] in the figure are extracted from the $\gamma d\to \pi^0 pn$ cross sections calculated with the impulse approximation (the impulse and $NN$ rescattering mechanisms) [the full model]. \begin{figure} \includegraphics[width=0.52\textwidth]{gn-pi0n-1350} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gn-pi0n-1660} \caption{ The pion angular distribution for $\gamma n\to \pi^0n$ extracted from $\gamma d\to \pi^0 pn$. The photon energy ($E_\gamma$) in the laboratory frame and the final $\pi^0n$ invariant mass ($W$) for $\gamma d\to \pi^0 pn$ are indicated in the figure. The points with error bars are extracted from theoretical $\gamma d\to \pi^0 pn$ differential cross sections, using the kinematical cuts of Table~\ref{tab:1} and the Fermi unsmearing. When the theoretical $\gamma d\to \pi^0 pn$ cross sections are calculated with the impulse approximation (the impulse and $NN$ rescattering mechanisms) [the full model], the green triangles (blue circles) [red squares] are obtained. The errors are only statistical from the Monte-Carlo integral, and are not shown when smaller than the point size. The solid curve is the free $\gamma n\to \pi^0n$ cross sections at $W$ from the DCC model. } \label{fig:3} \end{figure} The phase-space integral has been done with the Monte-Carlo method, and thus the numerical results are given by the points with error bars; the errors include only statistical ones associated with the Monte-Carlo method. The kinematical cuts remove the forward $\pi$ kinematical regions. We can see that a significant reduction due to the FSI remains even after the kinematical cuts have been applied. 
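As an aside, the cuts of Table~\ref{tab:1} are simple momentum and angle conditions and are straightforward to impose on simulated or measured events. The Python sketch below is purely illustrative (the array names are ours); the Fermi unsmearing and the Monte-Carlo phase-space integration are not reproduced here.
\begin{verbatim}
import numpy as np

def clas_style_cuts(p_pim, p_fast, p_slow, dphi_deg, observable="dsigma"):
    """Boolean mask implementing the kinematical cuts of Table 1 (GeV, degrees).

    p_pim, p_fast, p_slow : pi-minus, faster-proton and slower-proton momenta
    dphi_deg              : |phi_{pi-minus} - phi_{faster proton}|
    """
    if observable == "dsigma":          # cuts used for dsigma/dOmega_pi
        mask = (p_pim > 0.1) & (p_fast > 0.36) & (p_slow < 0.2)
    else:                               # cuts used for the asymmetry E
        mask = ((p_pim > 0.4) & (p_fast > 0.4) & (p_slow < 0.1)
                & (dphi_deg > 160.0) & (dphi_deg < 200.0))
    return mask
\end{verbatim}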
In Fig.~\ref{fig:3}, the $NN$ and $\pi N$ FSI contributions are both clearly visible and of comparable size. Therefore, only the quasi-free cross sections extracted from the $\gamma d\to \pi^0 pn$ cross sections calculated in the impulse approximation reproduce the free cross sections well. It would be interesting to compare the FSI effects on the quasi-free $\gamma n\to \pi^0 n$ and those on the $\gamma p\to \pi^0 p$ cross sections. This is particularly important because an experimental analysis sometimes assumes that the FSI effects are the same for both, as in Ref.~\cite{mami1}. Therefore, we show in Fig.~\ref{fig:4} the quasi-free $\gamma p\to \pi^0 p$ cross sections. \begin{figure} \includegraphics[width=0.52\textwidth]{gp-pi0p-1360} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gp-pi0p-1660} \caption{ The pion angular distribution for $\gamma p\to \pi^0p$ extracted from $\gamma d\to \pi^0 pn$. The other features are the same as those in Fig.~\ref{fig:3}. } \label{fig:4} \end{figure} Comparing Figs.~\ref{fig:3} and \ref{fig:4}, we find that the FSI effects are somewhat different between the $\gamma n\to\pi^0 n$ and $\gamma p\to\pi^0 p$ cross sections; the reduction rates generally differ by a few percent. Next we discuss the polarization observable $E$ for the quasi-free $\gamma n\to \pi^0 n$. This polarization asymmetry is defined by $ E = (\sigma_{1/2}-\sigma_{3/2})/(\sigma_{1/2}+\sigma_{3/2})$, where $\sigma_{3/2}$ ($\sigma_{1/2}$) is the $\gamma N$ cross section for which the photon circular polarization and the nucleon spin orientation are parallel (antiparallel). We show in Fig.~\ref{fig:5} $E$ for the quasi-free $\gamma n\to \pi^0 n$. \begin{figure} \includegraphics[width=0.52\textwidth]{gn-pi0n-E-1500} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gn-pi0n-E-1660} \caption{ The polarization observable $E$ for $\gamma n\to \pi^0n$ extracted from $\gamma d\to \pi^0 pn$. The other features are the same as those in Fig.~\ref{fig:3}. } \label{fig:5} \end{figure} The magnitudes of the FSI effects on $E$ are smaller than those on $d\sigma/d\Omega_\pi$ because of the partial cancellation in the ratio. The previous experimental analysis~\cite{mami2} assumed no FSI effects on $E$, invoking the cancellation. However, we can still see nonnegligible FSI effects on $E$ for $\gamma n\to \pi^0 n$ at some $W$; thus, depending on the precision of the data, corrections for this effect would be needed. Finally, we study the FSI effects on $d\sigma/d\Omega_\pi$ and $E$ for the quasi-free $\gamma n\to\pi^-p$ extracted from $\gamma d\to\pi^-pp$. Our numerical result is given in Fig.~\ref{fig:6}. \begin{figure} \includegraphics[width=0.52\textwidth]{gn-pimp-1500} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gn-pimp-1660} \includegraphics[width=0.52\textwidth]{gn-pimp-E-1500} \hspace{-3mm} \includegraphics[width=0.52\textwidth]{gn-pimp-E-1660} \caption{ The $d\sigma/d\Omega_\pi$ and $E$ for $\gamma n\to \pi^-p$ extracted from $\gamma d\to \pi^- pp$. The other features are the same as those in Fig.~\ref{fig:3}. } \label{fig:6} \end{figure} We again find that the quasi-free cross sections extracted from $\gamma d\to\pi^-pp$ in the impulse approximation reproduce the free cross sections well. For the backward pion kinematics, we find a $\sim 9$\% reduction of $d\sigma/d\Omega_\pi$ at $W=1500$~MeV due to the $\pi N$ rescattering. Tarasov et al. also reported a similar finding~\cite{tarasov2011}. Regarding the polarization asymmetry $E$, we do not find a visible FSI effect.
However, we find that the extracted quasi-free $E$ at the kinematical ends are noticeably different from the free $E$. This might be an artifact of the kinematical cuts, and a more elaborate study is underway. As seen in Figs.~\ref{fig:3}-\ref{fig:6}, the FSI effects often shift the extracted $\gamma n\to\pi N$ observables from the corresponding free ones. This may seem to indicate that we cannot correctly extract $\gamma n\to\pi N$ observables using the conventional method based on the kinematical cuts. Meanwhile, it is not computationally practical to determine the $\gamma n\to\pi N$ amplitudes (and thus the observables) by directly fitting them to the $\gamma d\to\pi NN$ data through a reaction model such as the one used in this work. A possible option would be to take an iterative procedure as follows: \begin{enumerate} \item Start with a certain set of parameters of a dynamical model that generates $\gamma n\rightarrow \pi N$ amplitudes. Implement the amplitudes into a dynamical $\gamma d\to\pi NN$ reaction model. \item With the same set of kinematical cuts as used in an experiment, calculate $\gamma d\to\pi NN$ cross sections with the dynamical model including the FSI, and extract the $\gamma n\to\pi N$ cross sections using the conventional method. By comparing the extracted $\gamma n\rightarrow \pi N$ cross sections with the free ones from the $\gamma n\rightarrow \pi N$ amplitudes built in the $\gamma d$ model, the FSI correction factor on the extracted cross sections is obtained. \item Extract $\gamma n\rightarrow \pi N$ data from $\gamma d\to\pi NN$ experimental data using the conventional method. The obtained quantity is further multiplied by the FSI correction factor estimated in the previous step, then $\gamma n\rightarrow \pi N$ cross section data are extracted. \item Calculate $\gamma n\rightarrow \pi N$ cross sections with the elementary amplitudes that have been built in the $\gamma d$ model, and compare them with the extracted data. If they agree, the extraction of the $\gamma n\rightarrow \pi N$ data as well as that of the corresponding amplitude are complete. If not, fit the extracted data with the dynamical model for $\gamma n\rightarrow \pi N$ by adjusting their parameters, then return to the step 1. \end{enumerate} \begin{acknowledgements} The author acknowledges H. Kamano, T.-S.H. Lee and T. Sato for their collaborations. This work was supported in part by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo-FAPESP, Process No.~2016/15618-8. Numerical computations in this work were carried out with SR16000 at YITP in Kyoto University, the High Performance Computing system at RCNP in Osaka University, the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and resources provided on Blues and/or Bebop, high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. \end{acknowledgements}
\section{Introduction} Recent years have seen the development of an abundance of specialized tools for the study of orbifolds. A variety of cohomology theories have been crafted specifically to investigate particular classes of orbifolds arising in complex geometry, algebraic geometry, symplectic geometry and string topology. Among these are: the {\em de Rham cohomology of orbifolds\/} \cite[Chapter 2]{ALR}, {\em orbifold Dolbeault cohomology\/} \cite{Bai54, Bai56, Sat56}, {\em Bott--Chern orbifold cohomology\/} \cite{Ang13}, {\em orbifold quantum cohomology\/} \cite[Section $2.2.5$]{CCLT}, {\em intersection homology\/} \cite{GM} and the {\em Chen--Ruan orbifold cohomology ring\/} \cite{CR}. These theories do yield information about the ordinary singular cohomology of orbifolds, but the coefficients tend to be real, complex or rational, as appropriate. In general, the singular integral cohomology ring of a given orbifold sometimes remains an intractable object, difficult to compute. Among the orbifolds for which the integral cohomology ring does succumb to the traditional methods of algebraic topology are those having cohomology which can be shown to be concentrated in even degree. The prime example is that of weighted projective spaces. Additively, their integral cohomology agrees with that for ordinary projective spaces, but the ring structure is complicated by an abundance of divisibility arising from the weights \cite{Ka, Amr}, \cite[Section $5.1$]{BSS}. Included also in this class of ``nice'' orbifolds are: weighted Grassmannians \cite{CR-wgr, AM}, certain singular toric varieties, toric orbifolds\footnote{In the literature, these are also called {\em quasi-toric orbifolds.\/}}, and orbifolds which arise as complex vector bundles over spaces with even cohomology. With this in mind, the determination of {\em verifiable\/} sufficient conditions on an orbifold, which ensure that the integral cohomology is concentrated in even degrees, becomes a natural question. Motivated by Kawasaki's computation for weighted projective spaces, the approach taken here begins by identifying classes of orbifolds which can be constructed by a sequence of canonical cofibrations, in a manner not unlike that for CW complexes. We employ ideas originated by Goresky in \cite{Gor} and developed in a toric context by Poddar and Sarkar in \cite{PS}. Confounding the process, however, is the ineluctable need to replace ordinary cells with ``{\bf q}-cells'', the quotient of a disc by the action of a finite group. The task reduces then to keeping at bay the intrusion of torsion into odd cohomological degrees. The basic structure of a {\bf q}-CW complex is reviewed and developed further in Section \ref{sec_q-CW_complex}. We note in Proposition \ref{prop_qCW_simp_complex} that a {\bf q}-CW complex is homotopic to a simplicial complex. In order to compute the integral cohomology of a {\bf q}-CW complex inductively, using a specific sequence of cofibrations, we introduce the notion of a {\em building sequence\/} $\{(Y_{i},0_{i})\}_{i = 1}^{\ell}$. Here, $Y_{i}$ is obtained from $Y_{i-1}$ by attaching a {\bf q}-cell of the form $e^{{k_{i}}}/G_{i}$, where $e^{{k_{i}}}$ is an open disc and $G_{i}$ is a finite group acting linearly. The image of the origin $0_{i}$ in the {\bf q}-cell $e^{{k_{i}}}/G_{i}$ plays an important role.
The results in the following sections are anchored in the observation that when a finite group $G$ acts linearly on a closed disc $\overline{D}^{n}$, the torsion which can arise in $H_{\ast}(S^{n-1}\!/G; \mathbb{Z})$ depends on $|G|$ only. In particular, we conclude that if $p$ is co-prime to all the primes dividing $|G|$, then $H_{\ast}(S^{n-1}\!/G; \mathbb{Z})$ has no $p$-torsion. Section \ref{sec_hom_q-cw_comp} contains the main theorem about {\bf q}-CW complexes. \begin{theorem}\label{thm_even_cells_no_p-torsion} Let $X$ be a {\bf q}-CW complex with no odd dimensional {\bf q}-cells and $p$ a prime number. If $\{(Y_{i},0_{i})\}_{i = 1}^{\ell}$ is a building sequence such that $\operatorname{gcd} \{p, |G_{i}|\} = 1$ for all $i$ with $e^{2k_{i}} /G_{i} = Y_{i}\setminus Y_{i-1}$, then $H_{\ast}(X; \mathbb{Z})$ has no $p$-torsion and $H_{\text{odd}}(X; \mathbb{Z}_{p})$ is trivial. \end{theorem} \noindent Successive applications of this theorem yield the following important theorem. \begin{theorem}\label{thm_even_cells_no_torsion} Let $X$ be a {\bf q}-CW complex with no odd dimensional {\bf q}-cells. If for each prime $p$ there is a building sequence $\{(Y_{i},0_{i})\}_{i = 1}^{\ell}$ such that $\operatorname{gcd} \{p, |G_{i}|\} = 1$ for all $i$ with $e^{2k_{i}} /G_{i} = Y_{i}\setminus Y_{i-1}$, then $H_{\ast}(X; \mathbb{Z})$ has no torsion and $H_{\text{odd}}(X; \mathbb{Z})$ is trivial. \end{theorem} Applications and examples are given in Section \ref{sec_Application_and_examples}. The topological construction of a toric orbifold $X(Q,\lambda)$, arising from a simple polytope $Q$ and an $\mathcal{R}$-characteristic pair $(Q,\lambda)$, is reviewed in Subsection \ref{toric_orbifolds}. Then, we re-visit the notion of a {\em retraction sequence} of triples, $\{(B_{k},E_{k},v_{k})\}_{k=1}^{\ell}$, for a simple polytope $Q$, which was introduced in \cite{BSS}. Each space $B_{k}$ is a polytopal complex obtained by successively removing carefully chosen vertices, beginning with $Q$ itself. Each $E_{k}$ is a particular face in $B_{k}$ and $v_{k}$ is a special vertex in $E_{k}$ called a {\em free\/} vertex. Then Proposition \ref{ret_build} provides the means by which we can use a {\bf q}-CW structure to study toric orbifolds. So, a toric orbifold which satisfies the hypothesis of Theorem \ref{thm_even_cells_no_torsion} has cohomology which is concentrated in even degrees. Moreover, from the point of view of retraction sequences, the orders $|G_{i}|$ appearing in Theorem \ref{thm_even_cells_no_torsion} are computable explicitly from the $\mathcal{R}$-characteristic data $(Q,\lambda)$ and may be denoted by $g_{E_i}(v_i)$. This result appears in Theorem \ref{no-tor-on-toric}. An example is given next which proves that the gcd condition in Theorem \ref{no-tor-on-toric} is {\em weaker\/} than the hypothesis of \cite[Theorem $1.1$]{BSS}, the main result in that paper about the integral cohomology of toric orbifolds. Our attention turns in Subsection \ref{toric_varieties} to the study of certain projective toric varieties. Following a brief review of the basic construction of a toric variety $X_{\Sigma}$ from a fan $\Sigma$, we note that a {\em simplicial\/} {\em polytopal \/} fan arises as the {\em normal\/} fan of a simple polytope $Q$. Moreover, the fan determines an $\mathcal{R}$-characteristic pair $(Q,\lambda)$. In this case, we have $$X(Q,\lambda) \cong X_{\Sigma}$$ \noindent as orbifolds with torus action. The results of Section \ref{sec_hom_q-cw_comp} are then applied to this class of toric varieties.
When the results are combined with \cite[Theorem 5.4]{BSS}, we get a complete description of the integral cohomology ring under conditions {\em weaker\/} than those of \cite[Theorem 1.2]{BSS}. In Subsection \ref{subsec_torus_orb}, retraction sequence techniques are shown, under certain conditions, to apply equally well to a generalization of a toric orbifold called a {\it torus orbifold\/}, introduced by Hattori and Masuda\/ \cite{HM}. Here, the simple polytope $Q$ is replaced by a manifold with corners $P$ which has additional properties. As observed by Masuda and Panov \cite[Section $4.2$]{MP}, they too can be constructed by an $\mathcal{R}$-characteristic pair $(P,\lambda)$. This allows us to obtain a result for torus orbifolds similar to Theorem \ref{no-tor-on-toric}. A discussion of weighted Grassmannians follows in Subsection \ref{subsec_wGr}. Abe and Matsumura \cite[Section 2]{AM} show that the Schubert cell structure, which is indexed by Young diagrams, determines a {\bf q}-cell structure on weighted Grassmannians. This allows for an application of Theorem \ref{thm_even_cells_no_p-torsion} at the end of this subsection. The paper concludes with a discussion of {\bf q}-CW complexes which may have odd dimensional {\bf q}-cells. The main theorem gives a condition on a prime $p$ and a building sequence $\{(Y_{i},0_{i})\}_{i = 1}^{\ell}$ for a {\bf q}-CW complex $X$ which ensures that $H_{\ast}(X;\mathbb{Z})$ has no $p$-torsion. Also a condition of having $p$-torsion in $H_{\ast}(X;\mathbb{Z})$ is given. {\bf Acknowledgements}: This work was supported in part by grants 210386 and 426160 from Simons Foundation. The fourth author was partially supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT $\&$ Future Planning (No. 2016R1A2B4010823) \section{$\mathbf{q}$-CW complexes}\label{sec_q-CW_complex} In this section, we introduce the notion of $\mathbf{q}$-CW complex structure in which the analogue of an open cell is the quotient of an open disk by an action of a finite group. The construction mirrors the construction of ordinary CW complexes given, for example, in Hatcher \cite{Hat}. We note that $\mathbf{q}$-CW complex structures were used to compute the rational homology of certain singular spaces having torus symmetries in \cite{PS}. \begin{definition} Let $G$ be a finite group acting linearly on a closed $n$-dimensional disc $\overline{D}^{n}$ centered at the origin. Such an action preserves $\partial{\overline{D}^n} = S^{n-1}$. We call the quotient $ \overline{D}^{n}/G $ an $n$-dimensional $\mathbf{q}$-{\it disc}. If $\overline{e}^n$ is $G$-equivariantly homeomorphic to $\overline{D}^n$, we call $e^n/G$ a $\mathbf{q}$-{\em cell}. By abuse of notation, we shall denote the boundary of $\overline{e}^n$ by $S^{n-1}$ without confusion. \end{definition} A $\mathbf{q}$-CW complex is constructed inductively as follows. Start with a discrete set $ X_{0} $, where points are regarded as $0$-dimensional $\mathbf{q}$-cells. Inductively, form the $n$-dimensional $\mathbf{q}$-skeleton $ X_{n} $ from $ X_{n-1} $ by attaching finitely many $n$-dimensional $\mathbf{q}$-cells $ \{ e^{n}/{G_{\alpha}}\}$ via continuous maps $\{\phi_{\alpha}\}$ where $ \phi_{\alpha} : S^{n-1}/G_{\alpha} \to X_{n-1} $. 
This means that $ X_{n}$ is the quotient space of the disjoint union $ X_{n-1}\bigsqcup_{\alpha} \{\overline{e}^n/G_{\alpha}\} $ of $ X_{n-1} $ with a finite collection of $n$-dimensional $\mathbf{q}$-disks $ \overline{e}^{n}/G_{\alpha} $ under the identification $ x\sim \phi_{\alpha}(x)$ for $ x\in S^{n-1}/G_{\alpha} $. If $X = X_n$ for some finite $n$, we call $X$ a {\em finite} $\mathbf{q}$-{\em CW complex} of dimension $n$. The topology of $ X $ is the quotient topology built inductively. We say $Y$ is a $\mathbf{q}$-CW subcomplex of $X$, if $Y$ is a $\mathbf{q}$-CW complex and $Y$ is a subset of $X$. \begin{definition} \begin{enumerate} \item Let $0_{i} \in X$ denote the image of origin in the $\mathbf{q}$-cell $e^{k_i}/G_{i}$. We call $0_i$ a {\it special point} corresponding to the $i$-th $\mathbf{q}$-cell. We may also refer to $0_{i}$ as a special point of $X$. \item Let $Y$ be a $\mathbf{q}$-CW subcomplex of a $\mathbf{q}$-CW complex $X$. Then a special point $0_{i} \in Y$ is called \emph{free} in $Y$ if there is a neighborhood of $0_{i}$ in $Y$ homeomorphic to $e^k/G_{i}$ for some $k \in \mathbb{Z}_{\geq 0}$ and finite group $G_i$. \end{enumerate} \end{definition} \begin{remark} \begin{enumerate} \item Every finite $\mathbf{q}$-CW complex has a free special point. A finite $\mathbf{q}$-CW complex can be obtained by attaching one $\mathbf{q}$-cell at each time. Attaching $\mathbf{q}$-cells need not be in increasing dimension. \item If $0_{i}$ is a free special point of a $\mathbf{q}$-CW complex $Y$, then $Y$ can be obtained from a $\mathbf{q}$-CW subcomplex $Y'$ by attaching a $\mathbf{q}$-cell $\overline{e}^{k_{i}}/G_{i}$. In this case, the natural deformation retract from $(\overline{e}^{k_{i}} \setminus \{0_{i}\})/G_{i}$ to $\partial{\overline{e}^{k_{i}}}/G_{i}$ induces a deformation retract from $Y \setminus \{0_{i}\}$ to $Y'$. \item Let $S^{k-1}_{\frac{1}{2}} = \{x \in e^k \cong D^k ~|~ |x| = \frac{1}{2}\}$. Then $S^{k-1}_{\frac{1}{2}}/G_i \cong S^{k-1}/G_i$ and the previous remark implies that the following is a cofibration: \begin{equation} \begin{tikzcd} S^{k-1}/G_i \arrow{r}{\phi_i} & Y' \arrow[hookrightarrow]{r} & Y \end{tikzcd} \end{equation} where $\phi_i$ is defined by the deformation retraction. \item If $k_{i}=0, 1, 2$ then $\overline{e}^{k_{i}}/G_{i}$ is homeomorphic to a closed disc of dimension $k_{i}$. So attaching $\overline{e}^{k_{i}}/G_{i}$ is nothing but attaching a disc of dimension $0, 1$ or $ 2$. Hence we may assume $G_{i}$ is trivial in these cases. \item Following Goresky \cite{Gor} one may obtain a CW structure on an effective orbifold. Often however, it is too complicated for computational purpose. \end{enumerate} \end{remark} \begin{prop}\label{prop_qCW_simp_complex} If $X$ is a $\mathbf{q}$-CW complex, then $X$ is homotopic to a simplicial complex. \end{prop} \begin{proof} We prove it by induction. By definition of $\mathbf{q}$-CW complex, $X_0$ has a simplicial complex structure. Suppose by induction, the space $X_{i-1}$ has a simplicial complex structure. Assume that $X_i$ is obtained by attaching a $\mathbf{q}$-cell $\overline{e}^k/G_i$ via the map $\phi_i : S^{k-1}/G_i \to X_{i-1}$. Using Theorem 3.6 of Illman \cite{Il}, we get a simplicial complex structure on $\overline{e}^k/G_i$. Then by the simplicial approximation theorem (\cite[Theorem2C.1]{Hat}), there is a homotopy from $\phi_i$ to a simplicial map $\xi_i : S^{k-1}/G_i \to X_{i-1}$. Then one can complete the proof by following the arguments in the proof of Theorem 2C.5 of \cite{Hat}. 
\end{proof} Now, we introduce the following definition of a \emph{building sequence}, which enables us to compute the integral cohomology ring of certain spaces with a $\mathbf{q}$-CW complex structure and to detect the existence of $p$-torsion. \begin{definition} Let $X$ be a finite $\mathbf{q}$-CW complex with $\ell$ $\mathbf{q}$-cells $e^{k_i}/G_i$ for $i \in \{1, \ldots, \ell\}$ and $1 \leq k_i \leq \dim X$. Let $Y_i$ be the $i$-th stage $\mathbf{q}$-CW subcomplex of $X$ and $0_i$ a free special point in $Y_i$. We say $\{(Y_i, 0_i)\}_{i =1}^{\ell}$ is a {\em building sequence} for $X$. \end{definition} We recall that the finite group $G$ acts on $\overline{D}^n$ linearly. Thus the boundary $S^{n-1}$ is preserved by the $G$-action. The quotient space $S^{n-1}/G$ is called an \emph{orbifold lens space} in \cite{BSS}. We call a finite group $K$ a \emph{$|G|$-torsion} if $K$ is trivial or the prime factors of the order $|K|$ form a subset of the prime factors of $|G|$. Now, the transfer homomorphism \cite[III.2]{Bor} leads us to the following proposition. \begin{prop}\label{prop_homology_orb_lens_sp} The homology of an orbifold lens space $S^{n-1}/G$ is $$H_j(S^{n-1}/G;{\mathbb{Z}})= \begin{cases} \mathbb{Z} & \text{ if } j=0,\\ a~ |G|\text{-torsion} &\text{ if } 1\leq j \leq n-2, \\ a ~|G|\text{-torsion} \text{ or } {\mathbb{Z}} & \text{ if } j=n-1.\end{cases} $$ In particular, if the $G$-action preserves the orientation of $S^{n-1}$, then $H_{n-1}(S^{n-1}/G;{\mathbb{Z}})\cong {\mathbb{Z}}$. \end{prop} \begin{proof} We see $H_0(S^{n-1}/G;{\mathbb{Z}})\cong {\mathbb{Z}}$ trivially. For $j=1, \dots, n-2$, consider the following isomorphism \begin{equation}\label{eq_transfer_isom} H^\ast(X/G;\mathbf{k}) \cong H^\ast(X;\mathbf{k})^G, \end{equation} where $X$ is a locally compact Hausdorff space, $G$ is a finite group and $\mathbf{k}$ is a field of characteristic zero or of characteristic coprime to $|G|$, see \cite[III.2]{Bor}. The result now follows from the universal coefficient theorem. In particular, if the $G$-action preserves the orientation, then $H_{n-1}(S^{n-1}/G;{\mathbb{Z}})\cong {\mathbb{Z}}$ because $S^{n-1}/G$ is orientable. \end{proof} The universal coefficient theorem leads to the following. \begin{corollary}\label{no_tor_in_lens} If $p$ is coprime to the prime factors $p_1, \dots, p_r$ of $|G|$, then the group $H_{\ast}(S^{n-1}/G;{\mathbb{Z}})$ has no $p$-torsion and $H_{j}(S^{n-1}/G; {\mathbb{Z}}_p)$ is trivial if $j \neq 0, n-1$. \end{corollary} We end this section by discussing the degree of an attaching map. Let $X$ be a $\mathbf{q}$-CW complex and $\phi : S^k/G \to X_j \subseteq X$ an attaching map where $X_j$ is the $j$-dimensional $\mathbf{q}$-skeleton. Then $\phi$ induces a homomorphism $$(\phi_{\ast})_k : H_k(S^k/G; {\mathbb{Z}}) \to H_k(X_j; {\mathbb{Z}}).$$ Now, $H_k(X_j; {\mathbb{Z}})$ is isomorphic to ${\mathbb{Z}}^s \oplus K$ for some non-negative integer $s$ and finite group $K$. By Proposition \ref{prop_homology_orb_lens_sp}, we have that $H_k(S^k/G; {\mathbb{Z}})$ is either finite or isomorphic to ${\mathbb{Z}}$. The next definition of {\it degree} will play an important role in Section \ref{sec_cells_in_every_dim}. \begin{definition}\label{def_degree} Suppose $H_k(S^k/G; {\mathbb{Z}}) \cong {\mathbb{Z}}$ and write $(\phi_{\ast})_k(1)=(d_1, \ldots, d_s, x)\in {\mathbb{Z}}^s \oplus K$ for a generator $1\in {\mathbb{Z}}$. We define the \emph{degree} of $\phi$ by \begin{equation}\label{eq_degree_of_attaching} \mathrm{deg} \phi:= \operatorname{gcd}\{d_i \mid 1\leq i \leq s, ~ d_i\neq 0 \}.
\end{equation} If all $d_1, \ldots, d_s$ are zero or $H_k(X_j;{\mathbb{Z}})$ has no free part, then we define the degree of $\phi$ to be $0$. \end{definition} \section{Integral homology of $\mathbf{q}$-CW complexes with cells in even dimensions} \label{sec_hom_q-cw_comp} This section is devoted to the proof of Theorem \ref{thm_even_cells_no_p-torsion}, which gives a sufficient condition under which the integral homology of certain families of finite $\mathbf{q}$-CW complexes has no $p$-torsion. Notice that one can verify Theorem \ref{thm_even_cells_no_torsion} by applying Theorem \ref{thm_even_cells_no_p-torsion} for each prime $p$. \begin{proof}[Proof of Theorem \ref{thm_even_cells_no_p-torsion}] We prove the claim by induction along the building sequence $\{(Y_i, 0_i)\}_{i=1}^\ell$. First, notice that if $i=1$, then $Y_i$ is a point, and if $i=2$, then $Y_i$ is a $\mathbf{q}$-CW complex obtained by attaching an even dimensional $\mathbf{q}$-cell to a point. That is, $Y_2$ is homeomorphic to the suspension of $S^{2k_2-1}/G_2$. Then by Proposition \ref{prop_homology_orb_lens_sp}, the claim is true. Now we assume that $H_{\ast}(Y_{i - 1}; \mathbb{Z})$ has no $p$-torsion and $H_{odd}(Y_{i -1}; {\mathbb{Z}}_p)$ is trivial for $i >1$. To complete the induction, we shall prove that the same holds for $Y_i$. Note that $Y_i$ can be constructed from $Y_{i-1}$ by attaching a $\mathbf{q}$-cell $\overline{e}^{2k_i}/G_i$ to it. So we have the cofibration: \begin{equation} \begin{tikzcd} S^{2k_i-1}/G_i \arrow{r} & Y_{i-1} \arrow[hookrightarrow]{r} & Y_i. \end{tikzcd} \end{equation} This cofibration induces the following long exact sequence in homology: \begin{equation}\label{eq_les_of_pair} \begin{tikzcd}[row sep=tiny] \cdots \arrow{r} & H_{j+1}(Y_{i}) \arrow{r} & \widetilde{H}_{j}(S^{2k_i -1}/G_i) \arrow{r} & \quad \\ H_{j}(Y_{i -1}) \arrow{r} & H_{j}(Y_i) \arrow{r} & \widetilde{H}_{j-1}(S^{2k_i - 1}/G_i) \arrow{r} & \cdots. \end{tikzcd} \end{equation} Suppose that $j$ is odd. By the induction hypothesis, the group $H_j(Y_{i - 1}; \mathbb{Z})$ has no $p$-torsion and $H_j(Y_{i -1}; {\mathbb{Z}}_p)$ is trivial. On the other hand, $\widetilde{H}_{j-1}(S^{2k_i - 1}/G_i; \mathbb{Z})$ has no $p$-torsion and $\widetilde{H}_{j-1}(S^{2k_i -1}/G_i; {\mathbb{Z}}_p)$ is trivial by Corollary \ref{no_tor_in_lens}. Therefore, from \eqref{eq_les_of_pair} we conclude that $H_j(Y_{i}; \mathbb{Z})$ has no $p$-torsion and that $H_j(Y_{i}; {\mathbb{Z}}_p)$ is trivial. Next, we assume that $j$ is even. Then, in the exact sequence \eqref{eq_les_of_pair}, the groups $\widetilde{H}_{j-1}(S^{2k_i - 1}/G_i; \mathbb{Z})$, $H_j(Y_{i - 1}; \mathbb{Z})$ and $\widetilde{H}_{j}(S^{2k_i - 1}/G_i; \mathbb{Z})$ have no $p$-torsion. Therefore $H_j(Y_{i}; \mathbb{Z})$ has no $p$-torsion. This completes the induction. \end{proof} \section{Applications and Examples}\label{sec_Application_and_examples} The goal of this section is to illustrate the results of Section \ref{sec_hom_q-cw_comp} with some broad applications which improve upon certain previous results. \subsection{Toric orbifolds}\label{toric_orbifolds} Toric orbifolds were introduced in \cite{DJ} and explicitly studied in \cite{PS}\footnote{After \cite{DJ}, \emph{toric manifold} was renamed in \cite{BP} as \emph{quasitoric manifold}, and the authors of \cite{PS} used \emph{quasitoric orbifolds} instead of the \emph{toric orbifolds} of \cite{DJ}. }.
In \cite{BSS}, the authors introduced the concept of a retraction sequence\footnote{Retraction sequences have a strong connection with the shellability of a simplicial complex. The authors of \cite{BSS} are preparing a separate paper about this. } for a simple polytope and discussed the integral homology and cohomology of toric orbifolds. In this subsection we show that a retraction sequence determines a $\mathbf{q}$-CW structure on the corresponding toric orbifold, and then we compare \cite[Theorem 1.1]{BSS} with Theorem \ref{thm_even_cells_no_torsion}. For convenience we adhere to the notation of \cite{BSS}. We begin by summarizing the definition of a \emph{retraction sequence} of a simple polytope. Given an $n$-dimensional simple polytope $Q$ with $\ell$ vertices, we construct a sequence of triples $\{(B_k, E_k, b_k)\}_{1 \leq k \leq \ell}$ inductively. The first term is defined by $B_1 = Q=E_1$, where $b_1$ is a vertex of $B_1$. Next, given the $(k-1)$-th term $(B_{k-1}, E_{k-1}, b_{k-1})$, the next term $(B_{k}, E_{k}, b_{k})$ is defined by setting \begin{equation}\label{eq_B_k_in_retraction_seq} B_{k}=\bigcup \{E \mid E \text{ is a face of } B_{k-1} \text{ and } b_{k-1}\notin V(E)\}. \end{equation} If there exists a vertex $b_k$ in $B_k$ which has a neighborhood homeomorphic to ${\mathbb{R}}^{d}_{\geq 0}$ as a manifold with corners for some $d \in \{1, \dots, \dim B_{k}\}$, then we let $E_k$ be the unique face containing $b_k$ with $\dim E_k=d$. We call such a $b_k$ a \emph{free vertex} of $B_k$. Such a sequence is called a {\it retraction sequence} for $Q$ if the sequence ends with $(B_\ell, E_\ell, b_\ell)$ such that $B_\ell=E_\ell=b_\ell$ for some vertex $b_\ell \in V(Q)$. We call a simple polytope $Q$ \emph{admissible} if there exists at least one free vertex in each polytopal subcomplex $B_k$ defined in \eqref{eq_B_k_in_retraction_seq}. Hence, given the $k$-th term $B_k$ as defined in \eqref{eq_B_k_in_retraction_seq}, any choice of a free vertex of $B_k$ extends to a retraction sequence. Two different retraction sequences of a prism are described in Figure \ref{fig_ret_of_prism} below, which shows, by symmetry, that the prism is admissible.
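As a minimal illustration of the definition (a standard example, not taken from \cite{BSS}), let $Q=\Delta^2$ be a $2$-simplex with vertices $a$, $b$ and $c$, and start with $b_1=a$. The construction \eqref{eq_B_k_in_retraction_seq} then yields $$(B_1,E_1,b_1)=(\Delta^2,\Delta^2,a), \qquad (B_2,E_2,b_2)=(\overline{bc},\overline{bc},b), \qquad (B_3,E_3,b_3)=(\{c\},\{c\},c),$$ since $B_2$ is the union of the faces of $\Delta^2$ not containing $a$, namely the edge $\overline{bc}$ together with its vertices, and $B_3=\{c\}$. Every vertex of every $B_k$ here is free, so $\Delta^2$ is admissible.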
\begin{figure}[ht] \begin{tikzpicture}[scale=0.6] \draw[fill=yellow, opacity=0.5] (0,1)--(0,3)--(1.3,3.2)--(2,2)--(2,0)--cycle; \node at (0,3) {\scriptsize$\bullet$}; \draw (0,1)--(2,0)--(2,2)--(0,3)--cycle; \draw (0,3)--(1.3,3.2)--(2,2); \draw[dashed] (1.3,3.2)--(2,2)--(2,0)--(1.3,1.2)--cycle; \draw[dashed] (0,1)--(1.3,1.2); \node[left] at (0,3) {\scriptsize $b_1$}; \draw[->] (2.5,1)--(3,1); \begin{scope}[xshift=100] \draw[fill=yellow!50] (1.3,3.2)--(2,2)--(2,0)--(1.3,1.2)--cycle; \draw (0,1)--(1.3,1.2)--(2,0)--cycle; \node at (1.3,3.2) {\scriptsize$\bullet$}; \node[left] at (1.3,3.2) {\scriptsize $b_2$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=200] \draw (2,2)--(2,0); \draw[fill=yellow!50] (0,1)--(1.3,1.2)--(2,0)--cycle; \node at (0,1) {\scriptsize$\bullet$}; \node[above] at (0,1) {\scriptsize $b_3$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=270] \draw[very thick] (2,2)--(2,0); \draw (1.3,1.2)--(2,0); \node at (2,2) {\scriptsize$\bullet$}; \node[left] at (2,2) {\scriptsize $b_4$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=350] \draw[very thick] (1.3,1.2)--(2,0); \node at (1.3,1.2) {\scriptsize$\bullet$}; \node[above] at (1.3,1.2) {\scriptsize $b_5$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=420] \node at (2,0) {\scriptsize$\bullet$}; \node[above] at (2,0) {\scriptsize $b_6$}; \end{scope} \begin{scope}[yshift=-130] \draw[fill=yellow, opacity=0.5] (0,1)--(0,3)--(1.3,3.2)--(2,2)--(2,0)--cycle; \node at (0,3) {\scriptsize$\bullet$}; \draw (0,1)--(2,0)--(2,2)--(0,3)--cycle; \draw (0,3)--(1.3,3.2)--(2,2); \draw[dashed] (1.3,3.2)--(2,2)--(2,0)--(1.3,1.2)--cycle; \draw[dashed] (0,1)--(1.3,1.2); \node[left] at (0,3) {\scriptsize $b_1$}; \draw[->] (2.5,1)--(3,1); \begin{scope}[xshift=100] \draw (1.3,3.2)--(2,2)--(2,0)--(1.3,1.2)--cycle; \draw[fill=yellow!50] (0,1)--(1.3,1.2)--(2,0)--cycle; \node at (0,1) {\scriptsize$\bullet$}; \node[above] at (0,1) {\scriptsize $b_2'$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=170] \draw[fill=yellow!50] (1.3,3.2)--(2,2)--(2,0)--(1.3,1.2)--cycle; \node at (2,2) {\scriptsize$\bullet$}; \node[right] at (2,2) {\scriptsize $b_3'$}; \draw[->] (3,1)--(3.5,1); \end{scope} \begin{scope}[xshift=270] \draw [very thick] (1.3,3.2)--(1.3,1.2); \draw (1.3,1.2)--(2,0); \node at (1.3,3.2) {\scriptsize$\bullet$}; \node[left] at (1.3,3.2) {\scriptsize $b_4'$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=350] \draw[very thick] (1.3,1.2)--(2,0); \node at (1.3,1.2) {\scriptsize$\bullet$}; \node[above] at (1.3,1.2) {\scriptsize $b_5'$}; \draw[->] (2.5,1)--(3,1); \end{scope} \begin{scope}[xshift=420] \node at (2,0) {\scriptsize$\bullet$}; \node[above] at (2,0) {\scriptsize $b_6'$}; \end{scope} \end{scope} \end{tikzpicture} \caption{Two retraction sequences.} \label{fig_ret_of_prism} \end{figure} Next, we recall briefly the construction of \emph{toric orbifolds}. A \emph{toric orbifold} is constructed from a combinatorial pair $(Q, \lambda)$, called an \emph{$\mathcal{R}$-characteristic pair}, of an $n$-dimensional simple convex polytope $Q$ and a function $$\lambda \colon \mathcal{F}(Q)=\{F_1, \dots , F_m\} \to {\mathbb{Z}}^n,$$ where $\mathcal{F}(Q)$ is the set of codimension one faces of $Q$, called \emph{facets}, and $\lambda$ satisfies the following condition: \begin{equation}\label{eq_R-char_function_condition} \{\lambda(F_{i_1}), \dots, \lambda(F_{i_k})\} \text{ is linearly independent, whenever } \bigcap_{j=1}^k F_{i_j}\neq \emptyset. 
\end{equation} We call such a function $\lambda$ an \emph{$\mathcal{R}$-characteristic function} on $Q$. For notational convenience, we sometimes write $\lambda_i=\lambda(F_i)$ and call it the \emph{$\mathcal{R}$-characteristic vector} assigned to the facet $F_i$. \begin{remark}\label{rmk_simple_lattice_polytope} One example of such a function is provided by a \emph{simple lattice polytope}, that is, a simple polytope obtained as the convex hull of finitely many lattice points in $\mathbb{Z}^n\subset \mathbb{R}^n$. One can naturally assign to each facet of a simple lattice polytope its primitive normal vector as an $\mathcal{R}$-characteristic vector. Such polytopes are among the key combinatorial objects in toric geometry, as we shall explain briefly in the next subsection. \end{remark} We may regard ${\mathbb{Z}}^n$, the target space of $\lambda$, as a ${\mathbb{Z}}$-submodule of the Lie algebra of $T^n$. For $x\in Q$, let $E(x) = F_{i_1} \cap \cdots \cap F_{i_k}$ denote the face with $x$ in its (relative) interior, and $T_{E(x)} \subseteq T^n$ the subtorus generated by the images of $\lambda(F_{i_1}), \dots, \lambda(F_{i_k})$ under the map $ {\rm span}_{\mathbb{Z}} \{\lambda_{i_1},\dots, \lambda_{i_k}\} \hookrightarrow {\mathbb{Z}}^n \xrightarrow{\exp} T^n$. Then, given an $\mathcal{R}$-characteristic pair $(Q,\lambda)$, we construct the following quotient space \begin{equation}\label{eq_def_of_toric_orb} X(Q, \lambda):= Q\times T^n / \sim_\lambda, \end{equation} where $(x, t)\sim_\lambda (y, s)$ if and only if $x=y$ and $t^{-1}s \in T_{E(x)}$. Now $X(Q, \lambda)$ has an orbifold structure induced by the group operation as described in \cite[Section 2]{PS}. The natural $T^n$-action, given by the multiplication on the second factor, induces the orbit map \begin{equation}\label{eq_orbit_map} \pi: X(Q, \lambda) \to Q \end{equation} defined by the projection onto the first factor. The space $X(Q, \lambda)$ is called the \emph{toric orbifold} associated to an $\mathcal{R}$-characteristic pair $(Q, \lambda)$. We remark that the authors of \cite{PS} gave an axiomatic definition of toric orbifolds, which generalizes the definition of toric manifolds of \cite[Section 1]{DJ}. Notice that the preimage $\pi^{-1}(F_{i})$ of a facet $F_i$ is a codimension-$2$ subspace fixed by a circle subgroup of $T^n$ generated by $\lambda_{i}$. Moreover, the preimage $\pi^{-1}(E)$ of a face $E=F_{i_1}\cap \dots \cap F_{i_k}$ is the intersection of $\pi^{-1}(F_{i_1}), \dots, \pi^{-1}(F_{i_k})$, which is again $T^n$-invariant. Indeed, it is shown in \cite[Section 2.3]{PS} that $\pi^{-1}(E)$ is also a toric orbifold for each face $E$ of $Q$, whose $\mathcal{R}$-characteristic pair is described below. As above, let $E=F_{i_1}\cap \dots \cap F_{i_k}$ be a face of codimension $k$ in $Q$, where $F_{i_1}, \dots, F_{i_k}$ are facets. One can define a natural projection \begin{equation}\label{eq_def_of_rho_E} \rho_E \colon \mathbb{Z}^n \to \mathbb{Z}^n / (({\rm span}_{\mathbb{Z}}\{\lambda_{i_1}, \dots, \lambda_{i_k}\}\otimes_\mathbb{Z} \mathbb{R}) \cap \mathbb{Z}^n), \end{equation} where $({\rm span}_{\mathbb{Z}}\{\lambda_{i_1}, \dots, \lambda_{i_k}\}\otimes_\mathbb{Z} \mathbb{R}) \cap \mathbb{Z}^n$ is a free $\mathbb{Z}$-module of rank $k$.
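For instance (a small illustrative check, not taken from \cite{BSS}), if $n=2$, $k=1$ and $\lambda_{i_1}=(0,2)$, then $$({\rm span}_{\mathbb{Z}}\{(0,2)\}\otimes_\mathbb{Z} \mathbb{R}) \cap \mathbb{Z}^2={\mathbb{Z}}(0,1) \quad \text{and} \quad \rho_E \colon \mathbb{Z}^2 \to \mathbb{Z}^2/{\mathbb{Z}}(0,1)\cong \mathbb{Z};$$ intersecting the real span with $\mathbb{Z}^2$ saturates the sublattice, which is what makes the quotient a free $\mathbb{Z}$-module.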
We regard $E$ as an independent simple polytope whose facets $\mathcal{F}(E)$ are of the form: $$\mathcal{F}(E) = \{ E\cap F_j \mid F_j \in \mathcal{F}(Q)~ \text{and}~ j \notin \{ i_1, \dots, i_k\}, ~\text{and}~ E\cap F_j\neq \varnothing\}.$$ Now, the composition of the maps $\rho_E$ and $\lambda$ yields an $\mathcal{R}$-characteristic function for $E$ \begin{equation}\label{eq_def_of_induced_char_ftn} \lambda_E \colon \mathcal{F}(E) \to \mathbb{Z}^{n-k}, \end{equation} defined by setting $\lambda_E(E\cap F_j)$ to be the primitive vector of $(\rho_E\circ \lambda)(F_j)$. Indeed, condition \eqref{eq_R-char_function_condition} for $\lambda_E$ follows naturally from the corresponding condition for $\lambda$. Next, we define certain integers associated to each vertex. Let $v$ be a vertex of $Q$ and $E$ a face of codimension $k$ containing $v$. Then, there are facets $F_{j_1}, \ldots, F_{j_{n-k}}$ such that $v= (E \cap F_{j_1}) \cap \cdots \cap (E \cap F_{j_{n-k}}).$ We define \begin{equation}\label{eq_g_E(v)} g_E(v) := \big| \det\left[ \lambda_E(E \cap F_{j_1})^t, \ldots, \lambda_E(E \cap F_{j_{n-k}})^t \right] \big|, \end{equation} where $t$ denotes the transpose. In particular, when $E=Q$ and $v=F_{i_1}\cap \cdots \cap F_{i_n}$, the number $g_Q(v)$ is \begin{equation}\label{eq_g_Q(v)} g_Q(v)=\big| \det \left[ \lambda(F_{i_1})^t, \ldots, \lambda(F_{i_n})^t \right] \big|. \end{equation} Given an $\mathcal{R}$-characteristic pair $(Q, \lambda)$ and each term $(B_i, E_i, b_i)$ of a retraction sequence of $Q$, we write $g_{B_i}(v):=g_{E}(v)$ for each free vertex $v$ of $B_i$, where $E$ is the unique face of $B_i$ associated to the free vertex $v$ as in the definition of a retraction sequence; this makes the statement of Theorem \ref{thm_gcd_con} simpler. \begin{remark} The number $g_E(v)$ encodes the singularity of $X(E, \lambda_E)$. Indeed, the local group of the orbifold chart around the point $\pi_E^{-1}(v)$, where $\pi_E \colon X(E, \lambda_E) \to E$ is the orbit map, has order $g_E(v)$. \end{remark} The next theorem gives a sufficient condition ensuring that the cohomology of a toric orbifold is concentrated in even degrees and torsion free. \begin{theorem}{\cite[Theorem 1.1]{BSS}}\label{thm_gcd_con} Let $X(Q,\lambda)$ be a toric orbifold over an admissible simple polytope $Q$, and assume that $$\operatorname{gcd} \big\{ g_{B_i}(v) \mid v ~\text{is a free vertex in}~B_i \big\}=1,$$ for each $B_i$ which appears in a retraction sequence of $Q$ with $\dim B_i > 1$. Then $H^\ast(X(Q, \lambda) ;{\mathbb{Z}})$ is torsion free and concentrated in even degrees. \end{theorem} We remark that \cite[Theorem 1.1]{BSS} was proved for toric orbifolds over admissible simple polytopes, although this assumption was not stated explicitly there. Retraction sequences and building sequences are related by the following. \begin{prop}\label{ret_build} If $\{(B_i, E_i, b_i)\}_{i=1}^\ell$ is a retraction sequence of an $n$-dimensional simple polytope $Q$ and $(Q, \lambda)$ is an $\mathcal{R}$-characteristic pair, then it induces a building sequence $\{(Y_{i}, 0_{i})\}_{i=1}^{\ell}$ of $X(Q, \lambda)$. \end{prop} \begin{proof} Let $E_i$ be a face of $Q$ with $\dim E_i=n-k$, and $U_i$ the open subset of $E_i$ obtained by deleting the faces of $E_i$ not containing the vertex $b_i$. Observe that $b_i=\bigcap_{j=1}^{n-k} (E_i \cap F_{i_j})$ for some facets $F_{i_1}, \ldots, F_{i_{n-k}}$ of $Q$ and $U_i$ is homeomorphic to ${\mathbb{R}}^{n-k}_{\geq 0}$ as a manifold with corners. Note that $\pi^{-1}(E_i)$ is a toric orbifold and homeomorphic to $X(E_i, \lambda_{E_i})$, see \cite[Proposition 3.3]{BSS}.
So, we can apply the orbifold chart construction for a toric orbifold described in \cite[Subsection 2.1]{PS} to see that $\pi^{-1}(U_i)$ is homeomorphic to the quotient of an open disc $D^{2(n-k)}$ by a finite group $G_i$, which is given by $$G_{i} = {\mathbb{Z}}^{n-k} / \text{span}_{\mathbb{Z}} \{ \lambda_{E_i}(E_i\cap F_{i_1}), \ldots, \lambda_{E_i}(E_i\cap F_{i_{n-k}}) \}.$$ Since $Q = \bigcup_{i=1}^{\ell} U_i$, we get a $\mathbf{q}$-CW complex structure on $X(Q, \lambda)$ with $\mathbf{q}$-cells $\pi^{-1}(U_i)$ for $i=1, \ldots, \ell$. Setting $Y_i = \bigcup_{j \geq \ell -i+1} (\pi^{-1}(U_j))$ gives a building sequence $\{(Y_i, 0_i)\}_{i=1}^{\ell}$, where $0_i = \pi^{-1}(b_{\ell-i+1})$ for $i=1, \ldots, \ell$. \end{proof} We note that the order $|G_i |$ is exactly $g_{E_i}(b_i)$, defined as in \eqref{eq_g_E(v)}. \begin{prop}\label{prop_new_theorem_is_better} If a toric orbifold $X(Q, \lambda)$ satisfies the assumption of Theorem \ref{thm_gcd_con}, then it satisfies the assumption of Theorem \ref{thm_even_cells_no_torsion}. \end{prop} \begin{proof} Let $X(Q, \lambda)$ be a toric orbifold over an $n$-dimensional admissible simple polytope $Q$ with $\ell$ vertices, and $p$ a prime. Since $B_1=Q$, we have $$\operatorname{gcd}\{g_Q(v) \mid v \in V(Q)\}=1$$ and so there is a vertex $b_1 \in V(Q) $ such that $\operatorname{gcd}\{p, g_Q(b_1)\}=1$. We take $(B_1, E_1, b_1) = (Q, Q, b_1)$ as the first step of the retraction sequence. Since $Q$ is simple, $B_2$ has $n$ free vertices, say $\{v_{i_1}, \ldots, v_{i_n}\}$, namely the other endpoints of the $n$ deleted edges of $Q$ meeting $b_1$. The assumption of Theorem \ref{thm_gcd_con} implies that $$\operatorname{gcd}\{g_{B_2}(v_{i_1}), \ldots, g_{B_2}(v_{i_n}) \} = 1,$$ which guarantees that there is a vertex $b_2 \in \{v_{i_1}, \ldots, v_{i_n}\}$ with $\operatorname{gcd}\{p, g_{B_2}(b_2)\}=1$. Let $E_2$ be the face of $B_1=Q$ determined by the edges of $B_2$ which meet $b_2$. Then $(B_2, E_2, b_2)$ can be taken as the second term of a retraction sequence. Since $Q$ is an admissible simple polytope, $B_3$ has at least one free vertex. Continuing this process, since the number of vertices $\ell=|V(Q)|$ is finite, one gets a retraction sequence $\{(B_i, E_i, b_i)\}_{i=1}^\ell$ of $Q$ such that $\operatorname{gcd}\{p, g_{E_i}(b_i)\}=1$ for $i=1, \ldots, \ell$. Therefore the result follows from Proposition \ref{ret_build} and its proof. \end{proof} The next theorem is a direct application of Theorem \ref{thm_even_cells_no_torsion} to toric orbifolds. \begin{theorem}\label{no-tor-on-toric} Let $X(Q, \lambda)$ be a toric orbifold. If, for each prime $p$, there is a retraction sequence $\{(B_i, E_i, v_i)\}_{i=1}^\ell$ of $Q$ such that $\operatorname{gcd}\{p, g_{E_i}(v_i)\}=1$ for all $i$, then $H^\ast(X(Q, \lambda);{\mathbb{Z}}) $ is torsion free and concentrated in even degrees. \end{theorem} \begin{proof} The proof follows from Proposition \ref{ret_build} and Theorem \ref{thm_even_cells_no_torsion}. \end{proof} The next example illustrates the fact that the converse of Proposition \ref{prop_new_theorem_is_better} is not true in general. Hence, Theorem \ref{no-tor-on-toric} generalizes Theorem \ref{thm_gcd_con}. \begin{example}\label{ex_converse_not_true} In this example we construct a toric orbifold which satisfies the condition of Theorem \ref{no-tor-on-toric}, but fails to satisfy the hypothesis of Theorem \ref{thm_gcd_con}. Consider the $\mathcal{R}$-characteristic pair $(Q, \lambda)$ illustrated in Figure \ref{fig-eg1}.
\begin{figure}[ht] \begin{tikzpicture} \draw[thick] (0,0)--(3,0)--(3,3)--(0,3)--cycle; \draw[thick] (4,4)--(4,1)--(3,0); \draw[thick, dashed] (1,1)--(0,0); \draw[thick, dashed] (1,1)--(1,4); \draw[thick, dashed](1,1)--(4,1); \node[left] at (0,3) {\footnotesize $v_{145}$}; \node[below right] at (3,3) {\footnotesize $v_{125}$}; \node[right] at (4,4) {\footnotesize $v_{235}$}; \node[above] at (1,4) {\footnotesize $v_{345}$}; \node[left] at (0,0) {\footnotesize $v_{146}$}; \node[right] at (3,0) {\footnotesize $v_{126}$}; \node[right] at (4,1) {\footnotesize $v_{236}$}; \node[above left] at (1,0.9) {\footnotesize $v_{346}$}; \node[fill=white] at (1.5,1.8) {\footnotesize $\lambda(F_1)=(0,1,2)$}; \node[fill=white] at (2,3.5) {\footnotesize $\lambda(F_5)=(1,0,0)$}; \draw[thick] (0,3)--(1,4)--(4,4)--(3,3); \node[right] at (5,0) {\footnotesize $\lambda(F_6)=(2,1,0)$}; \draw [<-] (5,0) [out=210, in=300] to (2.2,0); \draw[dotted] (2.2,0)--(2,0.5); \node[right] at (5,1.5) {\footnotesize $\lambda(F_2)=(0,1,0)$}; \draw[<-] (5,1.5) -- (3.5,2); \node[right] at (5,2.5) {\footnotesize $\lambda(F_3)=(0,0,1)$}; \draw [<-] (5,2.5) [out=140, in=0] to (4,3); \draw[dotted] (4,3) [out=180, in=60] to (2.5,2.5); \node[left] at (-1,2) {\footnotesize $\lambda(F_4)=(0,2,1)$}; \draw [<-] (-1,2) [out=40, in=180] to (0,2.5); \draw [dotted] (0,2.5)--(0.5, 2.5); \end{tikzpicture} \caption{An $\mathcal{R}$-characteristic function on 3-cube $Q$.} \label{fig-eg1} \end{figure} One can compute that \begin{align*} g_Q(v_{125})=2,~ g_Q(v_{235})=1,~ g_Q(v_{345})=2,~g_Q(v_{145})=3,\\ g_Q(v_{126})=4,~ g_Q(v_{236})=2, ~g_Q(v_{346})=4,~g_Q(v_{146})=6. \end{align*} Now we consider two different retraction sequences of $Q$, as in Figure \ref{fig_ret_of_cube}. The first retraction sequence is given by \begin{align}\label{eq_ret_(a)} \begin{split} &(B_1, B_1, v_{235}) \to (B_2, F_6, v_{236})\to (B_3, F_4, v_{345}) \to (B_4, F_1, v_{125})\to \\ &(B_5, F_4\cap F_6, v_{346})\to (B_6, F_1 \cap F_4, v_{145})\to (B_7, F_1 \cap F_6, v_{126}) \to\\ & (B_8, v_{146}, v_{146}), \end{split} \end{align} which is illustrated in Figure \ref{fig_ret_of_cube}-Retraction (a). In the second term of the sequence \eqref{eq_ret_(a)}, we compute $g_{F_6}(v_{236})$ as follows; since $$(\text{span}\{\lambda(F_6)\}\otimes_{\mathbb{Z}} {\mathbb{R}})\cap {\mathbb{Z}}^3 =\text{span}\{(2,1,0)\},$$ a choice of basis $\{(1,0,0), (2,1,0), (0,0,1)\}$ of ${\mathbb{Z}}^3$ induces the $\mathcal{R}$-characteristic function $$\lambda_{F_6} \colon \mathcal{F}(F_6)=\{F_i\cap F_6\mid i=1, 2,3,4\} \to {\mathbb{Z}}^3/\text{span}\{(2,1,0)\}\cong {\mathbb{Z}}^2,$$ on $F_6$ defined as in \eqref{eq_def_of_induced_char_ftn} by $\lambda_{F_6}(F_1\cap F_6)=(-1,1)$, $\lambda_{F_6}(F_2\cap F_6)=(-1,0)$, $\lambda_{F_6}(F_3\cap F_6)=(0,1)$ and $\lambda_{F_6}(F_4\cap F_6)=(-4,1).$ In particular, $$g_{F_6}(v_{236})=\left| \det \begin{bmatrix} \lambda_{F_6}(F_2\cap F_6)^t & \lambda_{F_6}(F_3\cap F_6)^t \end{bmatrix} \right| =\left| \det \begin{bmatrix} -1 & 0 \\ 0 & 1\end{bmatrix} \right| = 1.$$ Similarly, one can compute the integers defined in \eqref{eq_g_E(v)} with respect to the retraction sequence \eqref{eq_ret_(a)} as follows: $$g_{F_4}(v_{345})=g_{F_1} (v_{125})=g_{F_4\cap F_6}(v_{346})= g_{F_1 \cap F_4}(v_{145})=g_{F_1 \cap F_6}(v_{126})=1.$$ Hence, the $\mathcal{R}$-characteristic pair $(Q, \lambda)$ defined in Figure \ref{fig-eg1} satisfies the hypothesis of Theorem \ref{no-tor-on-toric}. 
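For concreteness, two of the values of $g_Q$ listed at the beginning of this example can be verified directly from \eqref{eq_g_Q(v)} (a routine check, recorded here only for illustration): $$g_Q(v_{125})=\left| \det \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \\ 2 & 0 & 0 \end{bmatrix} \right| = 2, \qquad g_Q(v_{146})=\left| \det \begin{bmatrix} 0 & 0 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 0 \end{bmatrix} \right| = 6,$$ where the columns are $\lambda(F_1)^t, \lambda(F_2)^t, \lambda(F_5)^t$ and $\lambda(F_1)^t, \lambda(F_4)^t, \lambda(F_6)^t$, respectively.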
\begin{figure} \centerline{ \scalebox{.60}{ \input{torus4.pdf_t} } } \caption{Two retraction sequences of $Q$.} \label{fig_ret_of_cube} \end{figure} Next, we consider a retraction sequence whose first choice of free vertex is $v_{126}$; see Figure \ref{fig_ret_of_cube}, Retraction (b). The vertices $v_{125}, v_{146}, v_{236}$ are free vertices in $B_2$. Similarly to the above, one can compute that $g_{F_4}(v_{146})=2,~g_{F_5} (v_{125})=2,$ and $g_{F_3}(v_{236})=2.$ Hence, $$\operatorname{gcd}\{g_{F_4}(v_{146}), g_{F_5} (v_{125}), g_{F_3}(v_{236})\} =2,$$ which fails to satisfy the assumption of Theorem \ref{thm_gcd_con} for $B_2$. \end{example} \subsection{Toric varieties}\label{toric_varieties} Toric varieties are important objects in algebraic geometry which have been studied since the beginning of the nineteen-seventies. They featured prominently in early investigations of the phenomenon of mirror symmetry. One approach to constructing them begins with a lattice $N(\cong {\mathbb{Z}}^n)$ and a \emph{fan} $\Sigma$, which is a finite collection of \emph{cones} in the vector space $N_{\mathbb{R}}:=N \otimes_{\mathbb{Z}} {\mathbb{R}} (\cong {\mathbb{R}}^n)$ satisfying certain conditions; see \cite[Chapters 1--3]{CLS}, \cite[Chapter 5]{Ew}, \cite[Chapter 1]{Ful-ITV} and \cite[Chapter 1]{Oda} for details. We begin by reviewing briefly the definition of cones and fans. A \emph{cone} is the positive hull of finitely many elements of $N_{\mathbb{R}}$. To be more precise, for a finite subset $S=\{v_1, \dots, v_k\}\subset N_{\mathbb{R}}$, the cone of $S$ is a subset of $N_{\mathbb{R}}$ of the form $$\sigma:=\text{Cone}(S)=\left\{ \sum_{i=1}^k c_i v_i ~\Big|~ c_i\geq 0\right\}.$$ To each cone $\sigma=\text{Cone}(S)$, one can associate a \emph{dual cone} $\sigma^\vee \subset M_{\mathbb{R}}$, where $M_{\mathbb{R}}$ is the vector space dual to $N_{\mathbb{R}}$, defined as follows: $$\sigma^\vee= \{y \in M_{\mathbb{R}} \mid \left< y, v\right> \geq 0 \text{ for all } v\in \sigma\}.$$ One can easily see that $(\sigma^\vee)^\vee=\sigma$. A cone $\sigma_1:=\text{Cone}(S_1)$ is called a \emph{face} of another cone $\sigma_2:=\text{Cone}(S_2)$ if there is a vector $w\in \sigma_2^\vee$, with corresponding hyperplane $H_w:=\{ x\in N_{\mathbb{R}} \mid \left<w, x \right> =0\}$, such that $\sigma_1= H_w \cap \sigma_2$; note that $\sigma_2$ lies in the positive half space $H_w^+:=\{x\in N_{\mathbb{R}} \mid \left<w, x\right> \geq 0\}$. Notice that $\sigma_2^\vee \subseteq \sigma_1^\vee$ if $\sigma_1$ is a face of $\sigma_2$. The one-dimensional faces of a cone $\sigma$ are called the \emph{generating rays} of $\sigma$. A cone $\sigma$ is called \emph{strongly convex} if it does not contain a nontrivial linear subspace of $N_{\mathbb{R}}$, and is called \emph{simplicial} if the generators of its rays are linearly independent in $N_{\mathbb{R}}$. Let $M$ be the lattice dual to $N$. The lattice points $S_\sigma:=\sigma^\vee\cap M$ form a finitely generated semigroup (see \cite[Proposition 1.2.17]{CLS}), which yields an algebraic variety $$U_\sigma:= \text{Spec}({\mathbb{C}}[S_\sigma]),$$ called the \emph{affine toric variety} associated to the cone $\sigma$. \begin{remark}\label{rmk_torus_action_on_affine_toric} Every affine toric variety $U_\sigma$ associated to a cone $\sigma\subseteq N_{\mathbb{R}}$ is equipped with an action of $({\mathbb{C}}^\ast)^n$, which is obtained by identifying $({\mathbb{C}}^\ast)^n\cong \text{Spec}({\mathbb{C}}[M])$. Here we regard $M$ as the dual cone of $\{0\} \subseteq N_{\mathbb{R}}$.
See \cite[Chapter 1]{CLS} or \cite[Section 1.3]{Ful-ITV}. \end{remark} A \emph{fan} $\Sigma$ in $N_{\mathbb{R}}$ is a finite collection of strongly convex cones such that \begin{enumerate} \item if $\sigma_2\in \Sigma$ and $\sigma_1$ is a face of $\sigma_2$, then $\sigma_1 \in \Sigma$; \item if $\sigma_1, \sigma_2\in \Sigma$, then $\sigma_1 \cap \sigma_2$ is a face of both $\sigma_1$ and $\sigma_2$. \end{enumerate} One natural way of defining a fan in toric geometry is to use a full dimensional lattice polytope $Q \subset M_{\mathbb{R}}$; see Remark \ref{rmk_simple_lattice_polytope}. To be more precise, let $\mathcal{F}(Q)=\{F_1, \dots, F_m\}$ be the facets of $Q$ as in Subsection \ref{toric_orbifolds}. For each $i$, let $\lambda_i\in N$ be the primitive vector and $a_i\in {\mathbb{Z}}$ the integer such that \begin{equation}\label{eq_simple_polytope} Q=\{ y\in M_{\mathbb{R}}\mid \left< y, \lambda_i \right> + a_i \geq 0 ~ \mbox{for all}~ i=1, \dots, m \}. \end{equation} Observe that $\lambda_i$ in \eqref{eq_simple_polytope} represents the inward normal vector of the facet $$F_i =\{ y\in M_{\mathbb{R}}\mid \left< y, \lambda_i \right> + a_i = 0\} \cap Q$$ for $i=1, \ldots, m$. Let a face $E$ of $Q$ be given by $E=F_{i_1} \cap \dots \cap F_{i_k}$ for some facets $F_{i_1} , \dots, F_{i_k}$. Then we can associate a cone $\sigma_E$ to each face $E$ of $Q$ as follows: $$\sigma_E = \text{Cone}\{\lambda_{i_1}, \dots, \lambda_{i_k}\}.$$ The fan $\Sigma_Q$ associated to a lattice polytope $Q$, which is called the \emph{normal fan} of $Q$, is the collection $\{\sigma_E\mid E \text{ is a face of } Q\}$, whose face structure naturally follows from the face structure of $Q$. In general, we call a fan $\Sigma$ \emph{polytopal} if $\Sigma$ is the normal fan of a lattice polytope. Finally, the \emph{toric variety} $X_\Sigma$ corresponding to a fan $\Sigma$ is defined by taking the disjoint union $\bigsqcup_{\sigma\in \Sigma}U_\sigma$ and gluing $U_{\sigma_1}$ and $U_{\sigma_2}$, for cones $\sigma_1$ and $\sigma_2$ with nonempty intersection, along the open subvariety corresponding to their common face $\sigma_1\cap\sigma_2$. In particular, the tori $({\mathbb{C}}^\ast)^n\cong U_{\{0\}} \subset U_\sigma$ of the affine toric varieties are identified under the gluing, which yields an action of $({\mathbb{C}}^\ast)^n$ on $X_\Sigma$. Further details may be found in \cite[Section 1.4]{Ful-ITV}. It is well known from the literature, such as \cite{CLS, Ew, Ful-ITV}, that a fan $\Sigma$ is \emph{polytopal} if and only if the corresponding toric variety $X_\Sigma$ is \emph{projective}, and that $\Sigma$ is \emph{simplicial}, i.e., $\Sigma$ consists of simplicial cones, if and only if $X_\Sigma$ is an \emph{orbifold}. Observe that if a polytopal fan $\Sigma$ is simplicial, then the corresponding polytope $Q$ is simple. Moreover, the primitive inward normal vectors $\lambda_i$ of the facets $F_i\in \mathcal{F}(Q)$ satisfy the condition \eqref{eq_R-char_function_condition} since $Q$ is a simple lattice polytope. Hence, such a fan $\Sigma$ naturally yields an $\mathcal{R}$-characteristic pair $(Q, \lambda)$, as introduced in Subsection \ref{toric_orbifolds}, by defining $\lambda(F_i)$ to be the primitive inward normal vector of the facet $F_i\in \mathcal{F}(Q)$; see Remark \ref{rmk_simple_lattice_polytope}.
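As a quick illustration (standard material, included here only for orientation), take $Q\subset M_{\mathbb{R}}\cong {\mathbb{R}}^2$ to be the lattice triangle with vertices $(0,0)$, $(1,0)$ and $(0,1)$. Writing $Q$ in the form \eqref{eq_simple_polytope} gives $$\lambda_1=(1,0),\ a_1=0, \qquad \lambda_2=(0,1),\ a_2=0, \qquad \lambda_3=(-1,-1),\ a_3=1,$$ so the normal fan $\Sigma_Q$ consists of the zero cone, the three rays generated by these vectors, and the three two-dimensional cones spanned by consecutive pairs. Here $g_Q(v)=1$ at every vertex $v$, and $X_{\Sigma_Q}$ is the smooth toric variety ${\mathbb{CP}}^2$.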
In particular, if an $\mathcal{R}$-characteristic pair $(Q, \lambda)$ is defined from a polytopal fan $\Sigma$ as above, then one can show that there is a $T^n$-equivariant homeomorphism \begin{equation}\label{eq_toric_orb_iso} X(Q, \lambda) \cong X_\Sigma, \end{equation} where the $T^n$-action on $X(Q, \lambda)$ is the multiplication on the second factor (see \eqref{eq_def_of_toric_orb}) and the $T^n$-action on $X_\Sigma$ is provided by regarding $T^n$ as the compact torus in $({\mathbb{C}}^\ast)^n$; see \cite[Chapter 12]{CLS} and \cite[Section 7]{DJ}. Hence, one can apply the theorems of Subsection \ref{toric_orbifolds} to a projective toric variety arising from a simplicial fan, i.e., a toric variety arising from the normal fan of a simple lattice polytope. Recently, the authors of \cite{BSS} computed the integral cohomology ring of certain toric varieties associated to the normal fan $\Sigma$ of a simple lattice polytope $Q$. The description was given in terms of the \emph{weighted Stanley--Reisner ring} $w\mathcal{SR}[\Sigma]$ of a simplicial fan $\Sigma$, which is a certain subring of the usual Stanley--Reisner ring $\mathcal{SR}[Q]={\mathbb{Z}}[x_1, \dots, x_m]/\mathcal{I}$; see \cite[(5.3)]{BSS}. This subring is determined by an \emph{integrality condition} which encodes the singularity of each fixed point. Then, assuming $H^{odd}(X_\Sigma;{\mathbb{Z}})=0$, it was shown that there is an isomorphism between $H^\ast(X_\Sigma;{\mathbb{Z}})$ and $w\mathcal{SR}[\Sigma]/ \mathcal{J}$, where $\mathcal{J}$ is the ideal generated by the linear relations in \eqref{eq_linear_relations}, which are determined by the geometry of $\Sigma$. Hence, with the aid of Theorem \ref{thm_even_cells_no_p-torsion}, we get the following theorem, which generalizes \cite[Theorem 5.3]{BSS}. \begin{theorem}\label{cor_cohom_of_toric_orb_with_gcd_cond.} Let $X_\Sigma$ be a projective toric variety associated to the normal fan $\Sigma$ of a simple lattice polytope $Q$ with $m$ facets. If for each prime $p$ there is a retraction sequence $\{(B_i, E_i, v_i)\}_{i=1}^\ell$ of $Q$ such that $\operatorname{gcd}\{p, g_{E_i}(v_i)\}=1$ for all $i$, then the cohomology ring of $X_\Sigma$ is $$H^\ast(X_\Sigma;\mathbb{Z}) \cong w\mathcal{SR}[\Sigma]/\mathcal{J}\subseteq {\mathbb{Z}}[x_1, \dots, x_m]/(\mathcal{I}+\mathcal{J}),$$ where $\mathcal{I}$ is the Stanley--Reisner ideal of $Q$ and $\mathcal{J}$ is the ideal generated by the linear relations \begin{equation}\label{eq_linear_relations} \sum_{i=1}^m \langle \lambda_i, \mathbf{e}_j\rangle x_i=0, \quad j=1, \dots, n, \end{equation} where $\mathrm{deg}\, x_i=2$, $\mathbf{e}_j$ denotes the $j$-th standard unit vector in $\mathbb{Z}^n$ and $\lambda_i$ is defined by \eqref{eq_simple_polytope}. \end{theorem} \subsection{Torus orbifolds}\label{subsec_torus_orb} A \emph{torus orbifold} is a $2n$-dimensional closed orbifold with an action of the $n$-dimensional real torus with non-empty fixed point set. Introduced by Hattori and Masuda \cite{HM}, they are a far-reaching generalization, beyond the spaces $X(Q, \lambda)$, of singular toric varieties having orbifold singularities. In this subsection, we recall the definition of locally standard torus orbifolds and apply Theorems \ref{thm_even_cells_no_p-torsion} and \ref{thm_even_cells_no_torsion} to this class. The faces of a manifold with corners can be defined following \cite[Section 6]{Dav}. A manifold with corners is called \emph{nice} if every codimension-$2$ face is a connected component of the intersection of two codimension-$1$ faces.
Let $P$ be an $n$-dimensional nice manifold with corners and $\mathcal{F}(P)=\{F_1, \dots, F_m\}$ the codimension-$1$ faces of $P$. One can define an $\mathcal{R}$-characteristic function $\lambda$ on $P$ satisfying the condition \eqref{eq_R-char_function_condition}. In Figure \ref{fig-eg2}, we give some examples of nice manifolds with corners and $\mathcal{R}$-characteristic functions on them. A torus orbifold $X_P(\lambda)$ from a pair $(P, \lambda)$, where $P$ is a nice manifold with corners and $\lambda$ is an $\mathcal{R}$-characteristic function on $P$, can be defined similarly to the construction of a toric orbifold from an $\mathcal{R}$-characteristic pair; see for example \cite{KMZ, MMP, MP} and \eqref{eq_def_of_toric_orb} in Subsection \ref{toric_orbifolds}. Moreover, all the notation for the local structures of a toric orbifold in Subsection \ref{toric_orbifolds}, such as $g_Q(v)$ and $g_E(v)$, naturally extends to a torus orbifold $X_P(\lambda)$. We remark that certain non-convex manifolds with corners and the corresponding torus manifolds are studied in \cite{PS2}. \begin{example}\label{eg02} In this example we exhibit two 3-dimensional manifolds with corners and an $\mathcal{R}$-characteristic function on each. Figure (a) shows a cylinder over an eye-shape, which is not a simple polytope. In Figure (b), the intersection of the facets $ABCDEF$ and $BCHEFG$ is the disjoint union of the edges $BC$ and $EF$, so it is also not a simple polytope. \begin{figure}[ht] \centerline{ \scalebox{.56}{ \input{torus1.pdf_t} } } \caption{Two $\mathcal{R}$-characteristic functions.} \label{fig-eg2} \end{figure} \end{example} \begin{figure}[ht] \centerline{ \scalebox{.50}{ \input{torus2.pdf_t} } } \caption{Some retraction sequences of a nice manifold with corners.} \label{fig-eg3} \end{figure} We remark that a retraction sequence may exist for a nice manifold with corners which is not necessarily a simple polytope; see Figure \ref{fig-eg3} for an example. We also note that if $\{(B_i, E_i, v_i)\}_{i=1}^\ell$ is a retraction sequence of a nice manifold with corners $P$ and $X_P(\lambda)$ is a torus orbifold, then one can construct a building sequence for $X_P(\lambda)$ analogously to Proposition \ref{ret_build}. Now we can apply Theorem \ref{thm_even_cells_no_torsion} to this class of orbifolds. \begin{corollary} Let $X_P(\lambda)$ be a torus orbifold over a nice manifold with corners $P$. If for each prime $p$ there is a retraction sequence $\{(B_i, E_i, v_i)\}_{i=1}^\ell$ of $P$ such that $\operatorname{gcd}\{p, g_{E_i}(v_i)\}=1$ for all $i$, then $H^\ast(X_P(\lambda); {\mathbb{Z}})$ is torsion free and concentrated in even degrees. \end{corollary} \subsection{A $\mathbf{q}$-cell structure for Weighted Grassmannians}\label{subsec_wGr} In this subsection, we introduce briefly the construction of a weighted Grassmannian $\mathbf{w}Gr(d,n)$ \cite{CR-wgr} and its $\mathbf{q}$-CW structure \cite[Section 2]{AM}. In addition, we show an application of Theorem \ref{thm_even_cells_no_p-torsion} to certain weighted Grassmannians. We first consider the action of the general linear group $GL_n({\mathbb{C}})$ on the $d$-th exterior power $\wedge^d{\mathbb{C}}^n$ of ${\mathbb{C}}^n$ induced from the canonical action of $GL_n({\mathbb{C}})$ on ${\mathbb{C}}^n$. It naturally induces an action of $GL_n({\mathbb{C}})$ on the projective space $\mathbb{P}(\wedge^d {\mathbb{C}}^n)$.
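Concretely (spelling out the standard induced action, which is only implicit above), the action on decomposable vectors is $$g\cdot(v_1\wedge \cdots \wedge v_d)=(gv_1)\wedge \cdots \wedge (gv_d), \qquad g\in GL_n({\mathbb{C}}),\ v_1, \dots, v_d\in {\mathbb{C}}^n,$$ extended linearly to all of $\wedge^d{\mathbb{C}}^n$. In particular, a diagonal matrix $t=(t_1, \dots, t_n)$ acts on the basis vector $e_{i_1}\wedge \cdots \wedge e_{i_d}$ by the scalar $t_{i_1}\cdots t_{i_d}$; this is the computation underlying the description of the isotropy groups $G_\alpha$ below.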
The standard Pl\"ucker embedding of the Grassmannian $Gr(d,n)$ into $\mathbb{P}(\wedge^d {\mathbb{C}}^n)$ is given by the $GL_n({\mathbb{C}})$ orbit of $[e_1\wedge \dots \wedge e_d]$. Now, we consider $$aPl(d,n)^\ast := GL_n({\mathbb{C}})\cdot ({\mathbb{C}} e_1\wedge \cdots \wedge e_d) \setminus \{\mathbf 0\},$$ where $GL_n({\mathbb{C}})\cdot ({\mathbb{C}} e_1\wedge \cdots \wedge e_d)$ denotes the $GL_n({\mathbb{C}})$-orbit of the line ${\mathbb{C}} e_1\wedge \cdots \wedge e_d \subset \wedge^d{\mathbb{C}}^n$ generated by $e_1\wedge \cdots \wedge e_d$, and $\mathbf{0}$ denotes the origin in $\wedge^d{\mathbb{C}}^n$. Next, for the maximal torus $T$ in $GL_n({\mathbb{C}})$, we consider \begin{equation}\label{eq_group_K} K:=T\times {\mathbb{C}}^\ast \cong ({\mathbb{C}}^\ast)^{n+1} \end{equation} and its action on $\wedge^d {\mathbb{C}}^n$, where $T$ acts naturally as a subgroup of $GL_n({\mathbb{C}})$ and $\mathbb{C}^\ast$ acts by the scalar multiplication on the vector space $\wedge^d {\mathbb{C}}^n$. Then, $aPl(d, n)$ is equipped with a $K$-action induced from the $K$-action on $\wedge^d {\mathbb{C}}^n$. Finally, we choose a vector $\mathbf{w}:=(w_1, \dots, w_n, r) \in {\mathbb{Z}}^n_{\geq 0} \oplus {\mathbb{Z}}_{>0}$ and we define $$\mathbf{w}D:=\{(t^{w_1}, \dots, t^{w_n}, t^r)\in K \mid t\in {\mathbb{C}}^\ast\},$$ the weighted diagonal subgroup of $K$ with respect to the vector $\mathbf{w}$. Then, the weighted Grassmannian $\mathbf{w}Gr(d,n)$ is defined by the quotient $$\mathbf{w}Gr(d,n):=aPl(d,n)^\ast / \mathbf{w}D.$$ Notice that $\mathbf{w}Gr(d, n)$ equips with the action of residual torus $K/\mathbf{w}D$. \begin{example} \begin{enumerate} \item Choose $\mathbf{w}=(0, \dots, 0, 1)\in {\mathbb{Z}}^n_{\geq 0}\oplus {\mathbb{Z}}_{>0}$, which defines $\mathbf{w}D=(1, \dots, 1, t)\subset ({\mathbb{C}}^\ast)^{n+1}$. The quotient space $aPl(d, n)^\ast/\mathbf{w}D$ is the image of the standard Pl\"ucker embedding of Grassmannian $Gr(d,n)$ into $\mathbb{P}(\wedge^d{\mathbb{C}}^n)$. \item Consider the case when $d=1$ and choose an arbitrary weighted vector $\mathbf{w}=(w_1, \dots, w_n, r)$. Then, $GL_n({\mathbb{C}})$-orbit of the line ${\mathbb{C}} e_1$ is isomorphic to ${\mathbb{C}}^n$. Hence, $aPl(1, n)^\ast={\mathbb{C}}^n \setminus \{\mathbf 0\}$. Notice that the non-zero scalar multiple of the first column of $GL_n({\mathbb{C}})$ generates the space ${\mathbb{C}}^n \setminus \{\mathbf 0\}$. Hence, the action of $\mathbf{w}D$ on ${\mathbb{C}}^n \setminus \{ \mathbf{0}\}$ is given by $$(t^{w_1}, \dots, t^{w_n}, t^r) \cdot (z_1, \dots, z_n)= (t^{w_1+r}z_1, \dots, t^{w_n+r}z_1),$$ which leads us to the weighted projective space ${\mathbb{CP}}^n_{(w_1+r, \dots, w_n+r)}$. \end{enumerate} \end{example} One natural way to describe a $\mathbf{q}$-CW complex structure on $\mathbf{w}Gr(d,n)$ is to use the classical Schubert cell decomposition of $Gr(d, n)$. Let $\alpha=\{i_1 < \dots< i_d\}$ be a subset of $[n]:=\{1, \dots, n\}$. 
Then, a $(d\times n)$-matrix of the following row echelon form represents a cell $e_\alpha$ of $Gr(d, n)$ of complex dimension $\sum_{k=1}^d (i_k-k)$: $$\begin{blockarray}{ccccccccccccccc} \begin{block}{(ccccccccccccccc)} \ast &\cdots &\ast &1&0&\cdots &&&&&&&&\cdots &0 \\ \ast&\cdots&\ast&0&\ast&\cdots&\ast&1&0&\cdots&&&&\cdots&0 \\ \ast&\cdots&\ast&0&\ast&\cdots&\ast&0&&&&&&& \\ \vdots&&\vdots&\vdots&&&&\vdots&&&\ddots&&&&\vdots \\ \ast&\cdots&\ast&0&\ast&\cdots&\ast&0&\ast&\cdots&\ast&1&0&\cdots&0\\ \end{block} &&&\overset{\uparrow}{i_1\text{-th}}&&&&\overset{\uparrow}{i_2\text{-th}}&&\cdots&&\overset{\uparrow}{i_d\text{-th}}&&& \end{blockarray}$$ The cell $e_\alpha$ is called the \emph{Schubert cell} corresponding to the subset $\alpha$. In the literature, a {\em Young diagram} corresponding to $(i_1-1, \dots, i_d-d)$ is used to describe $e_\alpha$. For example, if $n=10$, $d=3$ and $\alpha=\{2<5<9\}$, the Schubert cell $e_{ \{2,5,9\}}$ and the corresponding Young diagram are described below in Figure \ref{fig_Schubert_cell_and_Young_diag}. \begin{figure}[h] \begin{tikzpicture} \node at (0,0) {$\begin{bmatrix} \ast&1&0&0&0&0&0&0&0&0\\ \ast&0&\ast&\ast&1&0&0&0&0&0\\ \ast&0&\ast&\ast&0&\ast&\ast&\ast&1&0 \end{bmatrix} $}; \draw[<->] (3,0)--(4,0); \begin{scope}[scale=0.4, xshift=110mm, yshift=-15mm] \draw (0,0)--(6,0)--(6,1)--(3,1)--(3,2)--(1,2)--(1,3)--(0,3)--cycle; \draw (1,0)--(1,2); \draw (2,0)--(2,2); \draw (3,0)--(3,2); \draw (5,0)--(5,1); \draw (4,0)--(4,1); \draw (0,1)--(3,1); \draw (0,2)--(1,2); \end{scope} \end{tikzpicture} \caption{A Schubert cell and Young diagram.} \label{fig_Schubert_cell_and_Young_diag} \end{figure} \begin{remark} \begin{enumerate} \item A Young diagram as defined in \cite{Ful-YT} has weakly decreasing numbers of boxes in its rows. In this article, we use the convention of \emph{weakly increasing numbers of boxes} in the rows. \item The collection of Young diagrams has an obvious partial order, namely $\square_1 \subseteq \square_2$ if and only if $\square_1$ ``fits" inside $\square_2$. \end{enumerate} \end{remark} Finally, the Schubert cell decomposition is given by \begin{equation}\label{eq_Schubert_cell_decomp} Gr(d, n)=\bigsqcup_{\substack{\alpha\subseteq [n] \\ |\alpha|=d}} e_\alpha. \end{equation} The complex dimension of the cell $e_\alpha$ is exactly the number of boxes in the corresponding Young diagram, and the information about attaching maps can be obtained from the partial order given by the inclusion relations among the Young diagrams. In order to describe a $\mathbf{q}$-CW structure for $\mathbf{w}Gr(d,n)$, we note that the Schubert cell decomposition \eqref{eq_Schubert_cell_decomp} together with the orbit map $\pi \colon aPl(d,n)^\ast \to Gr(d,n)$ induces the following cell decomposition: \begin{equation}\label{eq_cell_decomp_aPl} aPl(d,n)^\ast=\bigsqcup_{\substack{\alpha\subseteq [n] \\ |\alpha|=d}}\pi^{-1}(e_\alpha). \end{equation} Next, we recall from \cite[Subsection 2.2]{AM} that each cell $\pi^{-1}(e_\alpha)$ is $K$-invariant, where $K$ is defined in \eqref{eq_group_K}. Hence, the cell decomposition \eqref{eq_cell_decomp_aPl} descends to a cell decomposition of the weighted Grassmannian. \begin{proposition}(\cite[Proposition 2.3]{AM}) $$\mathbf{w}Gr(d,n) = \bigsqcup_{\substack{\alpha\subseteq [n] \\ |\alpha|=d}} e_\alpha/G_\alpha,$$ where $G_\alpha=\{ (t^{w_1}, \dots, t^{w_n}, t^r)\in K \mid t\in {\mathbb{C}}^\ast, ~ t^{w_\alpha}=1\}$ and $w_\alpha= r+\sum_{i\in \alpha}w_i$.
\end{proposition} \begin{figure} \begin{tikzpicture} \begin{scope}[scale=0.4] \node at (0.5,-2) {$\varnothing$}; \draw (0,0)--(0,1)--(1,1)--(1,0)--cycle; \draw (3,2)--(4,2)--(4,3)--(2,3)--(2,2)--(3,2)--(3,3); \draw (-2,3)--(-2,2)--(-1,2)--(-1,4)--(-2,4)--(-2,3)--(-1,3); \draw (-0.5, 5+1)--(1.5, 5+1)--(1.5, 6+1)--(0.5, 6+1)--(0.5, 7+1)--(-0.5, 7+1)--cycle; \draw (-0.5, 6+1)--(0.5, 6+1)--(0.5, 5+1); \draw (-0.5, 8+2)--(1.5, 8+2)--(1.5, 10+2)--(-0.5, 10+2)--cycle; \draw (0.5, 8+2)--(0.5, 10+2); \draw (-0.5, 9+2)--(1.5, 9+2); \draw (0.5,-1.1)--(0.5, -0.5); \draw (-0.2 ,1.2)--(-0.8,1.8); \draw (1.2, 1.2)--(1.8, 1.8); \draw (-1.5,4.3)--(-0.7,5.8); \draw (3, 3.3)--(1.7, 5.8); \draw (0.5,9.7)--(0.5, 8.3); \end{scope} \begin{scope}[xshift=5cm, scale=0.4] \node at (0,-2) {$e_{12}/G_{12}$}; \node at (0,0.5) {$e_{13}/G_{13}$}; \node at (-3, 3) {$e_{23}/G_{23}$}; \node at (3, 3){$e_{14}/G_{14}$}; \node at (0, 7) {$e_{24}/G_{24}$}; \node at (0, 11) {$e_{34}/G_{34}$}; \draw (0,-1.1)--(0, -0.5); \draw (-1.2 ,1.2)--(-1.8,2); \draw (1.2, 1.2)--(1.8, 2); \draw (-2,4)--(-1.2,5.8); \draw (2, 4)--(1.2, 5.8); \draw (0,9.7)--(0, 8.3); \end{scope} \end{tikzpicture} \caption{Lattice of Young diagrams and $\mathbf{q}$-CW structure of $\mathbf{w}Gr(2,4)$.} \label{fig_Young_lattice_Gr(2,4)} \end{figure} \begin{example} Consider the weighted Grassmannian $\mathbf{w}Gr(2,4)$ determined by the weight $\mathbf{w}=(w_1, \dots, w_4, r)\in {\mathbb{Z}}_{\geq0}^4 \oplus {\mathbb{Z}}_{>0}$. The lattice structure of Young diagrams and $\mathbf{q}$-CW structure are described in Figure \ref{fig_Young_lattice_Gr(2,4)}. Here, $$G_{ij}=\{(t^{w_1}, t^{w_2}, t^{w_3}, t^{w_4}, t^r) \mid t\in {\mathbb{C}}^\ast,~t^{r+w_i+w_j}=1\}.$$ For example, when $\mathbf{w}=(1,1,1,1,1)$, for each $\{i,j\}\subset [4]$, $G_{ij}\cong {\mathbb{Z}}/3{\mathbb{Z}}$. Hence, from Theorem \ref{thm_even_cells_no_p-torsion}, we conclude that $H^\ast(\mathbf{w}Gr(2,4);{\mathbb{Z}})$ has no $p$-torsion unless $p=3$. In general, one can conclude from this observation that the cohomology ring of the weighted Grassmannian $\mathbf{w}Gr(d, n)$, determined by the weighted vector of the form $\mathbf{w}=(1,\dots, 1,1)\in {\mathbb{Z}}_{\geq 0}^n \oplus {\mathbb{Z}}_{>0}$, has no $p$-torsion unless $p=d+1$. \end{example} \section{An extension to more general $\mathbf{q}$-CW complexes} \label{sec_cells_in_every_dim} In this section, we discuss general $\mathbf{q}$-CW complexes which do not necessarily consist of even dimensional $\mathbf{q}$-cells. Up to this point, the degree, in the sense of Definition \ref{def_degree} has played no role. In fact, let $\{(Y_i, 0_i)\}_{i=1}^\ell$ be a building sequence satisfying the assumption of Theorem \ref{thm_even_cells_no_torsion}. The attaching map $$\phi_i \colon S^{2k_i-1}/G_i \to Y_{i-1}$$ induces the \emph{degree} for each $i=1, \dots, \ell$. However, the induced homology map $$(\phi_i)_\ast \colon H_{2k_i-1}(S^{2k_i-1}/G_i;{\mathbb{Z}}) \to H_{2k_i-1}(Y_{i-1};{\mathbb{Z}})$$ is a zero map because $H_{2k_i-1}(Y_{i-1};{\mathbb{Z}})$ is trivial by Theorem \ref{thm_even_cells_no_torsion}. If a building sequence $\{(Y_i, 0_i)\}_{i=1}^\ell$ for a $\mathbf{q}$-CW complex $X$ has odd dimensional cells, a non-trivial attaching map is possible; this leads to the following theorem. \begin{theorem}\label{no_p-torsion_thm} Let $X$ be a $\mathbf{q}$-CW complex, $p$ a prime number, and $\{(Y_i, 0_i)\}_{i=1}^\ell$ a building sequence for $X$ such that $\operatorname{gcd}\{p, |G_i|\}=1$ for all $i$ with $e^{k_{i}}/G_i =Y_i \setminus Y_{i-1}$. 
If the degree of each attaching map \begin{equation}\label{eq_attaching_map} S^{k_{i}-1}/G_i \to Y_{i-1} \end{equation} as defined in Section \ref{sec_q-CW_complex}, is coprime to $p$ or zero, then $H_{\ast}(X; \mathbb{Z})$ has no $p$-torsion. \end{theorem} \begin{proof} Since $Y_{i}$ is obtained by attaching a $\mathbf{q}$-cell $\overline{e}^{k_i}/G_{i}$ to $Y_{i-1}$, we have the cofibration: \begin{equation} \begin{tikzcd} S^{k_i-1}/G_i \arrow{r} & Y_{i -1} \arrow[hookrightarrow]{r} & Y_i \end{tikzcd} \end{equation} which yields the long exact sequence: \begin{equation}\label{eq_les_near_top_homology} \begin{tikzcd}[row sep=tiny, column sep=small] \cdots \rar& \widetilde{H}_{j}(S^{k_i-1}/G_i) \rar& H_{j}(Y_{i-1}) \arrow{r} & H_{j}(Y_{i}) \arrow{r} & \widetilde{H}_{j-1}(S^{k_i-1}/G_i) \arrow{r} & \cdots . \end{tikzcd} \end{equation} The proof is similar to that of Theorem \ref{thm_even_cells_no_p-torsion}, except in the case when $j$ corresponds to the top two dimensions, $k_i$ and $k_i-1$. In this case, $\widetilde{H}_{k_i}(S^{k_i-1}/G_i)=0$ for dimensional reasons. By the same induction hypothesis as Theorem \ref{thm_even_cells_no_p-torsion}, we assume that $H_{k_i}(Y_{i-1})$ and $H_{k_i-1}(Y_{i-1})$ have no $p$-torsion. Hence, the sequence \eqref{eq_les_near_top_homology} becomes \begin{equation}\label{eq_les_near_top_homology(2)} \begin{tikzcd}[row sep=tiny] 0 \arrow{r} & {\mathbb{Z}}^r \oplus K_1 \arrow{r}{f} & H_{k_i}(Y_{i}) \arrow{r}{g} & \widetilde{H}_{k_i-1}(S^{k_i-1}/G_i) &\\ \arrow{r}{(\phi_i)_{\ast}} & {\mathbb{Z}}^s\oplus K_2 \arrow{r}{f'} & H_{k_i-1}(Y_{i}) \arrow{r}{g'} & \text{$|G_i|$-torsion} \arrow{r} & \cdots \end{tikzcd} \end{equation} for some $r, s \in {\mathbb{N}}\cup \{0\}$ and some $p$-torsion free finite groups $K_1$ and $K_2$. In order to show both $H_{k_i}(Y_{i})$ and $H_{k_i-1}(Y_{i})$ have no $p$-torsion, we need to consider the following three cases prescribed by Proposition \ref{prop_homology_orb_lens_sp}. \begin{enumerate} \item If $\widetilde{H}_{k_i-1}(S^{k_i-1}/G_i)=0$, then $H_{k_i}(Y_{i})$ is isomorphic to ${\mathbb{Z}}^r \oplus K_1$, hence it has no $p$-torsion. Next suppose that $H_{k_i-1}(Y_{i})$ has a $p$-torsion element, say $x$, then $g'(x)=0$ because $\operatorname{gcd}\{ p, |G_i|\}=1$. Hence, $x\in \ker g' = {\rm im}\hspace{1pt} f'$. Since $\widetilde{H}_{k_i-1}(S^{k_i-1}/G_i)=0$, the map $f'$ is injective. It contradicts the $p$-torsion freeness of $K_2$. \item If $\widetilde{H}_{k_i-1}(S^{k_i-1}/G_i)$ is a non-trivial $|G_i|$-torsion, then an argument similar to that above shows that $H_{k_i}(Y_{i})$ has no $p$-torsion. Next, consider the following exact sequence: $$ \begin{tikzcd} \text{ a $|G_i|$-torsion} \arrow{r}{(\phi_i)_{\ast}} & {\mathbb{Z}}^s\oplus K_2 \arrow{r}{f'} & H_{k_i-1}(Y_{i}) \arrow{r}{g'} & \text{a $|G_i|$-torsion}. \end{tikzcd} $$ Suppose that $H_{k_i-1}(Y_{i})$ has a $p$-torsion element $x$. Then the assumption $\operatorname{gcd}\{p, |G_i|\}=1$ implies $g'(x)=0$. Hence, we have the following nontrivial preimage \begin{equation}\label{eq_preimage} (f')^{-1}(x)=\{(a,b)\in {\mathbb{Z}}^s \oplus K_2 \mid f'(a,b)= x \}. \end{equation} Notice that the first coordinate of an element in \eqref{eq_preimage} cannot be $0$, because $K_2$ is a $p$-torsion free finite group and for a homomorphism $h: G \to H$ of finite groups, the order of $h(a)$ divides the order of $a \in G$. Hence, there exists at least one nontrivial element $a \in {\mathbb{Z}}^s$ such that $f'(a,b)=x$. 
In particular, since $x$ is a $p$-torsion element, $$\{n\cdot p\cdot (a,b)\mid n\in {\mathbb{Z}}\} \subseteq \ker f' ={\rm im}\hspace{1pt} (\phi_i)_{\ast},$$ which contradicts the finiteness of the $|G_i|$-torsion group ${\rm im}\hspace{1pt} (\phi_i)_{\ast}$. \item Finally, we consider the case when $\widetilde{H}_{k_i-1}(S^{k_i-1}/G_i)={\mathbb{Z}}$ and break the exact sequence \eqref{eq_les_near_top_homology(2)} into two parts as follows: \begin{align} 0 \longrightarrow ~ &{\mathbb{Z}}^r \oplus K_1 \overset{f}{\longrightarrow} H_{k_i}(Y_{i}) \overset{g}{\longrightarrow} {\rm im}\hspace{1pt} g \longrightarrow 0, \label{eq_first_ex_seq_after_breaking}\\ 0 \longrightarrow {\rm coker}\hspace{1pt} g \overset{(\phi_i)_{\ast}}{\longrightarrow} ~~ & {\mathbb{Z}}^s\oplus K_2 \overset{f'}{\longrightarrow} H_{k_i-1}(Y_{i}) \overset{g'}{\longrightarrow} \text{a $|G_i|$-torsion}. \label{eq_second_ex_seq_after_breaking} \end{align} Since ${\rm im}\hspace{1pt} g \cong n{\mathbb{Z}}$ for some $n\in {\mathbb{N}} \cup \{0\}$, applying the same argument as in Case (1) to \eqref{eq_first_ex_seq_after_breaking} shows that $H_{k_i}(Y_{i})$ has no $p$-torsion elements. In order to show that $H_{k_i-1}(Y_{i})$ has no $p$-torsion elements, we first assume that $g$ is a non-zero map. Then ${\rm coker}\hspace{1pt} g$ is finite, which implies that $H_{k_i-1}(Y_{i})$ is $p$-torsion free by the same argument as in the second part of case (2). When the map $g$ is the zero map, ${\rm coker}\hspace{1pt} g = {\mathbb{Z}} $. Now $$(\phi_i)_{\ast} \colon \widetilde{H}_{k_i-1}(S^{k_i-1}/G_i) \to H_{k_i-1}(Y_{i-1})$$ is the map induced from the attaching map $\phi_i \colon S^{k_i-1}/G_i \to Y_{i-1}$. Hence, the image of $1\in {\mathbb{Z}}\cong \widetilde{H}_{k_i-1}(S^{k_i-1}/G_i)$ via $(\phi_i)_{\ast}$ determines the degree (see Definition \ref{def_degree}) of the attaching map, which is coprime to $p$ by the assumption. Now, we suppose that there is a $p$-torsion element $a\in H_{k_i-1}(Y_{i})$. Since $g'(a)=0$, there is an element, say $((c_1, \dots, c_s), y)\in {\mathbb{Z}}^s\oplus K_2$, such that $f'((c_1, \dots, c_s), y)=a$. We notice that $(c_1, \dots, c_s)\neq (0, \dots, 0)$, because $f'((0, \dots, 0), y)$ cannot be a $p$-torsion element in $H_{k_i-1}(Y_{i})$. Observe that $$f'(p\cdot (c_1, \dots, c_s), p\cdot y)= p\cdot a=0\in H_{k_i-1}(Y_{i}).$$ Hence, there exists $m\in {\mathbb{Z}}$ such that $$(\phi_i)_{\ast}(m) = m \cdot (\phi_i)_{\ast}(1) = m \cdot ((d_1, \dots, d_s), x)=(p \cdot (c_1, \dots, c_s), p \cdot y).$$ Since the degree $$\mathrm{deg} (\phi_i \colon S^{k_i-1}/G_i \to Y_{i-1}) = \operatorname{gcd} \{d_i \mid 1\leq i\leq s,~d_i\neq 0\}$$ of the attaching map is coprime to $p$ by the assumption, we conclude that $p$ divides $m$, i.e., $m=p\cdot m'$ for some $m'\in {\mathbb{Z}} \setminus \{0\}$. So, $p\cdot(y- m' \cdot x)=0$ in $K_2$, and hence $y = m'\cdot x$ since $K_2$ has no $p$-torsion; moreover, $c_j=m' \cdot d_j$ for $j=1, \ldots, s$. Thus, we have \begin{align*} a=f'((c_1, \dots, c_s), m'\cdot x) =m' \cdot f'((d_1, \dots, d_s), x ) =m' \cdot (f'\circ (\phi_i)_{\ast})(1)=0, \end{align*} which is a contradiction. Hence, we conclude that $H_{k_i-1}(Y_{i})$ has no $p$-torsion elements. \end{enumerate} \end{proof} \begin{corollary}\label{no_torsion_coro} If $X$ is a $\mathbf{q}$-CW complex satisfying the assumption of Theorem \ref{no_p-torsion_thm} for each prime $p$, then $H_{\ast}(X; \mathbb{Z})$ has no torsion.
\end{corollary} \begin{corollary} Let $\{(Y_i, 0_i)\}_{i=1}^\ell$ be a building sequence for a $\mathbf{q}$-CW complex $X$ such that $H_{\ast}(Y_{i -1}; {\mathbb{Z}})$ has $p$-torsion and $\operatorname{gcd}\{p, |G_j|\}=1$ for $j > i$, where $e^{k_j}/G_j =Y_j \setminus Y_{j-1}$. Then $H_{\ast}(Y_{i}; {\mathbb{Z}})$ has $p$-torsion, and hence $H_{\ast}(X; {\mathbb{Z}})$ has $p$-torsion. \end{corollary} \begin{proof} This follows from \eqref{eq_les_near_top_homology}. \end{proof} \renewcommand{\refname}{References} \bibliographystyle{alpha}
\section*{Funding Information} Natural Sciences and Engineering Research Council of Canada (NSERC).
\section{Introduction} Most electricity systems worldwide are undergoing major transformations driven by the need to improve their sustainability and efficiency. These affect both their technical and economic operation and pose new challenges for all the different agents that participate in the electricity supply chain: generation, transportation, distribution and consumption. One key aspect of these transformations is the deployment of advanced metering infrastructure (AMI) technologies, which are being integrated in most electricity systems worldwide \cite{depuru2011smart}. In particular, smart meters are being progressively installed in individual households and allow tracking electricity consumption dynamics at a very disaggregated level and at high frequency. Moreover, since the recent deregulation of many electricity markets worldwide, there are several agents (market operators, gencos, retailers, consumers, etc.) that are highly interested in obtaining the maximum value from this type of data. In this sense, new business opportunities arise that are linked to the application and development of state-of-the-art analytics and data science techniques \cite{yildiz2017recent}. Specifically, developing an accurate and computationally efficient short-term forecasting model for individual consumers (smart meters) has several important applications within the new smart grid paradigm. One of them relates to the successful implementation of demand response (DR) policies. Once real-time price tariffs are fully deployed in distribution systems, the forecasting model can learn from historical consumption patterns, as well as from each individual consumer's reaction to prices (elasticity). This model can then be used to anticipate (forecast) the impact of different pricing strategies on both aggregated and disaggregated loads, and to design these strategies to better benefit the system. In this sense, it can be helpful to identify those consumers more suitable for DR policies, anticipate and prevent system peaks or congested lines, adapt load consumption to renewable generation availability, identify consumption anomalies that may originate from equipment failure or from electricity theft, etc. Furthermore, smart-meter datasets may help to better understand the main factors that drive electricity consumption \cite{wang2018review}, and how these can be used to improve the system's efficiency. For instance, electricity retailers might be interested in precise load forecasts for optimal participation in futures and spot markets \cite{conejo2010decision}. Similarly, distribution companies might benefit from this sort of prediction to improve network reliability. For these reasons, the time series recorded by smart meters open the possibility of improving the existing forecasting models for electricity loads, from an entire system to a single household. Until recently, most of the models proposed in the literature, and used by the industry, have focused on forecasting aggregated loads at a system or substation level. However, these models are not appropriate for dealing with time series from more disaggregated loads (buildings or individual households) \cite{hong2016probabilistic}. For instance, at a household level, time series are very volatile, include a high amount of noise, and are unevenly (and nonlinearly) affected by calendar effects and meteorological variables.
This has motivated the use of new modeling techniques, especially in the field of Machine Learning (ML), that are able to capture potential nonlinearities among the variables, do not assume any data generation process (non-parametric), and can exploit these large datasets for their training. In particular, Artificial Neural Network (ANN) based models have proven effective in this type of context. One of their potential advantages, compared to traditional time series approaches, is the ability to jointly treat and combine information from several time series to improve forecasting accuracy, i.e., without studying the series in isolation \cite{bandara2017forecasting}. In this work we will focus on a particular ANN architecture: Recurrent Neural Networks (RNN) with Long Short-term Memory (LSTM) \cite{hochreiter1997long}. These are specifically designed to capture sequential dependencies in the data, as is the case of time series, and to learn from different consumers' dynamics. However, the main drawback of these models is that, in order to be effective, they demand a very high computational burden for their training. This may hinder their practical implementation in a smart meter framework, as this application involves processing and forecasting up to hundreds of thousands of time series. To overcome this difficulty, in this work we propose a new forecasting approach to deal effectively with a large number of time series, as is the case of smart meter data. In particular, we propose to train a single RNN-LSTM model over a subset of the available smart meter time series. We show how, after an appropriate training and parameter tuning, the resulting model can accurately predict future loads of individual consumers, even if these were not included in the original training set. Hence, the resulting tool has a great potential for large scale applications (Big Data), as once the single network is trained, accurate individual forecasts can be obtained at almost no computational cost. We test the validity of our approach through an extensive set of numerical experiments based on a real-world dataset that includes several thousand household load time series. Results indicate that the proposed methodology is able to improve the forecasting accuracy over relevant benchmarks and also benefits from geo-demographic segmentation of consumers in the dataset. \subsection{Literature Review} There are several works that have focused on comparing the performance of different forecasting techniques in the context of smart meter electricity consumption data. For instance, reference \cite{edwards2012predicting} compares alternative ML techniques to predict the hourly consumption loads of three residential households. A large battery of simulations suggests that the best forecasting technique depends on the particular household under study, although Least Squares Support Vector Machines shows overall good performance. A review of forecasting techniques, based on Artificial Intelligence, for electrical load demands is presented in \cite{raza2015review}. It shows that ANNs have great potential, but present challenges related to initialization parameters, slow convergence or local minima. In this vein, the authors in \cite{ma2017modeling} review several load forecasting techniques in the context of distributed energy systems. They conclude that the existing methodologies present either low forecast accuracy or high computational burden.
With the same aim, \cite{gajowniczek2017electricity} tests the performance of several techniques (based both on classical time series and ML) to forecast individual household consumptions. Apart from considering standard predictors based on historical loads, meteorological variables and calendar effects, the authors incorporate household activity patterns, which enhance the accuracy of the predictions. Similar conclusions are obtained in \cite{hsiao2014household}, where behavior patterns are also considered to improve the predictive accuracy of individual household electricity demands. Moreover, \cite{sevlian2018scaling} characterizes empirically how the forecasting accuracy of different techniques varies with the level of aggregation of the series. Other works seek to produce probabilistic forecasts for smart meter data; in this case, \cite{taieb2016forecasting} presents an additive quantile regression model that incorporates a boosting procedure for variable selection. The model outperforms standard probabilistic approaches for aggregated data. In a recent work and in a similar application, \cite{taieb2019hierarchical} proposes a methodology to compute coherent probabilistic forecasts for hierarchical time series. A predictive methodology based on LASSO is proposed in \cite{li2017sparse} to deal with sparsity and select relevant lag orders for autoregressive models. The model is applied to individual consumption data and its performance is further enhanced by including consumption series from other users. With the recent advent of cloud and distributed computing, computationally intensive techniques such as Deep Learning (DL), and in particular RNN, have been applied to forecast disaggregated electricity loads. With the idea of exploiting similarities between time series, \cite{bandara2017forecasting} proposes a forecasting framework that combines clustering and recurrent neural networks. In particular, subgroups of time series are first created based on cross-similarities and then a predictive LSTM network is trained on each subgroup. The model is tested on time series from the banking industry (monthly) and withdrawals from automatic teller machines (daily). A similar idea in the context of load forecasting is employed by \cite{quilumba2014using}, where clusters of consumers with similar load patterns are formed before adjusting a NN forecasting model for the aggregated series. A Self-Recurrent Wavelet Neural Network is used in \cite{chitsaz2015short} to forecast the load of buildings in microgrids. The predictive tool makes use of feedback loops to better deal with volatile time series. Moreover, \cite{tascikaraoglu2016short} shows how an approach incorporating temporal and spatial information can be used to identify relevant features among different households and improve the forecasting accuracy. Reference \cite{yildiz2018household} also employs a clustering analysis to extract typical daily consumption patterns that can be used to improve the accuracy of the forecasting tool. At a district level, \cite{ahmad2019deep} addresses the problem of accurate short-term load forecasting including both meteorological and technical variables. The proposed model is based on a DL algorithm that combines different back-propagation techniques to ease its computational burden. Similarly, a DL forecasting model for individual consumption time series, based on a Conditional Restricted Boltzmann Machine, is presented in \cite{mocanu2016deep}.
The results, tested on a single consumer's historical data set, show that it outperforms ANN, Support Vector Machines and RNN techniques. With the same aim, \cite{shi2017deep} presents a pooling-based deep recurrent neural network designed to avoid over-fitting. The method outperforms traditional ARIMA, Support Vector Regression and RNN. A deep neural network model based on the LSTM architecture is used in \cite{kong2017short} to forecast residential loads. It is shown how, in order to forecast aggregated loads, it may be convenient to aggregate individual forecasts. Finally, another LSTM model is adapted in \cite{wang2019probabilistic} to provide probabilistic forecasts for household loads. To this end, a pinball loss function is used to train the model by evaluating the quantile errors. \subsection{Contributions} To the best of the authors' knowledge, this is the first work that aims to develop an efficient forecasting tool for high-frequency and disaggregated loads, placing an emphasis on its scalability and hence its potential application to massive (hundreds of thousands of) smart-meter time series. It is worth noting that the proposed pooling-based method and RNN model share some common features with the ones proposed in \cite{shi2017deep} and \cite{bandara2017forecasting}, respectively. However, these are extended to work with high-frequency time series, to account for exogenous variables that enrich the forecasting model, and to provide accurate forecasts for out-of-sample consumers. To summarize, considering the state of the art described in the previous section, the main contributions of this work are three-fold: \begin{enumerate} \item To make use of a single LSTM model to produce accurate forecasts for general high-frequency time series, in our case household smart-meter loads. \item To build the LSTM model so that it can be trained efficiently using only a subset of the smart meters, making it scalable and suitable for dealing with massive time series (Big Data). \item To validate the proposed methodology with a real-world dataset involving thousands of smart meters, by analyzing how geo-demographic segmentation can impact forecasting accuracy, and by improving the out-of-sample performance over relevant benchmarks. \end{enumerate} With these contributions, the proposed forecasting model can be efficiently trained to provide accurate near real-time forecasts for hundreds of thousands of smart meters. As we show in this work, training a unique LSTM network, which is the most demanding computational task, would need to be done only once every few months, and by using just a representative subset of the smart-meter time series. Once the network is trained, and its input properly defined, it can provide short-term smart-meter forecasts at almost no computational cost, as the next sections show. \subsection{Paper Organization} The rest of the manuscript is organized as follows. In Section \ref{DA}, we describe the real-world dataset and summarize the main properties of the associated time series. In Section \ref{Met}, the proposed methodology based on RNN is introduced together with the numerical results obtained in an extensive back-testing against relevant benchmarks. Finally, Section \ref{Con} concludes by emphasizing the potential of the proposed methodology in large-scale applications. \section{Data and Descriptive Analysis}\label{DA} We have accessed the public energy consumption registers from \cite{dataLondon}.
In particular, this dataset contains a sample of 5,567 London households with the corresponding energy consumption, in kWh (per half hour), for each unique household identifier, date and time, and CACI ACORN categories (classification of residential neighborhoods). There are six categories that provide a geo-demographic segmentation of London's population. Each of these six categories is further divided into a total of 18 groups, to increase homogeneity, and our methodology will be applied to each of these groups. More details about the CACI ACORN classification can be found in \cite{CACI}. The proposed methodology is general and deals with high-frequency time series. Our dataset has a half-hour frequency, but we have aggregated it to an hourly frequency to slightly reduce the dimension and the associated volatility. That means each smart meter in each of the 18 groups contains 8760 hours of energy consumption, in kWh. Note that, in any case, our methodology can also deal with half-hour data. Finally, to further validate our methodology against the heterogeneity of the groups, we have also created an additional Global group with 200 smart meters randomly selected from the overall sample. As a summary, the first column in Table \ref{summarySM} contains the names of the 18 ACORN groups together with the Global Group, while the second and third columns indicate the number of smart meters used by our methodology to train the LSTM model and to test the forecasting performance, respectively. It should be noticed that no information about the meters in the test set has been used when training the model. We have considered 80\% of the available meters in each group for the training set and the other 20\% of the meters for the testing one. Moreover, the fourth column in Table \ref{summarySM} shows the number of periods (hours) used to train the LSTM, whereas the fifth column indicates the number of days the trained LSTM has been used to perform the 24-hour-ahead forecasts in our validation (out-of-sample test) scheme. The proposed methodology jointly treats and combines information from all the time series in a given group, hence the estimation window for the LSTM model depends not only on the number of past hours but also on the number of time series. In particular, the larger the size of the group, the smaller the number of hours needed to train the LSTM model. We have set the number of hours in the training set to $450000$ divided by the number of meters, with a minimum of 720 hours (roughly one month) and a maximum of 7200 hours (roughly 10 months).
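As an illustration of this rule, the following minimal sketch (in Python) computes the length of the training window for a group. The function name is ours, and the exact rounding is an assumption, since the values reported in Table \ref{summarySM} appear to be additionally rounded to whole multiples of 720 hours:
\begin{verbatim}
import numpy as np

def training_hours(n_meters, budget=450000, min_hours=720, max_hours=7200):
    # Hours of history used to train the group LSTM: a fixed sample budget
    # divided by the number of training meters, clipped to roughly [1, 10] months.
    return int(np.clip(budget / n_meters, min_hours, max_hours))

# The clipped cases match the summary table: ACORN-E (801 meters) -> 720 h,
# ACORN-B (14 meters) -> 7200 h.
print(training_hours(801), training_hours(14))
\end{verbatim}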
\begin{table}[ht] \caption{Summary of the complete dataset}\label{summarySM} \centering \small \begin{tabular}{lrrrr} \hline
Groups & Meters in train & Meters in test & Hours in train & Days in test\\ \hline
ACORN-A & 74 & 19 & 5040 & 155 \\
ACORN-B & 14 & 4 & 7200 & 65 \\
ACORN-C & 78 & 19 & 4320 & 185 \\
ACORN-D & 139 & 35 & 2880 & 245 \\
ACORN-E & 801 & 200 & 720 & 335 \\
ACORN-F & 358 & 90 & 720 & 335 \\
ACORN-G & 112 & 28 & 2880 & 245 \\
ACORN-H & 266 & 67 & 1440 & 305 \\
ACORN-I & 30 & 8 & 7200 & 65 \\
ACORN-J & 45 & 11 & 7200 & 65 \\
ACORN-K & 98 & 24 & 3600 & 215 \\
ACORN-L & 181 & 45 & 2160 & 275 \\
ACORN-M & 64 & 16 & 5760 & 125 \\
ACORN-N & 91 & 23 & 3600 & 215 \\
ACORN-O & 55 & 14 & 6480 & 95 \\
ACORN-P & 54 & 14 & 6480 & 95 \\
ACORN-Q & 469 & 117 & 720 & 335 \\
ACORN-U & 22 & 6 & 7200 & 65 \\
Global Group & 160 & 40 & 2160 & 275 \\ \hline
\end{tabular} \end{table} \normalsize We have also removed smart meters with a high number of missing values (more than 20) and almost zero consumption (standard deviation less than 0.01 kWh). We have replaced the remaining missing values with the most recent real observation for each smart meter. Finally, we have also considered hourly weather data for the London area on the same dates as the smart-meter registers, as can be found in \cite{kaggle}. In total, our final dataset contains hourly consumption data for 3891 smart meters (for all the 19 groups) throughout 2013, with a total of 8760 observations for each smart meter. In addition, our dataset also contains temperature and humidity data for each of the 8760 periods, together with extra calendar features such as the day of the week and the hour of the day, giving a total of 33 features. Next, a summary to understand the behavior of this dataset is provided. As an illustration, we have randomly selected two smart meters from the ACORN-A group, which contains the most affluent people in the UK (\emph{lavish lifestyles}). This group contains 93 households but is still heterogeneous. In particular, we have randomly selected the 18th and the 92nd households, and in Figures \ref{fig:SM18} and \ref{fig:SM92} we can observe the evolution of their consumption over 2013, respectively. Note the different behavior of these two time series. Consumer 18 has a higher average consumption with higher volatility, but also longer periods with no consumption. On the other hand, consumer 92 has a lower and more stable consumption, which remains strictly positive. \begin{figure} \includegraphics[width=6in]{SM18.png}\caption{Consumption for smart-meter 18 during 2013}\label{fig:SM18} \end{figure} \begin{figure} \includegraphics[width=6in]{SM92.png}\caption{Consumption for smart-meter 92 during 2013}\label{fig:SM92} \end{figure} In Figure \ref{fig:MeanSD}, we can see a scatter plot containing the average consumption in 2013 for each smart meter in ACORN-A versus the corresponding standard deviation. Note the clear linear relationship between consumption and variability, indicating the need to normalize the 93 consumption series to make the group more homogeneous. \begin{center} \begin{figure} \center \includegraphics[width=5in]{MeanSD.png}\caption{Average consumption vs standard deviation for each smart meter}\label{fig:MeanSD} \end{figure} \end{center} Now we will analyze the seasonality.
Because we are dealing with hourly data, we expect to have several seasonalities, in particular one of order 24 (daily), another of order 168 (weekly), and possibly one of order 8760 (yearly), although the last one is not observable because we only have data for one year. We can analyze the daily seasonality of the different smart meters. For instance, Figure \ref{fig:Season18} reveals a household that decreases consumption during night hours. A similar analysis on weekly seasonality shows that it may also play a relevant role in the time series. Hence, we will consider both the daily and weekly patterns in our methodology. Moreover, from the previous figures we can also observe the asymmetric distribution of electricity consumption. This motivates our choice to apply a logarithmic transformation to make it more symmetric. \begin{figure} \center \includegraphics[width=6in]{SeasonId18.png}\caption{Daily consumption per hour for smart-meter 18 during 2013}\label{fig:Season18} \end{figure} \begin{figure} \center \includegraphics[width=6in]{SeasonId92.png}\caption{Daily consumption per hour for smart-meter 92 during 2013}\label{fig:Season92} \end{figure} If we analyze the autocorrelations of household consumptions, useful and distinct patterns appear. From Figures \ref{fig:acf18} and \ref{fig:acf92} we can observe different patterns of consumption for households 18 and 92, respectively. In particular, household 18 presents a high consumption dependency with respect to the previous period that decays slowly (long memory). On the other hand, household 92 presents a small dependency with a higher rate of decay (short memory). In both cases we can observe the dependency with respect to the previous 24 hours, which is an indicator of the daily seasonality previously mentioned. \begin{figure} \center \includegraphics[width=6in]{acfId18.png}\caption{Autocorrelations for smart-meter 18 during 2013}\label{fig:acf18} \end{figure} \begin{figure} \center \includegraphics[width=6in]{acfId92.png}\caption{Autocorrelations for smart-meter 92 during 2013}\label{fig:acf92} \end{figure} Next, we analyze the relations between consumption and meteorological variables. For instance, Figures \ref{fig:Temp18} and \ref{fig:Temp92} show the relation between consumption and temperature for households 18 and 92. We can observe that this relation changes throughout the year as expected, so it should be considered in the forecasting model. We have also repeated this analysis for other smart meters randomly selected from the other 18 groups, obtaining similar insights. \begin{figure} \center \includegraphics[width=6in]{TempId18.png}\caption{Consumption for smart-meter 18 vs temperature per month of 2013}\label{fig:Temp18} \end{figure} \begin{figure} \center \includegraphics[width=6in]{TempId92.png}\caption{Consumption for smart-meter 92 vs temperature for each month of 2013}\label{fig:Temp92} \end{figure} As a summary of the descriptive analysis performed in this section, we have observed that the collection of time series analyzed presents some complex properties such as high frequency (hourly in our case), non-linearities (especially in the relations with meteorological variables), and high volatility (mainly due to the high disaggregation of the consumption). This analysis provides the inspiration and motivation for the methodology proposed in the next Section.
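The seasonal profiles and autocorrelations discussed above can be reproduced with a short script. The following minimal sketch (in Python, assuming that the hourly consumption of one meter is available as a pandas series with a datetime index) is only illustrative of the descriptive analysis, not code from our implementation:
\begin{verbatim}
import pandas as pd
from statsmodels.tsa.stattools import acf

def describe_meter(load, nlags=48):
    # 'load' is an hourly kWh pandas Series indexed by timestamps (assumption).
    daily_profile = load.groupby(load.index.hour).mean()        # hour-of-day pattern
    weekly_profile = load.groupby(load.index.dayofweek).mean()  # day-of-week pattern
    autocorr = acf(load.dropna(), nlags=nlags)  # spikes near lag 24 reveal daily seasonality
    return daily_profile, weekly_profile, autocorr
\end{verbatim}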
\section{Methodology}\label{Met} The most difficult property to deal with in this type of massive time series is the high volatility, mainly due to the underlying human behavior, which makes the relation between signal and noise unclear. One may use univariate approaches for each series, but this approach cannot be implemented in practice (potentially millions of models would need to be trained/estimated) and also does not deal adequately with the non-stationarity in the data. For this reason, we propose a methodology to deal with a large number of time series. In particular, instead of analyzing each series in isolation, we propose to jointly treat and combine information from several time series contained in a homogeneous group. These groups may be obtained by natural grouping (in our case, the ACORN groups) or statistical clustering (for instance, based on features). A review of several clustering techniques to group similar electricity consumers is presented in \cite{chicco2012}. Once the time series are classified into groups, a single RNN with LSTM model is trained over each group. This type of model is specifically designed to capture sequential dependencies in the data, as is the case of time series, and also to learn from different consumers' dynamics. Note, however, that this previous classification is not essential. Our numerical results for the Global Group, which includes randomly selected time series, show that accurate results can also be obtained without a previous clustering. Nevertheless, if a previous classification of the time series into homogeneous groups is available, or a clustering of the time series is performed beforehand to attain homogeneous groups, our methodology is able to exploit this classification to better forecast each group. In this Section, we will explain in detail the proposed methodology. As mentioned before, our methodology will be based on ANN and DL because they are flexible and powerful in general, and in particular they can deal with multi-dimensional time series with complex interactions in a natural way. Specifically, we will use RNN for sequential data (time series), which have been proven to work successfully in natural language processing and speech recognition. Recurrent neural networks are a type of neural network where outputs from previous time steps are taken as inputs for the current time step. This creates a network graph or circuit diagram with cycles, which can be unfolded over the input sequence to make predictions over many time steps. \iffalse \begin{figure} \center \includegraphics[width=6in]{RNNUnfold_CRuiz.png}\caption{Recurrent Neural Network}\label{fig:RNNunfold} \end{figure} \fi To train an RNN, stochastic gradient descent is used, but gradients tend to vanish too early, losing the available information. For this reason, LSTM networks were proposed as a type of RNN where the gradients tend to zero more slowly, thus improving performance. \iffalse \begin{figure} \center \includegraphics[width=6in]{lstm.png}\caption{Long Short-Term Memory}\label{fig:lstm} \end{figure} \fi \subsection{Framework and Notation} The main and original idea of our methodology is to train one single model for all the considered time series in a given group. This is another reason to use ANN, because these models tend to perform better as the size of the input increases. Hence, we will join the information from all the time series in a group to train an LSTM model per group.
Once the model has been trained using a long history, it can be used to forecast future loads not only for the smart meters considered in the training set but also for new smart meters (see Table \ref{summarySM}). Next, the main notation needed to understand our implementation is explained. We will use the TensorFlow framework \cite{Tensorflow}, which is an open source software library for numerical computations written in C++ (and not confined exclusively to ANN). This framework was originally developed by researchers at Google to perform distributed computation, deal with large datasets, automatic differentiation, optimization algorithms, etc. The main ingredient of the TensorFlow framework is the tensor (a multidimensional matrix). As we are dealing with time series, we need a 3-dimensional framework to treat them. The first dimension is the number of samples. In our case, as we are jointly combining the information of many smart meters, the number of samples will be the number of smart meters considered in the training set times the number of periods (hours) in the training set (see the second and fourth columns in Table \ref{summarySM}). The second dimension is the \emph{timesteps}. These are the past time periods of a given variable, for a given observation, that influence future periods. As analyzed in the previous section, there is a clear seasonality of order 24 from the daily behavior. For this reason, we have set the number of timesteps to 24. Finally, for the third and last dimension we need to consider the features (predictors or regressors) available in our dataset. In our case, we have meteorological variables (like apparent temperature and humidity), the aggregated consumption from all the smart meters in each group, 23 dummy variables for the hour of the day, and 6 dummies for the day of the week, summing up to 33 variables for the features dimension (considering also the consumption of each smart meter). To make the implementation easier, we use Keras \cite{Keras}, which is a high-level neural networks API running on top of TensorFlow. The Keras and TensorFlow frameworks allow us to implement and train the LSTM models in an easy way. In particular, we can design the network topology and estimate the weights by minimizing a differentiable loss function through the (mini-batch) gradient descent method, and compute the derivatives using back-propagation (chain rule). In particular, we have trained 19 LSTM models, one for each ACORN group and the additional Global Group, each with around 12,000 weights that need to be estimated by minimizing a highly non-linear loss function that compares the forecasts with the real values. \subsection{Data Preparation} Before applying the recurrent neural network model, some data pre-processing is needed. This step usually involves dealing with missing values and outliers, performing some feature extraction, scaling and normalizing the data, etc. In our case, as mentioned in Section \ref{DA}, we have replaced the missing values with the most recent observation of each smart meter. Then, as a first transformation, we take the natural logarithm of the consumption to make its distribution more symmetric. Later, we normalize the consumption in order to have the same mean and variance for all the households. These transformations are useful to make the time series more stationary, implying better performance for the LSTM model.
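A minimal sketch of these pre-processing steps (in Python/pandas) is shown below; the use of a forward fill and of $\log(1+x)$, which keeps zero-consumption hours finite, are our own implementation assumptions:
\begin{verbatim}
import numpy as np
import pandas as pd

def preprocess(load):
    # 'load' is assumed to be a (hours x meters) DataFrame of kWh consumptions.
    filled = load.ffill()      # replace missing values with the most recent observation
    logged = np.log1p(filled)  # log transform to make the distribution more symmetric
    # standardize each household so that all series share the same mean and variance
    return (logged - logged.mean()) / logged.std()
\end{verbatim}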
However, we do not transform the data to remove the seasonality because the LSTM model is able to capture it in a natural way through the \emph{timesteps} defined previously. Finally, to use TensorFlow we need to convert the original input data into 3-dimensional tensors for time series. To do that, we need to build a design matrix and a target matrix. The design matrix is a tensor containing the information from the past that the LSTM will use to forecast the future. In our case, it will be a 3-dimensional array where the first dimension is $N \times T$, with $N$ being the number of smart meters and $T$ the number of past periods considered, i.e. the second and fourth columns in Table \ref{summarySM}. The second dimension is the timesteps, 24 in our case, denoting the number of recent observations the LSTM uses to forecast the next hours. Finally, the last dimension is the number of features, 33 in our case as explained before. The target matrix is again a 3-dimensional array where the first dimension is $N$ because we are interested in forecasting the next period, the second dimension is the forecasting horizon for the next period, 24 in our case, and the last dimension is again 33. An example of one row of the design matrix for one smart meter would be: \[ z_1, z_2, \ldots, z_{24} \] where each value denotes past consumption for a given hour. An example of the same row for the target matrix and the same smart meter would be: \[ z_{25}, z_{26}, \ldots, z_{48} \] Note that we need to build these arrays for each of the available features. Next, the details of the LSTM model will be explained. \subsection{The Proposed LSTM Model} The LSTM model in Keras is defined as a sequence of layers. In our experiments, all the LSTM models are trained with the same network topology as follows. The first layer in the network defines the 3-dimensional shape of the input tensors as explained before. This layer is an LSTM layer containing 32 units (neurons). Then we need to define the activation function that transforms the output of each unit into the input for the next layer. The choice of this activation function is important and will affect the forecasting performance. We have used the \textit{hyperbolic tangent} as the activation function, but others, such as the \textit{sigmoid} or \textit{softmax}, could be used. Then, multiple layers can be stacked by adding them to the sequential process. In our case, we have added another LSTM layer with 16 units, and finally a dense layer (or fully connected one) with 24 units that will provide the corresponding 24-hour-ahead forecast. Moreover, to avoid overfitting in such a large network and improve performance, we need to add some type of regularization or dropout. If we choose a regularization approach, it reduces overfitting by adding some bias to the estimation while reducing the variance. On the other hand, if we choose dropout, it reduces overfitting by randomly setting some units in the layer to zero during the training steps. We have chosen this last approach. In particular, in the first recurrent layer we have selected a dropout rate of 10\% for the recurrent units together with a dropout rate of 10\% for the input units of the layer. For the last LSTM layer, we have selected a dropout rate of 5\%, both for the recurrent units and for the input ones. Once we have designed the topology of the network, we need to compile it to make the estimation more efficient. To do that, we need to select the loss function and the optimization algorithm to train the network.
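A minimal sketch of this topology in Keras is shown below. The layer arguments are our interpretation of the description above (24 timesteps, 33 features, LSTM layers with 32 and 16 units and the stated dropout rates, and a 24-unit dense output assumed to be the 24-hour-ahead consumption vector); the compile and fit settings anticipate the choices detailed in the following paragraphs:
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(32, activation="tanh", return_sequences=True,
                dropout=0.10, recurrent_dropout=0.10,
                input_shape=(24, 33)),      # 24 timesteps x 33 features
    layers.LSTM(16, activation="tanh",
                dropout=0.05, recurrent_dropout=0.05),
    layers.Dense(24),                       # 24-hour-ahead forecast
])
model.compile(loss="mae", optimizer=keras.optimizers.Adam(learning_rate=0.001))
# model.fit(X_train, y_train, epochs=40, batch_size=1000)
\end{verbatim}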
For the loss function we have selected the \textit{mean absolute error}, and for the optimization algorithm we have selected the \textit{Adam} optimizer, which is a version of the stochastic gradient method, with a learning rate of $0.001$. All the previous hyper-parameters have been selected after trying other network architectures and values and observing the out-of-sample loss function. After the network is compiled for each of the 19 LSTM models, we can proceed to fit or optimize the associated weights. This is the most expensive step in the methodology from a computational point of view. For this reason, we propose to train the network only once a year, with a large number of time series and observations, and use the trained LSTM to perform the 24-hour-ahead forecasts for all the desired periods and smart meters in the future. This last forecasting step can be executed very fast. The fitting or optimization process to train the network uses the back-propagation algorithm, together with the optimization algorithm and loss function defined previously. The back-propagation algorithm requires defining the number of epochs, i.e., the number of times the optimization algorithm uses the complete training set. In our case, we have selected 40 epochs. Each epoch can be partitioned into fixed-size subsets of rows from the training set, called batches. Working with these subsets improves the performance of the optimization algorithm. We have selected a batch size of 1000 hours (around 40 days). Once the 19 LSTM models have been trained, they are ready to perform the forecasts for all groups, as the next section shows. \subsection{The back-testing} In this subsection, we explain how we have developed the back-testing scheme to evaluate the performance of the proposed methodology. First, we have information about 3891 household consumptions for all hours in 2013, i.e. 8760 hours. We have organized these 3891 smart meters in 18 groups according to the ACORN classification, as summarized in Table \ref{summarySM}, plus the Global Group with 200 smart meters randomly selected out of the previous 18 groups. For each group, we train an LSTM (as explained in the previous section) using the number of periods indicated in the fourth column of Table \ref{summarySM}, leaving the last days in the sample (fifth column in Table \ref{summarySM}) to test the 24-hour-ahead forecasts for each of these days. Moreover, for each group, we have selected around 80\% of the smart meters to train the models (as indicated in the second column of Table \ref{summarySM}), leaving the other 20\% (third column in Table \ref{summarySM}) to test the forecasts. That means that, for the out-of-sample evaluation of the models, we consider two dimensions: a time direction with out-of-sample periods, and a smart-meter direction for out-of-sample households (as if they were new customers). Besides the proposed LSTM model, we have used two well-known benchmarks: i) a seasonal \textit{naive} approach, and ii) an \textit{auto.arima} approach, as described in \cite{hyndman2008}. We have also tried a \textit{tbats} method, as described in \cite{de2011forecasting}, but this method becomes very unstable for some of the time series, providing poor results in practice. For the seasonal \textit{naive} approach, we forecast the next 24 hours of consumption using the last 24 observed hours.
This is one of the most successful approaches in practice, because it can be implemented and executed with little computational effort, it is stable and scalable, and it provides reasonable performance for massive time series. Regarding the \textit{auto.arima} approach, note that this is an automatic framework for univariate time series. Its performance is reasonable in general, but it does not deal well with possible non-stationarity in the data and non-linearities. Moreover, this approach is not practical for a big energy utility because it requires training potentially millions of models, one for each available customer. In any case, we have implemented this approach for all the 3891 smart meters in order to better analyze and compare the results. Finally, we have obtained results from our proposed LSTM model. In practice, this model needs to be trained only once or twice a year and then individual forecasts can be obtained with little computational cost. Because we train only one model for all the time series in a group at once, our proposal has a great potential for large scale applications. In the next subsection, the numerical results obtained by implementing the previous approaches are shown and discussed. To do that, we have computed the mean absolute error (MAE) for each approach, and for each out-of-sample period and smart meter. In particular, if $z_{n,t+h}$ denotes the real consumption of smart meter $n$, for period $t+h$, and for horizon $h=1,\ldots,24$, then the MAE for the $i$-th approach, with forecast $\hat{z}_{i,n,t+h}$, is defined as: \[ \text{MAE}(i,n,t) = \frac{1}{24}\sum_{h=1}^{24} |\hat{z}_{i,n,t+h}-z_{n,t+h}|, \] for each of the out-of-sample periods $t=1, \ldots, T'$, and each of the smart meters $n=1,\ldots,N'$. Finally, in the next subsection median MAEs are computed for each group and method: the median MAE in testing meters corresponds to the median of the MAE along the out-of-sample periods $t$ and then the median along the testing meters $n$, whereas the median MAE in testing days corresponds to the median of the MAE along the meters $n$ and then the median along the out-of-sample periods $t$. Figure \ref{fig:scheme} illustrates the performance scheme. \begin{figure}\centering \includegraphics[width=6in]{scheme.png}\caption{Scheme of out-of-sample evaluation along meters and time for each group and method}\label{fig:scheme} \end{figure} \subsection{Results} In this section, we summarize the main numerical results obtained when applying the three approaches (\textit{naive}, \textit{auto.arima}, and the proposed LSTM model) to each of the 19 groups considered. In Figure \ref{fig:TestMeters}, we can observe the median MAEs for the three implemented approaches along the testing meters. In particular, for each of the smart meters in the testing set of each group, we have computed the median of all the MAEs obtained over the out-of-sample days in the back-testing. It can be noted how the \textit{auto.arima} approach is on average 5\% better than the \textit{naive} approach but is sometimes a bit worse. On the other hand, the LSTM approach attains the best performance. In particular, the performance of the LSTM approach is around 19\% better than that of the \textit{auto.arima} and around 24\% better than that of the \textit{naive} approach. This implies a good performance of the proposed methodology even for new customers in the database.
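A minimal sketch of this evaluation (in Python/NumPy, assuming that the forecasts and real loads of a group are stored as arrays of shape meters $\times$ days $\times$ 24) is:
\begin{verbatim}
import numpy as np

def mae(forecast, actual):
    # 24-hour-ahead MAE per (meter, day); inputs have shape (meters, days, 24).
    return np.abs(forecast - actual).mean(axis=-1)

def median_maes(err):
    # Median MAE "in testing meters" and "in testing days", as defined above.
    per_meter = np.median(err, axis=1)  # median over out-of-sample days, per meter
    per_day = np.median(err, axis=0)    # median over testing meters, per day
    return per_meter, per_day
\end{verbatim}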
\begin{figure} \center \includegraphics[width=6in]{TestSummaryMeters.png}\caption{Out-of-sample performance for smart meters in the testing set}\label{fig:TestMeters} \end{figure} On the other hand, Figure \ref{fig:TestPeriods} shows the median MAEs for all smart meters along the out-of-sample days. Note that the errors of the \textit{auto.arima} and \textit{naive} approaches behave similarly over time (\textit{auto.arima} performs around 3\% better than the \textit{naive}), while the proposed LSTM outperforms the \textit{naive} approach by 24\% on average and the \textit{auto.arima} one by 21\% on average over \emph{all} the out-of-sample periods. \begin{figure} \center \includegraphics[width=6in]{TestSummaryDays.png}\caption{Out-of-sample performance for testing days}\label{fig:TestPeriods} \end{figure} To better illustrate the load profiles forecasted by the three considered methods, we have selected the ACORN-U group because it has the largest error in Figure \ref{fig:TestMeters}. In particular, Figure \ref{fig:ACORNUpath} shows the evolution of the median MAEs along the 65 out-of-sample days for all the meters in the ACORN-U group. Note that although forecasting performance depends on time, the proposed LSTM procedure has consistently better out-of-sample errors than the benchmarks. \begin{figure} \center \includegraphics[width=6in]{ACORNUpath.png}\caption{Out-of-sample performance for testing days (ACORN-U group)}\label{fig:ACORNUpath} \end{figure} Finally, it is worth analyzing the behavior of our proposal for the Global Group. This group contains 200 time series randomly selected from the other 18 groups. We have chosen 160 of those series to train the unique LSTM and used it to forecast the other 40 series in the testing set. Figure \ref{fig:GlobalGrouppath} shows the evolution of the median MAEs along the 275 days in the out-of-sample period for those 40 meters. Again, our method consistently outperforms the two benchmarks by more than 20\%, implying the effectiveness of the proposal even when a classification of groups is not available a priori. Of course, if a previous classification of the time series into homogeneous groups is available, our methodology is able to exploit this classification to better forecast each group. \begin{figure} \center \includegraphics[width=6in]{GlobalGrouppath.png}\caption{Out-of-sample performance for testing days (Global Group)}\label{fig:GlobalGrouppath} \end{figure} The evaluation of these models indicates that our proposal has achieved promising and competitive results for all the geo-demographic groups considered in our dataset, implying a good potential for its implementation in large-scale smart-meter load forecasting. \section{Conclusions}\label{Con} We have proposed a general methodology, based on recurrent neural networks and specifically on an LSTM model, that is able to forecast consumptions from electricity smart meters in an efficient way for a utility. These forecasts could be used in the implementation of demand response policies, to anticipate and prevent system peaks or congested lines, to adapt load consumption to renewable generation availability, to identify consumption anomalies that may originate from equipment failure or from electricity theft, etc. Instead of using traditional and univariate approaches for each smart meter, we propose a single but complex LSTM model that captures the main features of individual consumptions and also information from different household consumptions.
As a consequence, the model attains promising results with respect to competitive benchmarks, with more than 20\% better performance on average across all the testing periods and all the testing meters in our back-testing experiment. We would like to mention some disadvantages of the proposed approach. As with any other complex neural network, there are some difficulties in the design of the network. In particular, there is no clear way to design the network topology regarding the configuration of the layers and the corresponding units or neurons. There are also many hyper-parameters to be considered, such as the number of layers, the number of units, the activation functions, the type of regularization or dropout, optimization parameters, etc. This implies a considerable amount of effort and time before the network starts working properly. In addition, neural networks in general are difficult to understand and interpret. Moreover, the good results depend on the degree of homogeneity of the considered smart meters. In this work, we have used 19 groups of households, the smallest with 18 smart meters and the largest with 1001. We have obtained promising results for the nearly 4000 smart meters, even when the groups have some degree of heterogeneity. However, we do not expect good results for smart meters with very unusual profiles (outliers) within their group. On the other hand, the success of the proposed model is partially explained by the large amount of data used to train it. Because we are jointly treating and combining the information of all the available time series in a group, the network is able to capture non-linear relations, seasonalities and other hidden patterns that traditional approaches cannot capture due to the lack of enough data. In summary, our methodology is able to outperform competitive univariate forecasting tools for electricity consumption, providing an implementable and scalable approach for massive time-series forecasting. In particular, it may provide near real-time forecasts for hundreds of thousands of smart meters. The model training would need to be done only once every few months, and by using just a representative subset of the smart-meter time series. Once the network is trained, and its input properly defined, it can provide short-term smart-meter forecasts at almost no computational cost. \section*{Acknowledgment} The authors gratefully acknowledge the financial support from the Spanish government through projects MTM2017-88979-P and ECO2015-66593-P, and from Fundación Iberdrola through ``Ayudas a la Investigación en Energía y Medio Ambiente 2018''. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} One of the main goals of exoplanet searches is to understand our Solar System, with inner and outer planets including the benchmark planets: Earth and Jupiter. Although hot Jupiters around Sun-like stars and Earth-mass planets around M-dwarfs have been detected during the last decades \citep{gillon16,anglada-escude16}, few Jupiter-like planets have been detected around Sun-like stars, and such detections, when made, have only been possible with a single technique (e.g., \citealt{kuzuhara13,wittenmyer16}). $\epsilon$ Indi A is a good candidate to search for Solar System analogs. $\epsilon$ Indi A (HIP 108870, HR 8387, HD 209100, GJ 845) is a nearby K2V star (3.62\,pc according to \citealt{leeuwen07}) with a mass of 0.762$\pm$0.038\,$M_\odot$ \citep{demory09}, and a luminosity of 0.22\,$L_\odot$. This star is also accompanied by a brown dwarf binary with a separation of about 1459\,au \citep{scholz03}. A clear long-term signal has been established in the radial velocity data by \cite{endl02} and \cite{zechmeister13}. This signal is much larger than that expected from the relatively distant brown dwarfs and supports the existence of another companion with a period longer than 30 years. The non-detection of this signal in direct imaging by \cite{janson09} suggests that the companion inducing the radial velocity signal has a relatively low temperature and is a particularly interesting object for follow-up by multiple techniques. To find small Keplerian signals and to investigate the previously proposed long period companion around $\epsilon$ Indi A, we analyze the data from \cite{zechmeister13} and the recent HARPS data in the ESO archive in a Bayesian framework. With new data and noise modelling techniques there is the potential for detection of weak signals (e.g. \citealt{feng16, feng17b}). Using the {\small Agatha} software \citep{feng17a}, we compare noise models to find the so-called Goldilocks model \citep{feng16}. We also calculate the Bayes factor periodograms (BFPs) to test the sensitivity of signals to the choice of noise models. These signals are further diagnosed by calculating the BFPs for various noise proxies and their residuals. The combination of RV and astrometry has been used to detect and characterize a few short period planets, though no cold Jupiters have been detected in a similar way. Thanks to the two decades of baseline provided by the Hipparcos and Gaia surveys \citep{leeuwen07,brown18,lindegren18}, we now have at least two epochs with well determined positions and proper motions, which are quite sensitive to the stellar reflex motion caused by Jupiter-like planets. Since the barycentric motion of the satellite is modeled in combination with proper motions in the analyses of the raw data \citep{lindegren18}, the astrometric survey data provide the position and velocity of a star at a given epoch viewed from the barycenter of the Solar System. $\epsilon$ Indi A provides a test case for a nearby object with a significant change in RV and a change in position and proper motion between Hipparcos and Gaia over a long time baseline of observations. This paper is structured as follows. First, we introduce the data in section \ref{sec:data}. Then, in section \ref{sec:detection}, we describe the statistical and numerical methods for the analysis of RV data and constrain the long period signal present in the data. In section \ref{sec:signal}, we concentrate on the HARPS data and constrain the activity signals using posterior samplings.
In section \ref{sec:combined}, we analyze the RV and astrometry data to constrain the mass and orbital parameters of $\epsilon$ Indi A b. Finally, we discuss and conclude in section \ref{sec:conclusion}. \section{RV data and model}\label{sec:data} We obtain the HARPS data by processing the spectra in the ESO archive (Programmes 60.A-9036, 072.C-0488, 072.C-0513, 073.C-0784, 074.C-0012, 077.C-0530, 078.C-0833, 079.C-0681, 081.D-0870, 183.C-0972, 087.D-0511, 091.C-0853, 091.C-0844, 094.C-0894, 192.C-0852, 196.C-1006, 098.C-0446) using the TERRA algorithm \citep{anglada12}. The HARPS data also include various noise proxies courtesy of the HARPS pipeline, including the bisector span (BIS) and the full width at half maximum of spectral lines (FWHM). These are supplemented by TERRA-generated indices including R$'_{\rm HK}$ (or CaHK index), the intensity of the H-alpha line (H-alpha), indices from sodium lines (NaD1 and NaD2), and the differential RV sets which record wavelength dependent noise \citep{feng17b}. We only use the 3AP3-2 differential set, which is found to be correlated with the RVs. In the left-hand panel of Fig. \ref{fig:data}, we show all the HARPS RVs including high cadence epochs (called ``HARPS''; 4198 points). We note that the RVs from JD2455790 to JD2455805 are measured with high cadence, leading to 3636 RVs spread over two weeks. Such high-cadence data were obtained to study high frequency stellar oscillations and most of them are measured with a considerably lower signal to noise ratio than the rest of the data. To remove the stellar and instrumental systematics, we define the HARPSlow data set by excluding high cadence RVs with signal to noise ratio less than 110 (HARPShigh; shown by the grey points in Fig. \ref{fig:data}). This dataset consists of 518 points. We are aware of the ill-determined offset for post-2015 data \citep{curto15} due to the fibre change, which also gives relatively high BIS values (see the right panel of Fig. \ref{fig:data}). We name the pre-2015 data set ``HARPSpre'', and the post-2015 data ``HARPSpost''. Thus the HARPS set is a combination of HARPSlow and HARPShigh, while HARPSlow is a combination of HARPSpre and HARPSpost. We will use these sub-sets to test the sensitivity of signals to the choice of datasets\footnote{All of these data sets are available at \url{http://star-www.herts.ac.uk/~ffeng/research\_note/eps_indi/}.} in section \ref{sec:signal}. An offset parameter is needed to combine HARPSpre and HARPSpost. To avoid potential degeneracy between this offset and Keplerian signals, especially those with periods comparable to the HARPSpost time span, we use HARPSpre to investigate short-period signals. To constrain the long period signal, we use HARPSpre and HARPSpost as well as the CES long camera (LC) and very long camera (VLC) data sets from \cite{zechmeister13}. We also use the UVES/VLT RVs, which are measured at epochs of 2453272, 2457905, and 2457993, to constrain the trend. The UVES data are shown in Tables \ref{tab:data}, \ref{tab:data2}, and \ref{tab:data3}. The other data sets are available at \url{http://star.herts.ac.uk/~hraj/HD209100/}. \begin{figure*} \begin{center} \includegraphics[scale=0.55]{paper_data2.pdf} \caption{HARPS data of $\epsilon$ Indi A. Left panel: the HARPS data set consists of the high cadence RVs (HARPShigh) with signal to noise ratio less than 110 (grey) and the low cadence data (HARPSlow). The HARPSlow set includes the pre-2015 (HARPSpre; black) and post-2015 (HARPSpost; red) RVs.
Right panel: correlation between the BIS values and RVs of the HARPSlow set after subtracting the corresponding best-fitting parabola.} \label{fig:data} \end{center} \end{figure*} To model the RV variation caused by stellar activity and planetary perturbations, we use a combined moving average (MA) and Keplerian model: \begin{equation} \hat{v}_r(t_i)=v_r^p(t_i)+\sum_{l=1}^q w_l \,{\rm exp}\left(-\frac{|t_i-t_{i-l}|}{\tau}\right)\epsilon_{i-l}~, \label{eqn:rv1} \end{equation} where $\{w_l\}$ and $\tau$ are respectively the amplitudes and timescale of the $q^{\rm th}$ order MA model (or MA(q)), and $\epsilon_{i-l}$ is the fit residual of the RV measured at $t_{i-l}$. The planet-related RV variation is \begin{equation} v_r^p(t_i)=\sum_{k=1}^{N_p}v_k^p(t_i)+b~, \label{eqn:rv2} \end{equation} where $b$ is the RV offset and independent offsets are adopted for different instruments. The RV variation caused by planet $k$ is \begin{equation} v_k^p(t_i)=\sqrt{\frac{Gm_{pk}^2}{(m_{pk}+m_s)a_k(1-e_k^2)}}\,\sin{I_k}\left[\cos\left(\omega_k+\nu_k(t_i)\right)+e_k\cos(\omega_k)\right]~, \label{eqn:rv3} \end{equation} where $m_{pk}$ is the planet mass, $m_s$ is the stellar mass, and $a_k$, $e_k$, $I_k$, $\omega_k$ and $\nu_k$ are the orbital elements of the $k^{\rm th}$ planet. The true anomaly and eccentric anomaly are derived from the mean anomaly at the reference epoch for a given time. Independent sets of MA model parameters are applied to different data sets. We use a superscript to denote the parameter for a given set. For example, $\tau^{\rm HARPSpre}$ is the correlation time scale for the HARPSpre data set. \section{Confirmation of a cold Jupiter}\label{sec:detection} A significant RV trend has been found in the CES and HARPS data sets \citep{endl02,zechmeister13}, suggesting a long-period planetary companion to $\epsilon$ Indi A. To constrain this RV trend better by including newer HARPS and UVES data, we analyze the CES LC and VLC data sets in combination with the HARPSpre, HARPSpost, and UVES sets. The LC and VLC data sets are corrected by accounting for the 1.8\,m\,s$^{-1}$/\,yr acceleration caused by the proper motion (the so-called ``perspective acceleration''). Based on Bayesian model comparison \citep{feng17a}, we model the trend in the combined data using one Keplerian function, and model the noise in LC, VLC, and UVES as white noise. We use the first and second order moving average models (i.e. MA(1) and MA(2)) respectively to model the noise in HARPSpre and HARPSpost, in the same manner as e.g., \cite{tuomi12,feng16}. We also vary the offsets between data sets and adopt the prior distributions for all parameters from \cite{feng16}. Specifically, we use a one-sided Gaussian prior distribution $\mathcal{N}(0,0.2)$ for eccentricity to account for the eccentricity distributions found in radial velocity planets \citep{kipping13} and in transit systems \citep{kane12,VanEylen18}. Since there is no universal eccentricity distribution for all types of planets and stars, our use of a semi-Gaussian only captures the broad features of the real eccentricity distributions. Nevertheless, we will test the sensitivity of our results to eccentricity priors later. We use a log-uniform distribution for time scale parameters, and a uniform distribution for the other parameters. Based on adaptive MCMC samplings \citep{haario06}, we find a significant signal at a period of $17155_{-4748}^{+6940}$\,d or $47_{-13}^{+19}$\,yr, which is well constrained by the MCMC samplings. The one-planet model improves the maximum likelihood by about $10^{144}$ (or BF=$10^{136}$) compared with the baseline model.
The one-planet model also increases the maximum likelihood by about $10^{37}$ (or BF=$10^{31}$) compared with the baseline model combined with a linear trend, and by $10^{22}$ (or BF=$10^{18}$) compared with the baseline combined with a parabola. Hence a linear trend, a parabola, or a longer period signal is not favored by the data. Since this trend is not found in the noise proxies, it is likely to be caused by a wide-orbit planet with a minimum mass of $m\sin{I}=2.6_{-0.72}^{+2.0}~M_{\rm Jup}$, a semi-major axis of $a=12_{-2.0}^{+2.1}$\,au, a semi-amplitude of $K=25_{-5.5}^{+16}$\,m/s, an eccentricity of $e=0.26_{-0.071}^{+0.078}$, an argument of periapsis of $\omega=60_{-16}^{+35}$\,deg, and a mean anomaly of $M_0=170_{-82}^{+50}$\,deg at the first epoch (BJD2448929.56) of the combined RV data. The optimal value is determined at the {\it maximum a posteriori} and the uncertainties correspond to the 10\% and 90\% quantiles of the posterior distributions. This solution for $\epsilon$ Indi Ab is shown in Fig. \ref{fig:fit}. The offset between the VLC and LC sets is 4.6$\pm$2.8\,m/s, which is within the uncertainty of 8\,m/s determined from a sample of VLC and LC sets \citep{zechmeister13}. The offset between the HARPSpre and HARPSpost sets is 12$\pm$1.3\,m/s, consistent with the offset between pre- and post-fibre exchange data for a K2 star given by \cite{curto15}. The mass determined in this work is consistent with the range given in \cite{janson09}, who adopt a 2.6\,m\,s$^{-1}$\,yr$^{-1}$ planet-induced acceleration of $\epsilon$ Indi A relative to its barycenter (or stellar reflex motion) determined from epochs earlier than JD2455000 (or June 2009), leading to an estimated mass limit of 5-20\,$M_{\rm Jup}$. On the other hand, \cite{zechmeister13} adopt a 2.4\,m\,s$^{-1}$\,yr$^{-1}$ slope and find a broad limit of $M\sin{i}>0.97$\,$M_{\rm Jup}$ and $P>30$\,yr. With the benefit of many years of precise HARPS epochs, we are in a position to determine a more modest slope of 1.8\,m\,s$^{-1}\,{\rm yr}^{-1}$ for the HARPSpre epochs and a negative slope of -2.9\,m\,s$^{-1}\,{\rm yr}^{-1}$ for the HARPSpost epochs. Moreover, the combination of the UVES and HARPS sets also strongly favors a curvature in the RV variation. The early epochs of UVES overlap with the HARPS epochs and thus enable a good offset calibration. The recent epochs of UVES suggest a significant decrease in RV (see Fig. \ref{fig:fit}), as already seen in the HARPSpost set. Thus a Keplerian function is more appropriate than the previously used linear trend to model the RV variation and constrain the signal. \begin{figure} \begin{center} \includegraphics[scale=0.6]{fit_epsIndi_res_Esd02.pdf} \caption{Best fit for the data sets of CES-LC, CES-VLC, HARPSpre, HARPSpost, and UVES. The offset and correlated noise are subtracted from the data. The red line denotes the Keplerian signal with a period of 47\,yr based on the analysis of the RV data. } \label{fig:fit} \end{center} \end{figure} Since $\epsilon$ Indi Ab is on such a wide orbit, it cannot be expected to follow the eccentricity distribution found for the relatively short-period planets that dominate the sample of known planets \citep{kipping13,juric08}. Hence we test whether the values of the orbital parameters are sensitive to the choice of eccentricity prior. We change the standard deviation of the one-sided Gaussian prior from 0.2 to 0.4, and also adopt a uniform eccentricity prior. The orbital parameters and the log BF for the different priors are shown in Table \ref{tab:test}. 
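The ``one-sided'' Gaussian eccentricity prior is simply a zero-mean Gaussian truncated to $0<e<1$ (hence the factor of two in the $P(e)=2\mathcal{N}(0,\,\sigma)$ notation of Table \ref{tab:test}). A minimal, illustrative sketch of how such a prior and its uniform alternative can be implemented, for example with scipy, is:
\begin{verbatim}
from scipy.stats import truncnorm, uniform

def eccentricity_prior(sigma=0.2):
    """Zero-mean Gaussian truncated to [0, 1]; essentially 2*N(0, sigma)."""
    a, b = 0.0 / sigma, 1.0 / sigma     # truncation bounds in units of sigma
    return truncnorm(a, b, loc=0.0, scale=sigma)

prior_narrow = eccentricity_prior(0.2)   # prior adopted in the main analysis
prior_broad = eccentricity_prior(0.4)    # broader prior for the sensitivity test
prior_flat = uniform(0.0, 1.0)           # uniform alternative, P(e)=1 on [0, 1]

e_trial = 0.26
print(prior_narrow.logpdf(e_trial), prior_broad.logpdf(e_trial),
      prior_flat.logpdf(e_trial))

# Prior samples, e.g. to initialise MCMC walkers:
samples = prior_narrow.rvs(size=1000, random_state=1)
\end{verbatim}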
It is evident that the orbital parameters, offsets, and jitters from the different solutions are consistent with each other and are thus not sensitive to the choice of eccentricity prior. \begin{table} \centering \caption{Sensitivity of orbital parameters to the choice of eccentricity priors. The optimal value is calculated at the {\it maximum a posteriori} (MAP) and the uncertainties correspond to the 10\% and 90\% quantiles. The ln(BF) of the one-planet model with respect to the baseline model is estimated by the Bayesian information criterion (BIC; \citealt{schwarz78}), and ln(BF)>5 is the signal detection threshold \citep{feng17a}. } \renewcommand{\arraystretch}{1.25} \begin{tabular}{l*3{c}} \hline\hline & $P(e)=2\mathcal{N}(0,\,0.2)$ &$P(e)=2\mathcal{N}(0,\,0.4)$& $P(e)=1$\\ &$0<e<1$&$0<e<1$&$0<e<1$\\\hline $P$ [yr]&$47_{-13}^{+19}$&$44_{-9.3}^{+23}$&$49_{-14}^{+19}$\\ $K$ [m/s]&$25_{-5.5}^{+16}$&$29_{-9.1}^{+13}$&$28_{-7.9}^{+1.7}$\\ $e$&$0.26_{-0.071}^{+0.078}$&$0.26_{-0.049}^{+0.11}$&$0.26_{-0.054}^{+0.11}$\\ $\omega$ [deg]&$60_{-16}^{+35}$&$80_{-36}^{+17}$ &$64_{-18}^{+32}$\\ $M_0$ [deg]&$170_{-82}^{+50}$ &$140_{-54}^{+80}$&$167_{-80}^{+51}$\\ $b^{\rm LC}$&$-17_{-15}^{+7.7}$&$-17_{-16}^{+7.7}$&$-19_{-15}^{+9.7}$\\ $\sigma^{\rm LC}$&$4.2_{-2.2}^{+2.9}$ &$3.5_{-1.5}^{+3.5}$ &$2.8_{-0.79}^{+4.2}$\\ $b^{\rm VLC}$&$-20_{-17}^{+6.9}$&$-22_{-15}^{+8.8}$ &$-23_{-17}^{+9.7}$\\ $\sigma^{\rm VLC}$&$0.14_{-0.13}^{+2.4}$&$0.59_{-0.35}^{+1.9}$ &$0.52_{-0.25}^{+2.1}$\\ $b^{\rm HARPSpre}$&$-25_{-17}^{+7.5}$&$-27_{-15}^{+9}$&$-29_{-17}^{+9.9}$\\ $\sigma^{\rm HARPSpre}$&$1.5_{-0.076}^{+0.055}$&$1.4_{-0.024}^{+0.11}$&$1.4_{-0.034}^{+0.099}$\\ $w_1^{\rm HARPSpre}$&$0.67_{-0.027}^{+0.09}$&$0.68_{-0.038}^{+0.081}$&$0.68_{-0.038}^{+0.078}$\\ $w_2^{\rm HARPSpre}$&$0.34_{-0.10}^{+0.028}$&$0.31_{-0.066}^{+0.067}$ &$0.35_{-0.11}^{+0.024}$\\ ${\rm ln}{\frac{\tau^{\rm HARPSpre}}{\rm 1 day}}$&$0.97_{-0.18}^{+0.26}$&$1.1_{-0.34}^{+0.082}$&$1.0_{-0.21}^{+0.21}$\\ $b^{\rm HARPSpost}$&$-12_{-18}^{+7.2}$&$-17_{-13}^{+11}$&$-15_{-18}^{+9.5}$ \\ $\sigma^{\rm HARPSpost}$&$1.0_{-0.083}^{+0.21}$&$1.0_{-0.056}^{+0.24}$&$1.1_{-0.19}^{+0.11}$\\ $w_1^{\rm HARPSpost}$& $0.92_{-0.051}^{+0.068}$& $0.97_{-0.097}^{+0.016}$&$0.92_{-0.053}^{+0.062}$ \\ ${\rm ln}{\frac{\tau^{\rm HARPSpost}}{\rm 1 day}}$&$2.6_{-0.77}^{+3.9}$ & $6_{-4.2}^{+0.27}$&$2.7_{-0.9}^{+3.7}$\\ $b^{\rm UVES}$&$-12_{-17}^{+7.6}$ &$-15_{-15}^{+9}$ &$-16_{-17}^{+9.7}$\\ $\sigma^{\rm UVES}$&$0.63_{-0.14}^{+0.085}$& $0.6_{-0.11}^{+0.12}$ &$0.58_{-0.086}^{+0.15}$\\ ln(BF)&314&314&314\\\hline \end{tabular} \label{tab:test} \end{table} \section{Diagnostics of signals using Bayes factor periodograms} \label{sec:signal} In this section we focus on the constraints provided by the long-term HARPS dataset alone and investigate evidence for other signals within the data sets of HARPS, HARPSlow, and HARPSpre using the {\small Agatha} software \citep{feng17a}, which is essentially a framework of red noise periodograms. By comparing different noise models in Agatha, we find the following optimal noise models: (1) HARPS --- a fifth order moving average (or MA(5)) in combination with FWHM and NaD1; (2) HARPSlow --- MA(2) combined with BIS and NaD1; (3) HARPSpre --- MA(2) combined with the S-index and NaD1. To find primary signals, we calculate the Bayes factor periodogram (BFP) for the MA(1) model in combination with proxies for different data sets, and show them in Fig. \ref{fig:BFP1sig}. 
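As background to the figures discussed next, the principle of a BFP can be illustrated with a deliberately simplified sketch: at each trial period a sinusoid plus offset is compared with an offset-only model, and ln(BF) is approximated from the difference in BIC (as in the caption of Table \ref{tab:test}). The fragment below assumes white noise and omits the MA terms and activity proxies of the real analysis, so it only conveys the idea:
\begin{verbatim}
import numpy as np

def ln_bf_periodogram(t, rv, err, periods):
    """Schematic BIC-based Bayes factor periodogram (white noise only)."""
    n = len(t)
    w = 1.0 / err**2
    b0 = np.sum(w * rv) / np.sum(w)            # null model: constant offset
    chi2_0 = np.sum(w * (rv - b0)**2)
    bic_0 = chi2_0 + 1 * np.log(n)             # one free parameter
    ln_bf = np.empty(len(periods))
    for i, P in enumerate(periods):
        phi = 2.0 * np.pi * t / P
        X = np.column_stack([np.cos(phi), np.sin(phi), np.ones(n)])
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(X * sw[:, None], rv * sw, rcond=None)
        chi2_1 = np.sum(w * (rv - X @ coef)**2)
        bic_1 = chi2_1 + 3 * np.log(n)         # three free parameters
        ln_bf[i] = 0.5 * (bic_0 - bic_1)       # ln(BF) ~ -Delta(BIC)/2
    return ln_bf

# Simulated example; ln(BF) > 5 is the detection threshold used in this work.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2000.0, 300))
rv = 2.0 * np.sin(2.0 * np.pi * t / 18.0) + rng.normal(0.0, 1.5, t.size)
periods = np.logspace(0.5, 3.0, 2000)
lnbf = ln_bf_periodogram(t, rv, np.full(t.size, 1.5), periods)
print(periods[np.argmax(lnbf)])                # recovers a period near 18 d
\end{verbatim}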
In Fig. \ref{fig:BFP1sig}, the signal around 11\,d is significant in the HARPS data set because the high cadence data favors short period signals. There is also strong power around this signal for the HARPSlow and HARPSpre data sets, even though the high cadence and HARPSpost epochs have been excluded. For all data sets, the signal at 18\,d is significant, although the HARPS data set favors the 11\,d signal more. The rather strong and noisy power around 11\,d in the left panels is contributed by the 11 days of consecutive data which comprise the high cadence RVs. In particular, the signal at a period of 278\,d is significant in the BFPs for MA(1) but becomes much weaker in the BFPs for MA(1) combined with noise proxies, suggesting an activity origin. The other strong signals in these BFPs are either aliases or harmonics of these three signals, as we will see in the subsequent analysis. \begin{figure} \begin{center} \includegraphics[scale=0.4]{BFP_1signal_HD209100.pdf} \caption{BFPs for the MA(1) model in combination with noise proxies for the HARPS (left), HARPSlow (middle) and HARPSpre (right) data sets. The upper panels show the BFPs for the MA(1) model, while the lower ones show the BFPs for the MA(1) model in combination with the optimal noise proxies. The signals at periods of 10, 12, 18 and 278\,d are shown by dotted red lines. The thresholds of $\ln({\rm BF})=0$ and 5 are shown by the horizontal dotted and dashed lines, respectively. } \label{fig:BFP1sig} \end{center} \end{figure} Since the HARPSpre data set is more conservative than the HARPShigh and HARPSlow sets due to its sampling, and is not subject to the uncertainty in the radial velocity offset caused by the fibre change, we calculate the BFPs for the raw HARPSpre data, for the RV residuals after signal subtraction, and for various noise proxies, and show these BFPs in Fig. \ref{fig:diagnostic}. In panels P1-P14, we observe that the signals at periods of about 11, 18, and 278\,d are significant in the data. The 278\,d signal is likely caused by activity because its significance is reduced after accounting for the correlation between RVs and noise proxies, as seen in P5-P7. This signal is found to be significant in the BFPs of the NaD1, R$'_{\rm HK}$, NaD2, and H$\alpha$ indices. In addition, the signal at a period of around 2500\,d is also significant in the BFPs of NaD1 (P9), R$'_{\rm HK}$ (P13), NaD2 (P17), and BIS (P23 and P24). We interpret this signal as the magnetic cycle and the 278\,d signal as a secondary cycle in the magnetic variation. We also observe strong signals around 18\,d and its double period 36\,d in NaD1, NaD2, R$'_{\rm HK}$, and BIS. Nevertheless, the 18\,d signal does not disappear in the BFPs accounting for linear correlations between the RVs and NaD1 and R$'_{\rm HK}$. Hence we conclude that this signal is either due to a nonlinear effect of stellar rotation or due to a planet with an orbital period similar to the rotation period. The signal at about 11\,d is found to be unique and significant in the residual of NaD1 after subtraction of the 2500 and 18\,d signals. Thus this signal is probably caused by stellar activity or is an alias of activity-induced signals. We also look into the ASAS and Hipparcos photometric data but find few useful epochs and identify no significant signals. Therefore we conclude that the primary and secondary magnetic cycles of $\epsilon$ Indi are 2500 and 278\,d. The rotation period of $\epsilon$ Indi is about 36\,d, approximately twice the 18\,d period. 
This rotation period is rather different from the 22\,d value estimated by \cite{saar97} from Ca II measurements. Considering that the 36\,d rotation period is derived from a relatively large dataset of high precision RVs and multiple activity indicators, we believe that 36\,d is a more reliable value of the rotation period. On the other hand, the half rotation period, 18\,d, is more significant than the rotation period in the RV data. This phenomenon is also found in the RVs of other stars (e.g. HD 147379; \citealt{feng18d}), and is probably caused by a spot or spot complex that modulates the spectral lines more strongly over one half of the rotation period. The signal at a period of about 11\,d is also related to the stellar rotation, since $1/(1/36+1/18)=12$\,d, consistent with the 10--12\,d alias pair of this signal. Based on the Bayesian quantification of these signals using the MA(1) model \citep{tuomi12,feng16}, the semi-amplitudes of the signals at periods of 10.064$^{+0.003}_{-0.001}$\footnote{Since the 10 and 12\,day signals are aliases of each other, we only report one signal here.}, 17.866$^{+0.006}_{-0.003}$, and 278$^{+1.7}_{-0.60}$\,d are 1.4$_{-0.31}^{+0.18}$, 2.1$_{-0.19}^{+0.37}$, and 2.3$_{-0.31}^{+0.34}$\,m/s, respectively. The non-detection of additional signals puts an upper limit of 1\,m/s on the semi-amplitude of the RV variation induced by potential planets with periods less than the HARPSpre baseline (i.e. $P<4000$\,days) in the system. \begin{figure*} \begin{center} \includegraphics[scale=0.55]{diagnostics.pdf} \caption{BFPs for RVs and noise proxies. Top row: BFPs for MA(1) for the raw HARPSpre data and for HARPSpre after subsequent subtraction of the signals at periods of 11, 18, 36, 278, and 2500\,d. The 11\,d signal is denoted by two periods at 10 and 12\,d according to the posterior samplings. These periods are shown at the top of the figure. Second row: BFPs for MA(1) combined with the S-index and NaD1 for the raw HARPSpre data and for HARPSpre subtracted in turn by the signals at periods of 18 and 11\,d, and the Lomb-Scargle periodogram of the window function (P8). The third row onward shows BFPs for MA(1) for the noise proxies and their residuals. P12 and P16 are respectively the zoom-in versions of P11 and P15 and have a period range of $[5,~20]$\,d. For each panel, the Pearson correlation coefficient in the top right corner shows the correlation between the noise proxies and the RVs, which are detrended by a second order polynomial. All panels are denoted by ``Pn'' with n varying from 1 to 24. The horizontal dotted and dashed lines denote $\ln{\rm BF}=0$ and 5, respectively. The BFP for the FWHM is not shown because there is no significant periodicity. The values of ln(BF) are indicated by the thresholds 0 and 5 rather than shown by axis labels since different panels have different ranges of ln(BF). } \label{fig:diagnostic} \end{center} \end{figure*} Based on the above analysis, different noise proxies are sensitive to different types of stellar activity. NaD1 and R$'_{\rm HK}$ are sensitive to all activity-induced signals, while BIS is sensitive to the stellar rotation signal. H$\alpha$ is only sensitive to the secondary magnetic cycle. Considering the differential sensitivity of activity indicators to activity signals, a model including a linear correlation between the RVs and these indicators would remove activity noise but also introduce extra noise. This is evident from the comparison of P1 with P5 in Fig. \ref{fig:diagnostic}. 
Although the 278\,d signal disappears after accounting for the linear correlation with the S-index and NaD1, a longer period around 3000\,d and some short-period signals become significant (also compare P2 with P6 and P3 with P7). Such extra signals are caused by fitting a linear function of noise proxies to the RVs even though these proxies are not perfectly linearly correlated with the RVs. Hence these proxies bring their own signals into the likelihood/posterior distribution, which is a combined function of the data and the model, while mitigating the main activity signals in the RVs. While the traditional activity indicators are limited in removing activity-induced signals, differential RVs are better at removing wavelength-dependent signals without introducing extra noise, because they provide a simple way to weight spectral orders {\it a posteriori} \citep{feng17c}. Hence a reliable diagnostic of the nature of signals is to test their sensitivity to different noise models, data chunks, and wavelengths. Since the activity-induced RV variations are not strictly periodic, we do not attempt to model them with additional Keplerian functions or a Gaussian process; the fit of a one-planet model is therefore appropriate for parameter inference (see section \ref{sec:detection}). \section{Combined analysis of RV and astrometry}\label{sec:combined} \subsection{Combined model of RV and astrometry} In a star-planet or, more generally, star-companion system, the position of the star in its orbital plane, calculated from Kepler's equation, is \begin{equation} \bm{r}_s(t)= \begin{bmatrix} x(t)\\ y(t)\\ 0 \end{bmatrix} =\frac{m_p}{m_p+m_s}a \begin{bmatrix} \cos{E(t)}-e\\ \sqrt{1-e^2}\sin{E(t)}\\ 0 \end{bmatrix}~, \label{eqn:rs} \end{equation} where $m_s$ and $m_p$ are respectively the masses of the star and the planet, $a$ is the semi-major axis of the planet with respect to the star, $E(t)$ is the eccentric anomaly, and $e$ is the eccentricity. The corresponding stellar reflex velocity is \begin{equation} \bm{v}_s(t)= \begin{bmatrix} v_x(t)\\ v_y(t)\\ 0 \end{bmatrix} =\sqrt{\frac{Gm_p^2}{(m_p+m_s)a(1-e^2)}} \begin{bmatrix} -\sin{\nu(t)}\\ \cos{\nu(t)}+e\\ 0 \end{bmatrix}~, \label{eqn:vs} \end{equation} where $\nu(t)$ is the true anomaly at time $t$ and $G$ is the gravitational constant. The stellar position $\bm{r}_s(t)$ is converted to the observer frame coordinates, $\bm{r}_s^{\rm obs}(t)$, by applying Euler rotations using \begin{equation} \bm{r}_s^{\rm obs}(t)=R_Z(\Omega)R_X(-I)R_Z(\omega)\bm{r}_s(t)~, \label{eqn:rot} \end{equation} where $\Omega$ is the longitude of the ascending node, $\omega$ is the argument of periastron, and $I$ is the inclination of the stellar orbit with respect to the sky plane. The X axis (along North in the sky plane), Y axis (along East in the sky plane), and Z axis (along the line of sight towards the observer) form a right-handed Cartesian coordinate system in the observer frame. 
The expansion of equation (\ref{eqn:rot}) gives the observed location of the star in the XYZ coordinate system, \begin{equation} \begin{bmatrix} x^{\rm obs}(t)\\ y^{\rm obs}(t)\\ z^{\rm obs}(t)\\ \end{bmatrix} = \begin{bmatrix} A&F&-\sin{\Omega}\sin{I}\\ B&G&\cos{\Omega}\sin{I}\\ -\sin{\omega}\sin{I}&-\cos{\omega}\sin{I}&\cos{I} \end{bmatrix} \begin{bmatrix} x(t)\\ y(t)\\ 0 \end{bmatrix} ~, \end{equation} where \begin{align} A&=\cos{\Omega}\cos{\omega}-\sin{\Omega}\sin{\omega}\cos{I}\\ B&=\sin{\Omega}\cos{\omega}+\cos{\Omega}\sin{\omega}\cos{I}\\ F&=-\cos{\Omega}\sin{\omega}-\sin{\Omega}\cos{\omega}\cos{I}\\ G&=-\sin{\Omega}\sin{\omega}+\cos{\Omega}\cos{\omega}\cos{I} \end{align} are the so-called Thiele-Innes constants. Hence the variations in right ascension and declination caused by a planet at a given time are \begin{align} \alpha^p(t)&=\tilde{\omega}(t)y^{\rm obs}(t)=\tilde{\omega}(t)[Bx(t)+Gy(t)]~,\nonumber\\ \delta^p(t)&=\tilde{\omega}(t)x^{\rm obs}(t)=\tilde{\omega}(t)[Ax(t)+Fy(t)]~. \label{eqn:adp} \end{align} The corresponding planet-induced proper motion follows from the time derivative of $\bm{r}_s^{\rm obs}(t)$: \begin{align} \mu_\alpha^p(t)&=\tilde{\omega}(t)[Bv_x(t)+Gv_y(t)]~,\nonumber\\ \mu_\delta^p(t)&=\tilde{\omega}(t)[Av_x(t)+Fv_y(t)]~. \label{eqn:muadp} \end{align} The planet-induced RV is \begin{equation} v_r^p=-v_z^{\rm obs}=v\sin{I}[\cos{(\omega+\nu)}+e\cos{\omega}]~, \label{eqn:vrp} \end{equation} where $v\equiv\sqrt{Gm_p^2/[(m_p+m_s)a(1-e^2)]}$ is the velocity amplitude of equation (\ref{eqn:vs}). The stellar motion is the sum of the barycentric motion (denoted by superscript $b$) and the reflex motion (denoted by superscript $p$) described by \begin{align} \alpha(t)&=\frac{\alpha^p(t)}{\cos{\delta(t)}}+\alpha^b(t)~,\nonumber\\ \delta(t)&=\delta^p(t)+\delta^b(t)~,\nonumber\\ \frac{1}{\tilde{\omega}}&=-z^{\rm obs}(t)+\frac{1}{\tilde{\omega}^b(t)}~,\nonumber\\ \mu_\alpha(t)&=\mu_\alpha^p(t)+\mu_\alpha^b(t)~,\nonumber\\ \mu_\delta(t)&=\mu_\delta^p(t)+\mu_\delta^b(t)~,\nonumber\\ v_r(t)&=v_r^p(t)+v_r^b(t)~. \label{eqn:reflex} \end{align} Note that the distance (i.e. the inverse of the parallax), rather than the parallax itself, is additive. $\alpha^p(t)$ is a planet-induced offset in the sky plane and thus should be divided by $\cos{\delta(t)}$ to calculate the real change in right ascension. Assuming zero acceleration, the heliocentric motion of the barycenter of a planetary system is determined by \begin{align} \bm{v}^b(t)&=\bm{v}^b(t_0)\nonumber\\ \bm{r}^b(t)&=\bm{r}^b(t_0)+\bm{v}^b(t_0)(t-t_0) \label{eqn:bary_motion} \end{align} where $\bm{v}^b(t)$ is the barycentric velocity, and $\bm{r}^b(t)$ is the barycentric position. The linear motion is a good approximation of the heliocentric motion of the barycenter over a few decades, since the Galactic acceleration for nearby stars is typically below the mm\,s$^{-1}$\,yr$^{-1}$ level. Since the RV variation induced by the barycentric motion, $v_r^b(t)$, is subtracted from the RV data, we only use $v_r^p(t)$, defined in equation (\ref{eqn:vrp}), to model the Keplerian signal in the RV data. Since the barycentric RV is used to calculate the astrometric variation caused by the change of perspective, the planet-induced RV variation, gravitational redshift, and convective blueshift only contribute secondary variations to the astrometric data. Thus we approximate the barycentric RV by the Gaia RV if available, or by the value from the RV data sets we collected, i.e. $v_r^b(t)\approx v_r(t)$. We also neglect the parallax variation induced by the stellar reflex motion and by the RV, because these are secondary effects compared with the variations of the other observables. 
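The Thiele-Innes constants and the projection of the reflex orbit onto the astrometric observables (equations \ref{eqn:adp} and \ref{eqn:muadp}) lend themselves to a compact implementation. The following fragment is an illustrative sketch, not the production code; the unit conventions (au, au/yr, and parallax in mas) are assumptions made for this example:
\begin{verbatim}
import numpy as np

def thiele_innes(Omega, omega, I):
    """Thiele-Innes constants A, B, F, G; all angles in radians."""
    A = np.cos(Omega) * np.cos(omega) - np.sin(Omega) * np.sin(omega) * np.cos(I)
    B = np.sin(Omega) * np.cos(omega) + np.cos(Omega) * np.sin(omega) * np.cos(I)
    F = -np.cos(Omega) * np.sin(omega) - np.sin(Omega) * np.cos(omega) * np.cos(I)
    G = -np.sin(Omega) * np.sin(omega) + np.cos(Omega) * np.cos(omega) * np.cos(I)
    return A, B, F, G

def reflex_astrometry(x, y, vx, vy, plx, Omega, omega, I):
    """Planet-induced offsets and proper motions of the star.

    x, y (au) and vx, vy (au/yr) are the stellar orbital-plane position and
    velocity; plx is the parallax in mas.  Returns offsets in mas and
    proper-motion terms in mas/yr."""
    A, B, F, G = thiele_innes(Omega, omega, I)
    d_alpha = plx * (B * x + G * y)       # offset in right ascension
    d_delta = plx * (A * x + F * y)       # offset in declination
    mu_alpha = plx * (B * vx + G * vy)    # proper-motion offset in RA
    mu_delta = plx * (A * vx + F * vy)    # proper-motion offset in Dec
    return d_alpha, d_delta, mu_alpha, mu_delta
\end{verbatim}
With $x(t)$, $y(t)$, $v_x(t)$, and $v_y(t)$ evaluated from equations (\ref{eqn:rs}) and (\ref{eqn:vs}) via a Kepler solver like the one sketched earlier, these quantities are then added to the barycentric terms of equation (\ref{eqn:reflex}).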
As an example of why these terms are negligible, the parallax of $\epsilon$ Indi A is only changed by about 0.3\,$\mu$as due to the barycentric motion along the line of sight. Thus we only model the astrometric observables $\alpha$, $\delta$, $\mu_\alpha$, and $\mu_\delta$ of Hipparcos and Gaia DR2 to constrain the planetary orbit. The procedure of the astrometry modeling is as follows. \begin{itemize} \item Choose the reference time at the Gaia epoch J2015.5 and determine the initial barycentric astrometry according to \begin{align} \alpha^b(t_0)&=\alpha^{\rm gaia}(t_0)-\frac{\alpha^p(t_0)}{\cos{\delta^{\rm gaia}(t_0)}}-\Delta\alpha~,\\ \delta^b(t_0)&=\delta^{\rm gaia}(t_0)-\delta^p(t_0)-\Delta\delta~,\\ \tilde{\omega}^b(t_0)&=\tilde{\omega}^{\rm gaia}(t_0)~,\\ \mu_\alpha^b(t_0)&=\mu_\alpha^{\rm gaia}(t_0)-\mu_\alpha^{\rm p}(t_0)-\Delta\mu_\alpha~,\\ \mu_\delta^b(t_0)&=\mu_\delta^{\rm gaia}(t_0)-\mu_\delta^{\rm p}(t_0)-\Delta\mu_\delta~,\\ v_r^b(t_0)&=v_r(t_0)~, \end{align} where $\Delta\alpha$, $\Delta\delta$, $\Delta\mu_\alpha$, and $\Delta\mu_\delta$ are the offsets. These offsets are used to account for the reference frame spin rate (about 0.15\,mas/yr according to \cite{lindegren18}), the zero point offset of the parallaxes (about -0.029\,mas according to \cite{lindegren18}), the proper motion offsets related to the light travel time (about 0.6\,mas/yr in the case of $\epsilon$ Indi A; \citealt{kervella19}), and other effects. \item Transform the initial astrometry and RV into the heliocentric state vector $[\bm{r}^b(t_0),\bm{v}^b(t_0)]$, propagate the vector from $t_0$ to $t$ to derive $[\bm{r}^b(t),\bm{v}^b(t)]$ according to equation \ref{eqn:bary_motion}, and transform the state vector back to barycentric astrometry at time $t$. This step accounts for the perspective acceleration. \item Estimate $\alpha(t)$, $\delta(t)$, $\mu_\alpha(t)$, and $\mu_\delta (t)$ by summing the barycentric and reflex astrometry according to equations \ref{eqn:adp}, \ref{eqn:muadp}, and \ref{eqn:reflex}. \end{itemize} To account for unknown noise in the astrometry data, we include a relative jitter, in units of the observational errors, in the likelihood. Thus the likelihood of the combined RV and astrometry model is \begin{align} \mathcal{L}&\equiv P(D|\theta,M)=\prod_k^{N_{\rm set}}\prod_i^{N_k^{\rm rv}}\frac{1}{\sqrt{2\pi(\sigma_i^2+\sigma_{\rm Jk}^2)}}\exp\left\{-\frac{[\hat{v}_r(t_i)-v_r(t_i)]^2}{2(\sigma_i^2+\sigma_{\rm Jk}^2)}\right\}\nonumber\\ &\times(2\pi)^{-\frac{N_{\rm epoch}N_{\rm par}}{2}}\prod_j^{N_{\rm epoch}}({\rm det}\Sigma_j)^{-\frac{1}{2}}{\rm exp}\left\{-\frac{1}{2}[\hat{\bm{\eta}}(t_j)-\bm{\eta}(t_j)]^T\Sigma_j^{-1}[\hat{\bm{\eta}}(t_j)-\bm{\eta}(t_j)]\right\}~, \label{eqn:comblike} \end{align} where $N_{\rm set}$ and $N_{\rm epoch}$ are respectively the number of RV data sets and astrometry epochs, and $N_{\rm par}$ is the number of astrometric observables fitted at each epoch (i.e. the dimension of $\bm{\eta}$). $N_k^{\rm rv}$ is the number of RVs in the $k^{\rm th}$ RV set, $\bm{\eta}\equiv [\alpha,\delta,\mu_\alpha,\mu_\delta]$ is the vector of astrometric data, and $\Sigma_j\equiv \Sigma_{\rm 0j}(1+J_j)$ is the jitter-corrected covariance matrix of $\bm{\eta}$, where $\Sigma_{\rm 0j}$ is the catalog covariance matrix for the $j^{\rm th}$ astrometry epoch and $J_j$ is the so-called ``relative astrometric jitter''. These relative jitters allow the model to account for potential underestimation of the astrometric errors, such as that caused by the so-called DOF bug \citep{lindegren18} and the effects mentioned in \cite{brandt18}. 
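For illustration, a schematic implementation of the likelihood of equation (\ref{eqn:comblike}) is given below, for a single RV data set with white noise; the MA terms, multiple instruments, and the construction of the model vectors are omitted, and all variable names are placeholders:
\begin{verbatim}
import numpy as np

def ln_likelihood(rv_model, rv_obs, rv_err, rv_jitter,
                  eta_model, eta_obs, eta_cov, rel_jitter):
    """Combined RV + astrometry log-likelihood (schematic).

    rv_* are arrays for one RV data set; eta_* are lists over astrometric
    epochs, each eta being the 4-vector (alpha, delta, mu_alpha, mu_delta)."""
    # Gaussian RV term with an additive jitter.
    var = rv_err**2 + rv_jitter**2
    lnl = -0.5 * np.sum((rv_obs - rv_model)**2 / var + np.log(2.0 * np.pi * var))
    # Multivariate Gaussian astrometry term with the jitter-scaled covariance
    # Sigma_j = Sigma_0j * (1 + J_j).
    for model_j, obs_j, cov0_j, J_j in zip(eta_model, eta_obs, eta_cov, rel_jitter):
        cov = cov0_j * (1.0 + J_j)
        r = obs_j - model_j
        _, logdet = np.linalg.slogdet(cov)
        lnl += -0.5 * (r @ np.linalg.solve(cov, r)
                       + logdet + len(r) * np.log(2.0 * np.pi))
    return lnl
\end{verbatim}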
Unlike previous calibrations of the five-parameter solutions of Hipparcos and Gaia DR2 based on statistical analyses \citep{lindegren18,brandt18}, we model the potential underestimation of the uncertainties and the bias as offsets and astrometric jitters, and estimate them a posteriori rather than a priori by taking advantage of the high precision RV data and a combined modeling of the barycentric and reflex motions of $\epsilon$ Indi A. The RV model $\hat{v}_r(t_i)$ is given in Eqn. \ref{eqn:rv1}. Therefore the free parameters in the combined model are $\{m_{pk},a_k,e_k,I_k,\omega_k,\Omega_k,M_{0k}\}$ with $k\in\{1,...,N_p\}$ for $N_p$ planets or companions, $\{\Delta\alpha,\Delta\delta,\Delta\mu_\alpha,\Delta\mu_\delta,J_{\rm gaia},J_{\rm hip}\}$ for the astrometry, $\{b_1,...,b_{N_{\rm set}}\}$ for the offsets and $\{\sigma_1,...,\sigma_{N_{\rm set}}\}$ for the jitters of the $N_{\rm set}$ independent RV sets, and $\{w_1^i,...,w_q^i\}$ and $\tau^i$ for the MA($q$) model of the $i^{\rm th}$ RV data set. The superscript of a given parameter denotes the name of the corresponding data set. The argument of periastron and the semi-major axis of the planetary orbit with respect to the barycenter are $\omega_p=\omega+\pi$ and $a_p=m_s/(m_p+m_s)\,a$, while the other angular parameters are the same as for the stellar reflex motion. The offsets between the Gaia and Hipparcos catalog data are also used by \cite{calissendorff18,brandt18,kervella19} to constrain the dynamical masses of massive companions and to identify potential companion-induced accelerations. In particular, \cite{snellen18} used both the proper motion difference and the Hipparcos epoch data to constrain the mass of $\beta$ Pictoris b. While these studies justify the use of the astrometric difference between Hipparcos and Gaia DR2 to constrain the stellar reflex motion, the constraints are not strong due to the lack of a combined modeling of the RVs and of the differences in both proper motion and position. In particular, the position difference is more sensitive to non-linear stellar motions, which emerge when the orbit is integrated over decades. The aim of this work is to focus on a target with clear-cut RV and astrometric evidence, alongside an appropriate treatment of the Gaia and Hipparcos catalog data in combination with the RV analysis. \subsection{Orbital solution}\label{sec:solution} Based on the MCMC sampling of the posterior, we find the best orbital solution and show the fit to the RV and astrometry data in Fig. \ref{fig:fit2}. In panel (A), it is visually apparent that the one-planet model is strongly favored relative to the zero-planet model ($\Delta\ln{\mathcal{L}}=331$ or ln\,BF=311) and that the curvature in the RV data is significant compared with a linear trend ($\Delta\ln{\mathcal{L}}=51$ or ln\,BF=38). Figs. \ref{fig:fit2} (B) and (C) indicate that the position puts a stronger constraint on the best-fit model of the planetary orbit than the proper motion, and one comparable to that from the highest precision RV data. This arises because the position difference effectively integrates the non-linear proper motion over time, whereas the proper motion difference is less sensitive to non-linear motion owing to the potential bias introduced by the assumption of linear motion in the production of the catalog data. Based on the combined analysis, we report the values of the orbital parameters of $\epsilon$ Indi A b together with the stellar parameters in Table \ref{tab:combined}. 
Note that the stellar mass slightly differs from the value given by \cite{demory09} because we determine the mass using the Gaia luminosity and the mass-luminosity relationship derived by \cite{eker15}. The posterior distribution of these parameters is shown in Fig. \ref{fig:pair}. In particular, we show the posterior distribution of the planetary mass and orbital period in Fig. \ref{fig:mp}. It is apparent that the planetary mass and orbital period are constrained to relatively high precision even though the RV and astrometry data do not cover the whole orbital phase. \begin{table} \centering \caption{Parameters for $\epsilon$ Indi A and A b, taken from Gaia DR2 \citep{brown18} or determined in this work. Except for the magnetic cycle and the stellar age, which are given as ranges, the optimal value of each parameter determined in this work is estimated at the maximum a posteriori and the uncertainty interval is determined by the 10\% and 90\% quantiles of the posterior samples drawn by the MCMC chains.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{l*3{l}} \hline \hline Parameter&Unit&Meaning&Value\\\hline $m_s$&$M_\odot$&Stellar mass & $0.754_{-0.043}^{+0.043}$\\ $L_s$&$L_\odot$&Stellar luminosity& $0.239_{-0.001}^{+0.001}$\\ $\alpha$&degree&Right ascension& 330.87\\ $\delta$&degree&Declination& -56.80\\ $\tilde{\omega}$&mas&Parallax& $274.8_{-0.25}^{+0.25}$\\ $\mu_\alpha$&mas/yr&Proper motion in right ascension& $3967.04_{-0.38}^{+0.38}$\\ $\mu_\delta$&mas/yr&Proper motion in declination&$-2535.76_{-0.41}^{+0.41}$ \\ $v_r$&km/s&Systemic radial velocity&$-40.50_{-0.23}^{+0.23}$ \\ $P_{\rm rot}$&day&Stellar rotation period&$35.732_{-0.003}^{+0.006}$ \\ $P_{\rm mag}$&day&Stellar magnetic cycle&2500$\sim$3000 \\ $T_{\rm age}$&Gyr&Stellar age&3.7$\sim$5.7\\\hline $m_p$&$M_{\rm Jup}$&Planet mass&$3.25_{-0.65}^{+0.39}$\\ $P$&year&Orbital period&$45.20_{-4.77}^{+5.74}$\\ $a$&au&Semi-major axis&$11.55_{-0.86}^{+0.98}$\\ $K$&m/s&RV semi-amplitude&$29.22_{-6.07}^{+5.45}$\\ $e$&&Eccentricity&$0.26_{-0.03}^{+0.07}$\\ $I$&degree&Inclination&$64.25_{-6.09}^{+13.80}$\\ $\Omega$&degree&Longitude of ascending node&$250.20_{-14.84}^{+14.72}$\\ $\omega$&degree&Argument of periastron&$77.83_{-31.51}^{+20.21}$\\ $M_0$&degree&Mean anomaly at reference epoch&$143.8_{-58.75}^{+23.38}$\\ $T_0$&BJD&Reference epoch&2448929.56\\ $T_p$&BJD&Epoch at periastron&$2442332.95_{-3450.17}^{+2353.42}$\\\hline $b^{\rm LC}$&m/s&LC offset& $-16.60_{-5.10}^{+4.33}$\\ $b^{\rm VLC}$&m/s&VLC offset& $-22.97_{-3.84}^{+5.33}$\\ $b^{\rm HARPSpre}$&m/s&HARPSpre offset& $-27.87_{-4.08}^{+5.16}$\\ $b^{\rm HARPSpost}$&m/s&HARPSpost offset& $-14.23_{-5.75}^{+4.07}$\\ $b^{\rm UVES}$&m/s&UVES offset& $-15.04_{-4.09}^{+5.03}$\\ $\sigma^{\rm LC}$&m/s&LC RV jitter& $4.05_{-1.97}^{+3.01}$\\ $\sigma^{\rm VLC}$&m/s&VLC RV jitter& $0.42_{-0.15}^{+2.11}$\\ $\sigma^{\rm HARPSpre}$&m/s&HARPSpre RV jitter& $1.45_{-0.05}^{+0.09}$\\ $\sigma^{\rm HARPSpost}$&m/s&HARPSpost RV jitter& $1.02_{-0.07}^{+0.22}$\\ $\sigma^{\rm UVES}$&m/s&UVES RV jitter& $0.64_{-0.14}^{+0.09}$\\ $w_1^{\rm HARPSpre}$&m/s&Amplitude of MA(2) for HARPSpre&$0.69_{-0.04}^{+0.07}$\\ $w_2^{\rm HARPSpre}$&m/s&Amplitude of MA(2) for HARPSpre&$0.31_{-0.07}^{+0.06}$\\ $\ln{\frac{\tau^{\rm HARPSpre}}{1~{\rm day}}}$&&Logarithmic MA time scale&$0.98_{-0.18}^{+0.24}$\\ $w_1^{\rm HARPSpost}$&m/s&Amplitude of MA(1) for HARPSpost&$0.98_{-0.11}^{+0.00}$\\ $\ln{\frac{\tau^{\rm HARPSpost}}{1~{\rm day}}}$&&Logarithmic MA time scale&$2.16_{-0.41}^{+4.22}$\\ $J_{\rm hip}$&&Hipparcos relative jitter&$0.26_{-0.22}^{+0.08}$\\ 
$J_{\rm gaia}$&&Gaia relative jitter&$0.19_{-0.14}^{+0.21}$\\ $\Delta\alpha$&mas&Offset in $\alpha$ &$-0.99_{-0.09}^{+0.40}$\\ $\Delta\delta$&mas&Offset in $\delta$ &$-0.01_{-0.26}^{+0.25}$\\ $\Delta\mu_\alpha$&mas/yr&Offset in $\mu_\alpha$ &$0.92_{-0.20}^{+0.29}$\\ $\Delta\mu_\delta$&mas/yr&Offset in $\mu_\delta$ &$0.34_{-0.30}^{+0.54}$\\ \hline \end{tabular} \renewcommand{\arraystretch}{1} \label{tab:combined} \end{table} \begin{figure} \centering \includegraphics[scale=1]{combined_fit.pdf} \caption{Observed RV data (A) and astrometric data in terms of proper motion (B) and position (C) compared to the best-fit model predictions. The offsets due to linear motion are subtracted from the data. (A) RV data of $\epsilon$ Indi A compared to the best-fit model parameters (maximum a posteriori), represented as a red line with values shown in the legend. $J_{\rm gaia}$ and $J_{\rm hip}$ are respectively the relative excess noise in the Gaia and Hipparcos astrometry data. (B) The Hipparcos and Gaia DR2 proper motions are respectively denoted by the dark and light gray ellipses, each with a black dot in the center. The red line denotes the same ``best'' model prediction of the proper motion shown in (A). The ellipses show the 90\% jitter-corrected confidence level of the astrometry. (C) The relative Hipparcos position at BJD2448349.0625 and Gaia DR2 position at BJD2457206.375 are shown along with the red line showing the best model prediction of the position. The barycentric motion of $\epsilon$ Indi A is subtracted from the data in order to show its reflex motion. } \label{fig:fit2} \end{figure} Based on the above analysis, $\epsilon$ Indi A b is a cold Jovian exoplanet with one of the longest orbital periods among the exoplanets detected through the radial velocity and transit methods. With components A, Ab, Ba and Bb, $\epsilon$ Indi provides a benchmark system for the formation of gas giants and brown dwarfs. $\epsilon$ Indi A b is also a perfect target for direct imaging. The non-detection in previous direct imaging campaigns suggests a very low temperature \citep{janson09}. Based on the new combined constraint in this work, the orbit of $\epsilon$ Indi A b with respect to $\epsilon$ Indi A is shown in Fig. \ref{fig:ei}. \begin{figure*} \begin{center} \includegraphics[scale=0.6]{EI_orbit.pdf} \caption{Orbit of $\epsilon$ Indi A b relative to its host star projected onto the sky plane. The positions at different epochs are given as references for direct imaging. The separation between the planet and the star is about 1.1\,as in 2020. The light blue cross indicates the average epoch of the direct imaging data used by \protect\cite{janson09}. The North (N) and East (E) directions are shown by grey arrows. The black cross indicates the location of the star. Note that the orbit is shown in offset coordinates and $\Delta\alpha^*\equiv\Delta\alpha\cos{\delta}$. } \label{fig:ei} \end{center} \end{figure*} It is apparent that the current separation of the planet from the star is optimal for direct imaging. The separation will increase from 1.1\,as in 2020 to 3.3\,as in 2030, thus providing an excellent opportunity for imaging a nearby cold Jupiter with JWST \citep{krist07} or WFIRST \citep{melchior17}. Although the separation between $\epsilon$ Indi A b and its host star at the epochs of the previous direct imaging \citep{janson09} is wider than the one in 2020, our new constraints on the orbital parameters and mass of the planet can be used to optimize the observation strategy. 
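The expected astrometric signal can be checked with a back-of-the-envelope estimate (a rough sketch based on the maximum a posteriori values in Table \ref{tab:combined}; the Jupiter-to-solar mass ratio is the standard value):
\begin{verbatim}
M_JUP_IN_MSUN = 9.55e-4            # Jupiter mass in solar masses

m_p = 3.25 * M_JUP_IN_MSUN         # planet mass (Msun)
m_s = 0.754                        # stellar mass (Msun)
a = 11.55                          # planetary semi-major axis (au)
plx = 0.2748                       # parallax (arcsec)

a_star_au = m_p / (m_p + m_s) * a          # stellar reflex semi-major axis (au)
a_star_mas = a_star_au * plx * 1e3         # angular reflex amplitude (mas)
print(a_star_mas)                          # ~13 mas
# Only ~1/9 of the 45 yr orbit is covered during Gaia's ~5 yr mission, so the
# non-linear part of the stellar motion is at the ~1 mas level, well above the
# ~20 micro-arcsecond astrometric precision quoted in the text.
\end{verbatim}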
Moreover, $\epsilon$ Indi A b would change its host star's position by about 1\,mas during Gaia's five-year mission and thus is detectable by Gaia with its $\sim$20\,$\mu$as astrometric precision \citep{perryman14}. \subsection{Dynamical stability of $\epsilon$ Indi A b} With one Jovian planet and a binary brown dwarf system, the $\epsilon$ Indi system provides a benchmark case to test theories on the formation of giant planets and brown dwarfs. The age of the system is about 0.8$\sim$2.0\,Gyr based on the literature rotation period of 22\,d estimated from Ca II measurements. The kinematics of the system instead suggest an older age of $>$9.87\,Gyr \citep{eker15}. Our analysis of the HARPSpre data set gives a rotation period of 35.7\,d, leading to an age of 3.7$\sim$5.7\,Gyr based on the rotation-age relationship given by \cite{eker15}, which is more consistent with the age of 3.7$\sim$4.3\,Gyr estimated by \cite{scholz03} from an evolutionary model of the brown dwarf binary Ba and Bb. Thus we conclude that the age of the system is about 4\,Gyr. However, stars less massive than the Sun are unlikely to form more than one giant planet according to \cite{kennedy07}. To investigate whether Ba and Bb were captured by $\epsilon$ Indi A either in or outside of its birth environment, we calculate the escape radius derived by \cite{feng18}, based on simulations of the perturbations of stellar encounters on wide binaries and an encounter rate of 80\,Myr$^{-1}$ for encounters with periapsis less than 1\,pc. The escape radius of $\epsilon$ Indi A is about 5600\,au, which is larger than the projected separation (1459\,au) between $\epsilon$ Indi B and A \citep{scholz03}. However, if $\epsilon$ Indi A migrated outward to its current location or captured $\epsilon$ Indi B during its formation, the encounter rate would be much higher and the escape radius could lie within the orbit of $\epsilon$ Indi B, which may be much larger than the projected separation due to geometric effects. On the other hand, the brown dwarf binary may have been on a tighter orbit around the primary during the early evolution of the system and have migrated to its current orbit due to perturbations from the Galactic tide and stellar encounters. Considering these possible dynamical histories, and that the components of the $\epsilon$ Indi system are likely to have the same age, we conclude that the system probably formed together and that the brown dwarf binary has migrated from a tighter orbit to its current wide orbit. Such a migration would significantly influence the habitability of the system through periodic perturbations of the binary on any potentially habitable planets in the system. \subsection{Comparing $\epsilon$ Indi A b with known exoplanets}\label{sec:comparing} The discovery of the nearby Jupiter analog $\epsilon$ Indi A b is a milestone for studies of the formation and evolution of Jupiter analogs. In Fig. \ref{fig:epsb}, we show the mass and period of $\epsilon$ Indi A b compared with known exoplanets and the Solar System planets. We see that a few Jupiter-like planets (with 0.5$\sim$5 Jupiter masses) have been detected previously by the RV method, though in these cases only minimum masses are provided. So far none of them have been confirmed or characterized by other methods. Although direct imaging is able to detect super-Jupiters and brown dwarfs around young stars, it is not able to probe the region below 8 Jupiter masses, especially for evolved systems where Jupiter-like planets become very faint. 
Given the limitations of various methods in detecting wide-orbit gas giants, the detection of $\epsilon$ Indi A b by two methods illustrates how a good constraint (e.g., Fig. \ref{fig:mp}) can be set on a wide-orbit cold gas giant in an unpopulated region of exoplanet phase space by using astrometry and RV together. \begin{figure*} \begin{center} \includegraphics[scale=1]{epsIndib.pdf} \caption{Scatter plot of orbital period and planetary mass for confirmed exoplanets, Solar System planets, and $\epsilon$ Indi A b. The Solar System planets are denoted by their names. The exoplanet data are taken from the NASA Exoplanet Archive and are color-coded by detection method.} \label{fig:epsb} \end{center} \end{figure*} \section{Discussion and conclusion}\label{sec:conclusion} We analyze the RV and astrometry data of $\epsilon$ Indi A in the {\small Agatha} framework in combination with Bayesian methods. We confirm the suspected planet $\epsilon$ Indi A b to be a cold Jovian planet on an 11.55\,au-wide orbit with an orbital period of 45.20\,yr, making it one of the longest-period planets detected by the RV method. $\epsilon$ Indi A b is only 3.62\,pc away from the Sun and is the closest Jovian exoplanet to the Earth. Given its proximity to the Sun, $\epsilon$ Indi A b is separated from its host star by as much as 3.5\,$''$ and thus can be observed through direct imaging and astrometry, for example by JWST and Gaia \citep{krist07,perryman14}, though based on the \cite{baraffe03} isochrones it is likely to be too faint for current ground-based imaging systems (e.g., \citealt{janson09}). We also diagnose the other signals by comparing the BFPs for various RV data sets, noise models, and noise proxies. We find three signals at periods of about 11, 18 and 278\,d in the RV data of $\epsilon$ Indi A. These signals are significant and can be constrained by Bayesian posterior samplings. Nevertheless, they are unlikely to be Keplerian because significant power around these signals is found in the periodograms of various noise proxies, especially in the sodium lines. Based on the activity diagnostics, we conclude that all these signals, together with the 2500\,d signal seen in many noise proxies, are caused by stellar activity. In particular, the 2500 and 278\,d signals correspond to the magnetic cycles of $\epsilon$ Indi A, while the 36, 18, and 11\,d signals are related to the stellar rotation. We find that the correlation between the RVs and the activity indicators depends on the activity time scales. Thus including these indicators in the fit would remove some activity-induced noise but also introduce activity-induced signals. The lack of signals with $K>1$\,m/s and $P<4000$\,days suggests a lack of super-Earths or mini-Neptunes in the habitable zone of $\epsilon$ Indi A, which extends from 0.47 to 0.86\,au according to \citet{kopparapu14}. Hence only Earth-like or smaller planets are allowed in the habitable zone. If such rocky planets are detected, the $\epsilon$ Indi A system would be similar to the Solar System, with close-in rocky planets and longer-period gas giants. We also investigate the dynamical stability of this system according to the metrics for the stability of wide binaries under the perturbations of stellar encounters and the Galactic tide \citep{feng18}. We find a considerable possibility that the brown dwarf binary in this system, $\epsilon$ Indi Ba and Bb, is unstable on its current wide orbit. 
Hence the brown dwarf binary may have migrated from a tighter orbit to its current wide orbit, which might significantly influence the habitability of any potentially Earth-like planets in the system. Our successful detection and characterization of $\epsilon$ Indi A b through combined RV and astrometry analysis provides a benchmark example for the use of the astrometric difference between Gaia and Hipparcos data in characterizing massive planets. We find that the position offsets are more sensitive to nonlinear motion than the proper motion offsets. Although there are only two astrometry epochs, each epoch corresponds to four independent data points. Thus we are able to constrain the orbital parameters to which the RV data are not sensitive. We eagerly anticipate the joint application of astrometry and RV methods to exoplanet detection and characterization using the epoch data in future Gaia data releases. $\epsilon$ Indi A b is optimal for direct imaging, with a current separation from its host of about 1\,as. The separation will increase to 3.3\,as in the coming decade, making it the nearest Jupiter analog accessible to direct imaging by JWST or WFIRST. \section*{Acknowledgements} We used the ESO Science Archive Facility to collect radial velocity data. We are very grateful to the referee for comments which significantly improved the presentation of the results and the quality of the manuscript.
\section{Introduction} \label{intro} The identification of the optical counterparts of the large number of sources detected in recent radio surveys at 1.4 GHz (e.g., \citealt{best12}) revealed that the majority of the radio sources associated with low redshift galaxies show compact emission, with sizes $\lesssim$ 10 kpc \citep{baldi09}. This population of compact objects was poorly represented in earlier surveys (performed at lower frequencies and with higher flux density thresholds), which were instead dominated by sources extending over scales of hundreds of kpc (e.g., \citealt{hardcastle98}). Due to the lack of extended radio emission, these ``compact'' sources were named ``FR~0s'' \citep{ghisellini11,sadler14,baldi15}, a convenient way to include them in the canonical \citet{fanaroff74} classification scheme of radio galaxies (RGs). The information available from observations of FR~0s is generally very limited, even in the radio band. As a consequence, it is still unclear what the nature of these sources is, and how they are related to the other classes of RGs. \citet{baldi18} selected a sample of compact radio sources named \FRo, in order to perform a systematic study of FR~0s. \FRo\ consists of 104 compact radio sources with redshift $\leq 0.05$ selected by combining observations from the National Radio Astronomy Observatory Very Large Array Sky Survey (NVSS; \citealt{condon98}), the Faint Images of the Radio Sky at Twenty centimeters survey (FIRST; \citealt{becker95,helfand15}), and the Sloan Digital Sky Survey (SDSS; \citealt{york00b}). In the catalog, \citeauthor{baldi18} included the sources brighter than 5 mJy and with a limit of 4$\arcsec$ on the deconvolved angular size in the FIRST images, corresponding to a linear size of $\lesssim$ 5 kpc. Their radio luminosities at 1.4 GHz are in the $10^{22} \lesssim L_{1.4} \lesssim 10^{24} \WHz$ range. \citet{baldi19} obtained high-resolution multi-frequency radio images of a sub-sample of 18 FR~0s randomly extracted from \FRo. Although the observations reach an angular resolution of $\sim$0\farcs3 (corresponding to $\sim 250$ pc at the median redshift of these sources, $z=$0.04), 14 of the FR~0s are still unresolved, while the remaining four extend over only a few kpc. These observations confirm the general lack of extended radio emission and the high core dominance of FR~0s when compared to the FR~Is of the 3C sample: in FR~Is the fraction of nuclear emission is typically $10^{-2}$ \citep{baldi19}. The origin of the different nature of FR~0s with respect to the extended RGs remains to be understood. While the appearance in the radio images of FR~0s and FR~Is is radically different, the nuclear and host galaxies' properties of these two classes are very similar \citep{baldi18,torresi18}. A scenario in which FR~0s are young RGs that will all eventually evolve into extended radio sources cannot be reconciled with the large space density of FR~0s, which are five times more abundant than FR~Is. FR~0s might instead be recurrent sources, characterized by short phases of activity. \citet{baldi15} suggested that the jet properties of FR~0s might be intrinsically different from those of the FR~Is, with, for example, the former class having lower bulk Lorentz factors. In this framework, low-frequency radio observations of FR~0s might play an important role, as they can be used to address the following questions: (1) Do FR~0s show low-frequency extended emission? 
Compactness is the main defining characteristic of FR~0s and one possibility to account for this property is that they are recurrent sources. In this case, we could expect to detect remnant emission from a previous cycle of activity. This is typically characterized by a very steep spectrum, and is therefore best observable at MHz-frequencies. (2) What is the low-frequency spectral shape of FR~0s? Observations at high resolution, required to spatially isolate any small-scale extended emission, are only available for a minority of FR~0s. The spectral index information can be used to infer the fraction of optically thin, hence extended, emission present in FR~0s overcoming the limited spatial resolution. (3) What is the fraction of young sources among the FR~0s? Such objects can be found by looking for the characteristic signature of young radio galaxies, meaning, their low-frequency spectral cut-off due to either synchrotron self-absorption \citep{kellermann66,hodges84} or free-free absorption \citep{kellermann66,bicknell97}. The 150 MHz continuum survey performed with the Giant Metrewave Radio Telescope (GMRT; \citealt{swarup91,intema17}) named TGSS (Tata Institute of Fundamental Research GMRT Sky Survey) offers the first possibility to gather low-frequency data with the combination of sensitivity and spatial resolution required for the study of FR~0s. In particular, we will focus on the TGSS observations of the \FRo\ sample. The paper is organized as follows. In Sect. \ref{sample}, we list the data available from the TGSS observations of the \FRo\ sources. In Sect.~\ref{results}, we present our results, which are then discussed in Sect.~\ref{discussion}. In Sect.~\ref{summary}, we summarize the results and draw our conclusions. \begin{table*} \caption{Radio properties of the sample} \begin{tabular}{c r r r r r | c r r r r r} \hline SDSS~name & {\small F(150)} &{\small F(1.4) } & {\small F(5)} & $\alpha_1$ & $\alpha_2$ & SDSS~name & {\small F(150)} &{\small F(1.4)} & {\small F(5)} & $\alpha_1$ & $\alpha_2$ \\ \hline 010852.48$-$003919.4 & --- & 10.9 & --- &$<$ 0.22 & --- & 123011.85+470022.7 & 128.8 & 93.8 & 73 & 0.14 & 0.20\\ 011204.61$-$001442.4 & --- & 17.9 & --- &$<$ 0.00 & --- & 124318.73+033300.6 & 304.0 & 63.5 & $<$30 & 0.70 & $>$ 0.60\\ 011515.78+001248.4 & --- & 42.6 & $<$33 &$<$-0.40 &$>$ 0.21 & 124633.75+115347.8 & 36.1 & 61.2 & 40 & -0.24 & 0.34\\ 015127.10$-$083019.3 & 60.8 & 35.7 & --- & 0.23 & --- & 125027.42+001345.6 & --- & 54.5 & 82 & $<$-0.51 & -0.33\\ 020835.81$-$083754.8 & --- & 28.4 & --- &$<$-0.20 & ---& 125409.12$-$011527.1 & --- & 7.7 & --- & $<$ 0.37 & --- \\ 075354.98+130916.5 & --- & 7.4 & $<$23 &$<$ 0.39 &$>$-0.92 & 130404.99+075428.4 & --- & 10.5 & $<$26 & $<$ 0.23 & $>$-0.74\\ 080716.58+145703.3 & 43.2 & 28.4 & 28* & 0.19 & 0.01 & 130837.91+434415.1 & 107.4 & 58.4 & 47 & 0.27 & 0.17\\ 083158.49+562052.3 & 26.1$^a$ & 9.0 & $<$18 & 0.48 &$>$-0.56 & 133042.51+323249.0 & --- & 17.9 & $<$19 & $<$-0.01 & $>$-0.04\\ 083511.98+051829.2 & --- & 10.1 & $<$29 &$<$ 0.24 &$>$-0.84 & 133455.94+134431.7 & 41.3 & 39.4 & $<$23 & 0.02 & $>$ 0.43\\ 084102.73+595610.5 & --- & 8.9 & $<$18 &$<$ 0.30 &$>$-0.57 & 133621.18+031951.0 & --- & 30.4 & $<$30 & $<$-0.25 & $>$ 0.01\\ 084701.88+100106.6 & 22.4$^a$ & 23.7 & $<$25 & -0.03 &$>$-0.04 & 133737.49+155820.0 & 41.6 & 26.9 & $<$22 & 0.20 & $>$ 0.16\\ 090652.79+412429.7 & --- & 51.8 & 65 &$<$-0.49 & -0.18 & 134159.72+294653.5 & --- & 10.4 & $<$19 & $<$ 0.23 & $>$-0.49\\ 090734.91+325722.9 & {\small no data} & 46.9 & $<$19 & --- 
&$>$-0.76 & 135036.01+334217.3 & 302.3 & 101.3 & 79 & 0.49 & 0.20\\ 090937.44+192808.2 & 168.2 & 69.1 & 106 & 0.40 & -0.34 & 135226.71+140528.5 & --- & 25.5 & $<$23 & $<$-0.17 & $>$ 0.09\\ 091039.92+184147.6 & 168.8 & 50.0 & 47* & 0.54 & 0.05 & 140528.32+304602.0 & --- & 7.4 & $<$19 & $<$ 0.39 & $>$-0.76\\ 091601.78+173523.3 & 94.8 & 24.5 & $<$21 & 0.61 &$>$ 0.12 & 141451.35+030751.2 & --- & 26.7 & 85 & $<$-0.19 & -0.93\\ 091754.25+133145.5 & 16.8 & 22.9 & $<$23 & -0.14 &$>$-0.01 & 141517.98$-$022641.0 & --- & 18.9 & --- & & $>$-0.02\\ 093003.56+341325.3 & 81.5 & 33.1 & $<$19 & 0.40 &$>$ 0.46 & 142724.23+372817.0 & --- & 10.8$^b$ & $<$18 & $<$ 0.22 & $>$-0.42\\ 093346.08+100909.0 & 99.7 & 56.6 & 80 & 0.25 & -0.28 & 143156.59+164615.4 & --- & 8.7 & $<$21 & $<$ 0.31 & $>$-0.73\\ 093938.62+385358.6 & --- & 6.1 & $<$18 &$<$ 0.47 &$>$-0.88 & 143312.96+525747.3 & --- & 15.6 & $<$18 & $<$ 0.05 & $>$-0.12\\ 094319.15+361452.1 & 35.4 & 75.1 & 81 & -0.34 & -0.06 & 143424.79+024756.2 & --- & 7.3 & $<$31 & $<$ 0.39 & $>$-1.15\\ 100549.83+003800.0 & --- & 24.1 & $<$32 &$<$-0.14 &$>$-0.24 & 143620.38+051951.5 & --- & 18.7 & $<$29 & $<$-0.03 & $>$-0.34\\ 101329.65+075415.6 & --- & 7.8 & $<$26 &$<$ 0.36 &$>$-0.98 & 144745.52+132032.2 & --- & 6.7 & $<$23 & $<$ 0.43 & $>$-1.00\\ 101806.67+000559.7 & --- & 14.3 & $<$33 &$<$ 0.09 &$>$-0.67 & 145216.49+121711.5 & --- & 8.0 & $<$24 & $<$ 0.35 & $>$-0.87\\ 102403.28+420629.8 & --- & 6.0 & $<$18 &$<$ 0.48 &$>$-0.88 & 145243.25+165413.4 & 17.9$^a$& 17.5 & $<$21 & 0.01 & $>$-0.17\\ 102511.50+171519.9 & --- & 10.2 & $<$21 &$<$ 0.24 &$>$-0.59 & 145616.20+203120.6 & 68.3 & 25.8 & $<$20 & 0.44 & $>$ 0.21\\ 102544.22+102230.4 & 90.4 & 76.7 & 60 & 0.07 & 0.20 & 150152.30+174228.2 & 18.6$^a$& 18.6 & $<$21 & 0.14 & $>$-0.10\\ 103719.33+433515.3 & 260.9 & 132.2 & 66 & 0.30 & 0.56 & 150425.68+074929.7 & --- & 7.8 & $<$27 & $<$ 0.36 & $>$-0.99\\ 103952.47+205049.3 & --- & 6.9 & $<$20 &$<$ 0.42 &$>$-0.85 & 150601.89+084723.2 & --- & 8.3 & $<$26 & $<$ 0.33 & $>$-0.91\\ 104028.37+091057.1 & 128.5 & 68.5 & 31 & 0.28 & 0.64 & 150636.57+092618.3 & 27.8$^a$& 27.8 & $<$25 & 0.03 & $>$ 0.08\\ 104403.68+435412.0 & 23.4 & 32.4 & 28 & -0.15 & 0.12 & 150808.25+265457.6 & --- & 20.3 & $<$19 & $<$-0.07 & $>$ 0.04\\ 104811.90+045954.8 & 175.3 & 49.1 & $<$29 & 0.57 &$>$ 0.43 & 152010.94+254319.3 & --- & 18.3 & $<$19 & $<$-0.02 & $>$-0.05\\ 104852.92+480314.8 & 63.7 & 19.2 & $<$18 & 0.54 &$>$ 0.05 & 152151.85+074231.7 & --- & 11.7 & $<$27 & $<$ 0.18 & $>$-0.66\\ 105731.16+405646.1 & 33.2 & 44.8 & 31 & -0.13 & 0.30 & 153016.15+270551.0 & --- & 13.3 & $<$19 & $<$ 0.12 & $>$-0.30\\ 111113.18+284147.0 & 56.4 & 41.1 & 56* & 0.14 & -0.25 & 154147.28+453321.7 & --- & 8.9 & $<$18 & $<$ 0.30 & $>$-0.57\\ 111622.70+291508.2 & --- & 71.5 & $<$19 &$<$-0.63 &$>$ 1.06 & 154426.93+470024.2 & 46.5 & 17.6 & $<$18 & 0.43 & $>$-0.02\\ 111700.10+323550.9 & --- & 17.6 & $<$19 &$<$-0.00 &$>$-0.05 & 154451.23+433050.6 & --- & 11.5 & $<$18 & $<$ 0.19 & $>$-0.36\\ 112029.23+040742.1 & --- & 7.5 & $<$30 &$<$ 0.38 &$>$-1.10 & 155951.61+255626.3 & 67.7$^a$& 29.2$^b$ & $<$19 & 0.38 & $>$ 0.33\\ 112256.47+340641.3 & 46.5 & 16.6 & $<$19 & 0.46 &$>$-0.09 & 155953.99+444232.4 & 188.5 & 59.5 & $<$18 & 0.52 & $>$ 0.96\\ 112625.19+520503.5 & --- & 9.0 & $<$18 &$<$ 0.30 &$>$-0.56 & 160426.51+174431.1 & 45.7 & 96.0 & 143 & -0.33 & -0.32\\ 112727.52+400409.4 & 43.0 & 13.8 & $<$18 & 0.51 &$>$-0.21 & 160523.84+143851.6 & --- & 8.6 & $<$23 & $<$ 0.32 & $>$-0.78\\ 113449.29+490439.4 & 107.0 & 33.0 & $<$18 & 0.53 &$>$ 0.49 & 
160641.83+084436.8 & --- & 9.3 & $<$26 & $<$ 0.28 & $>$-0.82\\ 113637.14+510008.5 & --- & 9.0 & $<$18 &$<$ 0.30 &$>$-0.56 & 161238.84+293836.9 & --- & 27.4 & $<$19 & $<$-0.20 & $>$ 0.29\\ 114230.94$-$021505.3 & --- & 8.8 & --- &$<$ 0.31 & --- & 161256.85+095201.5 & 52.6 & 21.7 & $<$25 & 0.40 & $>$-0.11\\ 114232.84+262919.9 & --- & 42.0 & 51 &$<$-0.39 & -0.16 & 162146.06+254914.4 & --- & 9.1 & $<$19 & $<$ 0.29 & $>$-0.61\\ 114804.60+372638.0 & --- & 29.1 & 24 &$<$-0.23 & 0.16 & 162846.13+252940.9 & --- & 25.2 & $<$19 & $<$-0.16 & $>$ 0.21\\ 115531.39+545200.4 & --- & 31.2 & $<$18 &$<$-0.26 &$>$ 0.44 & 162944.98+404841.6 & --- & 7.7 & $<$18 & $<$ 0.37 & $>$-0.68\\ 120551.46+203119.0 & 92.6 & 89.9 & 52 & 0.01 & 0.44 & 164925.86+360321.3 & --- & 11.9 & $<$18 & $<$ 0.17 & $>$-0.35\\ 120607.81+400902.6 & --- & 9.5 & $<$18 &$<$ 0.27 &$>$-0.51 & 165830.05+252324.9 & --- & 13.1 & $<$19 & $<$ 0.13 & $>$-0.32\\ 121329.27+504429.4 & 178.1 & 96.5 & $<$18 & 0.27 &$>$ 1.35 & 170358.49+241039.5 & 28.2 & 32.7 & $<$20 & -0.07 & $>$ 0.41\\ 121951.65+282521.3 & --- & 8.7 & $<$19 &$<$ 0.31 &$>$-0.64 & 171522.97+572440.2 & 71.6 & 57.2 & 35 & 0.10 & 0.40\\ 122421.31+600641.2 & --- & 6.1 & $<$18 &$<$ 0.47 &$>$-0.87 & 172215.41+304239.8 & 48.2 & 8.1 & $<$19 & 0.80 & $>$-0.68\\ \hline \end{tabular} \label{tab1} \small{Column description: (1) name; (2 - 4) flux densities (in mJy) at 0.15, 1.4, and 5 GHz, respectively. (5 - 6) spectral indices between 0.15 and 1.4 GHz ($\alpha_1$) and 1.4 and 5 GHz ($\alpha_2$). The sources marked with $^a$ are those not present in the TGSS catalog, and whose flux density was measured from the images, while those marked with ``---'' are not detected at 150 MHz with a threshold of 17.5 mJy. For the sources marked with $^b,$ we used the FIRST measurement instead of the NVSS one, due to the contamination of a nearby source. Sources outside the GB6 survey coverage are indicated with ``---'' in the last column, while the three FR~0s whose five GHz measurements are potentially contaminated by nearby sources are marked with an asterisk.} \end{table*} \begin{figure*} \includegraphics[scale=1.00]{tgss2.ps} \caption{Comparison of flux densities of FR0CAT sources at 150 MHz and 1.4 GHz from TGSS and NVSS, respectively. The dotted lines represent loci of constant spectral indices $\alpha$ (defined as $F_{\nu}\propto\,\nu^{-\alpha}$) at the values indicated. The blue, left pointing arrows represent the upper limits in the TGSS.} \label{tgss} \end{figure*} \section{The TGSS data} \label{sample} The low-frequency data for the FR~0s are taken from the TGSS. \citet{intema17} produced a first alternative data release (TGSS ADR1) obtained through independent re-processing of the TGSS data. The TGSS ADR1 covers the declination range -53$^\circ$ $<$ $\delta$ $<$ +90$^\circ$ with images at 150 MHz, at a resolution of $\sim$25\arcsec\ corresponding to a linear scale between 10 and 25 kpc at the redshift of the sources considered. The median rms noise level of the TGSS ADR1 images is 3.5 mJy beam$^{-1}$. The catalog contains all sources above the 7$\sigma$ (24.5 mJy) threshold. The absolute astrometry accuracy of the catalog is better than 2\arcsec. All but one (namely J0907+35) of the 104 sources of the FR0{\sl CAT} sample are covered by the TGSS. By adopting a search radius of 5\arcsec, we found an association for 37 out of the 104 \FRo\ sources in the TGSS ADR1 catalog. 
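The catalog association described above is a standard nearest-neighbor sky cross-match within a 5\arcsec\ radius; a minimal illustrative sketch using astropy (the file names and column names are placeholders) is:
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

# Placeholder file names; any tables with RA/Dec columns in degrees will do.
fr0 = Table.read("fr0cat.fits")
tgss = Table.read("tgss_adr1.fits")

fr0_pos = SkyCoord(fr0["RA"], fr0["DEC"], unit="deg")
tgss_pos = SkyCoord(tgss["RA"], tgss["DEC"], unit="deg")

# Nearest TGSS source for every FR 0, keeping matches within 5 arcsec.
idx, sep2d, _ = fr0_pos.match_to_catalog_sky(tgss_pos)
matched = sep2d < 5.0 * u.arcsec
print(matched.sum(), "FR 0s associated with a TGSS ADR1 source")
\end{verbatim}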
However, the a priori knowledge of the optical position of the FR~0s enables us to safely use a less stringent limit, which we set at 5$\sigma$ (17.5 mJy). With this strategy, we recover five additional FR~0 detections. The visual inspection of the TGSS fields of all FR~0 sources also revealed an association for J1559+25, not listed in the catalog, likely because of the presence of a confusing bright (180 mJy at 1.4 GHz) nearby (29\arcsec) object. We measured a TGSS flux density for J1559+25 of 67.7 mJy. The total number of FR~0s with a 150 MHz flux density measurement is then 43. In Table \ref{tab1}, we list the radio data used for our analysis. To study the spectral properties of the FR~0s, we also used data from the Green Bank 6-cm survey (GB6, \citealt{gregory96}) and the Westerbork Northern Sky Survey at 327 MHz (WENSS, \citealt{rengelink97}). GB6 covers the Northern sky up to $\delta = 75^\circ$, and it includes the positions of all but seven \FRo\ sources. The catalog threshold is generally 18 mJy, but it varies with position within the survey area, being higher at lower declinations (see Tab. \ref{tab1}). The WENSS covers the sky North of $\delta = 30^\circ$, including 25 \FRo\ sources, with a limiting flux density of 18 mJy, and a resolution of 54\arcsec$\times$54\arcsec cosec$\delta$. Due to the large beams of the TGSS, NVSS, WENSS, and GB6 (with resolutions of 25\arcsec, 45\arcsec, 54\arcsec, and 3\arcmin, respectively), there is the possibility of contamination from sources located at small distances from the FR~0s. We inspected their higher resolution FIRST images to address this issue. Concerning the possible contamination of the first three surveys, we considered an area with a radius of 45$\arcsec$ (the NVSS beam size) centered on the targeted FR~0: we found that this includes a source present in the FIRST catalog in only two cases: J1427+37 and J1559+25. J1427+37 is not detected at either 150 MHz or 5 GHz. We already identified the confusing source in the TGSS image for J1559+25 (which is not visible in GB6). For these two objects, we used the FIRST flux density value at 1.4 GHz instead of the NVSS value. To assess any possible contamination of the GB6 measurements, we only considered the 23 FR~0s detected by this survey. For eight of them, there is no FIRST source within 3$^\prime$. In 12 cases, the sources within this radius have a flux density at 1.4 GHz between 3\% and 10\% of the FR~0 considered: even if these sources have an inverted spectrum, they do not produce a significant contribution ($>$25\%) to the measured 5 GHz flux density. In the remaining three cases (namely, J0807+14, J0910+18, and J1111+28), there are nearby sources with flux densities between 30\% and 90\% of the target of interest. We marked them with an asterisk in Tab. \ref{tab1} to highlight the possible contamination. \begin{figure*} \includegraphics[scale=0.45]{alfa.ps} \includegraphics[scale=0.45]{alfa30.ps} \caption{Left: comparison of the spectral indices measured between 150 MHz and 1.4 GHz and between 1.4 and 5 GHz. The double red arrows indicate objects only detected at 1.4 GHz; the blue arrows indicate those detected at 1.4 and 5 GHz, but not at 150 MHz; the black arrows indicate those detected at 150 MHz and 1.4 GHz, but not at 5 GHz. The solid lines separate flat ($\alpha <0.5$) and steep sources. Sources located above the dotted line (marking an equal value of the two spectral indices) are objects with a convex spectrum. 
The green cross on the upper left corner represents the maximum error, corresponding to the sources with flux density equal to $5 \times \sigma$ in both TGSS and GB6. The cyan broken lines reproduce the typical spectral shape in different regions of the diagram. Right: same as left panel but limited to 19 sources with $F_{\rm 1.4} > 50$ mJy.} \label{three} \end{figure*} \section{Results} \label{results} The FR~0s detected in the TGSS images (with just one exception) show a single unresolved radio component confirming the lack of large scale extended emission characteristic of these sources seen in both the FIRST and the NVSS. The only exception is J1521+07, also known as 3C~318.1, located at the center of the MKW03S cluster: an extended radio source with an extremely steep spectrum ($\alpha$=2.42 between 235 and 1280 MHz)\footnote{Spectral indices $\alpha$ are defined as $F_{\nu}\propto\,\nu^{-\alpha}$.} and a morphology dominated by a peculiar concave arc-like structure (i.e. with its center of curvature located on the opposite side of the J1521+07) is seen $\sim$ 40 kpc toward the south \citep{giacintucci07}, a structure possibly not associated with the active nucleus. We can estimate the limit on the flux density of any extended emission around the FR~0s by considering a fiducial area of 100 kpc $\times$ 100 kpc. At the median redshift of the \FRo\ sample of 0.037, this corresponds to $\sim 135\arcsec \times 135\arcsec$. The distribution of GMRT antenna baselines is such that the array at 150 MHz is sensitive to extended emission on scales smaller than 68$^\prime$ \citep{intema17}, much larger than the angular sizes we are interested in. We measured the flux density over $\sim 135\arcsec \times 135\arcsec$ in several regions of the FR~0 images. We found a median value of five mJy with a dispersion of 39 mJy, corresponding to a 3$\sigma$ upper limit of $\sim$120 mJy. In Fig. \ref{tgss}, we compare the flux densities of the FR0CAT sources at 150 MHz and 1.4 GHz from the TGSS and NVSS, respectively. Since the FR~0s are compact objects, we do not expect the different resolution of the two surveys to significantly affect our results. Conversely, variability is an important issue, particularly because we are dealing with compact sources and considering that $\sim$15 years separate the TGSS and NVSS observations. Therefore, the results of this comparison for individual objects should be taken with some caution. The resulting spectral indices are steeper than $\alpha > 0.5$ only in eight sources, meaning, the vast majority (92\%) of the FR~0s show a flat ($0 < \alpha <0.5$) or inverted ($\alpha <0$) low-frequency spectrum. This conclusion applies not only to the sources detected by the TGSS, but also to those with a 150 MHz upper limit, because they all correspond to slopes flatter than $\alpha < 0.47$. Several FR~0s show an inverted spectrum: this is the case of eight of the TGSS detected sources, to which we add the 21 undetected objects with an upper limit $\alpha < 0$ to their spectral indices. The fraction of FR~0s with inverted spectrum is then at least 28\%, but it can be as high as 66\%, depending on the slope of the remaining 39 FR~0s, undetected in the TGSS, which are all consistent with a negative value of $\alpha$. \begin{figure*} \includegraphics[scale=0.99]{sed2.epsi} \caption{Radio spectra of the 19 FR~0s with a 1.4 GHz flux density $> 50$ mJy, covered by both the TGSS and GB6. Flux densities are normalized to unity at 1.4 GHz. The empty red symbols correspond to upper limits. 
The dashed lines are the log-parabolae defined by the measurements at 150 MHz, 1.4, and 5 GHz, which are shown for the sources with a convex spectrum. The smaller symbols are the available WENSS measurements at 327 MHz.}
\label{sed}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.49]{fwhist.epsi}
\caption{Black histogram: distribution of the FWHM (in decades of frequency) of the 14 FR~0s with a convex radio spectrum extracted from the 19 sources of the bright sub-sample with $F_{1.4} > 50$ mJy. The left-pointing arrows correspond to upper limits on the FWHM due to the non-detection in at least one band. The right-pointing arrow represents the FWHM of J1604+17, whose measured value (12.5) exceeds the plot limit. The dashed red histogram reports the FWHM measured by \citet{odea91} in a sample of 13 GPS sources.}
\label{fwhist}
\end{figure}
To further investigate the spectral properties of the FR~0s, we also include the GB6 observations at 5 GHz in the analysis and compare the spectral indices measured between 150 MHz and 1.4 GHz, and between 1.4 and 5 GHz (see Fig. \ref{three}). This comparison confirms the main conclusion on the paucity of FR~0s with a steep spectrum. The GB6 measurements are also important for finding and exploring the nature of the sources with a convex spectrum. All sources with $\alpha(150-1400) < \alpha(1400-5000)$, meaning those located above the dotted line (marking an equal value of the two spectral indices considered), are objects with a convex spectrum. A significant limitation of our analysis is related to the relatively small fraction of objects detected at 150 MHz and/or 5 GHz, due to the higher flux density thresholds of the TGSS and GB6 data with respect to the NVSS. We therefore preferred to restrict this analysis to the sub-sample formed by the 19 FR~0s with a 1.4 GHz flux density $F_{\rm 1.4} > 50$ mJy, covered by both the TGSS and GB6 areas (see Fig. \ref{three}, right panel). At least 14 of these sources have a convex spectrum (see Fig. \ref{sed}). Following the approach of \citet{odea91}, we fit the radio spectra with a log-parabola and measured its full width at half maximum (FWHM). We limited our study to the 14 sources with a convex spectrum and measured values typically between 1.5 and 6 decades (see Fig. \ref{fwhist}). The FWHM in five sources must be considered an upper limit, because the source is not detected in at least one band. The median FWHM is between 2.2 and 2.4 decades, depending on the actual values of the upper limits. The sampling of the radio spectra of FR~0s is clearly rather limited. For seven of these sources, the spectral coverage can be improved by including the measurements from the WENSS: overall, these support the interpretation that the FR~0s' spectra are rather shallow. In particular, in two of the sources not detected by the GB6 survey (namely, J1213+50 and J1559+44), the fitted parabolas are consistent with the WENSS data points, suggesting that the upper bounds to their curvature do not differ significantly from their actual values.
\section{Discussion}
\label{discussion}
\subsection{Extended radio emission in FR~0s}
The TGSS images confirm the lack of spatially extended emission around FR~0s, the main defining property of this class of sources. An extended steep-spectrum component might have been expected in two cases. Firstly, FR~0s might be less efficient in the acceleration of the relativistic electrons with respect to standard extended radio-galaxies, and produce intrinsically steep radio spectra.
Secondly, FR~0s might be recurrent sources leaving behind relic emission from previous activity phases, characterized by a steep spectrum due to spectral ageing. The limit on the luminosity of extended structures is $\sim 4 \times 10^{23}$ W Hz$^{-1}$ over an area of 100 kpc $\times$ 100 kpc at the median distance of the FR~0 sample. As a reference, the edge-darkened FR~I sources selected from the same catalog of RGs from which we extracted the FR~0s \citep{capetti17} have sizes between 60 and 120 kpc, and predicted luminosities at 150 MHz (when assuming a spectral slope of 0.7 between 150 MHz and 1.4 GHz) in the $\sim 10^{24} - 10^{26}$ W Hz$^{-1}$ range. Nonetheless, very low surface brightness emission ($\sim$ 0.15 mJy beam$^{-1}$) extending over $\sim 10$ kpc has been detected in NGC~3998 \citep{frank16}. NGC~3998 is a nearby (z=0.0035) flat-spectrum radio source, unresolved in the FIRST images, fulfilling all requirements for an FR~0 classification.\footnote{NGC~3998 is not included in the \FRo\ because the SDSS did not obtain its optical spectrum.} This individual example indicates that, although FR~0s and FR~Is are different classes of sources, deeper observations are needed to explore their relationship in greater depth and, in particular, to establish whether FR~0s are able to produce large-scale jets.
\subsection{Radio spectral properties of FR~0s}
The spectral index information can be used to probe the presence of optically-thin emission in FR~0s, also on scales smaller than the spatial resolution of the TGSS (10 - 25 kpc). The fraction of \FRo\ sources detected by the TGSS is 42\%: this relatively low detection rate is due to the combination of (1) the different flux density thresholds used to build the \FRo\ catalog and the one adopted for the TGSS (5 and 17.5 mJy, respectively), and (2) the generally flat spectral slope between 150 MHz and 1.4 GHz of the FR~0s. Nonetheless, the TGSS sensitivity is sufficient to exclude that the sources not detected by this survey have a spectrum with a slope steeper than 0.5. This leaves us with only eight FR~0s with a steep low-frequency radio spectrum. A simple model in which the emission is produced by two components, one extended and one compact, with spectral index $\alpha$=0.7 and $\alpha$=0, respectively, indicates that the overall spectrum becomes steep ($\alpha > 0.5$) when the optically-thin component contributes a fraction $>$55\% of the total emission at 1.4 GHz. Therefore, at least half the emission from the 35 flat or inverted sources must originate from a core component. This value confirms the high core dominance of FR~0s derived by \citet{baldi19}, based on high-resolution images.
\subsection{Contribution of young radio-galaxies to the FR~0s' population}
A further issue that can be investigated from the low-frequency spectral properties of FR~0s is how many of them are compact because of their youth. Although, as reported in the introduction, the number density of FR~0s is too large to interpret the whole class of compact sources as young objects, the sources that will eventually produce extended RGs must necessarily pass through a small-size phase, and some FR~0s might indeed be compact because they are young. Young RGs can be found by looking for a convex spectrum due to a low-frequency cut-off. By restricting the analysis to the 19 FR~0s with $F_{1.4} > 50$ mJy, the fraction of sources with a convex spectrum is $\sim$ 75\%.
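For reference, if the fitted log-parabola is written as a quadratic in $\log_{10}\nu$ (the parametrization assumed in this sketch), that is
\begin{align*}
\log_{10} F_\nu = \log_{10} F_{\rm p} - c \left(\log_{10}\frac{\nu}{\nu_{\rm p}}\right)^2, \qquad c > 0,
\end{align*}
where $F_{\rm p}$ is the peak flux density reached at the frequency $\nu_{\rm p}$, then the flux density falls to half of its peak value when $c\,(\log_{10} \nu/\nu_{\rm p})^2 = \log_{10} 2$, and the FWHM, expressed in decades of frequency, is
\begin{align*}
{\rm FWHM} = 2\, \sqrt{\frac{\log_{10} 2}{c}} .
\end{align*}
A shallow spectral curvature (small $c$) therefore corresponds to a broad peak and a large FWHM.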
Nonetheless, the spectral curvature of the FR~0 spectra is less pronounced than what is measured in the more powerful GPS sources. \citeauthor{odea91} fit the radio spectra of a sample of GPS with a log-parabola (the same method we adopted in the previous section) and found that the peaks of their radio spectra are generally rather narrow, with a median FWHM of 1.2 decades of frequency. Conversely, we find larger values, typically between 1.5 and 6 decades (with a median value of $\sim$2.3). Although five of them must be considered upper limits, because the source is not detected in at least one band, the spectral curvature of FR~0s is generally much less pronounced than in GPS. Apparently, we do not see the sharp transition from an optically-thick to an optically-thin regime typical of the GPS. The spectral properties of FR~0s are better described as being due to a gradual steepening toward high frequencies. \citet{baldi19} noted a similar effect in their VLA observations between 1.4 and 7.5 GHz. The spectrum between 4.5 and 7.5 GHz is steeper than between 1.4 and 4.5 GHz, meaning that the high-frequency spectra of FR~0s are generally convex: the median difference in the spectral slopes from 1.4 to 5 GHz and from 4.5 to 7.5 GHz is $\Delta \alpha = 0.16$. \citeauthor{baldi19} also found that six out of 18 sources have steep spectra (the remaining objects have flat and, in one case, inverted spectra), a significantly higher fraction than what we find in this study. Interestingly, the steep sources in the sub-sample\footnote{J0907+35, one of the six belonging to this group, is the only FR~0 not covered by the TGSS.} have a flat spectrum between 150 MHz and 1.4 GHz, confirming the presence of a spectral steepening once data covering a broader spectral range are included. The presence of sources in which the spectral index increases with frequency, similarly to what we see in FR~0s, has already been recognized in very early works and ascribed to the combined effects of self-absorption and of the presence of components with a wide range of brightness temperatures \citep{kellermann69,marscher88}. This interpretation might also apply to FR~0s, and it can be tested with radio imaging at a high resolution, sufficient to spatially resolve the various emitting components. Nonetheless, in the bright sub-sample of 19 FR~0s, there are three sources (namely J0906+41, J1116+29, and J1250+00) whose spectrum is reminiscent of the GPS spectra: they have an inverted spectrum at low frequencies, and an emission peak at $\nu \gtrsim 1$ GHz. The FR~0s in which we might be observing a genuine low-frequency cut-off, and which can be interpreted as young compact objects, represent $\sim 15\%$ of the sub-sample with flux densities larger than 50 mJy. However, this result must be confirmed with more sensitive surveys. In fact, this flux density limit at 1.4 GHz might introduce a significant bias, because the median luminosity of this sub-sample is a factor of 10 higher than for the whole \FRo. Furthermore, we might be excluding sources in which the emission peak is located at higher frequencies, and which, for this reason, do not meet the flux density threshold. The very selection of the \FRo\ is based on surveys at 1.4 GHz, and this might represent a bias against sources with a GPS-like spectrum.
Furthermore, variability is known to play an important role in the process of identification of this class of sources (see, e.g., \citealt{torniainen05}): simultaneous multi-frequency observations are needed to firmly assess their nature. The study of these candidates is particularly relevant because of their extremely low radio luminosity ($2 - 3 \times 10^{23} \WHz$), more than five orders of magnitude below that of the most studied samples of young compact sources \citep{odea98}, and even less powerful than the low-luminosity compact sources studied by \citet{kunert10}.
\section{Summary and conclusions}
\label{summary}
We present the results obtained from the TGSS survey, based on GMRT observations at 150 MHz of the 104 compact FR~0 sources forming the \FRo\ sample. The fraction of FR~0s detected at low radio frequencies is 36\%. The relatively small number of 150 MHz detections is due in part to the higher flux density threshold of the TGSS with respect to the selection threshold of \FRo\ (17.5 and 5 mJy, respectively), but also to the general flatness of the radio spectra: only eight sources have a steep ($\alpha > 0.5$) radio spectral index between 150 MHz and 1.4 GHz. We failed to detect extended emission associated with the FR~0s. The corresponding upper limit, estimated over a region of 100 kpc $\times$ 100 kpc, is 4$\times 10^{23}$ W Hz$^{-1}$, a factor between 3 and 300 below the luminosity of FR~I sources. In addition, the FR~0s' spectral shapes indicate that extended optically-thin emission within the TGSS beam can contribute, on average, at most a fraction $\lesssim$ 50\% of the 1.4 GHz emission of these compact sources. By also including observations at 5 GHz from the GB6 survey, we explored the radio spectra of FR~0s over a larger range of frequencies. Due to the higher threshold of the GB6 ($\sim$ 18 mJy), only 23 FR~0s are detected at 5 GHz. We then preferred to limit the multi-band analysis to the sub-sample formed by the 19 FR~0s with a flux density at 1.4 GHz larger than 50 mJy. Most of them (13) have spectral indices flatter than 0.5 in both frequency ranges, and in 14 FR~0s, the spectrum is steeper between 1.4 and 5 GHz than between 150 MHz and 1.4 GHz, meaning their spectra are convex. A convex spectrum is a characteristic feature of young sources in which a turn-over is observed at low frequencies, due to a high optical depth of either free-free or synchrotron self-absorption. This raises the possibility that at least a fraction of the \FRo\ sources are compact because they are young radio-galaxies. Nonetheless, the spectral curvature of FR~0s is in general smaller than in GPS: in FR~0s, the median FWHM is 2.3 decades of frequency, compared to a median FWHM of 1.2 decades measured in GPS sources. The fraction of FR~0s with a high curvature and a spectrum rising in the GHz spectral region, reminiscent of GPS, is at most three out of 19, meaning $\lesssim$ 15\%. Clearly, the study of the low-frequency radio properties of FR~0s would greatly benefit from the deeper and higher-resolution observations that are being produced by the Low Frequency Array (LOFAR; \citealt{vanhaarlem13}). In particular, the LOFAR Two-metre Sky Survey (LoTSS) will eventually cover the entire Northern sky, producing $\sim$5\arcsec\ resolution images with a sensitivity of $\sim$ 0.1 mJy beam$^{-1}$ at 150 MHz \citep{shimwell17}.
It should enable us to set stronger limits on, or even allow the detection of, the extended emission, with an improvement of the detection threshold of more than an order of magnitude with respect to the TGSS, and to detect the low-frequency counterpart of all the FR~0s with $\alpha(150-1400) > -1$. This will enable us to characterize the spectral shape of FR~0s in much greater detail, in particular for those with a convex spectrum. \begin{acknowledgements} MB acknowledges support from INAF under PRIN SKA/CTA FORECaST and from the ERC-Stg DRANOEL, no 714245. RDB has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 730562 [RadioNet]. TOPCAT astronomical software \citep{taylor05} was used for the preparation and manipulation of the tabular data and the images. We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. This research has made use of Aladin sky atlas developed at CDS, Strasbourg Observatory, France. \end{acknowledgements} \bibliographystyle{aa}
\section{More on the Simulations} \label{app:simulation} In the following, we include additional information about the simulation study in Section~\ref{sec:simulations}. We start by explaining in detail how the data sets were created, and in Section~\ref{app:additional_simulation_results} we give some additional results. \subsection{Discrete-Time Survival Simulations from Logit Hazards} \label{app:sim_details} The simulated survival data sets were generated by drawing from the discrete hazard $h(t \mid \mathbf{x})$ across times $t \in \{0.1, 0.2, \ldots, 100\}$. The discrete hazard was defined through the logit hazard $g(t \mid \mathbf{x}) \in \mathbb{R}$, \begin{align*} h(t \mid \mathbf{x}) = \frac{1}{1 + \exp[- g(t \mid \mathbf{x})]}, \end{align*} ensuring that $h(t \mid \mathbf{x}) \in (0,\, 1)$. We let the logit hazard be a weighted sum of three different functions, $g_\text{sin}(t \mid \mathbf{x})$, $g_\text{con}(t \mid \mathbf{x})$, and $g_\text{acc}(t \mid \mathbf{x})$, giving \begin{align*} g(t \mid \mathbf{x}) &= \alpha_1\, g_\text{sin}(t \mid \mathbf{x}) + \alpha_2\, g_\text{con}(t \mid \mathbf{x}) + \alpha_3\, g_\text{acc}(t \mid \mathbf{x}), \\ g_\text{sin}(t \mid \mathbf{x}) &= \gamma_1 \sin(\gamma_2 [ t + \gamma_3]) + \gamma_4,\\ g_\text{con}(t \mid \mathbf{x}) &= \gamma_5, \\ g_\text{acc}(t \mid \mathbf{x}) &= \gamma_6 \cdot t - 10, \\ \alpha_i &= \frac{\exp(\gamma_{i+6})}{\sum_{j=1}^3 \exp(\gamma_{j+6})}, \quad \text{for } i=1, 2, 3. \end{align*} Here, we actually have covariate-dependent $\gamma_i$'s, i.e., $\gamma_i(\mathbf{x})$, but we have omitted the $\mathbf{x}$ for readability. Let $\tilde x_j$ be a linear combination of a subset of the covariates, $\tilde x_j = \mathbf{x}_j^T \boldsymbol\beta_j$ for $j=1, \dots, 9$, where the subsets are non-overlapping and of equal size (if the subsets are of size $m$, we have $\mathbf{x} \in \mathbb{R}^{9 m}$). The $\gamma$'s in the study are defined as \begin{align*} \gamma_1(\mathbf{x}) &= 5 \tilde x_1, \\ \gamma_2(\mathbf{x}) &= \frac{2\,\pi}{100} \cdot 2^{\lfloor \frac{5}{2}(\tilde x_2 + 1) - 1 \rfloor},\\ \gamma_3(\mathbf{x}) &= 15 \tilde x_3, \\ \gamma_4(\mathbf{x}) &= 2 \tilde x_4 - 6 - |\gamma_1(\mathbf{x})|, \\ \gamma_5(\mathbf{x}) &= \frac{5}{2} (\tilde x_5 + 1) - 8, \\ \gamma_6(\mathbf{x}) &= \frac{1}{1 + \exp[- \frac{6}{2} (\tilde x_6 + 1) + 5]}, \\ \gamma_7(\mathbf{x}) &= 5 (\tilde x_7 + 0.6),\\ \gamma_8(\mathbf{x}) &= 5 \tilde x_8,\\ \gamma_{9}(\mathbf{x}) &= 5 \tilde x_{9}, \end{align*} where $\lfloor z \rfloor$ is the floor operation. We draw $\tilde x_j \overset{iid}{\sim} \text{Unif}[-1, \, 1]$, and $\beta_k \overset{iid}{\sim} N(0,\, 1)$. The forms of the $\gamma_i(\mathbf{x})$'s have been chosen to obtain reasonable survival functions. In particular, $\gamma_2(\mathbf{x})$ ensures that the number of periods is a multiple of 2, as we found it more reasonable than having arbitrary periods. Finally, we draw covariates $x_{j,k}$, while ensuring $\mathbf{x}_j^T \boldsymbol\beta_j = \tilde x_j$, through the following scheme: For known $\boldsymbol\beta_j \in \mathbb{R}^m$, we draw $x_{j,k}$ conditionally such that \begin{align*} \left(\tilde x_j - \sum_{i=1}^{k} x_{j, i}\, \beta_{j, i} \right) \mid \tilde x_j, x_{j,1}, \ldots, x_{j, k-1} \sim \text{Unif}[-1,\, 1], \quad \text{for } k=1,\ldots,m-1. 
\end{align*}
Hence, we sample $u_{j,k} \overset{iid}{\sim} \text{Unif}[-1,\, 1]$ for $k=1, \ldots , m-1$, and require that $\tilde x_j - \sum_{i=1}^{k} x_{j, i}\, \beta_{j, i} = u_{j,k}$, giving the covariates
\begin{align*}
x_{j,k} =
\begin{cases}
\frac{1}{\beta_{j,1}} \left(\tilde x_j - u_{j, 1}\right), & \text{if } k = 1\\
\frac{1}{\beta_{j, k}}\left(u_{j, k-1} - u_{j, k}\right), & \text{if } k=2, \ldots, m-1\\
\frac{1}{\beta_{j, m}} \left( \tilde x_j - \sum_{i=1}^{m-1} x_{j, i} \beta_{j, i} \right), & \text{if } k = m.
\end{cases}
\end{align*}
Using this scheme, it is straightforward to change the number of covariates without affecting the hazards. The code for generating these simulations is available at \url{https://github.com/havakv/pycox}.
\subsection{Additional Simulation Results}
\label{app:additional_simulation_results}
We here present some additional results from the simulation study in Section~\ref{sub:comparison_of_interpolation_schemes}. Recall that each method is fitted 80 times (4 grids $\times$ 2 discretization schemes $\times$ 10 repetitions). In the same manner as in Figure~\ref{fig:sim_rank_hazard_ip}, we plot in Figure~\ref{fig:sim_rank_all} the MSE and concordance for the Logistic-Hazard, Logistic-Hazard (CHI), PC-Hazard, and PMF, where the scores of the 80 models are sorted from best to worst. We again see that PC-Hazard and the Logistic-Hazard (CHI) perform better than the discrete estimates of Logistic-Hazard and PMF\@. Furthermore, Logistic-Hazard seems to generally perform better than the PMF method. We still find that, for the best grid configurations, the differences between all models are very small. But we reiterate that, for practical purposes, it is quite desirable to have stable performance for a variety of hyperparameter configurations.
\begin{figure}[tb]
\centering
\includegraphics[width=0.99\linewidth]{./figures/sim_rank_all.pdf}
\vspace*{-1mm}
\caption{MSE and concordance from the simulation study in Section~\ref{sec:simulations}. The scores are plotted from best to worst. The number above each plot gives the size of the training set. Note that the plots are not on the same scale.}\label{fig:sim_rank_all}
\end{figure}
\setcounter{equation}{0}
\setcounter{table}{0}
\setcounter{figure}{0}
\section{PC-Hazard and Poisson Regression}
\label{app:pc_hazard_as_poisson_regression}
The PC-Hazard method presented in Section~\ref{sub:continuous_time_hazard_parametrization} is essentially a neural network version of the piecewise exponential model studied by \citet{holford1976life} and \citet{friedman1982piecewise}. Friedman showed that the likelihood obtained with piecewise constant hazards is proportional to the Poisson likelihood. Consequently, one can use standard software to fit the model. Nevertheless, we prefer to implement the log-likelihood of PC-Hazard more directly, and do not use the Poisson likelihood. This is because we wanted to ensure numerical stability with the softplus~\eqref{eq:pc_hazard_softplus} (as the inverse link function), while the Poisson likelihood available in most frameworks requires the log link function (i.e., an exponential activation function instead of the softplus). To see how we can obtain the Poisson likelihood, we first need to define some variables. Recall that ${\kappa(t)}$ denotes the index of an interval, such that $t \in \interv{\tau_{{\kappa(t)}-1}}{\tau_{\kappa(t)}}$.
If we define $y_{ij} = \mathbbm{1}\{{\kappa(t_i)} = j,\, d_i = 1\}$ and let
\begin{align*}
\Delta\tilde{t}_{ij}=\left\{
\begin{array}{ll}
\tau_j - \tau_{j-1}, &\text{if } t_i > \tau_j\\
t_i - \tau_{j-1}, &\text{if } \tau_{j-1} < t_i \leq \tau_j\\
0, &\text{if } t_i \leq \tau_{j-1},
\end{array}
\right.
\end{align*}
we can rewrite the likelihood contribution in~\eqref{eq:like_contrib_pc-hazard} as
\begin{align*}
L_i &= \prod_{j=1}^{{\kappa(t_i)}} {(\Delta\tilde{t}_{ij} \eta_j)}^{y_{ij}}\, \exp \left[- \Delta\tilde{t}_{ij} \eta_j \right],
\end{align*}
which is proportional to the likelihood of ${\kappa(t_i)}$ independent Poisson-distributed variables $y_{ij}$ with expectations $\mu_{ij} = \Delta\tilde{t}_{ij} \eta_j$.
\setcounter{equation}{0}
\setcounter{table}{0}
\setcounter{figure}{0}
\section{Implementation details}
\label{app:implementation_details}
The implementations of the survival methods described in Sections~\ref{sec:discrete_time_models} and~\ref{sec:continuous_time_models} differ slightly from the mathematical notation. This is because we also need to consider numerical stability. An implementation of the methods can be found at \url{https://github.com/havakv/pycox}. For the PMF parameterization, we used the log-sum-exp trick
\begin{align*}
\log\left(\sum_j \exp(z_j) \right) = \gamma + \log\left(\sum_j \exp(z_j-\gamma) \right),
\end{align*}
where $\gamma = \max_j(z_j)$, to ensure that we only take the exponential of non-positive numbers. Hence, by rewriting the loss~\eqref{eq:loss_pmf_sigma} in terms of $\phi_j(\mathbf{x})$, with $\phi_{m+1}(\mathbf{x}) = 0$ and $\gamma_i = \max_j (\phi_j(\mathbf{x}_i))$, we obtain
\begin{align*}
\text{loss} = &- \frac{1}{n} \sum_{i=1}^n d_i [\phi_{\kappa(t_i)} (\mathbf{x}_i) - \gamma_i] + \frac{1}{n}\sum_{i=1}^n \log \left( \sum_{j=1}^{m+1} \exp[\phi_j(\mathbf{x}_i) - \gamma_i]\right) \\
&- \frac{1}{n} \sum_{i=1}^n (1-d_i) \log \left( \sum_{j= {\kappa(t_i)}+1}^{m+1} \exp[\phi_j(\mathbf{x}_i)-\gamma_i] \right).
\end{align*}
For the discrete hazard parametrization, we can simply formulate it as the negative log-likelihood for Bernoulli data, or binary cross-entropy, and use existing implementations of the loss function to ensure numerical stability. In practice, these implementations use the log-sum-exp trick on the logits $\phi_j(\mathbf{x})$. Finally, for the continuous hazard parametrization, we use existing implementations of the softplus function, which use a linear function above a certain threshold, meaning $\log(1 + \exp[z]) \approx z$ for large values of $z$. However, we also note that for $z \approx 0$, we have that $\log(1 + z) \approx z$. Hence, for $\phi_{\kappa(t_i)}(\mathbf{x}_i) \ll 0$ we use that
\begin{align*}
\log \tilde{\eta}_{\kappa(t_i)}(\mathbf{x}_i) = \log[\log(1 + \exp[\phi_{\kappa(t_i)}(\mathbf{x}_i)])] \approx \phi_{\kappa(t_i)}(\mathbf{x}_i).
\end{align*}
\section{Discussion}
\label{sec:discussion}
In this paper, we have explored survival methodology built on neural networks for discrete-time data, and how it can be applied for continuous-time prediction. We have compared two existing discrete-time survival methods that minimize the negative log-likelihood of right-censored event times, where the first method~\citep{deephit} parameterizes the event-time probability mass function (PMF), while the second method~\citep{gensheimer2019} parameterizes the discrete hazard rate (Logistic-Hazard). Furthermore, we showed that the multi-task logistic regression \citep{MTLR, fotso2018} is, in fact, a PMF parametrization.
Through empirical studies of simulated and real data sets, we found that the Logistic-Hazard method performed slightly better than the PMF parametrization. We proposed two interpolation schemes for the discrete methods, which were found to typically improve performance for smaller data sets. This is likely caused by the fact that interpolation allows for coarser discretization of the time scale, which reduces the number of parameters in the neural networks. We found that the interpolation scheme that assumed constant density within each time interval (CDI) performed slightly better than the scheme assuming constant hazard in each time interval (CHI). Note, however, that none of the schemes affect the training procedure, meaning both can be compared at test time. We also proposed a new continuous-time method that assumes constant hazard in predefined time intervals (PC-Hazard). The method was found to perform very well compared to existing methods, both in terms of discrimination and calibration. Furthermore, in a simulation study, we found that the method continued to perform well for coarser discretization grids than the interpolated Logistic-Hazard method. This was particularly beneficial for the smallest data set in the simulation study. All three methods investigated in this paper need some form of discretization or coarsening of the time scale. In that regard, we proposed a simple scheme that uses the quantiles of the event-time distribution estimated by Kaplan-Meier, and showed through simulations that the quantile-based grids typically outperformed equidistant grids when coarse grids are used.
\section{Experiments with Real Data}
\label{sec:experiments}
We now compare the methods discussed in this paper to other methods in the literature, in particular DeepHit~\citep{deephit}, DeepSurv~\citep{DeepSurv}, Cox-Time~\citep{Cox-Time}, CoxCC~\citep{Cox-Time}, Random Survival Forests~\citep[RSF,][]{Ishwaran2008}, and a regular Cox regression. We conduct the comparison on five common real-world data sets: the Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT), the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC), the Rotterdam tumor bank and German Breast Cancer Study Group (Rot. \& GBSG), the Assay Of Serum Free Light Chain (FLCHAIN), and the National Wilm's Tumor Study (NWTCO). \citet{DeepSurv} made the first three datasets available in their Python package DeepSurv, and we have made no further preprocessing of the data. FLCHAIN and NWTCO were made available in the survival package of R \citep{survival-package}, but we use the same version of FLCHAIN as~\citet{Cox-Time}. No alterations were made to the NWTCO data set. The size, the number of covariates, and the proportion of censored individuals in each data set are given in Table~\ref{tab:datasets}.
\begin{table}[tb]
\centering
\begin{tabular}{lrrr}
\toprule
Dataset & Size & Covariates & Prop. Censored \\
\midrule
FLCHAIN & 6,524 & 8 & 0.70 \\
METABRIC & 1,904 & 9 & 0.42 \\
NWTCO & 4,028 & 6 & 0.86 \\
Rot. \& GBSG & 2,232 & 7 & 0.43 \\
SUPPORT & 8,873 & 14 & 0.32 \\
\bottomrule
\end{tabular}
\caption{Datasets for comparing survival methods.}\label{tab:datasets}
\end{table}
Hyperparameter tuning is performed with the evaluation criteria given in the paper that proposed the method.
For the methods presented in this paper, however, we will use the integrated Brier score (IBS) by~\citet{Graf1999} computed over 100 equidistant points between the minimum and maximum observed times in the validation set. In the simulations in Section~\ref{sec:simulations}, we could use the validation loss for this purpose. This is, however, no longer feasible as we now also need to tune the discretization scheme and the discretization affects the magnitude of the losses. The IBS considers both discrimination and calibration, and is a useful substitute for the MSE~\eqref{eq:mse_surv} when the true survival function is not available. Hence, we believe it is a reasonable tuning criterion. The experiments were conducted by five-fold cross-validation. We used the same hyperparameter search and training strategy as presented in Section 6.1 of the paper by \citet{Cox-Time}, but decrease the learning rate by 0.8 at the start of each cycle, as this was found to give more stable training. The best parameter configuration for each method on each fold was fitted 10 times, and we calculated the median concordance and integrated Brier score (IBS) of the 10 repetitions and averaged that over the five folds. The results are presented in Tables~\ref{tab:concordance_all} and~\ref{tab:ibs_all}. In terms of concordance, we see that DeepHit and PC-Hazard perform very well. The three Logistic-Hazard methods and Cox-Time all perform close to the PC-Hazard, while the PMF, RSF and the other Cox methods perform slightly worse. The concordances of the two proposed interpolation schemes, CHI and CDI, are very similar, but the CDI method gives slightly higher scores. There does, however, not seem to be much performance gain in interpolation for the concordance. \begin{table}[tb] \resizebox{\textwidth}{!}{% \centering \begin{tabular}{lrrrrr} \toprule Model & FLCHAIN & METABRIC & NWTCO & Rot. \& GBSG & SUPPORT \\ \midrule Cox Regression & 0.790 & 0.626 & 0.706 & 0.664 & 0.599 \\ CoxCC & 0.792 & 0.647 & 0.711 & 0.670 & 0.614 \\ DeepSurv & 0.792 & 0.640 & 0.709 & 0.674 & 0.615 \\ Cox-Time & \textbf{0.793} & 0.664 & 0.709 & 0.674 & 0.630 \\ RSF & 0.784 & 0.651 & 0.705 & 0.668 & 0.632 \\ DeepHit & 0.791 & \textbf{0.675} & 0.710 & 0.675 & \textbf{0.639} \\ PMF & 0.786 & 0.632 & 0.710 & 0.669 & 0.627 \\ Logistic-Hazard & 0.792 & 0.658 & 0.704 & 0.670 & 0.625 \\ Logistic-Hazard (CHI) & 0.790 & 0.656 & 0.714 & 0.673 & 0.628 \\ Logistic-Hazard (CDI) & 0.790 & 0.660 & 0.700 & 0.676 & 0.630 \\ PC-Hazard & 0.791 & 0.655 & \textbf{0.716} & \textbf{0.679} & 0.628 \\ \bottomrule \end{tabular} } \caption{Concordance from 5-fold cross-validation on real-world data sets.}\label{tab:concordance_all} \end{table} \begin{table}[tb] \resizebox{\textwidth}{!}{% \centering \begin{tabular}{lrrrrr} \toprule Model & FLCHAIN & METABRIC & NWTCO & Rot. 
\& GBSG & SUPPORT \\ \midrule Cox Regression & 0.0961 & 0.183 & 0.0791 & 0.180 & 0.218 \\ CoxCC & 0.0924 & 0.173 & 0.0745 & 0.171 & 0.213 \\ DeepSurv & 0.0919 & 0.175 & 0.0745 & 0.170 & 0.213 \\ Cox-Time & 0.0925 & 0.173 & 0.0753 & 0.170 & \textbf{0.212} \\ RSF & 0.0928 & 0.175 & 0.0749 & 0.170 & 0.213 \\ DeepHit & 0.0929 & 0.186 & 0.0758 & 0.184 & 0.227 \\ PMF & 0.0924 & 0.174 & 0.0748 & \textbf{0.169} & 0.213 \\ Logistic-Hazard & 0.0918 & \textbf{0.172} & 0.0742 & 0.171 & 0.213 \\ Logistic-Hazard (CHI) & 0.0919 & 0.173 & \textbf{0.0738} & 0.170 & 0.213 \\ Logistic-Hazard (CDI) & \textbf{0.0917} & \textbf{0.172} & 0.0741 & 0.170 & \textbf{0.212} \\ PC-Hazard & 0.0918 & \textbf{0.172} & \textbf{0.0738} & \textbf{0.169} & \textbf{0.212} \\ \bottomrule \end{tabular} } \caption{Integrated Brier score from 5-fold cross-validation on real-world data sets.}\label{tab:ibs_all} \end{table} Examining the IBS in Table~\ref{tab:ibs_all} we again find that PC-Hazard performs very well. But now, DeepHit does quite poorly. This is expected as DeepHit is designed for discrimination rather than well-calibrated estimates~\citep[see][]{Cox-Time}. In general, the PMF, the RSF and the three proportional Cox methods seem to have slightly higher IBS than the Hazard methods, but again the differences are quite small. Cox-Time performs quite well on all data sets except for FLCHAIN and NWTCO\@. Comparing the interpolation schemes of Logistic-Hazard, it seems that CDI still performs slightly better than CHI, although both are quite close to the discrete estimates of Logistic-Hazard. Interestingly, we see that, for the NWTCO data set, PC-Hazard and Logistic-Hazard (CHI) performs the best both in terms of concordance and IBS\@. This likely means that the piecewise exponential survival estimates are a good way of representing this data set. In summary, all three methods discussed in this paper are competitive with existing survival methodology. However, the interpolated Logistic-Hazard or the PC-Hazard seems to give the most stable high performance considering both discrimination and calibration. \section{Introduction} Survival analysis, or time-to-event analysis, considers the problem of modeling the time of a future event. A plethora of statistical methods for analyzing right-censored time-to-event data have been developed over the last fifty years or so. Most of these methods, like Cox regression, assume continuous-time models, but methods based on discrete-time models are sometimes used as well. For a review see, e.g., \citet{klein2005survival} for statistical methods based on continuous-time models and \citet{tutz2016} for discrete-time models and methods. As a result of the rapid development in machine learning and, in particular, neural networks, a number of new methods for time-to-event predictions have been developed in the last few years. This development has benefited from the excellent frameworks for neural network development, such as TensorFlow, PyTorch, Theano, Keras, and CNTK, which have simplified the application of neural networks to existing likelihood-based methodology. Thus, novel methods for time-to-event predictions have been developed based on Cox partial likelihood~\citep[e.g.,][]{DeepSurv, Luck2017, yousefi2017predicting, Cox-Time} and the discrete-time survival likelihood~\citep[e.g.,][]{deephit, fotso2018, gensheimer2019}. To the best of our knowledge, \citet{deephit} were the first to apply neural networks to the discrete-time likelihood for right-censored time-to-event data. 
Their approach was to parameterize the probability mass function (PMF) of the event times. In statistical survival analysis, it is, however, more common to express the likelihood by the discrete-time hazard rate~\citep[see, e.g.,][]{tutz2016}. \citet{gensheimer2019} used this form of the likelihood and parameterized the hazard rates with a neural network. In this paper, we perform a systematic study of the use of neural networks in conjunction with discrete-time likelihoods for right-censored time-to-event data; in particular, we perform a comparison of methods that parameterize the PMF and the discrete hazard rate. It is common to apply discrete-time methods as an approximation for continuous-time survival data. To this end, one has to perform a discretization of the continuous time scale, a subject that has received little attention in the literature. We consider two discretization schemes, corresponding to equidistant times or equidistant survival probabilities, and conduct a simulation study to better understand the effect of the discretization scheme and the number of time-points used for the discrete-time methods. Closely related to the discretization of a continuous time scale is the subject of interpolation. A coarse discretization grid has the benefit of reducing the number of parameters in a neural network. But the approximation error that is incurred when a discrete-time method is used as an approximation for continuous-time data becomes smaller with a denser grid. By interpolating the discrete-time survival estimates, it is possible to use a coarser discretization grid without increasing the approximation error. For this reason, two simple interpolation schemes are investigated in this paper; the first assumes constant density functions between the time-points in the discretization grid, and the second assumes constant hazard rates between the grid points. As a modification of the latter method, we also propose a continuous-time method obtained by assuming that the continuous-time hazard rate is piecewise constant, and we compare this method with the aforementioned discrete-time methods with and without interpolation. The paper is organized as follows. First, in Section~\ref{sec:related_works}, we present a summary of related methods for time-to-event predictions. Then, in Section~\ref{sec:discrete_time_models}, we consider the discrete-time likelihood for right-censored event times and discuss how the likelihood may be parameterized with neural networks. In Section~\ref{sec:continuous_time_models}, continuous-time models for time-to-event data are considered, and we discuss how discretization of the continuous time scale enables the application of discrete-time survival methods to continuous-time data. Here we also present the two schemes for interpolating discrete survival functions, and we consider our continuous-time method obtained by assuming piecewise constant hazards. In Section~\ref{sec:simulations}, a simulation study is conducted to understand the impact the discretization and interpolation schemes have on the methods, and in Section~\ref{sec:experiments}, we compare the methods with existing methods for time-to-event predictions using five real-world data sets. Finally, we summarize and discuss our findings in Section~\ref{sec:discussion}. The code for all methods, data sets, and simulations presented in this paper is available at \url{https://github.com/havakv/pycox}.
\section{Related Works}
\label{sec:related_works}
Numerous researchers have used neural networks to parameterize the likelihood for discrete-time survival data. In fact, the two discrete-time survival methods explored in this paper were first proposed by \citet{deephit} and \citet{gensheimer2019}. DeepHit~\citep{deephit} parameterizes the probability mass function (PMF) of the survival distribution and combines the log-likelihood for right-censored data with a ranking loss for improved discriminative performance. The method has been extended to allow for competing risks data. \citet{deephit} only used the time-dependent concordance of \citet{Antolini2005} for performance evaluation, and they did not discuss discretization of a continuous time scale. \citet{Cox-Time} showed that, by only considering concordance, DeepHit has excellent discriminative performance at the cost of poorly calibrated survival estimates. It is well known in the survival analysis literature that the log-likelihood of discrete-time survival data can be expressed as a Bernoulli log-likelihood of the hazard rates. This enables the use of generalized linear models (GLM) software for fitting survival models parameterized by the hazard rate; for an overview see \citet{tutz2016}. \citet{gensheimer2019} extended this methodology by parameterizing the hazard rates with a neural network. They showed that their method performs well, both in terms of discrimination and calibration of the survival predictions. However, they did not compare their methodology with methods that parameterize the PMF\@. \citet{MTLR} proposed the multi-task logistic regression, which is a generalization of the binomial log-likelihood, to jointly model a sequence of binary labels representing event indicators. \citet{fotso2018} later applied this framework to neural networks. We show in Section~\ref{sub:multi_task_logistic_regression} that the multi-task logistic regression is, in fact, a PMF parametrization. Another approach to time-to-event prediction is to consider time as continuous rather than discrete. As a result, the obtained methodology is often not fully parametric. Many of the proposed continuous-time methods are based on the Cox proportional hazards model, also called the Cox regression model. Estimation in this semi-parametric model is commonly based on the Cox partial likelihood~\citep[see, e.g.,][]{klein2005survival}. \citet{Faraggi1995} were the first to parameterize a Cox regression model with a neural network. They were, however, unsuccessful in achieving any improvements over regular Cox regression. Later extensions of the Cox proportional hazards methodology include new network architectures, larger data sets, and better optimization schemes \citep{DeepSurv, ching2018cox, yousefi2017predicting}. As a result, the predictive performance has been improved, in addition to enabling covariates in the form of images \citep{7822579, 8100208}. \citet{Luck2017} combined the negative Cox partial log-likelihood with an isotonic regression loss in an attempt to obtain better discriminative performance. Regardless, their method is still limited by the proportional hazards assumption. The proportionality assumption of the Cox regression model is quite restrictive. Unlike the discrete methods discussed above, none of the aforementioned Cox extensions can estimate survival curves that cross each other. \citet{Cox-Time} alleviated this restriction by proposing a non-proportional extension of the Cox methodology.
This was achieved by approximating the partial log-likelihood with a loss based on case-control sampling. Random Survival Forest (RSF) by~\cite{Ishwaran2008} is a fully non-parametric continuous-time method for right-censored survival data. RSF computes decision trees based on the log-rank test and estimates the cumulative hazard rate with the Nelson-Aalen estimator. The RSF method has become a staple in the predictive survival literature, and it is used as a benchmark in the majority of the work listed in this section. \section{Discrete-Time Models} \label{sec:discrete_time_models} In this section, we will restrict ourselves to models in discrete time. Then, in Section~\ref{sec:continuous_time_models}, we will discuss how discrete-time models may be used as approximations of models in continuous time. In the following, we start by a brief introduction to terms in the field of survival analysis, followed by the derivation of the likelihood for right-censored survival data, which is the basis for all methods presented in this paper (and much of survival analysis in general). We will then show how we can parameterize the likelihood with neural networks to obtain the methods proposed by \citet{deephit}, \citet{gensheimer2019}, \citet{MTLR}, and \citet{fotso2018}. \subsection{The Discrete-Time Survival Likelihood} \label{sub:the_discrete_survival_likelihood} Assume that time is discrete with values $0 = \tau_0 < \tau_1 < \ldots$, and let $\mathbb T = \{\tau_1, \tau_2, \dots \}$ denote the set of positive $\tau_j$'s. The time of an event is denoted $T^* \in \mathbb T$, and our goal is to model the distribution of such event times, or durations. The probability mass function (PMF) and the survival function for the event times are defined as \begin{align} \label{eq:survival_orig} f(\tau_j) &= \textnormal{P}(T^* = \tau_j),\nonumber\\ S(\tau_j) &= \textnormal{P}(T^* > \tau_j) = \sum_{k > j} f(\tau_k). \end{align} In survival analysis, models are often expressed in terms of the hazard function rather than the PMF\@. For discrete time, the hazard is defined as \begin{align*} h(\tau_j) &= \textnormal{P}(T^* = \tau_j \mid T^* > \tau_{j-1}) = \frac{f(\tau_j)}{S(\tau_{j-1})} = \frac{S(\tau_{j-1}) - S(\tau_j)}{S(\tau_{j-1})}, \end{align*} and it follows that \begin{align} \label{eq:hazard_more_f} f(\tau_j) &= h(\tau_j)\, S(\tau_{j-1}),\\ \label{eq:hazard_more_S} S(\tau_j) &= [1 - h(\tau_j)]\, S(\tau_{j-1}). \end{align} Note further, that from~\eqref{eq:hazard_more_S} it follows that the survival function can be expressed as \begin{align} \label{eq:surv_hazard} S(\tau_j) &= \prod_{k=1}^j [1 - h(\tau_k)]. \end{align} In most studies, we do not observe all event times. For some individuals, we will only observe a right-censored duration. So to allow for censoring, we let $C^* \in \mathbb T_C = \{\tau_1, \tau_2, \ldots, \tau_m\}$ be a right-censoring time. In the same manner as for the event time, the censoring-time has the PMF and survival function \begin{align*} f_{C^*}(\tau_j) &= \textnormal{P}(C^* = \tau_j), \\ S_{C^*}(\tau_j) &= \textnormal{P}(C^* > \tau_j). \end{align*} $T^*$ and $C^*$ are typically not observed directly, but instead, we observe a potentially right-censored duration $T$ and an event indicator $D$ given by \begin{align*} &T = \min\{T^*,\, C^*\},\\ &D = \mathbbm{1}\{T^* \leq C^*\}. \end{align*} We here follow the common convention in survival analysis that when an event and censoring time coincide, we observe the occurrence of the event. 
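Before turning to the likelihood, a minimal numerical illustration of the relations~\eqref{eq:hazard_more_f} and~\eqref{eq:surv_hazard} may be helpful (a sketch only, with made-up hazard values; this is not code from the pycox package):
\begin{verbatim}
import numpy as np

# Made-up discrete hazards h(tau_1 | x), ..., h(tau_5 | x) for one individual.
hazards = np.array([0.05, 0.10, 0.20, 0.15, 0.30])

# Survival function: S(tau_j | x) = prod_{k <= j} [1 - h(tau_k | x)].
survival = np.cumprod(1.0 - hazards)

# PMF: f(tau_j | x) = h(tau_j | x) * S(tau_{j-1} | x), with S(tau_0 | x) = 1.
previous_survival = np.concatenate(([1.0], survival[:-1]))
pmf = hazards * previous_survival

print(survival, pmf)
\end{verbatim}
The survival probabilities and the PMF thus follow directly from the discrete hazards, which is what the hazard parametrization discussed below exploits.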
Note that, as $C^* \leq \tau_m$, we are not able to observe event times $T^*$ larger than $\tau_m$. Hence, we are restricted to model the distribution of the event times in $\mathbb T_C$. Now, assuming that $T^*$ and $C^*$ are independent, we can derive the likelihood function for right-censored survival data. To this end, note that, for $t \in \mathbb{T_C}$ and $d \in \{0, 1\}$, we have that \begin{align*} \textnormal{P}(T = t, D=d) \nonumber &= \textnormal{P}{(T^*=t,\, C^* \geq t)}^d \; {\textnormal{P}(T^* > t,\, C^*=t)}^{1-d} \nonumber \\ &= {\left[\textnormal{P}(T^*=t)\, \textnormal{P}(C^* \geq t)\right]}^d \; {\left[\textnormal{P}(T^* > t)\, \textnormal{P}(C^*=t)\right]}^{1-d} \nonumber \\ &= {\left[f(t)\, (S_{C^*}(t) + f_{C^*}(t)) \right]}^d \, {\left[S(t) f_{C^*}(t) \right]}^{1-d} \nonumber\\ &= \left[{f(t)}^d \, {S(t)}^{1-d}\right] \left[ {f_{C^*}(t)}^{1-d}\, {(S_{C^*}(t) + f_{C^*}(t))}^d\right]. \end{align*} Now, it is common to assume that $f(t)$ and $f_{C^*}(t)$ have no parameters in common. Then we can consider, separately, the contribution to the likelihood of the event time distribution and the censoring distribution. We are typically only interested in modeling the distribution of the event times, in which case, for individual $i$, we obtain the likelihood contribution \begin{align} \label{eq:likelihood_survival} L_i &= {f(t_i)}^{d_i} {S(t_i)}^{1-d_i}. \end{align} If we have data for $n$ independent individuals, each with covariates $\mathbf{x}_i$, observed time $t_i$, and event indicator $d_i$, we can fit models by minimizing the mean negative log-likelihood \begin{align} \label{eq:loss_general} \text{loss} &= - \frac{1}{n} \sum_{i=1}^n \left( d_i \log [f(t_i \mid \mathbf{x}_i)] + (1-d_i) \log [S(t_i \mid \mathbf{x}_i)] \right). \end{align} A useful alternative to the loss function~\eqref{eq:loss_general} is obtained by rewriting it in terms of the discrete hazards. To this end, let $\kappa(t) \in \{0, \ldots, m\}$ define the index of the discrete time $t$, meaning $t = \tau_{\kappa(t)}$. Using~\eqref{eq:hazard_more_f},~\eqref{eq:hazard_more_S}, and~\eqref{eq:surv_hazard}, we can then rewrite the likelihood contribution~\eqref{eq:likelihood_survival} as \begin{align*} L_i &= {f(t_i)}^{d_i} \, {S(t_i)}^{1-d_i} \\ &= {[h(t_i)\, S(\tau_{{\kappa(t_i)}-1})]}^{d_i} \, {[(1 - h(t_i))\, S(\tau_{{\kappa(t_i)}-1})]}^{1 - d_i}\\ &= {h(t_i)}^{d_i} \, {[1 - h(t_i)]}^{1-d_i} \, S(\tau_{{\kappa(t_i)} - 1}) \\ &= {h(t_i)}^{d_i} \, {[1 - h(t_i)]}^{1-d_i} \, \prod_{j=1}^{{\kappa(t_i)}-1} [1 - h(\tau_j)]. \end{align*} With this formulation, the mean negative log-likelihood in~\eqref{eq:loss_general} can be rewritten as \begin{align} \label{eq:loss_hazard} \text{loss} &= - \frac{1}{n} \sum_{i=1}^n \left(d_i \log[h(t_i \mid \mathbf{x}_i)] + (1 - d_i) \log[1 - h(t_i \mid \mathbf{x}_i)] + \sum_{j = 1}^{{\kappa(t_i)}-1} \log[1 - h(\tau_j \mid \mathbf{x}_i)] \right) \nonumber\\ &= - \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^{\kappa(t_i)} \left(y_{ij} \log[h(\tau_{j} \mid \mathbf{x}_i)] + (1 - y_{ij}) \log[1 - h(\tau_{j} \mid \mathbf{x}_i)] \right). \end{align} Here, $y_{ij} = \mathbbm{1}\{t_i = \tau_j,\, d_i = 1\}$, meaning $\mathbf y_i = (y_{i1}, \ldots , y_{im})$ is a vector of zeros with a single 1 at the event index $\kappa(t_i)$ when $t_i$ corresponds to an observed event ($d_i = 1$). We recognize this as the negative log-likelihood for Bernoulli data, or binary cross-entropy, a useful discovery first noted by \citet{brown1975use}. 
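To make the structure of this loss concrete, the following sketch (in NumPy, with hypothetical variable names; the implementations actually used are discussed in Appendix~\ref{app:implementation_details}) evaluates~\eqref{eq:loss_hazard} as a binary cross-entropy that is masked so that individual $i$ only contributes terms up to the index ${\kappa(t_i)}$:
\begin{verbatim}
import numpy as np

def logistic_hazard_loss(hazards, idx_durations, events):
    """Mean negative log-likelihood (masked binary cross-entropy).

    hazards:       (n, m) array with h(tau_j | x_i) in (0, 1)
    idx_durations: (n,) integer array with kappa(t_i) in {1, ..., m}
    events:        (n,) array with d_i in {0, 1}
    """
    n, m = hazards.shape
    # y_ij = 1{t_i = tau_j, d_i = 1}: at most a single 1, at the index kappa(t_i).
    y = np.zeros((n, m))
    y[np.arange(n), idx_durations - 1] = events
    # Only the first kappa(t_i) time points enter the likelihood of individual i.
    mask = np.arange(1, m + 1)[None, :] <= idx_durations[:, None]
    bce = -(y * np.log(hazards) + (1 - y) * np.log(1 - hazards))
    return (bce * mask).sum(axis=1).mean()
\end{verbatim}
In practice, one works with the logits $\phi_j(\mathbf{x})$ and a numerically stable cross-entropy implementation, as described in Appendix~\ref{app:implementation_details}.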
With the two loss functions~\eqref{eq:loss_general} and~\eqref{eq:loss_hazard}, we can now make survival models by parameterizing the PMF or the hazard function and minimizing the corresponding loss. For classical statistical models, these approaches are equivalent and have been used to obtain maximum likelihood estimates for the parameters in the PMF/hazard function; see, e.g., \citet{tutz2016} for a review. We will, however, not consider classical maximum likelihood estimates, but focus on the part of the literature that fit neural networks for the purpose of time-to-event prediction, in which case the two loss functions may give different results. \subsection{Parameterization with Neural Networks} \label{sub:parameterization_with_neural_networks} In the previous section, we saw that the survival likelihood can be expressed in terms of the PMF or the hazard function. In the following, we will describe how to use this to create survival methods by parameterizing the PMF or hazard with neural networks. In theory, as both approaches minimize the same negative log-likelihood, the methods should yield similar results. But as neural networks are quite complex, this might not be the case in practice. First, considering the hazard parametrization of the likelihood, let $\phi(\mathbf{x}) \in \mathbb{R}^m$ represent a neural network that takes the covariates $\mathbf{x}$ as input and gives $m$ outputs, each corresponding to a discrete time-point $\tau_j$, i.e., $\phi(\mathbf{x}) = {\{\phi_1(\mathbf{x}), \ldots, \phi_m(\mathbf{x})\}}$. As the discrete hazards are (conditional) probabilities, we require $h(\tau_j \mid \mathbf{x}) \in [0, 1]$. This can be achieved by applying the logistic function (sigmoid function) to the neural network \begin{align*} h(\tau_j \mid \mathbf{x}) = \frac{1}{1 + \exp[-\phi_j(\mathbf{x})]}. \end{align*} We can estimate the hazard function by minimizing the loss~\eqref{eq:loss_hazard}, and survival estimates can be obtained from~\eqref{eq:surv_hazard}. To the best of our knowledge, this method was first proposed by \citet{gensheimer2019}. However, if one considers $\phi_j(\mathbf{x})$ an arbitrary parametric function of $\mathbf{x}$, the approach is well known in the survival literature and seems to have been first addressed by \citet{Cox1972} and \citet{brown1975use}; see also \citet{10.2307/270718}. The book by \citet{tutz2016} gives a review of the approach. The implementation we use in the experiments in Sections~\ref{sec:simulations} and~\ref{sec:experiments} differs slightly from that of \citet{gensheimer2019}, as it was found to be more numerically stable (see Appendix~\ref{app:implementation_details}). In this paper, we will refer to the method as \textit{Logistic-Hazard}, as coined by \citet{brown1975use} (one can also find the term \text{Logistic Discrete Hazard} used in the statistical literature). \citet{gensheimer2019}, on the other hand, referred to it as \textit{Nnet-survival}, but to be better able to contrast this method to the other methods presented in this paper, we will instead use the more descriptive \textit{Logistic-Hazard}. We can obtain a survival model by parameterizing the PMF in a similar manner to the Logistic-Hazard method. As for the hazards, the PMF represents probabilities $f(\tau_j \mid \mathbf{x}) \in [0, 1]$, but, contrary to the conditional probabilities that define the hazard, we now require the PMF to sum to 1. 
As we only observe event times in $\mathbb{T}_C$, we fulfill this requirement indirectly through the probability of surviving past $\tau_m$, i.e.,
\begin{align}
\label{eq:pmf_par_equal}
\sum_{k=1}^m f(\tau_k \mid \mathbf{x}) + S(\tau_m \mid \mathbf{x}) = 1.
\end{align}
Now, again with $\phi(\mathbf{x}) \in \mathbb{R}^{m}$ denoting a neural network, the PMF can be expressed as
\begin{align}
\label{eq:pmf_f}
f(\tau_j \mid \mathbf{x}) = \frac{\exp[\phi_j(\mathbf{x})]}{1 + \sum_{k=1}^{m} \exp[\phi_k(\mathbf{x})]}, \quad\quad \text{for } j = 1, \ldots, m.
\end{align}
Note that~\eqref{eq:pmf_f} is equivalent to the softmax function with a fixed $\phi_{m+1}(\mathbf{x}) = 0$. Alternatively, one could let $\phi_{m+1}(\mathbf{x})$ vary freely, something that is quite common in machine learning, but we chose to follow the typical conventions in statistics. By combining~\eqref{eq:survival_orig} and~\eqref{eq:pmf_par_equal}, we can express the survival function as
\begin{align}
\label{eq:s_pmf_sum}
S(\tau_j \mid \mathbf{x}) &= \sum_{k=j+1}^{m} f(\tau_k \mid \mathbf{x}) + S(\tau_m \mid \mathbf{x}), \quad\quad \text{for } j=1,\ldots,m-1, \\
S(\tau_m \mid \mathbf{x}) &= \frac{1}{1 + \sum_{k=1}^m \exp[\phi_k(\mathbf{x})]}. \nonumber
\end{align}
Now, let $\sigma_j[\phi(\mathbf{x})]$, for $j=1,\ldots,m+1$, denote the softmax in~\eqref{eq:pmf_f}, meaning $\sigma_{m+1}[\phi(\mathbf{x})] = S(\tau_m \mid \mathbf{x})$. Notice the similarities to classification with $m+1$ classes, as we are essentially classifying whether the event is happening at either time $\tau_1, \ldots, \tau_m$ or later than $\tau_m$. However, due to censoring, the likelihood is \textit{not} the cross-entropy. Instead, by inserting~\eqref{eq:pmf_f} and~\eqref{eq:s_pmf_sum} into~\eqref{eq:loss_general}, we get the mean negative log-likelihood
\begin{align}
\label{eq:loss_pmf_sigma}
\text{loss} &= -\frac{1}{n} \sum_{i=1}^n \left( d_i \log [\sigma_{\kappa(t_i)}(\phi(\mathbf{x}_i))] + (1-d_i) \log \left[\sum_{k={\kappa(t_i)}+1}^{m+1} \sigma_k(\phi(\mathbf{x}_i)) \right] \right),
\end{align}
where ${\kappa(t_i)}$ still denotes the duration index of individual $i$'s event time, i.e., $t_i = \tau_{\kappa(t_i)}$. This is essentially the same negative log-likelihood as presented by \citet{deephit}, but with only one type of event. Also, note that, contrary to the work by \citet{deephit}, the negative log-likelihood in~\eqref{eq:loss_pmf_sigma} allows for survival past time $\tau_m$. Some numerical improvements of the implementation are addressed in Appendix~\ref{app:implementation_details}. We will refer to this method simply as \textit{PMF}, as this term is unambiguously discrete, contrary to the term \textit{hazard}, which is used for both discrete and continuous time.
\subsubsection{Multi-Task Logistic Regression}
\label{sub:multi_task_logistic_regression}
Multi-task logistic regression, proposed by \citet{MTLR}, provides a generalization of the binomial log-likelihood to jointly model the sequence of binary labels $Y_j = \mathbbm{1}\{T^* \leq \tau_j\}$. This means that $Y = (y_1, \ldots , y_m)$ is a sequence with zeros for every time $\tau_j$ up to the event time, followed by ones, e.g., $(0, \ldots, 0, 1, \ldots, 1)$. Then
\begin{align}
\label{eq:mtlr_orig}
\textnormal{P}(Y = (y_1, \dots, y_m) \mid \mathbf{x}) = \frac{\exp\left[\sum_{k=1}^m y_k \psi_k(\mathbf{x})\right]} {1 + \sum_{k=1}^{m} \exp\left[\sum_{l=k}^m \psi_l(\mathbf{x})\right]}.
\end{align} \citet{MTLR} only consider linear predictors $\psi_k(\mathbf{x}) = \mathbf{x}^T \boldsymbol\beta_k$, but this was extended to a neural network by \citet{fotso2018}. The parameters of $\psi_k(\mathbf{x})$ are found by minimizing the negative log-likelihood in~\eqref{eq:loss_general}. As $f(\tau_j \mid \mathbf{x}) = \textnormal{P}(Y=(y_1, \ldots, y_m) \mid \mathbf{x})$, where $y_k = \mathbbm{1}\{k \geq j\}$, the expression in~\eqref{eq:mtlr_orig} can be written as \begin{align*} f(\tau_j \mid \mathbf{x}) &= \frac{\exp\left[\sum_{k=j}^m \psi_k(\mathbf{x})\right]} {1 + \sum_{k=1}^{m} \exp\left[\sum_{l=k}^m \psi_l(\mathbf{x})\right]} = \frac{\exp[\phi_j(\mathbf{x})]} {1 + \sum_{k=1}^{m} \exp[\phi_k(\mathbf{x})]}, \end{align*} where $\phi_j(\mathbf{x}) = \sum_{k=j}^m \psi_k(\mathbf{x})$. Hence, the multi-task logistic regression is equivalent to the PMF in~\eqref{eq:pmf_f}, but where $\phi_j(\mathbf{x})$ is the (reverse) cumulative sum of the output of the network $\psi(\mathbf{x}) \in \mathbb{R}^m$. To the extent of our knowledge, there are no benefits to this extra cumulative sum. Instead, it simply requires unnecessary computations, and, for large $m$, it can cause numerical instabilities. Hence, we will not consider this method further. \section{Continuous-Time Models} \label{sec:continuous_time_models} In the following, we no longer consider the time scale to be discrete, but instead consider continuous-time models, where $T^*, C^* > 0$, and we let $T = \min\{T^*, C^*\}$ and $D = \mathbbm{1}\{T^* \leq C^*\}$ be as before. Let $\tau$ denote the maximum possible value of $C^*$, meaning $P(C^* \leq \tau) = 1$. Hence, a potentially right-censored observation $T$ is in the interval $T \in \interv{0}{\tau}$. Instead of a PMF, we now have the density function $f(t)$ and the continuous-time survival function \begin{align*} S(t) = \textnormal{P}(T^* > t) = \int_t^\tau f(z)\, dz + S(\tau). \end{align*} Furthermore, the continuous-time hazard rate is a non-negative function of the time (no longer restricted to $[0, 1]$), \begin{align} \label{eq:hazard_cont_def} h(t) = \frac{f(t)}{S(t)} = \lim_{\Delta t \rightarrow 0} \frac{\textnormal{P}(t \leq T^* < t + \Delta t \mid T^* \geq t)}{\Delta t}. \end{align} As a result, we can express the survival function in terms of the cumulative hazard $H(t) = \int_{\tau_0}^t h(z)\, dz$, \begin{align} \label{eq:surv_continuous_cumulative_hazard} S(t) = \exp[-H(t)]. \end{align} This yields the continuous-time version of the likelihood contribution in~\eqref{eq:likelihood_survival}, \begin{align} \label{eq:likelihood_cont} L_i = {f(t_i)}^{d_i}\, {S(t_i)}^{1-d_i} = {h(t_i)}^{d_i}\, S(t_i) = {h(t_i)}^{d_i}\, \exp[-H(t_i)]. \end{align} The derivation of $L_i$ follows the same steps as the derivation of the discrete likelihood contribution~\eqref{eq:likelihood_survival}, only with density functions instead of probability mass functions. In what follows, we will first discuss how we can apply the discrete-time methods from Section~\ref{sub:parameterization_with_neural_networks} for continuous-time data. We will here address how time can be discretized to fit the discrete-time model formulation, and how to interpolate an estimated discrete survival function for continuous-time predictions. Then, we will propose a new continuous-time method by assuming that the hazard in~\eqref{eq:hazard_cont_def} is piecewise constant, which we call \textit{PC-Hazard}. 
\subsection{Discretization of Durations} \label{sub:discretization_of_durations} Both the PMF and Logistic-Hazard methods require time to be discrete on the form $0 = \tau_0 < \tau_1 < \cdots < \tau_m$. Hence, to apply the methods to continuous-time data, we need to perform some form of discretization of the time scale (also, for inherently discrete event times, we might want to coarsen the discrete time scale to obtain a smaller subset of $\tau_j$'s, as this will reduce the number of parameters in the neural networks). Possibly the most obvious way to discretize time is to make an equidistant grid in $[0, \tau]$ with $m$ grid points. An alternative, that we explore in this paper, is to make a grid based on the distribution of the event times. By estimating the survival function $S(t)$ with the Kaplan-Meier estimator, we obtain a general trend of event times. With $\hat S(t)$ denoting the Kaplan-Meier survival estimates, we can make a grid from the quantiles of the estimates, $1 = \hat S(0) = \zeta_0 > \zeta_1 > \cdots > \zeta_m = \hat S(\tau)$. We will assume that each interval has the same decrease in the survival estimate, so that $\zeta_j - \zeta_{j+1} = (1 - \hat S(\tau)) / m$. The corresponding duration grid, $\tau_1 < \cdots < \tau_m$, is found by solving $\hat S(\tau_j) = \zeta_j$. We will then obtain a more dense grid in intervals with more events, and less dense grid in intervals with fewer events. This is illustrated in Figure~\ref{fig:km_discretization}, where we can see that the grid becomes coarser as the slope of the survival curve becomes less steep. \begin{figure}[tpb] \centering \includegraphics[width=0.7\linewidth]{./figures/km_discretization.pdf} \vspace*{-3mm} \caption{Illustration of the Kaplan-Meier based discretization scheme. The quantiles of the Kaplan-Meier curve are used as the grid points.}\label{fig:km_discretization} \end{figure} The discrete-time methods assume that all events and censorings occur at the $\tau_j$'s, so, when performing the discretization, we move all event times in an interval to the end of that interval while censored times are moved to the end of the previous interval. This means that for $\tau_{j-1} < T_i \leq \tau_j$, we replace $T_i$ by $\tau_j$ if $D_i = 1$, and by $\tau_{j-1}$ if $D_i = 0$. Our reason for this choice is that this is typically how event times are recorded. Consider a study where we are only able to make observations at times $\tau_1 < \tau_2 < \cdots < \tau_m$. For a censored observation, $\tau_{j-1}$ is the last point in time where the individual was recorded alive, while for an observed event, $\tau_j$ is the first duration for which the individual was recorded with the event. As a side note, an alternative way to obtain the discrete loss in~\eqref{eq:loss_hazard} is by assuming continuous event times in defined intervals $\intervl{\tau_j}{\tau_{j+1}}$ and censorings that only occur at the beginning or end of the intervals \citep[see, e.g.,][]{tutz2016}. This justifies the use of this loss for continuous-time data grouped in intervals. \subsection{Interpolation for Continuous-Time Predictions} \label{sub:interpolation_for_continuous_time_predictions} When discrete-time survival methods are applied to continuous-time data, as described in Section~\ref{sub:discretization_of_durations}, the survival estimates become a step function with steps at the grid points (blue line in Figure~\ref{fig:hazard_interpolation}). Consequently, for coarser grids, it might be beneficial to interpolate the discrete survival estimates. 
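As a concrete illustration of the discretization scheme in Section~\ref{sub:discretization_of_durations}, a rough sketch of the Kaplan-Meier based grid and of the rounding of event and censoring times could look as follows. The Kaplan-Meier estimate is assumed to be precomputed (e.g., with a standard survival analysis library) and represented by arrays of jump times and corresponding survival values; the helper names are ours, and edge cases are handled only crudely.
\begin{verbatim}
import numpy as np

def km_quantile_grid(km_times, km_surv, m):
    # Survival levels 1 = zeta_0 > zeta_1 > ... > zeta_m = S(tau),
    # equally spaced so that zeta_j - zeta_{j+1} = (1 - S(tau)) / m.
    zetas = 1 - np.arange(1, m + 1) * (1 - km_surv[-1]) / m
    # tau_j: first time at which the KM curve drops to (or below) zeta_j
    # (duplicates may need to be removed for very coarse KM estimates).
    idx = np.searchsorted(-km_surv, -zetas, side='left')
    return km_times[np.clip(idx, 0, len(km_times) - 1)]

def discretize(durations, events, grid):
    # Events are moved to the end of their interval, censorings to the end
    # of the previous interval (crudely clipped to the first grid point).
    idx = np.searchsorted(grid, durations, side='left')
    idx = np.where(events == 1, idx, idx - 1)
    return np.clip(idx, 0, len(grid) - 1)
\end{verbatim}
Here \texttt{discretize} returns 0-based indices into \texttt{grid}, corresponding to the duration indices $\kappa(t_i)$ after discretization.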
In this regard, we propose two simple interpolation schemes that fulfill the monotonicity requirement of the survival function. The first assumes that the probability density function is constant in each time interval $\interv{\tau_{j-1}}{\tau_{j}}$, while the second scheme assumes constant hazards in each time interval. This corresponds to piecewise linear and piecewise exponential survival estimates, respectively, and we will, therefore, refer to the schemes as \textit{constant density interpolation} (CDI) and \textit{constant hazard interpolation} (CHI). See Figure~\ref{fig:hazard_interpolation} for an illustration of the two schemes and the discrete survival estimates. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{./figures/hazard_interpolation.pdf} \vspace*{-3mm} \caption{Survival estimates by a discrete model (e.g., PMF or Logistic-Hazard) for 5 grid points. The three lines represent the discrete survival estimates and the two interpolation schemes in Section~\ref{sub:interpolation_for_continuous_time_predictions}: The constant density interpolation (CDI) and constant hazard interpolation (CHI).}\label{fig:hazard_interpolation} \end{figure} \subsubsection{Constant Density Interpolation (CDI)} \label{sub:constant_pdf_interpolation} For a continuous time $t \in \interv{\tau_{j-1}}{\tau_j}$, linear interpolation of the discrete survival function takes the form \begin{align*} S(t) = S(\tau_{j-1}) + \left[S(\tau_j) - S(\tau_{j-1}) \right] \, \frac{t - \tau_{j-1}}{\Delta \tau_j}, \end{align*} where $\Delta \tau_j = \tau_j - \tau_{j-1}$. This means that the density function $f(t)$ is constant in this interval \begin{align} f(t) = - S'(t) = \frac{S(\tau_{j-1}) - S(\tau_{j})}{\Delta \tau_j}. \end{align} Let us now rewrite the expression of the survival function as \begin{align*} S(t) &= \frac{S(\tau_{j-1}) \tau_j - S(\tau_j) \tau_{j-1}}{\Delta \tau_j} - \frac{S(\tau_{j-1}) - S(\tau_{j})}{\Delta \tau_j}\, t = \alpha_j - \beta_j\, t, \end{align*} where both $\alpha_j$ and $\beta_j$ are non-negative. Using that the density is $f(t) = -S'(t) = \beta_j$, we get a simple expression for the hazard function~\eqref{eq:hazard_cont_def} \begin{align*} h(t) = \frac{f(t)}{S(t)} = \frac{\beta_j}{\alpha_j - \beta_j\, t}. \end{align*} Hence, we see that linear interpolation of the survival function corresponds to a constant density function and an increasing hazard rate throughout the interval. \subsubsection{Constant Hazard Interpolation (CHI)} \label{sub:constant_hazard_interpolation} The following scheme assumes constant hazard in each interval, which corresponds to linear interpolation of the cumulative hazard function. For a continuous time $t \in \interv{\tau_{j-1}}{\tau_j}$, the interpolated cumulative hazard is then \begin{align*} H(t) = H(\tau_{j-1}) + \left[H(\tau_j) - H(\tau_{j-1}) \right] \, \frac{t - \tau_{j-1}}{\Delta \tau_j}. \end{align*} This means that the hazard function is constant in this interval \begin{align*} h(t) = H'(t) = \frac{H(\tau_{j}) - H(\tau_{j-1})}{\Delta \tau_j} = \eta_j, \end{align*} and from~\eqref{eq:surv_continuous_cumulative_hazard}, we obtain the piecewise exponential survival function \begin{align*} S(t) = \exp[-H(t)] = \exp\left[- \eta_j\, (t - \tau_{j-1}) \right]\, S(\tau_{j-1}). \end{align*} Finally, the density is \begin{align*} f(t) = h(t)\, S(t) = \eta_j\, S(t), \end{align*} showing that the density is decreasing throughout the interval. 
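For completeness, the two interpolation schemes can be sketched as follows, given the discrete survival estimates $S(\tau_1 \mid \mathbf{x}), \ldots, S(\tau_m \mid \mathbf{x})$ of one individual on a grid $\tau_1 < \cdots < \tau_m$. The function below is only illustrative and not part of the implementation used later.
\begin{verbatim}
import numpy as np

def interpolate_surv(t, grid, surv, scheme='chi'):
    # grid: tau_1 < ... < tau_m;  surv: S(tau_1), ..., S(tau_m).
    grid = np.concatenate([[0.], grid])    # prepend tau_0 = 0
    surv = np.concatenate([[1.], surv])    # S(tau_0) = 1
    j = np.clip(np.searchsorted(grid, t, side='left'), 1, len(grid) - 1)
    rho = (t - grid[j - 1]) / (grid[j] - grid[j - 1])
    if scheme == 'cdi':
        # Linear interpolation of S (constant density in each interval).
        return surv[j - 1] + (surv[j] - surv[j - 1]) * rho
    # 'chi': linear interpolation of H = -log S (constant hazard),
    # i.e., piecewise exponential survival estimates.
    H = -np.log(np.clip(surv, 1e-12, None))
    return np.exp(-(H[j - 1] + (H[j] - H[j - 1]) * rho))
\end{verbatim}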
Summarizing the two interpolation methods, CDI assumes that the events are spread evenly in the interval, while the CHI assumes that there are more events at the beginning of the interval. Correspondingly, the CDI assumes that the longer an individual ``survives'' in an interval, the higher the risk becomes of experiencing an event in the next immediate moment (increasing hazard), contrary to the CHI which assumes that this risk is constant. In fact, in the next section, we will propose a new method with the same assumptions as CHI, but contrary to the CHI, we can train it on continuous-time data. \subsection{A Piecewise Constant Continuous-Time Hazard Parametrization} \label{sub:continuous_time_hazard_parametrization} We now propose a continuous-time method by parameterizing the hazards in~\eqref{eq:likelihood_cont}. As for the CHI in Section~\ref{sub:constant_hazard_interpolation}, we will let the continuous-time hazard be piecewise constant. Disregarding the neural networks, this model was first proposed by \citet{holford1976life}, and further developed by \citet{friedman1982piecewise}, who found that piecewise constant hazards yield a likelihood proportional to a Poisson likelihood; see Appendix~\ref{app:pc_hazard_as_poisson_regression} for details. Consider a partition of the time scale $0 = \tau_0 < \tau_1 < \cdots < \tau_m = \tau$, and let $\kappa(t)$ denote the interval index of time $t$ such that $t \in \interv{\tau_{{\kappa(t)}-1}}{\tau_{\kappa(t)}}$ (this is slightly different from the discrete case where we had $t = \tau_{\kappa(t)}$). If we assume that the hazard is constant within each interval, we can express the hazard as the step function \begin{align*} h(t) = \eta_{\kappa(t)}, \end{align*} for a set of non-negative constants $\{\eta_1, \ldots, \eta_m\}$. For $\Delta \tau_j = \tau_j - \tau_{j-1}$, we can now express the cumulative hazard as \begin{align*} H(t) &= \int_{0}^{t} h(z) dz\\ &= \left(\sum_{j=1}^{{\kappa(t)} - 1} \int_{\tau_{j-1}}^{\tau_j} h(z) dz\right) + \int_{\tau_{{\kappa(t)} -1}}^{t} h(z) dz\\ &= \left(\sum_{j=1}^{{\kappa(t)} - 1} \eta_j\, \Delta \tau_j\right) + \eta_{\kappa(t)}\, (t - \tau_{{\kappa(t)} -1 }). \end{align*} Inserting this into~\eqref{eq:likelihood_cont} yields the likelihood contribution for individual $i$ \begin{align} \label{eq:like_contrib_pc-hazard} L_i &= {h(t_i)}^{d_i}\, \exp\left[- H(t_i) \right] = {\eta_{\kappa(t_i)}}^{d_i}\, \exp\left[-\eta_{\kappa(t_i)}\, (t_i - \tau_{{\kappa(t_i)} - 1})\right] \prod_{j=1}^{{\kappa(t_i)} - 1} \exp\left[-\eta_j\, \Delta \tau_{j}\right]. \end{align} What remains is to parameterize the hazard with a neural network. However, to avoid passing all the $\tau_j$'s to the loss function, we let the network instead parameterize the quantities $\tilde \eta_j = \eta_j\, \Delta \tau_j$. This allows us to rewrite the likelihood contribution as \begin{align*} L_i &= {\left(\frac{{\tilde\eta_{\kappa(t_i)}{}}}{\Delta \tau_{\kappa(t_i)}}\right)}^{d_i}\, \exp\left[-\tilde\eta_{\kappa(t_i)}\, \rho(t_i) \right] \prod_{j=1}^{{\kappa(t_i)} - 1} \exp\left[-\tilde\eta_j\right] \\ &\propto {\tilde\eta_{\kappa(t_i)}{}}^{d_i}\, \exp\left[-\tilde\eta_{\kappa(t_i)}\, \rho(t_i) \right] \prod_{j=1}^{{\kappa(t_i)} - 1} \exp\left[-\tilde\eta_j\right], \end{align*} where \begin{align} \label{eq:rho_frac} \rho(t) = \frac{t - \tau_{{\kappa(t)} -1}}{\Delta \tau_{\kappa(t)}}, \end{align} is the proportion of interval ${\kappa(t)}$ at time $t$. As before, let $\phi(\mathbf{x}) \in \mathbb{R}^m$ denote a neural network.
To ensure that $\tilde{\eta}_j$ is non-negative, we use the softplus function \begin{align} \label{eq:pc_hazard_softplus} \tilde{\eta}_j = \log(1 + \exp[\phi_j(\mathbf{x})]). \end{align} Our model can now be fitted by minimizing the mean negative log-likelihood \begin{align*} \text{loss} &= - \frac{1}{n}\sum_{i=1}^n \left( d_i\, \log \tilde{\eta}_{\kappa(t_i)} (\mathbf{x}_i) - \tilde{\eta}_{\kappa(t_i)} (\mathbf{x}_i)\, \rho(t_i) - \sum_{j=1}^{{\kappa(t_i)} - 1} \tilde{\eta}_j (\mathbf{x}_i) \right), \end{align*} and estimates for the survival function can be obtained by \begin{align} \label{eq:cont_haz_surv} S(t \mid \mathbf{x}) = \exp[- H(t \mid \mathbf{x})] = \exp[-\tilde{\eta}_{\kappa(t)} (\mathbf{x})\, \rho(t)] \prod_{j=1}^{{\kappa(t)}-1} \exp[-\tilde{\eta}_j(\mathbf{x})], \end{align} where $\rho(t)$ is given by~\eqref{eq:rho_frac}. We will refer to this method as the \textit{piecewise constant hazard} method, or \textit{PC-Hazard}. Even though this is a continuous-time method, we still need to decide the set of $\tau_j$'s that define the intervals. Therefore, the discretization techniques discussed in Section~\ref{sub:discretization_of_durations} are also relevant for this method. Comparing the PC-Hazard to the Logistic-Hazard method with survival estimates interpolated with CHI (Section~\ref{sub:constant_hazard_interpolation}), we see that the only difference is in the loss function, as both PC-Hazard and CHI have piecewise constant hazards. In other words, the two methods both use~\eqref{eq:cont_haz_surv} to obtain survival estimates, but they have different estimates for the $\tilde \eta_j$'s as the PC-Hazard use the observed continuous event times and censoring times, while Logistic-Hazard discretizes the times to a predefined set of $\tau_j$'s as described in Section~\ref{sub:discretization_of_durations}. \section{Simulations} \label{sec:simulations} To get a better understanding of the methodologies discussed in Sections~\ref{sec:discrete_time_models} and~\ref{sec:continuous_time_models}, we perform a simulation study where we vary the size of the training sets, the discretization scheme, and the number of grid points used for discretization. \citet{gensheimer2019} performed a similar study to evaluate the effect of discretization on their Logistic-Hazard method with the conclusion that there were no differences in performance. However, their simulations were quite simple (only one binary covariate), their only performance metric was the \citet{Harrell1982} concordance at 1-year survival, and they did not include any interpolation of the survival estimates. For this reason, we find that further investigation is warranted. We generate simulated survival times by sequentially sampling from discrete-time hazards defined on a fine grid of time points. The hazards are specified through their logit transforms, as this enables us to use functions in $\mathbb R$ while still obtaining hazards in $[0, 1]$. The logit hazards, $g(t) = \text{logit}[h(t)]$, are linear combinations of the three functions \begin{align*} &g_\text{sin}(t \mid \mathbf{x}) = \gamma_1 \sin(\gamma_2 [ t + \gamma_3]) + \gamma_4,\\ &g_\text{con}(t \mid \mathbf{x}) = \gamma_5, \\ &g_\text{acc}(t \mid \mathbf{x}) = \gamma_6 \cdot t - 10, \end{align*} with additional parameters $\gamma_7$, $\gamma_8$, and $\gamma_9$ determining the linear combination. As described in Appendix~\ref{app:simulation}, each $\gamma_k$ is a function of five covariates, meaning we have a total of 45 covariates. 
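The sequential sampling scheme can be sketched as follows; the logit hazard $g(\,\cdot \mid \mathbf{x})$ is assumed to be evaluated on the simulation grid beforehand (the covariate-dependent $\gamma$'s are specified in Appendix~\ref{app:simulation} and are not repeated here), and the helper below is only illustrative.
\begin{verbatim}
import numpy as np

def sample_event_time(logit_h, taus, rng=None):
    # logit_h: g(tau_j | x) for j = 1, ..., m on the simulation grid taus.
    rng = rng or np.random.default_rng()
    h = 1. / (1. + np.exp(-logit_h))     # discrete hazards h(tau_j | x)
    for tau_j, h_j in zip(taus, h):
        if rng.uniform() < h_j:          # event at tau_j, given survival so far
            return tau_j
    return np.inf                        # no event within the grid
\end{verbatim}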
We let the discrete time scale consist of 1,000 equidistant points between 0 and 100 (i.e.\ $\tau_0=0$, $\tau_1 = 0.1$, $\tau_2 = 0.2$, \ldots, $\tau_{1000} = 100$). Knowing the hazards, the true survival function can be obtained with~\eqref{eq:surv_hazard}, $S(\tau_j \mid \mathbf{x}) = \prod_{k=1}^j [1 - h(\tau_k \mid \mathbf{x})]$. In Figure~\ref{fig:sim_examples} we show five examples of logit hazard functions and their corresponding survival functions. \begin{figure}[tpb] \centering \includegraphics[width=1\linewidth]{./figures/sim_examples.pdf} \vspace*{-6mm} \caption{Examples from the simulation study in Section~\ref{sec:simulations}. The left figure shows examples of 5 simulated survival curves, while the right figure shows the corresponding logit hazards.}\label{fig:sim_examples} \end{figure} Note that even though we simulate our data using a discrete-time model, the time-grid is so fine that this mimics simulation from a continuous-time model. The full details of this simulation study are given in Appendix~\ref{app:simulation}. We created three training sets of size 3,000, 10,000, and 50,000, a validation set of size 10,000 (for hyperparameter tuning) and a test set of size 100,000. For the training and validation sets, we included a censoring distribution with constant hazard resulting in 37 \% censoring. The full uncensored test set is used for evaluation. For the discretization of the time scale, we applied both the equidistant scheme and the Kaplan-Meier quantiles, each with 5, 25, 100, and 250 grid points. The neural networks were all ReLU-nets with batch normalization and dropout between each layer, with all layers consisting of the same number of nodes. We performed a hyperparameter grid search over 1, 2, 4, and 8 layers; 16, 64, and 256 nodes; and dropout of 0 and 0.5. Each net was trained with batch size of 256 and the AdamWR optimizer~\citep{adamwr} with cycle length 1, where, at each restart, the cycle length was doubled and the learning rate was multiplied by 0.8. Learning rates were found using the methods proposed by \citet{smith2017}. The hyperparameter tuning was repeated 10 times, giving 10 fitted models for each combination of method, grid size, discretization scheme, and training set size. \subsection{Comparison of Discrete-Time Methods} \label{sub:comparison_of_discrete_methods} We start by comparing the two discrete methods from Section~\ref{sub:parameterization_with_neural_networks}, that parameterize the PMF and the discrete-time hazards. We refer to them as PMF and Logistic-Hazard, respectively. For evaluation, we use the time-dependent concordance \citep{Antolini2005}, in addition to the MSE between the survival estimates and the true survival function at all 1,000 time points $\tau_1, \ldots, \tau_{1000}$ \begin{align} \label{eq:mse_surv} \text{MSE} = \frac{1}{100,000} \sum_{i=1}^{100,000} \frac{1}{1,000}\sum_{j=1}^{1,000} {\left(\hat S(\tau_j \mid \mathbf{x}_i) - S_i(\tau_j) \right)}^2. \end{align} Here $\hat S(\tau_j \mid \mathbf{x}_i)$ and $S_i(\tau_j)$ are the estimated and true survival functions, respectively, for individual $i$ (in the test set) at time $\tau_j$. So, in this regard, the discrete-time survival estimates are represented by step functions, as illustrated in Figure~\ref{fig:hazard_interpolation}. 
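The MSE in~\eqref{eq:mse_surv} is straightforward to compute once the survival estimates are evaluated at all the $\tau_j$'s; a short sketch (with our own helper names, treating the discrete estimates as step functions with steps at the grid points, cf. Figure~\ref{fig:hazard_interpolation}) is
\begin{verbatim}
import numpy as np

def step_function_eval(grid, surv_grid, eval_times):
    # Discrete estimates as right-continuous step functions: S(t) = S(tau_j)
    # for tau_j <= t < tau_{j+1}, and S(t) = 1 before the first grid point.
    idx = np.searchsorted(grid, eval_times, side='right') - 1
    return np.where(idx < 0, 1.0, surv_grid[np.clip(idx, 0, len(grid) - 1)])

def survival_mse(surv_est, surv_true):
    # surv_est, surv_true: (n_individuals, n_times) survival matrices.
    return np.mean(np.mean((surv_est - surv_true) ** 2, axis=1))
\end{verbatim}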
\begin{figure}[tb] \centering \includegraphics[width=0.99\linewidth]{./figures/sim_grid_haz_pmf_chi.pdf} \vspace*{-1mm} \caption{Median MSE and concordance for each grid size in the simulation study in Section~\ref{sub:comparison_of_discrete_methods}. The number above each plot gives the size of the training set. The full lines use an equidistant grid, while the dotted lines use Kaplan-Meier quantiles for discretization. Note that the plots are not on the same scale.}\label{fig:sim_grid_haz_pmf_chi} \end{figure} In Figure~\ref{fig:sim_grid_haz_pmf_chi} we plot the median test scores of the two methods versus the grid size used for discretization. The numbers above each plot give the size of the training set used to fit the methods. The full lines represent equidistant grids, while the dotted lines are from grids obtained with quantiles from Kaplan-Meier survival estimates. We have also included the constant hazard interpolation (CHI) of the survival estimates from the Logistic-Hazard method (see Section~\ref{sub:constant_hazard_interpolation}). It is evident that smaller discretization grids are better for the smaller training sets, while larger training sets allow for larger grids. This is reasonable as the smaller grids result in fewer parameters in the neural networks. Nevertheless, the smallest grid of size 5 seems to only work well for the interpolated estimates, and very poorly for the discrete estimates. We also note that the discretization grids from Kaplan-Meier quantiles seem to give slightly better scores than the equidistant grids. Comparing the discrete survival estimates from Logistic-Hazard (blue lines) with the CHI estimates (orange lines), we see that the two lines overlap for larger grid sizes. This is expected as the effect of interpolation decreases as the grids become denser. In general, the PMF method does not perform as well as the Logistic-Hazard, though the difference is rather small. Also, while the interpolated estimates yield better results for most grid configurations, the best scores are almost identical. This means that the interpolated estimates have more stable performance, but with careful tuning of the discretization scheme, similar performance can be obtained with the discrete estimates. \subsection{Comparison of Interpolation Schemes} \label{sub:comparison_of_interpolation_schemes} \begin{figure}[tb] \centering \includegraphics[width=0.99\linewidth]{./figures/sim_rank_hazard_ip.pdf} \vspace*{-1mm} \caption{MSE and concordance from the simulation study in Section~\ref{sub:comparison_of_interpolation_schemes}. The scores are plotted from best to worst. The number above each plot gives the size of the training set. Note that the plots are not on the same scale.}\label{fig:sim_rank_hazard_ip} \end{figure} In the following, we compare the interpolation schemes for the discrete-time hazard method Logistic-Hazard. The experiments are not shown for the PMF method as the results are very similar. In Section~\ref{sub:interpolation_for_continuous_time_predictions} we presented two methods for interpolation of discrete survival estimates. The first assume constant density in each interval (denoted CDI for constant density interpolation), while the second assumes constant hazard in each interval (denoted CHI for constant hazard interpolation). In our simulation study, we have four grid sizes and two discretization schemes. As the hyperparameter tuning was repeated 10 times this gives 80 fitted models for each method on each data set. 
In Figure~\ref{fig:sim_rank_hazard_ip}, we plot the scores of these 80 models sorted from best to worst, as this shows both the best-case performance and the stability of the methods. The figure contains results from the discrete survival estimates (Logistic-Hazard), the constant density interpolation (CDI), and the constant hazard interpolation (CHI). Clearly, there is almost no difference in performance between the two interpolation schemes, while the discrete estimates have slightly worse best-case performance and much worse worst-case performance. In fact, the only difference between the two interpolation schemes is that the CDI estimates give slightly better MSE, while the CHI estimates give slightly better concordance. In this regard, we will in the following simulations only include the CHI estimates, as they have the same assumption as the continuous-time PC-Hazard method, simplifying the comparison between the methods. \subsection{Comparison with PC-Hazard} \label{sub:comparison_with_the_continuous_hazard_method} \begin{figure}[tb] \centering \includegraphics[width=0.99\linewidth]{./figures/sim_grid_cont_chi.pdf} \caption{Median MSE and concordance for each grid size of the simulation study in Section~\ref{sub:comparison_with_the_continuous_hazard_method}. The number above each plot gives the size of the training set. The full lines use an equidistant grid, while the dotted lines use Kaplan-Meier quantiles for discretization. Note that the plots are not on the same scale.}\label{fig:sim_grid_cont_chi} \vspace*{-1mm} \end{figure} Finally, we compare the previous methods with our proposed continuous-time hazard method from Section~\ref{sub:continuous_time_hazard_parametrization}, PC-Hazard. In Figure~\ref{fig:sim_grid_cont_chi} we plot the MSE and concordance for the interpolated Logistic-Hazard (CHI) method, and the continuous-time PC-Hazard method. First, we notice that PC-Hazard does better for the smallest grids with only five grid points, while Logistic-Hazard (CHI) typically performs best with 25 grid points. Also, in terms of MSE, PC-Hazard does the best for the smallest training set, while Logistic-Hazard (CHI) does better for the two larger training sets. In terms of concordance, PC-Hazard performs the best for the smallest and largest data sets. All differences are, however, quite small. On the other hand, the Logistic-Hazard (CHI) estimates do better for a variety of grid configurations, showing that this method is less sensitive to the discretization than the PC-Hazard method. Finally, we again see that the Kaplan-Meier quantiles seem to give slightly better performance than the equidistant discretization grids. In Figure~\ref{fig:sim_rank_all} in Appendix~\ref{app:simulation}, we have included a plot of the same type as Figure~\ref{fig:sim_rank_hazard_ip} for the Logistic-Hazard (CHI) method, the Logistic-Hazard method, the PMF method, and the PC-Hazard method. The figure again shows that the PMF method performs slightly worse than the other methods, while the PC-Hazard method performs similarly to the Logistic-Hazard (CHI) estimates. \subsection{Summary of Simulations} \label{sub:summary_of_simulations} To summarize the results of the simulations, we have shown that the size of the discretization grid (number of $\tau_j$'s) has a large impact on the performance of the methods, and therefore needs to be carefully tuned. Finer grids enable the methods to reduce bias in the predictions but require more parameters in the neural networks (higher variance).
By defining the discretization grid with Kaplan-Meier quantiles, the performance for the smaller grids typically improves. Furthermore, interpolation of the discrete-time survival estimates made the performance less sensitive to the number of grid points and was generally found to improve the performance of the methods for smaller grid sizes. The performance of the two proposed interpolation schemes, CHI and CDI, was more or less indistinguishable. Comparing the three methods, we found that PMF performs slightly worse than Logistic-Hazard, both in terms of best-case performance and stability to discretization-grid configurations. PC-Hazard was found to be competitive with the interpolated Logistic-Hazard method and even performed better for the smallest training set. But the differences between all methods were small, and the size of the training sets and the grid size were shown to have a much larger impact on the performance than the choice of method.
\section{Introduction} In the world of $C^*$-algebras, the construction of crossed-products $A\rtimes_\alpha G$ for an action of a locally compact group $G$ on a $C^*$-algebra $A$ is one of the most fundamental tools in the theory--not only to construct interesting examples of $C^*$-algebras, but also in the application of $C^*$-algebra theory in Harmonic Analysis, Non-commutative Geometry, Topology, and other areas of mathematics. Having this in mind, it is very surprising that a serious study of a similar construction did not appear in the world of non-selfadjoint operator algebras, operator systems, or operator spaces until the recent works of Katsoulis and Ramsey \cite{KR} in the setting of operator algebras and the even more recent work \cite{HK} of Harris and Kim in which they give a construction of crossed products of operator systems by actions of {\em discrete} groups, following some of the ideas developed in \cite{KR} for the construction. But we should also mention the earlier preprint \cite{Ng} by Chi-Keung Ng, where he introduces reduced (i.e., spatially defined) crossed products for coactions of quantum groups on operator spaces. Just a few days before the first version of \cite{KR} appeared on the arXiv, the authors of this paper posted a preprint describing a crossed product construction for group actions on operator spaces (see \cite{AEN}). Although this paper contained many ideas which were quite similar to ideas used in \cite{KR}, the proof of a central theorem (\cite[Theorem 4.3]{AEN}) turned out to be wrong, and since we didn't see a way for a quick repair, we decided to withdraw the paper from the arXiv. We are very grateful to Elias Katsoulis for having pointed out this error to us! The problem was that in \cite{AEN} we defined the full and reduced crossed products $V\rtimes_\alpha^uG$ and $V\rtimes_\alpha^rG$ as the completions of $C_c(G,V)$ inside the full and reduced crossed products by a canonical action of $G$ on the enveloping $C^*$-algebra $C_e^*(X(V))$, where $X(V)$ denotes the Paulsen system of $V$. Indeed, there is no problem with this in case of the reduced crossed products, but the universal crossed product $V\rtimes_\alpha^uG$ should enjoy a universal property for suitable covariant representations of the system $(V,G,\alpha)$ (which was the content of the unfortunate \cite[Theorem 4.3]{AEN}). However, \cite[Theorem 5.6]{HK} indicates that this cannot be true in general. The way out is to define the universal crossed product $V\rtimes_\alpha^uG$ as the closure of $C_c(G,V)$ inside the universal crossed product $C_u^*(X(V))\rtimes_{\alpha,u}G$, where $C_u^*(X(V))$ denotes the universal $C^*$-hull of $X(V)$ as introduced by Kirchberg and Wassermann in \cite{KW}. This was the approach of \cite{KR} in case of operator algebras and of \cite{HK} in case of operator systems. If we want to exploit the full power of universal properties for the universal crossed products, however, we would want to have a one-to-one correspondence between completely bounded covariant maps $(\varphi, u)$ of the system $(V,G,\alpha)$ and the completely bounded maps $\Phi$ of $V\rtimes_\alpha^uG$ via a canonically defined integrated form $\varphi\rtimes u$. But it turns out that in order to obtain such a correspondence we need to remember more information about the ambient crossed product $C_u^*(X(V))\rtimes_{\alpha,u}G$.
Indeed, taking the completion of $$C_c(G, X(V))=\left(\begin{matrix} C_c(G) & C_c(G, V)\\ C_c(G, V^*)& C_c(G)\end{matrix}\right)\subseteq C_u^*(X(V))\rtimes_{\alpha,u}G$$ and considering the convolution products between the diagonal entries and the upper right corner gives $V\rtimes_\alpha^uG$ the structure of an operator $C_u^*(G)$-bimodule, and it is this structure which one needs to take into account for a good description of the universal properties. Other problems appear if we consider crossed products by operator systems instead of operator spaces. Since in the above procedure we defined crossed products by $V$ via a crossed-product construction with the Paulsen system $X(V)$, it appears to be useful to consider at first the case of crossed products by general operator systems $X$. As mentioned above, such crossed products have been introduced by Harris and Kim in \cite{HK} for discrete groups. The reason for the restriction to the discrete case was the simple fact that the analogous construction for non-discrete groups $G$ would result in a non-unital (but selfadjoint) operator space, hence it would not land in the right category. The other drawback is similar to the one described above for operator space crossed products: we need to keep more structure than simply the completion of $C_c(G,X)$ inside $C_u^*(X)\rtimes_uG$ in order to get the full power of the universal properties. Our way out is to extend the category of operator systems to what we call $C^*$-operator systems: a concrete $C^*$-operator system $(A,X)$ is a pair of subsets $A\subseteq X\subseteq \mathcal B(H)$ for some Hilbert space $H$, such that $X=X^*$, $A$ is a non-degenerate $C^*$-subalgebra of $\mathcal B(H)$, and $AX=X=XA$. A morphism from $(A,X)$ to the $C^*$-operator system $(B,Y)$ is then a ccp map $\varphi_X: X\to Y$ such that the restriction $\varphi_A:=\varphi_X|_A$ is a $*$-homomorphism from $A$ to $B$ and such that $\varphi_X(ax)=\varphi_A(a)\varphi_X(x)$ and $\varphi_X(xa)=\varphi_X(x)\varphi_A(a)$ for all $a\in A$ and $x\in X$. Of course, if $X\subseteq \mathcal B(H)$ is a classical operator system, then $(\C1, X)$ is a $C^*$-operator system in this sense, and every ucp map between operator systems $X$ and $Y$ extends to a morphism in the above sense from $(\C1, X)$ to $(\C1, Y)$. Thus we get an inclusion of the category of operator systems into the category of $C^*$-operator systems. After some preliminaries given in Section \ref{sec-prel} we introduce $C^*$-operator systems in Section \ref{sec-C*opsys}, where we also introduce a corresponding notion of multiplier $C^*$-operator systems which play a role analogous to that of the multiplier algebra of a $C^*$-algebra. In particular, for a $C^*$-operator system $(A,X)$ the multiplier system $(M(A), M(X))$ can be considered as the largest unitization of $(A,X)$ and it always contains the unitization $(\tilde{A},\tilde{X})$ if $(A,X)$ has no unit (which means that $A$ has no unit). An important feature of the multiplier system is that every non-degenerate morphism from $(A,X)$ to $(B,Y)$ extends uniquely to a morphism from $(M(A), M(X))$ to $(M(B), M(Y))$. In Section \ref{sec-univ-env} we study $C^*$-hulls of $C^*$-operator systems, i.e., $C^*$-algebras $C$ together with completely isometric representations $(j_A,j_X):(A,X)\to C$ such that $C$ is generated by the image $j_X(X)$ of $X$.
We show that there always exists a largest (the universal) $C^*$-hull $C_u^*(A,X)$ and a smallest (the enveloping) $C^*$-hull $C_e^*(A,X)$, using well-known ideas of Kirchberg and Wassermann, and of Hamana and Ruan. As a first hint that the category of $C^*$-operator systems is useful, we give in Section \ref{sec-tensor} a brief discussion of some tensor product constructions with $C^*$-operator systems. In particular, the spatial tensor product $X\otimes A$ of an operator system $X$ with a $C^*$-algebra $A$ naturally carries the structure of a $C^*$-operator system $(1_X\otimes A, X\otimes A)$ and if $A$ is not unital, this system is not unital either! In Section \ref{crossedproducts} we define the universal crossed products $(A,X)\rtimes_\alpha^uG$ by a continuous action $\alpha$ of a locally compact group $G$ on a $C^*$-operator system $(A,X)$ as the pair of completions $(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$ of $(C_c(G,A), C_c(G,X))$ inside the universal crossed product $C_u^*(A,X)\rtimes_{{\alpha}, u}G$ for the canonical action of $G$ on the universal $C^*$-hull $C_u^*(A,X)$. Note that the $C^*$-part $A\rtimes_{\alpha}^uG$ is not always isomorphic to the universal $C^*$-algebra crossed product $A\rtimes_{\alpha,u}G$ (e.g., see part (b) of Remark \ref{rem-crossed}). We show that in this setting we get a very satisfying picture of the universal property: every covariant morphism $(\varphi_X, u)$ of the system $(A,X, G,\alpha)$ ``integrates'' to a morphism $\varphi_X\rtimes u$ of $(A,X)\rtimes_{\alpha}^uG$ and every (non-degenerate) morphism of $(A,X)\rtimes_{\alpha}^uG$ appears as such an integrated form. In Section \ref{sec-reduced-crossed} we add a brief discussion of the spatially defined reduced crossed product $(A,X)\rtimes_{\alpha}^rG$. In Section \ref{sec-coaction} we study coactions of groups on $C^*$-operator systems and their crossed products. Since locally compact groups are always co-amenable, it is not surprising that the full and reduced crossed products coincide in the sense that the spatially defined crossed product already enjoys the universal properties for covariant representations. In Section \ref{sec-dual} we prove versions of the Imai-Takai and Katayama duality theorems for actions and coactions of groups on $C^*$-operator systems: starting with an action $\alpha:G\to \operatorname{Aut}(A,X)$ there are canonical dual coactions $\widehat\alpha_u$ and $\widehat\alpha_r$ on the universal and reduced crossed products, respectively, such that we get canonical $\widehat{\widehat\alpha}=\alpha\otimes\operatorname{Ad}\rho$ equivariant isomorphisms (where $\rho$ denotes the right regular representation of $G$) $$\big(A\rtimes_{\alpha}^uG\rtimes_{\widehat{\alpha^u}}\widehat{G}, X\rtimes_{\alpha}^uG\rtimes_{\widehat{\alpha^u}}\widehat{G}\big) \cong \big(A\otimes {\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G))\big)$$ and $$\big(A\rtimes_{\alpha}^rG\rtimes_{\widehat{\alpha^r}}\widehat{G}, X\rtimes_{\alpha}^rG\rtimes_{\widehat{\alpha^r}}\widehat{G}\big) \cong \big(A\otimes {\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G))\big).$$ The converse direction, when starting with a coaction $\delta$, known as Katayama's theorem in case of $C^*$-algebra crossed products, is a bit more involved, and the full analogue of the Imai-Takai theorem only works under some additional assumptions, like when $G$ is amenable or if everything in sight was defined spatially (i.e., we would consider reduced group algebras and reduced crossed products only).
Note that in \cite{Ng}, Chi-Keung Ng proves duality theorems for spatially defined crossed products in the more general case of (co-)actions by more general quantum groups. In Section \ref{sec-bimodules} we come back to group actions on operator spaces $V$. As indicated above, also in this case it is useful to find a suitable extension of the category of operator spaces since the natural candidate for the crossed product has a canonical structure of a $C^*$-algebra bimodule via a left and right action of $C_u^*(G)$ on $V\rtimes_\alpha^uG$. So we found that the right category would be the category of (concrete) $C^*$-operator bimodules $(A,V,B)$ which consist of a concrete operator space $V\subseteq \mathcal B(K,H)$ for some Hilbert spaces $H$ and $K$ together with $C^*$-subalgebras $A\subseteq \mathcal B(H)$ and $B\subseteq \mathcal B(K)$ such that $$AV=V=VB. $$ Again, we can identify operator spaces $V\subseteq \mathcal B(K,H)$ with the $C^*$-operator bimodule $(\C1_H,V, \C1_K)$. In this way the category of $C^*$-operator bimodules extends the category of operator spaces. If a $C^*$-operator bimodule $(A,V,B)$ is given, we get a corresponding {\em Paulsen $C^*$-operator system} $\big(A\oplus B, X(A,V,B)\big)$ with $$X(A,V,B)=\left(\begin{matrix} A&V\\ V^*& B\end{matrix}\right)$$ and a one-to-one correspondence between morphism of $(A,V,B)$ and morphisms of $(A\oplus B, X(A,V,B)\big)$. Thus it is fairly straightforward to apply the above described crossed-product constructions for $C^*$-operator systems to the Paulsen systems $\big(A\oplus B, X(A,V,B)\big)$ to obtain complete analogues of the above described results in this setting. In particular we get complete analogues of the Imai-Takai and Katayama duality theorems. The authors are grateful to Elias Katsoulis and David Blecher for valuable discussions and comments concerning this project and in particular to the content of the preprint \cite{AEN}. \section{Preliminaries}\label{sec-prel} If $H$ is a Hilbert space we denote by $\mathcal B(H)$ the algebra of bounded operators on $H$ equipped with the operator norm and the canonical involution. A concrete operator space is a closed linear subspace $X\subseteq \mathcal B(H)$ for some Hilbert space $H$. If $X\subseteq \mathcal B(H)$ and $Y\subseteq \mathcal B(K)$ are two operator spaces, then for each $n\in \mathbb{N}$ we have the matrix operator spaces $M_n(X)\subseteq \mathcal B(H^n)$ and $M_n(Y)\subseteq \mathcal B(K^n)$. If $\varphi:X\to Y$ is a linear map define $\varphi_n:M_n(X)\to M_n(Y)$ by $\varphi_n\big((x_{ij})_{1\leq i,j\leq n}\big)=(\varphi(x_{ij}))_{1\leq i,j\leq n}$. Then $\varphi:X\to Y$ is called {\em completely bounded} (or a {\em cb map}), if there exist a constant $C\geq 0$ such that $$\|\varphi_n(x)\|_{\operatorname{op}}\leq C\|x\|_{\operatorname{op}}$$ for all $n\in \mathbb{N}$ and all $x\in M_n(X)$. If $C$ can be chosen to be less or equal to one, we say that $\varphi:X\to Y$ is {\em completely contractive} and if $\|\varphi_n(x)\|_{\operatorname{op}}= \|x\|_{\operatorname{op}}$ for all $n\in \mathbb{N}$ and $x\in M_n(X)$, we say that $\varphi$ is completely isometric. Suppose now that $X=X^*$ and $Y=Y^*$ are symmetric closed subspaces of $\mathcal B(H)$ and $\mathcal B(K)$, respectively, not necessarily containing the units. 
We then call a linear map $\varphi:X\to Y$ a {\em ccp map} (completely contractive and positive) if it satisfies the following conditions: \begin{enumerate} \item $\varphi:X\to Y$ is completely contractive; \item $\varphi(x^*)=\varphi(x)^*$ for all $x\in X$; \item $\varphi_n(x)\geq 0$ for every positive $x\in M_n(X)$; \end{enumerate} where positivity of an element $x\in M_n(X)$ (resp. $y\in M_n(Y)$) means that $x$ (resp. $y$) is a positive element in $\mathcal B(H^n)$ (resp. $\mathcal B(K^n)$). If, in addition, $\varphi$ is completely isometric, we call it an {\em icp map}. Of course it is well known that if $X$ and $Y$ are operator systems (i.e., they contain the units $1_H$ and $1_K$, respectively), then every unital linear map which satisfies (3) automatically satisfies (1) and (2), hence is a ccp map. As usual, we then say that $\varphi:X\to Y$ is a {\em ucp map}. \section{$C^*$-operator systems and multiplier systems}\label{sec-C*opsys} In this section we introduce a category of possibly non-unital operator systems which include $C^*$-algebras and classical operator systems as subcategories. This category will play an important r\^ole in our construction of crossed products. \begin{definition}\label{def-cstaros} A (concrete) {\em $C^*$-operator system} $(A,X)$ on the Hilbert space $H$ is a pair of norm-closed self-adjoint subspaces $A\subseteq X\subseteq \mathcal B(H)$ such that \begin{enumerate} \item $A$ is a non-degenerate $C^*$-subalgebra of $\mathcal B(H)$, i.e., $A H=H$. \item $\overline{\operatorname{span}}\{a\cdot x: a\in A, x\in X\}=X$ (which by an application of Cohen's factorisation theorem is equivalent to $X=AX=\{ax: a\in A, x\in X\}$). \end{enumerate} A {\em morphism} between two $C^*$-operator systems $(A,X)$ and $(B,Y)$ on Hilbert spaces $H$ and $K$, respectively, consists of a ccp map $\psi: X\to Y$ such that \begin{enumerate} \item $\psi(A)\subseteq B$, and \item for all $a\in A$ and $x\in X$ we have $\psi(a x) =\psi(a)\psi(x)$. \end{enumerate} A morphism $\psi:X\to Y$ is called {\em non-degenerate} if $\psi(A)Y=Y$. We say that the $C^*$-operator system $(A,X)$ is unital, if $A$ is unital. \end{definition} \begin{example}\label{excstaros} \begin{enumerate} \item Clearly every classical operator system $X\subseteq \mathcal B(H)$ can be regarded as a unital $C^*$-operator system with respect to the $C^*$-subalgebra $A=\mathbb{C} 1\subseteq X$. In that case a nonegenerate morphism from $(\C1, X)$ to $(\C1, Y)$ is just a ucp map. \item If $(A,X)$ is a unital $C^*$-operator system on the Hilbert space $H$, then the unit of $A$ coincides with the identity operator on $H$ since $A$ acts non-degenerately on $H$. Hence $X$ is also a classical operator system. \item Every non-degenerate $C^*$-subalgebra $A\subseteq \mathcal B(H)$ gives rise to the $C^*$-operator system $(A,A)$ on $H$. \item Suppose that $(A,X)$ and $(B,Y)$ are $C^*$-operator systems on the Hilbert spaces $H$ and $K$, respectively. Then the norm-closed tensor product $X\otimes Y\subseteq \mathcal B(H\otimes K)$ contains the minimal tensor product $A\otimes B$ as a sub-$C^*$-algebra such that $(A\otimes B, X\otimes Y)$ becomes a $C^*$-operator system on $H\otimes K$. In particular, if $X\subseteq \mathcal B(H)$ is a classical unital operator system and $B\subseteq \mathcal B(K)$ is a $C^*$-algebra, then the minimal tensor product $X\otimes B$ has the structure of a $C^*$-operator system with $C^*$-subalgebra $B\cong \C1\otimes B\subseteq X\otimes B$. 
This example shows that $C^*$-operator systems do appear quite naturally! \end{enumerate} \end{example} \begin{definition}\label{def-rep} Let $(A,X)$ be a $C^*$-operator system on the Hilbert space $H$. A {\em representation} of $(A,X)$ on a Hilbert space $K$ is a ccp map $\pi: X\to \mathcal B(K)$ such that $\pi(ax)=\pi(a)\pi(x)$ for all $a\in A$ and $x\in X$ (in particular, $\pi|_A$ is a $*$-representation of $A$). The representation $\pi$ is called non-degenerate, if $\pi|_A:A\to \mathcal B(K)$ is non-degenerate. \end{definition} \begin{definition}\label{def-nuos} Suppose that $X\subseteq \mathcal B(H)$ is a self-adjoint norm-closed subset of $\mathcal B(H)$. A norm bounded net $(u_i)_{i\in I}$ of self-adjoint elements in $X$ is called an {\em approximate unit} for $X$ if for all $x\in X$, $i\in I$ we have $u_ix, xu_i\in X$ and $u_ix, xu_i\to x$ in the norm of $\mathcal B(H)$. \end{definition} \begin{lemma}\label{lem-approx} Suppose that $(A,X)$ is a $C^*$-operator system on $H$. Then every bounded self-adjoint approximate unit $(u_i)_{i\in I}$ of $A$ is an approximate unit for $X$ in the sense of the above definition. Moreover, $u_i\to 1_H$ $*$-strongly in $\mathcal B(H)$. \end{lemma} \begin{proof} The first assertion follows immediately from the requirement $AX=X$ for a $C^*$-operator system. The second assertion follows from $H=AH$. \end{proof} \begin{definition}\label{def-unitization} Suppose that $(A,X)$ is a $C^*$-operator system on some Hilbert space $H$. By a unitization of $(A,X)$ we understand a unital $C^*$-operator system $(\tilde{A},\tilde{X})$ on $H$ which contains $(A,X)$ such that the following are satisfied \begin{enumerate} \item $A$ is an ideal in $\tilde{A}$ and $A\tilde{X}\subseteq X$. \item If $x\in \tilde{X}$ such that $ax=0$ for all $a\in A$, then $ x=0$. \end{enumerate} \end{definition} It is well known that for a $C^*$-algebra $A$, the multiplier algebra $M(A)$ is the largest unitization of $A$. We shall now introduce an analogous construction for $C^*$-operator systems: \begin{lemma}\label{lem-multipliers} Suppose that $(A,X)$ is a $C^*$-operator system on the Hilbert space $H$. Let $M(A)=\{m\in \mathcal B(H): mA\cup Am\subseteq A\}$ be the realisation of the multiplier algebra of $A$ in $\mathcal B(H)$ and let $$M(X)=\{k\in \mathcal B(H): kA\cup Ak\subseteq X\}\subseteq \mathcal B(H).$$ Then $(M(A), M(X))$ is a unitization of $(A,X)$ in $\mathcal B(H)$. Moreover, if $\pi:X\to \mathcal B(K)$ is any non-degenerate ccp representation of $(A,X)$ on some Hilbert space $K$, then there exists a unique unital extension $\bar\pi:M(X)\to \mathcal B(K)$ of $\pi$ as a ccp representation of $(M(A), M(X))$ on $K$. Moreover, $\bar\pi$ is completely isometric iff $\pi$ is completely isometric. \end{lemma} \begin{notation}\label{not-multiplier} We call $(M(A), M(X))$ the {\em multiplier $C^*$-operator system} of $(A,X)$. Notice that the space $M(X)$ very much depends on the $C^*$-subalgebra $A\subseteq X$, so that a better notation would probably be to write $M_A(X)$ instead of $M(X)$. However, it will always be clear from the context with respect to which $C^*$-subalgebra $A\subseteq X$ the set $M(X)$ is defined, so in order to keep notation simple we stick to $M(X)$. \end{notation} \begin{proof}[Proof of Lemma \ref{lem-multipliers}] It is trivial to check that $(M(A), M(X))$ fulfils all properties of a unital $C^*$-operator system. Note that $M(A)X\subseteq X$ since $X=AX$ and hence $M(A)X=M(A)(AX)=(M(A)A)X=AX=X$ and similarly $XM(A)=X$. 
This easily implies that $M(A)M(X)\subseteq M(X)$. Moreover, if $k\in M(X)$ is such that $ak=0$ for all $a\in A$, then we also have $k^*a=0$ for all $a\in A$, hence $k^*(AH)=k^*H=\{0\}$, which then implies that $k^*=0$. But then $k=0$. Thus it follows that $(M(A), M(X))$ is a unitization of $(A,X)$. Suppose now that $\pi:X\to \mathcal B(K)$ is a non-degenerate ccp representation of $(A,X)$. Then $\pi(A)K=K$ and we define the extension $\bar\pi:M(X)\to \mathcal B(K)$ by $$\bar\pi(k)(\pi(a)\xi):=\pi(ka)\xi.$$ To see that this is well defined, let $(u_i)_{i\in I}$ be an approximate unit of $A$ consisting of positive elements of norm $\leq 1$. Then, if $\pi(a_1)\xi_1=\pi(a_2)\xi_2$ for some elements $a_1,a_2\in A$ and $\xi_1, \xi_2\in K$, we get \begin{align*} \pi(ka_1)\xi_1&=\lim_i\pi(ku_ia_1)\xi_1=\lim_i\pi(ku_i)\pi(a_1)\xi_1\\ &=\lim_i\pi(ku_i)\pi(a_2)\xi_2=\lim_i\pi(ku_ia_2)\xi_2=\pi(ka_2)\xi_2. \end{align*} This shows that $\bar\pi$ is well defined. We now need to check that $\bar\pi(mk)=\bar\pi(m)\bar\pi(k)$ for all $m\in M(A)$ and $k\in M(X)$. We first show that $\pi(ka)=\bar\pi(k)\pi(a)$ for all $k\in M(X), a\in A$. To see this, let $\eta\in K$. Then $\eta=\pi(b)\xi$ for some $b\in A, \xi\in K$. Then $$\pi(ka)\eta=\pi(kab)\xi=\bar\pi(k)\pi(ab)\xi=\bar\pi(k)\pi(a)\pi(b)\xi=\bar\pi(k)\pi(a)\eta.$$ Suppose now that $m\in M(A)$ and $k\in M(X)$. Then, for $\xi, \eta\in K$ and $a,b\in A$ we get \begin{align*} \langle \bar\pi(mk)\pi(a)\xi, \pi(b)\eta\rangle&= \langle \pi(b^*)\bar\pi(mk)\pi(a)\xi, \eta\rangle\\ &= \langle \pi(b^*)\pi(mka)\xi, \eta\rangle\\ &= \langle \pi(b^*mka)\xi, \eta\rangle\\ &= \langle \pi(b^*m)\pi(ka)\xi, \eta\rangle\\ &= \langle \pi(b^*)\bar\pi(m)\bar\pi(k)\pi(a)\xi, \eta\rangle\\ &= \langle \bar\pi(m)\bar\pi(k)\pi(a)\xi, \pi(b)\eta\rangle \end{align*} which then implies that $\bar\pi(mk)=\bar\pi(m)\bar\pi(k)$. We need to show that $\bar\pi:M(X)\to \mathcal B(K)$ is completely positive. Since it is unital, this will also imply that it is completely contractive. If $(A,X)$ is unital, then $M(X)=X$ and nothing has to be done. If $(A,X)$ is not unital, let $(u_i)_{i\in I}$ be an approximate unit of $A$ consisting of positive elements of norm $\leq 1$. Since $\pi(A)K=K$ it follows that $\pi(u_i)\to 1_K$ $*$-strongly in $\mathcal B(K)$. Then, if $m\in M(X)$, it follows that $\pi(u_imu_i)=\pi(u_i)\bar\pi(m)\pi(u_i)$ converges weakly to $\bar\pi(m)$ in $\mathcal B(K)$. Now let $m\in M_n(M(X))\subseteq \mathcal B(H^n)$ be any positive element. Let $v_i:=u_i \otimes I_n\in M_n(X)$. Then $v_imv_i$ is a positive element of $M_n(X)$ such that $\pi_n(v_imv_i)$ converges weakly to $\bar\pi_n(m)$ in $\mathcal B(K^n)$. Since weak limits of positive elements are positive, it follows that $\bar\pi_n(m)$ is positive. Finally assume that $\pi:X\to \mathcal B(K)$ is completely isometric and let $(\widetilde{A},\widetilde{X}):=(\pi(A),\pi(X))$ denote the image of $(A,X)$ in $\mathcal B(K)$. Then by the first part of this proof applied to the system $(\widetilde{A},\widetilde{X})$ the inverse $\pi^{-1}:\widetilde{X}\to \mathcal B(H)$ extends uniquely to a ccp representation $\bar{\pi}^{-1}:M(\widetilde X)\to {\mathcal B}(H)$. Since $\bar\pi^{-1}\circ \bar\pi|_X:X\to \mathcal B(H)$ coincides with the identity on $X$, it follows from the uniqueness assertion for the extension to $M(X)$ that $\bar\pi^{-1}\circ \bar\pi:M(X)\to \mathcal B(H)$ is the identity on $M(X)$. Similarly, $\bar\pi\circ \bar{\pi}^{-1}$ is the identity on $M(\widetilde{X})$. In particular, $\bar\pi:M(X)\to \mathcal B(K)$ is completely isometric.
\end{proof} The following lemma shows that $(M(A), M(X))$ is the largest unitization of $(A,X)$. \begin{lemma}\label{lem-largest-unit} Let $(A,X)$ be a $C^*$-operator system in $\mathcal B(H)$ and suppose that $(\tilde{A},\tilde{X})$ is a unitization of $(A,X)$ in $\mathcal B(H)$. Then $\tilde{A}\subseteq M(A)$ and $\tilde{X}\subseteq M(X)$. As a consequence, if $\pi:X\to \mathcal B(K)$ is a non-degenerate ccp representation of $(A,X)$ on a Hilbert space $K$, there exists a unique ccp representation $\tilde\pi:\tilde{X}\to \mathcal B(K)$ of $(\tilde{A},\tilde{X})$ which extends $\pi$. \end{lemma} \begin{proof} Clearly, if $(\tilde{A},\tilde{X})$ is a unitization of $(A,X)$ in $\mathcal B(H)$, then every $m\in \tilde{X}$ multiplies $A$ into $X$. Hence $\tilde{X}\subseteq M(X)$. Since $A$ is an ideal in $\tilde{A}$, we also have $\tilde{A}\subseteq M(A)$. By Lemma \ref{lem-multipliers} we know that $\pi$ extends uniquely to a representation $\bar\pi$ of $M(X)$. We then put $\tilde\pi:=\bar\pi|_{\tilde{X}}$. \end{proof} \begin{definition}\label{def-nondeg-morphism} Suppose that $(A,X)$ and $(B,Y)$ are $C^*$-operator systems. We say that $\varphi:X\to M(Y)$ is a (non-degenerate) {\em generalized morphism} from $(A,X)$ to $(B,Y)$ if the following hold: \begin{enumerate} \item $\varphi:X\to M(Y)$ is a morphism from $(A,X)$ to $(M(B), M(Y))$, and \item $\varphi(A)B=B$. \end{enumerate} \end{definition} Note that since $BY=Y$, the condition (2) also implies that $\varphi(A)Y=Y$. \begin{lemma}\label{lem-non-deg-morphism} Suppose that $(A,X)$ and $(B,Y)$ are $C^*$-operator systems. If $\varphi:X\to M(Y)$ is a non-degenerate generalized morphism from $(A,X)$ to $(B,Y)$, then there exists a unique extension $\bar\varphi:M(X)\to M(Y)$ of $\varphi$ as a morphism from $(M(A), M(X))$ to $(M(B), M(Y))$. In particular, if $\varphi:(A,X)\to (B,Y)$ is a completely positive and completely isometric isomorphism of $C^*$-operator systems, the same holds for the extension $\bar\varphi: M(X)\to M(Y)$. \end{lemma} \begin{proof} Assume that $(B,Y)$ and hence $(M(B), M(Y))$ are $C^*$-operator systems on the Hilbert space $K$. It follows then from the condition that $\varphi(A)B=B$ that $\varphi(A)K=\varphi(A)(BK)=BK=K$, so $\varphi:X\to \mathcal B(K)$ is a non-degenerate representation of $(A,X)$ on $K$. By Lemma \ref{lem-multipliers} we know that there is a unique ccp extension $\bar\varphi:M(X)\to \mathcal B(K)$. We then get $$\bar\varphi(M(X))B=\bar\varphi(M(X))\varphi(A)B=\varphi(M(X)A)B \subseteq \varphi(X)B\subseteq M(Y)B\subseteq Y,$$ hence $\bar\varphi(M(X))\subseteq M(Y)$. A similar argument shows that $\bar\varphi(M(A))\subseteq M(B)$. For the final statement assume that $\varphi:X\to Y$ is a completely isometric isomorphism of the $C^*$-operator systems $(A,X)$ and $(B,Y)$. Let $\bar\varphi:M(X)\to M(Y)$ and ${\bar\varphi}^{-1}:M(Y)\to M(X)$ denote the unique extensions of $\varphi$ and $\varphi^{-1}$ to $M(X)$ and $M(Y)$, respectively. Then ${\bar\varphi}^{-1}\circ \bar{\varphi}:M(X)\to M(X)$ extends the identity on $X$, and hence, by the uniqueness of the extension, must be equal to the identity map on $M(X)$. Similarly, $\bar{\varphi}\circ {\bar\varphi}^{-1}$ is the identity on $M(Y)$. \end{proof} \begin{corollary}\label{cor-isometry} Let $(A,X)$ be a $C^*$-operator system on $H$ and suppose that $\pi:X\to \mathcal B(K)$ is a completely isometric ccp representation of $(A,X)$ on $K$. Then the unique extension $\bar\pi:M(X)\to \mathcal B(K)$ is completely isometric as well.
The same holds for the extension $\tilde\pi:\tilde{X}\to \mathcal B(K)$ for any unitization $(\tilde{A},\tilde{X})$ of $(A,X)$. \end{corollary} \begin{proof} If $\pi:X\to \mathcal B(K)$ is completely isometric, then $(\pi(A),\pi(X))$ is a $C^*$-operator system in $\mathcal B(K)$ and $\pi:X\to \pi(X)$ is a completely isometric isomorphism of $C^*$-operator systems. Thus the unique extension $\bar\pi:M(X)\to M(\pi(X))\subseteq \mathcal B(K)$ is completely isometric by Lemma \ref{lem-non-deg-morphism}. \end{proof} Of course there is also a smallest unitization of $(A,X)$: \begin{definition}\label{def-adjoining-unit} Suppose that $(A,X)$ is a $C^*$-operator system on a Hilbert space $H$. Let $X^1=X+\C1_H$ and $A^1=A+\C1_H\subseteq X^1$. Then $(A^1, X^1)$ is the smallest unitization of $(A,X)$ in $\mathcal B(H)$. We call it the {\em minimal} unitization of $(A,X)$. \end{definition} \begin{remark}\label{rem-extension} Of course, if $\pi:X\to \mathcal B(K)$ is any ccp representation of $(A,X)$ on a Hilbert space $K$, then the unique extension $\pi^1:X^1\to \mathcal B(K)$ is given by $\pi^1(x+\lambda 1_H)=\pi(x)+\lambda 1_K$. By Corollary \ref{cor-isometry}, if $\pi$ is completely isomertric, then $\pi^1$ is completely isometric as well. \end{remark} \section{$C^*$-hulls of $C^*$-operator systems}\label{sec-univ-env} If $(A,X)$ is a $C^*$-operator system, $C$ is a $C^*$-algebra, and $j:X\to C$ is a completely positive complete isometry such that $j(ax)=j(a)j(x)$ for all $a\in A, x\in X$ and such $X$ generates $C$ as a $C^*$-algebra, then the pair $(C, j)$ is called a $C^*$-hull of $(A,X)$. Two $C^*$-hulls $(C,j)$ and $(C', j')$ of $(A,X)$ are called equivalent, if there exists a $*$-isomorphism $\varphi: C\to C'$ such that $\varphi\circ j=j'$. In what follows below we want to show that for any $C^*$-operator system $(A,X)$ there exist $C^*$-hulls $(C_u^*(A,X), j_u)$ and $(C_{\operatorname{env}}^*(A,X), j_{\operatorname{env}})$ such that for any given $C^*$-hull $(C,j)$ of $(A,X)$ there exist unique surjective $*$-homomorphisms $$C_u^*(A,X)\stackrel{\varphi_u}{\twoheadrightarrow} C\stackrel{\varphi_{\operatorname{env}}}{\twoheadrightarrow} C_{\operatorname{env}}^*(A,X)$$ such that $\varphi_u\circ j_u=j$ and $\varphi_{\operatorname{env}}\circ j=j_{\operatorname{env}}$. It follows directly from these universal properties of $(C_u^*(A,X), j_u)$ and $(C_{\operatorname{env}}^*(A,X), j_{\operatorname{env}})$ that they are unique up to equivalence (if they exist). We call $(C_u^*(A,X), j_u)$ the universal $C^*$-hull of $(A,X)$ and we call $(C_{\operatorname{env}}^*(A,X), j_{\operatorname{env}})$ the enveloping $C^*$-algebra of $(A,X)$. Of course, the above notion of the universal $C^*$-hull of a $C^*$-operator system extends the notion of the universal $C^*$-hull of a classical operator system $X$ as introduced by Kirchberg and Wassermann in \cite{KW} and the notion of the $C^*$-envelope extends the well-known notion of a $C^*$-envelope of an operator system due to Hamana \cite{Hamana1}. Using the ideas of Kirchberg and Wassermann, we now construct the universal $C^*$-hull $(C_u^*(A,X), j_u)$. We need: \begin{definition}\label{def-Agenerated} Suppose that $(A,X)$ is a $C^*$-operator system. A representation $\pi:X\to \mathcal B(K)$ is called {\em finitely $A$-generated}, if there exists a finite subset $\{\xi_1,\ldots, \xi_l\}$ of $K$ such that $K=\overline{\operatorname{span}}\{\pi(A)\xi_1,\ldots, \pi(A)\xi_l\}$. 
\end{definition} If $\kappa$ is the cardinality of a dense subset of $A$, then every finitely $A$-generated representation of $(A,X)$ can be regarded, up to unitary equivalence, as a representation on a closed subspace of $\ell^2(I_\kappa)$, where $I_\kappa$ is a fixed set with cardinality $\kappa$. \begin{theorem}\label{thm-universal} For every $C^*$-operator system $(A,X)$ there exists a universal hull $(C_u^*(A,X), j_u)$ for $(A,X)$. \end{theorem} \begin{proof} Let $\kappa$ denote the cardinality of a dense subset of $A$ and let $S$ denote the set of all non-degenerate finitely $A$-generated ccp representations $\pi: X\to \mathcal B(H_\pi)$ where $H_\pi$ is a closed subspace of $\ell^2(I_\kappa)$. Write $H_S=\bigoplus_{\pi\in S} H_\pi$ and $\pi_S=\bigoplus_{\pi\in S}\pi$. We claim that $\pi_S: X\to \mathcal B(H_S)$ is a completely isometric representation of $(A,X)$. For this let us assume that $(A,X)$ is represented as a concrete $C^*$-operator system on the Hilbert space $H$. Then for each fixed $n\in \mathbb{N}$, $x\in M_n(X)$, and $\varepsilon>0$ we choose a finite rank projection $p\in \mathcal B(H)$ such that $$\|(p\otimes 1_n) x (p\otimes 1_n)\|\geq \|x\|-\varepsilon.$$ Let $H_{x,\varepsilon}:=\overline{\operatorname{span}}\{apH:a\in A\}$ and let $q:H\to H_{x,\varepsilon}$ denote the orthogonal projection. Define $$\pi_{x,\varepsilon}: X\to \mathcal B(H_{x,\varepsilon}); \pi_{x,\varepsilon}(y):=qy q$$ for all $y\in X$. Since $H_{x,\varepsilon}$ is an $A$-invariant subspace of $H$, we see that $q$ commutes with the elements of $A$, hence $$\pi_{x,\varepsilon}(ay)=qa y q=qaqyq=\pi_{x,\varepsilon}(a)\pi_{x,\varepsilon}(y)$$ for all $a\in A$, $y\in X$, so $\pi_{x,\varepsilon}$ is a ccp representation of $(A,X)$ on $H_{x,\varepsilon}$. By construction, $\pi_{x,\varepsilon}$ is finitely $A$-generated and $\|\pi_{x,\varepsilon,n}(x)\|\geq \|x\|-\varepsilon$. By choosing an isometric embedding of $H_{x,\varepsilon}$ into $\ell^2(I_{\kappa})$ we may assume that $\pi_{x,\varepsilon}\in S$. Since $\varepsilon$ is arbitrary, it follows now that $\pi_S$ is completely isometric. We now define $C_u^*(A,X)$ as the $C^*$-subalgebra of $\mathcal B(H_S)$ generated by $\pi_S(X)$ and $j_u=\pi_S: X\to C_u^*(A,X)\subseteq \mathcal B(H_S)$. We then have $$C_u^*(A,X)\subseteq \prod_{\pi\in S} \mathcal B(H_\pi)\subseteq \mathcal B(H_S).$$ Suppose now that $(C, j)$ is an arbitrary $C^*$-hull of $(A,X)$. We may assume that $C$ is realised as a non-degenerate subalgebra $C\subseteq \mathcal B(K)$ for some Hilbert space $K$. Let $C^1=C+\C1_K\subseteq \mathcal B(K)$ be the unitization of $C$. Applying the above construction to each element $c\in M_n(C^1)$ yields a family of completely positive maps $\rho_{c,\varepsilon}:C^1\to \mathcal B(H_{\rho_{c,\varepsilon}})$ such that $\rho_{c,\varepsilon}\circ j:X\to \mathcal B(H_{\rho_{c,\varepsilon}})$ is a finitely $A$-generated representation of $(A,X)$ and such that $\|\rho_{c,\varepsilon,n}(c)\|\geq \|c\|-\varepsilon$. Let $S'$ denote the set of all such maps $\rho_{c,\varepsilon}$. Then $\rho_{S'}=\bigoplus_{\rho\in S'}\rho:C^1\to \mathcal B(\bigoplus_{\rho\in S'} H_\rho)$ is a unital completely isometric map from $C^1$ into $\prod_{\rho\in S'} \mathcal B(H_\rho)$. By \cite[Theorem 4.1]{CE2} there exists a unique unital $*$-homomorphism $\varphi: C^*(\rho_{S'}(C^1))\to C^1$ such that $\varphi\circ \rho_{S'}=\operatorname{id}_{C^1}$. 
Since for each $\rho\in S'$, $\rho\circ j:X\to \mathcal B(H_\rho)$ is a finitely $A$-generated representation of $(A,X)$, we may identify $\rho\circ j$ with an element of $S$ via a suitable embedding of $H_\rho\hookrightarrow \ell^2(I_\kappa)$. We then obtain a map $S'\to S; \rho\mapsto \rho\circ j$ and a $*$-homomorphism $\Phi: \prod_{\pi\in S} B(H_\pi)\to \prod_{\rho\in S'}\mathcal B(H_\rho)$ by sending a tuple $(T_\pi)_{\pi\in S}$ to $(T_{\rho\circ j})_{\rho\in S'}$. The restriction of $\Phi$ to $C_u^*(A,X)\subseteq \prod_{\pi\in S}\mathcal B(H_\pi)$ sends $C_u^*(A,X)$ to $C^*(\rho_{S'}(j(X)))\subseteq C^*(\rho_{S'}(C^1))$. Thus we get a composition of $*$-homomorphisms $$ \begin{CD} C_u^*(A,X) @>\Phi>> C^*(\rho_{S'}(j(X))) @>\varphi>> C \end{CD} $$ such that for $\varphi_u:=\varphi\circ \Phi$ we get $\varphi_u\circ j_u=j$. \end{proof} \begin{lemma}\label{lem-universal} Suppose that $\pi:X\to \mathcal B(K)$ is a ccp representation of the $C^*$-operator system $(A,X)$. Then there exists a unique $*$-homomorphism $\tilde\pi:C_u^*(A,X)\twoheadrightarrow C^*(\pi(X))\subseteq \mathcal B(K)$ such that $\tilde\pi\circ j_u=\pi$, where $C^*(\pi(X))$ denotes the closed $C^*$-subalgebra of $\mathcal B(K)$ generated by $\pi(X)$. \end{lemma} \begin{proof} Suppose that $(A,X)$ is a concrete $C^*$-operator system on the Hilbert space $H$ and assume that $\iota:X\hookrightarrow \mathcal B(H)$ is the inclusion map. Then $\iota\oplus \pi: X\to \mathcal B(H\oplus K)$ is a completely isometric representation and therefore there exists a unique $*$-homomorphism $\widetilde{\iota\oplus\pi}:C_u^*(A,X)\to C^*(\iota\oplus\pi(X))\subseteq \mathcal B(H\oplus K)$. As $\iota\oplus\pi(X)\subseteq X\oplus \pi(X)\subseteq \mathcal B(H)\oplus \mathcal B(K)$, we obtain a well defined $*$-homomorphism $C^*(\iota\oplus \pi(X))\to C^*(\pi(X))$ given by $T\mapsto P_KTP_K$, where $P_K:H\oplus K\to K$ denotes the orthogonal projection. Thus $\tilde\pi=P_K \widetilde{\iota\oplus\pi}(\cdot)P_K$ will do the job. The uniqueness follows from the fact that $C_u^*(A,X)$ is generated by $j_u(X)$. \end{proof} At this point it is convenient to consider representations of $C^*$-operator systems on multiplier algebras: \begin{definition}\label{def-rep-multiplier} Suppose that $(A,X)$ is a $C^*$-operator system and let $D$ be a $C^*$-algebra. A representation of $(A,X)$ into the multiplier algebra $M(D)$ is a ccp map $\Phi: X\to M(D)$ such that $\Phi(ax)=\Phi(a)\Phi(x)$ for all $a\in A, x\in X$. We then say that $\Phi$ is {\em non-degenerate} if the restriction of $\Phi$ to $A$ is a non-degenerate $*$-homomorphism, i.e., if $\Phi(A)D=D$. \footnote{Note that by Cohen's factorization theorem to have $\Phi(A)D=D$ it suffices to have that $\overline{\operatorname{span}}\{\Phi(A)D\}=D$.} \end{definition} \begin{remark}\label{rem-rep-multiplier} Note that every (non-degenerate) representation of a $C^*$-operator system $(A,X)$ on a Hilbert space $H$ can be regarded as a (non-degenerate) representation into $M(\mathcal K(H))=\mathcal B(H)$. Conversely, if $\Phi:X\to M(D)$ is a representation of $(A,X)$ in $M(D)$ and if $D$ (and hence $M(D)$) is represented faithfully on the Hilbert space $H$, then $\Phi$ can also be regarded as a representation of $(A,X)$ on $H$ which is non-degenerate if and only if $\Phi:X\to M(D)$ is non-degenerate. But it is often more convenient to work with representations into $M(D)$. \end{remark} We now get \begin{proposition}\label{prop-universal} Let $(A,X)$ be a $C^*$-operator system and let $D$ be a $C^*$-algebra.
Then there is a one-to-one correspondence between \begin{enumerate} \item The non-degenerate representations of $(A,X)$ into $M(D)$. \item The non-degenerate $*$-homomorphisms of $C_u^*(A,X)$ into $M(D)$. \end{enumerate} If $\Phi: C_u^*(A,X)\to M(D)$ is as in (2), then the restriction $\Phi_X=\Phi\circ j_u:X\to M(D)$ gives the corresponding representation of $(A,X)$ as in (1). \end{proposition} \begin{proof} It clearly suffices to show that every non-degenerate representation of $(A,X)$ into $M(D)$ extends to a representation of $C_u^*(A,X)$. But representing $C_u^*(A,X)$ faithfully on a Hilbert space $H_u$, say, this follows easily from Lemma \ref{lem-universal}. \end{proof} We now proceed with a discussion of the enveloping $C^*$-hull for $(A,X)$. For this recall that an operator space $V$ is {\em injective} if, given operator spaces $W_{1}\subseteq W_{2}$, any completely bounded linear map $\varphi_{1}:W_{1}\rightarrow V$ can be extended to a completely bounded linear map $\varphi_{2}:W_{2}\rightarrow V$ with $\|\varphi_{2}\|_{cb}=\|\varphi_{1}\|_{cb}.$ The algebra $\mathcal B(H)$ is known to be an injective operator space \cite{Wittstock}. Hamana in \cite{Hamana1, Hamana2} and Ruan in \cite{Ruan} independently showed that for any operator space $V$ in $\mathcal B(H)$, there is a unique minimal injective operator subspace $I(V)$ of $\mathcal B(H)$ containing $V$. It is called the injective envelope of $V$ and enjoys the following fundamental property, which we shall use heavily throughout this paper (e.g., see \cite[\S5]{Ruan}): \begin{proposition}\label{prop-unique} Let $V\subseteq \mathcal B(H)$ be an operator space. Then every completely contractive map $\psi:I(V)\to I(V)$ which restricts to the identity on $V$ is the identity on $I(V)$. \end{proposition} We need the following result of Choi-Effros \cite{CE} (see \S6 in \cite{Effros} and particularly \cite[Theorem 6.1.3]{Effros}). \begin{theorem}\label{thm-product} If $I \subseteq \mathcal B(H)$ is an injective operator system, then there is a unique multiplication on $I$ making $I$ a unital $C^*$-algebra with its given $*$-operation and norm and with identity $\boldsymbol{1}_H$. The multiplication is given by $$x \cdot_{\varphi} y = \varphi(xy),$$ where $\varphi:\mathcal B(H)\to I$ is any fixed ccp projection onto $I$. \end{theorem} Using these results we now show \begin{proposition}\label{prop-envelope} Suppose that $(A,X)$ is a $C^*$-operator system. Then there exists an enveloping $C^*$-hull $(C^*_{\operatorname{env}}(A,X), j_{\operatorname{env}})$ of $(A,X)$. \end{proposition} \begin{proof} Suppose that $(A,X)$ is a $C^*$-operator system on $H$. Let $(A^1, X^1)$ be the minimal unitization of $(A,X)$ as in Definition \ref{def-adjoining-unit}. By Theorem \ref{thm-product} the injective envelope $I(X^1)$ of the unital operator system $X^1$ is a unital $C^*$-algebra with multiplication $x\cdot_{\varphi} y=\varphi(xy)$ for some fixed ccp projection $\varphi:\mathcal B(H)\to I(X^1)$ onto $I(X^1)$. Now, for each $a\in A$ and $x\in X$ we have $ax\in X$ and therefore $a\cdot_\varphi x=\varphi(ax)=ax$. Therefore the inclusion map $X\hookrightarrow \mathcal B(H)$ induces a completely isometric embedding $j:X\to I(X^1)$ such that $j(ax)=j(a)j(x)$ for all $a\in A, x\in X$. Define $C_{\operatorname{env}}^*(A,X)$ to be the $C^*$-subalgebra of $I(X^1)$ generated by $j(X)$ and we let $j_{\operatorname{env}}=j:X\hookrightarrow C_{\operatorname{env}}^*(A,X)$ denote the inclusion map.
Note that by construction, the unitization $C_{\operatorname{env}}^*(A,X)^1$ is just the enveloping $C^*$-algebra $C^*_{\operatorname{env}}(X^1)$ of the unital operator system $X^1$ in the sense of Hamana \cite{Hamana1}. To see that $(C_{\operatorname{env}}^*(A,X), j_{\operatorname{env}})$ satisfies the universal property let $(C,j)$ be any given $C^*$-hull of $(A,X)$. Choose a non-degenerate embedding $C\hookrightarrow \mathcal B(K)$ for some Hilbert space $K$ and let $C^1=C+\C1_K\subseteq \mathcal B(K)$. Then $j^1:X^1\to C^1$ is a completely isometric embedding of the operator system $X^1$. It follows therefore from the universal property of the enveloping $C^*$-algebra $C^*_{\operatorname{env}}(X^1)$ (see \cite{Hamana1}) that there exists a $*$-homomorphism $\varphi:C^1\to C^*_{\operatorname{env}}(X^1)=C_{\operatorname{env}}^*(A,X)^1$ which intertwines the inclusions of $X^1$ into these algebras. Restricting $\varphi$ to $C\subseteq C^1$ then gives the desired $*$-homomorphism $\varphi_{\operatorname{env}}:C\to C_{\operatorname{env}}^*(A,X)$. \end{proof} We close this section with the following useful result: \begin{lemma}\label{lem mult univ} Let $(C, j)$ be any $C^*$-hull of the $C^*$-operator system $(A,X)$. Then the inclusion map $j: (A,X)\to C$ extends to a completely isometric inclusion $$\bar{j}: \big( M(A), M(X)\big)\to M(C).$$ Moreover, we have $\bar{j}(M(A))\cap C=j(A)$. \end{lemma} \begin{proof} Since $A$ contains an approximate identity of $X$, and since $C$ is generated by $j(X)$ as a $C^*$-algebra, it follows that $j(A)$ contains an approximate identity of $C$. It follows that $j:(A,X)\to C\subseteq M(C)$ is a completely isometric non-degenerate (generalized) morphism, where we identify $C$ with the $C^*$-operator system $(C,C)$. The first assertion then follows from Lemma \ref{lem-non-deg-morphism}. To see the second assertion let $c\in C$ such that $ j(A) c\subseteq j(A)\subseteq M(C)$. Let $(a_i)_{i\in I}$ be an approximate unit in $A$. Then $(a_i)_{i\in I}$ is also an approximate unit in $X$, and $(j(a_i))_{i\in I}$ is an approximate unit in $C$. But then it follows that $c=\lim_i j(a_i) c\in j(A)$. \end{proof} \section{Tensor products}\label{sec-tensor} In this section we want to give a brief discussion on certain tensor product constructions of $C^*$-operator systems. In particular we want to discuss analogues of the commuting tensor product $\mathcal S\otimes_c\mathcal T$ of two operator systems $\mathcal S$ and $\mathcal T$ as introduced in \cite{KPTT} and of the minimal (or spatial) tensor product. \begin{definition}\label{def-universal-tensor} Suppose that $(A,X)$ and $(B,Y)$ are $C^*$-operator systems. Let $A\otimes_c B$ and $X\otimes_cY$ denote the closures of the algebraic tensor products $A\odot B$ and $X\odot Y$ inside the maximal $C^*$-tensor product $C_u^*(A,X)\otimes_{\max}C_u^*(B,Y)$, respectively. Then $(A\otimes_c B, X\otimes_c Y)$ is a $C^*$-operator system which we call the {\em commuting universal tensor product} of $(A,X)$ with $(B,Y)$. \end{definition} \begin{lemma}\label{lem-factors} There are unique completely isometric generalized morphisms $i_X:X\to M(X\otimes_cY)$ and $i_Y:Y\to M(X\otimes_cY)$ such that $i_X(x)i_Y(y)=x\otimes y\in X\otimes_cY$ for all $x\in X, y\in Y$. \end{lemma} \begin{proof} Write $D:=C_u^*(A,X)\otimes_{\max}C_u^*(B,Y)$ and assume that $D$ is represented faithfully and non-degenerately on a Hilbert space $K$, say.
By the properties of the maximal tensor product of $C^*$-algebras, there are isometric $*$-homomorphisms $i_{C_u^*(A,X)}:C_u^*(A,X)\to M(D)$ and $i_{C_u^*(B,Y)}:C_u^*(B,Y)\to M(D)$ such that $i_{C_u^*(A,X)}(c)i_{C_u^*(B,Y)}(d)=c\otimes d$ for all $c\in C_u^*(A,X), d\in C_u^*(B,Y)$. Let $i_X$ and $i_Y$ denote the restrictions of $i_{C_u^*(A,X)}$ and $i_{C_u^*(B,Y)}$ to $X$ and $Y$, respectively. Then $i_X$ and $i_Y$ are completely isometric representations of $(A,X)$ and $(B,Y)$ into $M(D)\subseteq \mathcal B(K)$ such that $i_X(x)i_Y(y)=x\otimes y$ for all $x\in X, y\in Y$, if we regard the algebraic tensor product $X\odot Y$ as a subspace of $X\otimes_cY$. So all we need to check is that $i_X$ and $i_Y$ have image in $M(X\otimes_cY)$, which follows easily from $i_X(x)(a\otimes b)=i_X(x)i_X(a)i_Y(b)=i_X(xa)i_Y(b) =xa\otimes b\in X\otimes_cB$, hence $i_X(X)(A\otimes_cB)\subseteq X\otimes_cB$ and, similarly, $i_Y(Y)(A\otimes_cB)\subseteq A\otimes_cY$, where $A\otimes_cB, A\otimes_cY, X\otimes_cB$ are defined as the closures of the respective algebraic tensor products in $X\otimes_cY$. \end{proof} \begin{lemma}\label{lem-tensor} The tensor product $(A\otimes_cB, X\otimes_c Y)$ has the following universal property: whenever $(\varphi_X, \varphi_Y)$ is a pair of non-degenerate ccp representations $\varphi_X:X\to M(D), \varphi_Y:Y\to M(D)$ of $(A,X)$ and $(B,Y)$ into the multiplier algebra $M(D)$ for some $C^*$-algebra $D$ such that $\varphi_X(x)\varphi_Y(y)=\varphi_Y(y)\varphi_X(x)$ for all $x\in X$ and $y\in Y$, then there exists a unique ccp representation $\varphi=\varphi_X\rtimes\varphi_Y:X\otimes_cY\to M(D)$ of $(A\otimes_cB, X\otimes_c Y)$ such that $$\varphi(x\otimes y)=\varphi_X(x)\varphi_Y(y)$$ for all $x\in X, y\in Y$. \end{lemma} \begin{remark}\label{rem-rep} If $H$ is a Hilbert space and $D=\mathcal K(H)$, we obtain a version of the above lemma for non-degenerate ccp representations on a Hilbert space. \end{remark} \begin{proof}[Proof of Lemma \ref{lem-tensor}] Represent $D$ faithfully and non-degenerately on a Hilbert space $H$, so that $M(D)\subseteq \mathcal B(H)$. It follows from Proposition \ref{prop-universal} that there exist unique $*$-homomorphisms $\tilde\varphi_X:C_u^*(A,X)\to \mathcal B(H)$ and $\tilde\varphi_Y:C_u^*(B,Y)\to \mathcal B(H)$ such that $\tilde\varphi_X\circ j_u=\varphi_X$ and $\tilde\varphi_Y\circ j_u=\varphi_Y$. Since $\varphi_X(x)$ commutes with $\varphi_Y(y)$ for all $x\in X, y\in Y$, and since $C_u^*(A,X)$ and $C_u^*(B,Y)$ are generated by $j_u(X)$ and $j_u(Y)$, respectively, it follows that the ranges of $\tilde\varphi_X$ and $\tilde\varphi_Y$ commute as well. Therefore, by the universal property of the maximal tensor product, there exists a (unique) $*$-homomorphism $\tilde\varphi:C_u^*(A,X)\otimes_{\max}C_u^*(B,Y)\to \mathcal B(H)$ such that $\tilde\varphi(c\otimes d)=\tilde\varphi_X(c)\tilde\varphi_Y(d)$ for all $c\in C_u^*(A,X)$ and $d\in C_u^*(B,Y)$. Let $\varphi:=\tilde\varphi|_{X\otimes_cY}$; then $\varphi(x\otimes y)=\varphi_X(x)\varphi_Y(y)$ for all $x\in X, y\in Y$, and it is easily checked on elementary tensors that $\varphi(cz)=\varphi(c)\varphi(z)$ for all $c\in A\otimes_cB$ and $z\in X\otimes_cY$. \end{proof} \begin{remark}\label{rem-tensor} It is an interesting question, whether there exists a converse of the above lemma, i.e., whether every non-degenerate ccp representation $\pi:X\otimes_cY\to \mathcal B(K)$ can be realised as $\pi=\pi_X\rtimes\pi_Y$ for a pair of representations $(\pi_X,\pi_Y)$ as in the lemma. Indeed, this is only true if the representation $\pi$ preserves some more of the multiplicativity structure of $X\otimes_cY$, which is not directly part of the structure of $(A\otimes_cB, X\otimes_cY)$ as a $C^*$-operator system.
Realised as a subspace of $D:=C_u^*(A,X)\otimes_{\max}C_u^*(B,Y)$, we see that an elementary tensor $x\otimes y$ can be written as a product $i_X(x)i_Y(y)=i_Y(y)i_X(x)$, where $i_X, i_Y$ are the canonical inclusions of $X$ and $Y$ into $M(X\otimes_cY)$ as in Lemma \ref{lem-factors}. The representations constructed in Lemma \ref{lem-tensor} are precisely those whose extension $\bar\pi$ to $M(X\otimes_cY)$ preserves these relations: If it does, then $\pi_X=\bar\pi\circ i_X$ and $\pi_Y=\bar\pi\circ i_Y$ satisfy the conditions of the lemma such that $\pi=\pi_X\rtimes\pi_Y$. But we believe that a general ccp representation $\pi:X\otimes_cY\to \mathcal B(K)$ does not need to satisfy these relations. However, as we see below, it does if $Y=B$ is a $C^*$-algebra. \end{remark} \begin{lemma}\label{lem-tensor-$C^*$-algebra} Suppose that $(A,X)$ is a $C^*$-operator system and $B$ is a $C^*$-algebra (viewed as the $C^*$-operator system $(B,B)$). Let $\varphi: X\otimes_cB\to M(D)$ be any non-degenerate ccp representation of $(A\otimes_cB, X\otimes_cB)$ for some $C^*$-algebra $D$. Then there is a unique ccp representation $\varphi_X:X\to M(D)$ and a $*$-homomorphism $\varphi_B:B\to M(D)$ such that $\varphi=\varphi_X\rtimes \varphi_B$. A similar statement holds for the tensor product $(B\otimes_cA, B\otimes_cX)$. \end{lemma} \begin{proof} Let $\varphi_X=\bar\varphi\circ i_X$ and $\varphi_B=\bar\varphi\circ i_B$ as in the above remark. Note that $i_X$ maps $X$ into $M(X\otimes_cB)$ but $i_B$ maps $B$ into $M(A\otimes_cB)\subseteq M(X\otimes_cB)$, since $(a\otimes b) i_B(c)=a\otimes bc\in A\otimes_cB$ for all $a\otimes b\in A\odot B$. Since the extension $\bar\varphi:M(X\otimes_cB)\to M(D)$ of Lemma \ref{lem-non-deg-morphism} is a unital ccp representation of the $C^*$-operator system $(M(A\otimes_cB), M(X\otimes_cB))$, we get $$\varphi_X(x)\varphi_B(b)=\bar\varphi(i_X(x))\bar\varphi(i_B(b))=\bar\varphi(i_X(x)i_B(b))=\bar\varphi(x\otimes b)$$ and similarly $\varphi_B(b)\varphi_X(x)=\bar\varphi(x\otimes b)$. It is then clear that $\varphi=\varphi_X\rtimes\varphi_B$ as in Lemma \ref{lem-tensor}. \end{proof} \begin{definition}\label{def-spacial-tensor} Suppose that $(A,X)$ and $(B, Y)$ are concrete $C^*$-operator systems on the Hilbert spaces $H$ and $K$, respectively. Then we define the spatial tensor product $(A\check\otimes B, X\check\otimes Y)$ via the closures of the algebraic tensor products $A\odot B$ and $X\odot Y$ in $B(H\otimes K)$. \end{definition} It is well known that the spatial tensor product does not depend, up to isomorphism, on the particular embeddings of $X$ in $\mathcal B(H)$ and $Y$ in $\mathcal B(K)$. Let $i_X^s:X\to B(H\otimes K); i_X^s(x)=x\otimes 1_K$ and $i_Y^s:Y\to B(H\otimes K); i_Y^s(y)=1_H\otimes y$ denote the canonical embeddings of $X$ and $Y$ into $B(H\otimes K)$. It then follows from Lemma \ref{lem-tensor} that there exists a canonical surjective morphism $$\Phi:=i_X^s\rtimes i_Y^s: X\otimes_cY\to X\check\otimes Y $$ from $(A\otimes_cB, X\otimes_cY)$ onto $(A\check\otimes B, X\check\otimes Y)$. The following proposition is now an easy consequence of our constructions: \begin{proposition}\label{prop-nuclear} Suppose that $(A,X)$ is a $C^*$-operator system. Then for any nuclear $C^*$-algebra $B$, the canonical morphism from $(A\otimes_cB, X\otimes_cB)$ onto $(A\check\otimes B, X\check\otimes B)$ is an isomorphism (and similarly for $(B\otimes_cA, B\otimes_cX)$). \end{proposition} \begin{proof} If $B$ is nuclear, then $C_u^*(A,X)\otimes_{\max}B=C_u^*(A,X)\check\otimes B$.
The result then follows from representing $C_u^*(A,X)$ (and hence $X$) faithfully on a Hilbert space $H$. \end{proof} For later use we also need to consider morphisms into the multiplier $C^*$-operator systems of tensor products of $C^*$-operator systems with $C^*$-algebras. This is the special case of the above constructions if one of the factors is a pair $(C,C)$ for a $C^*$-algebra $C$. Note that in this case we also have $C_u^*(C,C)=C$. \begin{lemma}\label{lem-tensormor} Suppose that $(A,X)$ and $(B,Y)$ are $C^*$-operator systems and let $C$ and $D$ be $C^*$-algebras. Let $\varphi_X:X\to M(Y)$ be a non-degenerate generalized homomorphism from $(A,X)$ into $(M(B), M(Y))$ and let $\varphi_C:C\to M(D)$ be a non-degenerate generalized homomorphism from $C$ to $D$. Then there exists a unique non-degenerate generalized homomorphism $$\varphi_X\otimes_c\varphi_C: X\otimes_cC\to M(Y\otimes_cD)$$ (resp. $\varphi_X\check\otimes\varphi_C: X\check\otimes C\to M(Y\check\otimes D)$) such that $\varphi_X\otimes_c\varphi_C(x\otimes c)=\varphi_X(x)\otimes\varphi_C(c)$ (resp. $\varphi_X\check\otimes\varphi_C(x\otimes c)=\varphi_X(x)\otimes\varphi_C(c)$) for all elementary tensors $x\otimes c\in X\odot C$. \end{lemma} \begin{proof} Let $\pi:Y\otimes_cD\to \mathcal B(H)$ be a non-degenerate completely isometric representation of $(B\otimes_cD, Y\otimes_cD)$ on the Hilbert space $H$. By Lemma \ref{lem-tensor-$C^*$-algebra} there are non-degenerate representations $\pi_Y:Y\to \mathcal B(H)$ and $\pi_D:D\to \mathcal B(H)$ such that $\pi=\pi_Y\rtimes\pi_D$. Let $\psi_X:=\bar\pi_Y\circ \varphi_X$ and $\psi_C:=\bar\pi_D\circ \varphi_C$, where $\bar\pi_Y$ and $\bar\pi_D$ denote the unique extensions of $\pi_Y, \pi_D$ to $M(Y)$ and $M(D)$ as in Lemma \ref{lem-non-deg-morphism}. Note that for all $m\in M(Y), n\in M(D)$ we have $$\bar\pi_Y(m)\bar\pi_D(n)= \bar\pi_D(n)\bar\pi_Y(m).$$ Indeed, this follows from the fact that $\pi(B\odot D)H$ is dense in $H$ (since $\pi$ is non-degenerate) and for all $b\in B$ and $d\in D$, we have \begin{align*} \bar\pi_Y(m)\bar\pi_D(n)\pi(b\otimes d)&= \bar\pi_Y(m)\bar\pi_D(n)\pi_Y(b)\pi_D(d)=\bar\pi_Y(m)\bar\pi_D(n)\pi_D(d)\pi_Y(b)\\ &=\bar\pi_Y(m)\pi_D(nd)\pi_Y(b)=\bar\pi_Y(m)\pi_Y(b)\pi_D(nd)\\ &=\pi_Y(mb)\pi_D(nd) =\pi_D(nd)\pi_Y(mb)\\ &=\bar\pi_D(n)\pi_D(d)\pi_Y(mb)=\bar\pi_D(n)\pi_Y(mb)\pi_D(d)\\ &=\bar\pi_D(n)\bar\pi_Y(m)\pi_Y(b)\pi_D(d)=\bar\pi_D(n)\bar\pi_Y(m)\pi(b\otimes d). \end{align*} It follows from this that $\psi_X(x)\psi_C(c)=\psi_C(c)\psi_X(x)$ for all $x\in X, c\in C$. Thus, it follows from Lemma \ref{lem-tensor} that there exists a unique non-degenerate ccp representation $\psi:=\psi_X\rtimes\psi_C: X\otimes_cC\to \mathcal B(H)$ given on elementary tensors by $\psi(x\otimes c)=\psi_X(x)\psi_C(c)$. Now, by the construction of the multiplier system as in Lemma \ref{lem-multipliers} we may identify $M(Y\otimes_cD)$ with its image $\bar\pi(M(Y\otimes_cD)) \subseteq \mathcal B(H)$. Using this identification, we want to check that $\psi$ takes values in $$M(Y\otimes_cD)\cong\{m\in \mathcal B(H): m \pi(B\otimes_cD),\ \pi(B\otimes_cD) m\subseteq \pi(Y\otimes_{c}D)\}.$$ For this let $x\otimes c\in X\odot C$ be any elementary tensor and let $b\otimes d\in B\odot D$. Then \begin{align*} \psi(x\otimes c)\pi(b\otimes d)&=\bar\pi_Y(\varphi_X(x))\bar\pi_D(\varphi_C(c))\pi_Y(b)\pi_D(d)\\ &=\bar\pi_Y(\varphi_X(x))\pi_D(\varphi_C(c)d)\pi_Y(b)=\pi_Y(\varphi_X(x)b)\pi_D(\varphi_C(c)d)\\ &=\pi(\varphi_X(x)b\otimes \varphi_C(c)d)\in \pi(Y\otimes_cD).
\end{align*} Hence $\psi(X\otimes_cC)\pi(B\otimes_cD)\subseteq \pi(Y\otimes_cD)$ and the inclusion $\pi(B\otimes_cD)\psi(X\otimes_cC)\subseteq \pi(Y\otimes_cD)$ follows similarly. \end{proof} \section{Universal crossed products by group actions}\label{crossedproducts} For a $C^*$-operator system $(A,X)$ let $\operatorname{Aut}(A,X)$ denote the group of all invertible morphisms $\alpha:(A,X)\to (A,X)$. A strongly continuous action of the locally compact group $G$ on the $C^*$-operator system $(A,X)$ is a homomorphism $\alpha:G\to \operatorname{Aut}(A,X); g\mapsto\alpha_g$ such that $g\mapsto \alpha_g(x)$ is continuous for all $x\in X$. It follows directly from the universal property of $(C^*_u(A,X), j_u)$ that every automorphism of $(A,X)$ extends to a unique automorphism $\alpha^u$ of $C^*_u(A,X)$. Since $C^*_u(A,X)$ is generated by a copy of $X$, any strongly continuous action $\alpha:G\to \operatorname{Aut}(A,X)$ extends to a unique strongly continuous action $\alpha^u:G\to \operatorname{Aut}(C_u^*(A,X))$. This leads to the following definition of the universal crossed product by an action $\alpha$ of $G$ on $(A,X)$. \begin{definition}\label{def-crossed-product} Let $\alpha:G\to \operatorname{Aut}(A,X)$ be an action as above. We define the {\em universal (or full) crossed product} $(A,X)\rtimes_{\alpha}^uG$ for the action $\alpha$ as the pair $(A\rtimes_{\alpha}^uG, X\rtimes_{\alpha}^uG)$, where $A\rtimes_{\alpha}^uG$ and $X\rtimes_{\alpha}^uG$ are the closures of $C_c(G,A)$ and $C_c(G,X)$ inside the universal $C^*$-algebra crossed product $C_u^*(A,X)\rtimes_{\alpha,u}G$. \end{definition} \begin{remark}\label{rem-crossed} {\bf (a)} If $\alpha:G\to\operatorname{Aut}(A)$ is an action of $G$ on the $C^*$-algebra $A$, and if we consider the corresponding $C^*$-operator system $(A,A)$, then $C_u^*(A,A)=A$, and therefore the crossed product $(A,A)\rtimes_\alpha^uG$ is given by the pair $(A\rtimes_{\alpha,u}G, A\rtimes_{\alpha,u}G)$. Thus, the universal crossed product construction for $C^*$-operator systems extends the well-known universal crossed product constructions for $C^*$-algebras. {\bf (b)} In general it is not true that in the crossed product $(A,X)\rtimes_{\alpha}^uG=(A\rtimes_{\alpha}^uG, X\rtimes_{\alpha}^uG)$ the $C^*$-algebra $A\rtimes_{\alpha}^uG$ coincides with the universal $C^*$-algebra crossed product $A\rtimes_{\alpha, u} G$. To see this let $G$ be any (second countable) non-amenable exact group. Then it follows from \cite{BCL} that there exists an amenable compact $G$-space $\Omega$, which implies that the full and reduced crossed products of $G$ by $C(\Omega)$ coincide. Now choose a faithful and non-degenerate representation of $C(\Omega)$ into $\mathcal B(H)$ for some Hilbert space $H$ and consider $X:=C(\Omega)\subseteq \mathcal B(H)$ as an operator system (forgetting the multiplicative structure). As in Example \ref{excstaros} we regard this as the $C^*$-operator system $(\mathbb{C}, X)$. Let $C_u^*(X)=C_u^*(\mathbb{C},X)$ denote the universal $C^*$-hull of $(\mathbb{C},X)$. We then get completely isometric embeddings $\mathbb{C}\hookrightarrow X\hookrightarrow C_u^*(X)$ which give rise to ccp maps between the full crossed products (in the $C^*$-algebra sense) \begin{equation}\label{eq-comp} C^*(G)=\mathbb{C}\rtimes_u G\to C(\Omega)\rtimes_u G\to C_u^*(X)\rtimes_u G.
\end{equation} By definition of the crossed product $(\mathbb{C}, X)\rtimes_{\alpha}^uG=(\mathbb{C}\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$, $X\rtimes_{\alpha}^uG$ is identical to the image of $C(\Omega)\rtimes_u G$ under the second map and $\mathbb{C}\rtimes_\alpha^uG$ coincides with the image of $C^*(G)$ under the composition in (\ref{eq-comp}). But since $C(\Omega)\rtimes_u G=C(\Omega)\rtimes_rG$ by amenability of the action of $G$ on $\Omega$, the first map in (\ref{eq-comp}) factors through the reduced group algebra $C_r^*(G)$. We therefore also have $\mathbb{C}\rtimes_\alpha^uG\cong C_r^*(G)\neq C^*(G)=\mathbb{C}\rtimes_u G$. \end{remark} In what follows we want to show that non-degenerate representations of the full crossed product are in one-to-one relation to the non-degenerate covariant representations of the system $(A,X, G,\alpha)$ as in the following definition. \begin{definition}\label{def-covariant} Suppose that $\alpha:G\to \operatorname{Aut}(A,X)$ is an action of $G$ on the $C^*$-operator system $(A,X)$. A covariant representation of $(A,X,G,\alpha)$ is a pair $(\pi, u)$, where $\pi:X\to \mathcal B(H_\pi)$ is a ccp representation of $(A,X)$ on $\mathcal B(H_\pi)$ and $u:G\to U(H_\pi)$ is a strongly continuous unitary representation of $G$ such that $$\pi(\alpha_g(x))=u_g\pi(x)u_g^*\quad \forall x\in X, g\in G.$$ \end{definition} \begin{remark}\label{rem-regular-rep} Suppose that $\rho:X\to \mathcal B(K)$ is any ccp representation of $(A,X)$ on $\mathcal B(K)$. Then we can construct a covariant representation $\operatorname{Ind}\rho=(\tilde{\rho}, 1_K\otimes \lambda)$ on $\mathcal B(K\otimes L^2(G))$ by the usual formula $$\big(\tilde{\rho}(x)\xi\big)(g)=\rho(\alpha_{g^{-1}}(x))\xi(g)\quad\text{and}\quad (1\otimes\lambda)_h\xi(g)=\xi(h^{-1}g)$$ for $\xi\in L^2(G, K)\cong K\otimes L^2(G)$, $x\in X$, and $g,h\in G$. Observe that if $\rho$ is completely isometric, then so is $\tilde\rho$. Hence there exist covariant representations $(\pi,u)$ of $(A,X, G,\alpha)$ in which the representation $\pi$ is completely isometric. \end{remark} It is actually useful to extend the notion of a covariant representation to allow representations into multiplier systems as in the following definition. \begin{definition}\label{def-covariant-mult} Suppose that $\alpha:G\to \operatorname{Aut}(A,X)$ is an action of $G$ on the $C^*$-operator system $(A,X)$ and suppose that $(B,Y)$ is a $C^*$-operator system. By a non-degenerate covariant homomorphism of $(A,X,G,\alpha)$ into the multiplier system $(M(B), M(Y))$ of $(B,Y)$ we understand a pair of maps $(\varphi, u)$, where $\varphi:X\to M(Y)$ is a non-degenerate generalized morphism from $(A,X)$ to $(B,Y)$ and $u:G\to UM(B)$ is a strictly continuous homomorphism such that $$\varphi(\alpha_g(x))=u_g \varphi(x) u_g^*$$ for all $x\in X$ and $g\in G$. \end{definition} \begin{remark} Note that if $(B,Y)$ is represented completely isometrically and non-degenerately on a Hilbert space $K$, then a non-degenerate covariant homomorphism of $(A,X,G,\alpha)$ into $(M(B), M(Y))$ turns into a non-degenerate covariant representation of $(A,X,G,\alpha)$ on $K$. But being a representation into $(M(B), M(Y))$ requires some additional structure of how the image interacts with $(B,Y)$.
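For instance, if $(B,Y)=(\mathcal K(K),\mathcal K(K))$ for a Hilbert space $K$, then $(M(B),M(Y))=(\mathcal B(K),\mathcal B(K))$, and a non-degenerate covariant homomorphism of $(A,X,G,\alpha)$ into $(M(B),M(Y))$ is precisely a non-degenerate covariant representation of $(A,X,G,\alpha)$ on $K$ in the sense of Definition \ref{def-covariant}.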
\end{remark} \begin{example}\label{ex-regular} If $\alpha:G\to \operatorname{Aut}(A,X)$ is an action of $G$ on the $C^*$-operator system $(A,X)$ we can define a canonical non-degenerate covariant homomorphism $(\Lambda_X, \Lambda_G)$ of $(A,X,G,\alpha)$ into $M(X\check\otimes \mathcal K(L^2(G)))$ as follows: We first define a non-degenerate representation $\Lambda_X:X\to M(X\check\otimes \mathcal K(L^2(G)))$ of $(A,X)$ by the composition of maps $$X\stackrel{\tilde\alpha}{\longrightarrow} M(X\check\otimes C_0(G)) \stackrel{\operatorname{id}_X\otimes M}{\longrightarrow} M(X\check\otimes \mathcal K(L^2(G))),$$ where $\tilde\alpha: X\to C_b(G,X)\subseteq M(X\check\otimes C_0(G))$ sends the element $x\in X$ to the bounded continuous function $g\mapsto\alpha_{g^{-1}}(x)$ and where $M:C_0(G)\to \mathcal B(L^2(G))$ is the representation of $C_0(G)$ by multiplication operators. Moreover, we define $\Lambda_G:G\to M(X\check\otimes \mathcal K(L^2(G)))$ by $\Lambda_G(g)=1\otimes \lambda_g$, where $\lambda:G\to U(L^2(G))$ is the regular representation of $G$. We leave it to the reader to check that $(\Lambda_X, \Lambda_G)$ satisfies the covariance condition. We call $(\Lambda_X, \Lambda_G)$ the {\em regular representation} of $(A,X,G,\alpha)$. Note that this construction directly extends the construction of the regular representation $(\Lambda_B, \Lambda_G)$ of a $C^*$-dynamical system $(B,G,\beta)$ into $M(B\otimes{\mathcal K}(L^2(G)))$. In particular, the restriction $(\Lambda_X|_A, \Lambda_G)$ of $(\Lambda_X, \Lambda_G)$ to $(A,G,\alpha)$ coincides with the regular representation of $(A,G,\alpha)$. Note also that if $\rho:X\to \mathcal B(K)$ is any ccp representation of $(A,X)$, we recover the representation $\operatorname{Ind}\rho=(\tilde\rho, 1\otimes \lambda)$ of Remark \ref{rem-regular-rep} as the composition $(\rho\otimes \operatorname{id}_{\mathcal K(L^2(G))})\circ (\Lambda_X, \Lambda_G)$. \end{example} \begin{proposition}\label{prop-covariant} For each non-degenerate covariant homomorphism $(\varphi, u)$ of $(A,X,G,\alpha)$ into $(M(B), M(Y))$ there exists a unique generalized homomorphism $\varphi\rtimes u:X\rtimes_\alpha^uG\to M(Y)$ from $(A\rtimes_{\alpha}^uG, X\rtimes_\alpha^uG)$ to $(M(B),M(Y))$ given on $f\in C_c(G,X)$ by $$\varphi\rtimes u(f)=\int_G \varphi(f(g))u_g\, dg.$$ \end{proposition} \begin{proof} Let $(\varphi,u)$ be given and let $(B,Y)$ be represented completely isometrically and non-degenerately on a Hilbert space $K$. By Lemma \ref{lem-universal} there exists a unique $*$-representation $\tilde\varphi:C_u^*(A,X)\to C^*(\varphi(X))\subseteq \mathcal B(K)$ which extends $\varphi$. Applying this fact to the representations $\varphi\circ \alpha_g=\operatorname{Ad} u_g\circ \varphi$ shows that $(\tilde\varphi, u)$ is a covariant representation of the $C^*$-dynamical system $(C_u^*(A,X), G, \alpha^u)$ into $\mathcal B(K)$. It therefore integrates to a $*$-representation $\tilde\varphi\rtimes u: C_u^*(A,X)\rtimes_{\alpha^u,u}G\to \mathcal B(K)$ given on $f\in C_c(G, C_u^*(A,X))$ by the integral formula in the statement of the proposition. The restriction of $\tilde\varphi\rtimes u$ to $X\rtimes_{\alpha}^uG$ is then the desired representation $\varphi\rtimes u$. To see that it maps into $M(Y)$, we only need to check that $\big(\varphi\rtimes u(f)\big)b, b\big(\varphi\rtimes u(f)\big)\in Y$ for all $b\in B$ and $f\in C_c(G,X)$.
But the integration formula gives \begin{equation}\label{eq-integral} \big(\varphi\rtimes u(f)\big)b=\int_G \varphi(f(g))u_g b\,dg. \end{equation} Since $u_g\in M(B)$, we have $u_gb\in B$ and hence $\varphi(f(g))u_gb\in Y$ for all $g\in G$. Thus the integral (\ref{eq-integral}) gives an element in $Y$. A similar argument shows that $b\big(\varphi\rtimes u(f)\big)\in Y$. \end{proof} \begin{lemma}\label{lem-multiplier} Suppose that $\alpha:G\to \operatorname{Aut}(A,X)$ is an action. Then there is a canonical covariant morphism $$(i_X, i_G): (A,X, G, \alpha)\to \big(M(A\rtimes_\alpha^u G), M(X\rtimes_\alpha^u G)\big)$$ such that for each $x\in X$, $f_1\in C_c(G,A)$, $f_2\in C_c(G,X)$ and $g,h\in G$, we have $$(i_X(x)f_1)(g)=xf_1(g)\quad\text{and}\quad (f_1 i_X(x))(g)= f_1(g)\alpha_g(x)$$ and $$(i_G(h)f_2)(g)=\alpha_h(f_2(h^{-1}g))\quad\text{and}\quad (f_2i_G(h))(g)=f_2(gh^{-1})\Delta(h^{-1}).$$ Moreover, the integrated form $i_X\rtimes i_G: X\rtimes_\alpha^u G\to M(X\rtimes_\alpha^uG)$ is the identity map on $X\rtimes_\alpha^uG$. \end{lemma} \begin{proof} Suppose that $C_u^*(A,X)\rtimes_{\alpha^u,u}G$ is represented faithfully and non-degenerately on a Hilbert space $H_u$. Then the restriction to $X\rtimes_\alpha^uG$ gives a completely isometric representation of $(A,X)\rtimes_{\alpha}^uG$ on $H_u$ as well. By the universal properties of the maximal crossed product $C_u^*(A,X)\rtimes_{\alpha^u,u}G$ there exists a unique covariant homomorphism $(i_{C_u^*(A,X)}, i_G)$ of $(C_u^*(A,X), G, \alpha^u)$ into $M(C_u^*(A,X)\rtimes_{\alpha^u,u}G)\subseteq B(H_u)$ which is given for $b\in C_u^*(A,X)$, $g,h\in G$ and $f\in C_c(G, C_u^*(A,X))$ by the formulas $$(i_{C_u^*(A,X)}(b)f)(g)=bf(g)\quad\quad (f i_{C_u^*(A,X)}(b))(g)= f(g)\alpha_g(b)$$ and $$(i_G(h)f)(g)=\alpha_h(f(h^{-1}g))\quad\quad (fi_G(h))(g)=f(gh^{-1})\Delta(h^{-1}),$$ and such that the integrated form $i_{C_u^*(A,X)}\rtimes i_G$ coincides with the original representation of the crossed product on $H_u$. Let $i_X=i_{C_u^*(A,X)}\circ j_u$, where $j_u:X\to C_u^*(A,X)$ denotes the embedding. Then for each $f\in C_c(G,X)$ the integral $i_X\rtimes i_G(f)=\int_G i_X(f(g))i_G(g)\,dg$ coincides with the image of $f$ under the inclusion $C_c(G,X)\hookrightarrow X\rtimes_\alpha^uG\subseteq C_u^*(A,X)\rtimes_{\alpha^u,u}G$, and therefore extends to the identity on $X\rtimes_\alpha^uG$. Thus we only need to check that $(i_X,i_G)$ is a non-degenerate covariant morphism into $\big(M(A\rtimes_\alpha^uG), M(X\rtimes_\alpha^uG)\big)$. First, if $f\in C_c(G,A)$ and $h\in G$, then $g\mapsto (i_G(h)f)(g)=\alpha_h(f(h^{-1}g))$ lies in $C_c(G,A)$ as well, hence $i_G(h)\big(i_X\rtimes i_G(f)\big)=i_X\rtimes i_G([g\mapsto \alpha_h(f(h^{-1}g))])\in A\rtimes_\alpha^uG$, which shows that $i_G$ takes its values in $M(A\rtimes_\alpha^uG)$. Moreover, if $f\in C_c(G,A)$ and $x\in X$, then $[g\mapsto (i_X(x)f)(g)=xf(g)]\in C_c(G,X)$, and hence $i_X(x)(i_X\rtimes i_G(f))=i_X\rtimes i_G([g\mapsto xf(g)])\in X\rtimes_\alpha^uG$, which implies that $i_X(x)\in M(X\rtimes_\alpha^uG)$. This completes the proof. \end{proof} We are now ready for the converse of Proposition \ref{prop-covariant}. \begin{proposition}\label{prop-covariant1} Suppose that $\alpha:G\to \operatorname{Aut}(A,X)$ is an action. Then for each non-degenerate generalized morphism $\Phi:X\rtimes_\alpha^uG\to M(Y)$ from $(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$ to $(B,Y)$, for some $C^*$-operator system $(B,Y)$, there exists a unique non-degenerate covariant morphism $(\varphi, u)$ of $(A,X,G,\alpha)$ to $(M(B), M(Y))$ such that $\Phi=\varphi\rtimes u$.
More precisely, if $(i_X, i_G)$ is the covariant morphism of $(A,X,G,\alpha)$ into $(M(A\rtimes_\alpha^uG), M(X\rtimes_\alpha^uG))$ as in Lemma \ref{lem-multiplier} and if $\bar\Phi:M(X\rtimes_\alpha^uG)\to M(Y)$ denotes the unique extension of $\Phi$ as in Lemma \ref{lem-non-deg-morphism}, then $$\varphi=\bar\Phi\circ i_X\quad\text{and}\quad u=\bar\Phi\circ i_G.$$ \end{proposition} \begin{proof} It is straightforward to check on functions in $C_c(G,X)$ that $(\varphi, u)$ is a covariant morphism of $(A,X,G,\alpha)$ into $(M(B), M(Y))$ such that $$\varphi\rtimes u=\bar\Phi\circ (i_X\rtimes i_G)=\bar\Phi\circ \operatorname{id}_{X\rtimes_\alpha^uG}=\Phi.$$ \end{proof} Note that we may regard covariant morphisms of $(A,X, G,\alpha)$ into $M(B)$ for any $C^*$-algebra $B$ as the special case of covariant morphisms into the multiplier system of the $C^*$-operator system $(B,B)$. Similarly, we may regard covariant representations on a Hilbert space $H$ as the special case in which $B=\mathcal K(H)$. Thus Propositions \ref{prop-covariant} and \ref{prop-covariant1} give us \begin{corollary}\label{cor-covariant} Suppose that $(\varphi, u)$ is a covariant morphism of $(A,X,G,\alpha)$ into $M(D)$ (resp. covariant representation into $\mathcal B(K)$ for a Hilbert space $K$). Then there exists an integrated form $$\varphi\rtimes u: X\rtimes_\alpha^uG\to M(D)\quad \text{(resp. $\mathcal B(K)$)}$$ given for $f\in C_c(G,X)$ by $\varphi\rtimes u(f)=\int_G \varphi(f(g))u_g\,dg$. Conversely, if $\Phi: X\rtimes_\alpha^uG\to M(D)$ (resp. $\mathcal B(K)$) is any non-degenerate homomorphism (resp. representation) of $(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$, then there exists a unique non-degenerate covariant homomorphism (resp. representation) $(\varphi, u)$ of $(A,X,G,\alpha)$ into $M(D)$ (resp. $\mathcal B(K)$) such that $\Phi=\varphi\rtimes u$. Indeed, if $(i_X, i_G)$ is as in Lemma \ref{lem-multiplier} and $\bar\Phi$ denotes the unique extension of $\Phi$ to $M(X\rtimes_\alpha^uG)$, then $$\varphi= \bar\Phi\circ i_X\quad\text{and}\quad u=\bar\Phi\circ i_G.$$ \end{corollary} \begin{corollary}\label{cor-universal} Let $(A,X, G,\alpha)$ be a $G$-$C^*$-operator system. Then $$C_u^*(A,X)\rtimes_{\alpha,u} G= C_u^*(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG).$$ \end{corollary} \begin{proof} Since $C_u^*(A,X)$ is generated by $X$, it is fairly straightforward to check that $C_u^*(A,X)\rtimes_{\alpha^u,u}G$ is generated by $X\rtimes_\alpha^uG\subseteq C_u^*(A,X)\rtimes_{\alpha^u,u}G$. So all we need to show is that any completely isometric representation $\psi:X\rtimes_{\alpha}^uG\hookrightarrow \mathcal B(H)$ of $(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$ extends to a $*$-homomorphism $\Psi: C_u^*(A,X)\rtimes_{\alpha^u,u}G\to \mathcal B(H)$. Passing to the closed subspace $\overline{\psi(X\rtimes_\alpha^uG)H}=\overline{\psi(A\rtimes_\alpha^uG)H}$ if necessary, we may assume without loss of generality that $\psi$ is non-degenerate. It follows then from Corollary \ref{cor-covariant} that there exists a covariant representation $(\varphi, u)$ of $(A,X, G,\alpha)$ such that $\psi=\varphi\rtimes u$. Then, as in the proof of Proposition \ref{prop-covariant}, we see that $(\varphi, u)$ extends uniquely to a covariant representation $(\bar\varphi, u)$ of $(C_u^*(A,X), G, \alpha^u)$ such that $\psi=\varphi\rtimes u: X\rtimes_\alpha^uG\to \mathcal B(H)$ coincides with the restriction to $X\rtimes_\alpha^uG$ of the integrated form $\bar\varphi\rtimes u: C_u^*(A,X)\rtimes_{\alpha^u,u}G\to \mathcal B(H)$. This finishes the proof.
\end{proof} \section{Reduced crossed products}\label{sec-reduced-crossed} We now turn our attention to the construction of the reduced crossed product $(A\rtimes_\alpha^rG, X\rtimes_\alpha^rG)$ by an action $\alpha:G\to \operatorname{Aut}(A,X)$ of a group on a $C^*$-operator system $(A,X)$. Indeed, we define the reduced crossed product via the regular representation $(\Lambda_X, \Lambda_G)$ of $(A,X,G,\alpha)$ into $\big(M(A\otimes{\mathcal K}(L^2(G))), M(X\otimes {\mathcal K}(L^2(G)))\big)$ as constructed in Example \ref{ex-regular}: \begin{definition}\label{defn educed crossed product} Let $\alpha:G\to \operatorname{Aut}(A,X)$ be an action of $G$ on the $C^*$-operator system $(A,X)$. Then we define the reduced crossed product as the image $$(A,X)\rtimes_\alpha^rG= (A\rtimes_\alpha^rG, X\rtimes_\alpha^rG):=\big((\Lambda_X\rtimes\Lambda_G)(A\rtimes_\alpha^uG), (\Lambda_X\rtimes\Lambda_G)(X\rtimes_\alpha^uG)\big)$$ of the universal crossed product under the integrated form $\Lambda_X\rtimes\Lambda_G$ of the regular representation $(\Lambda_X, \Lambda_G)$ inside $\big(M(A\otimes {\mathcal K}(L^2(G))), M(X\otimes {\mathcal K}(L^2(G)))\big)$. \end{definition} For the following proposition recall from Remark \ref{rem-regular-rep} the construction of the representation $\operatorname{Ind}\rho:=(\tilde{\rho}, 1_K\otimes \lambda)$ on $\mathcal B(K\otimes L^2(G))$ given by the formula $$\big(\tilde{\rho}(x)\xi\big)(g)=\rho(\alpha_{g^{-1}}(x))\xi(g)\quad\text{and}\quad (1\otimes\lambda)_h\xi(g)=\xi(h^{-1}g),$$ where $\rho:X\to \mathcal B(K)$ is any given representation of $(A,X)$ on some Hilbert space $K$. We call $\operatorname{Ind}\rho$ the covariant representation of $(A,X,G,\alpha)$ induced by $\rho$. As an easy consequence of our definition of reduced crossed product and the discussion at the end of Example \ref{ex-regular} we obtain \begin{proposition}\label{prop-reduced} Let $\alpha:G\to \operatorname{Aut}(A,X)$ be an action of $G$ on the $C^*$-operator system $(A,X)$. Then for every non-degenerate representation $\rho:X\to \mathcal B(K)$ of $(A,X)$ the integrated form $\tilde\rho\rtimes (1\otimes \lambda): X\rtimes_\alpha^u G\to B(K\otimes L^2(G))$ of the induced representation $\operatorname{Ind}\rho=(\tilde\rho, 1\otimes \lambda)$ (which we shall also simply denote by $\operatorname{Ind}\rho$) factors through the reduced crossed product $X\rtimes_\alpha^rG$ to give a representation of $(A,X)\rtimes_\alpha^rG$ into $B(K\otimes L^2(G))$. Moreover, if $\rho$ is completely isometric, then so is $\operatorname{Ind}\rho$. \end{proposition} \begin{proof} By the discussion at the end of Example \ref{ex-regular} we have the identity $\operatorname{Ind}\rho=(\rho\otimes\operatorname{id}_{{\mathcal K}(L^2(G))})\circ (\Lambda_X\rtimes\Lambda_G)$ as a representation from $(A,X)\rtimes_\alpha^uG$ to $\mathcal B(K\otimes L^2(G))$. Therefore $\operatorname{Ind}\rho$ clearly factors through $(A,X)\rtimes_\alpha^rG= \big((\Lambda_X\rtimes\Lambda_G)(A\rtimes_\alpha^uG), (\Lambda_X\rtimes\Lambda_G)(X\rtimes_\alpha^uG)\big)$. If $\rho:X\to \mathcal B(K)$ is completely isometric, the same holds for $\rho\otimes\operatorname{id}_{{\mathcal K}(L^2(G))} :(A\otimes {\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G)))\to \mathcal B(K\otimes L^2(G))$ and its extension to the multiplier system $\big(M(A\otimes {\mathcal K}(L^2(G))), M(X\otimes {\mathcal K}(L^2(G)))\big)$ (see Lemma \ref{lem-multipliers}). Therefore $\operatorname{Ind}\rho$ factors through a completely isometric representation of $(A,X)\rtimes_\alpha^rG$ into ${\mathcal B}(K\otimes L^2(G))$ as claimed.
\end{proof} \begin{remark}\label{rem-reduced} {\bf (a)} It follows in particular from the above proposition that, up to completely isometric isomorphism, the reduced crossed product $(A,X)\rtimes_\alpha^rG$ does not depend on the particular representation of $(A,X)$ on a Hilbert space $H$. {\bf (b)} Suppose that $(C, j)$ is any $C^*$-hull of $(A,X)$ such that a given action $\alpha:G\to\operatorname{Aut}(A,X)$ extends to an action $\alpha:G\to \operatorname{Aut}(C)$. Let $\rho:C\to {\mathcal B}(K)$ be any faithful and non-degenerate representation on a Hilbert space $K$. Then $\rho_X:=\rho\circ j:X\to \mathcal B(K)$ is a non-degenerate completely isometric representation of $(A,X)$ into $\mathcal B(K)$. Moreover, the regular representation $\operatorname{Ind}\rho: C\rtimes_{\alpha,u}G\to \mathcal B(K\otimes L^2(G))$ factors through a faithful representation of the reduced $C^*$-crossed product $C\rtimes_{\alpha,r}G$ whose composition with the canonical inclusion of $(C_c(G,A), C_c(G,X))$ into $C\rtimes_{\alpha,r}G$ coincides with the completely isometric representation $\operatorname{Ind}\rho_X: (A,X)\rtimes_\alpha^rG\to \mathcal B(K\otimes L^2(G))$ on the dense subsystem $(C_c(G,A), C_c(G,X))$. It follows that, up to a completely isometric isomorphism, the reduced crossed product $(A,X)\rtimes_{\alpha}^rG$ can be identified with the closure of the pair $(C_c(G,A), C_c(G,X))$ inside $C\rtimes_{\alpha,r}G$. This observation applies in particular to the universal $C^*$-hull $(C_u^*(A,X), j_u)$ and the enveloping $C^*$-algebra $(C_{\operatorname{env}}^*(A,X), j_{\operatorname{env}})$, where it easily follows from the respective universal properties that every action on $(A,X)$ extends to actions on $C_u^*(A,X)$ and $C_{\operatorname{env}}^*(A,X)$, respectively. \end{remark} \section{Crossed products by coactions}\label{sec-coaction} Recall (e.g. from \cite[Appendix A]{EKQR}) that if $G$ is a locally compact group, there is a canonical comultiplication $\delta_G:C^*(G)\to M(C^*(G) \check\otimes C^*(G))$ on $C^*(G)$ which is given as the integrated form of the unitary representation $g\mapsto u_g\otimes u_g\in UM(C^*(G)\check\otimes C^*(G))$, where $u:G\to UM(C^*(G))$ is the canonical representation of $G$ into $UM(C^*(G))$. A {\em coaction} of $G$ on a $C^*$-algebra $A$ is an injective non-degenerate $*$-homomorphism $\delta:A\to M(A\check\otimes C^*(G))$ such that the following conditions hold: \begin{enumerate} \item $\delta(A)(1\check\otimes C^*(G))\subseteq A\check\otimes C^*(G)$. \item The following diagram of maps commutes $$\begin{CD} A @>\delta>> M(A\check\otimes C^*(G))\\ @V\delta VV @VV \operatorname{id}_A\otimes \delta_GV\\ M(A\check\otimes C^*(G)) @>> \delta\otimes \operatorname{id}_{G}> M(A\check\otimes C^*(G)\check\otimes C^*(G)) \end{CD} $$ \end{enumerate} where $\operatorname{id}_G$ denotes the identity on $C^*(G)$. If, in addition, we have the identity \begin{itemize} \item[(1$'$)] $\delta(A)(1\check\otimes C^*(G))= A\check\otimes C^*(G)$ \end{itemize} then the coaction $\delta$ is called {\em non-degenerate}. Note that condition (1$'$) is automatic if $G$ is amenable or discrete (see \cite[Proposition 6]{Kat}, \cite[Lemma 3.8]{Land} and \cite{BS}). We are now going to extend the definition of a coaction of $C^*(G)$ to the category of $C^*$-operator systems. \begin{definition}\label{def coaction} Let $G$ be a locally compact group.
A coaction of $G$ on the $C^*$-operator system $(A,X)$ is an injective non-degenerate generalized morphism $$\delta_X: (A,X) \to \big(M(A\check\otimes C^*(G)), M(X\check\otimes C^*(G))\big)$$ such that the following holds: \begin{enumerate} \item The map $\delta_A:=\delta_X|_A: A\to M(A\check\otimes C^*(G))$ is a coaction of $C^*(G)$ on $A$. \item The following diagram of maps commutes $$\begin{CD} X@>\delta_X>> M(X\check\otimes C^*(G))\\ @V\delta_X VV @VV \operatorname{id}_X\otimes \delta_GV\\ M(X\check\otimes C^*(G)) @>> \delta_X\otimes \operatorname{id}_{G}> M(X\check\otimes C^*(G)\check\otimes C^*(G)) \end{CD} $$ \end{enumerate} \end{definition} \begin{remark}\label{rem nondeg} {\bf (a)} Notice that condition (1) always implies that \begin{align*} \delta_X(X)(1\check\otimes C^*(G))&=\delta_X(XA)(1\check\otimes C^*(G))= \delta_X(X)\big(\delta_X(A)(1\check\otimes C^*(G))\big)\\ &\subseteq \delta_X(X)(A\check\otimes C^*(G))= X\check\otimes C^*(G), \end{align*} where the last equation follows from the nondegeneracy of $\delta_X:X\to M(X\check\otimes C^*(G))$. {\bf (b)} Let $1_G:C^*(G)\to \mathbb{C}$ denote the integrated form of the trivial representation of $G$. Then it follows from the definition of a coaction $\delta_X:(A,X)\to M(X\check\otimes C^*(G))$ that $(\operatorname{id}_X\otimes 1_G)\circ \delta_X$ is the identity on $X$. To see this observe that it follows from (a) that for all $z\in C^*(G)$ and $x\in X$ we have $(\operatorname{id}_X\otimes 1_G)(\delta_X(x)(1\otimes z))\in X$. Choosing $z$ such that $1_G(z)= 1$ then implies that $(\operatorname{id}_X\otimes 1_G)(\delta_X(x))=(\operatorname{id}_X\otimes 1_G)(\delta_X(x)(1\otimes z))\in X$ as well. Now, using condition (2) and the relation $(\operatorname{id}_G\otimes 1_G)\circ \delta_G=\operatorname{id}_G$ we get \begin{align*} \delta_X(x)&=(\operatorname{id}_X\otimes \operatorname{id}_G\otimes 1_G)\circ (\operatorname{id}_X\otimes \delta_G)\circ \delta_X(x)\\ &= (\operatorname{id}_X\otimes \operatorname{id}_G\otimes 1_G)\circ (\delta_X\otimes \operatorname{id}_G)\circ \delta_X(x)\\ &=(\delta_X\otimes 1_G)\circ \delta_X(x)=\delta_X\big( (\operatorname{id}_X\otimes 1_G)(\delta_X(x))\big), \end{align*} which implies $\delta_X\circ (\operatorname{id}_X\otimes 1_G)\circ \delta_X=\delta_X$. Since $\delta_X$ is injective, this implies that $(\operatorname{id}_X\otimes 1_G)\circ \delta_X=\operatorname{id}_X$. In particular, it follows that $\delta_X$ is completely isometric. \end{remark} \begin{example}\label{ex dual} Suppose that $\alpha:G\to \operatorname{Aut}(A,X)$ is an action of the locally compact group $G$ on the $C^*$-operator system $(A,X)$. Then there is a {\em dual coaction} $$\widehat{\alpha}: (A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)\to \big(M(A\rtimes_\alpha^uG\check\otimes C^*(G)), M(X\rtimes_\alpha^uG\check\otimes C^*(G))\big)$$ given by the integrated form of the generalized covariant homomorphism $(i_X\otimes 1, i_G\otimes u)$ of $(A,X, G,\alpha)$ into $M(X\rtimes_\alpha^uG\check\otimes C^*(G))$, where $(i_X, i_G):(X, G)\to M(X\rtimes_\alpha^uG)$ denotes the canonical covariant homomorphism and $u:G\to UM(C^*(G))$ the universal representation. To see that this satisfies the conditions of Definition \ref{def coaction} we choose a faithful and non-degenerate representation of $C_u^*(A,X)\rtimes_{\alpha,u}G$ into some $\mathcal B(H)$. This restricts to a faithful representation of $(A\rtimes_\alpha^uG, X\rtimes_{\alpha}^uG)$ into $\mathcal B(H)$.
Moreover, by choosing a faithful representation of $C^*(G)$ on some Hilbert space $K$, say, we obtain a faithful and non-degenerate representation of $C_u^*(A,X)\rtimes_{\alpha^u,u}G\check\otimes C^*(G)$ on $H \otimes K$, which restricts to a completely isometric representation of $X\rtimes_\alpha^uG \check\otimes C^*(G)$, and hence of $M(X\rtimes_\alpha^uG\check\otimes C^*(G))$ (use Corollary \ref{cor-isometry}). Now, there is a dual coaction $$\widehat{\alpha}_u: C_u^*(A,X)\rtimes_{\alpha,u}G\to M(C_u^*(A,X)\rtimes_{\alpha,u}G\check\otimes C^*(G))$$ given as the integrated form of the covariant homomorphism $(i_{C_u^*(A,X)}\otimes 1, i_G\otimes u)$ of $(C_u^*(A,X), G, \alpha^u)$ into $M(C_u^*(A,X)\rtimes_{\alpha,u}G\check\otimes C^*(G))$. This representation clearly restricts to the representation $(i_X\otimes 1, i_G\otimes u):(X, G)\to M(X\rtimes_\alpha^uG\check\otimes C^*(G))$ and the conditions in Definition \ref{def coaction} can then easily be deduced from the properties of the coaction $\widehat{\alpha}_u$ of $C^*(G)$ on $C_u^*(A,X)\rtimes_{\alpha,u}G$. Similarly, the dual coaction $$\widehat{\alpha_r}: C_u^*(A,X)\rtimes_{\alpha, r}G\to M(C_u^*(A,X)\rtimes_{\alpha, r}G\check\otimes C^*(G))$$ restricts to a dual coaction $$\widehat{\alpha}_r:=(i_X^r\otimes 1)\rtimes (i_G^r\otimes u): X\rtimes_\alpha^r G\to M(X\rtimes_\alpha^rG \check\otimes C^*(G))$$ of $C^*(G)$ on the reduced crossed product $(A,X)\rtimes_{\alpha}^rG=(A\rtimes_{\alpha}^rG, X\rtimes_{\alpha}^rG)$, where $(i_X^r, i_G^r)$ denotes the canonical covariant homomorphism of $(A,X,G,\alpha)$ into $M(X\rtimes_{\alpha}^rG)$ (i.e., the regular representation). \end{example} We now want to relate coactions of $C^*(G)$ on $(A,X)$ with coactions of $C^*(G)$ on $C_u^*(A,X)$. In order to formulate the result, observe that $\big(C_u^*(A,X)\check\otimes C^*(G), j_u\otimes \operatorname{id}_{C^*(G)}\big)$ is a $C^*$-hull of $(A\check\otimes C^*(G), X\check\otimes C^*(G))$ and therefore we get a canonical completely isometric embedding $$\overline{j_u\otimes \operatorname{id}_G}: M(A\check\otimes C^*(G), X\check\otimes C^*(G))\hookrightarrow M(C_u^*(A,X)\check\otimes C^*(G)).$$ \begin{proposition}\label{prop coaction} Let $(A,X)$ be a $C^*$-operator system. Then there is a one-to-one correspondence between coactions $\delta_X$ of $G$ on $(A,X)$ and coactions $$\delta_u:C_u^*(A,X)\to M(C_u^*(A,X)\check\otimes C^*(G))$$ of $G$ on $C_u^*(A,X)$ which satisfy the conditions \begin{equation}\label{eq-rest} \delta_u(A)\subseteq M(A\check\otimes C^*(G))\quad\text{and}\quad \delta_u(X)\subseteq M(X\check\otimes C^*(G)), \end{equation} where we regard $A$ and $X$ as subspaces of $C_u^*(A,X)$ via the inclusion map $j_u$. Given such a coaction $\delta_u$ of $G$ on $C_u^*(A,X)$, the corresponding coaction of $G$ on $(A,X)$ is given by the restriction $\delta_X:=\delta_u|_X$. \end{proposition} \begin{proof} Suppose first that $\delta_X:X\to M(X\check\otimes C^*(G))$ is a coaction of $C^*(G)$ on $(A,X)$. It follows then from part (b) of Remark \ref{rem nondeg} and Lemma \ref{lem mult univ} that $$\overline{j_u\otimes \operatorname{id}_G}\circ \delta_X: X\to M(C_u^*(A,X)\check\otimes C^*(G))$$ is a completely isometric representation of $(A,X)$ into $M(C_u^*(A,X)\check\otimes C^*(G))$.
By Proposition \ref{prop-universal}, it extends to a non-degenerate $*$-homomorphism $$\delta_u:C_u^*(A,X)\to M(C_u^*(A,X)\check\otimes C^*(G)).$$ Since $C_u^*(A, X)$ is generated by $X$ and since $$\delta_u(x)(1\otimes z)=\delta_X(x)(1\otimes z)\in X\check\otimes C^*(G)\subseteq C_u^*(A,X)\check\otimes C^*(G)$$ for all $x\in X$ and $z\in C^*(G)$, it follows that $$\delta_u(C_u^*(A,X))(1\check\otimes C^*(G))\subseteq C_u^*(A,X)\check\otimes C^*(G).$$ Using this, it follows that $(\operatorname{id}_{C_u^*(A,X)}\otimes 1_G)\circ \delta_u$ maps $C_u^*(A,X)$ into $C_u^*(A,X)$. Moreover, since $(\operatorname{id}_{C_u^*(A,X)}\otimes 1_G)\circ \delta_u$ restricts to $(\operatorname{id}_X\otimes 1_G)\circ \delta_X=\operatorname{id}_X$ on $X$, it extends the identity on $X$ and therefore must be equal to the identity on $C_u^*(A,X)$. Thus it follows that $\delta_u$ is injective. In order to check that \begin{equation}\label{eq-coact} (\delta_u\otimes \operatorname{id}_G)\circ \delta_u=(\operatorname{id}_{C_u^*(A,X)}\otimes \delta_G)\circ \delta_u \end{equation} as maps from $C_u^*(A,X)$ into $M(C_u^*(A,X)\check\otimes C^*(G)\check\otimes C^*(G))$ we simply observe that, via the canonical embedding of $M(X\check\otimes C^*(G)\check\otimes C^*(G))$ into \linebreak $M(C_u^*(A,X)\check\otimes C^*(G)\check\otimes C^*(G))$, the left hand side restricts to $(\delta_X\otimes \operatorname{id}_G)\circ \delta_X$ and the right hand side restricts to $(\operatorname{id}_X\otimes \delta_G)\circ \delta_X$. But by condition (2) of Definition \ref{def coaction}, these restrictions to $X$ coincide and then (\ref{eq-coact}) follows from the uniqueness assertion of Proposition \ref{prop-universal}. Conversely, suppose that $\delta:C_u^*(A,X)\to M(C_u^*(A,X)\check\otimes C^*(G))$ is a coaction of $G$ on $C_u^*(A,X)$ such that the equations (\ref{eq-rest}) hold. We need to check that $\delta_X:=\delta|_X$ is a coaction of $G$ on $(A,X)$, where we realize $M(X\check\otimes C^*(G))$ as a subspace of $M(C_u^*(A,X)\check\otimes C^*(G))$ as explained above. Since $\delta$ is injective, the same holds for $\delta_X$ and condition (2) of Definition \ref{def coaction} clearly follows from the similar condition for $\delta$. Thus, all we need to show is that $\delta(A)(1\check\otimes C^*(G))\subseteq A\check\otimes C^*(G)$. But since $\delta(A)(1\check\otimes C^*(G))\subseteq C_u^*(A,X)\check\otimes C^*(G)$ we observe that $$\delta(A)(1\check\otimes C^*(G))\subseteq M(A\check\otimes C^*(G))\cap (C_u^*(A,X)\check\otimes C^*(G)),$$ where the intersection is taken inside $M(C_u^*(A,X)\check\otimes C^*(G))$. But it follows from Lemma \ref{lem mult univ} that this intersection equals $A\check\otimes C^*(G)$ and the result follows. \end{proof} \begin{definition}\label{def-nondeg} A coaction $\delta_X: X\to M(X\check\otimes C^*(G))$ of $G$ on $(A,X)$ is called {\em non-degenerate} if the corresponding coaction $\delta_u$ of $G$ on $C_u^*(A,X)$ is non-degenerate, i.e., $$\delta_u(C_u^*(A,X))(1\check\otimes C^*(G))=C_u^*(A,X)\check\otimes C^*(G).$$ \end{definition} \begin{remark}\label{rem nondeg-coaction} Of course it would be more satisfactory to define nondegeneracy of a coaction of $G$ on $(A,X)$ via a condition like $$\delta_X(X)(1\check\otimes C^*(G))=X\check\otimes C^*(G).$$ However, we were not able to prove that this condition is equivalent to nondegeneracy of $\delta_u$, and it is the latter condition we shall need later when dealing with Imai-Takai duality.
Note that nondegeneracy of a coaction on $(A,X)$ is automatic for amenable or discrete $G$, since, as we remarked before, this holds true for coactions on $C^*$-algebras. The same holds for all dual coactions: \end{remark} \begin{lemma}\label{lem-non-degenerate} Suppose that $\alpha:G\to \operatorname{Aut}(A,X)$ is an action of $G$ on the $C^*$-operator system $(A,X)$. Then the dual coactions $\widehat{\alpha}:X\rtimes_{\alpha}^uG\to M(X\rtimes_{\alpha}^uG\check\otimes C^*(G))$ and $\widehat{\alpha_r}:X\rtimes_{\alpha}^rG\to M(X\rtimes_{\alpha}^rG\check\otimes C^*(G))$ are non-degenerate. \end{lemma} \begin{proof} The first assertion follows from Corollary \ref{cor-universal} together with the fact that dual coactions on $C^*$-algebra crossed products are non-degenerate (e.g., see the discussion at the end of \cite[Example A.26]{EKQR}). For the dual coaction on the reduced crossed product $(A,X)\rtimes_{\alpha}^rG$ observe that the identity map on $C_c(G,X)$ induces surjective morphisms $$C_u^*(A,X)\rtimes_{\alpha, u}G\twoheadrightarrow C_u^*(A\rtimes_\alpha^rG, X\rtimes_{\alpha}^rG)\twoheadrightarrow C_u^*(A,X)\rtimes_{\alpha,r}G,$$ where the first map exists by the universal property of $C_u^*(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)\cong C_u^*(A,X)\rtimes_{\alpha, u}G$ together with the obvious morphism of $(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$ into $C_u^*(A\rtimes_\alpha^rG, X\rtimes_{\alpha}^rG)$. These maps are equivariant for the respective dual coactions, where the one on $C_u^*(A\rtimes_\alpha^rG, X\rtimes_{\alpha}^rG)$ is induced from the dual coaction of $C^*(G)$ on $(A\rtimes_\alpha^rG, X\rtimes_\alpha^rG)$ via Proposition \ref{prop coaction}. But then it is easy to see that nondegeneracy of the dual coaction $\widehat{\alpha}$ on $C_u^*(A,X)\rtimes_{\alpha, u}G$ implies nondegeneracy of the (dual) coaction on $C_u^*(A\rtimes_\alpha^rG, X\rtimes_{\alpha}^rG)$. \end{proof} We are now going to study covariant representations for coactions on $C^*$-operator systems $(A, X)$, extending the well-known theory for coactions on $C^*$-algebras. In what follows let $w_G\in UM(C_0(G)\check\otimes C^*(G))$ denote the unitary given by the map $[g\mapsto u_g]\in C_b(G, C^*(G))\subseteq M(C_0(G)\check\otimes C^*(G))$. Recall from \cite[Definition A.32]{EKQR} the following definition: \begin{definition}\label{def-covariant-coact} Let $\delta:D\to M(D\check\otimes C^*(G))$ be a coaction of $G$ on the $C^*$-algebra $D$ and let $B$ be a $C^*$-algebra. Then a {\em covariant representation} of $(D,G,\delta)$ into $M(B)$ is a pair $(\pi,\mu)$, where $\pi:D\to M(B)$ and $\mu:C_0(G)\to M(B)$ are non-degenerate $*$-homomorphisms satisfying the covariance condition $$(\pi\otimes\operatorname{id}_G)\circ \delta(d)=(\mu\otimes \operatorname{id}_G)(w_G)(\pi(d)\otimes 1)(\mu\otimes \operatorname{id}_G)(w_G)^*.$$ If $B=\mathcal K(H)$ for some Hilbert space $H$, then we say that $(\pi, \mu)$ is a covariant representation on $H$. \end{definition} If $(\pi, \mu)$ is a covariant representation of $(D,G,\delta)$ as above, then $$\pi(D)\mu(C_0(G)):=\overline{\operatorname{span}}\{\pi(d)\mu(f) : d\in D, f\in C_0(G)\}$$ is a $C^*$-subalgebra of $M(B)$ (see \cite[Proposition A.36]{EKQR}).
Moreover, it is shown in \cite[Proposition A.37]{EKQR} that the pair $(\Lambda_D, \Lambda_{\widehat{G}}):=\big((\operatorname{id}_D\otimes \lambda)\circ \delta, 1\otimes M\big)$, where $M:C_0(G)\to \mathcal B(L^2(G))=M(\mathcal K(L^2(G)))$ denotes the representation by multiplication operators, defines a covariant representation, called the {\em regular representation}, of $(D, G, \delta)$ into $M(D\otimes \mathcal K(L^2(G)))$. The {\em crossed product} $D\rtimes_\delta\widehat{G}$ of the co-system $(D,G,\delta)$ is then defined as the $C^*$-algebra $$D\rtimes_\delta\widehat{G}:=\Lambda_D(D)\Lambda_{\widehat{G}}(C_0(G))\subseteq M(D\otimes\mathcal K(L^2(G))).$$ We can then view $(\Lambda_D, \Lambda_{\widehat{G}})$ as a covariant representation into $M(D\rtimes_\delta\widehat{G})$ in a canonical way. It is then shown in \cite[Theorem A.41]{EKQR} that the triple $(D\rtimes_\delta\widehat{G}, \Lambda_D, \Lambda_{\widehat{G}})$ satisfies the following universal property: If $(\pi,\mu)$ is any covariant representation of $(D,G,\delta)$ into $M(B)$, then there exists a unique $*$-homomorphism $\pi\rtimes\mu: D\rtimes_\delta \widehat{G}\to M(B)$ such that \begin{equation}\label{eq-covuniv} (\pi\rtimes \mu)\circ \Lambda_D=\pi\quad\text{and}\quad (\pi\rtimes\mu)\circ \Lambda_{\widehat{G}}= \mu. \end{equation} Moreover, we get $$\pi\rtimes\mu(D\rtimes_\delta \widehat{G})=\pi(D)\mu(C_0(G)).$$ \medskip We are now going to derive analogues of the above constructions and facts for coactions of $G$ on $C^*$-operator systems $(A,X)$. We start with \begin{definition}\label{def-covariant$C^*$} Suppose that $\delta_X:X\to M(X\check\otimes C^*(G))$ is a coaction of $G$ on $(A,X)$ and let $(B,Y)$ be a $C^*$-operator system. Then a covariant morphism of $(A,X, G,\delta_X)$ into $M(B,Y)=(M(B), M(Y))$ consists of a non-degenerate generalized morphism $\pi:X\to M(Y)$ of $(A,X)$ together with a non-degenerate $*$-homomorphism $\mu:C_0(G)\to M(B)$ such that the pair $(\pi, \mu)$ satisfies the covariance condition $$(\pi\otimes\operatorname{id}_G)\circ \delta_X(x)=(\mu\otimes \operatorname{id}_G)(w_G)(\pi(x)\otimes 1)(\mu\otimes \operatorname{id}_G)(w_G)^*$$ for all $x\in X$. If $(B,Y)=(B,B)$ is a $C^*$-algebra, we say that $(\pi,\mu)$ is a covariant representation of $(A,X,G, \delta_X)$ into $M(B)$. If, in addition, $B=\mathcal K(H)$ for some Hilbert space $H$, we say that $(\pi,\mu)$ is a covariant representation of $(A,X, G,\delta_X)$ on $H$. \end{definition} \begin{remark} Observe that the restricted pair $(\pi_A, \mu)$ with $\pi_A:=\pi|_A$ of a covariant morphism $(\pi,\mu)$ of $(A,X,G,\delta_X)$ into $M(B,Y)$ is a non-degenerate covariant homomorphism of $(A, G, \delta_A)$ into $M(B)$. \end{remark} \begin{proposition}\label{prop-image} Suppose that $(\pi, \mu)$ is a covariant morphism of \linebreak $(A,X, G, \delta_X)$ into $M(B, Y)$ for some $C^*$-operator system $(B, Y)$. Then \linebreak $\big(\pi(A)\mu(C_0(G)), \pi(X)\mu(C_0(G))\big)$ (closed spans!) is a $C^*$-operator subsystem of $M(B,Y)$. \end{proposition} \begin{proof} We first observe that it follows directly from the above discussion that $\pi(A)\mu(C_0(G))$ is a non-degenerate $C^*$-subalgebra of $M(B)$. Note that this implies in particular that $\mu(C_0(G))$ acts as multipliers on this $C^*$-algebra. On the other hand, precisely the same arguments as used in the proof of \cite[Proposition A.36]{EKQR} show that $\pi(X)\mu(C_0(G))=\mu(C_0(G))\pi(X)$, from which it follows that $\pi(X)\mu(C_0(G))$ is a selfadjoint subspace of $M(Y)$.
So in order to complete the proof, we only need to show that \begin{align*} \big(\pi(A)\mu(C_0(G))\big)\big(\pi(X)\mu(C_0(G))\big)&=\pi(X)\mu(C_0(G))\\ &=\big(\pi(X)\mu(C_0(G))\big)\big(\pi(A)\mu(C_0(G))\big), \end{align*} but this follows from $AX=X=XA$ and $\pi(X)\mu(C_0(G))=\mu(C_0(G))\pi(X)$. \end{proof} \begin{proposition}\label{prop-extend universal} Let $\delta_X:X\to M(X\check\otimes C^*(G))$ be a coaction of $G$ on $(A,X)$ and let $B$ be a $C^*$-algebra. Let $\delta_u:C_u^*(A,X)\to M(C_u^*(A,X)\check\otimes C^*(G))$ denote the corresponding coaction of $G$ on the universal $C^*$-hull $C_u^*(A,X)$ of $(A,X)$ as in Proposition \ref{prop coaction}. Then there is a one-to-one correspondence between the non-degenerate covariant representations $(\pi, \mu)$ of $(A,X, G, \delta_X)$ into $M(B)$ and the non-degenerate covariant representations of $(C_u^*(A,X), G, \delta_u)$ into $M(B)$, given by sending a covariant pair $(\pi, \mu)$ of $(A,X, G, \delta_X)$ to the covariant pair $(\bar\pi, \mu)$, where $\bar\pi:C_u^*(A,X)\to M(B)$ denotes the unique $*$-homomorphism which extends $\pi$. \end{proposition} \begin{proof} Since $(\pi\otimes\operatorname{id}_G)\circ \delta_X=(\mu\otimes \operatorname{id}_G)(w_G)(\pi(\cdot)\otimes 1)(\mu\otimes \operatorname{id}_G)(w_G)^*$ as maps from $X$ into $M(B\check\otimes C^*(G))$, it follows that both of the $*$-homomorphisms $(\bar\pi\otimes\operatorname{id}_G)\circ \delta_u$ and $(\mu\otimes \operatorname{id}_G)(w_G)(\bar\pi(\cdot )\otimes 1)(\mu\otimes \operatorname{id}_G)(w_G)^*$ from $C_u^*(A,X)$ to $M(B\check\otimes C^*(G))$ extend the same non-degenerate generalized morphism of $(A,X)$, and therefore they coincide by Proposition \ref{prop-universal}. This shows that any covariant representation of $(A,X,G,\delta_X)$ has a unique extension to $(C_u^*(A,X), G, \delta_u)$. The converse direction follows by restriction. \end{proof} \begin{lemma}\label{lem-regularrep} Suppose that $(A,X, G,\delta_X)$ is a coaction of $G$ on $(A,X)$. Then the pair $(\Lambda_X, \Lambda_{\widehat{G}}):=((\operatorname{id}_X\otimes \lambda)\circ \delta_X, 1\otimes M)$ defines a covariant representation of $(A,X, G,\delta_X)$ into $M\big(A\otimes \mathcal K(L^2(G)), X\otimes \mathcal K(L^2(G))\big)$ which we call the {\em regular representation} of $(A,X, G,\delta_X)$. Moreover, via the completely isometric embedding of $(A,X)$ into $C_u^*(A, X)$ and the corresponding completely isometric embedding of $(A\otimes {\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G)))$ into $C_u^*(A,X)\otimes{\mathcal K}(L^2(G))$, we may view $(\Lambda_X, \Lambda_{\widehat{G}})$ as a covariant representation into $M(C_u^*(A,X)\otimes{\mathcal K}(L^2(G)))$, which uniquely extends to the regular representation $(\Lambda_{C_u^*(A,X)}, \Lambda_{\widehat{G}})$ of $(C_u^*(A,X), G, \delta_u)$ in the sense of Proposition \ref{prop-extend universal}. \end{lemma} \begin{proof} For the first assertion we follow the proof of \cite[Proposition A.37]{EKQR}.
Using the identity $$(\lambda\otimes \operatorname{id}_G)\circ \delta_G=\operatorname{Ad}\big((M\otimes \operatorname{id}_G)(w_G)\big)\circ (\lambda\otimes 1),$$ which has been established in the proof of \cite[Proposition A.37]{EKQR}, we compute \begin{align*} &\big((\operatorname{id}_X\otimes\lambda)\circ \delta_X\otimes\operatorname{id}_G\big)\circ \delta_X(x)=(\operatorname{id}_X\otimes\lambda \otimes \operatorname{id}_G)\circ (\delta_X\otimes \operatorname{id}_G)\circ \delta_X(x)\\ &=(\operatorname{id}_X\otimes \lambda\otimes\operatorname{id}_G)\circ (\operatorname{id}_X\otimes \delta_G)\circ \delta_X(x)\\ &=(\operatorname{id}_X\otimes (\lambda\otimes \operatorname{id}_G)\circ \delta_G)\circ \delta_X(x)\\ &=(1\otimes M\otimes \operatorname{id}_G)(w_G)\big((\operatorname{id}_X\otimes\lambda)(\delta_X(x))\otimes 1\big)(1\otimes M\otimes \operatorname{id}_G)(w_G)^*. \end{align*} This proves the covariance condition for $((\operatorname{id}_X\otimes \lambda)\circ \delta_X, 1\otimes M)$. The second assertion is now obvious. \end{proof} \begin{definition}\label{def-dual-cross} Suppose that $\delta_X:X\to M(X\check\otimes C^*(G))$ is a coaction of $G$ on the $C^*$-operator system $(A,X)$. Then we define the crossed product $(A,X)\rtimes_{\delta_X}\widehat{G}$ as the $C^*$-operator system $$(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G}):=\left(\Lambda_X(A)\Lambda_{\widehat{G}}(C_0(G)), \Lambda_X(X)\Lambda_{\widehat{G}}(C_0(G))\right)$$ generated by the regular representation $(\Lambda_X, \Lambda_{\widehat{G}})$ of $(A,X,G, \delta_X)$ as in Proposition \ref{prop-image}. \end{definition} \begin{remark}\label{rem-dual-cross} Note that it follows directly from Lemma \ref{lem-regularrep} and the above definition that $C_u^*(A,X)\rtimes_{\delta_u}\widehat{G}=\Lambda_{C_u^*(A,X)}(C_u^*(A,X))\Lambda_{\widehat{G}}(C_0(G))$ is a $C^*$-hull of $(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})$. Indeed, we shall see below that it is the universal $C^*$-hull of $(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})$. \end{remark} We now show that the above defined crossed product does enjoy a universal property for covariant representations: \begin{proposition}\label{prop-dualuniv} Suppose that $(\pi, \mu)$ is a covariant morphism of \linebreak $(A,X,G,\delta_X)$ into $M(B,Y)$ for some $C^*$-operator system $(B,Y)$. Then there is a unique generalized morphism $$\pi\rtimes\mu: (A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})\to M(B,Y)$$ such that \begin{equation}\label{eq-univ} (\pi\rtimes\mu)\circ \Lambda_X=\pi\quad\text{and}\quad (\pi\rtimes\mu)\circ \Lambda_{\widehat{G}}=\mu. \end{equation} Conversely, if $\Phi: (A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})\to M(B,Y)$ is any non-degenerate generalized morphism, then there is a unique covariant morphism $(\pi, \mu)$ of $(A,X,G,\delta_X)$ such that $\Phi=\pi\rtimes\mu$. \end{proposition} \begin{proof} As for the case of actions, we are going to use the correspondence between covariant representations of $(A,X, G, \delta_X)$ and covariant representations of $(C_u^*(A,X), G, \delta_u)$ as established in Proposition \ref{prop-extend universal}. For this we choose a non-degenerate completely isometric embedding of $(B,Y)$ into $\mathcal B(H)$ for some Hilbert space $H$. Then, if we compose a covariant morphism $(\pi,\mu)$ of $(A,X, G, \delta_X)$ into $M(B,Y)$ with this inclusion, we may view $(\pi, \mu)$ as a representation into $\mathcal B(H)=M({\mathcal K}(H))$.
By Proposition \ref{prop-extend universal}, this extends to a covariant representation, say $(\bar\pi, \mu)$, of $(C_u^*(A,X), G, \delta_u)$ into $\mathcal B(H)$. By the universal property of the $C^*$-algebra crossed product $(C_u^*(A,X)\rtimes_{\delta_u} \widehat{G}, \Lambda_{C_u^*(A,X)}, \Lambda_{\widehat{G}})$ there exists a unique $*$-homomorphism $\bar\pi\rtimes\mu: C_u^*(A,X)\rtimes_{\delta_u} \widehat{G}\to \mathcal B(H)$ such that $$(\bar\pi\rtimes\mu)\circ \Lambda_{C_u^*(A,X)}=\bar\pi\quad\text{and}\quad (\bar\pi\rtimes\mu)\circ \Lambda_{\widehat{G}}=\mu.$$ Define $\pi\rtimes\mu$ as the restriction of $\bar\pi\rtimes \mu$ to $X\rtimes_{\delta_X}\widehat{G}\subseteq C_u^*(A,X)\rtimes_{\delta_u} \widehat{G}$. Then the restriction of $(\bar\pi\rtimes\mu)\circ \Lambda_{C_u^*(A,X)}$ to $X$ equals $(\pi\rtimes\mu)\circ \Lambda_X$. Since $\bar\pi$ extends $\pi$, we see that $\pi\rtimes\mu: (A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})\to \mathcal B(H)$ is a representation which satisfies (\ref{eq-univ}). We still need to check that $\pi\rtimes\mu$ can be viewed as a generalized morphism into $M(B,Y)$. For this it suffices to check that $\pi\rtimes \mu(X\rtimes_{\delta_X}\widehat{G})B\subseteq Y$ and $\pi\rtimes \mu(A\rtimes_{\delta_A}\widehat{G})B=B$. But if we apply $\pi\rtimes\mu$ to a typical element of the form $\Lambda_X(x)\Lambda_{\widehat{G}}(f)$ of $X\rtimes_{\delta_X}\widehat{G}$ with $x\in X$, $f\in C_0(G)$, it follows from equation (\ref{eq-univ}) that $$\pi\rtimes \mu\big(\Lambda_X(x)\Lambda_{\widehat{G}}(f)\big)b=\pi(x)\mu(f)b\in Y$$ since, by definition of a covariant morphism into $M(B,Y)$, we have $\mu(f)\in M(B)$ and $\pi(x)\in M(Y)$. Moreover, since $\pi|_A:A\to M(B)$ and $\mu:C_0(G)\to M(B)$ are supposed to be non-degenerate, we get $$\pi\rtimes \mu(A\rtimes_{\delta_A}\widehat{G})B=(\pi(A)\mu(C_0(G)))B=\pi(A)(\mu(C_0(G))B)=\pi(A)B=B.$$ If, conversely, $\Phi: (A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})\to M(B,Y)$ is any non-degenerate generalized morphism, then we leave it as a straightforward exercise to check that the pair $(\pi,\mu)$ with $\pi:=\Phi\circ \Lambda_X, \mu:=\Phi\circ \Lambda_{\widehat{G}}$ is a non-degenerate covariant morphism such that $\Phi=\pi\rtimes\mu$. \end{proof} \begin{corollary}\label{cor-univhull} Suppose that $(A,X, G, \delta_X)$ is a coaction of $G$ on $(A,X)$. Then $$C_u^*(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})=C_u^*(A,X)\rtimes_{\delta_u}\widehat{G}.$$ \end{corollary} \begin{proof} We already observed in Remark \ref{rem-dual-cross} that $C_u^*(A,X)\rtimes_{\delta_u}\widehat{G}$ is a $C^*$-hull of $(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})$. So we only need to show that every representation $\Phi: (A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})\to B$ extends to a homomorphism $\bar\Phi: C_u^*(A,X)\rtimes_{\delta_u}\widehat{G}\to B$. For this let us assume without loss of generality that $B$ is generated by the image $\Phi(X\rtimes_{\delta_X}\widehat{G})$. Then $\Phi$ is non-degenerate and there exists a unique non-degenerate covariant representation $(\pi,\mu)$ of $(A,X,G, \delta_X)$ such that $\Phi=\pi\rtimes\mu$. By Proposition \ref{prop-extend universal}, $(\pi,\mu)$ extends uniquely to a covariant homomorphism $(\bar\pi,\mu)$ of $(C_u^*(A,X), G,\delta_u)$ into $M(B)$.
The arguments given in the proof of Proposition \ref{prop-dualuniv} then show that $\bar\pi\rtimes \mu$ is a $*$-homomorphism from $C_u^*(A,X)\rtimes_{\delta_u}\widehat{G}$ into $B$ which restricts to $\pi\rtimes\mu$ on $X\rtimes_{\delta_X}\widehat{G}$. \end{proof} \begin{remark}\label{rem-coact} We should note that, different from the situation for crossed products by actions, the definition of the crossed product $(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})$ does not depend on the crossed product of the universal $C^*$-hull $C_u^*(A,X)$. This algebra is only used to reduce the proof of the universal properties to the well-known case of coaction crossed products of $C^*$-algebras. One should observe that the definition of the crossed product for coactions is more like the definition of the reduced crossed product in the case of group actions. The fact that this construction already enjoys the universal property for covariant morphisms comes from the fact that the locally compact quantum group $C^*(G)$ is amenable for all $G$ (or, in other words, every group $G$ is coamenable; see, e.g., \cite{BS1, KV} for a discussion of these notions). Hence we only have one reasonable candidate for a coaction crossed product! For this reason, it also follows that the algebra part $A\rtimes_{\delta_A}\widehat{G}$ of $(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})$ is the (universal and reduced) crossed product of $A$ with respect to the coaction $\delta_A$. \end{remark} \section{Duality}\label{sec-dual} We now want to deduce versions of the Imai-Takai and Katayama duality for crossed products by actions and coactions. By Example \ref{ex dual} we know that the universal and reduced crossed products $(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$ and $(A\rtimes_\alpha^rG, X\rtimes_\alpha^rG)$ for an action $\alpha:G\to \operatorname{Aut}(A,X)$ carry canonical dual coactions $\widehat{\alpha}$ and $\widehat{\alpha_r}$ which are given as the integrated forms of the covariant morphisms $\widehat{\alpha}=(i_X\otimes 1)\rtimes (i_G\otimes u)$ and $\widehat{\alpha_r}=(i_X^r\otimes 1)\rtimes (i_G^r\otimes u)$ into $M(X\rtimes_{\alpha}^uG\check\otimes C^*(G))$ (resp. $M(X\rtimes_\alpha^rG\check\otimes C^*(G))$), where $(i_X, i_G)$ and $(i_X^r, i_G^r)$ are the canonical covariant morphisms from $(A,X, G,\alpha)$ into $M(X\rtimes_\alpha^uG)$ and $M(X\rtimes_\alpha^rG)$, respectively. Note that it follows directly from the constructions in Example \ref{ex dual} that these coactions extend to the dual coactions on $C_u^*(A,X)\rtimes_{\alpha, u}G=C_u^*(A\rtimes_\alpha^uG, X\rtimes_\alpha^uG)$ and $C_u^*(A,X)\rtimes_{\alpha, r}G$, respectively. In a similar way, we have \begin{proposition}\label{prop-dualaction} Suppose that $(A,X,G, \delta_X)$ is a coaction of $G$ on $(A,X)$. Then there is a canonical dual action $$\widehat\delta:G\to \operatorname{Aut}(A\rtimes_{\delta_A}\widehat{G}, X\rtimes_{\delta_X}\widehat{G})$$ which is given on a typical element $\Lambda_X(x)\Lambda_{\widehat{G}}(f)$ by $$\widehat\delta_{g}\big(\Lambda_X(x)\Lambda_{\widehat{G}}(f)\big)=\Lambda_X(x)\Lambda_{\widehat{G}}(\sigma_g(f)),$$ where $\sigma: G\to \operatorname{Aut}(C_0(G))$ denotes the right translation action, i.e., $\sigma_g(f)(s)=f(sg)$ for all $g,s\in G, f\in C_0(G)$. \end{proposition} \begin{proof} For any covariant representation $(\pi, \mu)$ of $(A,X,G,\delta_X)$ the pair \linebreak $(\pi, \mu\circ \sigma_g)$ is a covariant representation as well.
Indeed, for all $x\in X$ we have \begin{align*} &(\mu\circ \sigma_g\otimes\operatorname{id}_G)(w_G)(\pi(x)\otimes 1)(\mu\circ \sigma_g\otimes\operatorname{id}_G)(w_G)^*\\ &=(\mu \otimes\operatorname{id}_G)(w_G)(1\otimes u_g)(\pi(x)\otimes 1)(1\otimes u_g^*)(\mu \otimes\operatorname{id}_G)(w_G)^*\\ &=(\mu \otimes\operatorname{id}_G)(w_G)(\pi(x)\otimes 1)(\mu \otimes\operatorname{id}_G)(w_G)^*\\ &=(\pi\otimes\operatorname{id}_G)\circ \delta_X(x). \end{align*} Applying this to the regular representation $(\Lambda_X, \Lambda_{\widehat{G}})$ we get a covariant representation $(\Lambda_X, \Lambda_{\widehat{G}}\circ \sigma_g)$ of $(A,X,G, \delta_X)$ into $M(X\rtimes_{\delta_X} \widehat{G})$ whose integrated form $\widehat{\delta}_g$ maps $\Lambda_X(x)\Lambda_{\widehat{G}}(f)$ to $\Lambda_X(x)\Lambda_{\widehat{G}}(\sigma_g(f))$. It is then clear that $\widehat{\delta}_{g^{-1}}$ inverts $\widehat{\delta}_g$ and that $g\mapsto \widehat\delta_g$ is a homomorphism into $\operatorname{Aut}(X\rtimes_{\delta_X}\widehat{G})$. Since the action $\sigma:G\to \operatorname{Aut}(C_0(G))$ is strongly continuous, the same holds for $\widehat\delta$. \end{proof} We now formulate the analogue of the Imai-Takai duality theorem for crossed products of $C^*$-operator systems by actions. \begin{theorem}\label{thm-ImaiTakai} Suppose that $\alpha: G\to \operatorname{Aut}(A,X)$ is an action and let ${\mathcal K}:={\mathcal K}(L^2(G))$. Then there are canonical isomorphisms $$\big(A\rtimes_{\alpha}^uG\rtimes_{\widehat{\alpha}}\widehat{G}, X\rtimes_{\alpha}^uG\rtimes_{\widehat{\alpha}}\widehat{G}\big) \cong \big(A\otimes {\mathcal K}, X\otimes {\mathcal K}\big)$$ and $$\big(A\rtimes_{\alpha}^rG\rtimes_{\widehat{\alpha^r}}\widehat{G}, X\rtimes_{\alpha}^rG\rtimes_{\widehat{\alpha^r}}\widehat{G}\big) \cong \big(A\otimes {\mathcal K}, X\otimes {\mathcal K}\big)$$ which transfer the double dual actions $\widehat{\widehat{\alpha}}$ (resp. $\widehat{\widehat{\alpha^r}}$) to the diagonal action $\alpha\otimes \operatorname{Ad}\rho$, where $\rho:G\to U(L^2(G))$ denotes the right regular representation of $G$. \end{theorem} \begin{proof} Both assertions can be deduced easily from the well-known Imai-Takai duality theorem for the corresponding action on the universal $C^*$-hull $C_u^*(A,X)$. Indeed, it is shown in \cite[Theorem 5.1]{Rae} (in the even more general situation of a dual coaction of a twisted action) that for any action $\beta:G\to \operatorname{Aut}(B)$ the Imai-Takai isomorphism $$\Psi: B\rtimes_\beta G\rtimes_{\widehat\beta} \widehat{G}\stackrel{\cong}{\to} B\otimes {\mathcal K}$$ is given by the integrated form $(\Lambda_B\rtimes \Lambda_G)\rtimes \Lambda_{\widehat{G}}$, where $\Lambda_B\rtimes\Lambda_G: B\rtimes_\beta G\to M(B\otimes{\mathcal K}(L^2(G)))$ is the regular representation of $B\rtimes_\beta G$ and $\Lambda_{\widehat{G}}=1\otimes M$. It is then clear that this factors through a homomorphism of $B\rtimes_{\beta,r} G\rtimes_{\widehat\beta} \widehat{G}$, which shows that both double crossed products coincide.
Now, if we apply this to the system $(C_u^*(A,X), G,\alpha)$ we obtain the isomorphism $$(\Lambda_{C_u^*(A,X)}\rtimes\Lambda_G)\rtimes\Lambda_{\widehat{G}}: C_u^*(A,X)\rtimes_{\alpha,u} G\rtimes_{\widehat{\alpha_u}} \widehat{G}\stackrel{\cong}{\to} C_u^*(A,X)\otimes {\mathcal K}$$ which then clearly restricts to an isomorphism $$(\Lambda_X\rtimes\Lambda_G)\rtimes\Lambda_{\widehat{G}}: X\rtimes_{\alpha}^uG\rtimes_{\widehat{\alpha}} \widehat{G}\stackrel{\cong}{\to} X\otimes {\mathcal K}$$ and similarly for $X\rtimes_{\alpha}^rG\rtimes_{\widehat{\alpha_r}}\widehat{G}$. The statement on the double dual action $\widehat{\widehat{\alpha}}$ follows from the analogous statement for the double dual action $\widehat{\widehat{\alpha}}$ on the double crossed product of $C_u^*(A,X)$. \end{proof} We now proceed to a discussion of Katayama duality, where we want to study double crossed products $$(A,X)\rtimes_{\delta_X}\widehat{G}\rtimes_{\widehat{\delta}_X}G$$ by dual actions of coactions. Note that here it will usually matter whether we take the universal or the reduced crossed product (or any exotic crossed product in between) on the outside, so we need to clarify this point. So let us first recall the situation if we start with a coaction $\delta: B\to M(B\check\otimes C^*(G))$ of $G$ on a $C^*$-algebra $B$. It is shown by Nilsen in \cite{Nilsen} that there exists a surjective $*$-homomorphism \begin{equation}\label{eq-max} \Phi_B: B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, u}G\twoheadrightarrow B\otimes {\mathcal K}(L^2(G)) \end{equation} given by the integrated form $$\Phi_B=\big(\Lambda_B\rtimes \Lambda_{\widehat{G}}\big)\rtimes(1\otimes \rho)$$ of the covariant homomorphism $(\Lambda_B\rtimes\Lambda_{\widehat{G}}, 1\otimes \rho)$ of $(B\rtimes_{\delta}\widehat{G}, G, \widehat{\delta})$, where $(\Lambda_B,\Lambda_{\widehat{G}})=\big((\operatorname{id}_B\otimes \lambda)\circ \delta, 1\otimes M\big)$ is the regular representation of $(B,G,\delta)$ into $M(B\otimes {\mathcal K}(L^2(G)))$. A coaction $\delta$ is called {\em maximal} if $\Phi_B$ is an isomorphism, and it is called {\em normal} if $\Phi_B$ factors through an isomorphism $B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta},r}G\cong B\otimes {\mathcal K}(L^2(G))$. In general, the surjection $\Phi_B$ will factor through an isomorphism $$B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, \mu}G \cong B\otimes {\mathcal K}(L^2(G))$$ of some {\em exotic crossed product} $B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, \mu}G$ which lies between the maximal and the reduced crossed product in the sense that it is a $C^*$-completion of $C_c(G, B\rtimes_\delta\widehat{G})$ such that the identity map on $C_c(G, B\rtimes_\delta\widehat{G})$ induces surjective $*$-homomorphisms \begin{equation}\label{eq-surjection} B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, u}G\twoheadrightarrow B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, \mu}G\twoheadrightarrow B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, r}G. \end{equation} It has been shown by Quigg \cite{Q} that $(B, G, \delta)$ is normal if and only if $\Lambda_B=(\operatorname{id}_B\otimes\lambda)\circ \delta: B\to M(B\otimes {\mathcal K}(L^2(G)))$ is faithful.
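Note, in passing, that for amenable $G$ these notions collapse: in this case $\lambda$, and hence also $\Lambda_B=(\operatorname{id}_B\otimes\lambda)\circ \delta$, is faithful, and the canonical surjection $B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, u}G\twoheadrightarrow B\rtimes_{\delta}\widehat{G}\rtimes_{\widehat{\delta}, r}G$ is an isomorphism, so that every coaction of an amenable group is both maximal and normal.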
In general the coaction $\delta$ determines a normal coaction $\delta_n$ (called the {\em normalization} of $\delta$) on the quotient $B_n:=B/(\ker\Lambda_B)$ such that the $\delta$-$\delta_n$ equivariant quotient map $\Psi_n:B\twoheadrightarrow B_n$ descends to an isomorphism of the dual systems $$(B\rtimes_{\delta}\widehat{G}, G, \widehat{\delta})\cong (B_n\rtimes_{\delta_n}\widehat{G}, G, \widehat{\delta_n}).$$ If $(B,\delta)=(A\rtimes_{\alpha} G, \widehat{\alpha})$ is the dual coaction on the full crossed product by an action $\alpha$ of $G$ on a $C^*$-algebra $A$, then $(B,\delta)$ is maximal (see \cite{EKQ}) and the normalization of $(B,\delta)$ is given by the pair $(B_n,\delta_n)=(A\rtimes_rG, \widehat{\alpha}_r)$, the dual coaction on the reduced crossed product. We now want to extend this picture to crossed products of $C^*$-operator systems. We start with the following obvious consequence of the above: \begin{theorem}\label{thm-Katayama} Suppose that $(A,X,G, \delta_X)$ is a coaction of the locally compact group $G$ on the $C^*$-operator system $(A,X)$. Then there is a canonical surjective morphism $$\Phi_X: \big(A\rtimes_{\delta_A}\widehat{G}\rtimes_{\widehat{\delta_A}}^uG, X\rtimes_{\delta_X}\widehat{G}\rtimes_{\widehat{\delta_X}}^uG\big)\twoheadrightarrow \big(A\otimes{\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G))\big)$$ given by the integrated form $$\Phi_X=\big(\Lambda_X\rtimes \Lambda_{\widehat{G}}\big)\rtimes(1\otimes \rho)$$ of the covariant homomorphism $(\Lambda_X\rtimes\Lambda_{\widehat{G}}, 1\otimes \rho)$ of $(X\rtimes_{\delta_X}\widehat{G}, G, \widehat{\delta_X})$, where $(\Lambda_X,\Lambda_{\widehat{G}})=\big((\operatorname{id}_X\otimes \lambda)\circ \delta_X, 1\otimes M\big)$ is the regular representation of $(A,X,G,\delta_X)$ into $M(X\otimes {\mathcal K}(L^2(G)))$. Moreover, there is an exotic completion $(A\rtimes_{\delta_A}\widehat{G}\rtimes_{\widehat{\delta_A}}^{\mu}G, X\rtimes_{\delta_X}\widehat{G}\rtimes_{\widehat{\delta_X}}^{\mu}G)$ of the pair $\big(C_c(G, A\rtimes_{\delta_A} \widehat{G}), C_c(G, X\rtimes_{\delta_X} \widehat{G})\big)$, lying between the maximal and the reduced crossed products, such that $\Phi_X$ factors through a completely isometric isomorphism $$\big(A\rtimes_{\delta_A}\widehat{G}\rtimes_{\widehat{\delta_A}}^\mu G, X\rtimes_{\delta_X}\widehat{G}\rtimes_{\widehat{\delta_X}}^\mu G\big)\cong \big(A\otimes{\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G))\big).$$ \end{theorem} \begin{proof} The result follows from the above cited results for coactions on $C^*$-algebras applied to the coaction $(B,G, \delta)=(C_u^*(A,X), G, \delta_u)$ and the restriction of the corresponding $*$-homomorphism $\Phi_B$ to the subsystem $\big(A\rtimes_{\delta_A}\widehat{G}\rtimes_{\widehat{\delta_A}}^uG, X\rtimes_{\delta_X}\widehat{G}\rtimes_{\widehat{\delta_X}}^uG\big)\subseteq C_u^*(A,X)\rtimes_{\delta_u}\widehat{G}\rtimes_{\widehat{\delta_u},u}G$. \end{proof} \begin{corollary}\label{cor-katayama} Suppose that $G$ is an amenable locally compact group. Then the morphism $$\Phi_X: \big(A\rtimes_{\delta_A}\widehat{G}\rtimes_{\widehat{\delta_A}}^uG, X\rtimes_{\delta_X}\widehat{G}\rtimes_{\widehat{\delta_X}}^uG\big)\twoheadrightarrow \big(A\otimes{\mathcal K}(L^2(G)), X\otimes {\mathcal K}(L^2(G))\big)$$ of Theorem \ref{thm-Katayama} is a completely isometric isomorphism. \end{corollary} \section{$C^*$-operator bimodules}\label{sec-bimodules} In this section we want to study $C^*$-operator systems which are related to $C^*$-operator bimodules. This will later lead to an easy way to define crossed products by group actions on $C^*$-operator bimodules.
Since every operator space can be regarded as a $C^*$-operator bimodule in a canonical way, this will also give a construction of crossed products by group actions on operator spaces. \begin{definition}\label{def-C*-operatorbimodule} Let $H$ and $K$ be Hilbert spaces. A concrete {\em $C^*$-operator bimodule} $(A,V, B)$ inside $\mathcal B(K,H)$ consists of a norm-closed subspace $V\subseteq \mathcal B(K,H)$ together with a $C^*$-subalgebra $A\subseteq \mathcal B(H)$ and a $C^*$-subalgebra $B\subseteq \mathcal B(K)$ satisfying $AV=V=VB$, $AH=H$, and $BK=K$. A {\em representation} of the $C^*$-operator bimodule $(A,V,B)$ on a pair of Hilbert spaces $(K', H')$ is a triple of maps $\rho=(\rho_A, \rho_V, \rho_B)$ such that $\rho_A:A\to \mathcal B(H')$ and $\rho_B:B\to \mathcal B(K')$ are $*$-homomorphisms and $\rho_V: V\to \mathcal B(K',H')$ is a completely bounded map such that $$\rho_V(avb)=\rho_A(a)\rho_V(v)\rho_B(b)\quad\forall a\in A, v\in V, b\in B.$$ We say that $\rho$ is {\em non-degenerate} if $\rho_A$ and $\rho_B$ are non-degenerate. We say that $\rho$ is {\em completely contractive} if $\rho_V$ is completely contractive and $\rho$ is called {\em completely isometric} if $\rho_A$ and $\rho_B$ are faithful and $\rho_V$ is completely isometric. A {\em morphism} from the $C^*$-operator bimodule $(A,V,B)$ to the $C^*$-operator bimodule $(C, W,D)$ inside $\mathcal B(K', H')$ is a representation $\varphi=(\varphi_A,\varphi_V,\varphi_B)$ of $(A,V,B)$ into $\mathcal B(K',H')$ such that $\varphi_A(A)\subseteq C, \varphi_V(V)\subseteq W$, and $\varphi_B(B)\subseteq D$. The invertible morphisms (or isomorphisms) are then precisely the surjective completely isometric morphisms. We shall often identify isomorphic $C^*$-operator bimodules. \end{definition} \begin{remark}\label{rem-ospace} (1) Every concrete operator space $V\subseteq \mathcal B(K,H)$ determines the concrete $C^*$-operator bimodule $(\C1_H, V, \mathbb{C} 1_K)$. If $V'\subseteq \mathcal B(K',H')$ is another operator space and $\varphi_V: V\to V'$ is a completely bounded map, then $\varphi=(\varphi_{\C1_H}, \varphi_V, \varphi_{\mathbb{C} 1_K})$ with $\varphi_{1_H}(\lambda 1_H)=\lambda 1_{H'}$ and $\varphi_{1_K}(\lambda 1_K)=\lambda 1_{K'}$ is a morphism from $(\mathbb{C} 1_H, V, \C1_K)\to (\mathbb{C} 1_{H'}, V', \C1_{K'})$. Hence $V$ and $V'$ are isomorphic as operator spaces if and only if $(\mathbb{C} 1_H, V, \C1_K)$ and $(\mathbb{C} 1_{H'}, V', \C1_{K'})$ are isomorphic as $C^*$-operator bimodules. In this way we may regard the category of (concrete) $C^*$-operator bimodules as an extension of the category of (concrete) operator spaces. \\ (2) If $(A,V, B)$ is a triple of subsets of $\big(\mathcal B(H), \mathcal B(K,H) , \mathcal B(K)\big)$ which satisfies all requirements of a $C^*$-operator bimodule as in Definition \ref{def-C*-operatorbimodule} except the non-degeneracy requirements $AH=H$ and $BK=K$, let $H':=AH\subseteq H$ and $K':=BK\subseteq K$. Then, via restriction, we obtain a completely isometric and non-degenerate representation of $(A,V,B)$ on $\mathcal B(K',H')$. \\ (3) If $\rho=(\rho_A, \rho_V, \rho_B)$ is a completely bounded representation (or morphism) of $(A,V,B)$ with $0<C:=\|\rho_V\|_{cb}$, then $\frac{1}{C}\rho:=(\rho_A, \frac{1}{C}\rho_V, \rho_B)$ is a completely contractive representation (resp. morphism). This easy observation shows that in most situations one may assume without loss of generality that $\rho$ is completely contractive.
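\\ (4) A standard example to keep in mind: if $E$ is a (right) Hilbert $B$-module and $B\subseteq \mathcal B(K)$ acts non-degenerately on a Hilbert space $K$, then, representing $E$ and ${\mathcal K}(E)$ on $H:=E\otimes_BK$ in the canonical way (e.g., via the linking algebra of $E$), one checks that $({\mathcal K}(E), E, B)$ becomes a concrete $C^*$-operator bimodule inside $\mathcal B(K,H)$, since ${\mathcal K}(E)E=E=EB$.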
\end{remark} The following proposition extends the well-known construction which assigns to each operator space $V\subseteq \mathcal B(K,H)$ the Paulsen operator system $X(V):=\left(\begin{matrix} \mathbb{C} 1_H &V\\ V^* & \mathbb{C} 1_K\end{matrix}\right)\subseteq \mathcal B(H\oplus K)$. For this let $(A,V, B)$ be a $C^*$-operator bimodule in $\mathcal B(K,H)$. Let $$X(A,V,B):=\left\{\left(\begin{matrix} a&v\\ w^*&b\end{matrix}\right): a\in A, v,w\in V, b\in B\right\}\subseteq \mathcal B(H\oplus K),$$ and let $A\oplus B$ be viewed as the set of diagonal operators $\left(\begin{matrix}a&0\\0&b\end{matrix}\right)\in \mathcal B(H\oplus K)$ with $a\in A, b\in B$. Then it is easily checked that $(A\oplus B, X(A,V,B))$ is a $C^*$-operator system in $\mathcal B(H\oplus K)$ as defined in Definition \ref{def-cstaros}. On the other hand, one easily checks that the set of operators $${\operatorname{Op}}(A,V,B):=\left\{\left(\begin{matrix} a&v\\ 0&b\end{matrix}\right): a\in A, v\in V, b\in B\right\}\subseteq \mathcal B(H\oplus K),$$ is a concrete operator algebra in $\mathcal B(H\oplus K)$ such that each approximate unit of $A\oplus B$ serves as an approximate unit of ${\operatorname{Op}}(A,V,B)$. \begin{definition}\label{def-Paulsen} We call $(A\oplus B, X(A,V,B))$ the {\em Paulsen $C^*$-operator system} of $(A,V, B)$ and we call ${\operatorname{Op}}(A,V,B)$ the {\em Paulsen operator algebra} of $(A,V,B)$. \end{definition} \begin{proposition}\label{prop-reps} Let $(A,V,B)$ be a $C^*$-operator bimodule. Then there is a one-to-one correspondence between \begin{enumerate} \item non-degenerate completely contractive representations of $(A,V,B)$; \item non-degenerate completely positive contractive representations of the $C^*$-operator system $\big(A\oplus B, X(A,V,B)\big)$; and \item non-degenerate completely contractive operator algebra representations of ${\operatorname{Op}}(A,V,B)$. \end{enumerate} Given a representation $\rho=(\rho_A, \rho_V,\rho_B):(A,V,B)\to \mathcal B(K, H)$ the corresponding representation $\pi$ of $X(A,V, B)$ into $\mathcal B(H\oplus K)$ is given by $$\pi\Big(\left(\begin{matrix} a&v\\ w^*& b\end{matrix}\right)\Big)= \left(\begin{matrix} \rho_A(a)& \rho_V(v)\\ \rho_V(w)^*& \rho_B(b)\end{matrix}\right),\quad a\in A, v,w\in V, b\in B,$$ and given a representation $\pi: X(A,V,B)\to \mathcal B(L)$, the corresponding representation of ${\operatorname{Op}}(A,V,B)$ is given by the restriction of $\pi$ to ${\operatorname{Op}}(A,V,B)\subseteq X(A,V,B)$. \end{proposition} \begin{proof} Let $\rho=(\rho_A,\rho_V,\rho_B)$ be a completely contractive representation of $(A,V,B)$ into $\mathcal B(K',H')$. We may further assume without loss of generality that $A$ and $B$ are unital -- otherwise we replace $A$ and $B$ by their unitisations $\tilde{A}=A+\mathbb{C} 1_H$ and $\tilde{B}=B+ \C1_K$ and $\rho_A$ and $\rho_B$ by their canonical unital extensions to $\tilde{A}$ and $\tilde{B}$, respectively. Suppose now that $T:=\left(\begin{matrix} a&v\\ w^*& b\end{matrix}\right)\in X(A,V,B)$ is positive. Then $w=v$ and $a$ and $b$ are positive elements in $A$ and $B$, respectively. Let $\pi$ be as in the proposition. In order to see that $\pi(T)$ is positive, it suffices to show that $\pi(T+\varepsilon 1)=\pi(T)+\varepsilon 1$ is positive for all $\varepsilon>0$.
Writing $a_\varepsilon:=a+\varepsilon 1$ and $b_\varepsilon:=b+\varepsilon 1$, we get \begin{align*} 0&\leq \left(\begin{matrix} a_\varepsilon^{-\frac{1}{2}}&0\\ 0& b_\varepsilon^{-\frac{1}{2}}\end{matrix}\right)\left(\begin{matrix} a_\varepsilon&v\\ v^*& b_\varepsilon\end{matrix}\right)\left(\begin{matrix} a_\varepsilon^{-\frac{1}{2}}&0\\ 0& b_\varepsilon^{-\frac{1}{2}}\end{matrix}\right)\\ &=\left(\begin{matrix} 1&a_\varepsilon^{-\frac{1}{2}}vb_\varepsilon^{-\frac{1}{2}}\\ b_\varepsilon^{-\frac{1}{2}}v^*a_\varepsilon^{-\frac{1}{2}}& 1\end{matrix}\right), \end{align*} from which it follows that $\|a_\varepsilon^{-\frac{1}{2}}vb_\varepsilon^{-\frac{1}{2}}\|\leq 1$. Since $\rho_V$ is completely contractive, we also have $\|\rho_V(a_\varepsilon^{-\frac{1}{2}}vb_\varepsilon^{-\frac{1}{2}})\|\leq 1$. It then follows that \begin{align*} &\pi(T+\varepsilon 1)\\ &=\left(\begin{matrix} \rho_A(a_\varepsilon^{\frac{1}{2}})&0\\ 0& \rho_B(b_\varepsilon^{\frac{1}{2}})\end{matrix}\right) \left(\begin{matrix} 1&\rho_V(a_\varepsilon^{-\frac{1}{2}}vb_\varepsilon^{-\frac{1}{2}})\\ \rho_V(a_\varepsilon^{-\frac{1}{2}}vb_\varepsilon^{-\frac{1}{2}})^*& 1\end{matrix}\right) \left(\begin{matrix} \rho_A(a_\varepsilon^{\frac{1}{2}})&0\\ 0& \rho_B(b_\varepsilon^{\frac{1}{2}})\end{matrix}\right) \end{align*} is positive as well. A similar computation performed on matrix algebras over $X(A,V,B)$ then shows that $\pi:X(A,V,B)\to \mathcal B(H\oplus K)$ is completely positive. Since $\pi$ is unital, it is also completely contractive. It is clear that every non-degenerate completely positive contractive representation of $X(A,V,B)$ restricts to a non-degenerate completely contractive operator algebra representation of ${\operatorname{Op}}(A,V,B)$. So let us finally assume that we have a non-degenerate completely contractive operator algebra representation $\pi: {\operatorname{Op}}(A,V,B)\to \mathcal B(L)$ for some Hilbert space $L$. Let us regard $A, V$ and $B$ as subspaces of ${\operatorname{Op}}(A,V,B)$ in the canonical way. Then the restrictions $\pi_A, \pi_V, \pi_B$ of $\pi$ to $A,V$ and $B$ are completely contractive as well. Since $\pi_A:A\to \mathcal B(L)$ and $\pi_B:B\to \mathcal B(L)$ are contractive algebra homomorphisms, it follows from \cite[Proposition A.5.8]{Blecher1} that they are $*$-homomorphisms. Writing $H:=\pi_A(A)L$ and $K:=\pi_B(B)L$ we get $L=H\oplus K$ and $(\pi_A,\pi_V, \pi_B)$ is a non-degenerate representation of $(A,V,B)$ in $\mathcal B(K,H)$ as in Definition \ref{def-C*-operatorbimodule}. \end{proof} \begin{remark}\label{rem-nondeg} If we allow possibly degenerate representations of $(A,V,B)$, $X(A,V,B)$ or ${\operatorname{Op}}(A,V,B)$ in the statement of Proposition \ref{prop-reps} then we can always pass to appropriate subspaces of the representation spaces to make these representations non-degenerate. Then the one-to-one correspondence will still hold modulo the possible addition of direct summands on which all operators act trivially. \end{remark} As a direct consequence of Proposition \ref{prop-reps} we now get \begin{corollary}\label{cor-morphisms} Suppose that $(A,V,B)$ and $(C,W,D)$ are $C^*$-operator bimodules. Then there is a one-to-one correspondence between \begin{enumerate} \item completely contractive morphisms $\varphi:(A,V,B)\to (C,W,D)$, \item completely positive contractive morphisms $\phi:X(A,V,B)\to X(C,W,D)$ preserving the corners, and \item completely contractive homomorphisms $\psi:{\operatorname{Op}}(A,V,B)\to{\operatorname{Op}}(C,W,D)$ preserving the corners.
\end{enumerate} If $\varphi=(\varphi_A,\varphi_V,\varphi_B)$ is a morphism from $(A,V,B)$ to $(C,W,D)$ then the corresponding morphism $\phi:X(A,V,B)\to X(C,W,D)$ is given by $$\phi\Big(\left(\begin{matrix} a&v\\ w^*& b\end{matrix}\right)\Big)= \left(\begin{matrix} \varphi_A(a)& \varphi_V(v)\\ \varphi_V(w)^*& \varphi_B(b)\end{matrix}\right),\quad a\in A, v,w\in V, b\in B,$$ and if $\phi:X(A,V,B)\to X(C,W,D)$ is a morphism as in (2), then its restriction $\psi$ to ${\operatorname{Op}}(A,V,B)$ is the corresponding morphism from ${\operatorname{Op}}(A,V,B)$ to ${\operatorname{Op}}(C,W,D)$. The correspondences are compatible with taking compositions of morphisms. \end{corollary} \begin{proof} It is clear that the constructions given above preserve all required algebraic properties. The combination of Proposition \ref{prop-reps} with Remark \ref{rem-nondeg} shows that they also preserve the property of being completely contractive. \end{proof} \begin{remark}\label{rem-C*-hull-bimodule} In what follows it is useful to consider representations of $C^*$-operator bimodules into general $C^*$-algebras. By such a representation we understand a triple $\rho=(\rho_A,\rho_V,\rho_B)$ of maps from $(A,V,B)$ into a $C^*$-algebra $C$ satisfying \begin{enumerate} \item $\rho_A:A\to C$ and $\rho_B:B\to C$ are $*$-homomorphisms such that $\rho_A(A) \rho_B(B)=\{0\}$, \item $\rho_V:V\to C$ is completely contractive and $\rho_V(avb)=\rho_A(a)\rho_V(v)\rho_B(b)$ for all $a\in A, v\in V$ and $b\in B$. \end{enumerate} Then we have a well-defined $*$-homomorphism $\rho_A\oplus \rho_B:A\oplus B\to C$ mapping $a\oplus b$ to $\rho_A(a)+\rho_B(b)$ and we say that $\rho$ is non-degenerate if $\rho_A\oplus \rho_B$ maps approximate units of $A\oplus B$ to approximate units of $C$ (this is equivalent to the fact that $(\rho_A\oplus\rho_B)(A\oplus B)C=C$). In general we may always pass to the subalgebra $C'$ of $C$ generated by $\rho_A(A)\cup\rho_V(V)\cup\rho_B(B)$ to obtain a non-degenerate representation into this subalgebra. Then, representing $C$ (resp. $C'$) faithfully and non-degenerately on a Hilbert space $L$, we may regard $\rho$ as a non-degenerate representation of $(A,V,B)$ in $\mathcal B(K,H)$ with $H=\rho_A(A)L, K=\rho_B(B)L$. This allows us to use the results of Proposition \ref{prop-reps} also for representations into $C^*$-algebras. Note that conversely any triple of subsets $(A,V,B)$ of a $C^*$-algebra $C$ such that $A$ and $B$ are $C^*$-subalgebras of $C$, $V$ is a closed subspace of $C$, $AB=\{0\}$ and $AV=V=VB$ determines the structure of a $C^*$-operator bimodule on $(A,V,B)$ via a faithful representation of $C$ on a Hilbert space. \end{remark} \begin{definition}\label{def-C*-hull} Let $(A,V,B)$ be a $C^*$-operator bimodule. If $j=(j_A,j_V, j_B)$ is a completely isometric representation of $(A,V,B)$ into a $C^*$-algebra $C$ such that $C$ is generated by $j_A(A)\cup j_V(V)\cup j_B(B)$ as a $C^*$-algebra, then $\big(C, (j_A,j_V,j_B)\big)$ is called a {\em $C^*$-hull} of $(A,V,B)$.
A $C^*$-hull $\big(C_u^*(A,V,B), (i_A,i_V,i_B)\big)$ is called {\em universal} if for any completely contractive representation $\rho=(\rho_A, \rho_V,\rho_B)$ of $(A,V,B)$ into some $C^*$-algebra $D$ there exists a $*$-homomorphism $$\rho_C:C_u^*(A,V,B)\to D\; \text{such that} \; \rho_A=\rho_C\circ i_A, \;\rho_V=\rho_C\circ i_V,\;\text{and}\; \rho_B=\rho_C\circ i_B.$$ On the other hand, we say that a $C^*$-hull $\big(C_e^*(A,V,B), (k_A,k_V,k_B)\big)$ is {\em enveloping} if for any other $C^*$-hull $\big(C, (j_A, j_V, j_B)\big)$ there exists a $*$-homomor\-phism $$k_C:C\to C_e^*(A,V,B)\;\text{such that}\; k_A=k_C\circ j_A, k_V=k_C\circ j_V,\;\text{and}\; k_B=k_C\circ j_B.$$ (The $*$-homomorphisms $\rho_C$ and $k_C$ are then uniquely determined by these properties.) \end{definition} As a consequence, if the universal and the enveloping $C^*$-hulls exist, then for any $C^*$-hull $\big(C, (j_A, j_V, j_B)\big)$ of $(A,V,B)$ we obtain unique surjective $*$-homomorphisms $$C_u^*(A,V,B)\twoheadrightarrow C\twoheadrightarrow C_e^*(A,V,B)$$ which commute with the embeddings of $(A, V,B)$ into these $C^*$-algebras. Moreover, it follows easily from the universal properties that the universal and enveloping $C^*$-hulls are unique up to isomorphisms which are compatible with the embeddings of $(A,V,B)$. \begin{proposition}\label{prop-C*-hull-bimodule} For each $C^*$-operator bimodule $(A,V,B)$ the universal and enveloping $C^*$-hulls exist. To be more precise: let $$\big(C_u^*(X(A,V,B)), i_{X(A,V,B)}\big)\quad\text{and}\quad \big(C_e^*(X(A,V,B)), k_{X(A,V,B)}\big)$$ denote the universal and enveloping $C^*$-hulls of the $C^*$-operator system $X(A,V,B)$ and let $(i_A,i_V,i_B)$ and $(k_A, k_V, k_B)$ be the compositions of $i_{X(A,V,B)}$ and $k_{X(A,V,B)}$ with the canonical inclusions of $(A,V,B)$ into $X(A,V,B)$. Then $$\big(C_u^*(X(A,V,B)),(i_A,i_V,i_B)\big)\quad\text{and}\quad \big(C_e^*(X(A,V,B)), (k_A, k_V, k_B)\big)$$ are the universal and enveloping $C^*$-hulls of $(A,V,B)$. Alternatively, let $$\big(C_u^*({\operatorname{Op}}(A,V,B)), i_{{\operatorname{Op}}(A,V,B)}\big)\quad\text{and}\quad\big(C_e^*({\operatorname{Op}}(A,V,B)), k_{{\operatorname{Op}}(A,V,B)}\big)$$ denote the universal and enveloping $C^*$-hulls of the operator algebra \linebreak ${\operatorname{Op}}(A,V,B)$ as in \cite[Propositions 2.4.2 and 4.3.5]{Blecher1} and let $(i_A,i_V,i_B)$ and $(k_A, k_V, k_B)$ be the compositions of $i_{{\operatorname{Op}}(A,V,B)}$ and $k_{{\operatorname{Op}}(A,V,B)}$ with the canonical inclusions of $(A,V,B)$ into ${\operatorname{Op}}(A,V,B)$. Then $$\big(C_u^*({\operatorname{Op}}(A,V,B)),(i_A,i_V,i_B)\big)\quad\text{and}\quad\big(C_e^*({\operatorname{Op}}(A,V,B)), (k_A, k_V, k_B)\big)$$ are the universal and enveloping $C^*$-hulls of $(A,V,B)$. \end{proposition} \begin{proof} The proof is an easy consequence of the definitions of the universal and enveloping $C^*$-hulls together with Proposition \ref{prop-reps} and Remark \ref{rem-C*-hull-bimodule}. So we omit further details. \end{proof} We are now turning our attention to multipliers: \begin{definition}\label{def-multopbimodule} Let $(A,V,B)$ be a $C^*$-operator bimodule which is non-degenerately and completely isometrically represented on $\mathcal B(K,H)$.
Then the {\em multiplier bimodule} of $(A,V,B)$ is the triple $\big(M(A), M(V), M(B)\big)$ in which $M(A)$ and $M(B)$ are the multiplier algebras of the $C^*$-algebras $A$ and $B$, respectively, and where $$M(V)=\{T\in \mathcal B(K,H): AT\cup TB\subseteq V\}.$$ \end{definition} \begin{remark} One easily checks that $\big(M(A), M(V), M(B)\big)$ is again a $C^*$-operator bimodule represented on $\mathcal B(K,H)$. If one of $A$ or $B$ is unital, we clearly have $M(V)=V$. Notice that, similarly to the construction of the multiplier $C^*$-operator system $(M(A), M(X))$ for a given $C^*$-operator system $(A,X)$, the space $M(V)$ heavily depends on the algebras $A$ and $B$. We refrain from using a notation like $_AM_B(V)$ to keep things simple. \end{remark} Recall (e.g., from \cite{Blecher1}) that for any completely isometrically and faithfully represented operator algebra $\mathcal A\subseteq \mathcal B(L)$ the multiplier algebra $M(\mathcal A)$ can be defined (up to completely isometric isomorphism) as $$M(\mathcal A)=\{T\in \mathcal B(L): T\mathcal A\cup \mathcal AT\subseteq \mathcal A\}.$$ Recall also the definition of the multiplier system of a $C^*$-operator system as given in Lemma \ref{lem-multipliers}. \begin{proposition}\label{prop-multiplier} Let $\big(M(A), M(V), M(B)\big)$ be the multiplier bimodule of the $C^*$-operator bimodule $(A,V,B)$ in $\mathcal B(K,H)$. Then $$ \big(M(A\oplus B), M(X(A,V,B))\big) =\big(M(A)\oplus M(B), X\big(M(A), M(V), M(B)\big)\big)$$ is the multiplier system of the $C^*$-operator system $\big(A\oplus B, X(A,V,B)\big)$ and ${\operatorname{Op}}\big(M(A), M(V), M(B)\big)=M\big({\operatorname{Op}}(A,V,B)\big)$. \end{proposition} \begin{proof} Recall from Lemma \ref{lem-multipliers} that $M(X(A,V,B))$ is defined as the set of all elements $T\in \mathcal B(H\oplus K)$ such that $(A\oplus B)T\cup T(A\oplus B)\subseteq X(A,V,B)$. Writing $$T=\left(\begin{matrix}T_{11} & T_{12}\\ T_{21}& T_{22}\end{matrix}\right)\in \left(\begin{matrix}\mathcal B(H)& \mathcal B(K,H)\\ \mathcal B(H,K)& \mathcal B(K)\end{matrix}\right)$$ and computing ${\operatorname{diag}}(a,0)T, T{\operatorname{diag}}(a,0), {\operatorname{diag}}(0,b)T, T{\operatorname{diag}}(0,b)\in X(A,V, B)$ easily shows that $T\in \left(\begin{matrix} M(A)&M(V)\\ M(V)^*&M(B)\end{matrix}\right)=X\big(M(A), M(V), M(B)\big)$. Conversely one easily checks that $X\big(M(A), M(V), M(B)\big)\subseteq M(X(A,V,B))$. A similar argument also shows ${\operatorname{Op}}\big(M(A), M(V), M(B)\big)=M\big({\operatorname{Op}}(A,V,B)\big)$. \end{proof} \begin{definition}\label{def-genmor} Let $(A,V,B)$ and $(C,W,D)$ be two $C^*$-operator bimodules. A {\em generalized morphism} from $(A,V,B)$ to $(C,W,D)$ is a completely contractive morphism $\varphi:(A,V,B)\to (M(C), M(W), M(D))$ such that $\varphi_A(A)C=C$ and $\varphi_B(B)D=D$. \end{definition} \begin{example} Every non-degenerate representation $\rho$ of $(A,V,B)$ into some $\mathcal B(K,H)$ can be regarded as a generalized morphism from $(A,V,B)$ to the $C^*$-operator bimodule $\big(\mathcal K(H), \mathcal K(K,H), \mathcal K(K)\big)$.
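Indeed, non-degeneracy of $\rho_A$ and $\rho_B$ gives $\rho_A(A)\mathcal K(H)=\mathcal K(H)$ and $\rho_B(B)\mathcal K(K)=\mathcal K(K)$, and we have $\rho_V(V)\subseteq \mathcal B(K,H)=M(\mathcal K(K,H))$, since by Definition \ref{def-multopbimodule} the multiplier bimodule of $\big(\mathcal K(H), \mathcal K(K,H), \mathcal K(K)\big)$ is $\big(\mathcal B(H), \mathcal B(K,H), \mathcal B(K)\big)$.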
\end{example} The following proposition is now a direct combination of Proposition \ref{prop-reps}, Proposition \ref{prop-multiplier} and Lemma \ref{lem-non-deg-morphism}, so we leave the details to the reader: \begin{proposition}\label{prop-genmorextend} Every generalized morphism $$\varphi:(A,V,B)\to (M(C), M(W), M(D))$$ from $(A,V,B)$ to $(C,W,D)$ extends uniquely to a morphism $$\tilde\varphi:(M(A), M(V), M(B))\to (M(C), M(W), M(D)).$$ If $\varphi$ is completely isometric, then so is $\tilde\varphi$. In particular, every non-degenerate (completely isometric) representation $\rho$ of $(A,V,B)$ into $\mathcal B(K,H)$ uniquely extends to a (completely isometric) representation of $\big(M(A), M(V), M(B)\big)$ to $\mathcal B(K,H)$. \end{proposition} We close this section with the following analogue of Lemma \ref{lem mult univ}: \begin{proposition}\label{prop-multuniv} Let $\big(C, (j_A, j_V, j_B)\big)$ be a $C^*$-hull of $(A,V,B)$. Then the inclusions $j=(j_A, j_V, j_B): (A,V,B)\to C$ extend to a completely isometric morphism $$\bar{j}:=(\bar{j}_{M(A)}, \bar{j}_{M(V)}, \bar{j}_{M(B)}): (M(A), M(V), M(B))\to M(C)$$ such that $\bar{j}_{M(A)}(M(A))\cap C=j_A(A)$ and $\bar{j}_{M(B)}(M(B))\cap C=j_B(B)$. \end{proposition} \begin{proof} Let $j_{X(A,V,B)}: X(A,V,B)\to C$ denote the corresponding completely isometric representation of $X(A,V,B)$ into $C$. Then $\big(C, j_{X(A,V,B)}\big)$ is a $C^*$-hull of $X(A,V,B)$ and by Lemma \ref{lem mult univ} there exists a unique extension $$\big(\bar{j}_{M(A\oplus B)}, \bar{j}_{M(X(A,V,B))}\big): \big(M(A\oplus B), M(X(A,V,B))\big)\to M(C)$$ such that $\bar{j}_{M(A\oplus B)}(M(A\oplus B))\cap C=A\oplus B$. The result now easily follows from an application of Proposition \ref{prop-multiplier}. \end{proof} \section{Crossed products by $C^*$-operator bimodules} Let $(A,V,B)$ be a $C^*$-operator bimodule and let $\operatorname{Aut}(A,V,B)$ denote the group of completely isometric isomorphisms of $(A,V,B)$ to $(A,V,B)$. A {\em continuous action} of $G$ on $(A,V,B)$ is a homomorphism $\alpha:G\to\operatorname{Aut}(A,V,B)$ with $\alpha_g=(\alpha_g^A, \alpha_g^V,\alpha_g^B)$ such that all components $g\mapsto \alpha_g^A(a), \alpha_g^V(v), \alpha_g^B(b)$ are continuous for all $a\in A, v\in V, b\in B$. We then call $\big((A,V,B),G,\alpha\big)$ a {\em $C^*$-operator bimodule dynamical system}. A {\em covariant morphism} of $\big((A,V,B),G,\alpha\big)$ to the $C^*$-operator bimodule $(C,W,D)$ is a quintuple $(\rho_A, \rho_V, \rho_B, u, v)$ such that $(\rho_A, \rho_V,\rho_B)$ is a morphism from $(A,V,B)$ into $(C,W,D)$, $u:G\to UM(C)$ and $v:G\to UM(D)$ are strictly continuous unitary representations of $G$ such that $(\rho_A, u)$ and $(\rho_B,v)$ satisfy the usual covariance conditions for the actions $\alpha^A$ and $\alpha^B$, respectively, and such that for all $x\in V$ and $g\in G$ we have $$\rho_V(\alpha_g^V(x))=u_g\rho_V(x)v_{g^{-1}}.$$ A {\em generalized covariant morphism} of $\big((A,V,B),G,\alpha\big)$ to $(C,W,D)$ is a covariant morphism $(\rho_A, \rho_V, \rho_B, u, v)$ into $\big(M(C), M(W), M(D)\big)$ such that $(\rho_A, \rho_V, \rho_B):(A,V,B)\to \big(M(C), M(W), M(D)\big)$ is a generalized morphism in the sense of Definition \ref{def-genmor}. A {\em covariant representation} of $\big((A,V,B),G,\alpha\big)$ is a covariant morphism into\linebreak $\big(\mathcal B(H), \mathcal B(K,H), \mathcal B(K)\big)$ for some pair of Hilbert spaces $(H,K)$.
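For example, in view of Remark \ref{rem-ospace}, an action of $G$ on the $C^*$-operator bimodule $(\C1_H, V, \C1_K)$ attached to a concrete operator space $V\subseteq \mathcal B(K,H)$ is nothing but a point-norm continuous action of $G$ on $V$ by completely isometric isomorphisms, since the $A$- and $B$-components of any automorphism of $(\C1_H, V, \C1_K)$ are necessarily the identity on the scalars. In this way the constructions below in particular provide crossed products for group actions on operator spaces.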
If $(\rho_A, \rho_V, \rho_B, u, v)$ is a covariant morphism of $\big((A,V,B),G,\alpha\big)$ into $(C,W,D)$, we have {\em integrated forms} $$(\rho_A\rtimes u, u\ltimes\rho_V\rtimes v, \rho_B\rtimes v): \big(C_c(G,A), C_c(G,V), C_c(G,B)\big)\to (C,W,D)$$ in which $\rho_A\rtimes u$ and $\rho_B\rtimes v$ are the usual integrated forms of the covariant homomorphisms $(\rho_A, u)$ and $(\rho_B,v)$ of the systems $(A,G,\alpha^A)$ and $(B,G,\alpha^B)$, respectively, and where $u\ltimes \rho_V\rtimes v :C_c(G,V)\to W$ is given by $$u\ltimes \rho_V\rtimes v(f)=\int_G \rho_V(f(s))v_s\, ds =\int_G u_s\rho_V\big(\alpha_{s^{-1}}^V(f(s))\big)\, ds,$$ where the second equality follows from the covariance condition (this should explain the notation $u\ltimes \rho_V\rtimes v$). We have the usual convolution products on $C_c(G,A)$ and $C_c(G,B)$ and obvious convolution formulas for pairings $$C_c(G,A)\times C_c(G,V)\to C_c(G,V) \quad \text{and}\quad C_c(G,V)\times C_c(G,B)\to C_c(G,V)$$ which are preserved by the integrated form $(\rho_A\rtimes u, u\ltimes \rho_V\rtimes v, \rho_B\rtimes v)$ of $(\rho_A, \rho_V, \rho_B, u, v)$. The following is a direct consequence of Corollary \ref{cor-morphisms} and the universal properties of the universal and the enveloping $C^*$-hulls of $(A,V,B)$. \begin{proposition}\label{prop-actions} Let $(A,V,B)$ be a $C^*$-operator bimodule and let \linebreak $\big(C_u^*(A,V,B), (i_A, i_V, i_B)\big)$ and $\big(C_e^*(A,V,B), (k_A, k_V, k_B)\big)$ denote the universal and enveloping $C^*$-hulls of $(A,V,B)$, respectively. Then there is a canonical one-to-one correspondence between \begin{enumerate} \item continuous actions $\alpha:G\to \operatorname{Aut}(A,V,B)$, \item continuous actions $\alpha^X: G\to \operatorname{Aut}\Big(\big(A\oplus B, X(A,V,B)\big)\Big)$ which preserve the corners, \item continuous operator algebra actions $\alpha^{{\operatorname{Op}}}:G\to \operatorname{Aut}\big({\operatorname{Op}}(A,V,B)\big)$ which preserve the corners, \item continuous actions $\alpha^u:G\to \operatorname{Aut}(C_u^*(A,V,B))$ by automorphisms which preserve the subspaces $i_A(A), i_V(V)$ and $i_B(B)$, and \item continuous actions $\alpha^e:G\to \operatorname{Aut}(C_e^*(A,V,B))$ by automorphisms which preserve the subspaces $k_A(A), k_V(V)$ and $k_B(B)$. \end{enumerate} Moreover, there are one-to-one correspondences between the covariant representations (morphisms) $(\rho_A, \rho_V, \rho_B, u, v)$ of $\big((A,V,B),G,\alpha\big)$ and covariant representations (morphisms) of the actions in (2), (3), and (4) above via the known correspondence for representations (morphisms) of $(A,V,B)$ and $X(A,V,B), {\operatorname{Op}}(A,V,B)$ and $C_u^*(A,V,B)$ with unitary parts given by the direct sum $u\oplus v$. \end{proposition} We now give the definition of the universal crossed product by an action of a locally compact group $G$ on a $C^*$-operator bimodule: \begin{definition}\label{def-full-crossed-C*-bimodule} Let $\alpha:G\to \operatorname{Aut}(A,V,B)$ be a strongly continuous action of the locally compact group $G$. We define the universal crossed product $$(A,V,B)\rtimes_{\alpha}^uG:= \big(A\rtimes_\alpha^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG\big)$$ of $(A,V,B)$ by $G$ as the respective closures of $\big(C_c(G, A), C_c(G,V), C_c(G,B)\big)$ inside $C_u^*(A,V,B)\rtimes_{\alpha, u} G$ (here we identify $A,V$ and $B$ with $i_A(A), i_V(V)$ and $i_B(B)$ inside $C_u^*(A,V,B)$, respectively).
\end{definition} To see that $(A,V,B)\rtimes_{\alpha}^uG$ has a canonical structure of a $C^*$-operator bimodule, let $(i_{C_u^*(A,V,B)}, i_G)$ denote the universal representation of the crossed product $C_u^*(A,V,B)\rtimes_{\alpha,u}G$ on the Hilbert space $L_u$, i.e., the direct sum of all GNS representations associated to the states of $C_u^*(A,V,B)\rtimes_{\alpha,u}G$. Then there exists a decomposition $L_u=H_u\oplus K_u$ such that $i_{C_u^*(A,V,B)}$ restricts to a representation of $(A,V,B)$ on $\mathcal B(K_u, H_u)$. It is then easy to check that the covariant representation $(i_{C_u^*(A,V,B)}, i_G)$ restricts to the covariant representation $(i_A, i_V, i_B, i_G^A, i_G^B)$ of $\big((A,V,B),G,\alpha\big)$, where $i_A, i_V, i_B$ denote the restrictions of $i_{C_u^*(A,V,B)}$ to $A,V$ and $B$, respectively (viewed as operators in the respective corners of $\mathcal B(H_u\oplus K_u)$), and $i_G^A:=i_G|_{H_u}$, $i_G^B:=i_G|_{K_u}$. The integrated form $i_{C_u^*(A,V,B)}\rtimes i_G$ restricts to the integrated forms $i_A\rtimes i_G^A: C_c(G,A)\to \mathcal B(H_u)$, $i_G^A\ltimes i_V\rtimes i_G^B: C_c(G,V) \to\mathcal B(K_u, H_u)$ and $i_B\rtimes i_G^B: C_c(G,B)\to \mathcal B(K_u)$ and similarly for the respective completions. They therefore extend to a completely isometric representation of $\big(A\rtimes_\alpha^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG\big)$ as a concrete $C^*$-operator bimodule in $\mathcal B(K_u, H_u)$. Moreover, it is easy to check that $(i_A, i_V, i_B, i_G^A, i_G^B)$ take their values in $\big(M(A\rtimes_\alpha^uG), M(V\rtimes_\alpha^uG), M(B\rtimes_\alpha^uG)\big)$. Hence $(i_A, i_V, i_B, i_G^A, i_G^B)$ is a generalized covariant morphism of $\big((A,V,B),G,\alpha\big)$ into $\big(M(A\rtimes_\alpha^uG), M(V\rtimes_\alpha^uG), M(B\rtimes_\alpha^uG)\big)$ which integrates to the identity on $\big(A\rtimes_\alpha^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG\big)$. We call $(i_A, i_V, i_B, i_G^A, i_G^B)$ the universal morphism of $\big((A,V,B),G,\alpha\big)$. The following proposition shows that $(A,V,B)\rtimes_\alpha^uG$ has the right universal properties for covariant morphisms (representations) of $\big((A,V,B),G,\alpha\big)$. \begin{proposition}\label{prop-universalAVB} Let $\alpha:G\to \operatorname{Aut}(A,V,B)$ be a continuous action. For every generalized covariant morphism $(\rho_A,\rho_V,\rho_B, u,v)$ of $\big((A,V,B),G,\alpha\big)$ into $\big(M(C), M(W), M(D)\big)$ the integrated form $(\rho_A\rtimes u, u\ltimes\rho_V\rtimes v, \rho_B\rtimes v)$ from $(C_c(G,A), C_c(G,V), C_c(G,B))$ into $(M(C),M(W),M(D))$ extends uniquely to a morphism $$(\rho_A\rtimes u, u\ltimes\rho_V\rtimes v, \rho_B\rtimes v): (A\rtimes_\alpha^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG)\to (M(C),M(W),M(D))$$ (which takes values in $(C,W,D)$ if $(\rho_A, \rho_V,\rho_B)$ does). If $(\rho_A,\rho_V,\rho_B, u,v)$ is non-degenerate, then so is $(\rho_A\rtimes u, u\ltimes\rho_V\rtimes v, \rho_B\rtimes v)$.
Conversely, for every generalized morphism $(\pi_{A\rtimes_\alpha^uG}, \pi_{V\rtimes_{\alpha}^uG}, \pi_{B\rtimes_\alpha^uG})$ of $(A\rtimes_\alpha^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG)$ into $(M(C),M(W),M(D))$ there is a unique generalized covariant morphism $(\rho_A,\rho_V\rho_B,u,v)$ of $\big((A,V,B),G,\alpha\big)$ into \linebreak $\big(M(C), M(W), M(D)\big)$ such that $$(\pi_{A\rtimes_\alpha^uG}, \pi_{V\rtimes_{\alpha}^uG}, \pi_{B\rtimes_\alpha^uG})=(\rho_A\rtimes u, u\ltimes\rho_V\rtimes v, \rho_B\rtimes v)$$ given by the composition of $(\pi_{A\rtimes_\alpha^uG}, \pi_{V\rtimes_{\alpha}^uG}, \pi_{B\rtimes_\alpha^uG})$ (extended to multipliers as in Proposition \ref{prop-genmorextend}) with the universal representation $(i_A, i_V, i_G, i_B, i_G^A, i_G^B)$. \end{proposition} \begin{proof} Starting with $(\rho_A,\rho_V,\rho_B, u,v)$ we obtain a corresponding covariant homomorphism $(\rho_{C_u^*(A,V,B)}, u\oplus v)$ of $(C_u^*(A,V,B), G, \alpha^u)$ into $M(C_u^*(C,W,D))$ (use Propositions \ref{prop-multuniv} and \ref{prop-actions}). By the universal property of the maximal crossed product $C_u^*(A,V,B)\rtimes_{\alpha,u}G$ we obtain the integrated form $$\rho_{C_u^*(A,V,B)}\rtimes u\oplus v: C_u^*(A,V,B)\rtimes_{\alpha,u}G\to M(C_u^*(C,W,D))$$ whose restriction to $(C_c(G,A), C_c(G,V), C_c(G,B))$ coincides with the integrated form $(\rho_A\rtimes u, u\ltimes\rho_V\rtimes v, \rho_B\rtimes v)$ with values in $(M(C), M(W), M(D))\subseteq M(C_u^*(C,W,D))$. They therefore extend to $(A\rtimes_\alpha^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG)$ as desired. For the converse we need to show that the integrated form of the covariant morphism $(\rho_A,\rho_V\rho_B,u,v)$ obtained by composing $(\pi_{A\rtimes_\alpha^uG}, \pi_{V\rtimes_{\alpha}^uG}, \pi_{B\rtimes_\alpha^uG})$ with $(i_A, i_V, i_G, i_B, i_G^A, i_G^B)$ agrees with $(\pi_{A\rtimes_\alpha^uG}, \pi_{V\rtimes_{\alpha}^uG}, \pi_{B\rtimes_\alpha^uG})$ on \linebreak $\big(C_c(G,A), C_c(G,V), C_c(G,B)\big)$. But this follows from a straightforward computation which we omit. \end{proof} Recall from Proposition \ref{prop-C*-hull-bimodule} that for a $C^*$-operator bimodule $(A,V,B)$ we have the identities $$C_u^*(A,V,B)\cong C_u^*({\operatorname{Op}}(A,V,B))\cong C_u^*(X(A,V,B))$$ where the first isomorphism is given by the universal property of $C_u^*(A,V,B)$ applied to the canonical (corner) inclusions of $(A,V,B)$ into ${\operatorname{Op}}(A,V,B)\subseteq C_u^*({\operatorname{Op}}(A,V,B))$ and the second isomorphism is given by the universal property of $C_u^*({\operatorname{Op}}(A,V,B))$ applied to the canonical inclusion of ${\operatorname{Op}}(A,V,B)$ into $X(A,V,B)$. If $\alpha:G\to \operatorname{Aut}(A,V,B)$ is an action, then these isomorphism are $G$-equivariant, where $ C_u^*({\operatorname{Op}}(A,V,B))$ is equipped with the action extending $\alpha^{{\operatorname{Op}}}$ and $C_u^*(X(A,V,B))$ is equipped with the action extending $\alpha^X$, where $\alpha^{{\operatorname{Op}}}$ and $\alpha^X$ are as in Proposition \ref{prop-actions}. 
In \cite{KR} Katsoulis and Ramsay defined the universal crossed product of the operator algebra system $(\mathcal A,G,\alpha)$ as the closure of $C_c(G,\mathcal A)$ inside $C_u^*(\mathcal A)\rtimes_{\alpha,u}G$ and in Definition \ref{def-crossed-product} we defined the crossed product $X\rtimes_\alpha^u G$ for an action $\alpha$ on a $C^*$-operator system $X$ as the closure $X\rtimes_{\alpha}^uG$ of $C_c(G,X)$ inside $C_u^*(X)\rtimes_{\alpha,u}G$ (surpressing the $C^*$-part of the $C^*$-operator system in our notation). Thus, identifying $(C_c(G,A), C_c(G,V), C_c(G,B))$ with the three non-zero corners of $C_c(G, {\operatorname{Op}}(A,V,B))$ and the latter as a subspace of $C_c(G, X(A,V,B))$ we see that these inclusions extend to completely isometric inclusions \begin{align*} &{\operatorname{Op}}(A\rtimes_{\alpha}^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG)={\operatorname{Op}}(A,V,B)\rtimes_\alpha^uG\\ &\subseteq X(A,V,B)\rtimes_{\alpha}^uG=X(A\rtimes_{\alpha}^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG) \end{align*} Together with Corollary \ref{cor-universal} and Proposition \ref{prop-C*-hull-bimodule} we obtain isomorphisms \begin{align*} &C_u^*(A,V, B)\rtimes_{\alpha,u}G \cong C_u^*({\operatorname{Op}}(A,V,B))\rtimes_{\alpha,u}G \cong C_u^*(X(A,V,B))\rtimes_{\alpha,u}G\\ &\stackrel{\text{Corollary \ref{cor-universal}}}{\cong} C_u^*\big(X(A,V,B)\rtimes_{\alpha}^uG\big)\cong C_u^*\big(X(A\rtimes_{\alpha}^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG)\big)\\ &\quad\quad \cong C_u^*\big(A\rtimes_{\alpha}^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG\big)\\ &\quad\quad \cong C_u^*\big({\operatorname{Op}}(A\rtimes_{\alpha}^uG, V\rtimes_\alpha^uG, B\rtimes_\alpha^uG)\big) \end{align*} In particular we see that the theories for universal crossed product by corresponding actions of $G$ on $(A,V,B)$, ${\operatorname{Op}}(A,V,B)$ and $X(A,V,B)$ are completely equivalent! We close this section with a brief discussion of the reduced crossed product for an action $\alpha:G\to \operatorname{Aut}(A,V,B)$. The easiest way to do this at this point is to form the reduced crossed product $$X(A,V,B)\rtimes_{\alpha}^rG \subseteq M\big(X(A,V,B)\otimes \mathcal K(L^2(G))\big)$$ as the image of the regular representation $\Lambda_{X(A,V,B)}: X(A,V,B)\rtimes_{\alpha}^u G\to M\big(X(A,V,B)\otimes \mathcal K(L^2(G))\big)$ as in Definition \ref{defn educed crossed product} and define $(A,V,B)\rtimes_\alpha^rG=\big(A\rtimes_{\alpha}^rG, V\rtimes_{\alpha}^rG, B\rtimes_{\alpha}^rG\big)$ via the images of the corners $(A\rtimes_{\alpha}^uG, V\rtimes_{\alpha}^uG, B\rtimes_{\alpha}^uG)$ inside $X(A,V,B)\rtimes_{\alpha}^rG$ (which is the same as taking closures of $(C_c(G,A), C_c(G,V), C_c(G,B))$ inside $X(A,V,B)\rtimes_{\alpha}^rG$). We leave it as an exercise to the reader to formulate this in terms of a regular covariant representation of $\big((A,V,B),G,\alpha\big)$ into $\big(M(A\otimes {\mathcal K}), M(V\otimes {\mathcal K}), M(B\otimes {\mathcal K})\big)$ for ${\mathcal K}={\mathcal K}(L^2(G))$ and where, as usual, ``$\otimes$'' denotes the spacial tensor product. 
It follows from our construction and part (b) of Remark \ref{rem-reduced} that $\big(A\rtimes_{\alpha}^rG, V\rtimes_{\alpha}^rG, B\rtimes_{\alpha}^rG\big)$ is completely isometrically isomorphic to the closures of $(C_c(G, A), C_c(G,V), C_c(G,B))$ inside $C\rtimes_{\alpha^C,r}G$ for any $C^*$-hull \linebreak $\big(C, (j_A, j_V, j_B)\big)$ of $(A,V,B)$ (where we identify $(A,V,B)$ with the triple $(j_A(A), j_V(V), j_B(B))$ inside $C$) which carries an action $\alpha^C$ which is compatible with the given action on $(A,V,B)$. In particular, we may take the closures inside $C_u^*(A,V,B)\rtimes_{\alpha^u,r}G$ or $C_e^*(A,V,B)\rtimes_{\alpha^e,r}G$. From this we get \begin{proposition}\label{prop-max=red} Let $\alpha:G\to \operatorname{Aut}(A,V,B)$ be an action by an {\em amenable} group $G$. Then $$(A\rtimes_{\alpha}^uG, V\rtimes_{\alpha}^uG, B\rtimes_{\alpha}^uG)= \big(A\rtimes_{\alpha}^rG, V\rtimes_{\alpha}^rG, B\rtimes_{\alpha}^rG\big)$$ via the regular representation. \end{proposition} \begin{proof} This follows from the above discussion and the fact that $$C_u^*(A,V,B)\rtimes_{\alpha^u,u}G\cong C_u^*(A,V,B)\rtimes_{\alpha^u,r}G$$ if $G$ is amenable. \end{proof} \section{Coactions and duality} In this section we want to discuss the duality theorems for crossed products for $C^*$-operator bimodules. The theory is more or less a direct consequence of the theory for $C^*$-operator systems vie the functor $(A,V,B)\mapsto X(A,V,B)$, so we'll try to be brief. Note that if $(A,V,B)$ is a $C^*$-operator system represented completely isometrically on the pair of Hilbert spaces $(H,K)$ and if $C$ is any $C^*$-algebra which is represented faithfully on a Hilbert space $L$, then we can define the spatial tensor product $(A\otimes C, V\otimes C, B\otimes C)$ as the closures of the canonical inclusions of the algebraic tensor products $\big(A\odot C, V\odot C, B\odot C\big)$ inside $\big(\mathcal B(H\otimes L), \mathcal B(K\otimes L, H\otimes L), \mathcal B(K\otimes L)\big)$. One then checks that $$X(A,V,B)\otimes C=X(A\otimes C, V\otimes C, B\otimes C)$$ (and, similarly ${\operatorname{Op}}(A,V,B)\otimes C={\operatorname{Op}}(A\otimes C, V\otimes C, B\otimes C)$). In what follows we often write $(A,V,B)\otimes C$ for the $C^*$-operator bimodule $(A\otimes C, V\otimes C, B\otimes C)$ and we write $M\big((A,V,B)\otimes C\big)$ for the multiplier bimodule $\big(M(A\otimes C), M(V\otimes C), M(B\otimes C))\big)$. \begin{definition}\label{def-coactionC*-bimodule} Let $(A,V,B)$ be a $C^*$-operator bimodule. A {\em coaction} of the locally compact group $G$ on $(A,V,B)$ is a generalized morphism $$\delta_{(A,V,B)}=(\delta_A,\delta_V, \delta_B): (A,V,B)\to M\big((A,V,B)\check\otimes C^*(G)\big)$$ such that the following hold: \begin{enumerate} \item the maps $\delta_A:A\to M(A\check\otimes C^*(G))$ and $\delta_B:B\to M(B\check\otimes C^*(G))$ are coactions of $G$ on the $C^*$-algebras $A$ and $B$, respectively. 
\item The following diagram of generalized morphism commutes: $$\begin{CD} (A,V,B)@>\delta_{(A,V,B)} >> M\big((A,V,B)\check\otimes C^*(G)\big)\\ @V\delta_{(A,V,B)} VV @VV \operatorname{id}_{(A,V,B)}\otimes \delta_GV\\ M\big((A,V,B)\check\otimes C^*(G)\big) @>> \delta_{(A,V,B)}\otimes \operatorname{id}_{G}> M\big((A,V,B)\check\otimes C^*(G)\check\otimes C^*(G)\big) \end{CD} $$ \end{enumerate} \end{definition} Using the correspondence of generalized morphisms from $(A,V,B)$ to $M\big((A,V,B)\check\otimes C^*(G)\big)$ with generalized morphism from $X(A,V,B)$ to \linebreak $X(M(A,V,B))$ of Corollary \ref{cor-morphisms} and the isomorphism $X(M(A,V,B))\cong M(X(A,V,B))$ of Proposition \ref{prop-multiplier} we see that every coaction $\delta_{(A,V,B)}$ of $G$ on $(A,V,B)$ as in the definition above determines a coaction $\delta_{X(A,V,B)}$ of $G$ on $X(A,V,B)$ and vice versa. We then call $\delta_{(A,V,B)}$ {\em non-degenerate} iff $\delta_{X(A,V,B)}$ is non-degenerate in the sense of Definition \ref{def-nondeg}. \begin{example}\label{ex-dual-coaction-bimodule} Recall that for each action $\alpha:G\to \operatorname{Aut}(A,V,B)$ there corresponds a unique action (which here we also denote by $\alpha$) of $G$ on $X(A,V,B)$ such that $X(A,V,B)\rtimes_\alpha^uG=X\big((A,V,B)\rtimes_\alpha^uG\big)$ (and similarly for the reduced crossed products). Recall from Example \ref{ex dual} that there exist canonical dual coactions $\widehat\alpha_u$ and $\widehat\alpha_r$ of $G$ on $X(A,V,B)\rtimes_\alpha^uG$ and $X(A,V,B)\rtimes_\alpha^rG$, respectively. Identifying $X(A,V,B)\rtimes_\alpha^uG$ with $X\big( (A,V,B)\rtimes_\alpha^uG\big)$ and using the correspondence between coactions on \linebreak $(A,V,B)\rtimes_\alpha^uG$ and coactions on $X\big( (A,V,B)\rtimes_\alpha^uG\big)$ (and similarly for the reduced crossed products), we obtain dual coactions $\alpha_u$ and $\widehat\alpha_r$ on the full and reduced crossed products of $(A,V,B)$ by $G$, respectively. We leave it to the reader to spell out direct formulas for these coactions. \end{example} \begin{definition}\label{def-coact-covariant-bimodule} Let $\delta_{(A,V,B)}$ be a coaction of $G$ on the $C^*$-operator bimodule $(A,V,B)$. Then a (generalized) {\em covariant morphism} of the co-system $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ into the muliplier bimodule $M(C,W,D)=\big(M(C), M(W), M(B)\big)$ of a $C^*$-operator bimodule $(C,W,D)$ consists of a quintuple $\big(\rho_A, \rho_V, \rho_B, \mu, \nu\big)$ such that \begin{enumerate} \item $\rho=(\rho_A, \rho_V, \rho_B):(A,V,B)\to \big(M(C), M(W), M(B)\big)$ is a generalized morphism of $(A,V,B)$; \item $\mu:C_0(G)\to M(C), \nu:C_0(G)\to M(D)$ are non-degenerate $*$-homomorphisms; \item $(\rho_A, \mu)$ and $(\rho_B,\nu)$ are covariant for $(A,G,\delta_A)$ and $(B, G, \delta_B)$, respectively; and \item $(\rho_V\otimes \operatorname{id}_G)\circ \delta_V(v)=\big(\mu\otimes \operatorname{id}_G(w_G)\big)(\rho_V(v)\otimes 1)\big(\nu\otimes\operatorname{id}_G(w_G)\big)^*$ for all $v\in V$. \end{enumerate} where $w_G\in C^b_{st}(G,M(C^*(G))\cong M(C_0(G)\check\otimes C^*(G))$ is the strictly continuous function $w_G(g)= i_G(g)$. A {\em covariant representation} of $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ on the pair of Hilbert spaces $(H,K)$ is a morphism into $\big(\mathcal B(H), \mathcal B(K,H),\mathcal B(K)\big)$. 
\end{definition} It is now an easy exercise to see that there is a one-to-one correspondence between covariant morphism of $(A,V,B)$ into $M(C,V,D)$ and covariant morphisms of $\big(X(A,V,B), G, \delta_{X(A,V,B)}\big)$ into $M(X(C,V,D))$ given by assigning to $\big(\rho_A, \rho_V, \rho_B, \mu, \nu\big)$ the covariant pair $\big(\rho_{X(A,V,B)}, \mu\oplus \nu\big)$ with $$\rho_{X(A,V,B)}=\left(\begin{matrix} \rho_A &\rho_V\\ \rho_V^*& \rho_B\end{matrix}\right)$$ as in Corollary \ref{cor-morphisms}. Moreover, using the identity $C_u^*(A,V,B)\cong C_u^*(X(A,V,B))$ and Proposition \ref{prop coaction}, we deduce easily that there is a one-to-one correspondence between coactions $\delta_{(A,V,B)}$ of $G$ on $(A,V,B)$ and coactions $\delta_u$ of $G$ on $C_u^*(A,V,B)$ which satisfy the conditions $$\delta_u(A)\subseteq M(A\check\otimes C^*(G)), \; \delta_u(V)\subseteq M(V\check\otimes C^*(G)), \;\text{and}\; \delta_u(B)\subseteq M(B\check\otimes C^*(G)),$$ where we understand these inclusions with respect to the canonical inclusions of $(A,V,B)$ into $C_u^*(A,V,B)$ and of $M((A,V,B)\check\otimes C^*(G))$ into \linebreak $M(C_u^*(A,V,B)\check\otimes C^*(G))$ which can be deduced from Lemma \ref{lem mult univ}). \begin{example}\label{ex-coact-bimodule-regular} The {\em regular representation} of the co-system \linebreak $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ is the covariant morphism from $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ into $M\big( (A,V,B)\otimes {\mathcal K}(L^2(G))\big)$ defined as the quintuple \begin{align*} \big(\Lambda_A, \Lambda_V, &\Lambda_B, \Lambda_{\widehat{G}}^A, \Lambda_{\widehat{G}}^B\big)\\ &=\big((\operatorname{id}_A\otimes\lambda)\circ\delta_A, (\operatorname{id}_A\otimes\lambda)\circ\delta_A, (\operatorname{id}_A\otimes\lambda)\circ\delta_A, 1_A\otimes M, 1_B\otimes M\big) \end{align*} where $\lambda=\lambda_G$ denotes the regular representation of $G$ on $L^2(G)$ and $M:C_0(G)\to \mathcal B(L^2(G))$ is the representation by multiplication operators. One easily checks that this representation corresponds to the regular representation of the co-system $\big( X(A,V,B), G, \delta_{X(A,V,B)}\big)$ via the above described correspondence. In particular, it is a covariant morphism of $\big((A,V,B), G, \delta_{(A,V,B)}\big)$. \end{example} We are now ready to define the crossed products \begin{proposition}\label{def-coact-crossed-bimodule} Let $\delta_{(A,V,B)}$ be a coaction of $G$ on the $C^*$-operator bimodule $(A,V,B)$. We then define the crossed product $$(A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}=\big(A\rtimes_{\delta_A}\widehat{G}, V\rtimes_{\delta_V}\widehat{G}, B\rtimes_{\delta_B}\widehat{G}\big)$$ as $$A\rtimes_{\delta_A}\widehat{G}:=\overline{\operatorname{span}}\{\Lambda_A(A)\Lambda_{\widehat{G}}^A(C_0(G))\},\quad B\rtimes_{\delta_B}\widehat{G}:=\overline{\operatorname{span}}\{\Lambda_B(B)\Lambda_{\widehat{G}}^B(C_0(G))\}, $$ $$\text{and} \quad V\rtimes_{\delta_V}\widehat{G}=\overline{\operatorname{span}}\{\Lambda_V(V)\Lambda_{\widehat{G}}^A(C_0(G))\}=\overline{\operatorname{span}}\{\Lambda_{\widehat{G}}^B(C_0(G)\Lambda_V(V)\}$$ inside $M\big((A,V,B)\otimes {\mathcal K}(L^2(G))\big)$. \end{proposition} Note that it follows directly from the definitions that $(\Lambda_A, \Lambda_{\widehat{G}}^A)$ and $(\Lambda_B, \Lambda_{\widehat{G}}^B)$ are the regular representations of $(A,G, \delta_A)$ and $(B,G,\delta_B)$, respectively. 
We therefore see that the $C^*$-algebras $A\rtimes_{\delta_A}\widehat{G}$ and $B\rtimes_{\delta_B}\widehat{G}$ coincide with the crossed products of these $C^*$-co-systems as described in Section \ref{sec-coaction}. Using the above described correspondence between covariant morphisms of $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ and covariant morphisms of $\big(X(A,V,B), G,\delta_{X(A,V,B)}\big)$ we now get from Proposition \ref{prop-dualuniv}: \begin{theorem}\label{thm-crossed-coact-bimodule} The crossed product $(A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}$ is a well defined $C^*$-operator bimodule such that $$X\big((A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}\big)=X(A,V,B)\rtimes_{\delta_{X(A,V,B)}}\widehat{G}.$$ and $$C_u^*\big((A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}\big)=C_u^*(A,V,B)\rtimes_{\delta_u}\widehat{G}.$$ Moreover, the pair $$\Big((A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}, \big(\Lambda_A, \Lambda_V, \Lambda_B, \Lambda_{\widehat{G}}^A, \Lambda_{\widehat{G}}^B\big) \Big)$$ satisfies the following universal property for covariant morphisms: If $\big(\rho_A, \rho_V, \rho_B, \mu, \nu\big)$ is any covariant morphism of $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ into $M(C,W,D)$ then there exists a unique covariant morphism $$(\rho_A\rtimes \mu, \mu\ltimes \rho_V\rtimes \nu, \rho_B\rtimes \nu): (A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}\to M(C,W,D)$$ such that $$\rho_A=(\rho_A\rtimes \mu)\circ \Lambda_A,\; \rho_V=(\mu\ltimes \rho_V\rtimes \nu)\circ \Lambda_V, \; \rho_B=(\rho_B\rtimes \nu)\circ \Lambda_B, $$ $$ \mu=(\rho_A\rtimes \mu)\circ \Lambda_{\widehat{G}}^A\quad\text{and}\quad \nu=(\rho_B\rtimes \nu)\circ \Lambda_{\widehat{G}}^B.$$ Conversely, if $\Phi=\big(\Phi_{A\rtimes_{\delta_A}\widehat{G}}, \Phi_{V\rtimes_{\delta_V}\widehat{G}}, \Phi_{B\rtimes_{\delta_B}\widehat{G}}\big)$ is any generalized morphism from $(A,V,B)\rtimes_{\delta(A,V,B)}\widehat{G}$ into $M(C,W,D)$ then there exists a unique covariant morphism $\big(\rho_A, \rho_V, \rho_B, \mu, \nu\big)$ of $\big((A,V,B), G, \delta_{(A,V,B)}\big)$ such that $$\Phi=(\rho_A\rtimes \mu, \mu\ltimes \rho_V\rtimes \nu, \rho_B\rtimes \nu)$$ \end{theorem} \begin{remark}\label{rem-dual action bimodule} Let $\sigma:G\to \operatorname{Aut}(C_0(G))$ denote action given by right translation. Then there is a dual action $\widehat\delta:G\to \operatorname{Aut}\big((A,V,B)\rtimes_{\delta}\widehat{G}\big)$ such that for each $s\in G$ the automorphism $\widehat\delta_s$ is given by the integrated form of the covariant morphism $\big(\Lambda_A, \Lambda_V, \Lambda_B, \Lambda_{\widehat{G}}^A\circ \sigma(s), \Lambda_{\widehat{G}}^B\circ \sigma(s)\big) \Big)$. Of course, it corresponds to the dual action on $X(A,V,B)\rtimes_{\delta_X}\widehat{G}$. \end{remark} We now come to the duality theorems. We start with the $C^*$-operator bimodule version of the Imai-Takai duality theorem. Using the version of the Imai-Takai theorem for actions on $C^*$-operator systems, Theorem \ref{thm-ImaiTakai}, we now get \begin{theorem}\label{thm-Imai-Takai-bimodule} Let $\alpha:G\to \operatorname{Aut}(A,V,B)$ be an action. Then there exist canonical dual coactions $\widehat\alpha_u$ (resp. 
$\widehat\alpha_r$) of $G$ on the universal and reduced crossed products $(A,V,B)\rtimes_{\alpha}^uG$ and $(A,V,B)\rtimes_\alpha^rG$, respectively, such that $$(A,V,B)\rtimes_{\alpha}^uG\rtimes_{\widehat\alpha_u}\widehat{G}\cong (A,V,B)\otimes {\mathcal K}(L^2(G))$$ and $$(A,V,B)\rtimes_{\alpha}^rG\rtimes_{\widehat\alpha_r}\widehat{G}\cong (A,V,B)\otimes {\mathcal K}(L^2(G)),$$ and the isomorphism transforms the double dual actions $\widehat{\widehat\alpha_u}$ and \ $\widehat{\widehat\alpha_r}$ to the action $\alpha\otimes \operatorname{Ad}\rho$ on $(A,V,B)\otimes {\mathcal K}(L^2(G))$, where $\rho:G\to U(L^2(G))$ denotes the right regular representation of $G$. \end{theorem} Dually, as an application of Theorem \ref{thm-Katayama} we get the following version of Katayama's duality for coactions on $C^*$-operator bimodules: \begin{theorem}\label{thm-Katayama-bimodule} Let $\delta=\delta_{(A,V,B)}$ be a coaction of $G$ on the $C^*$-operator bimodule $(A,V,B)$. Then there exist is a surjective morphism $$\Theta: (A,V,B)\rtimes_\delta\widehat{G}\rtimes_{\widehat{\delta}}^uG\twoheadrightarrow (A,V,B)\otimes{\mathcal K}(L^2(G))$$ which factors through an isomorphism $$(A,V,B)\rtimes_\delta\widehat{G}\rtimes_{\widehat{\delta}}^\mu G\cong (A,V,B)\otimes{\mathcal K}(L^2(G)),$$ where $(A,V,B)\rtimes_\delta\widehat{G}\rtimes_{\widehat{\delta}}^\mu G$ is a completion of $C_c\big(G, (A,V,B)\rtimes_\delta\widehat{G}\big)$ with respect to a norm which lies between the universal and reduced crossed product norms. If $G$ is amenable, then $\Theta$ is an isomorphism. \end{theorem} \bibliographystyle{amsplain}
proofpile-arXiv_067-6850
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} One of the main tools in the study of singularities of maps is the theory of Thom polynomials. They express the fundamental classes of the multisingularity strata for a generic map of arbitrary compact smooth manifolds in terms of their characteristic classes. This theory is however not applicable to a natural class of generic maps, namely the maps between varieties that are defined by generic Laurent polynomials with given Newton polytopes. Such varieties are not compact, and the maps are not proper. At the same time, their toric compactifications associated with the Newton polytopes do not satisfy the genericity conditions necessary for classical Thom polynomials to be applicable (for details see e.g. Example 1.1 and Remark 1.2 in \cite{E3}). Therefore, working with multisingularity strata of the abovementioned class of maps requires alternative methods. We will now give a short overview of developments in this direction. The discriminant (i.e. $\mathcal A_1$ stratum) of a projection of a generic hypersurface was described in \cite{GP}. If $H$ is a hypersurface given by a generic polynomial $f(x_1,\ldots,x_d,y)$ and $\pi$ is the projection forgetting the last coordinate, then the Newton polytope of the polynomial defining the abovementioned $\mathcal A_1$ stratum is equal to the fiber polytope $\mathcal Q_{\pi}(f)\subset\mathbb R^d$ of the Newton polytope $\EuScript{N}(f).$ The image (i.e. $\mathcal A_0$ stratum) of a projection of a generic complete intersection was studied by A. Esterov and A. Khovanskii in \cite{EK}. They proved that the image under an epimorphism $\pi\colon({\mathbb C}\setminus 0)^{n}\to({\mathbb C}\setminus 0)^{n-k}$ of a complete intersection $\{f_1=\ldots=f_{k+1}\}\subset({\mathbb C}\setminus 0)^n$ defined by generic polynomials with given Newton polytopes $\EuScript{N}(f_i)=\Delta_i$ is a hypersurface $\{g=0\}\subset({\mathbb C}\setminus 0)^{n-k},$ whose Newton polytope $\EuScript{N}(g)\subset\mathbb R^{n-k}$ is equal to the so-called {\it mixed fiber polytope} $\mathop{\rm MP}\nolimits_{\pi}(\Delta_1,\ldots,\Delta_{k+1})$ of the polytopes $\Delta_1,\ldots,\Delta_{k+1}.$ An approach to studying the strata of higher codimension, e.g for $\mathcal A_2$ (cusps) and $2\mathcal A_1$ (double points), is suggested in \cite{E3}. However, this approach only works under additional assumptions on the Newton polytopes and employs operations with tropical fans of dimension that is too high for practical applications (namely, those dimensions are of the same order as the number of monomials inside the given Newton polytopes). For low dimensional maps or projections, the above mentioned strata are $0$-dimensional, and the problem of computing their cardinalities in terms of the given Newton polytopes arises naturally. For example, this problem was solved in \cite{FJR} for a mapping $\mathbb C^2\to\mathbb C^2$ whose components are generic polynomials of given degrees. See \cite{E3} for an overview of some other literature on problems of this kind. To the best of our knowledge, our paper is the first work where such a problem is solved for polynomials with arbitrary Newton polytopes. 
\begin{theor}\label{mainintro} Let $\Delta$ be a lattice polytope in $\mathbb Z^n\oplus \mathbb Z^2.$ If, for generic polynomials $f_1,\ldots,f_{n+1}$ supported at $\Delta,$ the image of the complete intersection curve $$\tilde{\mathcal C}={\{f_1=\ldots=f_{n+1}=0\}\subset({\mathbb C}\setminus 0)^n\times({\mathbb C}\setminus 0)^2}$$ under the projection $\pi\colon({\mathbb C}\setminus 0)^n\times({\mathbb C}\setminus 0)^2\to({\mathbb C}\setminus 0)^2$ is a reduced nodal curve, then the number of its nodes is given by formula (\ref{mainformula1}) in Theorem \ref{lemmain}. \end{theor} \begin{rem} The classification of the Newton polytopes, such that the abovementioned projection is not a nodal curve, is a non-trivial problem (see Example \ref{notnodal}) that is not addressed in this paper. Instead, we will prove a certain generalization of Theorem \ref{mainintro} which is applicable to all lattice polytopes $\Delta$ (see Theorem \ref{lemmain}). \end{rem} \begin{exa}[Counting the nodes of the projection of a complete intersection curve defined by generic equations of given degree]\label{simplex} For some support sets $A\subset\mathbb Z^{n+2},$ it is quite easy to show that the projection of a complete intersection given by generic polynomials supported at $A$ has only nodes as singularities. For instance, this is the case for $A=dT\cap\mathbb Z^3,$ where ${d\in\mathbb Z_{>0},}$ and $T\subset\mathbb R^3$ is the standard simplex. Let us compute the number of those nodes using (\ref{mainformula1}). In the notation of this formula, we have $n=1,$ the area of the polygon $P$ is equal to $d^4,$ the term $d^2$ comes from the area of the horizontal facet of $dT$, and the non-horizontal facets do not contribute, since for every $\Gamma\in\mathcal{F}(dT)\setminus\mathcal{H}(dT),$ we have $\mathop{\rm ind}\nolimits_{v}(\Gamma\cap A)=1.$ Thus, the answer is $$\mathcal{D}=\dfrac{d^4-2d ^3+d^2}{2}=\dfrac{d^2(d-1)^2}{2}.$$ \end{exa} This problem might also be of interest with regard to the study of algebraic knots, which is motivated by Viro's work \cite{V} about the rigid isotopy invariant called encomplexed writhe, as well as the works of Mikhalkin and Orevkov (see e.g. \cite{MO1}, \cite{MO2} , \cite{MO3}). Namely, it is quite natural to estimate the complexity of an algebraic knot/link (that is, the minimal crossing number) in terms of the algebraic complexity of its defining equations (that is, the size of their Newton polytopes). Theorem \ref{mainintro} gives an upper bound for the number of self-intersections of a projection of a real complete intersection curve onto a coordinate plane. In Example \ref{real} below we show that this upper bound is sharp for the case of real complete intersection links given by a pair of polynomials of given degree. Much harder problems, such as obtaining a sharp upper bound for arbitrary plane projections (not only onto coordinate planes), or topological classification of complete intersection links given by polynomials with arbitrary Newton polytopes, still remain unsolved. \begin{exa}[real version of Example \ref{simplex}]\label{real} Take two generic unions of $d$ planes as the hypersurfaces. Thus, we obtain a pair of varieties given by polynomials which are just products of the corresponding linear factors. Then, their intersection consists of $d^2$ lines, and the projection of this union of lines has $\dfrac{d^2(d^2-1)}{2}$ double points. Those double points are of two types: the ones which are stable and the ones which are not. 
The latter will disappear when we perturb the hypersurfaces. So, to find the sought number of stable double points, we need to compute the number of the double points that will disappear. Those are exactly the double points, whose preimages are intersections of a plane contained in one of the unions with the line of intersection of two planes contained in the other union. So, from $\dfrac{d^2(d^2-1)}{2}$, which is the total number of double points, we need to subtract the sum $\dfrac{d^2(d-1)}{2}+\dfrac{d^2(d-1)}{2}=d^2(d-1)$, and what we obtain as the result is exactly $\dfrac{d^2(d-1)^2}{2}$ nodes. Therefore, the upper bound obtained in the complex case is sharp for the real case. \end{exa} \noindent {\bf Methodology and structure of the paper.} The Newton polytopes of objects such as the image or the discriminant $M$ of the projection of a complete intersection are known (see \cite{E2}, \cite{EK}, \cite{GP}). Therefore a natural first step in the study of the singularities of $M$ would be passing to the corresponding toric compactification $\bar{M}.$ If $\bar{M}$ did not have any additional singularities, then the problem of describing the simplest singularity strata of $M$, such as $\mathcal A_2$ and $2\mathcal A_1$, could be solved using classical methods. Unfortunately, the compactification $\bar{M}$ does in general have singularities at infinity. Moreover, these singularities are significantly more complicated than the ones studied. Dealing with them turns out to be the most challenging part in this class of problems. We reduce it to the so-called {\it forking--path singularities}. This paper is organized as follows. Section \ref{preliminaries} is devoted to the notions and results that will be used throughout the paper. In Section \ref{mainresult} we fix the notation and state the main result of the paper: Theorem \ref{lemmain}. Section \ref{mainproof} is devoted to the proof of the main result. Finally, in Section \ref{fd} we discuss a few questions that naturally arise in the context of this article, including the tropical counterpart of the main problem. \noindent {\bf Acknowledgements.} This paper and the research behind it would not have been possible without the exceptional support, guidance and constant feedback of Alexander Esterov. I am also very grateful to Andr\'{a}s Szenes for his encouragement, help, and stimulating conversations. I would like to thank Patrick Popescu-Pampu and Grigory Mikhalkin for fruitful discussions, valuable comments and suggestions. The research was partially supported by the NCCR SwissMAP of the Swiss National Science Foundation.\\ \section{Preliminaries}\label{preliminaries} \subsection{Newton Polytopes and Bernstein--Kouchnirenko Theorem} \begin{defin} Let $f(x)=\sum_{a\in \mathbb Z^n} c_a x^a$ be a Laurent polynomial. The {\it support of $f$} is the set $\mathop{\rm supp}\nolimits(f)\subset \mathbb Z^n$ which consists of all the points $a\in \mathbb Z^n$ such that the corresponding coefficient $c_a$ of $f$ is non-zero. \end{defin} \begin{defin} The {\it Newton polytope of $f(x)$} is the convex hull of $\mathop{\rm supp}\nolimits(f)$ in $\mathbb R^n$. In other words, it is the minimal convex lattice polytope in $\mathbb R^n$ containing the set $\mathop{\rm supp}\nolimits(f)$. We denote the Newton polytope of a polynomial $f(x)$ by $\EuScript{N}(f)$. \end{defin} \begin{defin} \label{minkowski} For a pair of subsets $A, B\subset\mathbb R^n$, their {\it Minkowski sum} is defined to be the set $A+B=\{a+b\mid a\in A, b\in B\}$. 
\end{defin} The following fact provides a connection between the operations of the Minkowski addition and multiplication in the ring of Laurent polynomials. \begin{utver} Let $f,g$ be a pair of Laurent polynomials. Then we have the following equality: $$\EuScript{N}(fg)=\EuScript{N}(f)+\EuScript{N}(g).$$ \end{utver} \begin{defin}\label{suppface} Let $A\subset\mathbb R^m$ be a convex lattice polytope and $\ell\in(\mathbb R^m)^*$ be a covector. Consider $\ell$ as a linear function, and denote by $\ell\mid_A$ its restriction to the polytope $A$. The function $\ell\mid_A$ attains its maximum at some face $\Gamma\subset A$. This face is called {\it the support face} of the covector $\ell$ and is denoted by $A^{\ell}$. \end{defin} \begin{defin} Let $\gamma\neq 0$ in $(\mathbb R^n)^*$ be a covector and $f(x)$ be a Laurent polynomial with the Newton polytope $\EuScript{N}(f)$. The {\it truncation} of $f(x)$ with respect to $\gamma$ is the polynomial $f^{\gamma}(x)$ that is obtained from $f(x)$ by omitting the sum of monomials which are not contained in the support face ${\EuScript{N}(f)^{\gamma}}$. \end{defin} It is easy to show that for a system of equations $\{f_1(x)=\ldots=f_n(x)=0\}$ and an arbitrary covector $\gamma\neq 0$, the system $\{f_1^{\gamma}(x)=\ldots=f_n^{\gamma}(x)=0\}$ by a monomial change of variables can be reduced to a system in $n-1$ variables at most. Therefore, for the systems with coefficients in general position, the ``truncated" systems are inconsistent in $({\mathbb C}\setminus 0)^n$. \begin{defin} Let $(f_1,\ldots, f_n)$ be a tuple of Laurent polynomials. In the same notation as above, the system ${f_1(x)=\ldots=f_n(x)=0}$ is called {\it Newton-nondegenerate}, if for any covector $0\neq\gamma\in(\mathbb R^n)^*$ the system $\{f_1^{\gamma}(x)=\ldots=f_n^{\gamma}(x)=0\}$ is not consistent in $({\mathbb C}\setminus 0)^n$. \end{defin} \begin{defin} \label{mv} Let $\mathscr P$ be the semigroup of all convex polytopes in $\mathbb R^n$ with respect to the Minkowski addition (see Definition \ref{minkowski}). The {\it mixed volume} is a unique function $$\mathop{\rm MV}\nolimits\colon\underbrace{\mathscr{P}\times\ldots\times\mathscr{P}}_{\mbox{$n$ times}}\to\mathbb R$$ which symmetric, multilinear (with respect to the Minkowski addition) and which satisfies the following property: the equality $MV(P,\ldots, P)=\mathop{\rm Vol}\nolimits(P)$ holds for every polytope~${P\in \mathscr P}$. \end{defin} The following theorem allows to compute the number of roots for a non-degenerate polynomial system of equations in terms of the mixed volume of its Newton polytopes. \begin{theor}[Bernstein--Kouchnirenko formula, \cite{B}] The number of roots for a Newton-nondegenerate system of polynomial equations $\{f_1(x)=\ldots=f_n(x)=0\}$ in $({\mathbb C}\setminus 0)^n$ counted with multiplicities is equal to $n!\mathop{\rm MV}\nolimits(\EuScript{N}(f_1),\ldots, \EuScript{N}(f_n))$. \end{theor} \subsection{Fibers of Resultantal Sets} This Subsection is devoted to one of the key results that we will use in this paper -- Theorem \ref{num_fib}. We refer the reader to the work \cite{E1} for proofs and details. Before we introduce all the necessary notation and state the theorem itself, let us consider the following example. 
\begin{exa}\label{1_dim_exa} Let $\{f_1(x)=f_2(x)=0\}$ be a system of univariate equations with $\mathop{\rm supp}\nolimits(f_1)=\{0,2\}\subset\mathbb Z$ and $\mathop{\rm supp}\nolimits(f_2)=\{1,3\}\subset\mathbb Z.$ Thus we have $f_1(x)=a+bx^2$ and $f_2(x)=cx+dx^3.$ It is a well-known fact that such a system of equations is consistent if and only if its coefficients satisfy the equality $a(ad-bc)=0.$ Now, suppose that we take sufficiently generic polynomials $f_1(x)$ and $f_2(x)$ whose coefficients satisfy this relation. A very natural question is the following: how many solutions does such a system have? In this particular case it is quite easy to find the answer by hand: the system $\{f_1(x)=f_2(x)=0\}$ has $2$ roots. As we will see further, this answer can be obtained purely in terms of the combinatorial properties of the support sets $\mathop{\rm supp}\nolimits(f_1)$ and $\mathop{\rm supp}\nolimits(f_2).$ Namely, note that these sets can be shifted to the same proper sublattice $2\mathbb Z\subset \mathbb Z$ of index $2.$ \end{exa} Let $\mathcal A=(A_1,\ldots,A_{I})$ be a collection of finite sets $A_i\in\mathbb Z^n.$ \begin{defin} If $I\neq 0,$ then the number $I-\dim\sum\limits_{1\leqslant i\leqslant I} A_i,$ where sumation is in the sense of Minkowski, is called {\it the codimension} of the collection $\mathcal A.$ The codimension of the empty collection is set to be $0.$ \end{defin} \begin{exa} The codimension of the collection $\mathcal A=(\{0,2\},\{1,3\})$ considered in Example \ref{1_dim_exa} is equal to $1.$ \end{exa} \begin{defin} A collection $\mathcal A=(A_1,\ldots,A_{I})$ of finite sets in $\mathbb Z^n$ is called {\it essential,} if its codimension is strictly greater than the codimension of every subcollection $A_{i_1} ,\ldots,A_{i_k},$ where $\{i_1,...,i_k\}$ is a subset of $\{1,...,I\}.$ \end{defin} \begin{exa}\label{ess} The collection $\mathcal A=(\{0,2\},\{1,3\})$ considered in Example \ref{1_dim_exa} is essential, since the codimension of $\mathcal A$ is equal to $1$, while the codimensions of all of its subcollections are equal to $0.$ \end{exa} \begin{defin} A collection $\mathcal A=(A_1,\ldots,A_{I})$ of finite sets in $\mathbb Z^n$ is called {\it weakly essential,} if its codimension is not smaller than the codimension of each of the subcollections $(A_{i_1} ,\ldots,A_{i_k})$ where $\{i_1,\ldots,i_k\}\subset \{1,\ldots,I\}.$ \end{defin} \begin{exa}\label{w_ess} The collection $\mathcal A=(A_1,A_2,A_3,A_4)$ of finite subsets in $\mathbb Z^3$ shown in Figure $1$ below is weakly essential. 
\begin{center} \begin{tikzpicture} \draw[thick] (0,0)--(0,2); \draw[thick] (2,0)--(2,2)--(4,0)--(2,0); \draw[thick] (6,0)--(6,2)--(8,0)--(6,0); \draw[thick] (10,0)--(10,2)--(11,-1)--(10,0); \draw[thick] (11,-1)--(12,0)--(10,2)--(11,-1); \draw[thick, dashed] (10,0)--(12,0); \draw[fill,red] (0,0) circle [radius=0.1]; \draw[fill,red] (0,2) circle [radius=0.1]; \draw[fill,red] (2,0) circle [radius=0.1]; \draw[fill,red] (2,2) circle [radius=0.1]; \draw[fill,red] (4,0) circle [radius=0.1]; \draw[fill,red] (6,0) circle [radius=0.1]; \draw[fill,red] (6,2) circle [radius=0.1]; \draw[fill,red] (8,0) circle [radius=0.1]; \draw[fill,red] (10,0) circle [radius=0.1]; \draw[fill,red] (12,0) circle [radius=0.1]; \draw[fill,red] (10,2) circle [radius=0.1]; \draw[fill,red] (11,-1) circle [radius=0.1]; \node[left] at (0,0) {$(0,0,0)$}; \node[below] at (2,0) {$(0,0,0)$}; \node[below] at (6,0) {$(0,0,0)$}; \node[above left] at (10,0) {$(0,0,0)$}; \node[left] at (0,2) {$(1,0,0)$}; \node[right] at (2,2) {$(1,0,0)$}; \node[right] at (10,2) {$(1,0,0)$}; \node[left] at (0,2) {$(1,0,0)$}; \node[right] at (6,2) {$(1,0,0)$}; \node[below] at (4,0) {$(0,1,0)$}; \node[below] at (8,0) {$(0,1,0)$}; \node[right] at (11,-1) {$(0,0,1)$}; \node[right] at (12,0) {$(0,1,0)$}; \node[below] at (4,-1) {{\bf Figure 1.} A weakly essential collection of finite sets in $\mathbb Z^3$.}; \end{tikzpicture} \end{center} Indeed, its codimension is equal to $1,$ which is, for instance, equal to the codimension of the subcollection $(A_1,A_2,A_3).$ \end{exa} \begin{lemma} Every collection $\mathcal A=(A_1,\ldots,A_I )\subset \mathbb Z^n$ contains a unique essential subcollection of codimension not less than the codimensions of all other subcollections of $\mathcal A.$ \end{lemma} \begin{exa} The subcollection $(A_1,A_2,A_3)$ of the collection $\mathcal A=(A_1,A_2,A_3,A_4)$ in Example \ref{w_ess} is the unique essential subcollection of the same codimension as $A.$ \end{exa} \begin{defin}\label{quot_coll} Let $\mathcal A=(A_1,\ldots,A_I)$ be a collection of finite subsets of $\mathbb Z^n,$ and let $(A_{i_1},\ldots,A_{i_k})$ be oe of its subcollections. Suppose that the sum $\sum\nolimits_{j=1}^kA_{i_j}$ generates a sublattice $L \subset\mathbb Z^n.$ Consider $L'\subset\mathbb Z^n,$ the maximal sublattice of the same dimension as $L$ such that $L\subset L'.$ Denote the projection $\mathbb Z^n\twoheadrightarrow\bigslant{\mathbb Z^n}{L'}$ by $\rho.$ Then the collection of all sets of the form $\rho(A_i), i \notin \{i_1,\ldots,i_k\}.$ is called the {\it quotient collection.} \end{defin} \begin{exa}\label{quot_exa} Let us come back to the collection $\mathcal A=(A_1,A_2,A_3,A_4)$ of Example \ref{w_ess}. Take its the subcollection $(A_1,A_2,A_3).$ In this case, $L=L'=\langle (1,0,0),(0,1,0)\rangle,$ and the quotient collection $(B_1)$ that consists of the set $B_1=\rho(A_2)=\{0,1\}$ in $\bigslant{\mathbb Z^3}{L}\simeq\mathbb Z$. \end{exa} \begin{defin} In the notation of Definition \ref{quot_coll}, let $A_{i_1},\ldots,A_{i_k}$ be the essential subcollection of a collection of finite sets $\mathcal A=(A_1,\ldots,A_{I}) \subset \mathbb Z^n$ that has the same codimension as $\mathcal A$, and let $B_1,\ldots,B_{I-k}$ be the quotient collection. The {\it multiplicity} of the collection $\mathcal A$ is denoted by $d(\mathcal A)$ and defined by the following formula: $$d(\mathcal A)=|L'/L|(I - k)! 
\mathop{\rm MV}\nolimits(\mathop{\rm conv}\nolimits B_1,\ldots, \mathop{\rm conv}\nolimits B_{I-k}),$$ where $|L'/L|$ stands for the index of the sublattice $L\subset L'.$ \end{defin} \begin{rem} If the collection $\mathcal A$ is essential, then the quotient collection is empty. In this case, we follow the convention $\mathop{\rm MV}\nolimits(\varnothing)=1.$ \end{rem} \begin{exa} Let us come back to Example \ref{quot_exa}. The multiplicity $d(\mathcal A)$ of the collection $\mathcal A$ is equal to $1!~\mathrm{Length}(\mathop{\rm conv}\nolimits B_1)=1.$ \end{exa} \begin{defin} For a finite set $A\subset \mathbb Z^n, $ by $\mathbb C^A$ we denote the set of all Laurent polynomials of the form $\sum\limits_{a\in A} c_at^a$, where $a=(a_1,\ldots,a_n), t=(t_1,\ldots,t_n)$ and $t^a=t_1^{a_1}\cdot\ldots\cdot t_n^{a_n}.$ Let $A_1,\ldots,A_I \subset \mathbb Z^n$ be finite sets. We define $\Sigma(A_1,...,A_I)$ to be the closure of the set of all collections $(p_1,\ldots,p_I ) \in \mathbb C^{A_1}\oplus\ldots\oplus\mathbb C^{A_I}$ such that the set $$\{t\in ({\mathbb C}\setminus 0)^n \mid p_1(t) = \ldots = p_I (t)=0\} \subset({\mathbb C}\setminus 0)^n$$ is not empty. \end{defin} \begin{theor} Let $\mathcal A=(A_1,...,A_I)$ be a collection of finite sets in $\mathbb Z^n.$ The set $\Sigma(A_1,...,A_I)$ is irreducible, and its codimension is equal to the maximum over the codimensions of all the subcollections of the collection $\mathcal A.$ \end{theor} \begin{exa}\label{codim_s} For the sets $A_1,A_2$ in Example \ref{ess}, we have $\mathop{\rm codim}\nolimits(\Sigma(A_1,A_2))=1.$ The codimension of the set $\Sigma(A_1,A_2,A_3,A_4),$ where $A_1,A_2,A_3,A_4$ are the same as in Example \ref{w_ess}, is also equal to $1.$ \end{exa} \begin{defin} For a point $f = (f_1,...,f_I ) \in \Sigma(A_1,...,A_I),$ the $f$-fiber is defined to be the algebraic set $$\{t\in({\mathbb C}\setminus 0)^n \mid f_1(t) = \ldots = f_I (t)=0\}\subset({\mathbb C}\setminus 0)^n.$$ \end{defin} The dimension of the $f$-fiber for a generic $f\in \Sigma(A_1,...,A_I)$ can be computed via the following formula. \begin{lemma} There exists a Zariski open subset $S\subset\Sigma(A_1,...,A_I)$ such that the dimension of the $f$-fiber is equal $n -I + \mathop{\rm codim}\nolimits( \Sigma(A_1,...,A_I))$ for every $f\in S.$ \end{lemma} \begin{exa} For both of the collections of finite sets considered in Example \ref{codim_s}, the dimension of the $f$-fiber is equal to $0,$ for generic $f.$ At the same time, if instead of any of these two collections we consider their images under an inclusion into some lattice $\mathbb Z^k$ of higher dimension, then the dimension of the corresponding $f$-fiber will be non-zero. \end{exa} Now we are ready to state the main result of this Subsection. \begin{theor}\label{num_fib} Suppose that a collection $\mathcal A=(A_1,\ldots,A_I)$ of finite sets in $\mathbb Z^n$ is weakly essential. Then there exists a Zariski open subset $S\subset \Sigma(A_1,\ldots,A_I )$ such that, for every $f\in S,$ the $f$-fiber is a disjoint union of $d(\mathcal A )$ shifted copies of a subtorus of the complex torus $({\mathbb C}\setminus 0)^n.$ \end{theor} \begin{exa} Applying Theorem \ref{num_fib} to the collection $\mathcal A=(A_1,A_2,A_3,A_4)$ considered in Example \ref{w_ess}, we obtain the following statement: there exists a Zariski open subset $S\subset\Sigma(A_1,A_2,A_3,A_4)$ such that, for every $f=(f_1,f_2,f_3,f_4)\in S,$ the system $\{f_1=f_2=f_3=f_4=0\}$ has exaclty one root. 
\end{exa} \begin{exa} For the essential collection considered in Example \ref{1_dim_exa}, we have that there exists a Zariski open subset $S\subset\Sigma(A_1,A_2)$ such that for every $f=(f_1,f_2)\in S,$ the system $\{f_1=f_2=0\}$ has exaclty $2$ roots. \end{exa} \subsection{Fiber Polytopes} This Subsection contains some basic facts about fiber polytopes, that we will use throughout the paper. For more details we refer the reader to the work \cite{EK}. \begin{defin} Let $\pi\colon({\mathbb C}\setminus 0)^n\to({\mathbb C}\setminus 0)^{n-k}$ be an epimorphism of complex tori, and let $f_1,\ldots,f_{k+1}$ be Laurent polynomials on $({\mathbb C}\setminus 0)^n$ defining a complete intersection $\tilde{X}=\{f_1=\ldots=f_{k+1}=0\}$ of codimension $k+1.$ Then the Laurent polynomial $\pi_{f_1,\ldots,f_{k+1}}$ defining the image $X\subset({\mathbb C}\setminus 0)^{n-k}$ of $\tilde{X}$ under the epimorphism $\pi$ is called {\it the composite polynomial} of polynomials $f_1,\ldots,f_{k+1}$ with respect to $\pi.$ \end{defin} The composite polynomial is defined uniquely up to a monomial factor. Its Newton polytope is uniquely determined up to a shift according to Theorem \ref{comp} below. The goal of this Subsection is to describe the Newton polytope of the composite polynomial in the case when $\EuScript{N}(f_1)=\ldots=\EuScript{N}(f_{k+1})=\Delta.$ Let $\pi^{\times}\colon\mathbb Z^{n-k}\hookrightarrow\mathbb Z^n$ be the inclusion of character lattices induced by the epimorphism $\pi\colon({\mathbb C}\setminus 0)^n\to({\mathbb C}\setminus 0)^{n-k}.$ We denote $\Delta_j=\EuScript{N}(f_j)$ for $1\leqslant j\leqslant k+1,$ and ${B=\EuScript{N}(\pi_{f_1,\ldots,f_{k+1}}).}$ \begin{theor}[\cite{EK}] In the same notation as above, if the polynomials $f_1,\ldots,f_{k+1}$ are Newton-nondegenerate, then for any convex bodies $B_1,\ldots,B_{n-k-1}\subset\mathbb Z^{n-k},$ the following equation holds: \begin{equation} (n-k)!\mathop{\rm MV}\nolimits(B,B_1,\ldots,B_{n-k-1})=n!\mathop{\rm MV}\nolimits(\Delta_1,\ldots,\Delta_{k+1},\pi^{\times}B_1,\ldots,\pi^{\times}B_{n-k-1}). \end{equation} \end{theor} \begin{defin} Let $L\subset\mathbb R^n$ be a codimension $k$ vector subspace, and let $\mu$ and $\mu'$ be volume forms on $\bigslant{\mathbb R^n}{L}$ and $L$ respectively. Consider $\Delta_1,\ldots,\Delta_{k+1},$ a tuple of convex bodies in $\mathbb R^n.$ A convex body $B\subset L$ is called a {\it composite body} of $\Delta_1,\ldots,\Delta_{k+1}$ in $L,$ if, for every collection $B_1,\ldots,B_{n-k-1}\subset L,$ the following equality holds: \begin{equation}\label{vol_composite} (n-k)!\mathop{\rm MV}\nolimits_{\mu'}(B,B_1,\ldots,B_{n-k-1})=n!\mathop{\rm MV}\nolimits_{\mu\wedge\mu'}(\Delta_1,\ldots,\Delta_{k+1},B_1,\ldots,B_{n-k-1}). \end{equation} \end{defin} \begin{theor}\label{comp} In the same notation as above, for any collection of convex bodies $\Delta_1,\ldots,\Delta_{k+1}\subset\mathbb R^n$ its composite body exists. Moreover, it is unique up to a shift. \end{theor} We will now give an explicit description of the composite body for the special case ${\Delta_1=\ldots=\Delta_{k+1}=\Delta.}$ \begin{defin}\label{Mint_def} Let $L\subset\mathbb R^n$ be a codimension $k$ vector subspace, and let $\mu$ be a volume form on $\bigslant{\mathbb R^n}{L}$. 
Denote by $p$ the projection $\mathbb R^n\twoheadrightarrow \bigslant{\mathbb R^n}{L}.$For a convex body $\Delta\subset\mathbb R^n,$ the set of all points of the form $\int_{p(\Delta)}s\mu\in\mathbb R^n,$ where $s\colon p(\Delta)\to\Delta$ is a continuous section of the projection $p,$ is called the {\it Minkowski integral}, or the {\it fiber body} of $\Delta$ and is denoted by $\int p|_{\Delta}\mu.$ \end{defin} \begin{exa}\label{cylind} By $p$ we denote the projection $p\colon\mathbb R^3\twoheadrightarrow\bigslant{\mathbb R^3}{L}.$ Consider $\Delta=Q\times I,$ with $Q\subset L=\mathbb R^2$ convex and $I\subset\ker p$ a closed interval. Using Definition \ref{Mint_def}, we immediately obtain $\int p|_{\Delta}\mu=|I|\cdot Q.$ \end{exa} \begin{rem} Definition \ref{Mint_def} can be visualized as follows. Let $p\colon\mathbb R^3\to\mathbb R^2$ be the projection forgetting the last coordinate. The Minkowski integral of the convex body $\Delta$ is defined as follows: $\int p|_{\Delta}\mu=\int\Delta(t) dt,$ where $\Delta(t)$ stands for the function which maps every $s\in\mathbb R$ to $\Delta\cap\{t=s\}\subset\mathbb R^2$ (see Figure 2 below). It is similar to the usual Riemann integral, but in this case, the function takes values in convex sets and the addition operation is the Minkowski sum. The body $\Delta$ can be approximated by cylinders, whose Minkowski integrals were computed in Example \ref{cylind}. So, the integral $\int p|_{\Delta}\mu$ is then the limit of the sums $\sum\limits_{j=0}^{n}(t_{j+1}-t_{j})\Delta(t_j)$ as $n$ goes to infinity. \begin{center} \begin{tikzpicture}[scale=0.9] \draw[->, thick] (-2,0)--(4,0); \draw[->, thick] (-1.5,-1)--(-1.5,5); \node[above left] at (-1.6, 5) {$\mathbb R$}; \node[above right] at (-1.6, 4.9) {$t$}; \node[below] at (4, 0) {$\mathbb R^2$}; \draw[thick, red] (1,2.5) circle (2cm); \node[above, red] at (1,4.5) {$\Delta$}; \draw[thick, violet] (-1.5,4.5)--(4,4.5); \draw[thick, violet] (-1.5,4)--(4,4); \draw[thick, violet] (-1.5,3.5)--(4,3.5); \draw[thick, violet] (-1.5,3)--(4,3); \draw[thick, violet] (-1.5,2.5)--(4,2.5); \draw[thick, violet] (-1.5,2)--(4,2); \draw[thick, violet] (-1.5,1.5)--(4,1.5); \draw[thick, violet] (-1.5,1)--(4,1); \draw[thick, violet] (-1.5,0.5)--(4,0.5); \draw[thick, red] (1,2.5) circle (2cm); \node[left,violet] at (-1.5,4.5) {$t_0$}; \node[left,violet] at (-1.5,3) {$t_i$}; \node[left,violet] at (-1.5,2.5) {$t_j$}; \node[left,violet] at (-1.5,0.5) {$t_n$}; \node[below] at (0,-1.5) {{\bf Figure 2.} Minkowski integral as the limit of Riemann sums.}; \draw[thick, violet, fill=orange!50!white] (-0.7,3) rectangle (2.7,3.5); \draw[thick, violet, fill=orange!50!white] (-0.7,1.5) rectangle (2.7,2); \draw[thick, violet, fill=orange!50!white] (-0.9,2.5) rectangle (2.9,3); \draw[thick, violet, fill=orange!50!white] (-0.9,2) rectangle (2.9,2.5); \draw[thick, violet, fill=orange!50!white] (-0.3,3.5) rectangle (2.3,4); \draw[thick, violet, fill=orange!50!white] (-0.3,1) rectangle (2.3,1.5); \end{tikzpicture} \end{center} \end{rem} \begin{theor} In the same notation as above, for arbitrary convex bodies $B_1,\ldots,B_{n-k-1}\subset L$ the convex body $B=(k+1)!\int p|_{\Delta}\mu$ is contained in a fiber of the projection $p$ and satisfies the equality \begin{equation} (n-k)!\mathop{\rm MV}\nolimits_{\mu'}(B,B_1,\ldots,B_{n-k-1})=n!\mathop{\rm MV}\nolimits_{\mu\wedge\mu'}(\underbrace{\Delta,\ldots,\Delta}_{k+1\mbox{~times}},B_1,\ldots,B_{n-k-1}). 
\end{equation} In other words, the convex body $(k+1)!\int p|_{\Delta}\mu,$ up to a shift, is the composite body of $\underbrace{\Delta,\ldots,\Delta}_{k+1\mbox{~times}}.$ \end{theor} \begin{sledst} Let $\pi\colon({\mathbb C}\setminus 0)^n\to({\mathbb C}\setminus 0)^{n-k}$ be an epimorphism of complex tori, and let $f_1,\ldots,f_{k+1}$ be a generic tuple of Laurent polynomials on $({\mathbb C}\setminus 0)^n$ such that ${\EuScript{N}(f_j)=\Delta,~1\leqslant j\leqslant k+1.}$ Consider the complete intersection $\tilde{X}=\{f_1=\ldots=f_{k+1}=0\}\subset({\mathbb C}\setminus 0)^n.$ Then the Newton polytope of the composite polynomial of $f_1,\ldots,f_{k+1}$ with respect to $\pi$ is equal, up to a shift, to $\int p|_{\Delta}\mu.$ \end{sledst} \begin{theor}\label{faces_fib} Let $\Delta\subset\mathbb R^n$ be a convex polytope, and $\gamma\in L^*$ be a covector. Then the support face $\big(\int p|_{\Delta}\mu\big)^{\gamma}$ coincides with the Minkowski sum $$\sum\limits_{\substack{\beta\in(\mathbb R^n)^*\\ \beta|_L=\gamma}}\int p|_{\Delta^{\beta}}\mu.$$ \end{theor} Definition \ref{Mint_def} implies that if the subspace $L$ is a line with the volume form $dl,$ then the Minkowski integral $\int p|_{\Delta}\mu$ is an interval of length $n!\mathop{\rm Vol}\nolimits_{\mu\wedge dl}(\Delta).$ Combining this observation with Theorem \ref{faces_fib} we obtain the following result. \begin{lemma}\label{volumes} Let $\Delta\subset\mathbb R^n$ be a convex polytope, $P\subset\mathbb R^n$ be a $k$-dimensional subspace, and $p$ be the projection $\mathbb R^n\to\bigslant{\mathbb R^n}{P}.$ Suppose that the face $\Gamma=\big(\int p|_{\Delta}\mu\big)^{\gamma}\subset\int p|_{\Delta}\mu$ is a segment parallel to a line $L\subset P.$ Let $\mu$ and $dl$ be volume forms on $\bigslant{\mathbb R^n}{P}$ and $L$ respectively. Then the length of $\Gamma$ equals the sum $$\sum\limits_{\substack{\beta\in(\mathbb R^n)^*\\ \beta|_L=\gamma}}\mathop{\rm Vol}\nolimits_{dl\wedge p^*\mu}(\Delta^{\beta}).$$ \end{lemma} Indeed, the length of the segment $\Gamma$ in the sense of the form $dl$ is equal to the sum of volumes (in the sense of the form ${dl\wedge p^*\mu}$) of the $(n-k+1)$-dimensional faces of $\Delta$ satisfying the following property. Each of those faces is contained in an affine $(n-k+1)$-plane which intersects the plane $P$ along a line parallel to $L.$ \begin{rem} From now on, the volume form on $\mathbb R^n$ that we will use is the lattice volume form, i.e., such that the volume of the standard unit simplex is $1.$ \end{rem} \begin{exa} Let $\Delta\subset\mathbb R^n$ be the standard simplex of size $d\in\mathbb Z_+,$ and $p\colon\mathbb R^3\to\mathbb R^2$ be the projection forgetting the last coordinate. Then the fiber polytope of $\Delta$ is equal to the standard $2$-dimensional simplex of size $d^2.$ \end{exa} \subsection{The Euler Characteristic of a Complete Intersection} Here we state one of the key results that we will use in this paper, namely, the formula, which expresses the Euler characteristic of a generic complete intersection in terms of the Newton polytopes of the polynomials defining it. This amazing result was obtained by A.G. Khovanskii in the work \cite{K1} (see Theorem 2, p.44). Before stating it, let us introduce some notation. Let $\Delta_1,\ldots,\Delta_n$ be $n$-dimensional polytopes in an $n$-dimensional space. The mixed volume of these polyhedra is denoted by $\mathop{\rm MV}\nolimits(\Delta_1,\ldots,\Delta_n)$. 
Now let $F(x_1,\ldots,x_k)$ be the Taylor series of an analytic function of the $k$ variables $x_1, \ldots, x_k$ at the point $0$. We wish to determine the number $F(\Delta_1,\ldots,\Delta_k)$, and we will do is as follows: if $F$ is a monomial of degree $n$, namely, $x=x_1^{n_1}\cdot\ldots\cdot x_k^{n_k}, n_1 +\ldots+n_k=n$, then we put $$F(\Delta_1,\ldots,\Delta_k)=\mathop{\rm MV}\nolimits(\underbrace{\Delta_1,\ldots,\Delta_1}_{\text{$n_1$ times}},\ldots,\underbrace{\Delta_k,\ldots,\Delta_k}_{\text{$n_k$ times}}).$$ For monomials $F$ of degrees other than $n,$ we set $F(\Delta_1,\ldots,\Delta_k)=0.$ Then, by linearity, this definition is extended to arbitrary linear combinations of monomials. \begin{theor}\cite{K1}\label{chicomplint} Let $X$ be a variety defined in $({\mathbb C}\setminus 0)^n$ by a nondegenerate system of equations $f_l = \ldots = f_k = 0$ with Newton polyhedra $\Delta_1,\ldots, \Delta_k$. Then, in the same notation as above, we have that $$\chi(X) = \prod\Delta_i(1 + \Delta_i)^{-1}.$$ \end{theor} \begin{exa}\label{chiproj} In the same notation as above, if $k=1,$ the variety $X$ is a hypersurface in $({\mathbb C}\setminus 0)^n$ given by a polynomial $f_1$ with the Newton polytope $\Delta_1.$ In this case, by Theorem \ref{chicomplint}, we have $$\chi(X)=(-1)^{n-1}\mathop{\rm Vol}\nolimits(\Delta_1).$$ \end{exa} \begin{exa}\label{chici} Now, suppose that $X$ is a complete intersection in $({\mathbb C}\setminus 0)^n$ of codimension $n-1$ (i.e., a curve), defined by polynomials $f_1, \ldots,f_{n-1}$ with Newton polytopes $\Delta_1,\ldots,\Delta_{n-1}.$ Then using Theorem \ref{chicomplint}, we obtain the following formula: $$\chi(X)=-\mathop{\rm MV}\nolimits(\Delta_1,\ldots,\Delta_{n-1},\Delta_1+\ldots+\Delta_{n-1}).$$ \end{exa} \subsection{Forking Paths Singularities}\label{intro_fps} This subsection is devoted to the so-called {\it forking paths singularities}, introduced in the work \cite{E3}. Let $i=(i_1,i_2\ldots)$ be a sequence of integers satisfying the following properties: \begin{itemize} \item the sequence $i$ stabilizes at $1;$ \item for every $r\in\mathbb N,$ the number $i_{r+1} $ divides $i_r.$ \end{itemize} Given such a sequence $i,$ one can construct another sequence $q=(q_1,q_2\ldots)$ as follows: for every $r\in\mathbb N,$ set $q_{r}=\dfrac{i_r}{i_{r+1}}.$ This data can be encoded using a certain system of subsets of a finite set $R$ of $i_1$ elements via the so-called {\it $i$-nested boxes construction}. This construction works level by level as follows. Level $0$ consists of one box -- the set $R$ itself. To construct level $1,$ we divide the elements in $R$ into $q_1$ boxes containing $i_2$ elements each. Level $2$ is then the result of dividing the elements of each of the level $1$ boxes into $q_2$ boxes containing $i_3$ elements each. We continue this operation until we end up with $i_1$ boxes containing an element of $R$ each. The latter will happen in a finite number of steps, since the sequence $i$ stabilizes at $1.$ To illustrate the nested boxes construction, we consider the following special case. 
Let $R$ be the set of complex roots for a polynomial $z^{i_1}-1.$ We put elements $r_1,r_2$ into the same box on level $k,$ if $r_1^{i_{k+1}}=r_2^{i_{k+1}}.$ \begin{exa}\label{boxes} Consider the sequence $i=(8,4,2,1,\ldots),$ then according to the above-mentioned construction, the roots for the polynomial $z^8-1$ are placed in the following system of nested boxes: \begin{equation*} \begin{rcases*} \begin{rcases*} \begin{rcases*} \boxed{1}\\ \boxed{-1}\\ \end{rcases*}\mbox{level 2}\\ \begin{rcases*} \boxed{e^{\frac{\pi i}{2}}}\\ \boxed{e^{\frac{3\pi i}{2}}}\\ \end{rcases*}\mbox{level 2}\\ \end{rcases*}\mbox{level 1}\\ \begin{rcases*} \begin{rcases*} \boxed{e^{\frac{\pi i}{4}}}\\ \boxed{e^{\frac{5\pi i}{4}}}\\ \end{rcases*}\mbox{level 2}\\ \begin{rcases*} \boxed{e^{\frac{3\pi i}{4}}}\\ \boxed{e^{\frac{7\pi i}{4}}}\\ \end{rcases*}\mbox{level 2}\\ \end{rcases*}\mbox{level 1}\\ \end{rcases*}\mbox{level 0}\\ \end{equation*} \end{exa} Each of the elements of $R$ in the nested-boxes construction has its own {\it address}, i.e. a finite sequence of integers, constructed as follows. The $(k+1)-$th element of the address is the number of the $k-$th level box containing the given element. For any two elements $r_1,r_2$ of $R$ with addresses $(a_1,a_2,\ldots,a_N)$ and $(b_1,b_2,\ldots,b_N)$ one can define the {\it depth} of their relation as the number $\kappa(r_1,r_2)$ equal to the minimal number $K$ such that $a_{K+1}\neq b_{K+1}$ (i.e., the first level at which the two elements are placed into distinct boxes). \begin{exa} If we enumerate the boxes from top to bottom on every level in Example \ref{boxes}, the element $1$ has the address $(1, 1, 1, 1),$ while the address of $e^{\frac{\pi i}{2}}$ is $(1,1,2,1).$ The depth $\kappa(1,e^{\frac{\pi i}{2}})$ of their relation is equal to $2.$ \end{exa} \begin{defin}\label{def_fps} In the same notation as above, let $i=(i_1,i_2,\ldots)$ be an integer sequence stabilizing at $1$ and such that $i_{r+1}$ divides $i_r$ for every $r.$ With the $i-$nested boxes construction one can associate a plane singularity with $i_1$ distinct regular branches $\varphi_{r_m}\colon(\mathbb C,0)\to(\mathbb C^2,0)$ indexed by the elements of the set $R$ and such that the intersection number of $\varphi_{r_m}$ and $\varphi_{r_n}$ with $m\neq n$ is equal to $\kappa(r_m,r_n).$ We call this singularity an {\it $i-$forking paths singularity.} \end{defin} \begin{utver}\label{milnornumber}\label{chifps} The Euler characteristic $\chi(i)$ of the Milnor fiber of an $i-$forking paths singularity can be computed using the following formula: \begin{equation} \chi(i)=i_1-i_1\sum\limits_{n=1}^{\infty}(i_n-1). \end{equation} \end{utver} \begin{proof} First, we independently perturb the branches of the singularity, i.e. in such a way that for every pair of branches, their multiple intersection splits into several transverse intersections. Then the union $U$ of the perturbations has $\sum\limits_{n=1}^{\infty}\dfrac{i_1(i_n-1)}{2}$ nodes. One can prove it by induction on the ``depth'' of the corresponding nested boxes construction. {\it Base.} For $i=(i_1,1,\ldots),$ the sought number of nodes is equal to $\dfrac{i_1(i_1-1)}{2},$ since in this case, each branch transversely intersects every other branch exactly once. {\it Inductive step.} Let $i=(i_1,\ldots,i_k,1,\ldots)$ and $i'=(i_1,\ldots,i_k,i_{k+1},1,\ldots).$ Then the union of the perturbations of an $i'-$forking paths singularity has $\dfrac{i_1(i_{k+1}-1)}{2}=\dfrac{i_1}{i_{k+1}} \dfrac{i_{k+1}(i_{k+1}-1)}{2}$ more nodes than the one for an $i-$forking paths singularity.
Indeed, we add $\dfrac{i_1}{i_{k+1}}$ boxes of $i_{k+1}$ elements each, and for every such box, the corresponding branches intersect each other transversely and exactly once, which yields $\dfrac{i_{k+1}(i_{k+1}-1)}{2}$ nodes for each of the boxes. The next step is to compute the Euler characteristic of the Milnor fiber of the $i-$forking paths singularity. Here we use a fact which follows from the additivity of the Euler characteristic. \begin{utver}\label{methodmilnor} If a perturbation $U$ of an isolated hypersurface singularity $S$ has a fiber with isolated singularities $S_1,\ldots,S_k,$ then we have: $$\chi(\mathrm{Milnor~Fiber~of~ }S)=\chi(\mathrm{smooth~part~of~} U)+\sum\limits_{j=1}^{k}\chi(\mathrm{Milnor~ Fiber~of~}S_j).$$ \end{utver} In our case, the smooth part of $U$ is topologically equivalent to the union of $i_1$ complex planes with $2\sum\limits_{n=1}^{\infty}\dfrac{i_1(i_n-1)}{2}$ points punctured. The Euler characteristic of the Milnor fiber of a node is equal to $0.$ Thus we obtain the desired answer: \begin{equation} \chi(i)=i_1-i_1\sum\limits_{n=1}^{\infty}(i_n-1).\end{equation} \end{proof} \section{The Main Result}\label{mainresult} \subsection{Dramatis Person\ae} \begin{itemize} \item[--] $(x_1,\ldots,x_n,y,t),$ coordinates in $({\mathbb C}\setminus 0)^{n+2},~n\geqslant 1;$ \item[--] $(e_1,\ldots,e_{n+2}),$ the corresponding coordinate system in the character lattice $\mathbb Z^{n+2};$ \item[--] $A\subset\mathbb Z^{n+2},$ a finite subset of maximal dimension; \item[--] $\Delta=\mathop{\rm conv}\nolimits(A)\subset\mathbb R^{n+2},$ the convex hull of the set $A;$ \item[--] $\mathcal{F}(\Delta),$ the set of all facets of the polytope $\Delta;$ \item[--] $X_{\Delta},$ the toric variety associated to the polytope $\Delta;$ \item[--] $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ a tuple of polynomials supported at $A;$ \item[--] $\tilde{\mathcal C}=\{f_1=\ldots=f_{n+1}=0\}\subset({\mathbb C}\setminus 0)^{n+2},$ the complete intersection given by the polynomials $f_1,\ldots,f_{n+1};$ \item[--] $\pi\colon({\mathbb C}\setminus 0)^{n+2}\to({\mathbb C}\setminus 0)^2,$ the projection forgetting the first $n$ coordinates; \item[--] $\mathcal C\subset({\mathbb C}\setminus 0)^2,$ the closure of the image $\pi(\tilde{\mathcal{C}})\subset({\mathbb C}\setminus 0)^2;$ \item[--] $P\subset\mathbb R^2,$ the Newton polygon of the curve $\mathcal C;$ \item[--] $\mathcal S,$ the singular locus of the curve ${\mathcal C}.$ \end{itemize} \subsection{Statement of the Problem} In this subsection we give a precise formulation of the question that we address in this paper and discuss all the assumptions we make. For generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the complete intersection $\tilde{\mathcal C}=\{f_1=\ldots=f_{n+1}=0\}\subset({\mathbb C}\setminus 0)^{n+2}$ is a smooth curve and the closure $\mathcal C$ of its image under the projection $\pi$ is a plane curve in $({\mathbb C}\setminus 0)^2,$ whose singular locus consists of finitely many isolated singular points. It is quite natural to expect that under certain genericity conditions, all the singular points of the curve $\mathcal C$ are nodes. However, this is not the case for all support sets.
Moreover, the following example shows that when the support sets $\mathop{\rm supp}\nolimits(f_j)=A_j$ do not coincide, one can no longer expect the singular points of the projection of the corresponding complete intersection to be nodes even for generic polynomials $f_j\in\mathbb C^{A_j}.$ \begin{exa}\label{notnodal} Consider $A_1=\{0,1,3\}\subset \mathbb Z^1$ and $A_2=\{0,3\}\subset\mathbb Z^1.$ Let $f_1(x)$ and $f_2(x)$ be polynomials supported at $A_1$ and $A_2$ respectively. Suppose that the univariate system $\{f_1(x)=f_2(x)=0\}$ has $2$ distinct roots $r_1,r_2\in({\mathbb C}\setminus 0).$ Let us show that this system also has another root $r_3\in({\mathbb C}\setminus 0).$ Indeed, the assumption we made implies that $r_2=\alpha\cdot r_1,$ where $\alpha\neq 1$ is a cube root of unity (indeed, $f_2$ is a binomial of the form $a+bx^3,$ so $r_1^3=r_2^3$). Substituting these roots into the first equation and taking the difference, we obtain that the linear term of $f_1$ has to vanish. But then it is clear that the third root $r_3=\alpha\cdot r_2$ of the second equation is also a root for the first equation. \end{exa} Therefore, in this paper we address a slightly more general question: namely, we compute the sum of the $\delta$-invariants (for details see \S 10 in \cite{Milnor}) of the singular points of the curve $\mathcal{C}.$ On one hand, this question makes sense for any support sets. On the other hand, if the curve $\mathcal{C}$ only has nodes as singularities, then the answer is exactly the number of those nodes. \begin{rem}\label{deltarem} Roughly speaking, the $\delta$-invariant of a plane curve singularity $S$ is a non-negative integer that is equal to the number of double points concentrated at it. Let $S$ be a singularity with $b(S)$ branches. Perturb its branches in such a way that the union $U$ of the perturbations has only nodes as singularities. Denote by $N$ the number of these nodes. By Proposition \ref{methodmilnor}, the Euler characteristic of the Milnor fiber of $S$ is then equal to $b(S)-2N.$ Therefore, the Milnor number of $S$ is given by the following formula: $$\mu(S)=1-b(S)+2N,$$ or, equivalently, $$2N=\mu(S)+b(S)-1.$$ The latter then agrees with the Milnor formula $2\delta(S)=\mu(S)+b(S)-1,$ which relates the $\delta$-invariant of the singularity, its Milnor number and the number of its branches. \end{rem} \begin{prb}\label{mainprb} In the same notation as above, express the sum $\mathcal D$ of the $\delta$-invariants of the singular points of the curve $\mathcal{C}$ in terms of the set $A.$ \end{prb} Let us introduce some more notation that will be used extensively throughout the paper. Let $\tilde{\Lambda}_A\subset\mathbb Z^{n+2}$ be the sublattice generated by $A,$ and let $\Lambda_A$ be its image under the projection $\rho\colon\mathbb Z^{n+2}\twoheadrightarrow\bigslant{\mathbb Z^{n+2}}{\langle e_{n+1},e_{n+2}\rangle}.$ Then by $\mathop{\rm ind}\nolimits_v(A)$ we denote the index of $\Lambda_A$ in $\bigslant{\mathbb Z^{n+2}}{\langle e_{n+1},e_{n+2}\rangle}.$ \begin{predpol} The set $A$ contains $0\in\mathbb Z^{n+2}.$ \end{predpol} This assumption can be made since multiplication by a monomial does not change the zero set of the polynomial inside the algebraic torus. At the same time, the resulting support set is a shift of the initial one. \begin{predpol}\label{indall} The set $A$ satisfies the following property: $\mathop{\rm ind}\nolimits_v(A)=1.$ \end{predpol} \begin{rem}\label{indv} We can make this assumption due to the following reason.
$\Lambda_A$ and $\Lambda=\bigslant{\mathbb Z^{n+2}}{\langle e_{n+1},e_{n+2}\rangle}$ admit a pair of aligned bases such that $\Lambda=\bigoplus\mathbb Z w_i$ and $\Lambda_A=\bigoplus\mathbb Z a_i w_i$ for some $a_i\in\mathbb Z.$ Performing a monomial change of variables to pass from the basis $(e_1,\ldots,e_{n+2})$ to $(w_1,\ldots,w_{n},e_{n+1},e_{n+2})$ and then another change of variables of the form $\check{x}_i=x_i^{a_i},$ we will reduce our problem to the case $\mathop{\rm ind}\nolimits_v(A)=1.$ \end{rem} Let $Q=\rho(\Delta)$ be the image of the polytope $\Delta$ under the projection $\rho\colon\mathbb R^{n+2}\twoheadrightarrow\bigslant{\mathbb R^{n+2}}{\langle e_{n+1},e_{n+2}\rangle}.$ \begin{defin} We call a face $\tilde{\Gamma}\subset\Delta$ {\it horizontal}, if its projection is contained in the boundary of $Q$. We denote the set of all horizontal facets of the polytope $\Delta$ by $\mathcal{H}(\Delta).$ \end{defin} \subsection{Statement of the Main Result} Let $\Gamma\subset\Delta$ be a non-horizontal facet contained in a hyperplane given by a linear equation of the form $\ell(e_1,\ldots,e_{n+2})=c.$ The function $\ell$ is unique up to a scalar multiple, therefore, one can assume that the coefficients of $\ell$ are coprime integers and that for any $\alpha\in A\setminus\Gamma,~\ell(\alpha)<c.$ We now construct a sequence of integers $i^{\Gamma}=(i_1^{\Gamma},i_2^{\Gamma},\ldots)$ as follows. Set $B_1^{\Gamma}=A\cap\Gamma.$ For every $r>1,$ we define $$B_r^{\Gamma}=B_{r-1}^{\Gamma}\cup(A\cap\{\ell(e_1,\ldots,e_{n+2})=c-(r-1)\}).$$ Finally, for every $r\geqslant 1,$ we set $$i_r^{\Gamma}=\mathop{\rm ind}\nolimits_{v}(B_r^{\Gamma}).$$ It is clear that for every $r,$ the element $i_r^{\Gamma}$ divides $i_{r-1}^{\Gamma}.$ Moreover, since for the set $A$ we have $\mathop{\rm ind}\nolimits_v(A)=1,$ any such sequence stabilizes to $1$. \begin{theor}\label{lemmain} Let $A\subset\mathbb Z^{n+2}$ be a finite set of full dimension, satisfying Assumption \ref{indall}, and let $\Delta\subset\mathbb R^{n+2}$ be its convex hull. In the same notation as above, for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the closure $\mathcal C$ of the image of the curve $\tilde{\mathcal{C}}=\{f_1=\ldots=f_{n+1}=0\}$ under the projection $\pi\colon({\mathbb C}\setminus 0)^{n+2}\to({\mathbb C}\setminus 0)^2$ forgetting the first $n$ coordinates is an algebraic plane curve, whose singular locus $\mathcal S$ consists of isolated singular points. Then the number $\mathcal D=\sum\limits_{s\in\mathcal S}\delta(s)$ can be computed via the following formula: \begin{equation}\label{mainformula1} \mathcal D=\dfrac{1}{2}\Bigg(\mathop{\rm Area}\nolimits(P)-(n+1)\mathop{\rm Vol}\nolimits(\Delta)+\sum\limits_{\Gamma\in\mathcal H(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)-\sum\limits_{\Gamma\in\mathcal{F}(\Delta)\setminus\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum\limits_{r=1}^{\infty}(i_r^{\Gamma}-1)\Bigg), \end{equation} where $\delta(s)$ is the $\delta$-invariant of the singular point $s$, $P=\int_{\pi}(\Delta),$ the set $\mathcal{F}(\Delta)$ is the set of all facets of the polytope $\Delta,$ and $\mathcal{H}(\Delta)$ is the set of all horizontal facets of $\Delta.$ \end{theor} \section{Proof of the Main Result}\label{mainproof} This section is organized as follows. Subsections \ref{s1} and \ref{s2} are devoted to the singular points of the curve $\mathcal C$ at infinity.
There we show that for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ all the singularities of the curve $\mathcal C$ at infinity are forking path singularities (see Subsection \ref{intro_fps} for details) and compute their contribution to the Euler characteristic of the curve $\mathcal{C}.$ In Subsection \ref{main}, we compare the Euler characteristic of the curves $\tilde{\mathcal{C}}$ and $\mathcal{C}$ to deduce the main result of this paper -- Theorem \ref{lemmain}. \subsection{Order of Contact Between Branches of the Curve $\mathcal C$ at Infinity}\label{s1} We first make a very useful technical assumption that will simplify the proof of Theorem \ref{lemmain}. \begin{predpol}\label{primitive} For every primitive covector $\gamma$ such that $\Delta^{\gamma}\subset\Delta$ is a facet, its image under the projection forgetting the first $n$ coordinates is also primitive. \end{predpol} \begin{exa}\label{simpl1} Consider ${A=\{(0,0,0),(0,1,0),(0,0,1),(1,0,0),(2,0,0)\}}\subset\mathbb Z^3$ (see Figure 3 below). The covector $\gamma=(1,2,2)\in(\mathbb R^3)^*$ is supported at the facet $\Gamma_0=\mathop{\rm conv}\nolimits(\{(0,1,0),(0,0,1),(2,0,0)\})$ (hatched orange) of the polytope $\Delta=\mathop{\rm conv}\nolimits(A).$ The covector $\gamma$ is primitive, while its projection $(2,2)$ is not. At the same time, the projection of the primitive covector supported at the facet $\Gamma_1=\mathop{\rm conv}\nolimits(\{(0,0,0),(0,0,1),(2,0,0)\})$ (hatched blue) is primitive. \begin{center} \begin{tikzpicture}[scale=0.9] \pattern[pattern=north east lines, pattern color=orange] (1,0)--(2,1)--(0,5)--(1,0); \pattern[pattern=north west lines, pattern color=blue] (0,1)--(1,0)--(0,5)--(0,1); \draw[ultra thick] (0,1)--(0,5); \draw[ultra thick] (0,1)--(1,0); \draw[ultra thick] (0,5)--(1,0); \draw[ultra thick] (0,5)--(2,1); \draw[ultra thick] (2,1)--(1,0); \draw[dashed, ultra thick] (0,1)--(2,1); \draw[fill,red] (0,1) circle [radius=0.1]; \draw[fill,red] (2,1) circle [radius=0.1]; \draw[fill,red] (1,0) circle [radius=0.1]; \draw[fill,red] (0,5) circle [radius=0.1]; \draw[fill,red] (0,3) circle [radius=0.1]; \node[below] at (1,-0.3) {{\bf Figure 3.} A polytope that does not satisfy Assumption \ref{primitive}.}; \node[right] at (0,5) {(2,0,0)}; \node[left] at (0,3) {(1,0,0)}; \node[left] at (0,1) {(0,0,0)}; \node[right] at (2,1) {(0,0,1)}; \node[right] at (1,0) {(0,1,0)}; \end{tikzpicture} \end{center} \end{exa} \begin{rem}\label{primitive_exp} We make this purely technical assumption to be able to perform certain monomial changes of variables that would preserve horizontality of facets in $\Delta.$ Moreover, as we will further see, this assumption guarantees that the projection $\mathcal C$ of the complete intersection $\tilde{\mathcal C}$ has only forking--path singularities at infinity (see Lemma \ref{fpsinfinity} for details). Also note that one can always reduce the computation to the case when the polytope $\Delta$ satisfies Assumption \ref{primitive}. For instance, consider a monomial change of variables $\check{x}_i=x_i,~\check{y}^{N!}=y,~\check{t}^{N!}=t,$ for $N=k!,$ where $k=\max_{\Gamma\subset\Delta}(\mathop{\rm ind}\nolimits_v(A\cap\Gamma)).$ The polytope $\check{\Delta}$ clearly satisfies the desired condition. 
At the same time, inside the torus $({\mathbb C}\setminus 0)^2,$ the projection of the new complete intersection $\check{\mathcal C}$ defined by polynomials $\check{f_1},\ldots,\check{f}_{n+1}$ with the Newton polytope $\check{\Delta}$ can be viewed as an $(N!)^2$-covering of the initial curve $\mathcal C$, therefore, to find the number $\mathcal D$ for the curve $\mathcal C$, one can just find this number for the projection of the curve $\check{\mathcal C}$ and divide it by $(N!)^2$. Moreover, we will later see that the answer does not depend on the choice of $N,$ therefore our main result (Theorem \ref{lemmain}) holds true without this assumption. \end{rem} \begin{exa}\label{simp2} Let us come back to $\Delta$ from Example \ref{simpl1}. The monomial change $\check{x}_1=x_1,~\check{t}^{2!}=t,~\check{y}^{2!}=y$ reduces our problem to a much easier case $\check{\Delta}=\mathop{\rm conv}\nolimits(\{(0,0,0),(2,0,0),(0,2,0),(1,0,0),(0,0,2)\})$ (see Figure 4). \begin{center} \begin{tikzpicture} \pattern[pattern=north east lines, pattern color=orange] (1,0)--(2,1)--(0,5)--(1,0); \pattern[pattern=north east lines, pattern color=orange] (8,-1)--(10,1)--(6,5)--(8,-1); \draw[ultra thick] (0,1)--(0,5); \draw[ultra thick] (0,1)--(1,0); \draw[ultra thick] (0,5)--(1,0); \draw[ultra thick] (0,5)--(2,1); \draw[ultra thick] (2,1)--(1,0); \draw[ultra thick] (6,1)--(6,5); \draw[ultra thick] (6,1)--(8,-1); \draw[ultra thick] (6,5)--(8,-1); \draw[ultra thick] (6,5)--(10,1); \draw[ultra thick] (10,1)--(8,-1); \draw[dashed, ultra thick] (0,1)--(2,1); \draw[dashed, ultra thick] (6,1)--(10,1); \draw[fill,red] (0,1) circle [radius=0.1]; \draw[fill,red] (2,1) circle [radius=0.1]; \draw[fill,red] (1,0) circle [radius=0.1]; \draw[fill,red] (0,5) circle [radius=0.1]; \draw[fill,red] (0,3) circle [radius=0.1]; \draw[fill,red] (6,1) circle [radius=0.1]; \draw[fill,red] (10,1) circle [radius=0.1]; \draw[fill,red] (8,-1) circle [radius=0.1]; \draw[fill,red] (6,5) circle [radius=0.1]; \draw[fill,red] (6,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (8,1) circle [radius=0.1]; \draw [ultra thick, fill=white] (8,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (9,0) circle [radius=0.1]; \draw [ultra thick, fill=white] (7,2) circle [radius=0.1]; \draw [ultra thick, fill=white] (7,0) circle [radius=0.1]; \node[below] at (1,-0.5) {{\bf Figure 4.} The polytopes $\Delta$ and $\check{\Delta}.$}; \node[right] at (0,5) {(2,0,0)}; \node[left] at (0,3) {(1,0,0)}; \node[left] at (0,1) {(0,0,0)}; \node[right] at (2,1) {(0,0,1)}; \node[right] at (1,0) {(0,1,0)}; \node[right] at (6,5) {(2,0,0)}; \node[left] at (6,3) {(1,0,0)}; \node[left] at (6,1) {(0,0,0)}; \node[right] at (10,1) {(0,0,2)}; \node[right] at (8,-1) {(0,2,0)}; \end{tikzpicture} \end{center} After the change of variables, the primitive covector supported at $\check{\Gamma}$ (hatched orange) is equal to $(1,1,1),$ therefore, its projection $(1,1)$ is also primitive. So, Assumption \ref{primitive} is now satisfied. \end{exa} \begin{utver}\label{vert_change} Let $\Gamma=\Delta^{\gamma}\subset\Delta$ be a non-horizontal facet. 
Then, under Assumption \ref{primitive}, there exists a basis $(h_1,\ldots,h_{n+2})$ in $\mathbb Z^{n+2}$ satisfying the following conditions: \begin{itemize} \item passing from the basis $(e_1,\ldots,e_{n+2})$ to $(h_1,\ldots,h_{n+2})$ preserves the set $\mathcal{H}(\Delta)$ of the horizontal facets of $\Delta;$ \item in the basis $(h_1,\ldots,h_{n+2})$ the facet $\Gamma$ is ``vertical'', i.e., parallel to the coordinate hyperplane $\{h_{n+2}=0\}.$ \end{itemize} \end{utver} \begin{proof} Indeed, let $(\check{e}_1,\ldots,\check{e}_{n+2})$ be the basis of $(\mathbb Z^{n+2})^{*}$ dual to the basis $(e_1,\ldots,e_{n+2})$ of the lattice $\mathbb Z^{n+2}.$ Consider the primitive covector $\gamma\in(\mathbb Z^{n+2})^{*}$ that is supported at the facet $\Gamma.$ Assumption \ref{primitive} implies that the image of $\gamma$ under the projection forgetting the first $n$ coordinates is primitive, therefore, together with some integer covector $\tau=\lambda_1\check{e}_{n+1}+\lambda_2\check{e}_{n+2},$ it forms a basis of the $2$-dimensional lattice $\langle\check{e}_{n+1},\check{e}_{n+2}\rangle.$ We construct a new basis $(\check{h}_1,\ldots,\check{h}_{n+2})$ of $(\mathbb Z^{n+2})^*$ as follows. We set $$ \check{h}_i= \begin{cases} \check{e}_i,~\text{ if } 1\leqslant i\leqslant n\\ \tau,~\text{ if } i=n+1\\ \gamma,~\text{ if } i=n+2. \end{cases} $$ Let $M$ be the corresponding transition matrix. The monomial change of variables given by its transpose $M^T$ preserves horizontality of faces of $\Delta.$ Moreover, under this change of variables the facet $\Gamma$ becomes ``vertical'', since in the new coordinate system $(h_1,\ldots, h_{n+2})$ it is parallel to the coordinate hyperplane $\{h_{n+2}=0\}.$ \end{proof} \begin{exa}\label{simp3} Consider the support set $\check{A}=\{(0,0,0),(1,0,0),(2,0,0),(0,2,0),(0,0,2)\}$ and the polytope $\check{\Delta}=\mathop{\rm conv}\nolimits(\check{A})$ from Example \ref{simp2} (see Figure 5 below).
The monomial change of variables $x\mapsto\tilde{x}\tilde{t},~y\mapsto\tilde{t},~t\mapsto \tilde{y}\tilde{t}$ turns a polynomial of the form $f(x,y,t)=a+by^2+ct^2+dx+ex^2$ into the polynomial $\tilde{f}(\tilde{x},\tilde{y},\tilde{t})=a+b\tilde{t}^2+c\tilde{y}^2\tilde{t^2}+d\tilde{x}\tilde{t}+e\tilde{x}^2\tilde{t}^2.$ Under this change of variables the facet $\Gamma=\mathop{\rm conv}\nolimits(\{(2,0,0),(0,2,0),(0,0,2)\})\subset\check{\Delta}$ becomes a ``vertical'' facet $\tilde{\Gamma}=\mathop{\rm conv}\nolimits(\{(2,0,2),(0,0,2),(0,2,2)\})\subset\tilde{\Delta}.$ \begin{center} \begin{tikzpicture} \pattern[pattern=north east lines, pattern color=orange] (2,-1)--(4,1)--(0,5)--(2,-1); \pattern[pattern=north west lines, pattern color=orange] (12,1)--(14,-1)--(12,5)--(12,1); \draw[dashed, ultra thick] (0,1)--(4,1); \draw[ultra thick] (0,1)--(0,5); \draw[ultra thick] (0,1)--(2,-1); \draw[ultra thick] (0,5)--(2,-1); \draw[ultra thick] (0,5)--(4,1); \draw[ultra thick] (4,1)--(2,-1); \draw[ultra thick] (8,1)--(14,-1); \draw[ultra thick] (8,1)--(12,5); \draw[ultra thick] (12,5)--(14,-1); \draw[dashed, ultra thick] (8,1)--(12,1); \draw[dashed, ultra thick] (12,1)--(14,-1); \draw[dashed, ultra thick] (12,1)--(12,5); \draw[fill,red] (0,1) circle [radius=0.1]; \draw[fill,red] (4,1) circle [radius=0.1]; \draw[fill,red] (2,-1) circle [radius=0.1]; \draw[fill,red] (0,5) circle [radius=0.1]; \draw[fill,red] (0,3) circle [radius=0.1]; \draw[fill,red] (8,1) circle [radius=0.1]; \draw[fill,red] (12,1) circle [radius=0.1]; \draw[fill,red] (12,5) circle [radius=0.1]; \draw[fill,red] (14,-1) circle [radius=0.1]; \draw[fill,red] (10,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (2,1) circle [radius=0.1]; \draw [ultra thick, fill=white] (2,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (3,0) circle [radius=0.1]; \draw [ultra thick, fill=white] (1,2) circle [radius=0.1]; \draw [ultra thick, fill=white] (1,0) circle [radius=0.1]; \draw [ultra thick, fill=white] (10,1) circle [radius=0.1]; \draw [ultra thick, fill=white] (13,0) circle [radius=0.1]; \draw [ultra thick, fill=white] (11,0) circle [radius=0.1]; \draw [ultra thick, fill=white] (12,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (13,2) circle [radius=0.1]; \node[below] at (8,-1) {{\bf Figure 5.} Making a facet ``vertical''.}; \node[right] at (0,5) {(2,0,0)}; \node[left] at (0,3) {(1,0,0)}; \node[left] at (0,1) {(0,0,0)}; \node[right] at (4,1) {(0,0,2)}; \node[right] at (2,-1) {(0,2,0)}; \node[above left] at (12,1) {(0,0,2)}; \node[left] at (8,1) {(0,0,0)}; \node[left] at (10,3) {(1,0,1)}; \node[right] at (12,5) {(2,0,2)}; \node[right] at (14,-1) {(0,2,2)}; \end{tikzpicture} \end{center} \end{exa} \begin{lemma}\label{inf_br} For generic $f_1,\ldots, f_{n+1}\in\mathbb C^A,$ every limiting point $p$ of the curve $\mathcal C$ in the toric variety $X_P$ is contained in the $1$-dimensional orbit $\mathcal O_{\Gamma}$ associated to some edge $\Gamma\subset P.$ Moreover, under Assumption \ref{primitive}, in a small neighborhood of $p$ every branch of the curve $\mathcal C$ is a smooth curve that intersects $\mathcal O_{\Gamma}$ transversally. \end{lemma} \begin{proof} The first statement is a corollary of Theorem 4.10 in \cite{EK}. 
Moreover, this result implies that in a small neighborhood of $p,$ each branch of the curve $\mathcal C$ is the projection of the intersection of the curve $\tilde{\mathcal C}$ with a small neighborhood of some limiting point $q$ in the orbit $\mathcal O_{\tilde{\Gamma}}\subset X_{\Delta},$ where $\tilde{\Gamma}\subset \Delta$ is some facet parallel to the edge $\Gamma.$ It is well known that for generic $f_1,\ldots, f_{n+1}\in\mathbb C^A,$ the complete intersection $\tilde{\mathcal C}$ in a small neighborhood of $q$ is smooth and intersects $\mathcal O_{\tilde{\Gamma}}$ transversally. Let us now assume that the edge $\Gamma\subset P$ is contained in a coordinate axis, and the facet $\tilde{\Gamma}$ is ``vertical'' and is contained in the coordinate hyperplane whose image under the projection is that coordinate axis. In this setting, the torus $({\mathbb C}\setminus 0)^2\subset X_{P}$ and the orbit $\mathcal O_{\Gamma}$ can be considered as dense subsets of the plane $\mathbb C^2$ together with one of its coordinate axes respectively, while the torus $({\mathbb C}\setminus 0)^{n+2}\subset X_{\Delta}$ and the orbit $\mathcal O_{\tilde{\Gamma}}$ can be considered as dense subsets of $\mathbb C^{n+2}$ together with one of its coordinate hyperplanes. Therefore, $\pi$ is just the projection from $\mathbb C^{n+2}$ to $\mathbb C^2$ forgetting the first $n$ coordinates, which maps the torus $({\mathbb C}\setminus 0)^{n+2}$ to the torus $({\mathbb C}\setminus 0)^2,$ the orbit $\mathcal O_{\tilde{\Gamma}}$ to the orbit $\mathcal O_{\Gamma}$ and the point $q$ to the point $p.$ Thus, under this assumption, the statement of the lemma is reduced to the following trivial statement: if a curve $C\subset \mathbb C^{n+2}$ is smooth and intersects a ``vertical'' coordinate hyperplane transversally at the point $q,$ then its image $\pi(C)$ is a smooth curve which transversally intersects the corresponding coordinate axis at the point $p=\pi(q).$ Finally, due to Assumption \ref{primitive}, the general statement of the lemma can be reduced to the special case considered above by applying a certain monomial change of variables preserving horizontality of facets of $\Delta.$ The existence of such a change of variables follows from Proposition \ref{vert_change}. \end{proof} \begin{exa} Let $\pi\colon({\mathbb C}\setminus 0)^2\to({\mathbb C}\setminus 0)$ be the projection given by $\pi\colon (x,y)\mapsto y,$ and consider the set $A=\{(0,0),(2,0),(0,1)\}.$ Its convex hull $\Delta$ does not satisfy Assumption \ref{primitive}. Take a polynomial $f$ with $\mathop{\rm supp}\nolimits(f)=A.$ Up to a scalar multiple, its truncation $f^{\Gamma},$ where $\Gamma=\mathop{\rm conv}\nolimits(\{(2,0),(0,1)\}),$ is a polynomial of the form $\alpha x^2-y,$ so the intersection of the branches of the curve $C$ defined by $f$ and the $1-$dimensional orbit of $X_{\Delta}$ corresponding to $\Gamma$ is not transverse. Now, consider the change of variables $\check{x}=x,~\check{y}^2=y.$ Then the truncation $\check{f}^{\check{\Gamma}}$ after this change of variables is the polynomial $\alpha\check{x}^2-\check{y}^2,$ whose zero set is a union of two lines $\{\sqrt{\alpha}\check{x}\pm\check{y}=0\}$ that do intersect the corresponding orbit of the toric variety $X_{\check{\Delta}}$ transversally.
Note that, inside the torus $({\mathbb C}\setminus 0)^2,$ the curve $\check{C}$ defined by $\check{f}$ is a $2$-covering of the curve $C$ defined by $f.$ \end{exa} Consider a $1$-dimensional orbit $\mathcal O_{\Gamma}$ of the toric variety $X_P$ containing multiple points of the closure of the curve $\mathcal{C}.$ Let $\Gamma\subset P$ be the corresponding edge of the polygon $P$ and $\gamma\in(\mathbb Z^2)^*$ be the primitive covector supported at $\Gamma.$ \begin{predpol}\label{facets} For any pair of facets $\tilde{\Gamma}=\Delta^{\gamma_1},\tilde{\Gamma}'=\Delta^{\gamma_2},~\gamma_1\neq\gamma_2$ parallel to the edge $\Gamma$, we have $$\pi(\{f_1^{\tilde{\Gamma}}=\ldots=f_{n+1}^{\tilde{\Gamma}}=0\})\cap\pi(\{f_1^{\tilde{\Gamma}'}=\ldots=f_{n+1}^{\tilde{\Gamma}'}=0\})=\varnothing.$$ \end{predpol} \begin{utver}\label{diff_facets} The tuples of polynomials $f_1,\ldots,f_{n+1}\in\mathbb C^A$ satisfying Assumption \ref{facets} form an everywhere dense Zariski open subset. \end{utver} \begin{proof} It is clear that this set is Zariski open. Therefore, one just has to show that it is not empty. Indeed, take a tuple of polynomials that does not satisfy the assumption. Since $\tilde{\Gamma}\neq\tilde{\Gamma'},$ there exists a monomial $f$ belonging to the set $A\cap\tilde{\Gamma}$ but not to the set $\tilde{\Gamma'}.$ Perturbing its coefficient in one of the polynomials $f_j$, for instance, in $f_{1},$ we do not change any of the roots for the truncated system $\{f_1^{\tilde{\Gamma}'}=\ldots=f_{n+1}^{\tilde{\Gamma}'}=0\}.$ At the same time, this system has only finitely many roots, and, thus, for generic $\varepsilon,$ the projections of the roots for the system $\{f_1^{\tilde{\Gamma}}+\varepsilon\cdot f=\ldots=f_j^{\tilde{\Gamma}}=\ldots=f_{n+1}^{\tilde{\Gamma}}=0\}$ do not coincide with those of the roots for $\{f_1^{\tilde{\Gamma}'}=\ldots=f_{n+1}^{\tilde{\Gamma}'}=0\}.$ \end{proof} The integer length of an edge $\Gamma=P^{\gamma}\subset P$ equals the sum of the integer volumes of all the facets $\tilde{\Gamma}\subset\Delta$ that are parallel to $\Gamma$ (see Lemma \ref{volumes} for details). Proposition \ref{diff_facets} implies that, for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ each of these facets contributes to its own subset of multiple points in the orbit $\mathcal O_{\Gamma}\subset X_{P},$ and, for any pair of facets $\tilde{\Gamma}_1, \tilde{\Gamma}_2$ parallel to $\Gamma,$ the corresponding subsets do not intersect. Therefore, without loss of genericity, one can consider the case when there is a unique facet $\tilde{\Gamma}\subset\Delta,$ such that the projection of the primitive covector supported at it coincides with $\gamma.$ In this setting, the polynomial $h^{\Gamma}$ has $\mathop{\rm Vol}\nolimits(\tilde{\Gamma})$ roots if counted with multiplicities, and those multiplicities are equal to $\mathop{\rm ind}\nolimits_v(A\cap\tilde{\Gamma})$ (for details, see Theorem 5.3 in \cite{M}). Let us also suppose that the facet $\tilde{\Gamma}$ is ``vertical'', i.e., the restrictions $f_1^{\tilde{\Gamma}},\ldots,f_{n+1}^{\tilde{\Gamma}}$ are polynomials in $x_1,\ldots,x_n, y.$ Due to Assumption \ref{primitive}, one can always achieve that by performing a certain monomial change of variables preserving horizontality of faces of the polytope $\Delta$ (see Proposition \ref{vert_change}). 
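The discussion above can be illustrated by a small computation. The following is a minimal computational sketch (it is not used in any of the proofs and assumes that the SymPy library is available). The support set $A=\{(0,0,0),(2,0,0),(0,2,0),(0,0,2),(1,0,1)\}$ is the one that also appears in Examples \ref{slice} and \ref{ord_cont} below; random integer coefficients stand in for generic ones, the ad hoc helper \texttt{random\_poly} is not part of any library, and the resultant with respect to $x$ serves as a stand-in for a defining polynomial $h$ of the curve $\mathcal C.$ For generic coefficients, $h$ has total degree $4,$ so the Newton polygon $P$ is the standard simplex of size $4,$ and the restriction $h|_{t=0}$ is a perfect square, in agreement with the fact that the roots of the truncated system come in pairs of multiplicity $\mathop{\rm ind}\nolimits_v(A\cap\tilde{\Gamma})=2.$
\begin{verbatim}
import random
import sympy as sp

x, y, t = sp.symbols('x y t')

def random_poly():
    # ad hoc stand-in for a generic polynomial supported at
    # A = {(0,0,0), (2,0,0), (0,2,0), (0,0,2), (1,0,1)}
    c = [random.randint(1, 9) for _ in range(5)]
    return c[0] + c[1]*x**2 + c[2]*y**2 + c[3]*t**2 + c[4]*x*t

f1, f2 = random_poly(), random_poly()

# eliminate x: the resultant vanishes on the closure of the projection
h = sp.expand(sp.resultant(f1, f2, x))

# expected total degree 4: the Newton polygon is the size-4 simplex
print(sp.Poly(h, y, t).total_degree())

# restriction to the edge {t = 0}: every irreducible factor appears
# with exponent 2, i.e. the roots come in pairs of multiplicity 2
print(sp.factor_list(h.subs(t, 0)))
\end{verbatim}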
Let $p$ be a multiple point of intersection of the closure of the curve $\mathcal C$ with the $1$-dimensional orbit $\mathcal O_{\Gamma}$ of the toric variety $X_{P}$ corresponding to the edge $\Gamma.$ Lemma \ref{inf_br} implies that in a small neighborhood of the point $p,$ the curve $\mathcal C$ is a union of smooth branches intersecting $\mathcal O_{\Gamma}$ transversally. We will now compute the order of contact between any two of those branches in terms of the support set $A.$ In the special case we are now considering, the polynomials $f_i,~1\leqslant i\leqslant n+1,$ can be written in the following form: $f_i=g_i(x_1,\ldots,x_n,y)+\sum_{m=1}^{\infty} t^m\tilde{g}_{i,m}(x_1,\ldots,x_n,y),$ where $g_i=f_i^{\tilde{\Gamma}}(x_1,\ldots,x_n,y),$ and $\tilde{g}_{i,m}$ are polynomials in the variables $x_1,\ldots,x_n,y.$ \begin{exa}\label{slice} Consider the support set $A=\{(0,0,0),(1,0,1),(2,0,0),(0,2,0),(0,0,2)\}$ and the polytope $\Delta=\mathop{\rm conv}\nolimits(A)$ (see Figure 6 below). \begin{center} \begin{tikzpicture} \pattern[pattern=north west lines, pattern color=orange] (0,1)--(2,-1)--(0,5)--(0,1); \draw[ultra thick,orange] (0,1)--(0,5); \draw[ultra thick,orange] (0,1)--(2,-1); \draw[ultra thick,orange] (0,5)--(2,-1); \pattern[pattern=north west lines, pattern color=orange] (2,3)--(3,0)--(2,1)--(2,3); \draw[ultra thick,orange] (2,3)--(3,0); \draw[dashed, ultra thick,orange] (2,3)--(2,1); \draw[dashed, ultra thick,orange] (2,1)--(3,0); \draw[ultra thick] (0,5)--(4,1); \draw[ultra thick] (4,1)--(2,-1); \draw[dashed, ultra thick] (0,1)--(4,1); \draw[fill,red] (0,1) circle [radius=0.1]; \draw[fill,red] (4,1) circle [radius=0.1]; \draw[fill,red] (2,-1) circle [radius=0.1]; \draw[fill,red] (0,5) circle [radius=0.1]; \draw[fill,red] (2,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (2,1) circle [radius=0.1]; \draw [ultra thick, fill=white] (0,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (3,0) circle [radius=0.1]; \draw [ultra thick, fill=white] (1,2) circle [radius=0.1]; \draw [ultra thick, fill=white] (1,0) circle [radius=0.1]; \node[below] at (2,-1.3) {{\bf Figure 6.} Slicing the polytope $\Delta$.}; \node[right] at (0,5) {(2,0,0)}; \node[right] at (2,3) {(1,0,1)}; \node[left] at (0,1) {(0,0,0)}; \node[right] at (4,1) {(0,0,2)}; \node[right] at (2,-1) {(0,2,0)}; \end{tikzpicture} \end{center} A polynomial $f_j(x_1,y,t)$ supported at the set $A$ can be written in the form $f_j(x_1,y,t)=f_j^{\tilde{\Gamma}}(x_1,y)+t\cdot\tilde{g}_{j,1}(x_1,y)+t^2\cdot\tilde{g}_{j,2}(x_1,y)=(a+bx_1^2+cy^2)+t\cdot \alpha x_1+t^2\cdot \beta.$ \end{exa} Let $p=(u,0)$ be a multiple intersection point of the curve $\mathcal{C}$ with the $1$-dimensional orbit of the toric variety $X_{P}$ corresponding to the edge $\Gamma\subset P.$ Consider a pair of branches of the curve $\mathcal{C}$ passing through the point $p.$ Then the preimages of these two branches of the curve $\mathcal C$ intersect the orbit corresponding to the facet $\tilde{\Gamma}\subset\Delta$ at the points $p_1=(v_1,\ldots,v_n,u,0), p_2=(v'_1,\ldots,v'_n,u,0)$ respectively. Moreover, for every $1\leqslant i\leqslant n+1,$ we have $g_i(p_1)=g_i(p_2).$ Due to Assumption \ref{indall}, the support set $A$ satisfies $\mathop{\rm ind}\nolimits_v(A)=1$.
Therefore there exists a number $K\in\mathbb N$ such that $\tilde{g}_{i,m}(p_1)=\tilde{g}_{i,m}(p_2)$ for all $1\leqslant i\leqslant n+1$ and all $m<K,$ while $\tilde{g}_{i,K}(p_1)\neq\tilde{g}_{i,K}(p_2)$ for at least one $1\leqslant i\leqslant n+1.$ We denote this number by $K(p_1,p_2)$ to emphasize its dependence on the points $p_1,p_2.$ \begin{exa}\label{ord_cont} Consider the same support set $A$ and simplex $\Delta$ as in Example \ref{slice}. The projection $\mathcal C$ of the complete intersection given by generic polynomials $f_1,f_2$ supported at $A$ has two multiple points on the $1-$dimensional orbit of the toric variety $X_{P}$ corresponding to the edge $\Gamma=P^{(0,-1)}$. The polygon $P$ is a standard simplex of size $4,$ and since $\mathop{\rm ind}\nolimits_v(A\cap\tilde{\Gamma})=2,$ each of the multiple points has $2$ preimages. Take any of them, and denote it by $p.$ Its preimages $p_1,p_2$ are not distinguished by $f_j^{\tilde{\Gamma}},$ while the polynomials $\tilde{g}_{j,1}$ do distinguish them. Therefore, we have $K(p_1,p_2)=1.$ \end{exa} \begin{lemma}\label{ordcontact} In the same notation as above, for a generic tuple of polynomials $f_1,\ldots,f_{n+1}\in\mathbb C^A$ the order of contact between the projections of the branches of the curve $\tilde{\mathcal C}$ passing through the points $p_1,p_2,$ such that $\pi(p_1)=\pi(p_2)=p,$ is equal to $K(p_1,p_2).$ \end{lemma} \begin{proof} Let $K=K(p_1,p_2),$ where $p_1,p_2$ are as described above. Let us first establish a lower bound for the sought order of contact. \begin{utver}\label{lowerbound} The order of contact between the projections of the branches of $\tilde{\mathcal C}$ passing through the points $p_1,p_2$ is greater than or equal to $K.$ \end{utver} \begin{proof} Note that the points $p_1=(v_1,\ldots,v_n,u,0), p_2=(v'_1,\ldots,v'_n,u,0)$ under consideration are related in the following way. For some integers $a_1,\ldots,a_n,$ we have $v'_j=r_jv_j,$ where $r_j$ is some $(a_j)$-th root of unity. Also, note that rescaling the coordinate system in the same manner, i.e., performing the change of variables $\check{x}_j=r_jx_j,$ does not affect the monomials of $f_1,\ldots,f_{n+1}$ that do not distinguish the points $p_1$ and $p_2.$ For $1\leqslant j\leqslant n+1,$ denote by $F_j$ the part of $f_j$ which is invariant under this rescaling, and by $G_j$ the part that is not.
Now, let us compute the order of contact at the point $p_1$ between the complete intersection curves $\tilde{\mathcal C}=\{f_1=\ldots=f_{n+1}=0\}$ and $X=\{F_1=\ldots=F_{n+1}=0\}.$ Let the curve $X$ be locally parametrized, i.e., in a neighborhood of the point $p_1,$ the curve $X$ is the image of a parametrization map $s\mapsto\varphi(s)=(\varphi_1(s),\ldots,\varphi_n(s),\varphi_{n+1}(s), \varphi_{n+2}(s)).$ Note also that since the $(n+2)$-th coordinate of $p_1$ is $0,$ the Taylor series of $\varphi_{n+2}$ at $s=0$ has no constant term, therefore it starts with a term of degree at least $1$ in $s.$ Substituting $\varphi(s)$ into the system $\{f_1=\ldots=f_{n+1}=0\}$ and using that $F_j(\varphi(s))$ vanishes identically, we obtain the following system: $$\{G_1(\varphi(s))=\ldots=G_{n+1}(\varphi(s))=0\}.$$ The way the number $K=K(p_1,p_2)$ was defined implies that each polynomial $G_j$ only involves monomials of degree at least $K$ in the variable $t.$ Thus, for $1\leqslant j\leqslant n+1,$ the series $G_j(\varphi(s))$ vanishes to order at least $K$ in $s.$ So, the order of contact between $X$ and $\tilde{\mathcal C}$ at $p_1$ is at least $K.$ The same is obviously true for the order of contact between $X$ and $\tilde{\mathcal C}$ at $p_2.$ Now we note that the change of variables $\check{x}_j=r_jx_j$ maps the branch of the curve $X$ passing through $p_1$ to the one passing through $p_2$ and at the same time does not change the defining polynomials of $X.$ Moreover, since this change of variables does not affect the last two of the $n+2$ coordinates, the projections of the two above-mentioned branches of $X$ coincide in some neighborhood of $p=\pi(p_1)=\pi(p_2).$ Finally, the projection $\pi$ does not decrease the order of contact between curves, which yields the desired inequality. \end{proof} \begin{utver}\label{transverse} In the same notation as above, suppose that $K(p_1,p_2)=1.$ Then for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the projections of the branches of $\tilde{\mathcal C}$ that pass through $p_1$ and $p_2$ intersect transversally. \end{utver} \begin{proof} The idea is to compare the projections of the tangent lines to the curve $\tilde{\mathcal C}$ at the points $p_1$ and $p_2$ and make sure that for almost all $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ they do not coincide. To find the tangent lines at $p_1$ and $p_2$ we need to compute the kernel of the $(n+1)$--differential form $\bigwedge_{i=1}^{n+1} df_i$ at those points.
Moreover, since we need to compare the projections of the tangent lines, the only components of $\bigwedge_{i=1}^{n+1} df_i$ that we have to look at are $dx_1\wedge\ldots\wedge dx_n\wedge dt$ and $dx_1\wedge\ldots\wedge dx_n\wedge dy.$ In the setting that is being considered, the corresponding coefficients will be the last two minors $M_{n+1}$ and $M_{n+2}$ of the $(n+1)\times(n+2)$--matrix $\mathcal J$ evaluated at the points $p_1$ and $p_2,$ where $$\mathcal J=\begin{pmatrix} \frac{\partial g_1}{\partial x_1}&\dots&\frac{\partial g_1}{\partial x_n}&\frac{\partial g_1}{\partial y}&\tilde{g}_{1,1}\\ \frac{\partial g_2}{\partial x_1}&\dots&\frac{\partial g_2}{\partial x_n}&\frac{\partial g_2}{\partial y}&\tilde{g}_{2,1}\\ \vdots & \ddots & \vdots & \vdots & \vdots \\ \frac{\partial g_{n+1}}{\partial x_1}&\dots&\frac{\partial g_{n+1}}{\partial x_n}&\frac{\partial g_{n+1}}{\partial y}&\tilde{g}_{n+1,1}\\ \end{pmatrix}.$$ Therefore, we need to show that for generic $f_1,\ldots,f_{n+1},$ we have \begin{equation}\label{jac} \begin{vmatrix} M_{n+1}(p_1)&M_{n+2}(p_1)\\ M_{n+1}(p_2)&M_{n+2}(p_2)\\ \end{vmatrix}\neq 0. \end{equation} The condition (\ref{jac}) is clearly algebraic, so, the set of tuples $f_1,\ldots,f_{n+1}\in\mathbb C^A$ satisfying it is Zariski open. To make sure it is everywhere dense, we need to show that it is non-empty. Indeed, suppose that for some $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ we have \begin{equation}\label{tangentcond} \begin{vmatrix} M_{n+1}(p_1)&M_{n+2}(p_1)\\ M_{n+1}(p_2)&M_{n+2}(p_2)\\ \end{vmatrix}= 0. \end{equation} This is equivalent to the following system of equations: \begin{equation}\label{syst} \begin{cases} M_{n+1}(p_1)=\lambda M_{n+2}(p_1)\\ M_{n+1}(p_2)=\lambda M_{n+2}(p_2) \end{cases} \end{equation} for some $\lambda\in\mathbb C.$ The minors $M_{n+1}$ and $M_{n+2}$ are given by the following formulas: \begin{equation}\label{Minors} M_{n+1}=\begin{vmatrix} \frac{\partial g_1}{\partial x_1}&\dots&\frac{\partial g_1}{\partial x_n}&\tilde{g}_{1,1}\\ \frac{\partial g_2}{\partial x_1}&\dots&\frac{\partial g_2}{\partial x_n}&\tilde{g}_{2,1}\\ \vdots & \ddots & \vdots & \vdots \\ \frac{\partial g_{n+1}}{\partial x_1}&\dots&\frac{\partial g_{n+1}}{\partial x_n}&\tilde{g}_{n+1,1}\\ \end{vmatrix}, ~M_{n+2}=\begin{vmatrix} \frac{\partial g_1}{\partial x_1}&\dots&\frac{\partial g_1}{\partial x_n}&\frac{\partial g_1}{\partial y}\\ \frac{\partial g_2}{\partial x_1}&\dots&\frac{\partial g_2}{\partial x_n}&\frac{\partial g_2}{\partial y}\\ \vdots & \ddots & \vdots & \vdots \\ \frac{\partial g_{n+1}}{\partial x_1}&\dots&\frac{\partial g_{n+1}}{\partial x_n}&\frac{\partial g_{n+1}}{\partial y}\\ \end{vmatrix}. \end{equation} The matrices in (\ref{Minors}) are almost identical except for the last column; let us expand their determinants along this column. Then, we obtain: \begin{align}\label{determexpand} M_{n+1}=\sum_{j=1}^{n+1}(-1)^{n+1+j}\tilde{g}_{j,1}D_j=(-1)^{n+2}\tilde{g}_{1,1}D_1+\sum_{j=2}^{n+1}(-1)^{n+1+j}\tilde{g}_{j,1}D_j;\\ M_{n+2}=\sum_{j=1}^{n+1}(-1)^{n+1+j}\frac{\partial g_j}{\partial y}D_j=(-1)^{n+2}\frac{\partial g_1}{\partial y}D_1+\sum_{j=2}^{n+1}(-1)^{n+1+j}\frac{\partial g_j}{\partial y}D_j, \end{align} where $D_j$ is the determinant of the $n\times n$ matrix obtained from $\mathcal J$ by deleting its $j$-th row and its last two columns.
We have that for generic $f_1,\ldots,f_{n+1},$ the intersection points of the curve $\tilde{\mathcal{C}}$ with the orbit of $X_{\Delta}$ corresponding to the facet $\tilde{\Gamma}$ are non-singular, therefore the minor $M_{n+2}$ does not vanish at the points $p_1$ and $p_2.$ Thus, at least one of the cofactors $D_j$ is not zero. Without loss of generality, let us assume that the cofactor $D_1$ is not zero. Substituting equations in (\ref{determexpand}) into the system (\ref{syst}), we obtain: \begin{equation}\label{syst1} \begin{cases} (-1)^{n+2}(\tilde{g}_{1,1}-\lambda\frac{\partial g_1}{\partial y})D_1\mid_{p_1}=\sum_{j=2}^{n+1}(-1)^{n+j}(\tilde{g}_{j,1}-\lambda\frac{\partial g_j}{\partial y})D_j\mid_{p_1}\\ (-1)^{n+2}(\tilde{g}_{1,1}-\lambda\frac{\partial g_1}{\partial y})D_1\mid_{p_2}=\sum_{j=2}^{n+1}(-1)^{n+j}(\tilde{g}_{j,1}-\lambda\frac{\partial g_j}{\partial y})D_j\mid_{p_2}. \end{cases} \end{equation} Now let us note that if the polynomial $\tilde{g}_{1,1}$ distinguishes the points $p_1$ and $p_2,$ then so does at least one of its monomials. If we change the coefficient of this monomial in $\tilde{g}_{1,1},$ then the left-hand sides of the equalities in (\ref{syst1}) change independently, while the right-hand sides remain unchanged. Therefore the equalities no longer hold, so the determinant in (\ref{tangentcond}) does not vanish anymore. \end{proof} It remains to show that Proposition \ref{transverse} together with Proposition \ref{lowerbound} implies Lemma \ref{ordcontact} for $K\neq 1.$ Let us note that if all the polynomials $\tilde{g}_{i,m}$ for all $1\leqslant i\leqslant n+1$ and all $m$ except for $m=0$ and $m=K$ are zero, then the proof is the same as in the case $K=1,$ up to a change of variable $\check{t}=t^{K}.$ Therefore, we proved that for almost all $f_1,\ldots, f_{n+1}$ contained in the linear subspace of $\mathbb C^A$ given by the equations $c_{\alpha}=0,$ where $\alpha\notin\mathop{\rm supp}\nolimits(f_i^{\tilde{\Gamma}})\cup\mathop{\rm supp}\nolimits(\tilde{g}_{i,K}),$ the order of contact of the projections of the branches passing through $p_1,p_2$ is equal to $K=K(p_1,p_2).$ Finally, the intersection index is upper semi-continuous. Therefore, perturbing the coefficients of the monomials $\alpha\in \mathop{\rm supp}\nolimits(\tilde{g}_{i,m}),~1\leqslant m\leqslant K-1,$ does not increase the sought order of contact, so it remains less than or equal to $K.$ At the same time, Proposition \ref{lowerbound} gives the lower bound for the sought order of contact, which is also equal to $K.$ Thus Lemma \ref{ordcontact} is proved. \end{proof} \subsection{Singular Points of the Curve $\mathcal C$ at Infinity}\label{s2} Let $\Gamma\subset\Delta$ be a facet contained in a hyperplane given by a linear equation of the form $\ell(e_1,\ldots,e_{n+2})=c.$ We now construct a sequence of integers $i^{\Gamma}=(i_1^{\Gamma},i_2^{\Gamma},\ldots)$ as follows. Set $B_1^{\Gamma}=A\cap\Gamma.$ For every $r>1,$ we define $$B_r^{\Gamma}=B_{r-1}^{\Gamma}\cup(A\cap\{\ell(e_1,\ldots,e_{n+2})=c\pm (r-1)\}),$$ depending on the way $\Delta$ is positioned relative to the hyperplane containing $\Gamma.$ Finally, for every $r\geqslant 1,$ we set $$i_r^{\Gamma}=\mathop{\rm ind}\nolimits_{v}(B_r^{\Gamma}).$$ \begin{rem}\label{stabind} It is clear that for every $r,$ the element $i_r^{\Gamma}$ divides $i_{r-1}^{\Gamma}.$ Moreover, since for the set $A$ we have $\mathop{\rm ind}\nolimits_v(A)=1,$ any such sequence stabilizes to $1$. \end{rem} In the same notation as above, we state the following result.
\begin{lemma} \label{fpsinfinity} For a generic tuple of polynomials $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ all the singularities of the curve $\mathcal C$ at infinity are $i^{\Gamma}-$forking paths singularities for non-horizontal facets $\Gamma\subset\Delta.$ \end{lemma} \begin{proof} This statement is a straightforward corollary of Lemma \ref{inf_br}, Lemma \ref{ordcontact} and Definition \ref{def_fps}. \end{proof} \subsection{Comparing the Euler Characteristics of the Complete Intersection and Its Projection}\label{main} First, let us make the following important observation. \begin{utver} If $A\subset\mathbb Z^{n+2}$ satisfies Assumption \ref{indall}, then for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ a generic point $p\in\mathcal{C}$ has exactly one preimage. Moreover, the set of points in $\mathcal{C}$ that have more than one preimage is finite. \end{utver} \begin{proof} The first of the statements follows from Assumption \ref{indall} together with Theorem \ref{num_fib} applied to the essential collection $\mathcal A=\Big(\underbrace{A,\ldots,A}_{n+1\text{~times}},\mathop{\rm supp}\nolimits(y-y_0),\mathop{\rm supp}\nolimits(t-t_0)\Big).$ Indeed, in the notation of Theorem \ref{num_fib}, we have $I=n+3,~L=L'=\mathbb Z^{n+2}$ and $k=0.$ So, $d(\mathcal A)=1,$ which proves the first part of the Proposition. Moreover, note that since the projection $\pi\mid_{\tilde{\mathcal C}}$ is a regular morphism of varieties, the set of points having more than one preimage is a constructible set, which we denote by $\mathcal B.$ Let us show that it is finite. Indeed, assume that $\mathcal B$ is infinite. Then it is of positive dimension, therefore, it contains an algebraic curve (possibly with finitely many punctured points). An algebraic curve in $({\mathbb C}\setminus 0)^2$ has at least one branch at infinity. Lemma \ref{fpsinfinity} implies that this branch is one of the branches of a forking paths singularity of the curve $\mathcal C$ at infinity. At the same time we note that the preimage of this branch contains a pair of branches of the curve $\tilde{\mathcal{C}}.$ However, Lemma \ref{ordcontact}, Lemma \ref{fpsinfinity} and Remark \ref{stabind} imply that the projections of any two branches of the curve $\tilde{\mathcal{C}}$ at infinity cannot coincide. \end{proof} \begin{rem}\label{prim_rem} Note that it suffices to prove Theorem \ref{lemmain} for a support set $A$ whose convex hull $\Delta$ satisfies Assumption \ref{primitive}. Indeed, suppose $\Delta$ does not satisfy this assumption. Then, using a monomial change of variables of the form $\check{x}_i=x_i,\check{y}^{N!}=y,\check{t}^{N!}=t$ for $N$ big enough, we obtain the polytope $\check{\Delta}$ that satisfies the desired assumption. Moreover, as was discussed in Remark \ref{primitive_exp}, under this change of variables, the left-hand side of (\ref{mainformula1}) is multiplied by $(N!)^2.$ Note that each of the summands in the right-hand side of (\ref{mainformula1}) is also multiplied by $(N!)^2.$ Therefore, the $(N!)^2$ factor can be cancelled out. So, Theorem \ref{lemmain} holds true independently of Assumption \ref{primitive}. \end{rem} \begin{proof}[Proof of Theorem \ref{lemmain}] Suppose that the polytope $\Delta$ satisfies Assumption \ref{primitive}.
The number $\mathcal{D}$ can be deduced from the following system of equations: \begin{equation}\label{Dplus} \begin{cases} \chi(\tilde{\mathcal C})=-(n+1)\mathop{\rm Vol}\nolimits(\Delta)\\ \chi(\tilde{\mathcal C})+\sum\limits_{\Gamma\in\mathcal H(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)=\chi(\mathcal C)+\sum\limits_{s\in\mathcal{S}}(b(s)-1)\\ \chi(\mathcal C)=-\mathop{\rm Area}\nolimits(P)-\sum\limits_{s\in\mathcal{S}}(b(s)-1)+2\mathcal{D}+\sum_{\Gamma\in\mathcal{F}(\Delta)\setminus\mathcal H(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum\limits_{r=1}^{\infty}(i_r^{\Gamma}-1), \end{cases} \end{equation} where $b(s)$ is the number of branches passing through the singular point $s\in\mathcal{S}.$ The first equation is a straightforward corollary of Theorem \ref{chicomplint} (see Example \ref{chici} for details). The second equation follows from the additivity of the Euler characteristic combined with the Bernstein--Kouchnirenko theorem: to obtain the Euler characteristic of $\tilde{\mathcal{C}}$ from that of $\mathcal{C},$ one just needs to count each multiple point as many times as there are branches passing through it and to subtract the number of points coming from vertical asymptotes of the curve $\tilde{\mathcal C}$ (and thus contained in the set $\mathcal C\setminus\pi(\tilde{\mathcal C})$). Finally, the third equation can be deduced as follows. Let $Y$ be a smooth curve having the same Newton polygon $P$ as the curve $\mathcal C.$ Theorem \ref{chicomplint} implies that its Euler characteristic is equal to $-\mathop{\rm Area}\nolimits(P)$ (see Example \ref{chiproj}). Now, we compare the Euler characteristic of $Y$ with the one of $\mathcal{C},$ and we have the following equality: \begin{equation}\label{euler_comp} \chi(Y)=\chi(\mathcal{C})-\sum\limits_{s\in\mathcal{S}}1+\sum\limits_{s\in\mathcal{S}}(b(s)-2\delta(s))+\sum\limits_{S\in \mathrm{FPS}}\chi([\mathrm{Milnor~fiber~of~}S]\cap({\mathbb C}\setminus 0)^2), \end{equation} where the third summation runs over all forking paths singularities of $\mathcal{C}$ at infinity. Indeed, the right-hand side of (\ref{euler_comp}) can be interpreted as follows. We puncture small neighborhoods of singular points of the curve $\mathcal{C}.$ Then we replace those neighborhoods with their Milnor fibers. The latter are of Euler characteristic $b(s)-2\delta(s),$ by the Milnor formula (see Theorem 10.5 in \cite{Milnor}). Finally, we add the Milnor fibers of the forking paths singularities of $\mathcal{C}$. Then, the Euler characteristic of the curve that we obtain this way is equal to the Euler characteristic of the smooth curve $Y.$ Remark \ref{prim_rem} implies that the desired statement holds true without Assumption \ref{primitive}, which concludes the proof of Theorem \ref{lemmain}. \end{proof} \section{Further Discussion}\label{fd} \subsection{On the Singular Points of the Curve $\mathcal C$} We begin this subsection by stating the following conjecture. \begin{conj}\label{c1} Let $A\subset\mathbb Z^{n+2}$ be a finite set of full dimension containing $0\in\mathbb Z^{n+2}$ and satisfying Assumption \ref{indall}. For generic polynomials $f_1,\ldots,f_{n+1}\in\mathbb C^{A},$ the image of the complete intersection curve ${\tilde{\mathcal{C}}=\{f_1=\ldots=f_{n+1}=0\}}\subset ({\mathbb C}\setminus 0)^{n+2}$ under the projection $\pi\colon({\mathbb C}\setminus 0)^{n+2}\to({\mathbb C}\setminus 0)^2$ forgetting the first $n$ coordinates is an algebraic plane curve $\pi(\tilde{\mathcal C})$ with finitely many punctured points, and its singular locus consists only of nodes.
\end{conj} \begin{rem} In other words, the statement of Conjecture \ref{c1} means that the singular points of the curve $\mathcal C$ which are not nodes are contained in the set $\mathcal C\setminus\pi(\tilde{\mathcal C}).$ \end{rem} Conjecture \ref{c1} naturally leads to the following question. \begin{prb}\label{p2} In the same notation as above, compute the sum of the $\delta$-invariants of the singularities of the curve $\mathcal C$ at the punctured points of the curve $\pi(\tilde{\mathcal C}).$ \end{prb} Given a solution to Problem \ref{p2}, one could then compute the number of nodes of the curve $\pi(\tilde{\mathcal C})$ itself. Under the following assumption, we state a conjecture for this number. \begin{predpol}\label{horiz_latt} For every horizontal facet $\Gamma\in\mathcal H(\Delta),$ the image of $\Gamma\cap A$ under the projection $\rho\colon\mathbb Z^{n+2}\twoheadrightarrow\bigslant{\mathbb Z^{n+2}}{\langle e_{n+1},e_{n+2}\rangle}$ generates a saturated sublattice. \end{predpol} \begin{rem} Assumption \ref{horiz_latt} guarantees that for generic $f_1,\ldots,f_{n+1}\in\mathbb C^{A},$ each of the singularities at the points in $\mathcal C\setminus\pi(\tilde{\mathcal C})$ has only one irreducible component. For $n=1,$ this assumption is satisfied for any support set $A,$ while for $n>1$ it is non-trivial. \end{rem} To state the conjecture below, we need to extend the construction of the sequences $i_r^{\Gamma}$ (see Subsection \ref{s2}) to horizontal facets of the polytope $\Delta.$ If we follow the same procedure as in Subsection \ref{s2}, we will get a sequence $j_k^{\Gamma}$ of indices of the form $(\underbrace{\infty,\infty,\ldots,\infty}_{R \text{ times}},i^{\Gamma}_{R+1},i^{\Gamma}_{R+2},\ldots),$ where $R$ is the smallest strictly positive number such that $B_{R+1}^{\Gamma}\neq A\cap\Gamma.$ Finally, we set $$i_r^{\Gamma}=j_{R+r}^{\Gamma}.$$ \begin{conj}\label{c2} Let $A\subset\mathbb Z^{n+2},~n\geqslant 1,$ be a finite set of full dimension containing $0$ and satisfying Assumptions \ref{indall} and \ref{horiz_latt}. Let $\Delta\subset\mathbb R^{n+2}$ be its convex hull. For generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the image of the $1-$dimensional complete intersection $\tilde{\mathcal{C}}=\{f_1=\ldots=f_{n+1}=0\}$ under the projection $\pi\colon({\mathbb C}\setminus 0)^{n+2}\to({\mathbb C}\setminus 0)^2$ forgetting the first $n$ coordinates is an algebraic plane curve $\pi(\tilde{\mathcal C})$ with finitely many punctured points.
Its singular locus $\mathcal S$ (not including the punctured points) consists of isolated singular points, and the number $\mathcal D=\sum\limits_{s\in\mathcal S}\delta(s)$ can be computed via the following formula: \begin{equation}\label{conjform} \mathcal D=\dfrac{1}{2}\Bigg(\mathop{\rm Area}\nolimits(P)-(n+1)\mathop{\rm Vol}\nolimits(\Delta)+\sum\limits_{\Gamma\in\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\big(2i_1^{\Gamma}-(i_1^{\Gamma})^2\big)-\sum\limits_{\Gamma\in\mathcal{F}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum\limits_{r=1}^{\infty}(i_r^{\Gamma}-1)\Bigg), \end{equation} where $\delta(s)$ is the $\delta$-invariant of the singular point $s$, the polygon $P=\int_{\pi}(\Delta)$ is the Minkowski integral of $\Delta$ with respect to the projection $\pi$, the set $\mathcal{F}(\Delta)$ is the set of all the facets of the polytope $\Delta,$ and $\mathcal{H}(\Delta)\subset\mathcal{F}(\Delta)$ is the set of all the horizontal facets of the polytope $\Delta.$ \end{conj} We will now prove Conjecture \ref{c2} under an additional assumption on the support set $A$. We begin with a result which is a straightforward corollary of Theorem \ref{lemmain}. \begin{theor}\label{developed} Let $A\subset\mathbb Z^{n+2}$ be a finite set of full dimension, satisfying Assumption \ref{indall}, and let $\Delta\subset\mathbb R^{n+2}$ be its convex hull. Let $\pi\colon({\mathbb C}\setminus 0)^{n+2}\to({\mathbb C}\setminus 0)^2$ be the projection forgetting the first $n$ coordinates. In the same notation as above, suppose that the set $\mathcal H(\Delta)$ is empty. Then for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the image of the curve $\tilde{\mathcal{C}}=\{f_1=\ldots=f_{n+1}=0\}$ under the projection $\pi$ is an algebraic plane curve $\mathcal C$ (with no punctured points), whose singular locus $\mathcal S$ consists of isolated singular points. The number $\mathcal D=\sum\limits_{s\in\mathcal S}\delta(s)$ can be computed via the following formula: \begin{equation} \mathcal D=\dfrac{1}{2}\Bigg(\mathop{\rm Area}\nolimits(P)-(n+1)\mathop{\rm Vol}\nolimits(\Delta)-\sum\limits_{\Gamma\in\mathcal{F}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum\limits_{r=1}^{\infty}(i_r^{\Gamma}-1)\Bigg), \end{equation} where $\delta(s)$ is the $\delta$-invariant of the singular point $s$, $P=\int_{\pi}(\Delta),$ and $\mathcal{F}(\Delta)$ is the set of all facets of the polytope $\Delta.$ \end{theor} Theorem \ref{developed} applies to the case when the polytope $\Delta$ is developed with respect to the plane of projection $\pi.$ We can, in fact, prove a stronger statement, namely, Theorem \ref{maintheor}, which works for a slightly more general case when the support set $A$ satisfies an additional property (see Assumption \ref{horizontal} below). \begin{predpol}\label{horizontal} For every horizontal face $\tilde{\Gamma}\subset \Delta,$ one of the following two possibilities is realized: \begin{itemize} \item the face $\tilde{\Gamma}\subset\Delta$ is not a facet; \item $\tilde{\Gamma}\subset\Delta$ is a facet, and the set $A\setminus\tilde{\Gamma}$ contains points at lattice distance $1$ from $\tilde{\Gamma}$. \end{itemize} \end{predpol} \begin{rem} One can rephrase the second condition in Assumption \ref{horizontal} as follows: if $\tilde{\Gamma}\subset\Delta$ is a facet contained in a hyperplane given by the equation $\ell(e_1,\ldots,e_{n+2})=c,$ then the set $A\cap\{\ell(e_1,\ldots,e_{n+2})=c\pm 1\}\subset\mathbb Z^{n+2}$ is non-empty.
\end{rem} The set $\mathcal{C}\setminus\pi(\tilde{\mathcal{C}})$ consists of the projections of the intersection points of the curve $\tilde{\mathcal C}$ with the orbits of the toric variety $X_{\Delta}$ corresponding to the horizontal facets of the polytope $\Delta.$ As we will see further, under Assumptions \ref{horiz_latt} and \ref{horizontal}, for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the points in $\mathcal{C}\setminus\pi(\tilde{\mathcal{C}})$ are smooth, and therefore each of those points contributes exactly $1$ to the Euler characteristic of the curve $\mathcal C.$ \begin{lemma}\label{punct} Let $A\subset\mathbb Z^{n+2}$ be a finite set of full dimension, satisfying Assumptions \ref{indall}, \ref{horiz_latt} and \ref{horizontal}. Then for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the following statements are true: \begin{enumerate} \item Let $\Gamma\subset\Delta$ be a horizontal facet and $p$ be an intersection point of the closure of the curve $\tilde{\mathcal C}$ with the corresponding orbit $\mathcal O_{\Gamma}\subset X_{\Delta}.$ Then the point $\pi(p)$ has exactly one preimage; \item Let $p$ be as described in part 1. Then the projection of the tangent line at $p$ is $1$-dimensional. \end{enumerate} \end{lemma} \begin{proof} {\bf Part 1.} Let $\Gamma\subset\Delta$ be a horizontal facet. Up to a monomial change of variables one can assume the truncations $f_1^{\Gamma},\ldots,f_{n+1}^{\Gamma}$ to be polynomials in $(x_2,\ldots,x_n,y,t).$ Thus the polynomials $f_1,\ldots,f_{n+1}$ can be written in the following form: $f_i=g_i(x_2,\ldots,x_n,y,t)+\sum_{m=1}^{\infty}x_1^m\tilde{g}_{i,m}(x_2,\ldots,x_n,y,t),$ where $g_i=f_i^{\Gamma}.$ Assumption \ref{horizontal} implies that the polynomials $\tilde{g}_{i,1}$ are nonzero for all $1\leqslant i\leqslant n+1.$ Let $q=(u_2,\ldots,u_n,y_0,t_0)$ be a root of the system $f_1^{\Gamma}=\ldots=f_{n+1}^{\Gamma}=0$ and $p=(0,u_2,\ldots,u_n,y_0,t_0)\in\mathcal{O}_{\Gamma}$ be the corresponding limiting point of the curve $\tilde{\mathcal{C}}.$ We need to show that there is no other point $p'$ in the closure of the curve $\tilde{\mathcal{C}}\subset X_{\Delta}$ such that $\pi(p')=\pi(p)=(y_0,t_0).$ Suppose that such a point $p'$ exists. Then Proposition \ref{diff_facets} implies that the point $p'$ either belongs to the same orbit $\mathcal{O}_{\Gamma}$ as the point $p,$ or is in the torus $({\mathbb C}\setminus 0)^{n+2}.$ As we will see further, due to Assumption \ref{horiz_latt}, the first of the two options is not the case. In other words, we will show that for any two distinct roots $q$ and $q'$ of the truncated system $f_1^{\Gamma}=\ldots=f_{n+1}^{\Gamma}=0,$ at least one of the characters $y^1$ or $t^1$ attains distinct values on $q$ and $q'.$ The latter is equivalent to the following statement: if the overdetermined system $$\{f_1^{\Gamma}=\ldots=f_{n+1}^{\Gamma}=y-y_0=t-t_0=0\}$$ is consistent, then it has a unique root.
Let $\Lambda_{\Gamma}$ be the lattice in $\Lambda=\bigslant{\mathbb Z^{n+2}}{\langle e_{n+1},e_{n+2}\rangle}\simeq \mathbb Z^n$ generated by the image of $\Gamma\cap A$ under the projection $\rho\colon\mathbb Z^{(n+2)}\twoheadrightarrow\bigslant{\mathbb Z^{(n+2)}}{\langle e_{n+1},e_{n+2}\rangle}.$ Note that its dimension is equal to $n-1.$ Assumption \ref{horiz_latt} implies that the lattice $\Lambda_{\Gamma}\subset\Lambda$ is saturated; therefore the sets $(A\cap\Gamma),\mathop{\rm supp}\nolimits(y-y_0),\mathop{\rm supp}\nolimits(t-t_0)\subset\mathbb Z^{n+2}$ generate a saturated sublattice $L\simeq\mathbb Z^{n+1}$ of dimension $n+1$ in $\mathbb Z^{n+2}.$ Moreover, these sets cannot be shifted to the same proper sublattice in $L\simeq\mathbb Z^{n+1}.$ The statement that we are proving is then a special case of Theorem \ref{num_fib}: we take the essential collection $\mathcal A$ consisting of $n+1$ copies of the set $A\cap\Gamma\subset\mathbb Z^{n+2}$ and two more sets $\mathop{\rm supp}\nolimits(y-y_0),\mathop{\rm supp}\nolimits(t-t_0)\subset\mathbb Z^{n+2}.$ In the notation of this theorem, the formula for the number $d(\mathcal A)$ in our special case gives the answer $1,$ which means that for generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the system $\{f_1^{\Gamma}=\ldots=f_{n+1}^{\Gamma}=y-y_0=t-t_0=0\}$ has exactly one root $q.$ Indeed, in our case $L'=L,$ because $L$ is saturated, and $k=0,$ because the collection is essential. Therefore, any other point $p'\in\tilde{\mathcal C}$ such that $\pi(p')=\pi(p)$ should belong to $({\mathbb C}\setminus 0)^{n+2},$ and thus any monomial of degree at least $1$ in $x_1$ distinguishes the points $p$ and $p'$. Assumption \ref{horizontal} implies the existence of such a monomial. Thus we can construct a polynomial $\tilde{f}$ such that $\tilde{f}(p)=0,$ and $\tilde{f}(p')\neq 0.$ Adding $\varepsilon\cdot\tilde{f}$ to one of the $f_i$'s does not affect the point $p$ in any way, while $p'$ does not belong to the corresponding complete intersection anymore. At the same time, the polynomial $(f_i+\varepsilon\cdot\tilde{f})\mid_p:=f_i(x_1,u_2,\ldots,u_n,y_0,t_0)+\varepsilon\cdot\tilde{f}(x_1,u_2,\ldots,u_n,y_0,t_0)$ is a polynomial in one variable $x_1,$ which has only finitely many roots $r_1,\ldots,r_m.$ Therefore, for generic $f_1,\ldots,f_{n+1},$ the restrictions $f_j\mid_p,~j\neq i$ do not all vanish at $r_1,\ldots,r_m.$ This concludes the proof of Part 1.
{\bf Part 2.} In the same notation as above, to describe the tangent line to the point $p,$ one needs first to compute the kernel of the differential form $\bigwedge_{i=1}^{n+1}df_i$ at the point $p.$ Since we are interested in the projection of the tangent line, the only components that we need to compute are $dx_1\wedge\ldots\wedge dx_{n}\wedge dy$ and $dx_1\wedge\ldots\wedge dx_{n}\wedge dt.$ The corresponding coefficients are the last two minors $M_{n+1}$ and $M_{n+2}$ of the $(n+2)\times(n+1)$--matrix $\mathcal J$ evaluated at $p,$ where $$\mathcal J=\begin{pmatrix} \tilde{g}_{1,1}&\frac{\partial g_1}{\partial x_2}&\dots&\frac{\partial g_1}{\partial x_n}&\frac{\partial g_1}{\partial y}&\frac{\partial g_1}{\partial t}\\ \tilde{g}_{2,1}&\frac{\partial g_2}{\partial x_2}&\dots&\frac{\partial g_2}{\partial x_n}&\frac{\partial g_2}{\partial y}&\frac{\partial g_2}{\partial t}\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ \tilde{g}_{n+1,1}&\frac{\partial g_{n+1}}{\partial x_2}&\dots&\frac{\partial g_{n+1}}{\partial x_n}&\frac{\partial g_{n+1}}{\partial y}&\frac{\partial g_{n+1}}{\partial t}\\ \end{pmatrix}.$$ The projection of the tangent line to the curve $\tilde{\mathcal C}$ at the point $p$ is $1$-dimensional if and only if at least one of $M_{n+1}(p)$ and $M_{n+2}(p)$ is not zero. This condition is algebraic, therefore, the set of all tuples $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ that satisfy it is Zariski open. Thus, the only thing that we still need to show is that this set is non-empty. Indeed, let $M_{n+1}(p)$ and $M_{n+2}(p)$ both vanish for some $f_1,\ldots,f_{n+1}\in\mathbb C^A.$ The latter means that the following equalities hold: \begin{align*} M_{n+1}=\begin{vmatrix} \tilde{g}_{1,1}&\frac{\partial g_1}{\partial x_2}&\dots&\frac{\partial g_1}{\partial x_n}&\frac{\partial g_1}{\partial t}\\ \tilde{g}_{2,1}&\frac{\partial g_2}{\partial x_2}&\dots&\frac{\partial g_2}{\partial x_n}&\frac{\partial g_2}{\partial t}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \tilde{g}_{n+1,1}&\frac{\partial g_{n+1}}{\partial x_2}&\dots&\frac{\partial g_{n+1}}{\partial x_n}&\frac{\partial g_{n+1}}{\partial t} \end{vmatrix}=0, ~M_{n+2}=\begin{vmatrix} \tilde{g}_{1,1}&\frac{\partial g_1}{\partial x_2}&\dots&\frac{\partial g_1}{\partial x_n}&\frac{\partial g_1}{\partial y}\\ \tilde{g}_{2,1}&\frac{\partial g_2}{\partial x_2}&\dots&\frac{\partial g_2}{\partial x_n}&\frac{\partial g_2}{\partial y}\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \tilde{g}_{n+1,1}&\frac{\partial g_{n+1}}{\partial x_2}&\dots&\frac{\partial g_{n+1}}{\partial x_n}&\frac{\partial g_{n+1}}{\partial y}\\ \end{vmatrix}=0. \end{align*} Let us look at the first equality. We have that the Jacobian of $g_1,\ldots,g_{n+1}$ does not vanish at $p.$ Expanding it along the second to last column, we conclude that at least one of the cofactors is not zero. Without loss of generality, assume that this is the cofactor $D_1$ of $\frac{\partial g_1} {\partial y}.$ Now, let us expand the determinant $M_{n+1}$ along the first column. The cofactors $D_j$ of the elements $\tilde{g}_{j,1}$ are exactly the same as the ones of $\frac{\partial g_j} {\partial y}$ in the Jacobian matrix of $g_1,\ldots,g_{n+1}.$ Thus we have: $$M_{n+1}\mid_{p}=\tilde{g}_{1,1}D_1\mid_{p}+\sum_{j=2}^{n+1}(-1)^{n+1+j}\tilde{g}_{j,1}D_j\mid_{p}=0,$$ or, equivalently, \begin{equation}\label{nondeg} \tilde{g}_{1,1}D_1\mid_{p}=\sum_{j=2}^{n+1}(-1)^{n+j}\tilde{g}_{j,1}D_j\mid_{p}. 
\end{equation} By Assumption \ref{horizontal}, the polynomial $\tilde{g}_{1,1}$ is nonzero; therefore, it has at least one monomial with a nonzero coefficient. If we change this coefficient, the left-hand side of (\ref{nondeg}) changes, while the right-hand side does not. Therefore, the equality is no longer true, which concludes the proof of Part $2.$ \end{proof} The following result is a straightforward corollary of Theorem \ref{lemmain} and Lemma \ref{punct}. \begin{theor}\label{maintheor} Let $A\subset\mathbb Z^{n+2},~n\geqslant 1,$ be a finite set of full dimension containing $0$ and satisfying Assumptions \ref{indall}, \ref{horiz_latt} and \ref{horizontal}. Let $\Delta\subset\mathbb R^{n+2}$ be its convex hull. For generic $f_1,\ldots,f_{n+1}\in\mathbb C^A,$ the image of the $1-$dimensional complete intersection ${\tilde{\mathcal{C}}=\{f_1=\ldots=f_{n+1}=0\}}$ under the projection $\pi\colon({\mathbb C}\setminus 0)^{n+2}\to({\mathbb C}\setminus 0)^2$ forgetting the first $n$ coordinates is an algebraic plane curve with punctured points. Its singular locus $\mathcal S$ (not including the punctured points) consists of isolated singular points, and the number $\mathcal D=\sum\limits_{s\in\mathcal S}\delta(s)$ can be computed via the following formula: \begin{equation}\label{mainformula} \mathcal D=\dfrac{1}{2}\Bigg(\mathop{\rm Area}\nolimits(P)-(n+1)\mathop{\rm Vol}\nolimits(\Delta)+\sum\limits_{\Gamma\in\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)-\sum\limits_{\Gamma\in\mathcal{F}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum\limits_1^{\infty}(i_r^{\Gamma}-1)\Bigg), \end{equation} where $\delta(s)$ is the $\delta$-invariant of the singular point $s$, the polygon $P=\int_{\pi}(\Delta)$ is the Minkowski integral of $\Delta$ with respect to the projection $\pi$, and the set $\mathcal{F}(\Delta)$ is the set of all facets of the polytope $\Delta.$ \end{theor} \begin{proof} Assumption \ref{horizontal} guarantees that for every horizontal facet $\Gamma\subset\Delta,$ we have $i^{\Gamma}=(1,1,\ldots).$ Therefore, in this case, we have the equality $$\sum\limits_{\Gamma\in\mathcal{F}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum(i_r^{\Gamma}-1)=\sum\limits_{\Gamma\in\mathcal{F}(\Delta)\setminus\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\sum(i_r^{\Gamma}-1).$$ Theorem \ref{lemmain} gives the formula for the sum $\mathcal D$ of the $\delta$-invariants of all the singularities of the closure $\mathcal C$ of the curve $\pi(\tilde{\mathcal C}).$ Lemma \ref{punct} implies that all the points of the complement $\mathcal C\setminus \pi(\tilde{\mathcal C})$ are smooth; therefore those points do not contribute to the sum $\mathcal D,$ which concludes the proof of the theorem. \end{proof} \begin{rem} Note that Theorem \ref{maintheor} agrees with Conjecture \ref{c2}. Indeed, for every horizontal facet $\Gamma\subset\Delta,$ we have $i^{\Gamma}=(1,1,\ldots),$ so $$\sum\limits_{\Gamma\in\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)\big(2i_1^{\Gamma}-(i_1^{\Gamma})^2\big)=\sum\limits_{\Gamma\in\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma)(2-1^2)=\sum\limits_{\Gamma\in\mathcal{H}(\Delta)}\mathop{\rm Vol}\nolimits(\Gamma).$$ All the other summands in (\ref{conjform}) are the same as in (\ref{mainformula}), so these two expressions give the same answer.
\end{rem} \begin{exa} Let us apply Theorem \ref{maintheor} to the following two support sets (see Figure 7): \begin{align*} &A_1=\{(0,0,0),(1,0,0),(2,0,0),(3,0,0),(0,1,0),(0,0,1)\}\\ &A_2=\{(0,0,0),(1,0,0),(3,0,0),(0,1,0),(0,0,1)\} \end{align*} \begin{center} \begin{tikzpicture} \pattern[pattern=north east lines, pattern color=orange] (0,7)--(2,1)--(1,0)--(0,7); \pattern[pattern=north east lines, pattern color=orange] (6,7)--(8,1)--(7,0)--(6,7); \draw[ultra thick] (0,1)--(0,7); \draw[ultra thick] (0,1)--(1,0); \draw[ultra thick] (0,7)--(2,1); \draw[ultra thick] (1,0)--(2,1); \draw[ultra thick] (0,7)--(1,0); \draw[dashed, ultra thick] (0,1)--(2,1); \draw[fill,red] (0,1) circle [radius=0.1]; \draw[fill,red] (0,7) circle [radius=0.1]; \draw[fill,red] (1,0) circle [radius=0.1]; \draw[fill,red] (2,1) circle [radius=0.1]; \draw[fill,red] (0,3) circle [radius=0.1]; \draw[fill,red] (0,5) circle [radius=0.1]; \draw[ultra thick] (6,1)--(6,7); \draw[ultra thick] (6,1)--(7,0); \draw[ultra thick] (6,7)--(8,1); \draw[ultra thick] (7,0)--(8,1); \draw[ultra thick] (6,7)--(7,0); \draw[dashed, ultra thick] (6,1)--(8,1); \draw[fill,red] (6,1) circle [radius=0.1]; \draw[fill,red] (7,0) circle [radius=0.1]; \draw[fill,red] (8,1) circle [radius=0.1]; \draw[fill,red] (6,7) circle [radius=0.1]; \draw[fill,red] (6,3) circle [radius=0.1]; \draw [ultra thick, fill=white] (6,5) circle [radius=0.1]; \node[below] at (2,-0.5) {{\bf Figure 7.} Two similar examples with different forking paths singularities at infinity.}; \node[left] at (0,1) {(0,0,0)}; \node[left] at (6,1) {(0,0,0)}; \node[left] at (0,5) {(2,0,0)}; \node[left] at (0,7) {(3,0,0)}; \node[left] at (0,3) {(1,0,0)}; \node[left] at (6,5) {(2,0,0)}; \node[left] at (6,7) {(3,0,0)}; \node[left] at (6,3) {(1,0,0)}; \node[right] at (2,1) {(0,0,1)}; \node[right] at (1,0) {(0,1,0)}; \node[right] at (7,0) {(0,1,0)}; \node[right] at (8,1) {(0,0,1)}; \end{tikzpicture} \end{center} We have that $\mathop{\rm conv}\nolimits(A_1)=\mathop{\rm conv}\nolimits(A_2)=\Delta,$ and $\mathop{\rm Vol}\nolimits(\Delta)=3.$ In this case, the polygon $P$ is a standard simplex of size $3,$ therefore, $\mathop{\rm Area}\nolimits(P)=9.$ In both cases, the horizontal facet contributes exactly $1$ punctured point. Also, in both cases, we have exactly one facet $\Gamma\subset\Delta$ (hatched orange) such that $\mathop{\rm ind}\nolimits_v(A_1\cap\Gamma)\neq 1$ and $\mathop{\rm ind}\nolimits_v(A_2\cap\Gamma)\neq 1.$ The area of this facet is equal to $1.$ In the first case, we have $i^{\Gamma}=(3,1,\ldots),$ while in the second case $i^{\Gamma}=(3,3,1,\ldots).$ Substituting all of this data into (\ref{mainformula}), we get: \begin{align*} &\mathcal{D}_1=\dfrac{9-2\cdot 3+1-1\cdot(3-1)}{2}=\dfrac{9-6+1-2}{2}=1;\\ &\mathcal{D}_2=\dfrac{9-2\cdot 3+1-1\cdot((3-1)+(3-1))}{2}=\dfrac{9-6+1-4}{2}=0. \end{align*} \end{exa} \subsection{The Tropical Counterpart} It is very natural to study the behaviour of singularities of generic objects under passing to the tropical limit. There is a number of papers in which this problem is addressed, we first give a short overview. According to Mikhalkin's correspondence theorem (see \cite{Mikh} and \cite{S}), under passing to the tropical limit, the nodes of a generic nodal curve in $({\mathbb C}\setminus 0)^2$ are accumulated with certain multiplicities in triple points and crossings of the corresponding tropical curve, as well as the midpoints of its multiple edges. 
The main result of \cite{MMS1} is the classification of singular tropical curves of maximal-dimensional geometric type. A singular point of such a tropical curve is either a crossing of two edges, or a multiplicity $3$ three-valent vertex, or a point at the midpoint of an edge of weight $2.$ The work \cite{MMS2} by the same authors addresses a similar problem for singular tropical surfaces in $3$-dimensional space. The work \cite{MR} is devoted to the study of the behaviour of log-inflection points of curves in $({\mathbb C}\setminus 0)^2$ under the passage to the tropical limit. Assuming that the limiting tropical curve is smooth, the authors show that the log-inflection points accumulate in pairs at the midpoints of its bounded edges. In \cite{LM}, lifts of tropical bitangents (i.e., nodes of the dual curve) to the tropicalization of a given complex algebraic curve and their lifting multiplicities are studied. In particular, it is shown that all seven bitangents of a smooth tropical plane quartic lift in sets of four to algebraic bitangents. Algebraic and combinatorial aspects of projections of $m-$dimensional tropical varieties onto $(m+1)-$dimensional planes are studied in \cite{HT}. For the case of curves (which is exactly the tropical version of the problem addressed in our paper), some bounds for the number of self-intersections of projections onto the plane were given, as well as constructions with many self-intersections. Let $b\colon A\to\mathbb Z$ be a sufficiently generic function. Consider the family $\tilde{\mathcal{C}}_{\tau}=\{f_{1,\tau}=\ldots=f_{n+1,\tau}=0\},$ where $$f_{j,\tau}=\sum\limits_{\alpha\in A}c_{j,\alpha}(\tau)x_1^{\alpha_1}\ldots x_n^{\alpha_n}y^{\alpha_{n+1}}t^{\alpha_{n+2}},$$ and every coefficient $c_{j,\alpha}(\tau)$ is a generic polynomial in $\tau$ of degree $b(\alpha).$ Denote by $\mathcal C_{\tau}$ the closure of the image $\pi(\tilde{\mathcal{C}}_{\tau}).$ The tropical limit of $\tilde{\mathcal{C}}_{\tau}$ as $\tau\rightarrow\infty$ is a tropical curve $\tilde{C}$. Denote by $C$ its image under the projection forgetting the first $n$ coordinates. \begin{prb} In the same notation as above, which points of the tropical curve $C$ are the tropical limits of nodes of the curves $\mathcal C_{\tau}$ as $\tau\rightarrow\infty$? How many nodes does each of those points ``accumulate''? \end{prb} The example below illustrates the case when the tropical limit of the nodes of the curves $\mathcal{C}_{\tau}$ is a node of the corresponding plane tropical curve $C.$ \begin{exa}\label{trop1} Take $A=\{(0,0,0),(1,0,0),(2,0,0),(1,1,0),(0,0,1)\}.$ The volume of the polytope $\Delta=\mathop{\rm conv}\nolimits(A)$ is equal to $2.$ Its fiber polytope with respect to the projection $\pi\colon({\mathbb C}\setminus 0)^3\to({\mathbb C}\setminus 0)^2$ forgetting the first coordinate is of area $6$. For each of the facets $\Gamma\in\mathcal{F}(\Delta)$ we have $\mathop{\rm ind}\nolimits_v(A\cap\Gamma)=1,$ and the set $\mathcal H(\Delta)$ of the horizontal facets is empty.
Therefore, by Theorem \ref{developed}, the number of nodes of the projection of a generic complete intersection defined by polynomials $f,g\in\mathbb C^A,$ is equal to $\dfrac{6-2\cdot 2}{2}=1.$ \begin{center} \begin{tikzpicture}[scale=0.9] \draw[dashed, ultra thick] (0,0)--(2,2); \draw[dashed, ultra thick] (0,0)--(0,4); \draw[dashed, ultra thick] (0,0)--(-1,-2); \draw[ultra thick] (0,4)--(-1,-2); \draw[ultra thick] (0,4)--(2,2); \draw[ultra thick] (2,2)--(-1,-2); \draw[fill,red] (0,0) circle [radius=0.1]; \draw[fill,red] (2,2) circle [radius=0.1]; \draw[fill,red] (-1,-2) circle [radius=0.1]; \draw[fill,red] (0,2) circle [radius=0.1]; \draw[fill,red] (0,4) circle [radius=0.1]; \draw[ultra thick] (6,0)--(10,0); \draw[ultra thick] (6,0)--(6,4); \draw[ultra thick] (6,4)--(10,2); \draw[ultra thick] (10,0)--(10,2); \draw[fill] (6,0) circle [radius=0.1]; \draw[fill] (10,0) circle [radius=0.1]; \draw[fill] (10,2) circle [radius=0.1]; \draw[fill] (6,4) circle [radius=0.1]; \draw[fill] (6,2) circle [radius=0.1]; \draw[fill] (8,0) circle [radius=0.1]; \draw[fill] (8,2) circle [radius=0.1]; \node[below] at (7,-1.5) {{\bf Figure 8.} The polytope $\Delta$ and its fiber polygon $P.$}; \node[right] at (0.5,0) {(0,0,0)}; \node[left] at (-1,-2) {(0,0,1)}; \node[right] at (2,2) {(1,1,0)}; \node[right] at (0,2) {(1,0,0)}; \node[left] at (0,4) {(2,0,0)}; \node[left] at (6,0) {(0,0)}; \node[left] at (6,4) {(0,2)}; \node[left] at (6,2) {(0,1)}; \node[right] at (10,2) {(2,1)}; \node[right] at (10,0) {(2,0)}; \node[below] at (8,2) {(1,1)}; \node[below] at (8,0) {(1,0)}; \end{tikzpicture} \end{center} One can easily express the coordinates $(u,v)$ of this unique double point in terms of the coefficients of the polynomials $f,g.$ Indeed, suppose that we have $f=a_0+a_1 x+a_2 x^2+a_3 t+a_4 x y$ and $g=b_0+b_1 x+b_2 x^2+b_3 t+b_4 x y.$ If $f(x,u,v)$ and $g(x,u,v)$ have two common roots as polynomials in the variable $x,$ then, the numbers $u,v$ clearly should satisfy the following equations: \begin{equation}\label{coord_node} \left\{\begin{aligned} (a_1+a_4u)b_2=(b_1+b_4u)a_2\\ (a_0+a_3v)b_2=(b_0+b_3v)a_2\\ \end{aligned}\right. \iff \left\{\begin{aligned} u=\dfrac{a_1b_2-b_1a_2}{a_2b_4-b_2a_4}\\ v=\dfrac{a_0b_2-b_0a_2}{a_2b_3-b_2a_3} \end{aligned}\right. .\end{equation} Now, consider the following deformation of the polynomials $f$ and $g:$ \begin{align}\label{deform} \tilde{f}_{\tau}(x,y,t)&=\tau^{-1}+\tau^{-2}x+\tau^{-8}x^2+\tau^{-4}t+\tau^{-8}xy;\\ \tilde{g}_{\tau}(x,y,t)&=\tau^{5}+\tau^{5}x+\tau^{5}x^2+\tau^{5}t+\tau^{6}xy; \end{align} Passing to the tropical limit, we obtain a pair of tropical polynomials $F(X,Y,T)$ and $G(X,Y,T),$ where: \begin{align*} F(X,Y,T)&=\max(-1,X-2,2X-8,T-4,X+Y-8)\\ G(X,Y,T)&=\max(5,X+5,2X+5,T+5,X+Y+6)\\ \end{align*} The intersection of the tropical hypersurfaces defined by polynomials $F$ and $G$ is a $3$-valent tropical curve $\tilde{C}$. Its image $C$ under the projection forgetting the first coordinate and the corresponding subdivision of the polygon $P$ are shown in Figure $9$ below. 
\begin{center} \begin{tikzpicture}[scale=0.7] \draw[ultra thick](1,4)--(5,8); \draw[ultra thick](1,3)--(1,4); \draw[ultra thick] (1,3)--(8,3); \draw[ultra thick] (1,3)--(0,2); \draw[ultra thick] (0,2)--(-5,2); \draw[ultra thick] (0,2)--(0,-3); \draw[ultra thick] (1,4)--(-5,4); \draw[ultra thick] (5,8)--(7,12); \draw[ultra thick] (5,8)--(5,-3); \draw[fill,blue] (1,4) circle [radius=0.2]; \draw[fill,blue] (5,8) circle [radius=0.2]; \draw[fill,red] (5,3) circle [radius=0.2]; \draw[fill,blue] (0,2) circle [radius=0.2]; \draw[fill,blue] (1,3) circle [radius=0.2]; \node[above left] at (1.3,4) {$(1,4)$}; \node[below right] at (5,8) {$(5,8)$}; \node[above right] at (5,3) {$(5,3)$}; \node[below left] at (0,2) {$(0,2)$}; \node[below right] at (0.8,3) {$(1,3)$}; \node[above left] at (2,6) {$2T+2$}; \node[above right] at (6,5) {$2Y+T$}; \node[above] at (3.2,3.5) {$Y+T+5$}; \node[right] at (6,1) {$2Y+3$}; \node[left] at (0,3) {$T+6$}; \node[below left] at (-2,0) {$8$}; \node[below] at (3,1) {$Y+8$}; \draw[ultra thick] (12,-2)--(16,-2); \draw[ultra thick] (12,-2)--(12,2); \draw[ultra thick] (12,2)--(16,0); \draw[ultra thick] (16,-2)--(16,0); \draw[ultra thick] (14,0)--(14,-2); \draw[ultra thick] (12,0)--(14,-2); \draw[ultra thick] (12,0)--(16,0); \draw[ultra thick] (12,2)--(14,0); \draw[fill] (12,-2) circle [radius=0.1]; \draw[fill] (16,-2) circle [radius=0.1]; \draw[fill] (14,-2) circle [radius=0.1]; \draw[fill] (12,0) circle [radius=0.1]; \draw[fill] (14,0) circle [radius=0.1]; \draw[fill] (16,0) circle [radius=0.1]; \draw[fill] (12,2) circle [radius=0.1]; \node[left] at (12,-2) {$(0,0)$}; \node[left] at (12,2) {$(0,2)$}; \node[left] at (12,0) {$(0,1)$}; \node[right] at (16,0) {$(2,1)$}; \node[right] at (16,-2) {$(2,0)$}; \node[below right] at (14,0) {$(1,1)$}; \node[below] at (14,-2) {$(1,0)$}; \node[below] at (7,-5) {{\bf Figure 9.} The tropical curve $C$ and the corresponding subdivision of the polygon $P.$}; \end{tikzpicture} \end{center} The tropical curve $C$ has one node (marked red). Now, let us substitute the coefficients of the polynomials $\tilde{f}_{\tau}(x,y,t)$ and $\tilde{g}_{\tau}(x,y,t)$ into the expressions for the coordinates $(u,v)$ of the node of the curve $\mathcal C$ (see (\ref{coord_node})). We get: \begin{equation} \left\{\begin{aligned} u(\tau)=\dfrac{a_1(\tau)b_2(\tau)-b_1(\tau)a_2(\tau)}{a_2(\tau)b_4(\tau)-b_2(\tau)a_4(\tau)}\\ v(\tau)=\dfrac{a_0(\tau)b_2(\tau)-b_0(\tau)a_2(\tau)}{a_2(\tau)b_3(\tau)-b_2(\tau)a_3(\tau)} \end{aligned}\right. \Longrightarrow \left\{\begin{aligned} u(\tau)&=\dfrac{-\tau^{-3}+\tau^3}{-\tau^{-3}+\tau^{-2}}=1+\tau+\tau^2+\tau^3+\tau^4+\tau^5\\ v(\tau)&=\dfrac{-\tau^{-3}-\tau^4}{\tau^{-3}-\tau}=\dfrac{\tau^7-1}{1-\tau^4}\end{aligned}\right. \end{equation} Note that as $\tau\rightarrow\infty,$ we have $u(\tau)\sim\tau^5$ and $v(\tau)\sim-\tau^3,$ which agrees with the coordinates of the node of the tropical curve $C$ being $(5,3).$ \end{exa} \begin{exa}\label{trop2} Now, consider the following deformation of the polynomials $f$ and $g:$ \begin{align}\label{deform2} \tilde{f}_{\tau}(x,y,t)&=\tau+\tau^{4}x+\tau x^2+\tau t+\tau^{5}xy;\\ \tilde{g}_{\tau}(x,y,t)&=\tau+\tau x+\tau x^2+\tau^{2}t+\tau^{2}xy; \end{align} Passing to the tropical limit, we obtain a pair of tropical polynomials $F(X,Y,T)$ and $G(X,Y,T),$ where: \begin{align*} F(X,Y,T)&=\max(1,X+4,2X+1,T+1,X+Y+5)\\ G(X,Y,T)&=\max(1,X+1,2X+1,T+2,X+Y+2)\\ \end{align*} The intersection of the tropical hypersurfaces defined by polynomials $F$ and $G$ is a $3$-valent tropical curve $\tilde{C}$. 
Its image $C$ under the projection forgetting the first coordinate and the corresponding subdivision of the polygon $P$ are shown in Figure $10$ below. \begin{center} \begin{tikzpicture}[scale=0.7] \draw[ultra thick](-1,-4)--(-1,5); \draw[ultra thick](-5,-1)--(4,-1); \draw[ultra thick] (-5,5)--(-1,5); \draw[ultra thick] (-1,5)--(2,11); \draw[fill,blue] (-1,-1) circle [radius=0.2]; \draw[fill,blue] (-1,5) circle [radius=0.2]; \draw[fill,red] (-1,0) circle [radius=0.2]; \node[below left] at (-1,-1) {$(-1,-1)$}; \node[below right] at (-1,5) {$(-1,5)$}; \node[right] at (-1,0) {$(-1,0)$}; \node[above left] at (-1,8) {$2T+6$}; \node[above] at (2,1.5) {$2Y+T+13$}; \node[left] at (-2,2) {$T+11$}; \node[below left] at (-3,-3) {$10$}; \node[below] at (2,-2) {$Y+11$}; \draw[ultra thick] (8,-2)--(12,-2); \draw[ultra thick] (8,-2)--(8,2); \draw[ultra thick] (8,2)--(12,0); \draw[ultra thick] (12,-2)--(12,0); \draw[ultra thick] (8,0)--(12,0); \draw[fill] (8,-2) circle [radius=0.1]; \draw[fill] (12,-2) circle [radius=0.1]; \draw[fill] (10,-2) circle [radius=0.1]; \draw[fill] (8,0) circle [radius=0.1]; \draw[fill] (10,0) circle [radius=0.1]; \draw[fill] (12,0) circle [radius=0.1]; \draw[fill] (8,2) circle [radius=0.1]; \node[left] at (8,-2) {$(0,0)$}; \node[left] at (8,2) {$(0,2)$}; \node[left] at (8,0) {$(0,1)$}; \node[right] at (12,0) {$(2,1)$}; \node[right] at (12,-2) {$(2,0)$}; \node[below right] at (10,0) {$(1,1)$}; \node[below] at (10,-2) {$(1,0)$}; \node[below] at (3,-5) {{\bf Figure 10.} The tropical curve $C$ and the corresponding subdivision of the polygon $P.$}; \end{tikzpicture} \end{center} Now, let us substitute the coefficients of the polynomials $\tilde{f}_{\tau}(x,y,t)$ and $\tilde{g}_{\tau}(x,y,t)$ into the expressions for the coordinates $(u,v)$ of the node of the curve $\mathcal C$ (see (\ref{coord_node})). We get: \begin{equation} \left\{\begin{aligned} u(\tau)=\dfrac{a_1(\tau)b_2(\tau)-b_1(\tau)a_2(\tau)}{a_2(\tau)b_4(\tau)-b_2(\tau)a_4(\tau)}\\ v(\tau)=\dfrac{a_0(\tau)b_2(\tau)-b_0(\tau)a_2(\tau)}{a_2(\tau)b_3(\tau)-b_2(\tau)a_3(\tau)} \end{aligned}\right. \Longrightarrow \left\{\begin{aligned} u(\tau)&=\dfrac{\tau^{5}-\tau^2}{\tau^{3}-\tau^{6}}=-\dfrac{\tau^2(\tau^3-1)}{\tau^3(\tau^3-1)}\\ v(\tau)&=\dfrac{\tau^{2}-\tau^2}{\tau^{3}-\tau^2}=0\end{aligned}\right. \end{equation} So the tropical limit of the nodes of the curves $\mathcal{C}_{\tau}$ in this case is the point $(-1,0)$ (marked red). Unlike the case considered in Example \ref{trop1}, this point is not a vertex of the corresponding nodal tropical curve. Moreover, the edge containing this point is of multiplicity 2. \end{exa}
\section{Introduction} Generative Adversarial Network (GAN) \cite{goodfellow2014generative} is one of the recent promising generative models. In this context, we prepare two networks, a generator $G$ and a discriminator $D$. $G$ generates fake samples $G(\bm z)$ from noise $\bm z$ and tries to fool $D$. $D$ classifies real samples $\bm x$ and fake samples $\bm y = G(\bm z)$. In the training phase, we update them alternatingly until the training reaches an equilibrium state. In general, however, the training process is unstable and requires tuning of hyperparameters. Since the first successful implementation by convolutional neural nets \cite{radford2015unsupervised}, most of the literature has concentrated on \textit{how to improve the unstable optimization procedures}, including changing objective functions \cite{nowozin2016f, zhao2016energy, arjovsky2017wasserstein, lim2017geometric, unterthiner2017coulomb, bellemare2017cramer}, adding penalty terms \cite{gulrajani2017improved, petzka2017regularization, wei2018improving}, techniques on the optimization processes themselves \cite{metz2016unrolled, salimans2016improved, karras2017progressive, heusel2017gans}, inserting new layers into the network \cite{miyato2018spectral, zhang2018self}, and many others that we cannot list here exhaustively. Even if one can make the optimization relatively stable and succeed in getting $G$ around an equilibrium, it sometimes fails to generate meaningful images. Bad images may include some unwanted structures like unnecessary shadows, strange symbols, and blurred edges of objects. For example, see the generated images surrounded by blue lines in Figure \ref{fig:ot_all_imagenet}. These problems may be fixed by scaling up the network structure and the optimization process. Generally speaking, however, this needs large-scale computational resources, and if one wants to apply GANs to individual tasks on more compact devices, the above problem looks inevitable and crucial. There is another problem. In many cases, we discard the trained discriminator $D$ after the training. This situation is in contrast to other latent space generative models. For example, the variational auto-encoder (VAE) \cite{kingma2013auto} is also composed of two distinct networks, an encoder network and a decoder network. We can utilize both of them after the training: the encoder can be used as a data compressor, and the decoder can be regarded as a generator. Compared to this situation, it sounds wasteful to use only $G$ after the GAN training. From this viewpoint, it would be natural to ask \textit{how to use trained models $G$ and $D$ efficiently}. Recent related works in the same spirit are discriminator rejection sampling (DRS) \cite{azadi2018discriminator} and Metropolis-Hastings GAN (MH-GAN) \cite{turner2019metropolis}. In each case, they use the generator-induced distribution $p_G$ as a proposal distribution, and approximate the acceptance ratio of the proposed sample based on the trained $D$. Intuitively, a generated image $\bm y = G(\bm z)$ is accepted if the value $D(\bm y)$ is relatively large, otherwise it is rejected. They show its theoretical background, and it actually improves scores of generated images in practice. In this paper, we try a similar but more active approach, i.e. improving a generated image $\bm y = G(\bm z)$ directly by adding $\delta \bm y$ to $\bm y$ such that $D(\bm y + \delta \bm y) > D(\bm y)$. Optimal transport (OT) theory guarantees that the improved samples can be regarded as approximate samples from the target distribution $p$.
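To make the accept/reject idea of DRS and MH-GAN concrete, the following is a rough illustrative sketch (it is not taken from those papers and is not the method proposed here): if we assume that the trained $D$ is the logit of an approximately calibrated classifier, i.e. $\sigma(D(\bm x)) \approx p(\bm x)/(p(\bm x)+p_G(\bm x))$, then $e^{D(\bm x)}$ approximates the density ratio $p(\bm x)/p_G(\bm x)$, and an independence Metropolis--Hastings chain with proposal $p_G$ accepts a proposed sample $\bm y'$ with probability $\min\big(1, e^{D(\bm y')-D(\bm y)}\big)$. In code (where \texttt{G}, \texttt{D}, and \texttt{sample\_z} are placeholder callables):
\begin{verbatim}
import numpy as np

def mh_resample(G, D, sample_z, n_steps=100):
    # Independence Metropolis-Hastings with proposal p_G; the density ratio
    # p/p_G is approximated by exp(D), assuming D is a calibrated logit.
    y = G(sample_z())
    for _ in range(n_steps):
        y_new = G(sample_z())
        accept = np.exp(min(0.0, float(D(y_new) - D(y))))
        if np.random.rand() < accept:
            y = y_new
    return y
\end{verbatim}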
More concretely, our contributions are \begin{itemize} \item Proposal of the discriminator optimal transport (DOT) based on the fact that the objective function for $D$ provides lower bound of the dual cost function for the Wasserstein distance between $p$ and $p_G$. \item Suggesting approximated algorithms and verifying that they improve Earth Mover's distance (EMD) \cite{flamary2017pot}, inception score \cite{salimans2016improved} and Fr\'echet inception distance (FID) \cite{heusel2017gans}. \item Pointing out a \textit{generality} on DOT, i.e. if the $G$'s output domain is same as the $D$'s input domain, then we can use \textit{any} pair of trained $G$ and $D$ to improve generated samples. \end{itemize} In addition, we show some results on experiment comparing DOT and a naive method of improving sample just by increasing the value of $D$, under a fair setting. One can download our codes from {\tt \href{https://github.com/AkinoriTanaka-phys/DOT}{https://github.com/AkinoriTanaka-phys/DOT}}. \begin{figure}[t] \centering \includegraphics[width=14cm]{pdfs/intro2.pdf} \vspace*{-2mm} \caption{ Each left image (blue): a sample from generator $G$. Each right image (red): the sample modified by our algorithm based on discriminator $D$. We use here the trained model available on {\tt \href{https://github.com/pfnet-research/sngan_projection}{https://github.com/pfnet-research/sngan\_projection}} . } \label{fig:ot_all_imagenet} \end{figure} \section{Background} \subsection{Generative Adversarial Nets} Throughout this paper, we regard an image sample as a vector in certain Euclidean space: $\bm x \in X$. We name latent space as $Z$ and a prior distribution on it as $p_Z(\bm z)$. The ultimate goal of the GAN is making generator $G : Z \to X$ whose push-foward of the prior $ p_G(\bm x) = \int_Z p_Z(\bm z) \delta \big(\bm x - G(\bm z) \big) d \bm z $ reproduces data-generating probability density $p(\bm x)$. To achieve it, discriminator $D : X \to \mathbb{R}$ and objective functions, \begin{align} &V_D(G, D) = \E{\sbm x\sim p}{f (-D(\bm x))} +\E{\sbm y \sim p_G}{f(D(\bm y))} , \label{VD} \\ &V_G(G, D) = \E{\sbm y \sim p_G}{g(D(\bm y))} , \label{VG} \end{align} are introduced. Choice of functions $f$ and $g$ corresponds to choice of GAN update algorithm as explained below. Practically, $G$ and $D$ are parametric models $G_\theta$ and $D_\varphi$, and they are alternatingly updated as \begin{align} &\varphi \leftarrow \varphi + \epsilon \nabla_\varphi V_D(G_\theta, D_\varphi), \label{18} \\ &\theta \leftarrow \theta - \epsilon \nabla_\theta V_G(G_\theta, D_\varphi), \label{17} \end{align} until the updating dynamics reaches to an equilibrium. One of well know choices for $f$ and $g$ is \begin{align} f(u) = - \log (1+e^u) \quad g(u) = - f(-u). \label{gan} \end{align} Theoretically speaking, it seems better to take $g(u) = f(u)$, which is called minimax GAN \cite{fedus2017many} to guarantee $p_G = p$ at the equilibrium as proved in \cite{goodfellow2014generative}. However, it is well known that taking \eqref{gan}, called non-saturating GAN, enjoys better performance practically. As an alternative, we can choose the following $f$ and $g$ \cite{lim2017geometric, zhao2016energy}: \begin{align} f(u) = \max(0, -1-u), \quad g(u) = -u. \label{hinge} \end{align} It is also known to be relatively stable. In addition to it, $p_G=p$ at an equilibrium is proved at least in the theoretically ideal situation. Another famous choice is taking \begin{align} f(u) = -u, \quad g(u) = u. 
\label{wgan} \end{align} The resultant GAN is called WGAN \cite{arjovsky2017wasserstein}. We use \eqref{wgan} with gradient penalty (WGAN-GP) \cite{gulrajani2017improved} in our experiment. WGAN is related to the concept of the optimal transport (OT) which we review below, so one might think our method is available only when we use WGAN. But we would like to emphasize that such OT approach is also useful even when we take GANs described by \eqref{gan} and \eqref{hinge} as we will show later. \subsection{Spectral normalization} Spectral normalization (SN) \cite{miyato2018spectral} is one of standard normalizations on neural network weights to stabilize training process of GANs. To explain it, let us define a norm for function called Lipschitz norm, \begin{align} ||f||_{Lip} := \sup \Big\{ \frac{ || f(\bm x) - f(\bm y) ||_2}{||\bm x-\bm y||_2} \Big| \bm x \neq \bm y \Big\}. \label{lip} \end{align} For example, $ ||ReLU||_{Lip} = ||lReLU||_{Lip} =1 $ because their maximum gradient is 1. For linear transformation $l_{W, b}$ with weight matrix $W$ and bias $b$, the norm $||l_{W, b}||_{Lip}$ is equal to the maximum singular value $\sigma(W)$. Spectral normalization on $l_{W, b}$ is defined by dividing the weight $W$ in the linear transform by the $\sigma(W)$: \begin{align} SN(l_{W, b} ) = l_{W/\sigma(W), b}. \end{align} By definition, it enjoys $||l_{W/\sigma(W)}||_{Lip} = 1$. If we focus on neural networks, estimation of the upper bound of the norm is relatively easy because of the following property\footnote{ This inequality can be understood as follows. Naively, the norm \eqref{lip} is defined by the maximum gradient between two different points. Suppose $\bm x_1$ and $\bm x_2$ realizing maximum of gradient for $g$ and $\bm u_1$ and $\bm u_2$ are points for $f$, then the (RHS) of the inequality \eqref{ineq} is equal to $|| f(\bm u_1) - f(\bm u_2) ||_2/ || \bm u_1 - \bm u_2 ||_2 \times || g(\bm x_1) - g(\bm x_2) ||_2/ || \bm x_1 - \bm x_2 ||_2$. If $g(\bm x_i) = \bm u_i$, it reduces to the (LHS) of the \eqref{ineq}, but the condition is not satisfied in general, and the (RHS) takes a larger value than (LHS). This observation is actually important to the later part of this paper because estimation of the norm based on the inequality seems to be overestimated in many cases. }: \begin{align} ||f \circ g ||_{Lip} \leq ||f||_{Lip} \cdot || g ||_{Lip} . \label{ineq} \end{align} For example, suppose $f_{nn}$ is a neural network with ReLU or lReLU activations and spectral normalizations: $ f_{nn}(\bm x) = SN \circ l_{W_L} \circ f \circ SN \circ l_{W_{L-1}} \circ \dots \circ SN \circ l_{W_1} (\bm x) $, then the Lipschitz norm is bounded by one: \begin{align} ||f_{nn}||_{Lip} \leq \prod_{l=1}^L || l_{W_l/\sigma(W_l)} ||_{Lip} =1 \label{fnn1} \end{align} Thanks to this Lipschitz nature, the normalized network gradient behaves mild during repeating updates \eqref{18} and \eqref{17}, and as a result, it stabilizes the wild and dynamic optimization process of GANs. \subsection{Optimal transport} Another important background in this paper is optimal transport. Suppose there are two probability densities, $p(\bm x)$ and $q(\bm y)$ where $\bm x, \bm y \in X$. Let us consider the cost for transporting one unit of mass from $\bm x \sim p$ to $\bm y \sim q$. The optimal cost is called Wasserstein distance. 
Throughout this paper, we focus on the Wasserstein distance defined by $l_2$-norm cost $||\bm x - \bm y||_2$: \begin{align} W(p, q) = \min_{\pi \in \Pi(p, q)} \Big( \mathbb{E}_{(\sbm x, \sbm y) \sim \pi} \Big[ ||\bm x-\bm y||_2 \Big] \Big). \label{eWD} \end{align} $\pi$ means joint probability for transportation between $\bm x$ and $\bm y$. To realize it, we need to restrict $\pi$ satisfying marginality conditions, \begin{align} \int d\bm x \ \pi(\bm x, \bm y) = q(\bm y), \quad \int d\bm y \ \pi(\bm x, \bm y) = p(\bm x). \label{pi_const} \end{align} An optimal $\pi^*$ satisfies $W(p, q) = \mathbb{E}_{(\sbm x, \sbm y) \sim \pi^*}[||\bm x - \bm y||_2]$, and it realizes the most effective transport between two probability densities under the $l_2$ cost. Interestingly, \eqref{eWD} has the dual form \begin{align} W(p, q) = \max_{||\tilde{D}||_{Lip} \leq 1} \Big( \mathbb{E}_{\sbm x\sim p} \Big[ \tilde{D}(\bm x) \Big] - \mathbb{E}_{\sbm y\sim q} \Big[ \tilde{D}(\bm y) \Big] \Big). \label{KantDual} \end{align} The duality is called Kantorovich-Rubinstein duality \cite{villani2008optimal, peyre2017computational}. Note that $||f||_{Lip}$ is defined in \eqref{lip}, and the dual variable $\tilde{D}$ should satisfy Lipschitz continuity condition $||\tilde{D}||_{Lip} \leq 1$. One may wonder whether any relationship between the optimal transport plan $\pi^*$ and the optimal dual variable $D^*$ exist or not. The following theorem is an answer and it plays a key role in this paper. \begin{theorem} \label{th:1} Suppose $\pi^*$ and $D^*$ are optimal solutions of the primal \eqref{eWD} and the dual \eqref{KantDual} problem, respectively. If $\pi^*$ is deterministic optimal transport described by a certain automorphism\footnote{ It is equivalent to assume there exists an unique solution of the corresponding Monge problem: \begin{align} \min_{T:X \to X} \Big( \mathbb{E}_{\sbm y \sim q} \Big[ || T(\bm y) - \bm y ||_2 \Big] \Big), \quad \text{constrained by \eqref{reprod}.} \notag \end{align} Reconstructing $T^*$ from $\pi^*$ without any assumption is subtle problem and only guaranteed within strictly convex cost functions \cite{gangbo1996geometry}. Unfortunately, it is not satisfied in our $l_2$ cost. However, there is a known method \cite{caffarelli2002constructing} to find a solution based on relaxing the cost to strictly convex cost $||\bm x-\bm y||^{1+\epsilon}_2$ with $\epsilon > 0$. In our experiments, DOT works only when $||\bm x-\bm y||_2$ is small enough for given $\bm y$. In this case, there is no big difference between $||\bm x-\bm y||_2$ and $||\bm x-\bm y||_2^{1+\epsilon}$, and it suggests DOT approximates their solution. } $T: X \to X$, then the following equations are satisfied: \begin{align} &||D^*||_{Lip} = 1, \label{15lip} \\ &T(\bm y) = \argmin_{\sbm x} \Big\{ ||\bm x - \bm y ||_2 - D^*(\bm x) \Big\} , \label{CondEn} \\ & p(\bm x) = \int d \bm y \ \delta \Big( \bm x - T(\bm y) \Big) q(\bm y) . \label{reprod} \end{align} \end{theorem} (Proof) It can be proved by combining well know facts. See Supplementary Materials. $_\square$ \section{Discriminator optimal transport} If we apply the spectral normalization on a discriminator $D$, it satisfies K-Lipschitz condition $||D||_L = K$ with a certain real number $K$. By redefining it to $\tilde{D} = D/K$, it becomes 1-Lipschitz $||\tilde{D}||_L = 1$. It reminds us the equation \eqref{15lip}, and one may expect a connection between OT and GAN. 
In fact, we can show the following theorem: \begin{theorem} \label{th:2} Each objective function of GAN using logistic \eqref{gan}, or hinge \eqref{hinge}, or identity \eqref{wgan} loss with gradient penalty, provides a lower bound of the mean discrepancy of $\tilde{D} = D/K$ between $p$ and $p_G$: \begin{align} V_D(G, D) \leq K \Big( \mathbb{E}_{\sbm x\sim p} \Big[ \tilde{D}(\bm x) \Big] - \mathbb{E}_{\sbm y\sim p_G} \Big[ \tilde{D}(\bm y) \Big] \Big). \label{key2} \end{align} \end{theorem} (Proof) See Supplementary Materials. $_\square$ In the practical optimization process of a GAN, $K$ could change its value during training, but it stays almost constant, at least approximately, as explained below. \subsection{Discriminator Optimal Transport (ideal version)} The inequality \eqref{key2} implies that the update \eqref{18} of $D$ during GAN training maximizes the lower bound of the objective in \eqref{KantDual}, the dual form of the Wasserstein distance. In this sense, the optimization of $D$ in \eqref{18} can be regarded as solving the problem \eqref{KantDual} approximately\footnote{ This situation is similar to the VAE \cite{kingma2013auto} objective function, which is a lower bound of the likelihood, called the evidence lower bound (ELBO). }. If we apply \eqref{CondEn} with $D^* \approx \tilde{D} = D/K$, the following transport of a given $\bm y \sim p_G$ \begin{align} T_D(\bm y) = \argmin_{\sbm x} \Big\{ ||\bm x- \bm y||_2 - \frac{1}{K} D(\bm x) \Big\} \end{align} is expected to recover sampling from the target distribution $p$ thanks to the equality \eqref{reprod}. \subsection{Discriminator Optimal Transport (practical version)} To check whether $K$ changes drastically or not during the GAN updates, we calculate approximated Lipschitz constants defined by \begin{align} &K_\text{eff} = \max \Big\{ \frac{|D(\bm x) - D(\bm y)|}{||\bm x- \bm y||_2} \Big| \bm x , \bm y \sim p_G \Big\}, \label{Keff} \\ &k_\text{eff} = \max \Big\{ \frac{| D \circ G(\bm z) - D \circ G(\bm z_{\sbm y})|}{||\bm z-\bm z_{\sbm y}||_2} \Big| \bm z , \bm z_{\sbm y} \sim p_Z \Big\}, \label{keff} \end{align} every 5,000 iterations of GAN training on CIFAR-10 data with the DCGAN models explained in Supplementary Materials. As plotted in Figure \ref{fig:ks}, neither of them increases drastically. \begin{figure}[t] \centering \includegraphics[width=400pt]{pdfs/ks.pdf} \vspace*{-2mm} \caption{ Logs of inception score (left), approximated Lipschitz constant of $D$ (middle), and approximated Lipschitz constant of $D \circ G$ (right) on each GAN trained with CIFAR-10. Approximated Lipschitz constants are calculated from 500 random sample pairs. Error bars are plotted within 1$\sigma$ over 500 trials. } \label{fig:ks} \end{figure} It is worth mentioning that naive upper bounds of the Lipschitz constant like \eqref{fnn1} turn out to be overestimates. For example, SNGAN has the naive upper bound 1, but \eqref{Keff} stays around 0.08 in Figure \ref{fig:ks}. \paragraph{Target space DOT} Based on these facts, we conclude that trained discriminators can approximate the optimal transport \eqref{CondEn} by \begin{align} T_D^\text{eff}(\bm y) = \argmin_{\sbm x} \Big\{ || \bm x- \bm y||_2 - \frac{1}{K_\text{eff}} D(\bm x) \Big\}. \label{DOT} \end{align} As a preliminary experiment, we apply DOT to WGAN-GP trained on the 25gaussians and swissroll datasets. We use the gradient descent method shown in Algorithm \ref{alg1} to search for the transported point $T_D^\text{eff}(\bm y)$ for a given $\bm y \sim p_G$.
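For concreteness, the update loop of Algorithm \ref{alg1} and the estimate \eqref{Keff} can be sketched in a few lines of code. The snippet below is only a minimal illustration, assuming a PyTorch discriminator acting on a batch of samples; it is not the exact implementation in the repository linked above.
\begin{verbatim}
import torch

def estimate_K_eff(D, samples, n_pairs=500):
    # Empirical Lipschitz constant of D: max |D(x) - D(y)| / ||x - y||_2
    # over random pairs of generated samples, cf. the definition of K_eff.
    idx1 = torch.randint(len(samples), (n_pairs,))
    idx2 = torch.randint(len(samples), (n_pairs,))
    x, y = samples[idx1], samples[idx2]
    num = (D(x) - D(y)).abs().flatten()
    den = (x - y).flatten(1).norm(dim=1).clamp_min(1e-12)
    return (num / den).max().item()

def target_space_dot(D, y, K_eff, eps=0.01, n_updates=100, delta=1e-3):
    # Algorithm 1: gradient descent on ||x - y + delta||_2 - D(x)/K_eff,
    # starting from x = y; delta is a small constant preventing overflow.
    x = y.clone().requires_grad_(True)
    for _ in range(n_updates):
        obj = ((x - y + delta).flatten(1).norm(dim=1)
               - D(x).flatten() / K_eff).sum()
        grad, = torch.autograd.grad(obj, x)
        with torch.no_grad():
            x -= eps * grad
    return x.detach()
\end{verbatim}
Using \texttt{torch.autograd.grad} keeps $D$ frozen, since no gradients are accumulated in its parameters.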
In Figure \ref{2d_WGANGP}, we compare the DOT samples with samples naively transported by the discriminator, implemented by replacing the gradient in Algorithm \ref{alg1} with $ - \frac{1}{K_\text{eff}} \nabla_{\sbm x} D(\bm x) $, i.e. just searching for $\bm x$ with large $D(\bm x)$ from the initial condition $\bm x \leftarrow \bm y$ where $\bm y \sim p_G$. \begin{algorithm}[t] \caption{Target space optimal transport by gradient descent} \label{alg1} \begin{algorithmic} \REQUIRE trained $D$, approximated $K_\text{eff}$ by \eqref{Keff}, sample $\bm y$, learning rate $\epsilon$ and small vector $\bm \delta$ \STATE Initialize $\bm x \leftarrow \bm y$ \FOR{$n_\text{trial}$ in range($N_\text{updates}$)} \STATE {$\bm x \leftarrow \bm x - \epsilon \bm \nabla_{\sbm x} \Big\{ ||\bm x - \bm y + \bm \delta||_2 - \frac{1}{K_\text{eff}} D(\bm x) \Big\}$ \quad ( \text{$\bm \delta$ is for preventing overflow.} ) } \ENDFOR \RETURN{$\bm x$} \end{algorithmic} \end{algorithm} DOT outperforms the naive method qualitatively and quantitatively. On the 25gaussians, one might think the 4th (naively improved) samples are better than the 3rd (DOT) samples. However, the 4th samples are too concentrated and lack the variance around each peak. In fact, the value of the Earth Mover's distance, EMD, which measures how far they are separated from the real samples, shows a relatively large value. On the swissroll, the 4th samples based on naive transport lack many relevant points close to the original data, which is trivially bad. On the other hand, one can see that the 3rd DOT samples keep the swissroll shape and clean up the blurred shape of the original generator samples. \begin{figure}[t] \centering \includegraphics[width=400pt]{pdfs/gs.pdf} \caption{ 2d experiments using a trained WGAN-GP model. 1,000 samples are plotted in each panel: 1st: training samples, 2nd: samples generated by $G$, 3rd: samples by target space DOT, 4th: samples by naive transport. Each EMD value is calculated over 100 trials. The error corresponds to 1$\sigma$. We use $\bm \delta = {\bf 0.001}$. See the supplementary material for more details on this experiment. } \label{2d_WGANGP} \end{figure} \paragraph{Latent space DOT} Target space DOT works on low dimensional data, but it turns out to be useless once we apply it to higher dimensional data. See Figure \ref{fig:dot2} for example. An alternative, more workable idea is to regard $D \circ G : Z \to \mathbb{R}$ as the dual variable defining the Wasserstein distance between the ``pullback'' of $p$ by $G$ and the prior $p_Z$. \if0 : \begin{align} W(G^\# p, p_z ) = \max_{ || f ||_L \leq 1} \Big( \mathbb{E}_{\sbm z\sim G^\# p} \Big[ f(\bm z) \Big] - \mathbb{E}_{\sbm z_{\ssbm y}\sim p_z} \Big[ f(\bm z_{\sbm y}) \Big] \Big). \end{align} \fi Latent space OT itself is not a novel idea \cite{agustsson2017optimal, salimans2018improving}, but there seems to be no literature using trained $G$ and $D$, to the best of our knowledge. The approximated Lipschitz constant of $D \circ G$ also stays almost constant as shown in the right sub-figure in Figure \ref{fig:ks}, so we conclude that \begin{align} T_{D \circ G}^\text{eff}(\bm z_{\sbm y}) = \argmin_{\sbm z} \Big\{ || \bm z - \bm z_{\sbm y} ||_2 - \frac{1}{k_\text{eff}} D \circ G(\bm z) \Big\} \end{align} approximates the optimal transport in latent space. Note that if the prior $p_Z$ has non-trivial support, we need to restrict $\bm z$ onto the support during the DOT process. In Algorithm \ref{alg2}, we apply a projection of the gradient.
One of the major practical priors is the normal distribution $\mathcal{N}(0, \bm I_{D\times D})$ where $D$ is the latent space dimension. If $D$ is large, it is well known that the support is concentrated on the $(D-1)$-dimensional sphere with radius $\sqrt{D}$, so the projection of the gradient $\bm g$ can be calculated by $\bm g - (\bm g \cdot \bm z) \bm z/ \sqrt{D} $ approximately. Even if we skip this procedure, transported images may look improved, but it downgrades inception scores and FIDs. \begin{algorithm}[t] \caption{Latent space optimal transport by gradient descent} \label{alg2} \begin{algorithmic} \REQUIRE trained $G$ and $D$, approximated $k_\text{eff}$, sample $\bm z_{\sbm y}$, learning rate $\epsilon$, and small vector $\bm \delta$ \STATE Initialize $\bm z \leftarrow \bm z_{\sbm y}$ \FOR{$n_\text{trial}$ in range($N_\text{updates}$)} \STATE {$\bm g = \bm \nabla_{\sbm z} \Big\{ ||\bm z - \bm z_{\sbm y} + \bm \delta||_2 - \frac{1}{k_\text{eff}} D\circ G(\bm z) \Big\}$ \quad ( \text{$\bm \delta$ is for preventing overflow.} ) } \IF {noise is generated by $\mathcal{N}(0, \bm I_{D\times D})$} \STATE {$\bm g \leftarrow \bm g - (\bm g \cdot \bm z) \bm z / \sqrt{D} $} \ENDIF \STATE {$\bm z \leftarrow \bm z - \epsilon \bm g$} \IF {noise is generated by $U([-1, 1])$} \STATE {clip $\bm z \in [-1, 1]$ } \ENDIF \ENDFOR \RETURN{$\bm x = G(\bm z)$} \end{algorithmic} \end{algorithm} \section{Experiments on latent space DOT} \begin{figure}[t] \centering \includegraphics[width=150pt]{pdfs/cifar_ex1.pdf} \quad \includegraphics[width=150pt]{pdfs/cifar_ex2.pdf} \vspace*{-2mm} \caption{ Target space DOT sample (each left) and latent space DOT sample (each right). The former appears to add meaningless noise, like the perturbations in adversarial examples \cite{szegedy2013intriguing}. On the other hand, the latent space DOT samples keep the shape of the image and clean it up. } \label{fig:dot2} \end{figure} \subsection{CIFAR-10 and STL-10}\label{sec41} We prepare pre-trained DCGAN models and ResNet models in various settings, and apply latent space DOT. In each case, inception score and FID are improved (Table \ref{tab:scores}). We can use an arbitrary discriminator $D$ to improve scores with a fixed $G$ as shown in Table \ref{tab:intriguing}. As one can see, DOT really works. But it needs tuning of hyperparameters. First, it is recommended to use as small an $\epsilon$ as possible. A large $\epsilon$ may accelerate the improvement, but it easily downgrades the samples unless an appropriate $N_\text{updates}$ is chosen. Second, we recommend using $k_\text{eff}$ calculated from a sufficiently large number of samples. If not, it becomes relatively small, which can also downgrade the images. As a shortcut, $k_\text{eff}=1$ also works. See Supplementary Materials for details and additional results including comparisons to other methods.
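Analogously to the target space case, Algorithm \ref{alg2} admits a compact implementation. The snippet below is a minimal sketch assuming PyTorch modules for $G$ and $D$ and a batch of latent vectors; it is illustrative only, and not the exact code in the repository.
\begin{verbatim}
import torch

def latent_space_dot(G, D, z_y, k_eff, eps=0.01, n_updates=20,
                     delta=1e-3, prior="normal"):
    # Algorithm 2: gradient descent on ||z - z_y + delta||_2 - D(G(z))/k_eff,
    # starting from z = z_y; the update is kept on the support of the prior.
    dim = z_y.shape[1]
    z = z_y.clone().requires_grad_(True)
    for _ in range(n_updates):
        obj = ((z - z_y + delta).norm(dim=1)
               - D(G(z)).flatten() / k_eff).sum()
        g, = torch.autograd.grad(obj, z)
        with torch.no_grad():
            if prior == "normal":
                # Project the gradient: g <- g - (g . z) z / sqrt(dim).
                g = g - (g * z).sum(dim=1, keepdim=True) * z / dim ** 0.5
            z -= eps * g
            if prior == "uniform":
                z.clamp_(-1.0, 1.0)
    return G(z.detach())
\end{verbatim}
As in the target space case, $G$ and $D$ stay frozen; only the latent vector is updated.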
\begin{table}[b] \vspace{-.6cm} \centering \begin{tabular}{p{9mm}p{17mm}|cc|cc|} & & \multicolumn{2}{|c|}{CIFAR-10} & \multicolumn{2}{|c|}{STL-10} \\ \hline & & bare & DOT & bare & DOT \\ \hline {\small DCGAN} & WGAN-GP & 6.53(08), 27.84 & 7.45(05), 24.14 & 8.69(07), 49.94 & 9.31(07), 44.45 \\ & SNGAN(ns) & 7.45(09), 20.74 & 7.97(14), {\bf 15.78} & 8.67(01), 41.18 & 9.45(13), {\bf 34.84} \\ & SNGAN(hi) & 7.45(08), 20.47 & 8.02(16), 17.12 & 8.83(12), 40.10 & 9.35(12), 34.85 \\ & SAGAN(ns) & 7.75(07), 25.37 & {\bf 8.50(01)}, 20.57 & 8.68(01), 48.23 & 10.04(14), 41.19 \\ & SAGAN(hi) & 7.52(06), 25.78 & 8.38(05), 21.21 & 9.29(13), 45.79 & {\bf 10.30(21)}, 40.51 \\ \hline {\small Resnet} & SAGAN(ns) & 7.74(09), 22.13 & 8.49(13), 20.22 & 9.33(08), 41.91 & {\bf 10.03(14), 39.48} \\ & SAGAN(hi) & 7.85(11), 21.53 & {\bf 8.50(12), 19.71} & & \\ \hline \end{tabular} \caption{(Inception score, FID) by usual sampling (bare) and DOT: Models in \cite{miyato2018spectral} and self-attention layer \cite{zhang2018self} are used. (ns) and (hi) mean models trained by \eqref{gan} and \eqref{hinge}. $\epsilon=0.01$ SGD is applied 20 times for CIDAR-10 and 10 times for STL-10. $k_\text{eff}$ is calculated by 100 samples and $\bm \delta = {\bf 0.001}$. } \label{tab:scores} \ \begin{tabular}{l|c||c|c|c|c|c|} $D$& without $D$ & WGAN-gp & SNGAN(ns) & SNGAN(hi) & SAGAN(ns) & SAGAN(hi) \\ \hline IS & 7.52(06) & 8.03(11) & 8.22(07) & 8.38(07) & 8.36(12) & 8.38(05) \\ \hline FID& 25.78 & 24.47 & 21.45 & 23.03 & 21.07 & 21.21 \\ \hline \end{tabular} \caption{Results on scores by $G_\text{SAGAN(ns)}$ after latent space DOT using each $D$ in different training scheme using CIFAR-10 within DCGAN architecture. Parameters for DOT are same in Table \ref{tab:scores}. } \label{tab:intriguing} \end{table} \subsection{ImageNet} \begin{figure}[t] \centering \includegraphics[width=400pt]{pdfs/images.pdf} \caption{ Left images surrounded by blue lines are samples from the conditional generator. The number of updates $N_\text{updates}$ for DOT increases along horizontal axis. Right Images surrounded by red lines corresponds after 30 times updates with Adam $(\alpha, \beta_1, \beta_2) = (0.01, 0, 0.9)$ and $k_\text{eff}(y) = 1$. } \label{fig:cDOT} \end{figure} \paragraph{Conditional version of latent space DOT} In this section, we show results on ImageNet dataset. As pre-trained models, we utilize a pair of public models $(G, D)$ \cite{miyato2018cgans} of conditional GAN \cite{mirza2014conditional}\footnote{ These are available on {\tt \href{https://github.com/pfnet-research/sngan_projection}{https://github.com/pfnet-research/sngan\_projection}} . }. In conditional GAN, $G$ and $D$ are networks conditioned by label $y$. Typical objective function $V_D$ is therefore represented by average over the label: \begin{align} V_D(G, D) = \mathbb{E}_{y \sim p(y)} \Big[ V_D\Big(G(\cdot| y) , D(\cdot | y) \Big) \Big] . \end{align} But, once $y$ is fixed, $G(\bm z| y)$ and $D(\bm x| y)$ can be regarded as usual networks with input $\bm z$ and $\bm x$ respectively. So, by repeating our argument so far, DOT in conditional GAN can be written by \begin{align} T_{G \circ D}(\bm z_{\scriptsize \bm y} | y) = \text{argmin}_{ \sbm z } \Big\{ ||\bm z- \bm z_{\sbm y}||_2 - \frac{1}{k_\text{eff} (y)} D \Big( G(\bm z | y) \big| y \Big) \Big\}. \label{cDOT} \end{align} where $k_\text{eff} (y)$ is approximated Lipschitz constant conditioned by $y$. 
It is calculated by \begin{align} &k_\text{eff}(y) = \max \Big\{ \frac{| D \big( G(\bm z|y) \big| y \big) - D \big( G(\bm z_{\sbm y} | y) \big| y \big) |}{||\bm z-\bm z_{\sbm y}||_2} \Big| \bm z , \bm z_{\sbm y} \sim p_Z \Big\}. \label{ckeff} \end{align} \paragraph{Experiments} We apply gradient descent updates with with Adam$(\alpha, \beta_1, \beta_2)=(0.01, 0, 0.9)$. \if Inception score and FID are improved as \begin{align} 36.65(54), 43.35 \quad \to \quad \bm{37.34(69), 42.47} . \end{align} As one can see, DOT improves these scores even with very naive estimation of the Lipschitz constant. \fi We show results on 4 independent trials in Table \ref{tab:imagenet}. It is clear that DOT mildly improve each score. Note that we need some tunings on hyperparameters $\epsilon, N_\text{updates}$ as we already commented in \ref{sec41}. \begin{table}[b] \vspace{-1cm} \centering \begin{tabular}{l||c|c|c|c|} & \text{\# updates}=0 & \text{\# updates}=4 & \text{\# updates}=16 & \text{\# updates}=32 \\ \hline \text{trial1($k_\text{eff} (y) = 1$)} & 36.40(91), 43.34 & 36.99(75), 43.01 & 37.25(84), 42.70 & {\bf 37.61(88), 42.35} \\ \text{trial2($k_\text{eff} (y) = 1$)} & 36.68(59), 43.60 & 36.26(98), 43.09 & 36.97(63), 42.85 & {\bf 37.02(73), 42.74} \\ \text{trial3} & 36.64(63), 43.55 & 36.87(84), 43.11 & {\bf 37.51(01), 42.43} & 36.88(79), 42.52 \\ \text{trial4} & 36.23(98), 43.63 & 36.49(54), 43.25 & 37.29(86), 42.67 & {\bf 37.29(07), 42.40} \end{tabular} \caption{ (Inception score, FID) for each update. Upper 2 cases are executed by $k_\text{eff}(y)=1$ without calculating \eqref{ckeff}. We use 50 samples for each label $y$ to calculate $k_\text{eff}(y)$ in lower 2 trials. $\bm \delta = {\bf 0.001}$. } \label{tab:imagenet} \end{table} \newpage \begin{algorithm}[h] \caption{Latent space conditional optimal transport by gradient descent} \label{alg3} \begin{algorithmic} \REQUIRE trained $G$ and $D$, label $y$, approximated $k_\text{eff}(y)$, sample $\bm z_{\sbm y}$, learning rate $\epsilon$ and small vector $\bm \delta$ \STATE Initialize $\bm z \leftarrow \bm z_{\sbm y}$ \FOR{$n_\text{trial}$ in range($N_\text{update}$)} \STATE {$\bm g = \bm \nabla_{\sbm z} \Big\{ ||\bm z - \bm z_{\sbm y} + \bm \delta||_2 - \frac{1}{k_\text{eff}(y)} D \Big( G(\bm z | y) \Big| y \Big) \Big\}$ \quad ( \text{$\bm \delta$ is for preventing overflow.} ) } \IF {noise is generated by $\mathcal{N}(0, \bm I_{D\times D})$} \STATE {$\bm g \leftarrow \bm g - (\bm g \cdot \bm z) \bm z / \sqrt{D} $} \ENDIF \STATE {$\bm z \leftarrow \bm z - \epsilon \bm g$} \IF {noise is generated by $U([-1, 1])$} \STATE {clip $\bm z \in [-1, 1]$ } \ENDIF \ENDFOR \RETURN{$\bm x = G(\bm z | y)$} \end{algorithmic} \end{algorithm} \paragraph{Evaluation} To calculate FID, we use available 798,900 image files in ILSVRC2012 dataset. We reshape each image to the size $299 \times 299 \times 3$, feed all images to the public inception model to get the mean vector $\bm m_w$ and the covariance matrix $\bm C_w$ in 2,048 dimensional feature space. \section{Conclusion} In this paper, we show the relevance of discriminator optimal transport (DOT) method on various trained GAN models to improve generated samples. Let us conclude with some comments here. First, DOT objective function in \eqref{DOT} reminds us the objective for making adversarial examples \cite{szegedy2013intriguing}. There is known fast algorithm to make adversarial example making use of the piecewise-linear structure of the ReLU neural network \cite{goodfellow2014explaining}. 
That method may also be useful for accelerating DOT. Second, latent-space DOT can be regarded as improving the prior $p_Z$. A similar idea can also be found in \cite{brock2018large}. In the usual GAN setting the prior is fixed, but it may be possible to train the prior itself simultaneously by making use of DOT techniques. We leave these directions for future work. \subsubsection*{Acknowledgments} We would like to thank Asuka Takatsu for fruitful discussions and Kenichi Bannai for carefully reading this manuscript. This work was supported by computational resources provided by the RIKEN AIP deep learning environment (RAIDEN) and RIKEN iTHEMS. \inputt{endbib}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\subsection{\corRef{compose}}\proofLabel{corollary:compose} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{64}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\mathcal{D}\!^+\!\;(\Varid{g}\hsdot{\circ }{.}\Varid{f})\;\Varid{a}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}((\Varid{g}\hsdot{\circ }{.}\Varid{f})\;\Varid{a},\mathcal{D}\;(\Varid{g}\hsdot{\circ }{.}\Varid{f})\;\Varid{a}){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{\mathcal{D}\!^+\!}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{g}\;(\Varid{f}\;\Varid{a}),\mathcal{D}\;(\Varid{g}\hsdot{\circ }{.}\Varid{f})\;\Varid{a}){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{g}\;(\Varid{f}\;\Varid{a}),\mathcal{D}\;\Varid{g}\;(\Varid{f}\;\Varid{a})\hsdot{\circ }{.}\mathcal{D}\;\Varid{f}\;\Varid{a}){}\<[64]% \>[64]{}\mbox{\onelinecomment \thmRef{compose}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\mathbf{let}\;\Varid{b}\mathrel{=}\Varid{f}\;\Varid{a}\;\mathbf{in}\;(\Varid{g}\;\Varid{b},\mathcal{D}\;\Varid{g}\;\Varid{b}\hsdot{\circ }{.}\mathcal{D}\;\Varid{f}\;\Varid{a}){}\<[64]% \>[64]{}\mbox{\onelinecomment refactoring to share \ensuremath{\Varid{f}\;\Varid{a}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\mathbf{let}\;\{\mskip1.5mu (\Varid{b},\Varid{f'})\mathrel{=}\mathcal{D}\!^+\!\;\Varid{f}\;\Varid{a};(\Varid{c},\Varid{g'})\mathrel{=}\mathcal{D}\!^+\!\;\Varid{g}\;\Varid{b}\mskip1.5mu\}\;\mathbf{in}\;(\Varid{c},\Varid{g'}\hsdot{\circ }{.}\Varid{f'}){}\<[64]% \>[64]{}\mbox{\onelinecomment refactoring to show compositionality}{}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection{\corRef{cross}}\proofLabel{corollary:cross} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{86}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\mathcal{D}\!^+\!\;(\Varid{f}\times\Varid{g})\;(\Varid{a},\Varid{b}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}((\Varid{f}\times\Varid{g})\;(\Varid{a},\Varid{b}),\mathcal{D}\;(\Varid{f}\times\Varid{g})\;(\Varid{a},\Varid{b})){}\<[86]% \>[86]{}\mbox{\onelinecomment definition of \ensuremath{\mathcal{D}\!^+\!}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}((\Varid{f}\;\Varid{a},\Varid{g}\;\Varid{b}),\mathcal{D}\;(\Varid{f}\times\Varid{g})\;(\Varid{a},\Varid{b})){}\<[86]% \>[86]{}\mbox{\onelinecomment definition of \ensuremath{(\times)}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}((\Varid{f}\;\Varid{a},\Varid{g}\;\Varid{b}),\mathcal{D}\;\Varid{f}\;\Varid{a}\times\mathcal{D}\;\Varid{g}\;\Varid{b}){}\<[86]% \>[86]{}\mbox{\onelinecomment \thmRef{cross}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\mathbf{let}\;\{\mskip1.5mu (\Varid{c},\Varid{f'})\mathrel{=}(\Varid{f}\;\Varid{a},\mathcal{D}\;\Varid{f}\;\Varid{a});(\Varid{d},\Varid{g'})\mathrel{=}(\Varid{g}\;\Varid{b},\mathcal{D}\;\Varid{g}\;\Varid{b})\mskip1.5mu\}\;\mathbf{in}\;((\Varid{c},\Varid{d}),\Varid{f'}\times\Varid{g'}){}\<[86]% \>[86]{}\mbox{\onelinecomment refactoring}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\mathbf{let}\;\{\mskip1.5mu (\Varid{c},\Varid{f'})\mathrel{=}\mathcal{D}\!^+\!\;\Varid{f}\;\Varid{a};(\Varid{d},\Varid{g'})\mathrel{=}\mathcal{D}\!^+\!\;\Varid{g}\;\Varid{b}\mskip1.5mu\}\;\mathbf{in}\;((\Varid{c},\Varid{d}),\Varid{f'}\times\Varid{g'}){}\<[86]% 
\>[86]{}\mbox{\onelinecomment definition of \ensuremath{\mathcal{D}\!^+\!}}{}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection{\thmRef{cont}}\proofLabel{theorem:cont} Recall the definition of \ensuremath{\Varid{cont}}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{cont}\mathbin{::}\Conid{Category}\;\Varid{k}\Rightarrow (\Varid{a}\mathbin{`\Varid{k}`}\Varid{b})\to \Conid{Cont}_{\Varid{k}}^{\Varid{r}}\;\Varid{a}\;\Varid{b}{}\<[E]% \\ \>[B]{}\Varid{cont}\;\Varid{f}\mathrel{=}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{f}){}\<[E]% \ColumnHook \end{hscode}\resethooks To say that \ensuremath{\Varid{cont}} is a functor (\ensuremath{\Conid{Category}} homomorphism) is equivalent to the following two equalities: \begin{closerCodePars} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{cont}\;\Varid{id}\mathrel{=}\Varid{id}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{cont}\;(\Varid{g}\hsdot{\circ }{.}\Varid{f})\mathrel{=}\Varid{cont}\;\Varid{g}\hsdot{\circ }{.}\Varid{cont}\;\Varid{f}{}\<[E]% \ColumnHook \end{hscode}\resethooks \end{closerCodePars}% Simplify the first homomorphism equation: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{27}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{id}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{id}){}\<[27]% \>[27]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{id}){}\<[27]% \>[27]{}\mbox{\onelinecomment definition of right section}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}){}\<[27]% \>[27]{}\mbox{\onelinecomment category law}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;\Varid{id}{}\<[27]% \>[27]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{id}} for functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks The first homomorphism equation is thus equivalent to \ensuremath{\Varid{id}\mathrel{=}\Conid{Cont}\;\Varid{id}}, which is in solved form. 
For the second homomorphism equation, simplify both sides: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{44}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{g}\hsdot{\circ }{.}\Varid{cont}\;\Varid{f}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{g})\hsdot{\circ }{.}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{f}){}\<[44]% \>[44]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{cont}\;(\Varid{g}\hsdot{\circ }{.}\Varid{f}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{cont}\;(\hsdot{\circ }{.}\:{}(\Varid{g}\hsdot{\circ }{.}\Varid{f})){}\<[44]% \>[44]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}(\Varid{g}\hsdot{\circ }{.}\Varid{f})){}\<[44]% \>[44]{}\mbox{\onelinecomment definition of right section}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{cont}\;(\lambda \Varid{h}\to (\Varid{h}\hsdot{\circ }{.}\Varid{g})\hsdot{\circ }{.}\Varid{f}){}\<[44]% \>[44]{}\mbox{\onelinecomment category law}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{cont}\;(\lambda \Varid{h}\to (\hsdot{\circ }{.}\:{}\Varid{f})\;((\hsdot{\circ }{.}\:{}\Varid{g})\;\Varid{h})){}\<[44]% \>[44]{}\mbox{\onelinecomment definition of right section}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;((\hsdot{\circ }{.}\:{}\Varid{f})\hsdot{\circ }{.}(\hsdot{\circ }{.}\:{}\Varid{g})){}\<[44]% \>[44]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})}}{}\<[E]% \ColumnHook \end{hscode}\resethooks The simplified requirement: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{g})\hsdot{\circ }{.}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{f})\mathrel{=}\Conid{Cont}\;((\hsdot{\circ }{.}\:{}\Varid{f})\hsdot{\circ }{.}(\hsdot{\circ }{.}\:{}\Varid{g})){}\<[E]% \ColumnHook \end{hscode}\resethooks Generalize to a stronger condition, replacing \ensuremath{(\hsdot{\circ }{.}\:{}\Varid{g})} and \ensuremath{(\hsdot{\circ }{.}\:{}\Varid{f})} with \ensuremath{\Varid{g}} and \ensuremath{\Varid{f}} (appropriately re-typed): \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Conid{Cont}\;\Varid{g}\hsdot{\circ }{.}\Conid{Cont}\;\Varid{f}\mathrel{=}\Conid{Cont}\;(\Varid{f}\hsdot{\circ }{.}\Varid{g}){}\<[E]% \ColumnHook \end{hscode}\resethooks This strengthened condition is also in solved form. Notice the reversal of composition (and, more subtly, of \ensuremath{\Varid{id}}). 
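Read back as plain Haskell, these two solved forms give the \ensuremath{\Conid{Category}} instance for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}}. The following is only a sketch: it spells the wrapper as an ordinary \texttt{newtype} with explicit type parameters and assumes the \ensuremath{\Conid{Category}} class used throughout the paper (so \texttt{id} and \texttt{(.)} here are its methods, not necessarily the \texttt{Prelude} ones); names in the accompanying source code may differ.
\begin{verbatim}
-- Sketch only (names and parameter order are assumptions):
-- a continuation-passing wrapper around an arbitrary category k.
newtype Cont k r a b = Cont ((b `k` r) -> (a `k` r))

instance Category k => Category (Cont k r) where
  id = Cont id                      -- first solved form
  Cont g . Cont f = Cont (f . g)    -- second solved form: composition reverses
\end{verbatim}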
The monoidal functor (i.e., a \ensuremath{\Conid{Monoidal}} homomorphism) property: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{cont}\;(\Varid{f}\times\Varid{g})\mathrel{=}\Varid{cont}\;\Varid{f}\times\Varid{cont}\;\Varid{g}{}\<[E]% \ColumnHook \end{hscode}\resethooks Simplify both sides: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{72}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{f}\times\Varid{cont}\;\Varid{g}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{f})\times\Conid{Cont}\;(\hsdot{\circ }{.}\:{}\Varid{g}){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{cont}\;(\Varid{f}\times\Varid{g}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\hsdot{\circ }{.}\:{}(\Varid{f}\times\Varid{g})){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of right section}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(\Varid{unjoin}\;\Varid{h})\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})){}\<[72]% \>[72]{}\mbox{\onelinecomment \ensuremath{\Varid{join}\hsdot{\circ }{.}\Varid{unjoin}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\mathrel{=}\Varid{unjoin}\;\Varid{h}\;\mathbf{in}\;\Varid{join}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})){}\<[72]% \>[72]{}\mbox{\onelinecomment refactor}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\mathbin{...}\mathbf{in}\;(\Varid{h}_{\Varid{a}}\mathbin{\triangledown}\Varid{h}_{\Varid{b}})\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\mathbin{...}\mathbf{in}\;(\Varid{h}_{\Varid{a}}\hsdot{\circ }{.}\Varid{f}\mathbin{\triangledown}\Varid{h}_{\Varid{b}}\hsdot{\circ }{.}\Varid{g})){}\<[72]% \>[72]{}\mbox{\onelinecomment \citep[Section 1.5.2]{Gibbons2002Calculating}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\mathbin{...}\mathbf{in}\;((\hsdot{\circ }{.}\:{}\Varid{f})\;\Varid{h}_{\Varid{a}}\mathbin{\triangledown}(\hsdot{\circ }{.}\:{}\Varid{g})\;\Varid{h}_{\Varid{b}})){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of right section}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\mathbin{...}\mathbf{in}\;\Varid{join}\;((\hsdot{\circ }{.}\:{}\Varid{f})\;\Varid{h}_{\Varid{a}},(\hsdot{\circ }{.}\:{}\Varid{g})\;\Varid{h}_{\Varid{b}})){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\mathbin{...}\mathbf{in}\;\Varid{join}\;(((\hsdot{\circ }{.}\:{}\Varid{f})\times(\hsdot{\circ }{.}\:{}\Varid{g}))\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}}))){}\<[72]% 
\>[72]{}\mbox{\onelinecomment definition of \ensuremath{(\times)}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(((\hsdot{\circ }{.}\:{}\Varid{f})\times(\hsdot{\circ }{.}\:{}\Varid{g}))\;(\Varid{unjoin}\;\Varid{h}))){}\<[72]% \>[72]{}\mbox{\onelinecomment eliminate \ensuremath{\mathbf{let}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}((\hsdot{\circ }{.}\:{}\Varid{f})\times(\hsdot{\circ }{.}\:{}\Varid{g}))\hsdot{\circ }{.}\Varid{unjoin}){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})}}{}\<[E]% \ColumnHook \end{hscode}\resethooks The crucial trick here was to note that \ensuremath{\Varid{h}\mathbin{::}(\Varid{a} \times \Varid{b})\mathbin{`\Varid{k}`}\Varid{r}} can be split into two continuations \ensuremath{\Varid{h}_{\Varid{a}}\mathbin{::}\Varid{a}\mathbin{`\Varid{k}`}\Varid{r}} and \ensuremath{\Varid{h}_{\Varid{b}}\mathbin{::}\Varid{b}\mathbin{`\Varid{k}`}\Varid{r}} thanks to \ensuremath{\Varid{join}}/\ensuremath{\Varid{unjoin}} isomorphism from \secref{Derived Operations}.\notefoot{In general, this splitting can lose efficiency, since \ensuremath{\Varid{h}_{\Varid{a}}} and \ensuremath{\Varid{h}_{\Varid{b}}} could duplicate work that was shared in \ensuremath{\Varid{h}}. Investigate this concern.} Now, strengthen the massaged specification, generalizing from \ensuremath{(\hsdot{\circ }{.}\:{}\Varid{f})} and \ensuremath{(\hsdot{\circ }{.}\:{}\Varid{g})} as usual, resulting in a sufficient condition in solved form: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Conid{Cont}\;\Varid{f}\times\Conid{Cont}\;\Varid{g}\mathrel{=}\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Varid{unjoin}){}\<[E]% \ColumnHook \end{hscode}\resethooks Next, derive \ensuremath{\Conid{Cartesian}} and \ensuremath{\Conid{Cocartesian}} instances from the specification that \ensuremath{\Varid{cont}} is a cartesian functor and a cocartesian functor (i.e., \ensuremath{\Conid{Cartesian}} and \ensuremath{\Conid{Cocartesian}} homomorphisms), i.e.,\\ {\mathindent2.5em \begin{minipage}[b]{0.30\textwidth} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{cont}\;\Varid{exl}{}\<[11]% \>[11]{}\mathrel{=}\Varid{exl}{}\<[E]% \\ \>[B]{}\Varid{cont}\;\Varid{exr}{}\<[11]% \>[11]{}\mathrel{=}\Varid{exr}{}\<[E]% \\ \>[B]{}\Varid{cont}\;\Varid{dup}{}\<[11]% \>[11]{}\mathrel{=}\Varid{dup}{}\<[E]% \ColumnHook \end{hscode}\resethooks \end{minipage} \begin{minipage}[b]{0ex}{\rule[3ex]{0.5pt}{0.43in}}\end{minipage} \begin{minipage}[b]{0.0\textwidth} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{cont}\;\Varid{inl}{}\<[11]% \>[11]{}\mathrel{=}\Varid{inl}{}\<[E]% \\ \>[B]{}\Varid{cont}\;\Varid{inr}{}\<[11]% \>[11]{}\mathrel{=}\Varid{inr}{}\<[E]% \\ \>[B]{}\Varid{cont}\;\Varid{jam}{}\<[11]% \>[11]{}\mathrel{=}\Varid{jam}{}\<[E]% \ColumnHook \end{hscode}\resethooks \end{minipage}}\\ Reversing each of these equations puts them in solved form, so they can be used directly as definitions. 
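For reference, the \ensuremath{\Varid{join}}/\ensuremath{\Varid{unjoin}} isomorphism used in this calculation, together with the instances just read off, can be sketched in plain Haskell as follows. This is a sketch only: the ASCII spellings \texttt{(><)} for \ensuremath{(\times)} and \texttt{(|||)} for \ensuremath{(\mathbin{\triangledown})}, the pair syntax for products, and the simplified class constraints are all assumptions; the definitive versions are the ones in \figref{cont}.
\begin{verbatim}
-- Sketch only. Assumes a category k with injections inl/inr, codiagonal
-- jam, copairing (|||), and cross (><); constraints are simplified.
join   :: Cocartesian k => (c `k` a, d `k` a) -> ((c, d) `k` a)
join (f, g) = f ||| g

unjoin :: Cocartesian k => ((c, d) `k` a) -> (c `k` a, d `k` a)
unjoin h = (h . inl, h . inr)

instance Monoidal k => Monoidal (Cont k r) where
  Cont f >< Cont g = Cont (join . (f >< g) . unjoin)

instance Cartesian k => Cartesian (Cont k r) where
  exl = cont exl ; exr = cont exr ; dup = cont dup

instance Cocartesian k => Cocartesian (Cont k r) where
  inl = cont inl ; inr = cont inr ; jam = cont jam
\end{verbatim}
The more efficient forms actually used are derived next.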
\out{\\ \begin{minipage}[b]{0.45\textwidth} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{8}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{instance}\;{}\<[11]% \>[11]{}\Conid{Cartesian}\;\Varid{k}\Rightarrow {}\<[E]% \\ \>[11]{}\Conid{Cartesian}\;\Conid{Cont}_{\Varid{k}}^{\Varid{r}}\;\mathbf{where}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{exl}{}\<[8]% \>[8]{}\mathrel{=}\Varid{cont}\;\Varid{exl}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{exr}{}\<[8]% \>[8]{}\mathrel{=}\Varid{cont}\;\Varid{exr}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{dup}{}\<[8]% \>[8]{}\mathrel{=}\Varid{cont}\;\Varid{dup}{}\<[E]% \ColumnHook \end{hscode}\resethooks \end{minipage} \begin{minipage}[b]{0ex}{\rule[2.5ex]{0.5pt}{0.8in}}\end{minipage} \begin{minipage}[b]{0.0\textwidth} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{8}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{instance}\;{}\<[11]% \>[11]{}\Conid{Cocartesian}\;\Varid{k}\Rightarrow {}\<[E]% \\ \>[11]{}\Conid{Cocartesian}\;\Conid{Cont}_{\Varid{k}}^{\Varid{r}}\;\mathbf{where}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{inl}{}\<[8]% \>[8]{}\mathrel{=}\Varid{cont}\;\Varid{inl}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{inr}{}\<[8]% \>[8]{}\mathrel{=}\Varid{cont}\;\Varid{inr}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{jam}{}\<[8]% \>[8]{}\mathrel{=}\Varid{cont}\;\Varid{jam}{}\<[E]% \ColumnHook \end{hscode}\resethooks \end{minipage} } While these definitions are correct, they can be made more efficient. For instance, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{34}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{exl}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{exl}){}\<[34]% \>[34]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\mathbin{\triangledown}\mathrm{0}){}\<[34]% \>[34]{}\mbox{\onelinecomment \appref{Abelian Categories}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(\Varid{h},\mathrm{0})){}\<[34]% \>[34]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(\Varid{inl}\;\Varid{h})){}\<[34]% \>[34]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}\Varid{inl}){}\<[34]% \>[34]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks Similarly, \ensuremath{\Varid{cont}\;\Varid{exr}\mathrel{=}\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}\Varid{inr})}. 
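As an aside (not part of the original derivation), the step labelled ``\appref{Abelian Categories}'' above can be spelled out from the biproduct fact \ensuremath{\Varid{exl}\mathrel{=}\Varid{id}\mathbin{\triangledown}\mathrm{0}}:
\[ \Varid{h}\hsdot{\circ }{.}\Varid{exl} \;=\; \Varid{h}\hsdot{\circ }{.}(\Varid{id}\mathbin{\triangledown}\mathrm{0}) \;=\; (\Varid{h}\hsdot{\circ }{.}\Varid{id})\mathbin{\triangledown}(\Varid{h}\hsdot{\circ }{.}\mathrm{0}) \;=\; \Varid{h}\mathbin{\triangledown}\mathrm{0} , \]
using the fusion law \ensuremath{\Varid{f}\hsdot{\circ }{.}(\Varid{p}\mathbin{\triangledown}\Varid{q})\mathrel{=}\Varid{f}\hsdot{\circ }{.}\Varid{p}\mathbin{\triangledown}\Varid{f}\hsdot{\circ }{.}\Varid{q}} \citep[Section 1.5.2]{Gibbons2002Calculating} and \ensuremath{\Varid{h}\hsdot{\circ }{.}\mathrm{0}\mathrel{=}\mathrm{0}}. The \ensuremath{\Varid{exr}} case is symmetric, with \ensuremath{\Varid{exr}\mathrel{=}\mathrm{0}\mathbin{\triangledown}\Varid{id}}.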
For \ensuremath{\Varid{dup}\mathbin{::}\Varid{a}\mathbin{`\Varid{k}`}(\Varid{a} \times \Varid{a})}, we'll have \ensuremath{\Varid{h}\mathbin{::}(\Varid{a} \times \Varid{a}) \leadsto \Varid{r}}, so we can split \ensuremath{\Varid{h}} with \ensuremath{\Varid{unjoin}}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{64}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{dup}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{dup}){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(\Varid{unjoin}\;\Varid{h})\hsdot{\circ }{.}\Varid{dup}){}\<[64]% \>[64]{}\mbox{\onelinecomment \ensuremath{\Varid{join}\hsdot{\circ }{.}\Varid{unjoin}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\mathrel{=}\Varid{unjoin}\;\Varid{h}\;\mathbf{in}\;(\Varid{h}_{\Varid{a}}\mathbin{\triangledown}\Varid{h}_{\Varid{b}})\hsdot{\circ }{.}\Varid{dup}){}\<[64]% \>[64]{}\mbox{\onelinecomment refactor; definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\mathrel{=}\Varid{unjoin}\;\Varid{h}\;\mathbf{in}\;\Varid{h}_{\Varid{a}}\mathbin{+}\Varid{h}_{\Varid{b}}){}\<[64]% \>[64]{}\mbox{\onelinecomment \appref{Abelian Categories}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\mathrel{=}\Varid{unjoin}\;\Varid{h}\;\mathbf{in}\;\Varid{jam}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{jam}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{jam}\;(\Varid{unjoin}\;\Varid{h})){}\<[64]% \>[64]{}\mbox{\onelinecomment eliminate the \ensuremath{\mathbf{let}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\Varid{jam}\hsdot{\circ }{.}\Varid{unjoin}){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} on functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks For \ensuremath{\Conid{Cocartesian}}, we reason dually: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{64}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{inl}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{inl}){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(\Varid{unjoin}\;\Varid{h})\hsdot{\circ }{.}\Varid{inl}){}\<[64]% \>[64]{}\mbox{\onelinecomment \ensuremath{\Varid{join}\hsdot{\circ }{.}\Varid{unjoin}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\mathrel{=}\Varid{unjoin}\;\Varid{h}\;\mathbf{in}\;(\Varid{h}_{\Varid{a}}\mathbin{\triangledown}\Varid{h}_{\Varid{b}})\hsdot{\circ }{.}\Varid{inl}){}\<[64]% 
\>[64]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \mathbf{let}\;(\Varid{h}_{\Varid{a}},\Varid{h}_{\Varid{b}})\mathrel{=}\Varid{unjoin}\;\Varid{h}\;\mathbf{in}\;\Varid{h}_{\Varid{a}}){}\<[64]% \>[64]{}\mbox{\onelinecomment \citep[Section 1.5.2]{Gibbons2002Calculating}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{exl}\;(\Varid{unjoin}\;\Varid{h})){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{exl}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\Varid{exl}\hsdot{\circ }{.}\Varid{unjoin}){}\<[64]% \>[64]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks Similarly, \ensuremath{\Varid{cont}\;\Varid{inr}\mathrel{=}\Conid{Cont}\;(\Varid{exr}\hsdot{\circ }{.}\Varid{unjoin})}. Next, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{38}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;\Varid{jam}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{jam}){}\<[38]% \>[38]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}(\Varid{id}\mathbin{\triangledown}\Varid{id})){}\<[38]% \>[38]{}\mbox{\onelinecomment a law for \ensuremath{\Varid{jam}} and \ensuremath{(\mathbin{\triangledown})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{id}\mathbin{\triangledown}\Varid{h}\hsdot{\circ }{.}\Varid{id}){}\<[38]% \>[38]{}\mbox{\onelinecomment \citep[Section 1.5.2]{Gibbons2002Calculating}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\mathbin{\triangledown}\Varid{h}){}\<[38]% \>[38]{}\mbox{\onelinecomment category law}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{join}\;(\Varid{h},\Varid{h})){}\<[38]% \>[38]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}\Varid{dup}){}\<[38]% \>[38]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{dup}} on functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks The final element of our linear vocabulary is scalar multiplication:\notefoot{Is there a more general argument to make? 
I haven't wanted to say that \ensuremath{\Varid{h}} is linear.} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{32}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{cont}\;(\Varid{scale}\;\Varid{s}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{h}\hsdot{\circ }{.}\Varid{scale}\;\Varid{s}){}\<[32]% \>[32]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{scale}\;\Varid{s}\hsdot{\circ }{.}\Varid{h}){}\<[32]% \>[32]{}\mbox{\onelinecomment linearity of \ensuremath{\Varid{h}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\lambda \Varid{h}\to \Varid{scale}\;\Varid{s}\;\Varid{h}){}\<[32]% \>[32]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{scale}} for functions/maps}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Cont}\;(\Varid{scale}\;\Varid{s}){}\<[32]% \>[32]{}\mbox{\onelinecomment $\eta$-reduction}{}\<[E]% \ColumnHook \end{hscode}\resethooks These optimized solved forms match the definitions in \figref{cont}. \subsection{\thmRef{asDual}}\proofLabel{theorem:asDual} \nc\lemDot[1]{\lemRef{dot-properties}, part \ref{#1}} \nc\lemDotTwo[2]{Lemma \ref{lemma:dot-properties}, parts \ref{#1} \& \ref{#2}} To derive instances for \ensuremath{\Conid{Dual}_{\Varid{k}}}, we'll need some properties. \begin{lemma} \lemLabel{dot-properties} The following properties hold: \normalfont \begin{enumerate} \item \ensuremath{\Varid{dot}} is linear. \label{dot-linear} \item \ensuremath{\Varid{dot}^{-1}} is linear. \label{unDot-linear} \item \ensuremath{\Varid{unjoin}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{dot}\times\Varid{dot}} \label{unjoin-dot} \item \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{join}\mathrel{=}\Varid{dot}^{-1}\times\Varid{dot}^{-1}} \label{unDot-join} \item \ensuremath{\Varid{dot}\;\Varid{u}\mathbin{\triangledown}\Varid{dot}\;\Varid{v}\mathrel{=}\Varid{dot}\;(\Varid{u},\Varid{v})} \label{dot-dot-join} \item \ensuremath{\Varid{dot}\;\mathrm{0}\mathrel{=}\mathrm{0}} (zero vector vs zero morphism) \label{dot-zeroV} \end{enumerate} \end{lemma} \emph{Proof:} \begin{enumerate} \item Follows from the bilinearity of uncurried dot product:\notefoot{I'm treating linear maps here as functions. 
Revisit.} \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{dot}\;(\Varid{u}\mathbin{+}\Varid{v}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{w}\to \Varid{dot}\;(\Varid{u}\mathbin{+}\Varid{v})\;\Varid{w}{}\<[31]% \>[31]{}\mbox{\onelinecomment $\eta$-expansion}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{w}\to \Varid{dot}\;\Varid{u}\;\Varid{w}\mathbin{+}\Varid{dot}\;\Varid{v}\;\Varid{w}{}\<[31]% \>[31]{}\mbox{\onelinecomment bilinearity of uncurried dot product}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}\;\Varid{u}\mathbin{+}\Varid{dot}\;\Varid{v}{}\<[31]% \>[31]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{+})} of functions}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{dot}\;(\Varid{s} \cdot \Varid{u}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{w}\to \Varid{dot}\;(\Varid{s} \cdot \Varid{u})\;\Varid{w}{}\<[31]% \>[31]{}\mbox{\onelinecomment $\eta$-expansion}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{w}\to \Varid{s} \cdot \Varid{dot}\;\Varid{u}\;\Varid{w}{}\<[31]% \>[31]{}\mbox{\onelinecomment bilinearity of uncurried dot product}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{s} \cdot \Varid{dot}\;\Varid{u}{}\<[31]% \>[31]{}\mbox{\onelinecomment definition of \ensuremath{( \cdot )} on functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks \item Invertible linear functions have linear inverses. In particular, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{44}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{dot}^{-1}\;(\Varid{u}\mathbin{+}\Varid{v}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}^{-1}\;(\Varid{dot}\;(\Varid{dot}^{-1}\;\Varid{u})\mathbin{+}\Varid{dot}\;(\Varid{dot}^{-1}\;\Varid{v})){}\<[44]% \>[44]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}\hsdot{\circ }{.}\Varid{dot}^{-1}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}^{-1}\;(\Varid{dot}\;(\Varid{dot}^{-1}\;\Varid{u}\mathbin{+}\Varid{dot}^{-1}\;\Varid{v})){}\<[44]% \>[44]{}\mbox{\onelinecomment linearity of \ensuremath{\Varid{dot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}^{-1}\;\Varid{u}\mathbin{+}\Varid{dot}^{-1}\;\Varid{v}{}\<[44]% \>[44]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{dot}^{-1}\;(\Varid{s} \cdot \Varid{u}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}^{-1}\;(\Varid{s} \cdot \Varid{dot}\;(\Varid{dot}^{-1}\;\Varid{u})){}\<[44]% \>[44]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}\hsdot{\circ }{.}\Varid{dot}^{-1}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}^{-1}\;(\Varid{dot}\;(\Varid{s} \cdot \Varid{dot}^{-1}\;\Varid{u})){}\<[44]% \>[44]{}\mbox{\onelinecomment linearity of \ensuremath{\Varid{dot}} }{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{s} \cdot \Varid{dot}^{-1}\;\Varid{u}{}\<[44]% \>[44]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \ColumnHook \end{hscode}\resethooks \item Noting that the argument of both sides is a pair, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% 
\column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{72}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{unjoin}\hsdot{\circ }{.}\Varid{dot}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to \Varid{unjoin}\;(\Varid{dot}\;(\Varid{u},\Varid{v})){}\<[72]% \>[72]{}\mbox{\onelinecomment $\eta$-expansion}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to (\Varid{dot}\;(\Varid{u},\Varid{v})\hsdot{\circ }{.}\Varid{inl},\Varid{dot}\;(\Varid{u},\Varid{v})\hsdot{\circ }{.}\Varid{inr}){}\<[72]% \>[72]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{unjoin}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to (\lambda \Varid{x}\to \Varid{dot}\;(\Varid{u},\Varid{v})\;(\Varid{inl}\;\Varid{x}),\lambda \Varid{y}\to \Varid{dot}\;(\Varid{u},\Varid{v})\;(\Varid{inr}\;\Varid{y})){}\<[72]% \>[72]{}\mbox{\onelinecomment def'n of \ensuremath{(\hsdot{\circ }{.})} for \ensuremath{(\to )}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to (\lambda \Varid{x}\to \Varid{dot}\;(\Varid{u},\Varid{v})\;(\Varid{x},\mathrm{0}),\lambda \Varid{y}\to \Varid{dot}\;(\Varid{u},\Varid{v})\;(\mathrm{0},\Varid{y})){}\<[72]% \>[72]{}\mbox{\onelinecomment def'n of \ensuremath{\Varid{inl}} for \ensuremath{(\multimap)}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to (\lambda \Varid{x}\to \Varid{dot}\;\Varid{u}\;\Varid{x}\mathbin{+}\Varid{dot}\;\Varid{v}\;\mathrm{0},\lambda \Varid{y}\to \Varid{dot}\;\Varid{u}\;\mathrm{0}\mathbin{+}\Varid{dot}\;\Varid{v}\;\Varid{y}){}\<[72]% \>[72]{}\mbox{\onelinecomment def'n of \ensuremath{\Varid{dot}} for pairs}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to (\lambda \Varid{x}\to \Varid{dot}\;\Varid{u}\;\Varid{x},\lambda \Varid{y}\to \Varid{dot}\;\Varid{v}\;\Varid{y}){}\<[72]% \>[72]{}\mbox{\onelinecomment linearity of \ensuremath{\Varid{dot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{u},\Varid{v})\to (\Varid{dot}\;\Varid{u},\Varid{dot}\;\Varid{v}){}\<[72]% \>[72]{}\mbox{\onelinecomment $\eta$-reduction}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}\times\Varid{dot}{}\<[72]% \>[72]{}\mbox{\onelinecomment def'n of \ensuremath{(\times)} for \ensuremath{(\to )}}{}\<[E]% \ColumnHook \end{hscode}\resethooks \item Follows from inverting each side of part \ref{unjoin-dot}. 
\item Noting again that the argument of both sides is a pair, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{48}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{dot}\;\Varid{u}\mathbin{\triangledown}\Varid{dot}\;\Varid{v}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{jam}\hsdot{\circ }{.}(\Varid{dot}\;\Varid{u}\times\Varid{dot}\;\Varid{v}){}\<[48]% \>[48]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\triangledown})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{x},\Varid{y})\to \Varid{jam}\;((\Varid{dot}\;\Varid{u}\times\Varid{dot}\;\Varid{v})\;(\Varid{x},\Varid{y})){}\<[48]% \>[48]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{x},\Varid{y})\to \Varid{jam}\;(\Varid{dot}\;\Varid{u}\;\Varid{x},\Varid{dot}\;\Varid{v}\;\Varid{y}){}\<[48]% \>[48]{}\mbox{\onelinecomment definition of \ensuremath{(\times)} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{x},\Varid{y})\to \Varid{dot}\;\Varid{u}\;\Varid{x}\mathbin{+}\Varid{dot}\;\Varid{v}\;\Varid{y}{}\<[48]% \>[48]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{jam}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{x},\Varid{y})\to \Varid{dot}\;(\Varid{u},\Varid{v})\;(\Varid{x},\Varid{y}){}\<[48]% \>[48]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{dot}} for pairs}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{dot}\;(\Varid{u},\Varid{v}){}\<[48]% \>[48]{}\mbox{\onelinecomment $\eta$-reduction}{}\<[E]% \ColumnHook \end{hscode}\resethooks \item Immediate from linearity and the definition of \ensuremath{\mathrm{0}} for functions. \end{enumerate} \emph{End of proof of \lemRef{dot-properties}}.\\ Recall the definition of \ensuremath{\Varid{asDual}} from \secref{Gradients and Duality}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{asDual}\mathbin{::}(\Conid{HasDot}^{\Varid{s}}\;\Varid{a},\Conid{HasDot}^{\Varid{s}}\;\Varid{b})\Rightarrow \Conid{Cont}_{\Varid{k}}^{\Varid{s}}\;\Varid{a}\;\Varid{b}\to \Conid{Dual}_{\Varid{k}}\;\Varid{a}\;\Varid{b}{}\<[E]% \\ \>[B]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{f})\mathrel{=}\Conid{Dual}\;(\Varid{onDot}\;\Varid{f}){}\<[E]% \ColumnHook \end{hscode}\resethooks where \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{onDot}\mathbin{::}(\Conid{HasDot}^{\Varid{s}}\;\Varid{a},\Conid{HasDot}^{\Varid{s}}\;\Varid{b})\Rightarrow ((\Varid{b}\multimap\Varid{s})\to (\Varid{a}\multimap\Varid{s}))\to (\Varid{b}\multimap\Varid{a}){}\<[E]% \\ \>[B]{}\Varid{onDot}\;\Varid{f}\mathrel{=}\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{f}\hsdot{\circ }{.}\Varid{dot}{}\<[E]% \ColumnHook \end{hscode}\resethooks For the \ensuremath{\Conid{Category}} instance of \ensuremath{\Conid{Dual}_{\Varid{k}}}, we'll need that \ensuremath{\Varid{id}\mathrel{=}\Varid{asDual}\;\Varid{id}}. 
Simplifying the RHS, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{30}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{asDual}\;\Varid{id}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{id}){}\<[30]% \>[30]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{id}} for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}} (\figref{cont})}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{id}\hsdot{\circ }{.}\Varid{dot}){}\<[30]% \>[30]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}){}\<[30]% \>[30]{}\mbox{\onelinecomment \ensuremath{\Conid{Category}} law for \ensuremath{\Varid{id}}/\ensuremath{(\hsdot{\circ }{.})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{id}{}\<[30]% \>[30]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \ColumnHook \end{hscode}\resethooks We also need \ensuremath{\Varid{asDual}\;(\Varid{g}\hsdot{\circ }{.}\Varid{f})\mathrel{=}\Varid{asDual}\;\Varid{g}\hsdot{\circ }{.}\Varid{asDual}\;\Varid{f}}, or (without loss of generality) \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{g}\hsdot{\circ }{.}\Conid{Cont}\;\Varid{f})\mathrel{=}\Varid{asDual}\;(\Conid{Cont}\;\Varid{g})\hsdot{\circ }{.}\Varid{asDual}\;(\Conid{Cont}\;\Varid{f}){}\<[E]% \ColumnHook \end{hscode}\resethooks Simplifying both sides, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{47}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{g}\hsdot{\circ }{.}\Conid{Cont}\;\Varid{f}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{f}\hsdot{\circ }{.}\Varid{g})){}\<[47]% \>[47]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{f}\hsdot{\circ }{.}\Varid{g}\hsdot{\circ }{.}\Varid{dot}){}\<[47]% \>[47]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{f}\hsdot{\circ }{.}\Varid{dot}\hsdot{\circ }{.}\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{g}\hsdot{\circ }{.}\Varid{dot}){}\<[47]% \>[47]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}\hsdot{\circ }{.}\Varid{dot}^{-1}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;\Varid{f}\hsdot{\circ }{.}\Varid{onDot}\;\Varid{g}){}\<[47]% \>[47]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{g})\hsdot{\circ }{.}\Varid{asDual}\;(\Conid{Cont}\;\Varid{f}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;\Varid{g})\hsdot{\circ }{.}\Conid{Dual}\;(\Varid{onDot}\;\Varid{f}){}\<[47]% \>[47]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \ColumnHook \end{hscode}\resethooks As usual, strengthen this
equality by replacing \ensuremath{\Varid{onDot}\;\Varid{g}} and \ensuremath{\Varid{onDot}\;\Varid{f}} by re-typed \ensuremath{\Varid{g}} and \ensuremath{\Varid{f}}, and read off a sufficient definition: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Conid{Dual}\;(\Varid{f}\hsdot{\circ }{.}\Varid{g})\mathrel{=}\Conid{Dual}\;\Varid{g}\hsdot{\circ }{.}\Conid{Dual}\;\Varid{f}{}\<[E]% \ColumnHook \end{hscode}\resethooks For \ensuremath{\Conid{Monoidal}}, the homomorphism condition is \ensuremath{\Varid{asDual}\;(\Varid{f}\times\Varid{g})\mathrel{=}\Varid{asDual}\;\Varid{f}\times\Varid{asDual}\;\Varid{g}}. Simplify both sides: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{59}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{f})\times\Varid{asDual}\;(\Conid{Cont}\;\Varid{g}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;\Varid{f})\times\Conid{Dual}\;(\Varid{onDot}\;\Varid{g}){}\<[59]% \>[59]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;\Varid{f}\times\Conid{Cont}\;\Varid{g}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Varid{unjoin})){}\<[59]% \>[59]{}\mbox{\onelinecomment definition of \ensuremath{(\times)} on \ensuremath{\Conid{Cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;(\Varid{join}\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Varid{unjoin})){}\<[59]% \>[59]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{join}\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Varid{unjoin}\hsdot{\circ }{.}\Varid{dot}){}\<[59]% \>[59]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;((\Varid{dot}^{-1}\times\Varid{dot}^{-1})\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}(\Varid{dot}\times\Varid{dot})){}\<[59]% \>[59]{}\mbox{\onelinecomment \lemDotTwo{unjoin-dot}{unDot-join} }{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{f}\hsdot{\circ }{.}\Varid{dot}\times\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{g}\hsdot{\circ }{.}\Varid{dot}){}\<[59]% \>[59]{}\mbox{\onelinecomment law about \ensuremath{(\times)}/\ensuremath{(\hsdot{\circ }{.})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;\Varid{f}\times\Varid{onDot}\;\Varid{g}){}\<[59]% \>[59]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \ColumnHook \end{hscode}\resethooks Strengthening from \ensuremath{\Varid{onDot}\;\Varid{f}} and \ensuremath{\Varid{onDot}\;\Varid{g}} gives a simple sufficient condition: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Conid{Dual}\;\Varid{f}\times\Conid{Dual}\;\Varid{g}\mathrel{=}\Conid{Dual}\;(\Varid{f}\times\Varid{g}){}\<[E]% \ColumnHook \end{hscode}\resethooks For \ensuremath{\Conid{Cartesian}}, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{49}{@{}>{\hspre}l<{\hspost}@{}}% \column{58}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{exl}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;\Varid{exl}{}\<[49]% \>[49]{}\mbox{\onelinecomment specification}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}\Varid{inl})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{exl}} for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;(\Varid{join}\hsdot{\circ }{.}\Varid{inl})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{join}\hsdot{\circ }{.}\Varid{inl}\hsdot{\circ }{.}\Varid{dot}){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}, and associativity of \ensuremath{(\hsdot{\circ }{.})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{join}\;(\Varid{inl}\;(\Varid{dot}\;\Varid{u})))){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{join}\;(\Varid{dot}\;\Varid{u},\mathrm{0}))){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{u}\mathbin{\triangledown}\mathrm{0})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{u}\mathbin{\triangledown}\Varid{dot}\;\mathrm{0})){}\<[49]% \>[49]{}\mbox{\onelinecomment \lemDot{dot-zeroV}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{dot}\;(\Varid{u},\mathrm{0}))){}\<[49]% \>[49]{}\mbox{\onelinecomment \lemDot{dot-dot-join}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to (\Varid{u},\mathrm{0})){}\<[49]% \>[49]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{inl}\;\Varid{u}){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{inl}{}\<[49]% \>[49]{}\mbox{\onelinecomment $\eta$-reduction}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{exrP}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{inr}{}\<[49]% \>[49]{}\mbox{\onelinecomment as with \ensuremath{\Varid{exlP}}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{dup}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;\Varid{dup}{}\<[58]% \>[58]{}\mbox{\onelinecomment specification}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{jam}\hsdot{\circ }{.}\Varid{unjoin})){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{dup}} for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;(\Varid{jam}\hsdot{\circ }{.}\Varid{unjoin})){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of 
\ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{jam}\hsdot{\circ }{.}\Varid{unjoin}\hsdot{\circ }{.}\Varid{dot}){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{jam}\;(\Varid{unjoin}\;(\Varid{dot}\;(\Varid{u},\Varid{v}))))){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{jam}\;(\Varid{dot}\;\Varid{u},\Varid{dot}\;\Varid{v}))){}\<[58]% \>[58]{}\mbox{\onelinecomment \lemDot{unjoin-dot}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{u}\mathbin{+}\Varid{dot}\;\Varid{v})){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{jam}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{u})\mathbin{+}\Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{v})){}\<[58]% \>[58]{}\mbox{\onelinecomment \lemDot{unDot-linear}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{u}\mathbin{+}\Varid{v}){}\<[58]% \>[58]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{jam}{}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{jam}} for functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks The \ensuremath{\Conid{Cocartesian}} instance comes out similarly: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{49}{@{}>{\hspre}l<{\hspost}@{}}% \column{58}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{inl}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;\Varid{inl}{}\<[58]% \>[58]{}\mbox{\onelinecomment specification}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{exl}\hsdot{\circ }{.}\Varid{unjoin})){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}} for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;(\Varid{exl}\hsdot{\circ }{.}\Varid{unjoin})){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{exl}\hsdot{\circ }{.}\Varid{unjoin}\hsdot{\circ }{.}\Varid{dot}){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{exl}\;(\Varid{unjoin}\;(\Varid{dot}\;(\Varid{u},\Varid{v}))))){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{exl}\;(\Varid{dot}\;\Varid{u},\Varid{dot}\;\Varid{v}))){}\<[58]% \>[58]{}\mbox{\onelinecomment \lemDot{unjoin-dot}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda 
(\Varid{u},\Varid{v})\to \Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{u})){}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{exl}} on functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda (\Varid{u},\Varid{v})\to \Varid{u}){}\<[58]% \>[58]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{exl}{}\<[58]% \>[58]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{exl}} for functions}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{inr}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{exr}{}\<[58]% \>[58]{}\mbox{\onelinecomment \ldots{} as with \ensuremath{\Varid{inl}} \ldots}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Varid{jam}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;\Varid{jam}{}\<[49]% \>[49]{}\mbox{\onelinecomment specification}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{join}\hsdot{\circ }{.}\Varid{dup})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{jam}} on \ensuremath{\Conid{Cont}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;(\Varid{join}\hsdot{\circ }{.}\Varid{dup})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{join}\hsdot{\circ }{.}\Varid{dup}\hsdot{\circ }{.}\Varid{dot}){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{join}\;(\Varid{dup}\;(\Varid{dot}\;\Varid{u})))){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} on functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{join}\;(\Varid{dot}\;\Varid{u},\Varid{dot}\;\Varid{u}))){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{dup}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{dot}\;\Varid{u}\mathbin{\triangledown}\Varid{dot}\;\Varid{u})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{join}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dot}^{-1}\;(\Varid{dot}\;(\Varid{u},\Varid{u}))){}\<[49]% \>[49]{}\mbox{\onelinecomment \lemDot{dot-dot-join}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to (\Varid{u},\Varid{u})){}\<[49]% \>[49]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\lambda \Varid{u}\to \Varid{dup}\;\Varid{u}){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{dup}} on functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{dup}{}\<[49]% \>[49]{}\mbox{\onelinecomment $\eta$-reduction}{}\<[E]% \ColumnHook \end{hscode}\resethooks Finally, scaling: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{35}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{scale}\;\Varid{s}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Varid{scale}\;\Varid{s}){}\<[35]% 
\>[35]{}\mbox{\onelinecomment specification}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{asDual}\;(\Conid{Cont}\;(\Varid{scale}\;\Varid{s})){}\<[35]% \>[35]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{scale}} for \ensuremath{\Conid{Cont}_{\Varid{k}}^{\Varid{r}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{onDot}\;(\Varid{scale}\;\Varid{s})){}\<[35]% \>[35]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{asDual}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{scale}\;\Varid{s}\hsdot{\circ }{.}\Varid{dot}){}\<[35]% \>[35]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{onDot}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{scale}\;\Varid{s}\hsdot{\circ }{.}\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}){}\<[35]% \>[35]{}\mbox{\onelinecomment \lemDot{unDot-linear}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{scale}\;\Varid{s}){}\<[35]% \>[35]{}\mbox{\onelinecomment \ensuremath{\Varid{dot}^{-1}\hsdot{\circ }{.}\Varid{dot}\mathrel{=}\Varid{id}}}{}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection{\corRef{dual-derived}}\proofLabel{corollary:dual-derived} Given the definitions in \figref{asDual}, \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{33}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Conid{Dual}\;\Varid{f}\mathbin{\vartriangle}\Conid{Dual}\;\Varid{g}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Conid{Dual}\;\Varid{f}\times\Conid{Dual}\;\Varid{g})\hsdot{\circ }{.}\Varid{dup}{}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\vartriangle})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Varid{dup}{}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\times)} for \ensuremath{\Conid{Dual}_{\Varid{k}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Conid{Dual}\;\Varid{jam}{}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{dup}} for \ensuremath{\Conid{Dual}_{\Varid{k}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{jam}\hsdot{\circ }{.}(\Varid{f}\times\Varid{g})){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for \ensuremath{\Conid{Dual}_{\Varid{k}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{f}\mathbin{\triangledown}\Varid{g}){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\triangledown})}}{}\<[E]% \\[\blanklineskip]% \>[5]{}\Conid{Dual}\;\Varid{f}\mathbin{\triangledown}\Conid{Dual}\;\Varid{g}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{jam}\hsdot{\circ }{.}(\Conid{Dual}\;\Varid{f}\times\Conid{Dual}\;\Varid{g}){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\triangledown})}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{jam}\hsdot{\circ }{.}\Conid{Dual}\;(\Varid{f}\times\Varid{g}){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\times)} for \ensuremath{\Conid{Dual}_{\Varid{k}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;\Varid{dup}\hsdot{\circ }{.}\Conid{Dual}\;(\Varid{f}\times\Varid{g}){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{jam}} for \ensuremath{\Conid{Dual}_{\Varid{k}}}}{}\<[E]% \\ 
\>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;((\Varid{f}\times\Varid{g})\hsdot{\circ }{.}\Varid{dup}){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for \ensuremath{\Conid{Dual}_{\Varid{k}}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Conid{Dual}\;(\Varid{f}\mathbin{\vartriangle}\Varid{g}){}\<[33]% \>[33]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\vartriangle})}}{}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection{\thmRef{indexed}}\proofLabel{theorem:indexed} We will need an indexed counterpart to \thmRef{cross}, which says $$\ensuremath{\mathcal{D}\;(\Varid{f}\times\Varid{g})\;(\Varid{a},\Varid{b})\mathrel{=}\mathcal{D}\;\Varid{f}\;\Varid{a}\times\mathcal{D}\;\Varid{g}\;\Varid{b}}$$ Letting \ensuremath{\Varid{cross}\mathrel{=}\Varid{uncurry}\;(\times)}, we can rephrase this theorem: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{49}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\mathcal{D}\;(\Varid{f}\times\Varid{g}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{a},\Varid{b})\to \mathcal{D}\;\Varid{f}\;\Varid{a}\times\mathcal{D}\;\Varid{g}\;\Varid{b}{}\<[49]% \>[49]{}\mbox{\onelinecomment \thmRef{cross}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{a},\Varid{b})\to \Varid{cross}\;(\mathcal{D}\;\Varid{f}\;\Varid{a},\mathcal{D}\;\Varid{g}\;\Varid{b}){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{cross}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda (\Varid{a},\Varid{b})\to \Varid{cross}\;((\mathcal{D}\;\Varid{f}\times\mathcal{D}\;\Varid{g})\;(\Varid{a},\Varid{b})){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{(\times)} on functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{cross}\hsdot{\circ }{.}(\mathcal{D}\;\Varid{f}\times\mathcal{D}\;\Varid{g}){}\<[49]% \>[49]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} on functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks Likewise, extend from binary to $n$-ary: \begin{theorem}[indexed cross rule] \thmLabel{crossF} $$\ensuremath{\mathcal{D}\;(\Varid{crossI}\;\Varid{fs})\mathrel{=}\Varid{crossI}\hsdot{\circ }{.}\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\;\Varid{fs})}$$ \end{theorem} If \ensuremath{\Varid{fs}\mathbin{::}\Varid{h}\;(\Varid{a}\to \Varid{b})}, then both sides of this equation have type \ensuremath{\Varid{h}\;\Varid{a}\to (\Varid{h}\;\Varid{a}\multimap\Varid{h}\;\Varid{b})}. The proof is similar to \thmRef{cross} \citep[variant of Theorem 2-3 (3)]{Spivak65}. \thmRef{crossF} gives us what we need to construct \ensuremath{\mathcal{D}\!^+\!\;(\Varid{crossI}\;\Varid{fs})} compositionally: \begin{corollary} \corLabel{crossF} \ensuremath{\mathcal{D}\!^+\!} is compositional with respect to \ensuremath{\Varid{crossI}}. 
Specifically, $$\ensuremath{\mathcal{D}\!^+\!\;(\Varid{crossI}\;\Varid{fs})\mathrel{=}\Varid{second}\;\Varid{crossI}\hsdot{\circ }{.}\Varid{unzip}\hsdot{\circ }{.}\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\!^+\!\;\Varid{fs})}$$ \end{corollary} The proof is analogous to that of \corRef{cross}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{75}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\mathcal{D}\!^+\!\;(\Varid{crossI}\;\Varid{fs})\;\Varid{as}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{crossI}\;\Varid{fs}\;\Varid{as},\mathcal{D}\;(\Varid{crossI}\;\Varid{fs})\;\Varid{as}){}\<[75]% \>[75]{}\mbox{\onelinecomment definition of \ensuremath{\mathcal{D}\!^+\!}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{crossI}\;\Varid{fs}\;\Varid{as},\Varid{crossI}\;(\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\;\Varid{fs})\;\Varid{as})){}\<[75]% \>[75]{}\mbox{\onelinecomment \thmRef{crossF}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{second}\;\Varid{crossI}\;(\Varid{crossI}\;\Varid{fs}\;\Varid{as},\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\;\Varid{fs})\;\Varid{as}){}\<[75]% \>[75]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{second}} (\figref{indexed})}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{second}\;\Varid{crossI}\;((\Varid{crossI}\;\Varid{fs}\mathbin{\vartriangle}\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\;\Varid{fs}))\;\Varid{as}){}\<[75]% \>[75]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\vartriangle})} on functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{second}\;\Varid{crossI}\hsdot{\circ }{.}(\Varid{crossI}\;\Varid{fs}\mathbin{\vartriangle}\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\;\Varid{fs})))\;\Varid{as}{}\<[75]% \>[75]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} on functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{second}\;\Varid{crossI}\hsdot{\circ }{.}\Varid{unzip}\hsdot{\circ }{.}\Varid{crossI}\;(\Varid{zipWith}\;(\mathbin{\vartriangle})\;\Varid{fs}\;(\Varid{fmap}\;\mathcal{D}\;\Varid{fs})))\;\Varid{as}{}\<[75]% \>[75]{}\mbox{\onelinecomment \lemRef{crossZip} below}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{second}\;\Varid{crossI}\hsdot{\circ }{.}\Varid{unzip}\hsdot{\circ }{.}\Varid{crossI}\;(\Varid{fmap}\;\mathcal{D}\!^+\!\;\Varid{fs}))\;\Varid{as}{}\<[75]% \>[75]{}\mbox{\onelinecomment definition of \ensuremath{\mathcal{D}\!^+\!}}{}\<[E]% \ColumnHook \end{hscode}\resethooks For the second-to-last step, \begin{lemma}\lemLabel{crossZip} \ensuremath{\Varid{crossI}\;\Varid{fs}\mathbin{\vartriangle}\Varid{crossI}\;\Varid{gs}\mathrel{=}\Varid{unzip}\hsdot{\circ }{.}\Varid{crossI}\;(\Varid{zipWith}\;(\mathbin{\vartriangle})\;\Varid{fs}\;\Varid{gs})}. 
\end{lemma} For now, let's prove just the binary version of this lemma, namely $$ \ensuremath{(\Varid{f}\times\Varid{f'})\mathbin{\vartriangle}(\Varid{g}\times\Varid{g'})\mathrel{=}\Varid{transpose}\hsdot{\circ }{.}((\Varid{f}\mathbin{\vartriangle}\Varid{g})\times(\Varid{f'}\mathbin{\vartriangle}\Varid{g'}))} $$ where \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{transpose}\mathbin{::}((\Varid{a} \times \Varid{b}) \times (\Varid{c} \times \Varid{d}))\to ((\Varid{a} \times \Varid{c}) \times (\Varid{b} \times \Varid{d})){}\<[E]% \\ \>[B]{}\Varid{transpose}\;((\Varid{a},\Varid{b}),(\Varid{c},\Varid{d}))\mathrel{=}((\Varid{a},\Varid{c}),(\Varid{b},\Varid{d})){}\<[E]% \ColumnHook \end{hscode}\resethooks \out{For general cartesian categories, \ensuremath{\Varid{transpose}\mathrel{=}(\Varid{exl}\hsdot{\circ }{.}\Varid{exl}\mathbin{\vartriangle}\Varid{exl}\hsdot{\circ }{.}\Varid{exr})\mathbin{\vartriangle}(\Varid{exr}\hsdot{\circ }{.}\Varid{exl}\mathbin{\vartriangle}\Varid{exr}\hsdot{\circ }{.}\Varid{exr})}.} Proof: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{69}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}(\Varid{f}\times\Varid{f'})\mathbin{\vartriangle}(\Varid{g}\times\Varid{g'}){}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{inl}\hsdot{\circ }{.}\Varid{f}\mathbin{\triangledown}\Varid{inr}\hsdot{\circ }{.}\Varid{f'})\mathbin{\vartriangle}(\Varid{inl}\hsdot{\circ }{.}\Varid{g}\mathbin{\triangledown}\Varid{inr}\hsdot{\circ }{.}\Varid{g'}){}\<[69]% \>[69]{}\mbox{\onelinecomment \citep[Equation (17)]{MacedoOliveira2013Typing}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}(\Varid{inl}\hsdot{\circ }{.}\Varid{f}\mathbin{\vartriangle}\Varid{inl}\hsdot{\circ }{.}\Varid{g})\mathbin{\triangledown}(\Varid{inr}\hsdot{\circ }{.}\Varid{f'}\mathbin{\vartriangle}\Varid{inr}\hsdot{\circ }{.}\Varid{g'}){}\<[69]% \>[69]{}\mbox{\onelinecomment exchange law \citep[Section 1.5.4]{Gibbons2002Calculating}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{transpose}\hsdot{\circ }{.}\Varid{inl}\hsdot{\circ }{.}(\Varid{f}\mathbin{\vartriangle}\Varid{g})\mathbin{\triangledown}\Varid{transpose}\hsdot{\circ }{.}\Varid{inr}\hsdot{\circ }{.}(\Varid{f'}\mathbin{\vartriangle}\Varid{g'}){}\<[69]% \>[69]{}\mbox{\onelinecomment \lemRef{inlFork} below}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{transpose}\hsdot{\circ }{.}(\Varid{inl}\hsdot{\circ }{.}(\Varid{f}\mathbin{\vartriangle}\Varid{g})\mathbin{\triangledown}\Varid{inr}\hsdot{\circ }{.}(\Varid{f'}\mathbin{\vartriangle}\Varid{g'})){}\<[69]% \>[69]{}\mbox{\onelinecomment \citep[Section 1.5.2]{Gibbons2002Calculating}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{transpose}\hsdot{\circ }{.}((\Varid{f}\mathbin{\vartriangle}\Varid{g})\times(\Varid{f'}\mathbin{\vartriangle}\Varid{g'})){}\<[69]% \>[69]{}\mbox{\onelinecomment \citep[Equation (17)]{MacedoOliveira2013Typing}}{}\<[E]% \ColumnHook \end{hscode}\resethooks For the third step, we need two more properties.
\begin{lemma}\lemLabel{inlFork} $$ \ensuremath{\Varid{inl}\hsdot{\circ }{.}\Varid{f}\mathbin{\vartriangle}\Varid{inl}\hsdot{\circ }{.}\Varid{g}\mathrel{=}\Varid{transpose}\hsdot{\circ }{.}\Varid{inl}\hsdot{\circ }{.}(\Varid{f}\mathbin{\vartriangle}\Varid{g})} $$ $$ \ensuremath{\Varid{inr}\hsdot{\circ }{.}\Varid{f}\mathbin{\vartriangle}\Varid{inr}\hsdot{\circ }{.}\Varid{g}\mathrel{=}\Varid{transpose}\hsdot{\circ }{.}\Varid{inr}\hsdot{\circ }{.}(\Varid{f}\mathbin{\vartriangle}\Varid{g})} $$ \end{lemma} Below is a proof in the \ensuremath{(\to )} category, which suffice for our purpose. (To do: does the property hold for general biproduct categories?) \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}c<{\hspost}@{}}% \column{BE}{@{}l@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{50}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[5]{}\Varid{inl}\hsdot{\circ }{.}\Varid{f}\mathbin{\vartriangle}\Varid{inl}\hsdot{\circ }{.}\Varid{g}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{a}\to (\Varid{inl}\hsdot{\circ }{.}\Varid{f}\mathbin{\vartriangle}\Varid{inl}\hsdot{\circ }{.}\Varid{g})\;\Varid{a}{}\<[50]% \>[50]{}\mbox{\onelinecomment $\eta$-expand}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{a}\to (\Varid{inl}\;(\Varid{f}\;\Varid{a}),\Varid{inl}\;(\Varid{g}\;\Varid{a})){}\<[50]% \>[50]{}\mbox{\onelinecomment definition of \ensuremath{(\mathbin{\vartriangle})} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{a}\to ((\Varid{f}\;\Varid{a},\mathrm{0}),(\Varid{g}\;\Varid{a},\mathrm{0})){}\<[50]% \>[50]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{a}\to \Varid{transpose}\;((\Varid{f}\;\Varid{a},\Varid{g}\;\Varid{a}),(\mathrm{0},\mathrm{0})){}\<[50]% \>[50]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{transpose}}}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{a}\to \Varid{transpose}\;((\Varid{f}\;\Varid{a},\Varid{g}\;\Varid{a}),\mathrm{0}){}\<[50]% \>[50]{}\mbox{\onelinecomment definition of \ensuremath{\mathrm{0}} for pairs}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\lambda \Varid{a}\to \Varid{transpose}\;(\Varid{inl}\;(\Varid{f}\;\Varid{a},\Varid{g}\;\Varid{a})){}\<[50]% \>[50]{}\mbox{\onelinecomment definition of \ensuremath{\Varid{inl}} for functions}{}\<[E]% \\ \>[B]{}\mathrel{=}{}\<[BE]% \>[5]{}\Varid{transpose}\hsdot{\circ }{.}\Varid{inl}\hsdot{\circ }{.}(\Varid{f}\mathbin{\vartriangle}\Varid{g}){}\<[50]% \>[50]{}\mbox{\onelinecomment definition of \ensuremath{(\hsdot{\circ }{.})} for functions}{}\<[E]% \ColumnHook \end{hscode}\resethooks Similarly for the second property (with \ensuremath{\Varid{inr}}), noting that \ensuremath{((\mathrm{0},\Varid{f}\;\Varid{a}),(\mathrm{0},\Varid{g}\;\Varid{a}))\mathrel{=}\Varid{transpose}\;((\mathrm{0},\mathrm{0}),(\Varid{f}\;\Varid{a},\Varid{g}\;\Varid{a}))}. The \ensuremath{\Conid{CartesianI}} and \ensuremath{\Conid{CocartesianI}} instances follow from linearity (\thmRef{linear}). 
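As a quick, informal sanity check of \lemRef{inlFork} (not part of the formal development), the first identity can be tested pointwise for ordinary functions on numbers, reading \ensuremath{\Varid{inl}} as padding its argument with a zero of the appropriate type. The following Python sketch is purely illustrative; the names and sample functions are arbitrary.
\begin{verbatim}
# Pointwise check of the first identity of Lemma inlFork for functions.
# fork(f, g) plays the role of (f /\ g); transpose is the shuffle above.

def fork(f, g):            # (f /\ g) a = (f a, g a)
    return lambda a: (f(a), g(a))

def transpose(p):          # ((a,b),(c,d)) -> ((a,c),(b,d))
    (a, b), (c, d) = p
    return ((a, c), (b, d))

def inl_scalar(x):         # inl at scalar type: pad with a scalar zero
    return (x, 0.0)

def inl_pair(p):           # inl at pair type: pad with a zero pair
    return (p, (0.0, 0.0))

f = lambda a: 2.0 * a
g = lambda a: a - 1.0

lhs = fork(lambda a: inl_scalar(f(a)), lambda a: inl_scalar(g(a)))
rhs = lambda a: transpose(inl_pair(fork(f, g)(a)))

for a in [-1.0, 0.0, 3.5]:
    assert lhs(a) == rhs(a)   # inl . f /\ inl . g == transpose . inl . (f /\ g)
\end{verbatim}
The \ensuremath{\Varid{inr}} identity can be checked in the same way, padding with the zero on the left instead.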
\section{Introduction} \label{sec:intro} Many applications in the fields of machine learning~\cite{boyd2011distributed} and signal processing~\cite{chierchiaapproche} require the solution of the programming problem \begin{equation} \label{eq:minF+G} \min_{x \in {\mathsf X}} F(x) + G(x) \end{equation} where ${\mathsf X}$ is a Euclidean space, $F$ and $G$ are elements of the set $\Gamma_0({\mathsf X})$ of convex, lower semi-continuous and proper functions. In these contexts, $F$ often represents a cost function and $G$ a regularization term.
The Douglas-Rachford algorithm is one of the most popular approaches towards solving Problem~\eqref{eq:minF+G}. Given $\gamma >0$, the algorithm is written \begin{align} \label{eq:dr-deter} y_{n+1} &= \prox_{\gamma F}(x_n) \nonumber\\ z_{n+1} & = \prox_{\gamma G}(2 y_{n+1} - x_n) \nonumber \\ x_{n+1} &= x_n + z_{n+1} - y_{n+1} \end{align} where $\prox_{\gamma F}$ denotes the proximity operator of $F$, defined for every $x \in {\mathsf X}$ by the equation \begin{equation*} \prox_{\gamma F}(x) = \arg\min_{y \in {\mathsf X}} \frac12 \|x - y\|^2 + \gamma F(y). \end{equation*} Assuming that a standard qualification condition holds and that the set of solutions $\arg\min F+G$ of~\eqref{eq:minF+G} is not empty, the sequence $(y_n)_n$ converges to an element in $\arg\min F+G$ as $n \to +\infty$ (\cite{lio-mer-79,eckstein1992douglas}).
In this paper, we study the case where $F$ and $G$ are integral functionals of the form $$F({x}) = \bE_\xi(f({x},\xi)), \qquad G({x}) = \bE_\xi(g({x},\xi))$$ where $\xi$ is a random variable (r.v) from some probability space $(\Omega, \mathcal F, \bP)$ into a measurable space $(\Xi,\mathcal G)$, with distribution $\mu$, and where $\{f(\cdot,s), s \in \Xi\}$ and $\{g(\cdot,s), s \in \Xi\}$ are subsets of $\Gamma_0({\mathsf X})$. In this context, the stochastic Douglas-Rachford algorithm aims to solve Problem~\eqref{eq:minF+G} by iterating \begin{align} \label{eq:sto-dr} {y}_{n+1} &=\prox_{\gamma f(\cdot,\xi_{n+1})}({x}_{n}) \nonumber\\ {z}_{n+1} &=\prox_{\gamma g(\cdot,\xi_{n+1})}(2{y}_{n+1}- {x}_{n})\nonumber\\ {x}_{n+1} &= {x}_{n} + {z}_{n+1}-{y}_{n+1}\,, \end{align} where $(\xi_{n})_n$ is a sequence of i.i.d copies of the random variable $\xi$ and $\gamma > 0$ is the constant step size.
Compared to the ``deterministic'' Douglas-Rachford algorithm~\eqref{eq:dr-deter}, the stochastic Douglas-Rachford algorithm~\eqref{eq:sto-dr} is an online method. The constant step size used makes it implementable in adaptive signal processing or online machine learning contexts. In this algorithm, the function $F$ (resp. $G$) is replaced at each iteration $n$ by a random realization $f(\cdot,\xi_{n})$ (resp. $g(\cdot,\xi_{n})$). It can be implemented in the case where $F$ (resp. $G$) cannot be computed in its closed form~\cite{bia-hac-jota16,bia-hac-sal-(sub)jca17} or in the case where the computation of its proximity operator is demanding~\cite{sal-bia-hac-(sub)tac17}. Compared to other online optimization algorithms like the stochastic subgradient algorithm, the algorithm~\eqref{eq:sto-dr} benefits from the numerical stability of stochastic proximal methods.
Stochastic versions of the Douglas-Rachford algorithm have been considered in~\cite{chierchiaapproche,shi2013online}. These papers consider the case where $G$ is deterministic, \textit{i.e.}, not written as an expectation, and $F$ is written as an expectation that reduces to a sum. The latter case is also contained as a particular case of the algorithm in~\cite{chambolle2017stochastic}.
The algorithms~\cite{rosasco2015stochastic,combettes2016stochastic} are generalizations of a partially stochastic Douglas-Rachford algorithm where $G$ is deterministic. The convergence of these algorithms is obtained under a summability assumption on the noise over the iterations.
The stochastic Douglas-Rachford algorithm studied in this paper was implemented in an adaptive signal processing context~\cite{mourya2017adaptive} to solve a target tracking problem. Whereas the paper~\cite{mourya2017adaptive} is mainly focused on the application to target tracking, in this work we provide a theoretical basis for the algorithm~\eqref{eq:sto-dr} and convergence results.
The next section introduces some notations. Section~\ref{sec-th} is devoted to the statement of the main convergence result. In Section~\ref{sec-proof}, an outline of the proof of the result in Section~\ref{sec-th} is provided. Finally, the algorithm~\eqref{eq:sto-dr} is implemented to solve a regularized problem in Section~\ref{sec-simu}.
\section{Notations} For every function $g \in \Gamma_0({\mathsf X})$, $\partial g(x)$ denotes the subdifferential of $g$ at the point $x \in {\mathsf X}$ and $\partial g_0(x)$ the least-norm element in $\partial g(x)$. The domain of $g$ is denoted as $\dom(g)$. It is a known fact that the closure of $\dom(g)$, denoted as $\cl(\dom(g))$, is convex. For every closed convex set $\mC$, we denote by $\Pi_{\mC}$ the projection operator onto $\mC$. The indicator function of the set $\mC$ is defined by $\iota_{\mC}(x) = 0$ if $x \in \mC$, and $\iota_{\mC}(x) = +\infty$ elsewhere. It is easy to see that $\iota_{\mC} \in \Gamma_0({\mathsf X})$ and that $\prox_{\iota_{\mC}} = \Pi_\mC$. The Moreau envelope of $g \in \Gamma_0({\mathsf X})$ is equal to $$ g_\gamma({x}) = \min_{{y}\in {\mathsf X}} g({y})+\frac{\|{y}-{x}\|^2}{2\gamma} $$ for every ${x}\in {\mathsf X}$. Recall that $g_\gamma$ is differentiable and $\nabla g_\gamma (x) = \frac{1}{\gamma}(x - \prox_{\gamma g}({x})).$ If $f \in \Gamma_0({\mathsf X})$ is differentiable, then $\partial f(x) = \{\nabla f(x)\}$ and $\nabla f(\prox_{\gamma f}(x)) = \nabla f_\gamma(x)$, for every $x \in {\mathsf X}$. When $S \subset {\mathsf X}$, $d(x,S)$ denotes the distance from the point $x \in {\mathsf X}$ to the set $S$. In the context of algorithm~\eqref{eq:sto-dr} we shall denote $D(s) = \dom(g(\cdot,s))$ and $\cD = \dom(G)$. Denote by $\mcB({\mathsf X})$ the Borel sigma-field over ${\mathsf X}$. For every $p \geq 1$, $L^p(\Xi,{\mathsf X})$ is the set of all r.v $\varphi$ from the probability space $(\Xi,\cG,\mu)$ into the measurable space $({\mathsf X},\mcB({\mathsf X}))$, such that $\|\varphi\|^p$ is integrable.
From now on, we shall state explicitly the dependence of the iterates of the algorithm on the step size and the starting point. Namely, we shall denote by $(x_n^{\gamma,\nu})_n$ the sequence $(x_n)_n$ generated by the stochastic Douglas-Rachford algorithm~\eqref{eq:sto-dr} with step $\gamma$, such that the distribution of $x_0^{\gamma,\nu}$ over ${\mathsf X}$ is $\nu$. If $\nu = \delta_a$, where $\delta_a$ is the Dirac measure at the point $a \in {\mathsf X}$, we shall prefer the notation $x_n^{\gamma,a}$.
\section{Main convergence theorem} \label{sec-th} Consider the following assumptions.
\begin{assumption} \label{h0bnd} For every compact set $\cK\subset {\mathsf X}$, there exists $\varepsilon>0$ such that \[ \sup_{x\in\cK \cap \mD} \int \| \partial g_0(x,s) \|^{1+\varepsilon} \, \mu(ds) < \infty . \] \end{assumption} \begin{assumption} \label{nablaf-bnd} For $\mu$-a.e $s \in \Xi$, $f(\cdot,s)$ is differentiable and there exists a closed ball in ${\mathsf X}$ such that $\|\nabla f(x,s)\| \leq M(s)$ for all $x$ in this ball, where $M(s)$ is $\mu$-integrable. Moreover, for every compact set $\cK\subset {\mathsf X}$, there exists $\varepsilon > 0$ such that \[ \sup_{x\in\cK} \int \| \nabla f (x,s) \|^{1+\varepsilon} \, \mu(ds) < \infty\, . \] \end{assumption} \begin{assumption} \label{linreg} $\displaystyle{ \forall x\in {\mathsf X}, \ \int d(x,D(s))^2 \, \mu(ds) \geq C \boldsymbol d(x)^2}$. \end{assumption} \begin{assumption} \label{JBnd-dif} For every compact set $\cK\subset {\mathsf X}$, there exists $\varepsilon, C, \gamma_0 > 0$ such that for all $\gamma \in (0,\gamma_0]$ and all $x \in \cK$, \[\frac 1{\gamma^{1+\varepsilon}} \int \| \prox_{\gamma g(\cdot,s)}(x) - \Pi_{\cl(D(s))}(x) \|^{1+\varepsilon} \, \mu(ds) < C \, . \] \end{assumption} \begin{assumption} \label{baillon} There exists $L > 0$ such that $\nabla f(\cdot,s)$ is $\mu$-a.e, a $L$-Lipschitz continuous function. \end{assumption} \begin{assumption} \label{L2} There exists $x_\star \in \arg\min F+G$ and $\varphi \in L^2(\Xi,{\mathsf X})$ such that $\varphi(s) \in \partial g(x_\star,s)$ $\mu$-a.s, $\nabla f(x_\star,\cdot) \in L^2(\Xi,{\mathsf X})$ and $\int \nabla f(x_\star,s) \, \mu(ds) + \int \varphi(s) \, \mu(ds) = 0$. \end{assumption} \begin{assumption} \label{F+G-coerc} The function $F+G$ satisfies one of the following properties: \begin{enumerate}[label=(\alph*)] \item\label{Zer-cpct} $F+G$ is coercive \textit{i.e} $F(x)+G(x) \longrightarrow_{\|x\| \to +\infty} +\infty$ \item\label{F+G-super} $F+G$ is supercoercive \textit{i.e} $\frac{F(x)+G(x)}{\|x\|} \longrightarrow_{\|x\| \to +\infty} +\infty$. \end{enumerate} \end{assumption} \begin{assumption} \label{JBgrow-fct} There exists $\gamma_0 > 0$, such that for all $\gamma\in (0,\gamma_0]$ and all $x\in {\mathsf X}$, \begin{align*} &\int \| \nabla f_\gamma(x,s) \| + \frac{1}{\gamma}\| \prox_{\gamma g(\cdot,s)}(x) - \Pi_{\cl(D(s))}(x) \| \mu(ds) \\ &\leq C ( 1 + | F^\gamma(x) + G^\gamma(x)| ) \, . \end{align*} \end{assumption} \begin{theorem} \label{th-cv} Let Assumptions~\ref{h0bnd}--~\ref{JBgrow-fct} hold true. Then, for each probability measure $\nu$ over ${\mathsf X}$ having a finite second moment, for any $\varepsilon > 0$, \[ \limsup_{n\to\infty} \frac 1{n+1}\sum_{k=0}^n \bP\left( d(x_k^{\gamma,\nu}, \arg\min(F+G))>\varepsilon\right) \xrightarrow[\gamma\to 0]{}0\,. \] Moreover, if Assumption~\ref{F+G-coerc}--\ref{F+G-super} holds true, then \begin{gather*} \limsup_{n\to\infty}\ \bP\left(d\left(\bar x_n^{\gamma,\nu} , \arg\min(F+G) \right)\geq \varepsilon\right)\xrightarrow[\gamma\to 0]{}0, \ \text{and} \\ \limsup_{n\to\infty}\ d\left(\bE(\bar x_n^{\gamma,\nu}), \arg\min(F+G) \right) \xrightarrow[\gamma\to 0]{}0\,. \end{gather*} where $\bar x_n^{\gamma,\nu} = \frac{1}{n}\sum_{k=1}^{n} x_k^{\gamma,\nu}$. \end{theorem} Loosely speaking, the theorem states that, with high probability, the iterates $(x_n^{\gamma,\nu})_n$ stay close to the set of solutions $\arg\min F+G$ as $n \to \infty$ and $\gamma \to 0$. Some Assumptions deserve comments. 
Following~\cite{bauschke1996projection}, we say that a finite collection of subsets $\mC_1,\dots,\mC_m$ of ${\mathsf X}$ is \textit{linearly regular} if \begin{equation*} \exists \kappa > 0, \forall x \in {\mathsf X}, \max_{s \in \{1,\dots,m\}} d(x,\mC_s) \geq \kappa d(x,\cap_{s = 1}^{m} \mC_s) \end{equation*} In the case where there exists a $\mu$-probability one set $\tilde{\Xi}$ such that the set $\{D(s), s \in \tilde{\Xi}\} = \{\mC_1,\dots,\mC_m\}$ is finite, it is routine to check that Assumption~\ref{linreg} holds if and only if the domains $\mC_1,\dots,\mC_m$ are linearly regular. See~\cite{mourya2017adaptive} for an applicative context of the algorithm~\eqref{eq:sto-dr} in the latter case. It is a known fact that $$\prox_{\gamma g(\cdot,s)}(x) \longrightarrow_{\gamma \to 0} \Pi_{\cl({\dom(g(\cdot,s))})}(x),$$ for each $(x,s)$. Assumptions~\ref{JBnd-dif} and~\ref{JBgrow-fct} add controls on the convergence rate. Since $f(\cdot,s), g(\cdot,s) \in \Gamma_0({\mathsf X})$, and $f(\cdot,s)$ is differentiable, $\partial (F+G)(x) = \nabla F(x) + \partial G(x) = \bE (\nabla f(x,\xi)) + \bE (\partial g(x,\xi))$ \cite{roc-wets-livre98}, where the set $\bE (\partial g(x,\xi))$ is defined by its Aumann integral $$\left\{\int \varphi(s) \, \mu(ds), \varphi \in L^1(\Xi,{\mathsf X}), \text{ s.t. } \varphi(s) \in \partial g(x,s), \mu\text{-a.s.} \right\}$$ Therefore, using Fermat's rule, if $x \in \arg\min F+G$, then there exists $\varphi \in L^1(\Xi,{\mathsf X})$, such that $\varphi(s) \in \partial g(x,s)$ $\mu$-a.s, and $\int \nabla f(x,s) \, \mu(ds) + \int \varphi(s) \, \mu(ds) = 0$. We refer to $(\nabla f(x,\cdot), \varphi)$ as a \textit{representation} of the solution $x$. Assumption~\ref{L2} ensures the existence of $x_\star \in \arg\min F+G$ with a representation $\nabla f(x,\cdot), \varphi \in L^2(\Xi,{\mathsf X})$. \section{Outline of the convergence proof} \label{sec-proof} This section is devoted to sketching the proof of the convergence of the stochastic Douglas Rachford algorithm. The approach follows the same steps as~\cite{bia-hac-sal-(sub)jca17} and is detailed in~\cite{salimSDR2017}. The first step of the proof is to study the dynamical behavior of the iterates $(x_n^{\gamma,a})_n$ where $a \in \cD$. The Ordinary Differential Equation (ODE) method, well known in the literature of stochastic approximation (\cite{kus-yin-(livre)03}), is applied. Consider the continuous time stochastic process ${\mathsf x}_{\gamma,a}$ obtained by linearly interpolating with time interval $\gamma$ the iterates $(x_n^{\gamma,a})$: \begin{equation} \label{eq:interpol} {\mathsf x}_{\gamma,a}(t) = x_n^{\gamma,a} + (t - n\gamma)\frac{x_{n+1}^{\gamma,a} - x_n^{\gamma,a}}{\gamma}, \end{equation} for all $t \geq 0$ such that $n\gamma \leq t < (n+1)\gamma$, for all $n \in \bN$. Let Assumptions~\ref{h0bnd}--\ref{JBnd-dif} \footnote{In the case where the domains are common, \textit{i.e} $s \mapsto D(s)$ is $\mu$-a.s constant, the moment Assumptions~\ref{h0bnd} and~\ref{nablaf-bnd} are sufficient to state the dynamical behavior result. See~\cite{mourya2017adaptive} for an applicative context where the domains $D(s)$ are distinct.} hold true. Consider the set $C(\bR_+,{\mathsf X})$ of continuous functions from $\bR_+$ to ${\mathsf X}$ equipped with the topology of uniform convergence on the compact intervals. It is shown that the continuous time stochastic process ${\mathsf x}_{\gamma,a}$ converges weakly over $\bR_+$ (\textit{i.e} in distribution in $C(\bR_+,{\mathsf X})$) as $\gamma \to 0$. 
Moreover, the limit is proven to be the unique absolutely continuous function ${\mathsf x}$ over $\bR_+$ satisfying ${\mathsf x}(0) = a$ and for almost every $t \geq 0$, the Differential Inclusion (DI), \begin{equation} \label{eq:di} \dot {\mathsf x}(t) \in - (\nabla F + \partial G)({\mathsf x}(t)), \end{equation} (see \cite{bre-livre73}). Differential inclusions like~\eqref{eq:di} generalize ODE to set-valued mappings. The DI~\eqref{eq:di} induces a map $\Phi : \mD \times \bR_+ \to \mD, (x_0, t) \mapsto {\mathsf x}(t)$ that can be extended to a semi-flow over $\cl(\mD)$, still denoted by $\Phi$. The weak convergence of $({\mathsf x}_{\gamma,a})$ to ${\mathsf x}$ is not enough to study the long term behavior of the iterates $(x_n^{\gamma,a})_n$. The second step of the proof is to prove a stability result for the Feller Markov chain $(x_n^{\gamma,a})_n$. Denote by $P_\gamma$ its transition kernel. The deterministic counterpart of this step of the proof is the so-called \textit{Fejér monotonicity} of the sequence $(x_n)$ of the algorithm~\eqref{eq:dr-deter}. Even if some work has been done~\cite{bia-hac-jota16,combettes2015stochastic}, there is no immediate way to adapt the Fejér monotonicity to our random setting, mainly because of the constant step $\gamma$. As an alternative, we assume Hypotheses~\ref{baillon}-\ref{L2}, and prove the existence of positive numbers $\alpha, C$ and $\gamma_0$, such that for every $\gamma \in (0,\gamma_0]$, \begin{align} \label{eq:F+G} \bE_{n} \|x_{n+1}^{\gamma,a}-x_\star\|^2 \leq &\| x_n^{\gamma,a} - x_\star\|^2\\ &- \alpha \gamma (F^\gamma + G^\gamma)(x_n^{\gamma,a}) + \gamma C. \nonumber \end{align} In this inequality, $\bE_{n}$ denotes the conditional expectation with respect to the sigma-algebra $\sigma(x_0^\gamma,x_1^\gamma,\dots,x_n^\gamma)$ and $$F^\gamma(x) = \int f_\gamma(x,s) \, \mu(ds), \quad G^\gamma(x) = \int g_\gamma(x,s) \, \mu(ds).$$ Since $\gamma \mapsto F^\gamma(x) + G^\gamma(x)$ is decreasing~\cite{bia-hac-sal-(sub)jca17,salimSDR2017}, the function $F^\gamma+G^\gamma$ can be replaced by $F^{\gamma_0} + G^{\gamma_0}$. Besides, the coercivity of $F+G$ (Assumption~\ref{F+G-coerc}) implies the coercivity of $F^{\gamma_0} + G^{\gamma_0}$ (~\cite{bia-hac-sal-(sub)jca17,salimSDR2017}). Therefore, assuming~\ref{baillon}--\ref{F+G-coerc} and setting $\Psi = F^{\gamma_0} + G^{\gamma_0}$, there exist positive numbers $\alpha, C$ and $\gamma_0$, such that for every $\gamma \in (0,\gamma_0]$, \begin{equation} \label{eq:PH} \bE_{n} \|x_{n+1}^{\gamma,a}-x_\star\|^2 \leq \| x_n^{\gamma,a} - x_\star\|^2 - \alpha \gamma \Psi (x_n^{\gamma,a}) + \gamma C. \end{equation} Equation~\eqref{eq:PH} can alternatively be seen as a tightness result. It implies that the set $I_\gamma$ of invariant measures of the Markov kernel $P_\gamma$ is not empty for every $\gamma \in (0,\gamma_0]$, and that the set \begin{equation}\label{eq:Inv} \text{Inv} = \cup_{\gamma \in (0,\gamma_0]} I_\gamma \end{equation} is \textit{tight}(~\cite{for-pag-99,bia-hac-sal-(arxiv)16}). It remains to characterize the cluster points of Inv as $\gamma \to 0$. To that end, the dynamical behavior result and the stability result are combined. Let Assumptions~\ref{h0bnd}--~\ref{JBgrow-fct} hold true. \footnote{Assumptions~\ref{linreg},~\ref{JBnd-dif} and~\ref{JBgrow-fct} are not needed if the domains $D(s)$ are common.} Then, the set Inv is tight, and, as $\gamma \to 0$, every cluster point of Inv is an invariant measure for the semi-flow $\Phi$. The Theorem~\ref{th-cv} is a consequence of this fact. 
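Before turning to the application, here is an illustrative Python sketch of the iteration~\eqref{eq:sto-dr} together with the averaged iterate $\bar x_n^{\gamma,\nu}$ appearing in Theorem~\ref{th-cv}. It is not the implementation used in the experiments below: the interface (user-supplied proximity operators and a sampler for $\xi$) is assumed only for the sake of illustration.
\begin{verbatim}
import numpy as np

def stochastic_dr(x0, gamma, prox_f, prox_g, sample_xi, n_iter):
    """Stochastic Douglas-Rachford iteration with constant step gamma.

    prox_f(gamma, x, s) and prox_g(gamma, x, s) are assumed to return
    prox_{gamma f(., s)}(x) and prox_{gamma g(., s)}(x);
    sample_xi() draws an i.i.d. copy of xi.
    Returns the last iterate x_n and the averaged iterate bar{x}_n.
    """
    x = np.array(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for n in range(n_iter):
        s = sample_xi()
        y = prox_f(gamma, x, s)                # y_{n+1}
        z = prox_g(gamma, 2.0 * y - x, s)      # z_{n+1}
        x = x + z - y                          # x_{n+1}
        x_bar += (x - x_bar) / (n + 1)         # running Cesaro average
    return x, x_bar
\end{verbatim}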
\section{Application to structured regularization} \label{sec-simu} This section provides an application of the stochastic Douglas-Rachford algorithm~\eqref{eq:sto-dr} to the solution of a regularized optimization problem. Consider problem~\eqref{eq:minF+G}, where $F$ is a cost function that is written as an expectation, and $G$ is a regularization term. Towards solving~\eqref{eq:minF+G}, many approaches involve the computation of the proximity operator of the regularization term $G$. In the case where $G$ is a structured regularization term, its proximity operator is often difficult to compute. When $G$ is a graph-based regularization, it is possible to apply a stochastic proximal method to address the regularization~\cite{sal-bia-hac-(sub)tac17}. We shall concentrate on the case where $G$ is an overlapping group regularization. In this case, the computation of the proximity operator of $G$ is known to be a bottleneck~\cite{yuan2011efficient}. We shall apply the algorithm~\eqref{eq:sto-dr} to overcome this difficulty.
Consider ${\mathsf X} = \bR^N$, $N \in \bN^\star$, and $g \in \bN^\star$. Consider $g$ subsets of $\{1,\dots,N\}$, $S_1,\dots,S_g$, possibly overlapping. Set $G(x) = \sum_{j = 1}^{g} \|x_{S_j}\|$, where $x_{S_j}$ denotes the restriction of $x$ to the index set $S_j$ and $\|\cdot\|$ denotes the Euclidean norm. Set $F(x) = \bE_{(\xi,\eta)}(h(\eta \ps{x,\xi}))$ where $h$ denotes the hinge loss $h(z) = \max(0,1-z)$ and $(\xi,\eta)$ is a r.v defined on some probability space with values in ${\mathsf X} \times \{-1,+1\}$. In this case, the problem~\eqref{eq:minF+G} is also called the SVM classification problem, regularized by the overlapping group lasso. It is assumed that the user is provided with i.i.d copies $((\xi_n,\eta_n))_n$ of the r.v $(\xi,\eta)$ online.
To solve this problem, we implement a stochastic Douglas-Rachford strategy. To that end, the regularization $G$ is rewritten $G(x) = \bE_J(g \|x_{S_J}\|)$ where $J$ is a uniform r.v over $\{1,\dots,g\}$. At each iteration $n$ of the stochastic Douglas-Rachford algorithm, the user is provided with the realization $(\xi_n,\eta_n)$ and samples a group $J_n$ uniformly in $\{1,\dots,g\}$. Then, a Douglas-Rachford step is performed, involving the computation of the proximity operators of the functions $g_n : x \mapsto \|x_{S_{J_n}}\|$ and $f_n : x \mapsto h(\eta_n \ps{x,\xi_n})$.
This strategy is compared with a partially stochastic Douglas-Rachford algorithm, deterministic in the regularization $G$, where the fast subroutine FoG-Lasso~\cite{yuan2011efficient} is used to compute the proximity operator of the regularization $G$. At each iteration $n$, the user is provided with $(\xi_n,\eta_n)$. Then, a Douglas-Rachford step is performed, involving the computation of the proximity operators of the functions $G$ and $f_n : x \mapsto h(\eta_n \ps{x,\xi_n})$. Figure~\ref{fig:simu} demonstrates the advantage of treating the regularization term in a stochastic way.
\begin{figure}[ht!] \includegraphics[width=\linewidth]{stodr-random-samples.png} \caption{The objective function $F+G$ as a function of time in seconds for each algorithm} \label{fig:simu} \end{figure} \begin{figure}[ht!] \includegraphics[width=\linewidth]{HistogrammeIcassp.png} \caption{Histogram of the initialization and the last iterates of the Stochastic D-R (S. D-R) and the partially stochastic D-R (Part. S.
D-R)} \label{fig:hist} \end{figure} In Figure~\ref{fig:simu}, ``Stochastic D-R'' denotes the stochastic Douglas-Rachford algorithm and ``Partially stochastic D-R'' denotes the partially stochastic Douglas-Rachford algorithm where the subroutine FoG-Lasso~\cite{yuan2011efficient} is used at each iteration to compute the true proximity operator of the regularization $G$. Figure~\ref{fig:hist} shows the appearance of the first and the last iterates.
Even if a best-performing procedure~\cite{yuan2011efficient} is used to compute $\prox_{\gamma G}$, we observe in Figure~\ref{fig:simu} that Stochastic D-R takes advantage of being a stochastic method. This advantage is known to be twofold (\cite{bottou2016optimization}). First, the iteration complexity of Stochastic D-R is moderate because $\prox_{\gamma G}$ is never computed. Second, Stochastic D-R is faster than its partially deterministic counterpart which uses FoG-Lasso~\cite{yuan2011efficient} as a subroutine, especially in the first iterations of the algorithms. Moreover, Stochastic D-R seems to perform globally better. This is because every proximity operator in Stochastic D-R can be efficiently computed (\cite{bau-com-livre11}). Contrary to the proximity operator of $G$~\cite{yuan2011efficient}, the proximity operator of $g_n$ is easily computable. The proximity operator of $f_n$ is easily computable as well.\footnote{Even if $h(x) = \log(1+\exp(-x))$ (logistic regression), the proximity operator of $f_n$ is easily computable, see~\cite{chierchiaapproche}.} \bibliographystyle{IEEEbib}
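For completeness, here is an illustrative sketch (not the code used for the figures) of the two proximity operators involved in the stochastic steps above, based on standard closed forms: block soft-thresholding for $x\mapsto\lambda\|x_{S}\|$, and the usual closed form for the proximity operator of $x\mapsto\gamma\max(0,1-\eta\ps{x,\xi})$, valid whenever $\xi\neq 0$. Depending on whether the factor $g$ from the rewriting of $G$ is kept inside the sampled group term, $\lambda$ equals $\gamma$ or $\gamma g$.
\begin{verbatim}
import numpy as np

def prox_group_norm(x, S, lam):
    """prox of lam * ||x_S||: block soft-thresholding on the block S."""
    p = x.copy()
    block = x[S]
    norm = np.linalg.norm(block)
    if norm > 0.0:
        p[S] = max(0.0, 1.0 - lam / norm) * block
    return p

def prox_hinge_linear(x, xi, eta, gamma):
    """prox of gamma * max(0, 1 - eta*<x, xi>), with eta in {-1,+1}, xi != 0."""
    margin = 1.0 - eta * np.dot(xi, x)
    lam = np.clip(margin / (gamma * np.dot(xi, xi)), 0.0, 1.0)
    return x + gamma * lam * eta * xi
\end{verbatim}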
\section*{Discussion of the examples} In the course of our discussion we feel free to use standard facts of hyperbolic geometry as one might find in classical texts such as \cite{Maskit,Ratcliffe}. It will also be convenient to see our groups as fundamental groups of infinite, locally finite, graphs of groups. We refer to standard texts like \cite{Serre} for basic facts about graphs of groups.
\subsection*{Example 1.} We give an algebraic description of a group $G$, then we prove that it has no finite quotients, and we finally show that it is isomorphic to a discrete subgroup of $\PSL_2\mathbb C$. Let $T$ be the maximal rooted binary tree. Denote by $\CV$ and $\mathcal E$ the sets of vertices and edges respectively, let $*$ be the root of $T$ and, for $v\in\CV$, let $\vert v\vert\in\mathbb N$ be the distance from $v$ to $*$. We orient the edges of $T$ so that they point to the root and for $e\in\mathcal E$ we let $e^+$ be its terminal vertex. Given a vertex $v\in\CV$ with $\vert v\vert\ge 1$ let $e_0(v)$ be the edge leaving $v$ and pointing out of $v$ and label the two edges pointing into $v$ by $e_1(v)$ and $e_2(v)$. Consider from now on the group $$G=\left\langle\{g_e\vert e\in\mathcal E\}\middle\vert\left\{g_{e_0(v)}^{3+\vert v\vert},\ g_{e_0(v)}g_{e_1(v)}^{-1}g_{e_2(v)}^{-1}\middle\vert v\in\CV\text{ with }\vert v\vert\ge 1\right\}\right\rangle.$$ The group $G$ also admits a description as the fundamental group $$G=\pi_1(\CT)$$ of a graph of groups $\CT$ with underlying graph $T$ with vertex groups $$G_v=\left\langle g_{e_0(v)},g_{e_1(v)},g_{e_2(v)}\middle\vert g_{e_0(v)}^{3+\vert v\vert},g_{e_1(v)}^{4+\vert v\vert},g_{e_2(v)}^{4+\vert v\vert},g_{e_0(v)}g_{e_1(v)}^{-1}g_{e_2(v)}^{-1}\right\rangle$$ if $v\neq *$, with $$G_*=\left\langle g_{e_1(*)},g_{e_2(*)}\middle\vert g_{e_1(*)}^{4},g_{e_2(*)}^{4}\right\rangle,$$ and with edge groups $$G_e=\left\langle g_e\middle\vert g_e^{4+\vert e^+\vert}\right\rangle.$$
We are going to think of the group $G$ as the nested union of a sequence of subgroups. The easiest way to describe these subgroups is as the fundamental groups $$G^n=\pi_1(\CT^n)$$ of the subgraph of groups $\CT^n\subset \CT$ corresponding to the ball of radius $n-1$ around the root $*$. Alternatively, $G^n$ is the subgroup of $G$ generated by all those elements $g_{e_0(v)}$ with $\vert v\vert\le n$. We have $$G^1\subset G^2\subset G^3\subset\ldots,\ \ G=\bigcup_{n\in\mathbb N}G^n.$$
\begin{claim} The group $G^n$ is generated by the set $S^n=\{g_{e_0(v)}\text{ with }\vert v\vert=n\}$. \end{claim}
\begin{proof} Since $G^n$ is generated by $S^1\cup S^2\cup\dots\cup S^n$ and hence by $G^{n-1}\cup S^n$, we can argue by induction on $n$. Therefore, it suffices to prove that $S^{n-1}$ is contained in the group generated by $S^n$. Well, given $v\in\CV$ with $\vert v\vert =n-1$ let $v_1$ and $v_2\in\CV$ be the initial vertices of the edges $e_1(v)$ and $e_2(v)$. Given that $e_1(v)=e_0(v_1)$ and $e_2(v)=e_0(v_2)$, the presentation of the group gives us: $$g_{e_0(v)}=g_{e_0(v_2)}g_{e_0(v_1)}.$$ The claim then immediately follows. \end{proof}
We are now ready to prove that $G$ has no finite quotients: \begin{claim} $G$ has no finite non-trivial quotients. \end{claim} \begin{proof} Let $H$ be a finite group and $\pi:G\to H$ be a homomorphism. We need to prove that $\pi$ is trivial.
Now, denote by $\vert H\vert$ the order of $H$, let $k\in\mathbb N$ be arbitrary, and let $n>k$ with $$3+n\equiv 1\pmod{\vert H\vert}.$$ From the presentation of the group $G$ we get that the elements in the set $S^{n}=\{g_{e_0(v)}\text{ with }\vert v\vert=n\}$ have order $3+n$ in $G$. Since $\vert H\vert$ and $3+n$ are relatively prime, we get that the elements in $S^n$ are killed by $\pi$: $$S^{n}\subset\ker(\pi).$$ Now, from the previous claim we also get that the subgroup $G^{n}$ is contained in the kernel of $\pi$, meaning that also $G^k\subset\ker(\pi)$. Since $G$ is the union of the subgroups $G^k$ we get that $G\subset\ker(\pi)$. We have proved the claim. \end{proof}
All that is left now is to show that $G$ admits discrete and faithful representations into $\PSL_2\mathbb C$. We are going to construct the desired representation as a limit of representations of the groups $G^n$.
\begin{claim} For each $n\ge 0$ there is a discrete and faithful representation $$\rho_n:G^n\to\PSL_2\mathbb C$$ such that the restriction of $\rho_n$ to $G^{n-1}$ agrees with $\rho_{n-1}$ for every $n\ge 1$. Moreover, the limit set of the group $\rho_n(G_v)$ bounds a round disk in the discontinuity domain of $\rho_n(G^n)$ for every vertex $v$ with $\vert v\vert=n$. \end{claim}
The final assertion of the claim is there to allow us to argue by induction -- the real point is the first assertion because it allows us to define $$\rho:G\to\PSL_2\mathbb C$$ satisfying $\rho(g)=\rho_n(g)$ if $g\in G^n$. This representation is faithful because each one of the representations $\rho_n$ is. The same argument holds true for discreteness because already the first group $\rho(G_1)$ is non-elementary \cite{Benedetti-Petronio}. All that is left is to prove the claim.
\begin{proof}[Proof of the claim] We will argue by induction. First note that $$G^0=G_*=\left\langle g_{e_1(*)},g_{e_2(*)}\middle\vert g_{e_1(*)}^{4},g_{e_2(*)}^{4}\right\rangle$$ is isomorphic to the $(4,4,\infty)$-triangle group: $$H_0=\langle a,b\vert a^{4}, b^{4}\rangle.$$ We can thus take $\rho_0:G^0\to\PSL_2\mathbb C$ to be the standard Fuchsian representation and take the said disk to be any one of the two connected components of the discontinuity domain of $\rho_0(G^0)$.
Suppose that the claim holds true for $n-1$. For each vertex $v\in\CV$ with $\vert v\vert=n-1$ let $\Delta_v\subset\mathbb S^2=\partial_\infty\BH^3$ be the disk in the discontinuity domain bounded by the limit set of $\rho_{n-1}(G_v)$. Note that $\Delta_v$ is $\rho_{n-1}(G_v)$-invariant and that no translate of $\Delta_v$ under $\rho_{n-1}(G^{n-1})$ meets $\Delta_w$ for another vertex $w\neq v$ with $\vert w\vert=n-1$. The disk $\Delta_v$ is the boundary at infinity of a hyperbolic half-space $H_v$; let $\Delta'_v$ be the hyperbolic plane bounding $H_v$. Suppose now that we have a vertex $z\in\CV$ with $\vert z\vert=n$ and denote by $z^+$ the terminal vertex of the edge $e=e_0(z)$. We identify the edge group $G_{e_0(z)}$ with the corresponding subgroup of the vertex group $G_{z^+}$. The group $\rho_{n-1}(G_{e_0(z)})$ is cyclic and has unique fixed points $\alpha\in\Delta_{z^+}$ and $\alpha'\in\Delta'_{z^+}$. For $L>0$ let $x\in[\alpha',\alpha)$ be the point at distance $L$ from $\alpha'$ and let $D$ be the hyperbolic plane containing $x$ and perpendicular to the ray $[x,\alpha)$.
The plane $D$ is $\rho_{n-1}(G_{e_0(z)})$-invariant. Consider now the edge group $G_{e_0(z)}$ as a subgroup of $G_z$. The action of $G_{e_0(z)}$ on $D$ via $\rho_{n-1}$ extends to a discrete action of the triangle group $G_z$ and thus to an action of $G^{n-1}*_{G_{e_0(z)}}G_z$. Proceeding in this way with all vertices $z\in\CV$ with $\vert z\vert=n$ we then get a representation $$\rho_L:G^n\to\PSL_2\mathbb C$$ depending on the parameter $L$. By construction, all of these representations extend the representation $\rho_{n-1}$, and it follows from the Klein-Maskit combination theorem that for $L$ large enough the representation $\rho_L$ is discrete and satisfies the remaining assertion of the claim. \end{proof}
This concludes the discussion of Example \ref{ex3}.
\subsection*{Example 2.} Let $T$ be a once-holed torus and let $\alpha$ and $\beta$ be simple curves in $T$ which intersect (transversely) in a single point. Let also $U$ be a regular neighborhood of $\beta\times\{0\}$ in the 3-manifold $T\times[-1,1]$, $\mu$ the meridian of $U$, and $\beta'\subset\partial U$ the longitude of $U$ isotopic to $\beta\times\{1\}$ in $T\times[-1,1]\setminus U$. Finally, for $n\in\mathbb N$ let $M_n$ be the manifold obtained from $T\times[-1,1]\setminus U$ by Dehn-filling the curve $n \mu+ \beta'$. The curve $\beta'$ intersects the new meridian $n$ times, which means that it represents the $n$-th power of the soul of the new solid torus. We thus get the presentation $$\pi_1(M_n)=\pi_1(T)*_{\langle\beta\rangle}\pi_1(U)\simeq\langle a,b,c\vert b=c^n\rangle$$ with respect to which the curve $\partial T\times\{0\}$ corresponds to the conjugacy class of $[a,b]=[a,c^n]$.
Now, the pair $(M_n,\partial T\times[-1,1])$ is a pared manifold, which means that the group $\pi_1(M_n)$ admits a geometrically finite representation $$\rho_n:\pi_1(M_n)\to \PSL_2\mathbb C=\Isom_+(\BH^3)$$ with $\rho_n([a,b])=g\in\PSL_2\mathbb C$ where $g(z)=z+1$. For $t\in\mathbb R$ let $h_t\in\PSL_2\mathbb C$ be the parabolic element $h_t(z)=z+ti$. Now, a standard combination argument implies that for a sufficiently fast-growing sequence $t_n\to\infty$, the group $$G=\left\langle h_{t_n}\rho_n(\pi_1(M_n))h_{t_n}^{-1}\middle\vert n\in\mathbb N\right\rangle$$ generated by the union of the groups $h_{t_n}\rho_n(\pi_1(M_n))h_{t_n}^{-1}$ is discrete.
Notice now that for each $n$ the element $g\in G$ can be written as $$g=h_{t_n} g h_{t_n}^{-1}=h_{t_n}\rho_n([a,b])h_{t_n}^{-1}=h_{t_n}\rho_n([a,c^n])h_{t_n}^{-1}.$$ It follows that if $H$ is an arbitrary finite group and if $\pi:G\to H$ is a homomorphism then, taking $n=\vert H\vert$ above, \begin{align*} \pi(g)&=\pi(h_{t_{\vert H\vert}})\left[\pi(\rho_{\vert H\vert}(a)),\pi(\rho_{\vert H\vert}(c))^{\vert H\vert}\right]\pi(h_{t_{\vert H\vert}})^{-1}\\ &=\pi(h_{t_{\vert H\vert}})\left[\pi(\rho_{\vert H\vert}(a)),\Id_H\right]\pi(h_{t_{\vert H\vert}})^{-1}\\ &=\Id_H \end{align*} where $\vert H\vert$ is the order of $H$. Having proved that $g\in G$ belongs to the kernel of every homomorphism to a finite group, we have shown that $G$ is not residually finite. This concludes the discussion of Example \ref{ex1}.
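As a small numerical illustration (not needed for the argument), the identity $g=h_{t_n}gh_{t_n}^{-1}$ used above simply reflects the fact that the parabolic transformations $z\mapsto z+1$ and $z\mapsto z+ti$ both fix $\infty$ and commute; the following Python check of the corresponding upper-triangular matrices is purely illustrative.
\begin{verbatim}
import numpy as np

# Matrices in SL_2(C) representing g(z) = z + 1 and h_t(z) = z + t*i.
g = np.array([[1.0, 1.0], [0.0, 1.0]], dtype=complex)

def h(t):
    return np.array([[1.0, t * 1j], [0.0, 1.0]], dtype=complex)

for t in [0.5, 3.0, 100.0]:
    ht = h(t)
    conj = ht @ g @ np.linalg.inv(ht)
    assert np.allclose(conj, g)   # h_t g h_t^{-1} = g: translations commute
\end{verbatim}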
\section{Introduction} The control flow of a program describes how its elementary operations are organized along the execution. Usual primitive control mechanisms are sequences, tests, iteration and recursion. Elementary operations placed in sequence are executed in order. Tests allow conditionally executing a group of operations and changing the course of the execution of the program. Finally, iteration gives the possibility to iterate a process an arbitrary number of times and recursion generalizes iteration to automatically manage the history of the operations performed during iteration. The structure of control flow for conventional (classical) computation is well-understood. In the case of \emph{quantum} computation, control flow is still subject to debate. This paper proposes a working notion of quantum control in closed quantum systems, shedding new light on the problem, and clarifying several of the previous concerns. \paragraph{Quantum computation.} A good starting point for understanding quantum computation is to consider classical circuits over \emph{bits} but replacing the bits with \emph{qubits}, which are intuitively superpositions of bits weighed by complex number amplitudes. Computationally, a qubit is an abstract data type governed by the laws of quantum physics, whose values are normalized vectors of complex numbers in the Hilbert space $\mathbb{C}^2$ (modulo a global phase). By choosing an orthonormal basis, say the classical bits $\mathtt{t}\!\mathtt{t}$ and $\mathtt{f}\!\mathtt{f}$, a qubit can be regarded as a complex linear combination, $\alpha~\mathtt{t}\!\mathtt{t} + \beta~\mathtt{f}\!\mathtt{f}$, where $\alpha$ and $\beta$ are complex numbers such that $|\alpha|^2+|\beta|^2=1$. This generalizes naturally to multiple qubits: the state of a system of $n$ qubits is a vector in the Hilbert space $(\mathbb{C}^2)^{\otimes{}n}$. The operations one can perform on a quantum memory are of two kinds: quantum gates and measurements. Quantum gates are unitary operations that are ``purely quantum'' in the sense that they modify the quantum memory without giving any feedback to the outside world: the quantum memory is viewed as a {\em closed system}. A customary graphical representation for these operations is the {\em quantum circuit}, akin to conventional boolean circuits: wires represent qubits while boxes represents operations to perform on them. One of the peculiar aspects of quantum computation is that the state of a qubit is non-duplicable~\cite{wootters82single}, a result known as the {\em no-cloning theorem}. A corollary is that a quantum circuit is a very simple kind of circuit: wires neither split nor merge. Measurement is a fundamentally different kind of operation: it queries the state of the quantum memory and returns a classical result. Measuring the state of a quantum bit is a probabilistic and destructive operation: it produces a classical answer with a probability that depends on the amplitudes $\alpha, \beta$ in the state of the qubit while projecting this state onto $\mathtt{t}\!\mathtt{t}$ or $\mathtt{f}\!\mathtt{f}$, based on the result. For a more detailed introduction to quantum computation, we refer the reader to recent textbooks (e.g., ~\cite{NielsenChuang}). \paragraph{Control flow in quantum computation.} In the context of quantum programming languages, there is a well-understood notion of control flow: the so-called {\em classical control flow}. 
A quantum program can be seen as the construction, manipulation and evaluation of quantum circuits~\cite{quipper,qwire}. In this setting, circuits are simply considered as special kinds of data without much computational content, and programs are ruled by regular classical control. One can however consider the circuit being manipulated as a program in its own right: a particular sequence of execution on the quantum memory is then seen as a closed system. One can then try to derive a notion of {\em quantum control}~\cite{qml}, with ``quantum tests'' and ``quantum loops''. Quantum tests are a bit tricky to perform~\cite{qml,qalternation} but they essentially correspond to well-understood controlled operations. The situation with quantum loops is more subtle~\cite{qalternation,yingbook}. First, a hypothetical quantum loop {\em must} terminate. Indeed, a non-terminating quantum loop would entail an infinite quantum circuit, and this concept has so far no meaning. Second, the interaction of quantum loops with measurement is problematic: it is known that the canonical model of \emph{open} quantum computation based on superoperators~\cite{selinger04quantum,qarrow} is incompatible with such quantum control~\cite{qalternation}. Finally, the mathematical operator corresponding to a quantum loop would need to act on an infinite-dimensional Hilbert space and the question of mixing programming languages with infinitary Hilbert spaces is still an unresolved issue. \paragraph{Our contribution.} In this paper, we offer a novel solution to the question of quantum control: we define a purely quantum language, inspired by Theseus~\cite{theseus}, featuring tests and fixpoints together with lists. More precisely, we propose (1) a typed, reversible language, extensible to linear combinations of terms, with a reduction strategy akin to algebraic lambda-calculi~\cite{linvec,lineal,Vaux09}; (2) a model for the language based on unitary operators over infinite-dimensional Hilbert spaces, simplifying the Fock space model of Ying~\cite{yingbook}. This model captures lists, tests, and structurally recursive fixpoints. We therefore settle two longstanding issues. (1) We offer a solution to the problem of quantum loops, with the use of {\em terminating}, {\em structurally recursive}, {\em purely quantum} fixpoints. We dodge previously noted concerns (e.g.,~\cite{qalternation}) by staying in the closed quantum setting and answer the problem of the external system of quantum ``coins''~\cite{yingbook} with the use of lists. (2) By using a linear language based on patterns and clauses, we give an extensible framework for reconciling algebraic calculi with quantum computation~\cite{tonder04lambda,lineal,linvec}. In the remainder of the paper, we first introduce the key idea underlying our classical reversible language in a simple first-order setting. We then generalize the setting to allow second-order functions, recursive types (e.g., lists), and fixpoints. After illustrating the expressiveness of this classical language, we adapt it to the quantum domain and give a semantics to the resulting quantum language in infinite-dimensional Hilbert spaces. This technical report is an extended version of a paper accepted for publication in the proceedings of FoSSaCS'18~\cite{shortversion}. \section{Pattern-Matching Isomorphisms} \label{sec:intro-iso} \label{sec:iso-1st-order} The most elementary control structure in a programming language is the ability to conditionally execute one of several possible code fragments. 
Expressing such an abstraction using predicates and nested \textbf{if}-expressions makes it difficult for both humans and compilers to reason about the control flow structure. Instead, in modern functional languages, this control flow paradigm is elegantly expressed using \emph{pattern-matching}. This approach yields code that is not only more concise and readable but also enables the compiler to easily verify two crucial properties: (i) non-overlapping patterns and (ii) exhaustive coverage of a datatype using a collection of patterns. Indeed most compilers for functional languages perform these checks, warning the user when they are violated. At a more fundamental level, e.g., in type theories and proof assistants, these properties are actually necessary for correct reasoning about programs. Our first insight, explained in this section, is that these properties, perhaps surprisingly, are sufficient to produce a simple and intuitive first-order reversible programming language. \begin{figure}[t] \centering \begin{minipage}{0.4\linewidth} \begin{verbatim} f :: Either Int Int -> a f (Left 0) = undefined f (Left (n+1)) = undefined f (Right n) = undefined \end{verbatim}\vspace{-4ex} \caption{A skeleton}\label{fig:intro-ex-1} \end{minipage} \hfill \begin{minipage}{0.4\linewidth} \begin{verbatim} g :: (Bool,Int) -> a g (False,n) = undefined g (True,0) = undefined g (True,n+1) = undefined \end{verbatim}\vspace{-4ex} \caption{Another skeleton}\label{fig:intro-ex-2} \end{minipage} \\[3ex] \begin{minipage}{0.6\linewidth} \begin{verbatim} h :: Either Int Int <-> (Bool,Int) h (Left 0) = (True,0) h (Left (n+1)) = (False,n) h (Right n) = (True,n+1) \end{verbatim}\vspace{-4ex} \caption{An isomorphism}\label{fig:intro-ex-3} \end{minipage} \end{figure} \subsection{An Example} We start with a small illustrative example, written in a Haskell-like syntax. Fig.~\ref{fig:intro-ex-1} gives the skeleton of a function \verb|f| that accepts a value of type \verb|Either Int| \verb|Int|; the patterns on the left-hand side exhaustively cover every possible incoming value and are non-overlapping. Similarly, Fig.~\ref{fig:intro-ex-2} gives the skeleton for a function~\verb|g| that accepts a value of type \verb|(Bool,Int)|; again the patterns on the left-hand side exhaustively cover every possible incoming value and are non-overlapping. Now we claim that since the types \verb|Either Int Int| and \verb|(Bool,Int)| are isomorphic, we can combine the patterns of \verb|f| and \verb|g| into \emph{symmetric pattern-matching clauses} to produce a reversible function between the types \verb|Either Int Int| and \verb|(Bool,Int)|. Fig.~\ref{fig:intro-ex-3} gives one such function; there, we suggestively use \verb|<->| to indicate that the function can be executed in either direction. This reversible function is obtained by simply combining the non-overlapping exhaustive patterns on the two sides of a clause. In order to be well-formed in either direction, these clauses are subject to the constraint that each variable occurring on one side must occur exactly once on the other side (and with the same type). Thus it is acceptable to swap the second and third right-hand sides of \verb|h| but not the first and second ones. \subsection{Terms and Types} \label{sec:1st-order-finite} We present a formalization of the ideas presented above using a simple typed first-order reversible language. The language is two-layered. The first layer contains values, which also play the role of patterns. 
These values are constructed from variables, ranged over by $x$, and from the introduction forms for the finite types $a,b$ built from the unit type and sums and products of types. The second layer contains collections of pattern-matching clauses denoting isomorphisms of type $a \leftrightarrow b$. Computations are chained applications of isomorphisms to values: \begin{alignat*}{10} &\text{(Value types)}\quad & a, b &~~&&::= ~&& \mathbb{1} ~\mid~ a \oplus b ~\mid~ a \otimes b\\ &\text{(Iso types)} & T &&&::=&& a \leftrightarrow b \\[1.5ex] &\text{(Values)} & v &&&::=&& () ~\mid~ x ~\mid~ \inl{v} ~\mid~ \inr{v} ~\mid~ \pv{v_1}{v_2}\\ &\text{(Isos)} & \omega &&&::=&& \clauses{\clause{v_1}{v'_1}\clause{v_2}{v'_2}~\ldots} \\ &\text{(Terms)} & t &&& ::= && v ~\mid~ \omega\,t \end{alignat*} The typing rules are defined using two judgments: $\Delta \vdash_v v : a$ for typing values (or {\em patterns}) and terms; and $\vdash_{\isoterm} \omega : a \leftrightarrow b$ for typing collections of pattern-matching clauses denoting an isomorphism. As is customary, we write $a_1\otimes a_2\otimes\cdots\otimes a_n$ for $((a_1\otimes a_2)\otimes\cdots\otimes a_n)$, and similarly $\pv{x_1}{x_2,\ldots,x_n}$ for $\pv{\pv{x_1}{x_2}}{\ldots,x_n}$. The typing rules for values are the expected ones. The only subtlety is the fact that they are linear: because values act as patterns, we forbid the repetition of variables. A typing context $\Delta$ is a set of typed variables $x_1:a_1,\ldots, x_n:a_n$. A value typing judgment is valid if it can be derived from the following rules: \[\begin{array}{c} \infer{ \vdash_v() : \mathbb{1}, }{} \qquad \infer{ x:a\vdash_v x:a, }{} \qquad \infer{ \Delta_1,\Delta_2\vdash_v\pv{v_1}{v_2} : a\otimes b. }{ \Delta_1\vdash_v v_1 : a & \Delta_2\vdash_v v_2 : b } \\ \\ \infer{ \Delta\vdash_v\inl{v} : a\oplus b, }{ \Delta\vdash_v v : a } \qquad \infer{ \Delta\vdash_v\inr{v} : a\oplus b, }{ \Delta\vdash_v v : b } \end{array}\] \noindent The typing rule for term construction is simple and forces the term to be closed: \[ \infer{ \vdash_v \omega~t : b }{ \vdash_v t : a & \vdash_{\isoterm} \omega : a \leftrightarrow b } \] \noindent The most interesting typing rule is the one for isomorphisms. We present the rule and then explain it in detail: \begin{equation}\label{eq:typ-iso-specialized} \infer{ \vdash_{\isoterm} \clauses{\clause{v_1}{v'_1}\clause{v_2}{v'_2}~\ldots} : a \leftrightarrow b, }{ \begin{array}{@{}l@{}} \Delta_1\vdash_v v_1 : a \\ \Delta_1\vdash_v v'_1 : b \end{array} & \ldots & \begin{array}{@{}l@{}} \Delta_n\vdash_v v_n : a \\ \Delta_n\vdash_v v'_n : b \end{array} & \begin{array}{@{}l@{}} \forall i\neq j, v_i\bot v_j \\ \forall i\neq j, v'_i\bot v'_j \end{array} & \begin{array}{@{}l@{}} \dim(a) = n \\ \dim(b) = n \end{array} } \end{equation} \noindent The rule relies on two auxiliary conditions as motivated at the beginning of the section. These conditions are (i) the orthogonality judgment $v \bot v'$ that formalizes that patterns must be \emph{non-overlapping} and (ii) the condition $\dim(a)=n$ which formalizes that patterns are \emph{exhaustive}.
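Both side conditions are decidable for these finite types. As a quick sanity check, the following Haskell sketch (our own illustration, in the style of the earlier Haskell examples and not part of the formal development; the names \verb|Ty|, \verb|Val|, \verb|dim|, \verb|orth|, and \verb|wellFormed| are ours) implements them directly, anticipating the orthogonality rules and the dimension count spelled out below:
\begin{verbatim}
-- An executable sketch (ours) of the two side conditions on the
-- left-hand sides of an iso: pairwise orthogonality of the patterns
-- and exhaustivity via the dimension of a finite type.
data Ty  = One | Ty :+: Ty | Ty :*: Ty           deriving (Eq, Show)
data Val = Unit | Var String | InL Val | InR Val
         | Pair Val Val                          deriving (Eq, Show)

dim :: Ty -> Int        -- dim(1)=1, dim(a+b)=dim a + dim b, ...
dim One       = 1
dim (a :+: b) = dim a + dim b
dim (a :*: b) = dim a * dim b

orth :: Val -> Val -> Bool   -- v1 _|_ v2: no value matches both
orth (InL _)    (InR _)    = True
orth (InR _)    (InL _)    = True
orth (InL v1)   (InL v2)   = orth v1 v2
orth (InR v1)   (InR v2)   = orth v1 v2
orth (Pair a b) (Pair c d) = orth a c || orth b d
orth _          _          = False

-- The left-hand sides of a clause set are well-formed at type a when
-- they are pairwise orthogonal and there are exactly dim(a) of them.
wellFormed :: Ty -> [Val] -> Bool
wellFormed a vs = length vs == dim a
  && and [ orth v w | (i, v) <- zip [0 ..] vs
                    , (j, w) <- zip [0 ..] vs, i < (j :: Int) ]

main :: IO ()
main = print (wellFormed (bool :*: bool)
                [Pair tt tt, Pair tt ff, Pair ff tt, Pair ff ff])
  where
    bool = One :+: One
    tt   = InL Unit
    ff   = InR Unit
\end{verbatim}
For instance, the four variable-free pairs over $\mathbb{1}\oplus\mathbb{1}$ used in \verb|main| form a valid set of left-hand sides, whereas dropping any one of them would fail the cardinality check.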
The rules for deriving orthogonality of values or patterns are: \[ \begin{array}{c} \infer{\inl{v_1}~\bot~\inr{v_2}}{} \qquad \infer{\inr{v_1}~\bot~\inl{v_2}}{} \\[2ex] \infer{\inl{v_1}~\bot~\inl{v_2}}{v_1~\bot~v_2} \quad \infer{\inr{v_1}~\bot~\inr{v_2}}{v_1~\bot~v_2} \quad \infer{\pv{v}{v_1}~\bot~\pv{v'}{v_2}}{v_1~\bot~v_2} \quad \infer{\pv{v_1}{v}~\bot~\pv{v_2}{v'}}{v_1~\bot~v_2} \end{array} \] \noindent The idea is simply that the left and right injections are disjoint subspaces of values. To characterize that a set of patterns is exhaustive, we associate a \emph{dimension} with each type. For finite types, this is just the number of elements in the type and is inductively defined as follows: $\dim(\mathbb{1})=1$; $\dim(a\oplus b) = \dim(a)+\dim(b)$; and $\dim(a\otimes b) = \dim(a)\cdot\dim(b)$. For a given type $a$, if a set of non-overlapping clauses has cardinality $\dim(a)$, it is exhaustive. Conversely, any set of exhaustive clauses for a type $a$ either has cardinality $\dim(a)$ or can be extended to an equivalent exhaustive set of clauses of cardinality $\dim(a)$. \subsection{Semantics} \label{sec:iso-iso} We equip our language with a simple operational semantics on terms, using the natural notion of matching. To formally define it, we first introduce the notion of variable assignment, or valuation, which is a partial map from a finite set of variables (the support) to a set of values. We denote the matching of a value $w$ against a pattern $v$ and its associated valuation $\sigma$ as $\sigma[v] = w$ and define it as follows: \[ \infer{\sigma[()] = ()}{} \quad \infer{\sigma[x] = v}{\sigma = \{ x \mapsto v\}} \quad \infer{\sigma[\inl{v}] = \inl{w}}{\sigma[v] = w} \quad \infer{\sigma[\inr{v}] = \inr{w}}{\sigma[v] = w} \] \[ \infer{ \sigma[\pv{v_1}{v_2}] = \pv{w_1}{w_2} }{ \sigma_1[v_1] = w_1 & \sigma_2[v_2] = w_2 & \text{supp}(\sigma_1) \cap \text{supp}(\sigma_2) = \emptyset & \sigma = \sigma_1\cup\sigma_2 } \] If $\sigma$ is a valuation whose support contains the variables of $v$, we write $\sigma(v)$ for the value where the variables of $v$ have been replaced with the corresponding values in $\sigma$: \begin{itemize} \item $\sigma(()) = ()$ \item $\sigma(x) = v$ if $\{x\mapsto v\}\subseteq \sigma$ \item $\sigma(\inl{v}) = \inl{\sigma(v)}$ \item $\sigma(\inr{v}) = \inr{\sigma(v)}$ \item $\sigma(\pv{v_1}{v_2}) = \pv{\sigma(v_1)}{\sigma(v_2)}$ \end{itemize} \noindent Given these definitions, we can define the reduction relation on terms. The redex $ \clauses{\clause{v_1}{v'_1}\clause{v_2}{v'_2}~\ldots} v $ reduces to $\sigma(v'_i)$ whenever $\sigma[v_i] = v$. Because of the conditions on patterns, a matching pattern exists by exhaustivity of coverage, and this pattern is unique by the non-overlapping condition. Congruence holds: $\omega\,t\to\omega\,t'$ whenever $t\to t'$. As usual, we write $s\to t$ to say that~$s$ rewrites in one step to $t$ and $s\to^*t$ to say that~$s$ rewrites to $t$ in~0 or more steps. Because of the conditions set on patterns, the rewrite system is deterministic. More interestingly, we can swap the two sides of all pattern-matching clauses in an isomorphism $\omega$ to get $\omega^{-1}$. The execution of $\omega^{-1}$ is the reverse execution of $\omega$ in the sense that $\omega^{-1}(\omega~t) \to^* t $ and $\omega(\omega^{-1}~t') \to^* t'$. \section{Second-Order Functions, Lists, and Recursion} The first-order reversible language from the previous section embodies symmetric pattern-matching clauses as its core notion of control.
Its expressiveness is limited, however. We now show that it is possible to extend it to have more in common with a conventional functional language. To that end, we extend the language with the ability to parametrically manipulate isomorphisms, with a recursive type (lists), and with recursion. \subsection{Terms and Types} \label{sec:2st-order} Formally, the language is now defined as follows. \begin{alignat*}{100} &\text{(Val }\& ~\text{term types)} \quad& a, b &&&::=~ && \mathbb{1} ~\mid~ a \oplus b ~\mid~ a \otimes b ~\mid~ [a] \\ &\text{(Iso types)} & T &&&::=&& a \leftrightarrow b ~\mid~ (a \leftrightarrow b) \to T \\[1.5ex] &\text{(Values)} & v &&&::=&& () ~\mid~ x ~\mid~ \inl{v} ~\mid~ \inr{v} ~\mid~ \pv{v_1}{v_2}\\ &\text{(Products)} & p &&&::=&& () ~\mid~ x ~\mid~ \pv{p_1}{p_2}\\ &\text{(Extended Values)} & e &&&::=&& v ~\mid~ \letv{p_1}{\omega~p_2}{e}\\ &\text{(Isos)} & \omega &&&::=&& \clauses{\clause{v_1}{e_1}\clause{v_2}{e_2}~\ldots} ~\mid~ \lambda f.\omega ~\mid~ \\ & & &&& && \mu f.\omega ~\mid~ f ~\mid~ \omega_1\,\omega_2 \\ &\text{(Terms)} & t &&&::=&& () ~\mid~ x ~\mid~ \inl{t} ~\mid~ \inr{t} ~\mid~ \pv{t_1}{t_2} ~\mid~ \\ & & &&& && \omega~t ~\mid~ \letv{p}{t_1}{t_2} \end{alignat*} \noindent We use variables $f$ to span a set of iso-variables and variables $x$ to span a set of term-variables. We extend the layer of isos so that it can be parameterized by a fixed number of other isos, i.e., we now allow higher-order manipulation of isos using $\lambda f.\omega$, iso-variables, and applications. Isos can now be used inside the definition of other isos with a let-notation. These let-constructs are however restricted to products of term-variables: they essentially serve as syntactic sugar for composition of isos. An extended value is then a value where some of its free variables are substituted with the result of the application of one or several isos. Given an extended value $e$, we define its {\em bottom value}, denoted with ${\rm Val}(e)$ as the value ``at the end'' of the let-chain: ${\rm Val}(v) = v$, and ${\rm Val}(\letv{p}{\omega p}{e})={\rm Val}(e)$. The orthogonality of extended values is simply the orthogonality of their bottom value. \begin{table}[tb] \[ \begin{array}{c} \infer{\emptyset;\Psi\vdash_v () : \mathbb{1}}{} \qquad \infer{x:a;\Psi \vdash_v x :a}{} \\ \\ \infer{\Delta;\Psi\vdash_v \inl{t}:a \oplus b}{\Delta;\Psi\vdash_v t:a} \qquad \infer{\Delta;\Psi\vdash_v\inr{t}:a \oplus b}{\Delta;\Psi\vdash_v t:b} \\ \\ \infer{ \Delta_1,\Delta_2;\Psi\vdash_v \pv{t_1}{t_2} : a \otimes b }{ \Delta_1;\Psi\vdash_v t_1 : a & \Delta_2;\Psi\vdash_v t_2 : b } \\ \\ \infer{ \Delta;\Psi\vdash_v \omega~t : b }{ \Psi\vdash_{\isoterm} \omega : a \leftrightarrow b & \Delta;\Psi\vdash_v t : a } \quad \infer{ \Delta;\Psi\vdash_v \letpv{x,y}{t_1}{t_2} : c }{ \Delta;\Psi\vdash_v t_1 : a \otimes b & \Delta,x:a,y:b;\Psi\vdash_v t_2 : c } \end{array} \] \caption{Typing rules for terms and values} \label{tab:termtyp} \end{table} \begin{table}[t] \[ \begin{array}{c} \infer{ \Psi\vdash_{\isoterm} \clauses{\clause{v_1}{e_1}\clause{v_2}{e_2}~\ldots} : a \leftrightarrow b. 
}{ \begin{array}{l@{\quad}l@{\quad}l@{\qquad}l} \Delta_1;\Psi \vdash_v v_1 : a & \ldots & \Delta_n;\Psi\vdash_v v_n : a & \OD{a}\{v_1,\ldots,v_n\} \\ \Delta_1;\Psi \vdash_v e_1 : b & \ldots & \Delta_n;\Psi\vdash_v e_n : b & \ODe{b}\{e_1,\ldots,e_n\} \end{array} } \\ \\ \infer{ \Psi \vdash_{\isoterm} \lambda f.\omega :(a\leftrightarrow b) \to T }{ \Psi,f:a\leftrightarrow b \vdash_{\isoterm} \omega : T } \qquad \infer{ \Psi,f : T \vdash_{\isoterm} f : T}{} \quad \\ \\ \infer{\Psi \vdash_{\isoterm} \omega_1 \omega_2 : T} {\Psi \vdash_{\isoterm} \omega_1 : (a\leftrightarrow b) \to T \;\; \Psi \vdash_{\isoterm} \omega_2 : a\leftrightarrow b} \\ \\ \inferrule{ \Psi, f:a\leftrightarrow b\vdash_{\isoterm} \omega : (a_1\leftrightarrow b_1)\to\cdots \to(a_n\leftrightarrow b_n)\to(a\leftrightarrow b) \\ \text{$\mu f.\omega$ terminates in any finite context} } { \Psi\vdash_{\isoterm} \mu f.\omega : (a_1\leftrightarrow b_1)\to\cdots \to(a_n\leftrightarrow b_n)\to(a\leftrightarrow b) } \\ \end{array} \] \caption{Typing rules for isos} \label{tab:isotyp} \end{table} As usual, the type of lists $[a]$ of elements of type $a$ is a recursive type and is equivalent to $\mathbb{1}\oplus(a\otimes[a])$. We build the value $[]$ (empty list) as $\inl{()}$ and the term $t_1:t_2$ (cons of $t_1$ and $t_2$) as $\inr{\pv{t_1}{t_2}}$. In addition, to take full advantage of recursive datatypes, it is natural to consider recursion. Modulo a termination guarantee, it is possible to add a fixpoint to the language: we extend isos with the fixpoint constructor $\mu f.\omega$. Some reversible languages allow infinite loops and must work with partial isomorphisms instead. Since we plan on using our language as a foundation for a quantum language, we insist on termination. Since the language features two kinds of variables, there are typing contexts (written $\Delta$) consisting of base-level typed variables of the form $x:a$, and typing contexts (written $\Psi$) consisting of typed iso-variables of the form $f:T$. As terms and values contain both base-level and iso-variables, one needs two typing contexts. Typing judgments for terms and values are therefore written as $\Delta;\Psi\vdash_v t:a$. The updated rules for $(\vdash_v)$ are found in Tab.~\ref{tab:termtyp}. As the only possible free variables in isos are iso-variables, their typing judgments only need one context and are written as $\Psi\vdash_{\isoterm} \omega:T$. The rules for typing derivations of isos are in Tab.~\ref{tab:isotyp}. It is worthwhile mentioning that isos are treated in a usual, non-linear way: this is the purpose of the typing context separation. The intuition is that an iso is the description of a closed computation with respect to its inputs: note that isos cannot take arguments of value types. As computations, they can be erased or duplicated without issues. On the other hand, value-types still need to be treated linearly. In the typing rule for recursion, the condition ``$\mu f.\omega$ terminates in any finite context'' formally refers to the following requirement. A well-typed fixpoint $\mu f.\omega$ of type $\Psi\vdash_{\isoterm} \mu f.\omega : (a_1\leftrightarrow b_1)\to\cdots \to(a_n\leftrightarrow b_n)\to(a\leftrightarrow b)$ is {\em terminating in a $0$-context} if for all closed isos $\omega_i:a_i\leftrightarrow b_i$ not using fixpoints and for every closed value $v$ of type $a$, the term $((\mu f.\omega)\omega_1\ldots\omega_n)v$ terminates.
We say that the fixpoint is {\em terminating in an $(n+1)$-context} if for all closed isos $\omega_i:a_i\leftrightarrow b_i$ terminating in $n$-contexts, and for every closed value $v$ of type $a$, the term $((\mu f.\omega)\omega_1\ldots\omega_n)v $ terminates. Finally, we say that the fixpoint is {\em terminating in any finite context} if for all $n$ it is terminating in any $n$-context. With the addition of lists, the non-overlapping and exhaustivity conditions need to be modified. The main problem is that we can no longer define the dimension of types using natural numbers: $[a]$ is in essence an infinite sum, and would have an ``infinite'' dimension. Instead, we combine the two conditions into the concept of \emph{orthogonal decomposition}. Formally, given a type $a$, we say that a set $S$ of patterns is an {\em orthogonal decomposition}, written $\OD{a}(S)$, when these patterns are pairwise orthogonal and when they cover the whole type. We formally define $\OD{a}(S)$ as follows. For all types $a$, $\OD{a}\{x\}$ is valid. For the unit type, $\OD{\mathbb{1}}\{()\}$ is valid. If $\OD{a}(S)$ and $\OD{b}(T)$, then \begin{align*} &\OD{a\oplus b}(\{\inl{v}~|~ v\in S\}\cup\{\inr{v}~|~ v\in T\}) \\ \text{and}\quad &\OD{a\otimes b}\{\pv{v_1}{v_2} ~|~ v_1\in S,~ v_2\in T,~ {\rm FV}(v_1)\cap{\rm FV}(v_2)=\emptyset\}, \end{align*} where ${\rm FV}(t)$ stands for the set of free value-variables in $t$. We then extend the notion of orthogonal decomposition to extended values as follows. If $S$ is a set of extended values, $\ODe{a}(S)$ is true whenever $ \OD{a}\{{\rm Val}(e) ~|~ e \in S\} $. With this new characterization, the typing rule for isos in Eq.~\ref{eq:typ-iso-specialized} still holds and can be rewritten using this notion of orthogonal decomposition, as shown in Tab.~\ref{tab:isotyp}. \subsection{Semantics} \label{subsec:2st-semantics} In Tab.~\ref{tab:reduction} we present the reduction rules for the reversible language. We assume that the reduction relation applies to well-typed terms. In the rules, the notation $C[-]$ stands for an {\em applicative context}, and is defined as follows. \[ \begin{array}{lll} C[-]&::= & [-] \alt \inl{C[-]} \alt \inr{C[-]} \alt (C[-])\omega \alt \\ &&\{\cdots\}~C[-] \alt \letv{p}{C[-]}{t_2}\alt \pv{C[-]}{v} \alt \pv{v}{C[-]}. \end{array} \] \begin{table}[t] \[ \begin{array}{c} \infer[\mathrm{Cong}]{C[t_1] \to C[t_2]}{t_1 \to t_2} \qquad \infer[\mathrm{LetE}]{\letv{p}{v_1}{t_2} \to \sigma(t_2)}{\sigma[p] = v_1} \\[1.5ex] \infer[\mathrm{IsoApp}]{ \clauses{\clause{v_1}{t_1}\;|~\ldots\; \clause{v_n}{t_n} } \; v \to \sigma(t_i)}{ \sigma[v_i] = v} \\[1.5ex] \infer[\mathrm{HIsoApp}]{(\lambda f.\omega)\;\omega_2 \to \omega[\omega_2/f]}{} \\[1.5ex] \infer[\mathrm{IsoRec}]{ \mu f.\omega \to \lambda f_1\ldots f_n.(\omega[((\mu f.\omega)f_1\ldots f_n)/f])f_1\ldots f_n }{ \Psi, f:a\leftrightarrow b\vdash_{\isoterm}\omega : (a_1\leftrightarrow b_1)\to\cdots \to(a_n\leftrightarrow b_n)\to(a\leftrightarrow b) } \end{array} \] \caption{Reduction rules} \label{tab:reduction} \end{table} The inversion of isos is still possible but more subtle than in the first-order case. We define an inversion operation $(-)^{-1}$ on iso types with $(a\leftrightarrow b)^{-1} := (b\leftrightarrow a)$ and $((a\leftrightarrow b)\to T)^{-1} := ((b\leftrightarrow a)\to (T^{-1}))$. Inversion of isos is defined as follows. For fixpoints, $(\mu f.\omega)^{-1} := \mu f.(\omega^{-1})$. For variables, $(f)^{-1} := f$. For applications, $(\omega_1~\omega_2)^{-1} := (\omega_1)^{-1}~(\omega_2)^{-1}$.
For abstraction, $(\lambda f.\omega)^{-1} := \lambda f.(\omega^{-1})$. Finally, clauses are inverted as follows: \[\begin{array}{c} \left( \begin{array}{l@{~}c@{~}l} v_1&{\leftrightarrow}&{\tt let}\,p_1=\omega_1\,p'_1\,{\tt in} \\ && \cdots \\ && {\tt let}\,p_n=\omega_n\,p'_n\,{\tt in}~v'_1 \end{array} \right)^{-1} := \left( \begin{array}{lcl@{}l@{}l} v'_1&{\leftrightarrow}&{\tt let}\,p'_n&=\omega_n^{-1}&\,p_n\,{\tt in} \\ && \cdots \\ && {\tt let}\,p'_1&=\omega_1^{-1}&\,p_1\,{\tt in}~v_1 \end{array} \right). \end{array} \] Note that $(-)^{-1}$ only inverts first-order arrows $(\leftrightarrow)$, not second-order arrows $(\to)$. This is reflected by the fact that iso-variables are non-linear while value-variables are linear. This is due to the clear separation of the two layers of the language. The rewriting system satisfies the usual properties for well-typed terms: it is terminating, well-typed closed terms have a unique normal value-form, and it preserves typing. \begin{theorem}\label{th:para-iso-inv-type} The inversion operation is well-typed, in the sense that if $ f_1:a_1\leftrightarrow b_1,\ldots,f_n:a_n\leftrightarrow b_n \vdash_{\isoterm} \omega : T $ then we also have $ f_1:b_1\leftrightarrow a_1,\ldots,f_n:b_n\leftrightarrow a_n \vdash_{\isoterm} \omega^{-1} : T^{-1} $. \qed \end{theorem} Thanks to the fact that the language is terminating, we also recover the operational result of Sec.~\ref{sec:iso-iso}. \begin{theorem}\label{th:para-iso-iso} Consider a well-typed, closed iso $\vdash_{\isoterm} \omega:a\leftrightarrow b$, and suppose that $\vdash_v v:a$ and that $\vdash_v w:b$, then $\omega^{-1}(\omega~v) \to^* v$ and $\omega(\omega^{-1}~w) \to^* w$. \qed \end{theorem} \section{Examples} \label{subsec:2st-examples} In the previous sections, we developed a novel classical reversible language with a familiar syntax based on pattern-matching. The language includes a limited notion of higher-order functions and (terminating) recursive functions. We illustrate the expressiveness of the language with a few examples and motivate the changes and extensions needed to adapt the language to the quantum domain. We encode booleans as follows: $\mathbb{B} = \mathbb{1} \oplus \mathbb{1}$, $\mathtt{t}\!\mathtt{t} = \inl{()}$, and $\mathtt{f}\!\mathtt{f} = \inr{()}$. One of the easiest functions to define is ${\tt not}:\mathbb{B}\leftrightarrow\mathbb{B}$, which flips a boolean. The controlled-not gate, which flips the second bit when the first is true, can also be expressed: \[ \begin{array}{l} \mathtt{not} : \mathbb{B} \leftrightarrow \mathbb{B} = \left(\begin{array}{r@{~\leftrightarrow~}l} {\mathtt{f}\!\mathtt{f}} & {\mathtt{t}\!\mathtt{t}} \\ {\mathtt{t}\!\mathtt{t}} & {\mathtt{f}\!\mathtt{f}} \end{array}\right), \end{array} \] \[ \begin{array}{l} \mathtt{cnot} : \mathbb{B} \otimes \mathbb{B} \leftrightarrow \mathbb{B} \otimes \mathbb{B} = \left(\begin{array}{r@{~\leftrightarrow~}l} \pv{\mathtt{f}\!\mathtt{f}}{x} & \pv{\mathtt{f}\!\mathtt{f}}{x} \\ \pv{\mathtt{t}\!\mathtt{t}}{\mathtt{f}\!\mathtt{f}} & \pv{\mathtt{t}\!\mathtt{t}}{\mathtt{t}\!\mathtt{t}} \\ \pv{\mathtt{t}\!\mathtt{t}}{\mathtt{t}\!\mathtt{t}} & \pv{\mathtt{t}\!\mathtt{t}}{\mathtt{f}\!\mathtt{f}} \end{array}\right). \end{array} \] All the patterns in the previous two functions form orthogonal decompositions, which guarantees reversibility as desired. By using the abstraction facilities in the language, we can define higher-order operations that build complex reversible functions from simpler ones.
For example, we can define a conditional expression parameterized by the functions used in the two branches: \[\begin{array}{l} \mathtt{if} : (a \leftrightarrow b) \to (a \leftrightarrow b) \to (\mathbb{B}\otimes a \leftrightarrow \mathbb{B} \otimes b) \\ \mathtt{if} = \lambda g. \lambda h.\left( \begin{array}{r@{~\leftrightarrow~}l} \pv{\mathtt{t}\!\mathtt{t}}{x} & \letv{y}{g~x}{\pv{\mathtt{t}\!\mathtt{t}}{y}} \\ \pv{\mathtt{f}\!\mathtt{f}}{x} & \letv{y}{h~x}{\pv{\mathtt{f}\!\mathtt{f}}{y}} \end{array}\right) \end{array}\] \noindent Using $\mathtt{if}$ and the obvious definition for the identity function $\mathtt{id}$, we can define ${\tt ctrl} : (a\leftrightarrow a) \to (\mathbb{B} \otimes a \leftrightarrow \mathbb{B} \otimes a)$ as ${\tt ctrl}~f = \mathtt{if}~f~{\tt id}$ and recover an alternative definition of {\tt cnot} as ${\tt ctrl}~{\tt not}$. We can then define the controlled-controlled-not gate (aka the Toffoli gate) by writing ${\tt ctrl}~{\tt cnot}$. We can even iterate this construction using fixpoints to produce an $n$-controlled-not function that takes a list of $n$ control bits and a target bit and flips the target bit iff all the control bits are $\mathtt{t}\!\mathtt{t}$: \[ \begin{array}{l} \mathtt{cnot*} : ([\mathbb{B}]\otimes\mathbb{B}) \leftrightarrow ([\mathbb{B}]\otimes\mathbb{B}) \\ \mathtt{cnot*} = \mu f.\left( \begin{array}{r@{~}c@{~}l} \pv{[]}{tb} & {}{\leftrightarrow}{} & \letv{tb'}{\mathtt{not}~tb}{\pv{[]}{tb'}} \\ \pv{\mathtt{f}\!\mathtt{f}:cbs}{tb} & {}{\leftrightarrow}{} & \pv{\mathtt{f}\!\mathtt{f}:cbs}{tb} \\ \pv{\mathtt{t}\!\mathtt{t}:cbs}{tb} & {}{\leftrightarrow}{} & {\tt let}~\pv{cbs'}{tb'} = f~\pv{cbs}{tb}~{\tt in} ~\pv{\mathtt{t}\!\mathtt{t}:cbs'}{tb'} \end{array}\!\!\right) \end{array} \] The language is also expressive enough to write conventional recursive (and higher-order) programs. We illustrate this expressiveness using the usual $\mathtt{map}$ operation and an accumulating variant $\mathtt{mapAccu}$: \[ \begin{array}{l} {\tt map} : (a \leftrightarrow b) \to ([a] \leftrightarrow [b]) \\ {\tt map} = \lambda g.\mu f.\left( \begin{array}{rcl} [] & {}\leftrightarrow{} & [] \\ h:t & {}\leftrightarrow{} & \letv{x}{g~h}{} \\ &&\letv{y}{f~t}{x:y} \end{array} \right), \end{array} \] \[ \begin{array}{l} {\tt mapAccu} : (a\otimes b \leftrightarrow a\otimes c) \to (a\otimes [b] \leftrightarrow a\otimes [c]) \\ {\tt mapAccu} = \lambda g.\mu f.\left( \begin{array}{rcl} \pv{x}{[]} & {}\leftrightarrow{} & \pv{x}{[]} \\ \pv{x}{(h:t)} & {}\leftrightarrow{} & {\tt let}~\pv{y}{h'} = g~\pv{x}{h}~{\tt in} \\ && {\tt let}~\pv{z}{t'} = f~\pv{y}{t}~{\tt in}\\ && \pv{z}{(h':t')} \end{array} \right). \end{array} \] \noindent The three examples {\tt cnot*}, {\tt map} and {\tt mapAccu} use fixpoints which are clearly terminating in any finite context. Indeed, the functions are structurally recursive. A formal definition of this notion for the reversible language is as follows. \begin{definition}\label{def:struct-rec}\rm Define a {\em structurally recursive type} as a type of the form $[a]\otimes b_1\otimes\ldots\otimes b_n$. Let $\omega = \{ v_i \leftrightarrow e_i ~|~ i\in I \}$ be an iso such that $ f : a\leftrightarrow b\vdash_{\isoterm}\omega : a\leftrightarrow c $ where $a$ is a structurally recursive type. We say that $\mu f.\omega$ is {\em structurally recursive} provided that for each $i\in I$, the value $v_i$ is either of the form $\langle [], p_1, \ldots, p_n\rangle$ or of the form $\langle h:t, p_1, \ldots, p_n\rangle$. In the former case, $e_i$ does not contain $f$ as a free variable.
In the latter case, $e_i$ is of the form $C[f\pv{t}{p'_1,\ldots,p'_n}]$ where $C$ is a context of the form $C[-] ::= [-]~\mid~\letv{p}{C[-]}{t}~\mid~\letv{p}{t}{C[-]}$. \end{definition} \noindent This definition will be critical for quantum loops in the next section. \section{From Reversible Isos to Quantum Control} \begin{figure}[tb] \centering \begin{minipage}{.31\textwidth} \[\begin{blockarray}{cccc} & v_1 & v_2 & v_3 \\ \begin{block}{c(ccc)} v'_1 ~& 1 & 0 & 0 \\ v'_2 ~& 0 & 1 & 0 \\ v'_3 ~& 0 & 0 & 1 \\ \end{block} \end{blockarray} \] \vspace{-6ex}\caption{Classical iso}\label{fig:iso-id}\end{minipage} \qquad \begin{minipage}{.4\textwidth} \[ \begin{blockarray}{cccc} & v_1 & v_2 & v_3 \\ \begin{block}{c(ccc)} v'_1 ~& a_{11} & a_{12} & a_{13} \\ v'_2 ~& a_{21} & a_{22} & a_{23} \\ v'_3 ~& a_{31} & a_{32} & a_{33} \\ \end{block} \end{blockarray} \] \vspace{-6ex}\caption{Quantum iso}\label{fig:iso-general} \end{minipage} \\ \begin{minipage}{.4\textwidth} \[\begin{blockarray}{ccc} & \pv{\mathtt{t}\!\mathtt{t}}{x} ~& \pv{\mathtt{f}\!\mathtt{f}}{x} \\ \begin{block}{c(cc)} \pv{\mathtt{t}\!\mathtt{t}}{x} ~& \frac1{\sqrt2}{\tt Had} & \frac1{\sqrt2}{\tt Id} \\ \pv{\mathtt{f}\!\mathtt{f}}{x} ~& \frac1{\sqrt2}{\tt Had} & \frac{-1}{\sqrt2}{\tt Id} \\ \end{block} \end{blockarray} \] \vspace{-6ex}\caption{Semantics of {\tt Gate}}\label{fig:sem-gate} \end{minipage} \end{figure} \noindent In the language presented so far, an iso $\omega:a\leftrightarrow b$ describes a bijection between the set $\base{a}$ of closed values of type $a$ and the set $\base{b}$ of closed values of type $b$. If one regards $\base{a}$ and $\base{b}$ as the basis elements of some vector space $\denot{a}$ and $\denot{b}$, the iso $\omega$ becomes a 0/1 matrix. As an example, consider an iso $\omega$ defined using three clauses of the form \[ \clauses{\clause{v_1}{v'_1}\clause{v_2}{v'_2}\clause{v_3}{v'_3}}. \] From the exhaustivity and non-overlapping conditions, it follows that the space $\denot{a}$ can be split into the direct sum of the three subspaces $\denot{a}_{v_i}$ ($i=1,2,3$) generated by~$v_i$. Similarly, $\denot{b}$ is split into the direct sum of the subspaces $\denot{b}_{v'_i}$ generated by~$v'_i$. One can therefore represent $\omega$ as the matrix $\denot{\omega}$ in Fig.~\ref{fig:iso-id}: the ``$1$'' in each column $v_i$ indicates to which subspace $\denot{b}_{v'_j}$ an element of $\denot{a}_{v_i}$ is sent. In Sec.~\ref{sec:1st-order-finite} we discussed the fact that $v_i\bot v_j$ when $i\neq j$. This notation hints at the fact that $\denot{a}$ and $\denot{b}$ could be seen as Hilbert spaces and the mapping $\denot{\omega}$ as a unitary map from $\denot{a}$ to $\denot{b}$. The purpose of this section is to extend and formalize precisely the correspondence between isos and unitary maps. \begin{figure}[t] \[ \left\{ \begin{array}{l} \clause{v_1~}{~a_{11}v'_1+a_{21}v'_2+a_{31}v'_3}\\ \clause{v_2~}{~a_{12}v'_1+a_{22}v'_2+a_{32}v'_3}\\ \clause{v_3~}{~a_{13}v'_1+a_{23}v'_2+a_{33}v'_3} \end{array} \right\} \] \caption{Illustrating the generalization of clauses}\label{fig:iso-gen-ex} \end{figure} The definition of clauses is extended following this idea of seeing isos as unitaries, and not only as bijections on basis elements of the input space. We therefore essentially propose to generalize the clauses to complex, linear combinations of values on the right-hand side, as shown in Figure~\ref{fig:iso-gen-ex}, with the side condition that the matrix of Fig.~\ref{fig:iso-general} is unitary.
We define in Sec.~\ref{sec:unitlang} how this extends to second-order. \subsection{Extending the Language to Linear Combinations of Terms} \label{sec:unitlang} The quantum unitary language extends the reversible language from the previous section by closing extended values and terms under complex, finite linear combinations. For example, if $v_1$ and $v_2$ are values and $\alpha$ and $\beta$ are complex numbers, $\alpha\cdot v_1 + \beta\cdot v_2$ is now an extended value. Several approaches exist for performing such an extension. One can update the reduction strategy to be able to reduce these sums and scalar multiplications to normal forms~\cite{lineal,alglambdacalcreview}, or one can instead consider terms modulo the usual algebraic equalities~\cite{Vaux09,alglambdacalcreview}: this is the strategy we follow for this paper. When extending a language to linear combination of terms in a naive way, this added structure might generate inconsistencies in the presence of unconstrained fixpoints~\cite{Vaux09,lineal,alglambdacalcreview}. The weak condition on termination we imposed on fixpoints in the classical language was enough to guarantee reversibility. With the presence of linear combinations, we want the much stronger guarantee of unitarity. For this reason, we instead impose fixpoints to be {\em structurally recursive}. The quantum unitary language is defined by allowing sums of terms and values and multiplications by complex numbers: if $t$ and $t'$ are terms, so is $\alpha\cdot t + t'$. Terms and values are taken modulo the equational theory of modules. We furthermore consider the value and term constructs $\pv{-}{-}$, $\letv{p}{-}{-}$, $\inl(-)$, $\inr(-)$ distributive over sum and scalar multiplication. We do {\em not} however take iso-constructions as distributive over sum and scalar multiplication: $\clauses{\clause{v_1}{\alpha v_2 + \beta v_3}}$ is {\em not} the same thing as $\alpha\clauses{\clause{v_1}{v_2}} + \beta\clauses{\clause{v_1}{v_3}}$. This is in the spirit of Lineal~\cite{lineal,linvec}. Formally, the quantum unitary language is defined as follows. \begin{alignat*}{100} &\text{Val }\& ~\text{term types} \quad& a, b &&&::=~ && \mathbb{1} ~\mid~ a \oplus b ~\mid~ a \otimes b ~\mid~ [a] \\ &\text{Iso types} & T &&&::=&& a \leftrightarrow b ~\mid~ (a \leftrightarrow b) \to T \\[1.5ex] &\text{Pure values} & v &&&::=&& () ~\mid~ x ~\mid~ \inl{v} ~\mid~ \inr{v} ~\mid~ \pv{v_1}{v_2}\\ &\text{Combination of values} & e &&&::=&& v ~\mid~ e_1 + e_2 ~\mid~ \alpha e\\ &\text{Products} & p &&&::=&& () ~\mid~ x ~\mid~ \pv{p_1}{p_2}\\ &\text{Extended Values} & e &&&::=&& v ~\mid~ \letv{p_1}{\omega~p_2}{e}\\ &\text{Isos} & \omega &&&::=&& \clauses{\clause{v_1}{e_1}\clause{v_2}{e_2}~\ldots} ~\mid~ \lambda f.\omega ~\mid~ \\ & & &&& && \mu f.\omega ~\mid~ f ~\mid~ \omega_1\,\omega_2 \\[2ex] &\text{Terms} & t &&&::=&& () ~\mid~ x ~\mid~ \inl{t} ~\mid~ \inr{t} ~\mid~ \pv{t_1}{t_2} ~\mid~ \\ & & &&& && \omega~t ~\mid~ \letv{p}{t_1}{t_2}~\mid~ t_1 + t_2 ~\mid~ \alpha\cdot t. \end{alignat*} The scalar $\alpha$ ranges over complex numbers. 
Extended terms and values are considered modulo associativity and commutativity of the addition, and modulo the equational theory of modules: \begin{align*} \alpha\cdot(e_1 + e_2) &= \alpha\cdot{}e_1 + \alpha\cdot{}e_2 & 1\cdot{}e &= e\\ \alpha\cdot{}e + \beta\cdot{}e &= (\alpha+\beta)\cdot{}e& \alpha\cdot(\beta\cdot{}e) &= (\alpha\beta)\cdot e\\ 0\cdot e_1 + e_2 &= e_2 \end{align*} The typing rules for terms and extended values are updated as shown in Table~\ref{tab:quantum-types}. We only allow linear combinations of terms and values of the same type and with the same free variables. An iso now performs not only an ``identity'' as in Figure~\ref{fig:iso-id}, but a true unitary operation. Finally, fixpoints are now required to be {\em structurally recursive}, as introduced in Definition~\ref{def:struct-rec}. \begin{table}[tb] \[ \begin{array}{c} \infer{\Delta;\Psi\vdash_v\alpha\cdot t:a }{\Delta;\Psi\vdash_v t:a} \quad \infer{ \Delta;\Psi\vdash_v t_1+t_2 : a }{ \Delta;\Psi\vdash_v t_1 : a & \Delta;\Psi\vdash_v t_2 : a } \\[2ex] \infer{ \Psi\vdash_{\isoterm} \left\{ \begin{array}{ccc} v_1 & \leftrightarrow & a_{11}\cdot e_1 + \cdots + a_{1n}\cdot e_n \\ &\ldots& \\ v_n & \leftrightarrow & a_{n1}\cdot e_1 + \cdots + a_{nn}\cdot e_n \end{array}\right\} : a \leftrightarrow b. }{ \begin{array}{ll@{~~~}l} \Delta_1;\Psi \vdash_v v_1 : a & \ldots & \Delta_n;\Psi\vdash_v v_n : a \\ \Delta_1;\Psi \vdash_v e_1 : b & \ldots & \Delta_n;\Psi\vdash_v e_n : b \\ \OD{a}\{v_1,\ldots,v_n\} && \ODe{b}\{e_1,\ldots,e_n\} \end{array} & \begin{pmatrix} a_{11} & \cdots &a_{1n}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \text{ is unitary} } \\[6ex] \inferrule{ \Psi, f:a\leftrightarrow b\vdash_{\isoterm} \omega : (a_1\leftrightarrow b_1)\to\cdots \to(a_n\leftrightarrow b_n)\to(a\leftrightarrow b) \\ \text{$\mu f.\omega$ is structurally recursive} } { \Psi\vdash_{\isoterm} \mu f.\omega : (a_1\leftrightarrow b_1)\to\cdots \to(a_n\leftrightarrow b_n)\to(a\leftrightarrow b) } \end{array} \] \caption{Typing rules for the quantum extension} \label{tab:quantum-types} \end{table} The reduction is updated to stay deterministic in this extended setting. It is split into two parts: the reduction of pure terms, i.e. non-extended terms or values, and the reduction of linear combinations thereof. \begin{itemize} \item Pure terms and values reduce using the reduction rules found in Table~\ref{tab:reduction}. We do not extend applicative contexts to linear combinations. The only slightly modified rule is the rule (IsoApp): the $t_i$ might now be a linear combination of pure terms $\sum_j\alpha_j\cdot t'_j$. We define $\sigma(t_i)$ as $\sum_j\alpha_j\cdot\sigma(t'_j)$. Because of the constraint on the typing rule for linear combinations, as long as $t_i$ is well-typed, the substitution is well-defined on all the $t'_j$. \item Consider the linear combination of pure terms $\sum_i\alpha_i\cdot t_i + \sum_j\beta_j\cdot t'_j$, where the $t'_j$ are in normal form but the $t_i$ are not: $t_i\to t''_i$. Then \[ \sum_i\alpha_i\cdot t_i + \sum_j\beta_j\cdot t'_j \to \sum_i\alpha_i\cdot t''_i + \sum_j\beta_j\cdot t'_j \] Note that this extended reduction relation is deterministic.
\end{itemize} \begin{example} \label{ex:had}\label{sec:example-quantum} This allows one to define an iso behaving as the Hadamard gate, or a slightly more complex iso conditionally applying another iso, whose behavior as a matrix is shown in Fig.~\ref{fig:sem-gate}. \[ \begin{array}{l} \mathtt{Had} : \mathbb{B} \leftrightarrow \mathbb{B} \\ \left( \begin{array}{r@{~}c@{~}l} \mathtt{t}\!\mathtt{t} & {}\leftrightarrow{} & \frac1{\sqrt2}\mathtt{t}\!\mathtt{t} + \frac1{\sqrt2}\mathtt{f}\!\mathtt{f} \\ \mathtt{f}\!\mathtt{f} & {}\leftrightarrow{} & \frac1{\sqrt2}\mathtt{t}\!\mathtt{t} - \frac1{\sqrt2}\mathtt{f}\!\mathtt{f} \end{array} \right)\!, \end{array} \] \[ \begin{array}{l} \mathtt{Gate} : \mathbb{B}\otimes\mathbb{B} \leftrightarrow \mathbb{B}\otimes\mathbb{B} \\ \left( \begin{array}{r@{~}c@{~}l} \pv{\mathtt{t}\!\mathtt{t}}{x} & {}\leftrightarrow{} & \letv{y}{{\tt Had}\,x}{\frac1{\sqrt2}\pv{\mathtt{t}\!\mathtt{t}}{y} + \frac1{\sqrt2}\pv{\mathtt{f}\!\mathtt{f}}{y}} \\ \pv{\mathtt{f}\!\mathtt{f}}{x} & {}\leftrightarrow{} & \letv{y}{{\tt Id}\,x}{~~\frac1{\sqrt2}\pv{\mathtt{t}\!\mathtt{t}}{y} - \frac1{\sqrt2}\pv{\mathtt{f}\!\mathtt{f}}{y}} \end{array}\right)\!. \end{array} \] \end{example} With this extension to linear combinations of terms, one can characterize normal forms as follows. \begin{lemma}[Structure of the normal forms] \label{lem:isonorm1} Let $\omega$ be such that $\vdash_{\isoterm}\omega : a\leftrightarrow b$. For all closed values $v$ of type $a$, the term $\omega\,v$ rewrites to a normal form $ \sum_{i=1}^N\alpha_i\cdot w_i $ where $N<\infty$, each $w_i$ is a closed value of type $b$ and $\sum_i|\alpha_i| = 1$. \end{lemma} \begin{proof} The fact that $\omega\,v$ converges to a normal form is a corollary of the fact that we impose structural recursion on fixpoints. The property of the structure of the normal form is then proven by induction on the maximal number of steps it takes to reach it. It uses the restriction on the introduction of sums in the typing rule for clauses in isos and the determinism of the reduction. \end{proof} In the classical setting, isos describe bijections between sets of closed values: it was proven by considering the behavior of an iso against its inverse. In the presence of linear combinations of terms, we claim that isos describe more than bijections: they describe unitary maps. In the next section, we discuss how types can be understood as Hilbert spaces (Sec.~\ref{sec:types-hilb}) and isos as unitary maps (Secs~\ref{sec:isos-blmpas} and~\ref{sec:isos-umaps}). \subsection{Modeling Types as Hilbert Spaces} \label{sec:types-hilb} By allowing complex linear combinations of terms, closed normal forms of finite types such as $\mathbb{B}$ or $\mathbb{B}\otimes\mathbb{B}$ can be regarded as complex vector spaces with basis consisting of closed values. For example, $\mathbb{B}$ is associated with $\denot{\mathbb{B}}=\{\alpha\cdot\mathtt{t}\!\mathtt{t} + \beta\cdot\mathtt{f}\!\mathtt{f}~|~\alpha,\beta\in\mathbb{C}\}\equiv\mathbb{C}^2$. We can consider this space as a complex Hilbert space where the scalar product is defined on basis elements in the obvious way: $\scalprod{v}{v} = 1$ and $\scalprod{v}{w} = 0$ if $v\neq w$. The map ${\tt Had}$ of Ex.~\ref{ex:had} is then effectively a unitary map on the space $\denot{\mathbb{B}}$. 
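For such finite types this can be checked very concretely. The following small Haskell sketch (our own illustration, in the style of the earlier examples and not part of the formal development; the names \verb|Vec|, \verb|had|, and \verb|dot| are ours) represents an element of $\denot{\mathbb{B}}$ by its two amplitudes and verifies numerically that {\tt Had} preserves the scalar product:
\begin{verbatim}
-- Sketch (ours): the action of Had on the span of B, i.e. C^2,
-- with a numerical check that the scalar product is preserved.
import Data.Complex

type Amp = Complex Double
type Vec = (Amp, Amp)       -- amplitudes of tt and ff

had :: Vec -> Vec           -- tt -> (tt+ff)/sqrt 2, ff -> (tt-ff)/sqrt 2
had (a, b) = ((a + b) / sqrt 2, (a - b) / sqrt 2)

dot :: Vec -> Vec -> Amp    -- scalar product on the span of B
dot (a, b) (c, d) = conjugate a * c + conjugate b * d

main :: IO ()
main = do
  let tt = (1, 0) :: Vec
      ff = (0, 1) :: Vec
  print (dot (had tt) (had ff))   -- close to 0, like <tt|ff>
  print (dot (had tt) (had tt))   -- close to 1, like <tt|tt>
\end{verbatim}
Running \verb|main| prints a value close to $0$ and a value close to $1$ (up to floating-point rounding), as expected for a map that preserves the scalar product on $\denot{\mathbb{B}}$.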
The problem comes from lists: the type $[\mathbb{1}]$ is inhabited by an infinite number of closed values: $[]$, $[()]$, $[(),()]$, $[(),(),()]$,\ldots To account for this case, we need to consider infinite-dimensional complex Hilbert spaces. In general, a complex Hilbert space~\cite{HSBook} is a complex vector space endowed with a scalar product that is complete with respect to the distance induced by the scalar product. The completeness requirement implies for example that the infinite linear combination $[] + \frac12\cdot[()] + \frac14[(),()] + \frac18[(),(),()] + \cdots$ needs to be an element of $\denot{[\mathbb{1}]}$. To account for these limit elements, we propose to use the standard~\cite{HSBook} Hilbert space $\ell^2$ of infinite sequences. \begin{definition}\label{def:hilb}\rm Let $a$ be a value type. As before, we write $\base{a}$ for the set of closed values of type $a$, that is, $\base{a} = \{ v ~|~ \vdash_v v:a \}$. The {\em span of $a$} is defined as the Hilbert space $\denot{a} = \ell^2(\base{a})$ consisting of sequences $(\phi_v)_{v\in\base{a}}$ of complex numbers indexed by $\base{a}$ such that $\sum_{v\in\base{a}}|\phi_v|^2<\infty$. The scalar product on this space is defined as $\scalprod{(\phi_v)_{v\in\base{a}}}{(\psi_v)_{v\in\base{a}}} = \sum_{v\in\base{a}} \overline{\phi_v}\psi_v$. \end{definition} We shall use the following conventions. A closed value $v$ of type $a$ is identified with the sequence $(\delta_{v,v'})_{v'\in\base{a}}$ where $\delta_{v,v} = 1$ and $\delta_{v,v'}=0$ if $v\neq v'$. An element $(\phi_v)_{v\in\base{a}}$ of $\denot{a}$ is also written as the infinite, formal sum $\sum_{v\in\base{a}}\phi_v\cdot v$. \subsection{Modeling Isos as Bounded Linear Maps} \label{sec:isos-blmpas} We can now define the linear map associated to an iso. \begin{definition}\label{def:linmap}\rm For each closed iso $\vdash_{\isoterm}\omega : a\leftrightarrow b$ we define $\denot{\omega}$ as the linear map from $\denot{a}$ to $\denot{b}$ sending the closed value $v:a$ to the normal form of $\omega\,v:b$ under the rewrite system. \end{definition} In general, the fact that $\denot{\omega}$ is well-defined is not trivial. Although it is formally established in Theorem~\ref{th:unitwdef}, we can first try to understand what could go wrong. The problem comes from the fact that the space $\denot{a}$ is not finite-dimensional in general. Consider the iso ${\tt map}~{\tt Had} : [\mathbb{B}]\leftrightarrow[\mathbb{B}]$. Any closed value $v:[\mathbb{B}]$ is a list and the term $({\tt map}~{\tt Had})\,v$ rewrites to a normal form consisting of a linear combination of lists. Denote the linear combination associated to $v$ by $L_v$. An element of $\denot{[\mathbb{B}]}$ is a sequence $ \phi = (\phi_v)_{v\in\base{[\mathbb{B}]}} $. From Definition~\ref{def:linmap}, the map $\denot{\omega}$ sends the element $\phi\in\denot{[\mathbb{B}]}$ to $ \sum_{v\in\base{[\mathbb{B}]}} \phi_v \cdot L_v. $ This is an infinite sum of sums of complex numbers: we need to make sure that it is well-defined; this is the purpose of the next result. Because of the constraints on the language, we can even show that it is a {\em bounded} linear map. In the case of the map ${\tt map}~{\tt Had}$, we can understand why it works as follows. The space $\denot{[\mathbb{B}]}$ can be decomposed as the direct sum $\sum_{i=0}^\infty E_i$, where $E_i$ is generated by all the lists of booleans of length $i$. The map ${\tt map}~{\tt Had}$ acts locally on each finite-dimensional subspace $E_i$. It is therefore well-defined.
Because of the unitarity constraint on the linear combinations appearing in {\tt Had}, the operation performed by ${\tt map}~{\tt Had}$ sends elements of norm 1 to elements of norm 1. This idea can be formalized and yields the following theorem. \begin{theorem} \label{th:unitwdef} For each closed iso $\vdash_{\isoterm}\omega : a\leftrightarrow b$ the linear map $\denot{\omega} : \denot{a}\to\denot{b}$ is well-defined and bounded. \end{theorem} \begin{proof} Consider a general element $e = (e_v)_{v\in\base{a}}$ of $\denot{a}$. Using Lemma~\ref{lem:isonorm1}, to each $v\in\base{a}$ one can attach a finite linear combination \[ W_v = \sum_{i=1}^{N_v}\alpha^v_i\cdot w^v_i \] such that $\omega\,v$ rewrites to $W_v$, with $\sum_i|\alpha^v_i|=1$. By definition, $\denot{\omega}(e)$ is a sequence indexed by $\base{b}$ where for all $w$ in $\base{b}$, \[ (\denot{\omega}(e))_w = \left(\sum_{v\in\base{a}} e_v\cdot W_v\right)_w =\sum_{v\in\base{a}} e_v\cdot \left(W_v\right)_w \] This series is absolutely convergent, and therefore well-defined. Indeed, \begin{alignat*}{100} \sum_{v\in\base{a}} \left|e_v\cdot \left(W_v\right)_w\right| & = \sum_{v\in\base{a}} |e_v|\cdot \left|\left(W_v\right)_w\right| \\ &\leq \sum_{v\in\base{a}} |e_v| & \text{because $|(W_v)_w|\leq 1$} \\ &\leq \infty & \text{because $e\in\denot{a}$.} \end{alignat*} To show that $\denot{\omega}(e)$ is indeed an element of $\denot{b}$, we need to show that its norm is well-defined. We show this by bounding it with the norm of the vector $e$. \begin{alignat*}{100} &\sum_{w\in\base{b}}|(\denot{\omega}(e))_w| \\ & = \sum_{w\in\base{b}}\left|\sum_{v\in\base{a}} e_v\cdot \left(W_v\right)_w\right| \\ & \leq \sum_{w\in\base{b}}\sum_{v\in\base{a}} \left|e_v\cdot \left(W_v\right)_w\right| \\ & = \sum_{w\in\base{b}}\sum_{v\in\base{a}} |e_v|\cdot |\left(W_v\right)_w| \\ & = \sum_{v\in\base{a}}\sum_{w\in\base{b}} |e_v|\cdot |\left(W_v\right)_w| \\ & = \sum_{v\in\base{a}}|e_v|\cdot\left(\sum_{w\in\base{b}} |\left(W_v\right)_w|\right) \\ & = \sum_{v\in\base{a}}|e_v|\cdot\left(\sum_{i=1}^{N_v} |\alpha^v_i|\right) \\ & = \sum_{v\in\base{a}}|e_v| \\ & = \|e\|. \end{alignat*} This concludes the proof that the linear map $\denot{\omega}$ is not only well-defined between the vector spaces $\denot{a}$ and $\denot{b}$ but also bounded. \end{proof} \subsection{Modeling Isos as Unitary Maps} \label{sec:isos-umaps} In this section, we show not only that closed isos can be modeled as bounded linear maps, but that these linear maps are in fact unitary maps. The problem comes from fixpoints. We first consider the case of isos written without fixpoints, and then the case with fixpoints. \paragraph{Without recursion.} The case without recursion is relatively easy to treat, as the linear map modeling the iso can be compositionally constructed out of elementary unitary maps. \begin{theorem}\label{th:unitnorec} Given a closed iso $\vdash_{\isoterm}\omega : a\leftrightarrow b$ defined without the use of recursion, the linear map $\denot{\omega} : \denot{a} \to \denot{b}$ is unitary. \end{theorem} \begin{proof}[Proof sketch] The proof of the theorem relies on the fact that to each closed iso $\vdash_{\isoterm}\omega : a\leftrightarrow b$ one can associate an operationally equivalent iso $\vdash_{\isoterm}\omega' : a\leftrightarrow b$ that uses neither iso-variables nor lambda-abstractions.
Such an iso $\omega'$ is necessarily of the canonical form \begin{equation}\label{eq:canonical-form} \left( \begin{array}{rcl} v_1 &\leftrightarrow& {\tt let}\,{p_{11}}=\omega_{11}\,p'_{11}~{\tt in}\\ && {\tt let}\,{p_{12}}=\omega_{12}\,p'_{12}~{\tt in}\\ && \cdots\\ && {\tt in}~\alpha_{11}w_1 + \cdots + \alpha_{1n}w_n \\ \vdots &\vdots& \vdots \\ v_n &\leftrightarrow& {\tt let}\,{p_{n1}}=\omega_{n1}\,p'_{n1}~{\tt in}\\ && {\tt let}\,{p_{n2}}=\omega_{n2}\,p'_{n2}~{\tt in}\\ && \cdots\\ && {\tt in}~\alpha_{n1}w_1 + \cdots + \alpha_{nn}w_n \end{array} \right) \end{equation} where all the $\omega_{ij}$ are also of this canonical form. We define the {\em applicative depth} of such an iso as follows: it is of depth~0 if its definition does not mention any $\omega_{ij}$, and if it does, its depth is $1$ plus the maximum depth of its $\omega_{ij}$. We then prove the theorem by induction on the depth of the iso $\omega'$. If it is of depth 0, then the result follows from the fact that $(\alpha_{ij})_{i,j}$ forms a unitary matrix. If it is of depth $n+1$, and if the result is true for all isos of depth less than or equal to $n$, then consider the factorization of the iso of Equation~\eqref{eq:canonical-form} into $\omega_{\it straight}$ followed by $\omega_{\it rotate}$, where \[ \begin{array}{@{}l@{}} \omega_{\it straight} =\\[1.2ex] \left( \begin{array}{r@{~\leftrightarrow~}l} v_1 & {\tt let}\,{p_{11}}=\omega_{11}\,p'_{11}~{\tt in}~ {\tt let}\,{p_{12}}=\omega_{12}\,p'_{12}~{\tt in}~ \cdots ~{\tt in}~w_1 \\ \cdots & \cdots \\ v_n & {\tt let}\,{p_{n1}}=\omega_{n1}\,p'_{n1}~{\tt in}~ {\tt let}\,{p_{n2}}=\omega_{n2}\,p'_{n2}~{\tt in}~ \cdots ~{\tt in}~w_n \end{array} \right), \end{array} \] \[ \begin{array}{@{}l@{}} \omega_{\it rotate} = \\[1.2ex] \left( \begin{array}{r@{~\leftrightarrow~}l} w_1 & \alpha_{11}w_1 + \cdots + \alpha_{1n}w_n \\ \cdots & \cdots \\ w_n & \alpha_{n1}w_1 + \cdots + \alpha_{nn}w_n \end{array} \right). \end{array} \] The iso $\denot{\omega_{\it rotate}}$ is unitary because $(\alpha_{ij})_{i,j}$ forms a unitary matrix. For the iso $\denot{\omega_{\it straight}}$, since the $v_i$'s form an exhaustive and non-overlapping coverage of $a$, the space $\denot{a}$ can be orthogonally decomposed as $\denot{v_1}\oplus\cdots\oplus\denot{v_n}$ where $\denot{v_i}$ is the subspace corresponding to the closed values matching $v_i$. Similarly, $\denot{b}$ can be orthogonally decomposed into $\denot{w_1}\oplus\cdots\oplus\denot{w_n}$. The map $\denot{\omega_{\it straight}}$ then sends each subspace $\denot{v_i}$ to the subspace $\denot{w_i}$. The corresponding operation is the composition of $\denot{\omega_{i1}}$ with $\denot{\omega_{i2}}$ with $\denot{\omega_{i3}}$ with \ldots, each being unitary by the induction hypothesis. Summing up, $\denot{\omega_{\it straight}}$ is unitary because it is the (orthogonal) sum of compositions of unitaries: $\denot{\omega'}$, and hence $\denot{\omega}$, is therefore unitary. \end{proof} As an illustration, the semantics of {\tt Gate} of Example~\ref{sec:example-quantum} is given in Figure~\ref{fig:sem-gate}. \paragraph{Isos with structural recursion.} When considering fixpoints, we can no longer rely on this finite compositional construction: the space $\denot{a}$ can no longer be regarded as a {\em finite} sum of subspaces described by each clause. We therefore need to rely on the formal definition of unitary maps on general, infinite-dimensional Hilbert spaces.
A bounded linear map $\denot{\omega}:\denot{a}\to\denot{b}$ is unitary if (1) it preserves the scalar product: $ \scalprod{\denot{\omega}(e)}{\denot{\omega}(f)} = \scalprod{e}{f} $ for all $e$ and $f$ in $\denot{a}$, and (2) it is surjective. \begin{theorem}\label{th:unitwithrec} Given a closed iso $\vdash_{\isoterm}\omega : a\leftrightarrow b$ that can use structural recursion, the linear map $\denot{\omega} : \denot{a} \to \denot{b}$ is unitary.\qed \end{theorem} The proof uses the idea highlighted in Sec.~\ref{sec:isos-blmpas}: for a structurally recursive iso of type $[a]\otimes b \leftrightarrow c$, the Hilbert space $\denot{[a]\otimes b}$ can be split into a canonical decomposition $E_0\oplus E_1\oplus E_2\oplus\cdots$, where $E_i$ contains only the values of the form $\pv{[x_1\ldots x_i]}{y}$, containing the lists of size $i$. On each $E_i$, the iso is equivalent to an iso without structural recursion. \begin{proof} First note that for any given {\em finite} set $(v_k)_k$ of values of type $a$, there exists an unfolding $\omega'$ of $\omega$ such that for all $k$, $ \omega' v_k \to\ldots \to W_k $ with $W_k$ a normal form of type $b$, and such that there is no rewrite step performing a fixpoint unfolding. (Indeed, for each $v_k$ there is such an unfolding: there exists a maximal unfolding that matches all of them.) We then ensure that $\denot{\omega'}$ acts in a unitary way on a subspace of $\denot{a}$, i.e. that the trick of the decomposition we used for the finite case in Theorem~\ref{th:unitnorec} can be applied here. This is the purpose of the condition of structural recursion. The point is that now for all $n$, the set $\base{a}$ can be decomposed as the disjoint union of the values whose list component has length at most $n$ and the ones whose list component has length larger than $n$: $ \base{a}^<_n = \{ v \otimes [e_1\ldots e_k] | v : a_1 \text{~and~} k \leq n \text{~and~} \forall i, e_i : a_2 \} $ and $ \base{a}^>_n = \{ v \otimes [e_1\ldots e_k] | v : a_1 \text{~and~} k > n \text{~and~} \forall i, e_i : a_2 \}. $ For a fixed $n$, the fixpoint $\mu f.\omega$ can be unfolded so that if $v \in \base{a}^<_n$ the rewriting of $(\mu f.\omega) v$ to a normal form does not need to contain any unfolding of $f$. Therefore, the Hilbert space $\denot{a}$ is the disjoint sum $\ell^2(\base{a}^<_n) \oplus \ell^2(\base{a}^>_n)$ and the action of $\denot{\mu f.\omega}$ on this space can be decomposed as $ \denot{\mu f.\omega}^<_n + \denot{\mu f.\omega}^>_n $ where $\denot{\mu f.\omega}^<_n$ has a representation as a composition of unitaries, invoking Theorem~\ref{th:unitnorec}. The corollary is that if $e$ and $f$ are elements of $\denot{a}$ with support in $\base{a}^<_n$ we effectively have $ \scalprod{\denot{\mu f.\omega}(e)}{\denot{\mu f.\omega}(f)} = \scalprod{e}{f}. $ Now, consider two {\em general} elements $e$ and $f$ of $\denot{a}$. We claim that we still have the above equality, and we obtain it by a limit argument. Indeed, construct $e_n$ and $f_n$, the elements built from $e$ and $f$ by restricting their support to $\base{a}^<_n$. Because of the norm condition on $e$ and $f$, the error between $\scalprod{e}{f}$ and $\scalprod{e_n}{f_n}$ and the error between $\scalprod{\denot{\mu f.\omega}(e)}{\denot{\mu f.\omega}(f)}$ and $\scalprod{\denot{\mu f.\omega}(e_n)}{\denot{\mu f.\omega}(f_n)}$ go to zero as $n$ goes to infinity. The last thing to check is that this operator is indeed surjective.
But the same argument as for classical fixpoints can be used: as long as the syntactic inverse is proven to be total, we retrieve surjectivity. \end{proof} \section{Conclusion} In this paper, we proposed a reversible language amenable to quantum superpositions of values. The language features a weak form of higher-order functions that is nonetheless expressive enough to capture interesting maps such as generalized Toffoli operators. We sketched how this language effectively encodes bijections in the classical case and unitary operations in the quantum case. It would be interesting to see how this relates to join inverse categories~\cite{catflow,KAARSGAARD201733}. In the vectorial extension of the language we have the same control as in the classical, reversible language. Tests are captured by clauses, and naturally yield quantum tests: this is similar to what can be found in QML~\cite{qml,qalternation}, yet more general since the QML approach is restricted to {\tt if-then-else} constructs. The novel aspect of quantum control that we are able to capture here is a notion of {\em quantum loops}. These loops were believed to be hard, if not impossible, to realize. What makes them work in our approach is the fact that we are firmly within a closed quantum system, without measurements. This makes it possible to only consider unitary maps and frees us from the L\"owner order on positive matrices~\cite{qalternation}. As we restrict fixpoints to structural recursion, valid isos are regular enough to capture unitarity. Ying~\cite{yingbook} also proposes a framework for quantum while-loops that is similar in spirit to our approach at the level of denotations: in his approach the control part of the loops is modeled using an external system of ``coins'', which, in our case, corresponds to conventional lists. Reducing the manipulation of this external coin system to iteration on lists allowed us to give a simple operational semantics for the language.
\section{INTRODUCTION} Many quantum materials are characterized by a rich phase diagram, in which numerous order parameters assume finite values in neighboring regions of the parameter space of temperature, chemical composition, mechanical strain, and electromagnetic fields. The natural instinct one has to capture this physics is to assign the different phases to competing order parameters. For example, if one finds antiferromagnetism (with order parameter $\mathbf{m}$) and superconductivity (with order parameter $\Delta$) nearby, one writes individual Ginzburg-Landau expansions for the free energies $f_{m}$ and $f_{\Delta}$ of both degrees of freedom, coupled by a symmetry-allowed term such as $f_{m-\Delta}=\gamma\mathbf{m}\cdot\mathbf{m}\left|\Delta\right|^{2}$. Positive $\gamma$ amounts to phase competition while negative $\gamma$ causes one phase to attract the other. While this approach proved to be very efficient in many cases \cite{Sachdev2004,Fernandes2010, Fernandes2010-2}, Landau theory cannot explain why multiple phases emerge close to each other in a phase diagram. Addressing this question usually requires a microscopic description in terms of a model Hamiltonian, a task that can be technically challenging. Given the abundance of complex phase diagrams in correlated electronic systems, it is desirable to identify general principles to describe the close relationship between their multiple ordered states. An underlying general principle to rationalize complex phase diagrams without necessarily resorting to a microscopic description was recently advocated in Ref. \cite{Fradkin2015} and is generally referred to as intertwined order. The idea is that multiple phases of a rich phase diagram are born out of a primary state. A prime example of such behavior is pair-density-wave order, which entangles superconductivity and density waves \cite{Himeda2002,Agterberg08,Berg09,Berg2009, Berg2009-2,Loder2011}. Intertwined orders can also arise due to the interactions induced by a primary order parameter near a quantum phase transition. For example, antiferromagnetic or nematic fluctuations near quantum critical points have been proposed to provide or enhance the pairing interactions for a superconducting phase \cite{Chubukov2003,Lederer2008}. In this review, we focus on a particular realization of intertwined phases in terms of vestigial \textendash{} or composite \textendash{} order. Composite order exists when higher-order combinations of potentially symmetry-breaking order parameters condense. Consider a complex multi-component field $\eta_{\alpha}$, where $\alpha$ labels the order parameter components. A finite expectation value $\left\langle \eta_{\alpha}\right\rangle $ would break a certain symmetry of the system \textendash{} for instance, time-reversal in the case of ferromagnetism or translational symmetry in the case of charge order. Composite order then corresponds to the case where certain combinations of the product of the order parameters are on average non-zero, whereas each individual order parameter remains zero on average: \begin{equation} \left\langle \eta_{\alpha}^{*}\eta_{\beta}\right\rangle \neq0\qquad\mathrm{but}\qquad\left\langle \eta_{\alpha}\right\rangle =0.\label{eq:composite intro} \end{equation} The bilinear combination $\left\langle \eta_{\alpha}^{*}\eta_{\beta}\right\rangle $ itself behaves as an order parameter, which breaks only a subset of the symmetries broken by $\eta_{\alpha}$.
For this reason, the composite order is called a \emph{vestige} of the primary phase where $\left\langle \eta_{\alpha}\right\rangle $ is finite. This makes both the composite and primary orders naturally intertwined. At first glance, this scenario may seem rather contrived. However, as we will show here, it naturally arises in many quantum materials when the primary order parameter has multiple components, such that the primary phase is degenerate. In Eq. (\ref{eq:composite intro}) we allowed for $\eta_{\alpha}$ to be complex. This is relevant for superconductors or incommensurate density-wave states. There are two complementary ways to approach a composite ordered phase. If one starts from the primary ordered phase, composite order can be understood as a partial melting of the former, before the system goes to a completely disordered phase \cite{Kivelson98,Zaanen04}. Conversely, starting from the disordered phase, vestigial order can be understood as a fluctuation-induced composite order, i.e. a state of symmetry-breaking fluctuations \cite{Fernandes2012}. Since these fluctuations are naturally strong near the phase transition of the primary order parameter, this line of reasoning explains the existence of multiple nearby ordered states, largely using symmetry arguments. It allows for predictability of complex phase diagrams, even in strongly correlated materials. \begin{figure*} \begin{centering} \includegraphics[width=\linewidth]{Figure-01-V1-3} \par\end{centering} \caption{Schematic phase diagrams for the primary order (denoted by the parent order parameter $\left\langle \boldsymbol{\eta}\right\rangle $) and the vestigial order (denoted by the composite order parameters $\left\langle \boldsymbol{\eta}^{\dagger}\boldsymbol{\tau\eta}\right\rangle $ and $\left\langle \boldsymbol{\eta}\boldsymbol{\tau\eta}\boldsymbol{\tau\eta}\right\rangle $). Second-order (first-order) transitions are denoted by solid (dashed) lines. Panels (a), (b), and (c) show three possible outcomes for the quantum phase transitions of the vestigial and primary orders, in the case when their finite-temperature phase transitions are split. Panel (d) illustrates the appearance of two different vestigial orders when the condensed component of the order parameter $\left\langle \boldsymbol{\eta}\right\rangle $ of the primary phase changes across the phase diagram. Panel (e) displays a situation in which two different vestigial orders appear, corresponding to bilinear and trilinear composites. Panel (f) illustrates the case in which the vestigial order itself has a regime with quasi-long-range order, giving rise to a critical vestigial phase. The parameter $g$ here corresponds to some external tuning parameter. Other phase diagrams not shown here are also possible \label{fig_phase_diagrams}. } \end{figure*} The richness of the phase diagrams involving vestigial orders contrasts with the well-known phase diagrams involving competing phases \cite{Kosterlitz76,Aharony03}. In the latter, the system displays either a bicritical or a tetracritical point, depending on whether the competing orders phase-separate or coexist, respectively. In contrast, several outcomes are possible in the former case, some of which are illustrated in Fig. \ref{fig_phase_diagrams}. A key feature is that the behavior at finite temperatures can be very different from that at $T=0$. For instance, in the simple case of split vestigial and primary transitions at finite temperatures, the system may display two quantum critical points (Fig.
\ref{fig_phase_diagrams}a), a single first-order quantum phase transition (Fig. \ref{fig_phase_diagrams}b), or even a single quantum critical point (Fig. \ref{fig_phase_diagrams}c). Importantly, in several models more than one vestigial order appears. Two vestigial orders can appear if the non-zero component of the primary order parameter $\left\langle \eta_{\alpha}\right\rangle $ changes along the phase diagram (Fig. \ref{fig_phase_diagrams}d). Moreover, certain systems can display additional vestigial phases formed by composite trilinear order parameters (Fig. \ref{fig_phase_diagrams}e) or quasi-long-range ordered bilinears (Fig. \ref{fig_phase_diagrams}f). Examples of these cases will be given throughout the review. Historically, fluctuation-induced composite order has played an important role in the area of frustrated magnetism and is closely related to the concept of order-from-disorder \cite{Villain77,Shender82,Henley88}. The identification of an emergent, vestigial Ising degree of freedom in a frustrated two-dimensional Heisenberg model in Ref. \cite{Chandra1990} is a beautiful and influential example of vestigial order. More recently, the concept of composite order played a prominent role in the explanation of nematicity, i.e. electronically-driven rotational symmetry-breaking, in iron-based superconductors \cite{Fang2008,Xu2009,Fernandes2010b,Fernandes2012,Fernandes2014}. As we argue here, the applicability of this concept goes well beyond frustrated magnetism and nematicity, opening interesting routes to investigate unusual electronic states in unconventional superconductors and density-wave systems. In order to move beyond particular examples and systems, it is important to put the concept of composite order on formal grounds, which can be achieved using symmetry arguments. Let the complex primary order parameter $\eta_{\alpha}$ transform under a specific irreducible representation $\Gamma$ of the symmetry group ${\cal G}$ of the problem. Then the components $\alpha=1,\cdots,d_{\Gamma}$ refer to the elements within the irreducible representation of dimension $d_{\Gamma}$. The composite order parameter \begin{equation} \phi_{m}=\sum_{\alpha\beta}\eta_{\alpha}^{*}\Lambda_{\alpha\beta}^{m}\eta_{\beta}\label{eq: composite} \end{equation} transforms under one of the irreducible representations $\Gamma^{m}$ that is contained in the product $\Gamma^{*}\otimes\Gamma$ \cite{Hergert2018,Dresselhaus}. Here $\Lambda_{\alpha\beta}^{m}$ is a $d_{\Gamma}\times d_{\Gamma}$-dimensional matrix that transforms under $\Gamma^{m}$. Elementary group theoretical arguments show that symmetry-breaking composites can only be formed out of multi-component primary order parameters, i.e. $d_{\Gamma}>1$. Otherwise, $\phi_{m}$ must transform under the trivial representation and will not break a symmetry. Thus, composite order of the type of Eq. \eqref{eq: composite} requires a non-Abelian symmetry group ${\cal G}$. Fortunately, such groups are plentiful in generic condensed-matter systems. In the remainder of this review, we will apply and generalize such symmetry arguments to analyze composite order that is driven by strong fluctuations. To set the stage, we start by discussing the case of $p$-wave unconventional superconductivity (Sec. II), followed by the cases of density-waves on the square lattice (Sec. III) and on the hexagonal lattice (Sec. IV). The latter have important consequences for the phase diagrams of iron-based superconductors and graphene, respectively. 
Section V discusses other examples and possible extensions of these ideas, including an example of a system where an emergent symmetry of the ground-state leads to the absence of vestigial order. We will demonstrate that a rich plethora of electronic states with scalar and vector chiral order, spin-nematic order, Ising-nematic order, time-reversal symmetry-breaking order, and critical phases emerge out of this simple underlying principle. \section{VESTIGIAL ORDER FROM UNCONVENTIONAL SUPERCONDUCTIVITY} To set the stage for the next sections, we start by investigating vestigial order in unconventional superconductors. As a specific example, we consider a $p$-wave superconductor on a tetragonal ($d=3$) or square ($d=2$) lattice. The amplitude $\left\langle c_{\mathbf{k}\alpha}^{\dagger}c_{-\mathbf{k}\beta}^{\dagger}\right\rangle $ of a Cooper pair that consists of one electron with momentum $\mathbf{k}$ and spin $\alpha$ and another electron with $-\mathbf{k}$ and $\beta$ is efficiently characterized in terms of the d-vector $\mathbf{d}_{\mathbf{k}}$: \begin{equation} \Delta_{\alpha\beta}\left(\mathbf{k}\right)=\left[\left(\mathbf{d}_{\mathbf{k}}\cdot\boldsymbol{\sigma}\right)i\sigma_{y}\right]_{\alpha\beta}. \end{equation} Here, $\sigma_{j}$ are Pauli matrices. The Pauli principle dictates that the d-vector is odd under inversion, i.e. $\mathbf{d}_{-\mathbf{k}}=-\mathbf{d}_{\mathbf{k}}$, such that the gap function is antisymmetric with respect to the exchange of the two electrons that form the Cooper pair. In the case of a triplet Cooper pair with $S_{z}=0$, the d-vector is parallel to the $z$-axis and can be parametrized as: \begin{equation} \mathbf{d}_{\mathbf{k}}=\hat{\mathbf{z}}\left(\eta_{x}\sin k_{x}a+\eta_{y}\sin k_{y}a\right).\label{d-vector} \end{equation} Here, $a$ is the lattice constant in the $xy$ plane. The two complex order parameters $\eta_{x}$ and $\eta_{y}$ thus correspond to $p_{x}$ and $p_{y}$ superconducting states, respectively. Theoretically, $p$-wave superconductivity is expected when pairing is mediated by the exchange of ferromagnetic fluctuations. Experimentally, material candidates for $p$-wave superconductors include the ruthenate Sr$_{2}$RuO$_{4}$\cite{Mackenzie2003,Kallin2012} and the doped topological insulator Cu$_{x}$Bi$_{2}$Se$_{3}$\cite{Hor2010,Kriener2011}. \subsection{Symmetry classification} We can build a Ginzburg-Landau expansion of the free energy $f$ in terms of the two-component order parameter $\boldsymbol{\eta}\equiv\left(\eta_{x},\eta_{y}\right)$. The usual form for the expansion for a system with spin-orbit coupling and tetragonal point group $D_{4h}$ is \cite{Sigrist91} (gradient terms are neglected for the sake of clarity): \begin{align} f =\frac{r}{2}\left(\left|\eta_{x}\right|^{2}+\left|\eta_{y}\right|^{2}\right)+\frac{u}{4}\left(\left|\eta_{x}\right|^{4}+\left|\eta_{y}\right|^{4}\right) + \frac{g}{2}\left|\eta_{x}\right|^{2}\left|\eta_{y}\right|^{2}+\frac{w}{8}\left(\eta_{x}\eta_{y}^{*}+\eta_{y}\eta_{x}^{*}\right)^{2}.\label{aux_SC_free_energy-1} \end{align} The terms that determine the allowed ground states are the quartic ones. 
For our purposes, it is therefore convenient to write $f$ in terms of bilinears: \begin{equation} f=\frac{r}{2}\,\phi_{0}+\frac{\left(u+g\right)}{8}\,\phi_{0}^{2}+\frac{\left(u-g\right)}{8}\,\phi_{3}^{2}+\frac{w}{8}\,\phi_{1}^{2},\label{SC_free_energy1} \end{equation} where \begin{equation} \phi_{m}=\sum_{\alpha\beta}\eta_{\alpha}^{*}\tau_{\alpha\beta}^{m}\eta_{\beta}\label{SC_bilinears} \end{equation} are the possible bilinear forms, with the Pauli matrices $\tau_{\alpha\beta}^{m}$ playing the role of the matrices $\Lambda_{\alpha\beta}^{m}$ in Eq.\eqref{eq: composite}. The absence of the $\phi_{2}^{2}$ term is a consequence of the Fierz identity, $\phi_{1}^{2}+\phi_{3}^{2}=\phi_{0}^{2}-\phi_{2}^{2}$, which implies that one can always express one of the allowed bilinear forms in terms of the others. The different possible $p$-wave superconducting states are obtained by minimizing the free energy, and are given by \begin{eqnarray} \boldsymbol{\eta}_{B_{1g}} & \propto & \left(1,0\right)\:{\rm or}\:\left(0,1\right),\nonumber \\ \boldsymbol{\eta}_{B_{2g}} & \propto & \left(1,\pm1\right),\nonumber \\ \boldsymbol{\eta}_{A_{2g}} & \propto & \left(1,\pm i\right). \end{eqnarray} Which state is realized depends on the values of the quartic coefficients. The $B_{1g}$ superconducting state is the ground state when $u-g<\mathrm{min}\left(0,w\,\right)$; the $B_{2g}$ ground state takes place when $u-g>w$ and $w<0$; and the $A_{2g}$ ground state is realized when $u-g>0$ and $w>0$. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{Figure-02} \par\end{centering} \caption{The three possible $S_{z}=0$ triplet superconducting states in a tetragonal system (upper panels), and the corresponding vestigial phases (lower panels). Panel (a) shows the $B_{1g}$-nematic state; panel (b), the $B_{2g}$-nematic state; and panel (c), the $A_{2g}$ time-reversal symmetry-breaking state. The primary phases are illustrated by three-dimensional plots of the gap function $\left|\Delta\right|^{2}$, defined in Eq. (\ref{d-vector}), around a circular Fermi surface. The vestigial phases are illustrated by the behavior of the phases defined in Eq. (\ref{gap_eta}). While the global phase $\theta$ is always fluctuating in the vestigial phases, one of the phases $\alpha$ or $\psi$ can acquire two different values, evidencing the Ising-like character of the vestigial order parameters. \label{fig_p_wave}} \end{figure} At the superconducting transition, the global $U\left(1\right)$ symmetry is broken or, in the case of a two-dimensional system, algebraic order sets in via a Berezinskii-Kosterlitz-Thouless (BKT) transition. In addition, each of the three possible ground states also breaks a discrete Ising-like ($Z_{2}$) symmetry of the system, as illustrated schematically in Fig. \ref{fig_p_wave}. The $B_{1g}$ state breaks the tetragonal symmetry of the lattice such that the $x$ and $y$ directions are inequivalent. This is shown in the upper panel of Fig. \ref{fig_p_wave}a, which plots $\left|\Delta\right|^{2}$ for a circular Fermi surface. It therefore corresponds to a $B_{1g}$ nematic superconductor, since the system retains horizontal and vertical mirror symmetries. The $B_{2g}$ state with $\eta_{x}=\pm\eta_{y}$ also breaks the tetragonal symmetry of the lattice, but by making the diagonal directions $xy$ and $x\bar{y}$ inequivalent (upper panel of Fig. \ref{fig_p_wave}b). As a result, it is a $B_{2g}$ nematic superconductor, where the diagonal mirror symmetries are preserved. 
Finally, the state with $\eta_{x}=\pm i\eta_{y}$ breaks time-reversal symmetry (upper panel of Fig. \ref{fig_p_wave}c). It supports orbital currents associated with the chirality of the gap function, and as such it transforms as the $A_{2g}$ irreducible representation of the tetragonal group. These properties of a $p$-wave superconductor are efficiently captured within the framework outlined in the introduction. The symmetry group of the problem is $\mathcal{G}=D_{4h}\otimes U(1)$, where $D_{4h}$ refers to the tetragonal point group that describes the square and tetragonal lattices, and $U(1)$ is the continuous group related to the complex nature of the order parameters $\eta_{\alpha}$. Importantly, the $\eta_{x}$ and $\eta_{y}$ order parameters transform according to the two-dimensional irreducible representation $E_{u}$ of $D_{4h}$, i.e. $\Gamma=E_{u}\times e^{im\theta}$. We focus on composite order parameters that do not break the $U\left(1\right)$ symmetry. The product \begin{equation} \Gamma^{*}\otimes\Gamma = E_{u}\otimes E_{u} = A_{1g}\oplus B_{1g}\oplus B_{2g}\oplus A_{2g}\label{eq:Eu_irreps} \end{equation} is then decomposed in terms of four one-dimensional irreducible representations of the $D_{4h}$ group. From Eq. (\ref{eq:Eu_irreps}), we conclude that there are four different values of the irreducible representation index, $m=0,\,1,\,2,\,3$. The associated bilinear forms are the same as those introduced in Eq. \eqref{SC_bilinears}. In explicit form, we have: $\phi_{0} =\boldsymbol{\eta}^{\dagger}\tau^{0}\boldsymbol{\eta}=\left|\eta_{x}\right|^{2}+\left|\eta_{y}\right|^{2}$, $\phi_{1} =\boldsymbol{\eta}^{\dagger}\tau^{1}\boldsymbol{\eta}=\eta_{x}\eta_{y}^{*}+\eta_{y}\eta_{x}^{*}$, $\phi_{2} =\boldsymbol{\eta}^{\dagger}\tau^{2}\boldsymbol{\eta}=i\left(\eta_{x}\eta_{y}^{*}-\eta_{y}\eta_{x}^{*}\right)$, and $\phi_{3} =\boldsymbol{\eta}^{\dagger}\tau^{3}\boldsymbol{\eta}=\left|\eta_{x}\right|^{2}-\left|\eta_{y}\right|^{2}$. Because $\phi_{0}$ transforms as the trivial irreducible representation $A_{1g}$, it does not break any symmetry of the system. As a result, it cannot serve as a vestigial order parameter, but instead corresponds to fluctuations present in the vicinity of the normal-state to superconducting phase transition, regardless of the nature of the superconducting state. $\phi_{1}$, on the other hand, transforms as the $B_{2g}$ irreducible representation and, as such, is a nematic vestigial order parameter that breaks the tetragonal symmetry of the system. It is clear that it is only compatible with the $B_{2g}$ nematic superconducting ground state, in which $\eta_{x}=\pm\eta_{y}$. Similarly, $\phi_{3}$ transforms as the $B_{1g}$ irreducible representation, and is thus also a nematic vestigial order, compatible with the $B_{1g}$ nematic superconducting state in which either $\eta_{x}=0$ or $\eta_{y}=0$. Finally, $\phi_{2}$ transforms as the $A_{2g}$ irreducible representation, and thus breaks time-reversal symmetry, since $A_{2g}$ corresponds to orbital angular momentum along the $z$ axis. Clearly, it is only compatible with the time-reversal symmetry-breaking superconducting ground state, in which $\eta_{x}=\pm i\eta_{y}$. The coefficients of the $\phi_{i}^{2}$ terms (with $i=1,\,2,\,3$), also called ``masses'' in field theory, determine which of the vestigial orders can appear. From Eq. 
(\ref{SC_free_energy1}) and the Fierz identity we conclude that if $u-g<\mathrm{min}\left(0,w\,\right)$, the mass of the $\phi_{3}^{2}$ term is negative, and smaller than the masses of the $\phi_{1}^{2}$ and $\phi_{2}^{2}$ terms, indicating a tendency towards $B_{1g}$ vestigial order. $u-g<\mathrm{min}\left(0,w\,\right)$ is also the condition that ensures that the ground state is the $B_{1g}$ nematic superconducting state. Similar results hold in the other two regions of the parameter space $(u-g,\,w)$. The key remaining question is whether the composite order parameter $\phi_{i}$ can condense even in the absence of superconducting order, i.e. whether the system can display a regime in which $\left\langle \boldsymbol{\eta}^{\dagger}\tau^{i}\boldsymbol{\eta}\right\rangle \neq0$ but $\left\langle \boldsymbol{\eta}\right\rangle =0$. The $\phi_{i}$ are $Z_{2}$ (Ising-like) order parameters in this case, since they each transform according to one-dimensional irreducible representations, whereas $\boldsymbol{\eta}$ are complex $U(1)$ fields. Within mean-field, both the $Z_{2}$ and $U(1)$ symmetries are broken at the same temperature. However, once fluctuations are included, the natural result is that they are broken at two different temperatures or that a joint first order transition takes place. These two options are the generic behaviors of two order parameters that break different symmetries, whereas the simultaneous and second-order transition is only correct within a mean-field description. Since mean-field theory is appropriate for many superconductors, one expects quantitatively small effects. There are, however, a number of low-density and low-dimensional superconductors that are governed by sizable fluctuations of the superconducting order parameter and that are strong candidates for vestigial order. Examples are doped Bi$_{2}$Se$_{3}$\cite{Hor2010,Kriener2011} and the half-Heusler systems LuPtBi and YbPtBi\cite{Goll2008,Butch2011,Tafti2013,Xu2014}. In fact, the observed nematic order below $T_{c}$ in Cu- and Sr\textendash doped Bi$_{2}$Se$_{3}$ \cite{Matano2016,Yonezawa2017,Pan2016,Du2017,Asaba2017,Shen2017} strongly suggests a nematic phase above $T_{c}$\cite{Hecker2018}. Due to the trigonal point group $D_{3d}$ of this material, it follows that the vestigial nematic order parameter behaves like a three-state Potts model\cite{Hecker2018}. The cuprates are another class of materials where strong superconducting fluctuations are present. However, the gap function is $d_{x^{2}-y^{2}}$, which transforms as a one-dimensional irreducible representation of the $D_{4h}$ group. Consequently, vestigial order related to superconductivity in the cuprates can only arise if there is additional translational symmetry breaking, as is the case for pair-density-wave states. Several recent works have focused on the issue of vestigial orders of the pair-density-waves, mostly in the context of the cuprates \cite{Agterberg08,Berg09,Yuxuan15,Zaanen17}. \subsection{Model calculations} \label{sec:model-calculations} Symmetry arguments can take us this far, but to proceed and determine whether the superconducting and vestigial orders are split, explicit calculations are necessary. 
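Before turning to fluctuations, the mean-field ground-state selection quoted above can be checked directly by minimizing the free energy of Eq. (\ref{aux_SC_free_energy-1}). The short Python sketch below is an illustration we add for concreteness (it is not part of the formal analysis, and the parameter values are arbitrary): it minimizes $f$ over the complex doublet $\left(\eta_{x},\eta_{y}\right)$ and classifies the minimizer through the normalized bilinears of Eq. \eqref{SC_bilinears}.
\begin{verbatim}
# Illustrative check of the mean-field p-wave ground states: minimize the
# quartic Ginzburg-Landau free energy over a complex two-component order
# parameter and classify the result via the normalized bilinears.
import numpy as np
from scipy.optimize import minimize

def free_energy(x, r, u, g, w):
    ex, ey = x[0] + 1j*x[1], x[2] + 1j*x[3]
    phi0 = abs(ex)**2 + abs(ey)**2
    phi1 = 2*np.real(ex*np.conj(ey))
    phi3 = abs(ex)**2 - abs(ey)**2
    return 0.5*r*phi0 + (u+g)/8*phi0**2 + (u-g)/8*phi3**2 + w/8*phi1**2

def classify(x):
    ex, ey = x[0] + 1j*x[1], x[2] + 1j*x[3]
    phi0 = abs(ex)**2 + abs(ey)**2
    phi1 = 2*np.real(ex*np.conj(ey)) / phi0
    phi2 = 2*np.imag(np.conj(ex)*ey) / phi0
    phi3 = (abs(ex)**2 - abs(ey)**2) / phi0
    if abs(phi3) > 0.9:
        return "B1g nematic, (1,0) or (0,1)"
    if abs(phi1) > 0.9:
        return "B2g nematic, (1,+-1)"
    if abs(phi2) > 0.9:
        return "A2g chiral,  (1,+-i)"
    return "undetermined"

rng = np.random.default_rng(1)
for u, g, w in [(1.0, 2.0, 0.5),    # u-g < min(0,w): expect B1g
                (2.0, 1.0, -0.5),   # u-g > w, w < 0: expect B2g
                (2.0, 1.0, 0.5)]:   # u-g > 0, w > 0: expect A2g
    best = min((minimize(free_energy, rng.normal(size=4), args=(-1.0, u, g, w))
                for _ in range(5)), key=lambda res: res.fun)
    print(f"u-g = {u-g:+.1f}, w = {w:+.1f}  ->  {classify(best.x)}")
\end{verbatim}
For the three parameter sets the minimizer is of $B_{1g}$, $B_{2g}$, and $A_{2g}$ type, respectively, in agreement with the conditions quoted above.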
Approaching the vestigial order from the melted ordered state, we parametrize the order parameter in terms of: \begin{equation} \boldsymbol{\eta}\left(\mathbf{x}\right)=\sqrt{n_{0}}e^{i\theta\left(\mathbf{x}\right)}\left(\begin{array}{c} \cos\alpha\left(\mathbf{x}\right)\\ e^{i\psi\left(\mathbf{x}\right)}\sin\alpha\left(\mathbf{x}\right) \end{array}\right),\label{gap_eta} \end{equation} with constant $n_{0}$. There are three coordinate-dependent variables: the global phase $\theta$, the relative phase $\psi$ between the two $p$-wave components $p_{x}$ and $p_{y}$, and the angle $\alpha$ that controls the relative weight of the two components. In each of the three vestigial phases, $\left\langle e^{i\theta\left(\mathbf{x}\right)}\right\rangle =0$, implying that the system has no superconducting order. Moreover, either $\alpha$ or $\psi$ can acquire two values in a given vestigial phase, highlighting the Ising character of the composite order parameters (see Fig. \ref{fig_p_wave}). For concreteness, let us consider a two-dimensional system with $u-g>w>0$, which corresponds to the mean-field ground state $\boldsymbol{\eta}_{A_{2g}}\propto\left(1,\pm i\right)$. The effective action $S\equiv F/T$, where $F$ is the total free energy, has two contributions: the gradient term (we neglect here the coupling to the electromagnetic field) \begin{equation} S_{{\rm grad}} = \frac{1}{2T}\int d^{2}x\left\{ \left(\partial_{\mu}\theta\right)^{2}+\left(\partial_{\mu}\alpha\right)^{2}+\sin^{2}\alpha\left(\partial_{\mu}\psi\right)^{2}\right. + \left.2\sin^{2}\alpha\partial_{\mu}\psi\partial_{\mu}\theta\right\} \end{equation} with some dimensionless temperature $T$, and the potential term \begin{equation} S_{{\rm pot}}=-\frac{\Delta}{a^{2}T}\int d^{2}x\sin^{2}\left(2\alpha\right)\sin^{2}\psi, \end{equation} where $\Delta=wn_{0}^{2}a^{2}$ is a dimensionless constant with $w>0$ from Eq.\eqref{aux_SC_free_energy-1}. This action can be analyzed using renormalization-group techniques (for a related problem, see Ref. \cite{Fellows2012}). The key result is the onset of time-reversal symmetry (TRS) breaking at a temperature $T_{0}\sim2\pi/\log\left(\Delta^{-1}\right)$. For $T>T_{0}$, the gradient term dominates the renormalization-group flow, and the system behaves similarly to Heisenberg $O(3)$ spins. In this regime, the superconducting correlation length follows the usual behavior of the non-linear sigma model with spin correlation length $\xi\left(T>T_{0}\right)=ae^{\pi/T}$. Below $T_{0}$, the potential term starts to dominate. Because $\Delta>0$ increases under the renormalization group flow, the effect of this term is to lock the variables $\alpha$ and $\psi$ in order to minimize the energy, i.e. $\alpha=\frac{\pi}{4}$ and $\psi=\frac{\pi}{2}$ or $\psi=\frac{3\pi}{2}$. As a result, the order parameter is that of a $p_{x}\pm ip_{y}$ superconductor with a fluctuating phase: \begin{equation} \boldsymbol{\eta}\left(x\right)=\sqrt{\frac{n_{0}}{2}}e^{i\theta\left(x\right)}\left(\begin{array}{c} 1\\ \pm i \end{array}\right). \end{equation} Now the only relevant variable is the overall superconducting phase, such that the gradient term becomes \begin{equation} S_{{\rm grad}}\rightarrow\frac{1}{2T}\int d^{2}x\left(\partial_{\mu}\theta\right)^{2}, \end{equation} which is the same action as the usual XY-model. 
As a result, the system becomes governed by the BKT behavior of the XY-model, with the key difference that the size of the vortex core is $\xi\left(T_{0}\right)\approx\frac{a}{\sqrt{\Delta}}$. Because $\xi\left(T_{\mathrm{BKT}}\right)\rightarrow\infty$, the BKT transition temperature $T_{\mathrm{BKT}}$ is clearly below $T_{0}$, even though we find, following Ref. \cite{Fellows2012}, that both temperatures are parametrically of the same order. To unveil the meaning of the temperature $T_{0}$, we note that the potential term can be alternatively expressed in terms of the vestigial $A_{2g}$ order parameter $\phi_{2}$: \[ S_{{\rm pot}}=-\frac{\Delta}{a^{2}T}\int d^{2}x\left(\frac{\phi_{2}}{n_{0}}\right)^{2}. \] As the correlation length increases, regions of typical size $\xi$ essentially share the same value of the Ising variable $\phi_{2}/n_{0}\approx\pm1$. For a two-dimensional system, this implies that a true Ising-like phase transition takes place when the correlation length becomes comparable to the Ising domain-wall thickness $a/\sqrt{\Delta}$: \begin{equation} \Delta\frac{\xi\left(T\right)^{2}}{a^{2}}\approx1.\label{eq:T0 d2} \end{equation} Using $\xi\left(T\right)=ae^{\pi/T}$, this condition becomes $\Delta e^{2\pi/T}\approx1$, which immediately yields $T_{{\rm c,Ising}}=2\pi/\log\left(\Delta^{-1}\right)=T_{0}$. Thus, $T_{0}>T_{\mathrm{BKT}}$ signals a true Ising-like phase transition to the vestigial state that breaks time-reversal symmetry, but does not display quasi-long-range superconducting order. This result agrees with analyses of related models that also find Ising order setting in above the BKT transition \cite{Korshunov02,Vicari05}. One can also approach the vestigial phase if the system is not exactly two-dimensional, i.e. if true superconducting long-range order can take place. Generally, different techniques can be employed, such as the renormalization group \cite{Qi09,Millis10,Fernandes2012}, the self-consistent Gaussian approximation \cite{Nie17}, and the saddle-point large-$N$ approximation \cite{Fernandes2012}. For the specific case of a $p$-wave superconductor, the self-consistent Gaussian approximation was employed in Ref. \cite{Fischer16}. Here, we will focus on the large-$N$ approach: in this method, one starts with the free energy (\ref{SC_free_energy1}), complemented by the standard gradient terms, and decouples the quartic terms, which are quadratic in the bilinears, using Hubbard-Stratonovich transformations. In the parameter regime relevant for $A_{2g}$ superconducting order, $u-g>w>0$, it is sufficient to keep only the fields corresponding to the $\phi_{0}$ and $\phi_{2}$ bilinears, obtaining the action \begin{align} S =\int_{k}\boldsymbol{\eta}_{k}^{\dagger}\left[\left(r+\phi_{0}+k^{2}\right)\boldsymbol{\tau}_{0}+\phi_{2}\boldsymbol{\tau}_{y}\right]\boldsymbol{\eta}_{k} +\frac{\phi_{2}^{2}}{4w}-\frac{\phi_{0}^{2}}{4\left(u+g+w\right)}. \end{align} In the disordered state, the fields $\boldsymbol{\eta}$ are fluctuating and can be integrated out exactly, resulting in an action that depends only on $\phi_{0}$ and $\phi_{2}$. The equation of state for $\phi_{2}$ can then be obtained using a saddle-point approximation, which is formally exact in the limit where the number $N$ of components of $\boldsymbol{\eta}$ is $N\rightarrow\infty$. To linear order in the vestigial order parameter $\phi_{2}$, one obtains: \begin{equation} \frac{\phi_{2}}{w}=A\xi^{4-d}\phi_{2}, \end{equation} where $A$ is some constant, $\xi\left(T\right)$ is the temperature-dependent correlation length, and $d>2$ is the dimensionality of the system. 
This equation allows for a non-zero $\phi_{2}$ at the critical temperature $T_{0}$ where $\xi\left(T_{0}\right)=\left(Aw\right)^{-\frac{1}{4-d}}$, which is reached above the temperature $T_{c}$ at which long-range superconductivity appears, since $\xi\left(T_{c}\right)\rightarrow\infty$. For $d=2$ we recover the previous result for the correlation length at the vestigial transition. The only way to avoid vestigial order at a separate transition temperature $T_{0}>T_{c}$ is via a simultaneous first-order transition, since in this case $\xi(T_{c})$ no longer diverges. This is the generic behavior that occurs in isotropic, three-dimensional systems\cite{Fernandes2012}. The splitting into two second-order transitions in low-dimensional and anisotropic three-dimensional systems is a consequence of the enhanced role of fluctuations of the primary order parameter\cite{Karahasanovic2016}. The quantum dynamics near $T=0$, however, may place the system closer to its upper critical dimension, thus reducing the impact of fluctuations and favoring a single first-order quantum transition \cite{Fernandes13} (see Figs. \ref{fig_phase_diagrams}a and b). These analyses reveal that there can be no single second-order phase transition into a multi-component superconductor: either there are two separate transitions or a single first-order transition. This simple yet robust result has important implications for the interpretation of experimental data on material candidates for $p$-wave superconductivity (see also Ref. \cite{Fischer16}). The same behavior also holds for any superconducting state with a multi-component order parameter that transforms according to any of the 32 point groups of three-dimensional crystalline systems. Specifically, vestigial orders originating from superconductivity are possible for the 15 point groups with higher-dimensional irreducible representations, i.e. for all the cubic groups $T$, $T_{h}$, $T_{d}$, $O$, and $O_{h}$, the tetragonal groups $C_{4v}$, $D_{2d}$, $D_{4}$, and $D_{4h}$, the hexagonal groups $C_{6v}$, $D_{3h}$, and $D_{6h}$, and the trigonal groups $C_{3v}$, $D_{3d}$, and $D_{3}$. On the other hand, no vestigial order of translationally-invariant superconducting states occurs in an orthorhombic, monoclinic, or triclinic system. \section{VESTIGIAL ORDER FROM DENSITY-WAVES IN THE SQUARE LATTICE} \label{sec:Vestigial-order-from} We now proceed to apply the formalism developed above to classify possible vestigial orders arising from density-waves on the square lattice. We start with the richer case of spin density-waves. As explained, the ground state must be degenerate in order for non-trivial composite operators to emerge. The standard N\'eel-like order, with wave-vector $\mathbf{Q}=\left(\pi,\pi\right)$, does not support vestigial orders that break the point group symmetry of the lattice. The simplest non-trivial case is then that of two degenerate magnetic ground states that are related by a symmetry of the lattice, corresponding to two ordering vectors $\mathbf{Q}_{1}=\left(\pi,0\right)$ and $\mathbf{Q}_{2}=\left(0,\pi\right)$. The local spin can then be written as \begin{equation} \mathbf{S}\left(\mathbf{r}\right)=\mathbf{m}_{1}\cos{\left(\mathbf{Q}_{1}\cdot\mathbf{r}\right)}+\mathbf{m}_{2}\cos{\left( \mathbf{Q}_{2}\cdot\mathbf{r} \right)}, \end{equation} where $\mathbf{m}_{a}$ are the real vector order parameters associated with $\mathbf{Q}_{a}$, with $a=1,\,2$. 
In the square lattice, there are three possible magnetic ground states \cite{Lorenzana08,Fernandes2016}, illustrated in Fig. \ref{fig_square_lattice}: a $C_{2}$-symmetric single-\textbf{Q }spin density-wave, corresponding to only one $\mathbf{m}_{a}$ being non-zero; a $C_{4}$-symmetric collinear double-\textbf{Q} spin density-wave, corresponding to $\mathbf{m}_{1}\parallel\mathbf{m}_{2}\neq0$; and a $C_{4}$-symmetric non-collinear double-\textbf{Q }spin density-wave, corresponding to $\mathbf{m}_{1}\perp\mathbf{m}_{2}\neq0$. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{Figure-03} \par\end{centering} \caption{Schematic representation of the three possible square-lattice spin density-wave ground states (upper panels) with ordering vectors $\mathbf{Q}_{1}=\left(\pi,0\right)$ and $\mathbf{Q}_{2}=\left(0,\pi\right)$, and their corresponding vestigial phases (lower panels). Panel (a) refers to the single-\textbf{Q }$C_{2}$-symmetric magnetic phase and its corresponding nematic vestigial phase, characterized by unequal bonds. Panel (b) shows the collinear double-\textbf{Q }$C_{4}$-symmetric magnetic phase and its corresponding charge-ordered vestigial phase, characterized by unequal sites. Note that, in the magnetically ordered state, half of the sites have zero magnetization. Panel (c) illustrates the non-collinear double-\textbf{Q }$C_{4}$-symmetric magnetic phase and its corresponding spin-current vestigial phase, characterized by unequal plaquettes. As indicated in the figure, the magnitudes of the local magnetization are different in each ordered state. \label{fig_square_lattice}} \end{figure} This description has been widely employed to discuss nematicity and magnetism in iron-based materials \cite{Lorenzana08,Eremin10,Brydon11,Giovannetti11,Fernandes2012,Wang15,Fernandes2016}. It arises from either a $J_{1}$-$J_{2}$ localized spin model or an itinerant microscopic model with partially nested Fermi pockets \cite{Fang2008,Xu2009,Lorenzana08,Fernandes2012}. In what follows, we will not repeat arguments that were extensively presented elsewhere \cite{Fernandes2014,Fernandes2012b}, but instead give a symmetry-based analysis of the allowed vestigial states. We start by writing down the symmetry group of the problem without spin-orbit interaction: \begin{equation} {\cal G}=C_{4v}^{'''}\times SO\left(3\right). \end{equation} Here, $C_{4v}^{'''}$ is called the extended point group \cite{Serbyn13,Venderbos16}. It corresponds to the standard point group $C_{4v}$ supplemented by three translations: $T_{1}=\left(1,0\right)$, $T_{2}=\left(0,1\right)$, and $T_{3}=\left(1,1\right)$. It is convenient to consider this group because the density-wave order parameters break translational symmetry. We do not include inversion symmetry explicitly here. Because we have a vector order parameter, it transforms under the irreducible representation $\Gamma=E_{5}\otimes\Gamma^{S=1}$. Since $d_{E_{5}}=2$ and $d_{S}=2S+1$, the dimensionality of the irreducible representation of the primary order parameter is $d_{\Gamma}=2\times\left(2+1\right)=6$. As a result, the order parameter can be written as $\eta_{A}=\left(\mathbf{m}_{1},\mathbf{m}_{2}\right)$, where the $\mathbf{m}_{a}$ are three-component vectors in spin space. Note that such a classification of the primary order parameters was done in Ref. \cite{Venderbos16}, from which we borrow the group-theory notation. Here, our goal is to systematically discuss the possible vestigial orders. 
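The real-space patterns of these three ground states can be generated directly from the parametrization of $\mathbf{S}\left(\mathbf{r}\right)$ given above. The short Python sketch below is an illustration we add for concreteness (the unit amplitudes and the $4\times4$ lattice are arbitrary choices); it reproduces, for instance, the fact that half of the sites carry zero moment in the collinear double-\textbf{Q} state.
\begin{verbatim}
# Illustration: evaluate S(r) = m1 cos(Q1.r) + m2 cos(Q2.r) on a small
# square lattice for the three magnetic ground states and list the
# distinct values of the local moment |S(r)| found in each case.
import numpy as np

Q1, Q2 = np.array([np.pi, 0.0]), np.array([0.0, np.pi])
zhat, xhat = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])

def local_moments(m1, m2, L=4):
    spins = [m1*np.cos(Q1 @ np.array([x, y])) + m2*np.cos(Q2 @ np.array([x, y]))
             for x in range(L) for y in range(L)]
    return np.linalg.norm(np.array(spins), axis=1)

states = {
    "single-Q (C2)":           (zhat, 0*zhat),
    "collinear double-Q (C4)": (zhat, zhat),
    "non-collinear double-Q":  (zhat, xhat),
}
for name, (m1, m2) in states.items():
    values = sorted({round(float(v), 3) for v in local_moments(m1, m2)})
    print(f"{name:25s} distinct |S(r)| values: {values}")
\end{verbatim}
The single-\textbf{Q} state has a uniform moment, the collinear double-\textbf{Q} state has half of its sites with zero moment, and the non-collinear double-\textbf{Q} state again has a uniform moment, consistent with the upper panels of Fig. \ref{fig_square_lattice}.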
The bilinear forms can be analyzed by using the following results: \begin{equation} E_{5}\otimes E_{5}=A_{1}\oplus B_{2}^{'}\oplus A_{2}^{'}\oplus B_{1} \end{equation} and \begin{equation} \Gamma^{S*}\otimes\Gamma^{S}=\bigoplus_{j=0}^{2S}\Gamma^{j}.\label{eq_Gamma_s} \end{equation} The primes in $A'_{2}$ and $B'_{2}$ indicate irreducible representations that are odd under the translation $T_{3}=(1,1)$, i.e. the corresponding orders break translational symmetry. We thus obtain: \begin{equation} \Gamma^{*}\otimes\Gamma=\left(A_{1}\oplus B_{2}^{'}\oplus A_{2}^{'}\oplus B_{1}\right)\otimes\left(\Gamma^{0}\oplus\Gamma^{1}\oplus\Gamma^{2}\right). \end{equation} The index of an irreducible representation of the product, $m=\left(r,j\right)$, is then a combination of the four spatial irreducible representations $r=\left(0,1,2,3\right)=\left(A_{1},B_{2}^{'},A_{2}^{'},B_{1}\right)$ and the spin index $j$. As a result, the possible composite operators are written as: \begin{equation} \phi_{m\equiv\left(r,j\right)}^{\mu}=\sum_{A,B}\eta_{A}\Lambda_{A,B}^{m,\mu}\eta_{B} \end{equation} with matrices \begin{equation} \Lambda_{A,B}^{m\equiv(r,j),\mu}=\tau_{ab}^{r}\lambda_{\alpha\beta}^{j,\mu}. \end{equation} These matrices transform according to one of the irreducible representations $\Gamma^{m}$ contained in the product $\Gamma^{*}\otimes\Gamma$. The number of matrices is given by the dimensionality $d_{m}$ of $\Gamma^{m}$, so that the index $\mu=1,\ldots,d_{m}$. The indices $A=\left(a,\alpha\right)$, $B=(b,\beta)$ combine point and spin group indices, such that $\eta_{A}\equiv m_{a}^{\alpha}$. The $\tau_{ab}^{r}$ are the unit matrix $\tau^{0}$ and the three Pauli matrices $\tau^{r}$. The $3\times3$ matrices $\lambda^{j}$ for $j=0$, $1$, and $2$ act in spin space and are given as follows: For $j=0$ we have $\lambda_{\alpha\beta}^{0,0}=\delta_{\alpha\beta}$, and the composite order parameter can be expressed as a scalar $\phi_{(r,j=0)}$. For $j=1$ we have three matrices $\lambda_{\alpha\beta}^{1,\mu}=i\epsilon_{\alpha\beta\mu}$ corresponding to the three anti-symmetric Gell-Mann matrices. Thus, we can express the composite order parameter as a vector $\boldsymbol{\phi}_{(r,j=1)}$. Finally, for $j=2$, we use the five symmetric Gell-Mann matrices. They can be labelled by a double index $\left(\mu,\mu'\right)$ of a symmetric tensor, where $\mu$ and $\mu'$ take three values each: \begin{equation} \lambda_{\alpha\beta}^{2,\left(\mu,\mu'\right)}=\frac{1}{2}\left(\delta_{\alpha\mu}\delta_{\beta\mu'}+\delta_{\alpha\mu'}\delta_{\beta\mu}\right)-\frac{1}{3}\delta_{\alpha\beta}\delta_{\mu\mu'}. \end{equation} In this case, the vestigial order parameter is a second-rank tensor $\phi_{(r,j=2)}^{\mu\mu'}$. This exhausts all $3\times3$ matrices, which is what we expect for an order parameter that transforms as $S=1$. We first consider $j=0$. There are three possible non-vanishing scalar bilinears: \begin{eqnarray} \phi_{\left(0,0\right)} & = & \mathbf{m}_{1}\cdot\mathbf{m}_{1}+\mathbf{m}_{2}\cdot\mathbf{m}_{2}\nonumber \\ \phi_{\left(1,0\right)} & = & 2\mathbf{m}_{1}\cdot\mathbf{m}_{2}\nonumber \\ \phi_{\left(3,0\right)} & = & \mathbf{m}_{1}\cdot\mathbf{m}_{1}-\mathbf{m}_{2}\cdot\mathbf{m}_{2}. \end{eqnarray} Note that $\phi_{\left(2,0\right)}$ vanishes, since the $\mathbf{m}_{i}$ are real vectors. While $\phi_{\left(0,0\right)}$ transforms trivially ($A_{1}$ representation), we obtain two vestigial order parameters that break spatial symmetries, without breaking spin-space symmetries. 
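It is instructive to evaluate these composites for the three magnetic ground states introduced above. The minimal Python check below is our illustration (unit amplitudes are chosen arbitrarily); it computes the two non-trivial $j=0$ scalars together with the $j=1$ cross-product composite discussed in the next paragraphs, showing which one is finite in each state.
\begin{verbatim}
# Illustration: the non-trivial composites for the three square-lattice
# spin density-wave ground states (unit amplitudes; our own check).
import numpy as np

zhat, xhat = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
states = {
    "single-Q":               (zhat, 0*zhat),
    "collinear double-Q":     (zhat, zhat),
    "non-collinear double-Q": (zhat, xhat),
}
for name, (m1, m2) in states.items():
    phi_30 = m1 @ m1 - m2 @ m2     # B1 composite: Ising-nematic order
    phi_10 = 2 * (m1 @ m2)         # B2' composite: checkerboard charge order
    phi_21 = 2 * np.cross(m1, m2)  # j=1 composite: vector chirality (see below)
    print(f"{name:24s} phi_(3,0)={phi_30:+.0f}  phi_(1,0)={phi_10:+.0f}  "
          f"|phi_(2,1)|={np.linalg.norm(phi_21):.0f}")
\end{verbatim}
Each ground state thus activates exactly one of the three symmetry-breaking composites, in one-to-one correspondence with the vestigial phases shown in the lower panels of Fig. \ref{fig_square_lattice}.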
$\phi_{\left(3,0\right)}$, which transforms as $B_{1}$, is an Ising-nematic order parameter of the type frequently observed in iron-based systems (see Fig. \ref{fig_square_lattice}a). It is the vestigial phase of the single-\textbf{Q }magnetic ground state. $\phi_{\left(1,0\right)}$, which transforms as $B'_{2}$, corresponds to a scalar that breaks translational symmetry (with ordering vector $\mathbf{Q}_{1}+\mathbf{Q}_{2}=\left(\pi,\pi\right)$), while preserving the tetragonal symmetry of the lattice (see Fig. \ref{fig_square_lattice}b). It thus corresponds to a checkerboard charge order, and is the vestigial phase of the $C_{4}$-symmetric collinear double-\textbf{Q} magnetic state observed in several iron-based systems\cite{Kim2010,Hassinger2012,Avci2014,Wang2016,Boehmer2015,Allred2015,Hassinger2016,Allred2016}. Interestingly, in the phase diagrams of these compounds, the single-\textbf{Q }phase undergoes a transition to the double-\textbf{Q }phase as a function of doping. A little-explored problem is the interplay between the two corresponding vestigial orders in this case where the primary order itself changes (see Fig. \ref{fig_phase_diagrams}d). For $j=1$, the only non-zero vestigial order is the vector composite order parameter \begin{equation} \mathbf{\boldsymbol{\phi}}_{\left(2,1\right)}=2\mathbf{m}_{1}\times\mathbf{m}_{2}. \end{equation} It corresponds to a vector chirality, which is manifested as spin current loops that are staggered between different plaquettes, forming an imaginary spin density-wave with ordering vector $\mathbf{Q}_{1}+\mathbf{Q}_{2}=\left(\pi,\pi\right)$ (see Fig. \ref{fig_square_lattice}c). It is the vestigial phase of the $C_{4}$-symmetric non-collinear double-\textbf{Q} magnetic state observed recently in doped CaKFe$_{4}$As$_{4}$\cite{Meier2018}. The three non-trivial states $\phi_{\left(1,0\right)}$, $\phi_{\left(3,0\right)}$, and $\mathbf{\boldsymbol{\phi}}_{\left(2,1\right)}$ were recently discussed in Ref. \cite{Fernandes2016} and analyzed using a large-$N$ approximation. In a two-dimensional system, where long-range order of the primary order parameters is prohibited by the Hohenberg-Mermin-Wagner theorem, only a vestigial phase with one of the Ising-like order parameters $\phi_{\left(1,0\right)}$ or $\phi_{\left(3,0\right)}$ can take place. Note that the continuous composite order parameter $\mathbf{\boldsymbol{\phi}}_{\left(2,1\right)}$ cannot condense in a two-dimensional system; in Ref. \cite{Fernandes2016}, it was argued that a vestigial phase with $\mathbf{\boldsymbol{\phi}}_{\left(2,1\right)}\neq0$ but $\mathbf{m}_{i}=0$ is possible in strongly anisotropic three-dimensional systems. Finally, there is also the possibility for three more vestigial states with $j=2$: \begin{align} \phi_{\left(0,2\right)}^{\mu\mu'} & = m_{1}^{\mu}m_{1}^{\mu'}+m_{2}^{\mu}m_{2}^{\mu'} - \frac{1}{3}\delta_{\mu\mu'}\left(\mathbf{m}_{1}\cdot\mathbf{m}_{1}+\mathbf{m}_{2}\cdot\mathbf{m}_{2}\right)\nonumber \\ \phi_{\left(1,2\right)}^{\mu\mu'} & = m_{1}^{\mu}m_{2}^{\mu'}+m_{2}^{\mu}m_{1}^{\mu'} - \frac{1}{3}\delta_{\mu\mu'}\left(\mathbf{m}_{1}\cdot\mathbf{m}_{2}+\mathbf{m}_{2}\cdot\mathbf{m}_{1}\right)\nonumber \\ \phi_{\left(3,2\right)}^{\mu\mu'} & = m_{1}^{\mu}m_{1}^{\mu'}-m_{2}^{\mu}m_{2}^{\mu'} - \frac{1}{3}\delta_{\mu\mu'}\left(\mathbf{m}_{1}\cdot\mathbf{m}_{1}-\mathbf{m}_{2}\cdot\mathbf{m}_{2}\right).\label{eq_tensor_rank2} \end{align} While $\phi_{\left(0,2\right)}^{\mu\mu'}$ (with $\mu,\mu'=1,2,3$) corresponds to pure spin-nematicity (i.e. 
nematic order in spin space, without affecting the lattice point group symmetry), the other two correspond to simultaneous rotational symmetry breaking in lattice and in spin space. Clearly, these order parameters mix if one includes spin-orbit interaction. However, it is still an interesting open question whether there are iron-based superconductors or other materials where these quadrupolar order parameters are the dominant vestigial order parameters. The above analysis applies to any square-lattice system displaying density-waves with ordering vectors $\mathbf{Q}_{1}=\left(\pi,0\right)$ and $\mathbf{Q}_{2}=\left(0,\pi\right)$. While we focused on the case of spin density-waves here, extension to the case of charge density-waves is straightforward. In particular, in the case of (commensurate) charge density-waves, the possible vestigial orders are exactly the same as the $j=0$ composite order parameters discussed above. \section{VESTIGIAL ORDER FROM DENSITY-WAVES IN THE HEXAGONAL LATTICE} \label{sec:vestigial-order-from} A similar analysis as the one outlined above can be performed for the case of density-waves in the hexagonal lattice. The new aspect of this problem is the existence of a triply-degenerate ground state, which allows us to discuss the case where the primary order parameter transforms as a three-dimensional irreducible representation. In this situation, non-trivial trilinear composite order parameters can exist, leading to an even richer phase diagram. We note that the classification of the primary order parameters for this situation was previously done in Refs. \cite{Venderbos16,Venderbos2_16}; we follow the notation of that paper to study the various composite orders. In the case of spin density-waves, the local spin is parametrized in terms of three magnetic order parameters $\mathbf{m}_{a}$ associated with three wave-vectors related by $60^{\circ}$ rotations: $\mathbf{Q}_{1}=\frac{\pi}{\sqrt{3}}\left(\sqrt{3},1\right)$, $\mathbf{Q}_{2}=\frac{\pi}{\sqrt{3}}\left(0,-2\right)$, and $\mathbf{Q}_{3}=\frac{\pi}{\sqrt{3}}\left(-\sqrt{3},1\right)$, such that $\mathbf{Q}_{1}+\mathbf{Q}_{2}+\mathbf{Q}_{3}=0$: \begin{equation} \mathbf{S}\left(\mathbf{r}\right)=\sum_{a=1,2,3}\mathbf{m}_{a}\cos \bigl( \mathbf{Q}_{a}\cdot\mathbf{r} \bigr) \end{equation} The three possible magnetic ground states, illustrated in Fig. \ref{fig_hexagonal_lattice}, correspond to \cite{Nandkishore_Chern12}: (i) a single-\textbf{Q} spin density-wave phase, in which only one of the $\mathbf{m}_{a}$ order parameters is non-zero; (ii) a collinear triple-\textbf{Q} spin density-wave, in which all three magnetic order parameters are non-zero and parallel or anti-parallel to each other; (iii) a non-coplanar triple-\textbf{Q} spin density-wave, in which again all three magnetic order parameters are non-zero and perpendicular to each other. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{Figure-04} \par\end{centering} \caption{Schematic representation of the three possible hexagonal-lattice spin density-wave ground states (upper panels) with ordering vectors $\mathbf{Q}_{1}=\frac{\pi}{\sqrt{3}}\left(\sqrt{3},1\right)$, $\mathbf{Q}_{2}=\frac{\pi}{\sqrt{3}}\left(0,-2\right)$, and $\mathbf{Q}_{3}=\frac{\pi}{\sqrt{3}}\left(-\sqrt{3},1\right)$, and their corresponding vestigial phases (lower panels). Panel (a) refers to the single-\textbf{Q }magnetic phase and its corresponding nematic vestigial phase, characterized by unequal bonds. 
Panel (b) shows the collinear triple-\textbf{Q }magnetic phase and its corresponding charge-ordered vestigial phase, characterized by unequal sites. Note that the magnitude of the magnetization is three times larger in some sites (orange arrows) than in other sites (red arrows). Panel (c) illustrates the non-coplanar triple-\textbf{Q }magnetic phase, and its corresponding spin-current vestigial phase. In the magnetically ordered state, the local magnetization can point in one of the four directions shown in the upper inset; for simplicity, here we represent these four directions in the plane, as shown in the lower inset. The vestigial triple-\textbf{Q } spin-current phase is characterized by unequal plaquettes; specifically, there are four types of plaquettes corresponding to one of the four polarizations of the vector spin-chirality. Note that the scalar spin-chirality $\mathbf{m}_1 \cdot (\mathbf{m}_2 \times \mathbf{m}_3)$ is positive and equal in each plaquette for the spin configuration drawn here. \label{fig_hexagonal_lattice}} \end{figure} Such a description has been employed to study the magnetic properties of graphene doped to its van-Hove singularity point and also of doped cobaltates \cite{TLi12,Nandkishore_Chern12,DHLee12,Chern12,Batista12}. More recently, it has been applied to SrTiO$_{3}$ thin films grown along the $(111)$ orientation \cite{Paramekanti18}. It can be derived from an itinerant microscopic model with a nearly-nested Fermi surface \cite{Nandkishore_Chern12}. The relevant group of the primary order parameter is given, in the absence of spin-orbit coupling, by \begin{equation} {\cal G}=C_{6v}^{'''}\times SO\left(3\right). \end{equation} The extended point group $C_{6v}^{'''}$ corresponds to the point group $C_{6v}$ supplemented by three translations $T_{1}=\frac{1}{2}\left(1,\,\sqrt{3}\right)$, $T_{2}=\frac{1}{2}\left(1,\,-\sqrt{3}\right)$, and $T_{3}=\left(1,\,0\right)$. The primary order parameters $\mathbf{m}_{a}$ transform according to the irreducible representation $\Gamma=F_{1}\otimes\Gamma^{S=1}$. Since $F_{1}$ is a three-dimensional irreducible representation, we have a nine-dimensional primary order parameter $\eta_{A}=\left(\mathbf{m}_{1},\,\mathbf{m}_{2},\,\mathbf{m}_{3}\right)$. To proceed and form the bilinear forms, we use Eq. (\ref{eq_Gamma_s}) to decompose the product $\Gamma^{S}\otimes\Gamma^{S}$ and also \begin{equation} F_{1}\otimes F_{1}=A_{1}\oplus E_{2}\oplus F_{2}\oplus F_{1}. \end{equation} Note that the decomposition of $F_{1}\otimes F_{1}$ does not only yield one-dimensional (1D) irreducible representations. Instead, in addition to the trivial irreducible representation $A_{1}$, we obtain the two-dimensional irreducible representation $E_{2}$ (corresponding to the degeneracy between $d_{xy}$ and $d_{x^{2}-y^{2}}$ in the hexagonal lattice), and the three-dimensional irreducible representations $F_{1}$ and $F_{2}$, which correspond to orders that break translational symmetry according to the wave-vectors $\mathbf{Q}_{1}$, $\mathbf{Q}_{2}$, and $\mathbf{Q}_{3}$. Similarly to the case of the square lattice, we introduce the index $m=\left(r,j\right)$ that is a combination of the spatial irreducible representations $r=\left(0,1,2,3\right)=\left(A_{1},E_{2},F_{2},F_{1}\right)$ and the spin index $j$. 
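As in the square-lattice case, it is helpful to see the three hexagonal ground states explicitly in real space before constructing the composites. The short Python sketch below is our illustration (unit amplitudes; sites $\mathbf{r}=n_{1}\mathbf{a}_{1}+n_{2}\mathbf{a}_{2}$ of the underlying triangular Bravais lattice with lattice constant set to one are an assumption made for concreteness). It checks the features quoted in the caption of Fig. \ref{fig_hexagonal_lattice}: a subset of sites with a three-times-larger moment in the collinear triple-\textbf{Q} state, and a uniform moment pointing along one of four directions, with uniform scalar spin-chirality $\mathbf{m}_{1}\cdot(\mathbf{m}_{2}\times\mathbf{m}_{3})$, in the non-coplanar triple-\textbf{Q} state.
\begin{verbatim}
# Illustration: real-space patterns of the three hexagonal spin density-wave
# ground states, evaluated on the triangular Bravais lattice spanned by
# a1 = (1,0) and a2 = (1/2, sqrt(3)/2) (lattice constant set to 1).
import numpy as np
from itertools import product

Q = (np.pi/np.sqrt(3)) * np.array([[np.sqrt(3), 1.0],
                                   [0.0, -2.0],
                                   [-np.sqrt(3), 1.0]])
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3)/2])
ex, ey, ez = np.eye(3)

states = {
    "single-Q":              [ez, 0*ez, 0*ez],
    "collinear triple-Q":    [ez, ez, ez],
    "non-coplanar triple-Q": [ex, ey, ez],
}
for name, m in states.items():
    spins = [sum(m[a]*np.cos(Q[a] @ (n1*a1 + n2*a2)) for a in range(3))
             for n1, n2 in product(range(4), repeat=2)]
    mags = sorted({round(float(np.linalg.norm(s)), 3) for s in spins})
    dirs = {tuple(np.round(s/np.linalg.norm(s), 3)) for s in spins}
    chirality = float(np.dot(m[0], np.cross(m[1], m[2])))
    print(f"{name:22s} |S(r)|: {mags}  directions: {len(dirs)}  "
          f"m1.(m2xm3) = {chirality:+.0f}")
\end{verbatim}
The collinear triple-\textbf{Q} state indeed shows two local-moment magnitudes differing by a factor of three, whereas the non-coplanar state has a uniform moment along one of four directions and a finite scalar spin-chirality, consistent with Fig. \ref{fig_hexagonal_lattice}.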
The bilinears are once again given by \begin{equation} \phi_{m\equiv\left(r,j\right)}^{\nu\mu}=\sum_{A,B}\eta_{A}\Lambda_{A,B}^{m,\nu\mu}\eta_{B} \end{equation} with $A=\left(a,\alpha\right)$, $B=(b,\beta)$ and matrices: \begin{equation} \Lambda_{A,B}^{m\equiv(r,j),\nu\mu}=\Gamma_{ab}^{r,\nu}\lambda_{\alpha\beta}^{j,\mu}. \end{equation} The spin-space matrices $\lambda_{\alpha\beta}^{j,\mu}$ are the same as the ones presented in the previous section. As for the nine $3\times3$ matrices $\Gamma_{ab}^{r,\nu}$, they can be expressed in terms of the identity matrix $\Gamma_{ab}^{0,0}=\delta_{ab}$ and the eight Gell-Mann matrices. Denoting them by the usual notation $\lambda_{ab}^{l}$, with $l=1,\ldots,8$, we separate the eight matrices into one doublet and two triplets according to: $\Gamma_{ab}^{1,\nu}=\left\{ \lambda_{ab}^{3},\,\lambda_{ab}^{8}\right\} $, $\Gamma_{ab}^{2,\nu}=\left\{ \lambda_{ab}^{2},\,\lambda_{ab}^{7},\,\lambda_{ab}^{5}\right\} $, and $\Gamma_{ab}^{3,\nu}=\left\{ \lambda_{ab}^{1},\,\lambda_{ab}^{6},\,\lambda_{ab}^{4}\right\} $. In what follows, we focus on scalar and vector bilinears; rank-2 tensor bilinears can be obtained in the same way as in the previous section. For $j=0$, the bilinears are scalars and given by $\phi_{(r,0)}^{\nu}=\sum_{a,b=1}^{3}\left(\mathbf{m}_{a}\cdot\mathbf{m}_{b}\right)\Gamma_{ab}^{r,\nu}$. We find six non-zero possible bilinears: \begin{align} \phi_{(0,0)} & =m_{1}^{2}+m_{2}^{2}+m_{3}^{2}\nonumber \\ \phi_{(1,0)}^{\nu} & =\left\{ m_{1}^{2}-m_{2}^{2},\,\frac{1}{\sqrt{3}}\left(m_{1}^{2}+m_{2}^{2}-2m_{3}^{2}\right)\right\} \nonumber \\ \phi_{(3,0)}^{\nu} & =2\left\{ \mathbf{m}_{1}\cdot\mathbf{m}_{2},\,\mathbf{m}_{2}\cdot\mathbf{m}_{3},\,\mathbf{m}_{1}\cdot\mathbf{m}_{3}\right\} \label{eq_phi_hex_DW} \end{align} Note that $\phi_{(0,0)}$ transforms trivially as $A_{1}$ and thus cannot form a vestigial order. The two order parameters of $\phi_{(1,0)}^{\nu}$ transform non-trivially as the two-dimensional irreducible representation $E_{2}$ and correspond to nematic orders with $d_{x^{2}-y^{2}}$ and $d_{xy}$ form factors, respectively. These bilinears allow for a vestigial nematic state that lowers the point-group symmetry without breaking translational symmetry. They are the vestigial phase of the single-\textbf{Q }spin density-wave (see Fig. \ref{fig_hexagonal_lattice}a). The three order parameters of $\phi_{(3,0)}^{\nu}$ transform non-trivially as the three-dimensional irreducible representation $F_{1}$. They preserve the point-group symmetry of the lattice but break translational symmetry. Thus, they correspond to charge density-waves with ordering vectors $\mathbf{Q}_{3}$ ($\nu=1$), $\mathbf{Q}_{1}$ ($\nu=2$), and $\mathbf{Q}_{2}$ ($\nu=3$), which are vestigial orders of the collinear triple-\textbf{Q} spin density-wave (see Fig. \ref{fig_hexagonal_lattice}b). As shown in Ref. \cite{Chern12}, the transition to the vestigial phase belongs to the same universality class as the $4$-state Potts model, corresponding to three Ising variables $\mathrm{sign}\left[\phi_{(3,0)}^{\nu}\right]=\pm1$ subject to the constraint $\prod\limits_{\nu=1}^{3}\mathrm{sign}\left[\phi_{(3,0)}^{\nu}\right]=+1$. Finally, the composite order parameters $\phi_{(2,0)}^{\nu}$ vanish, as the three corresponding Gell-Mann matrices are purely imaginary while the $m_{a}^{\alpha}$ are real. For $j=1$, we obtain vector bilinears according to $\boldsymbol{\phi}_{(r,1)}^{\nu}=\sum_{a,b}\left(\mathbf{m}_{a}\times\mathbf{m}_{b}\right)\Gamma_{ab}^{r,\nu}$. 
There are three non-zero such bilinears, which transform as the three-dimensional irreducible representation $F_{2}$: \begin{equation} \boldsymbol{\phi}_{(2,1)}^{\nu}=2\left\{ \mathbf{m}_{1}\times\mathbf{m}_{2},\,\mathbf{m}_{2}\times\mathbf{m}_{3},\,\mathbf{m}_{1}\times\mathbf{m}_{3}\right\} \label{eq_phi_hex_DW2} \end{equation} Each of them corresponds to spin-current density-waves (i.e. vector chirality) with ordering vectors $\mathbf{Q}_{3}$ ($\nu=1$), $\mathbf{Q}_{1}$ ($\nu=2$), and $\mathbf{Q}_{2}$ ($\nu=3$). The resulting vestigial order is thus the triple-\textbf{Q} spin-current order shown in Fig. \ref{fig_hexagonal_lattice}c. This is the vestigial phase of the non-coplanar triple-\textbf{Q} spin density-wave. Interestingly, because the primary order parameter transforms as a three-dimensional irreducible representation, it is possible to also construct trilinear forms $\psi^{m}=\sum_{A,B,C}\eta_{A}\eta_{B}\eta_{C}\Lambda_{A,B,C}^{m}$ that transform non-trivially. This can be formally done by combining vestigial order parameters $\phi_{\left(r,j\right)}^{\nu\mu}$ that transform as higher-dimensional irreducible representations with the primary order parameter. Among the bilinears presented in Eqs. (\ref{eq_phi_hex_DW}) and (\ref{eq_phi_hex_DW2}), combining $\boldsymbol{\phi}_{(2,1)}^{\nu}$, which transforms as $F_{2}$, with the primary order parameter $\eta_{A}$, which transforms as $F_{1}$, yields a composite trilinear scalar that transforms non-trivially according to the $A_{2}$ irreducible representation, since $F_{1}\otimes F_{2}=E_{2}\oplus F_{1}\oplus F_{2}\oplus A_{2}$. In explicit form, the corresponding order parameter $\psi$ is given by: \begin{equation} \psi=\mathbf{m}_{1}\cdot\left(\mathbf{m}_{2}\times\mathbf{m}_{3}\right). \end{equation} We identify $\psi$ as the scalar chirality, an Ising-like order parameter with zero total wave-vector ($\mathbf{Q}_{1}+\mathbf{Q}_{2}+\mathbf{Q}_{3}=0$) that breaks time-reversal symmetry \cite{Martin08,Venderbos2_16}. Similarly to the bilinear $\boldsymbol{\phi}_{(2,1)}^{\nu}$, it is also a vestigial phase of the non-coplanar triple-\textbf{Q }spin density-wave. This leads to an interesting scenario, in which there are two different vestigial phases associated with the same primary order. While $\psi$ breaks a discrete symmetry, the vector chirality $\boldsymbol{\phi}_{(2,1)}^{\nu}$ is a continuous order parameter. One thus expects the vestigial scalar chirality $\psi$ to order at a higher temperature than the vestigial spin-current density-waves $\boldsymbol{\phi}_{(2,1)}^{\nu}$ in a sufficiently strongly anisotropic three-dimensional system (as schematically shown in Fig. \ref{fig_phase_diagrams}e). A microscopic calculation of such a scenario remains to be done. We finish this section by discussing the case in which the primary order parameter is a charge density-wave. In this situation, one would expect vestigial orders corresponding to the $j=0$ composite order parameters of the spin density-wave case, namely, $\phi_{(1,0)}^{\nu}$ and $\phi_{(3,0)}^{\nu}$ in Eq. (\ref{eq_phi_hex_DW}). However, $\phi_{(3,0)}^{\nu}$ corresponds to charge density-waves with the same ordering vectors as the primary order parameters, and thus does not constitute a vestigial order. Moreover, $\phi_{(1,0)}^{\nu}$ cannot be realized, since it is not possible to form a single-\textbf{Q }charge density-wave. 
This follows from the fact that the trilinear $W_{1}W_{2}W_{3}$ (where $W_{a}$ correspond to the Ising-like charge density-wave order parameters) transforms trivially as $A_{1}$, implying that only triple-\textbf{Q }charge density-waves can be formed in the hexagonal lattice. As a result, even though the ground state is degenerate, in this particular case no vestigial order appears. Similar arguments imply that the nematic phase alone, which transforms as the two-dimensional irreducible representation $E_{2}$, Eq. (\ref{eq_phi_hex_DW}), does not admit vestigial phases. This is because a cubic term appears in the free energy selecting the $d_{x^{2}-y^{2}}$ over the $d_{xy}$ nematic state \cite{Hecker2018}. \section{OTHER EXAMPLES OF VESTIGIAL ORDER} \label{sec:other-exampl-vest} Besides the examples discussed above, there are several other systems that allow vestigial orders to appear. Here we discuss some of them, without the same level of details as in the previous sections. The example that we analyzed for the square lattice consisted of doubly-degenerate spin density-wave ordering vectors $\left(\pi,0\right)$ and $\left(0,\pi\right)$. We mentioned that a non-degenerate ground state, such as the N\'eel order, which displays ordering vector $\left(\pi,\pi\right)$, does not allow vestigial orders that lower the point-group symmetry of the lattice. Yet, this does not imply that vestigial order is impossible for primary N\'eel order. Similarly to the composite order with $j=2$ discussed in Eq. (\ref{eq_tensor_rank2}), in the case of N\'eel order it is possible to form a rank-2 tensor bilinear analogous to $\phi_{\left(0,2\right)}^{\mu\mu'}$ that breaks spin-rotational invariance while preserving the point group. This so-called spin-nematic phase \cite{Andreev84} has been widely discussed in the context of spin-1 models. A candidate material for spin-nematic order is NiGa$_{2}$S$_{4}$\cite{Nakatsuji2005}, which can be described by spin-1 Heisenberg spins on a triangular lattice. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{Figure-05} \par\end{centering} \caption{Schematic representation of the magnetic ground state of the $J_{1}$-$J_{2}$ model on the windmill lattice, which is composed of interpenetrating triangular (black lines) and honeycomb (dark brown lines) sublattices. The spins order in a N\'eel pattern in the honeycomb sublattice and in a $120^{\circ}$ coplanar configuration in the triangular sublattice. The vestigial order is described in terms of a $\mathbb{Z}_{6}$ clock-model order parameter, and corresponds to different relative orientations between the spin orders of the two sublattices (blue bonds denote antiparallel spins). \label{fig_windmill}} \end{figure} Still focusing on the square lattice, it is also possible to have magnetic ground states with degeneracy higher than $2$, such as the fourfold-degenerate ordering vectors $\left(\frac{\pi}{2},\frac{\pi}{2}\right)$, $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, $\left(\frac{\pi}{2},-\frac{\pi}{2}\right)$, and $\left(-\frac{\pi}{2},-\frac{\pi}{2}\right)$. Microscopically, such states arise for instance in the $J_{1}$-$J_{2}$-$J_{3}$ Heisenberg model on the square lattice, with dominant $J_{3}$ . Besides the so-called double-stripe magnetic ground states, plaquette ground states are also realized in this model \cite{Ducatman12}. It was shown in Ref. 
\cite{Zhang17} that the double-stripe magnetic state has two Ising-like vestigial orders, corresponding to a $B_{2g}$ nematic order and a $\left(\pi,\pi\right)$ bond-order that breaks translational and reflection symmetries of the lattice. Experimentally, material candidates to exhibit these vestigial orders are the iron chalcogenide FeTe~\cite{FeTe1,FeTe2,JPHu09,Paul11} and the titanium-based oxypnictide BaTi$_{2}$Sb$_{2}$O \cite{Zhang17}. A full classification of all possible vestigial orders in this case is not yet available. Another interesting case of multiple vestigial orders is that of the $J_{1}$-$J_{2}$ Heisenberg antiferromagnet on the windmill lattice, which consists of interpenetrating triangular and honeycomb sublattices \cite{Orth12,Orth14,Jeevanesan15}. The $J_{1}$-$J_{2}$ windmill model is a straightforward, but non-trivial generalization of the corresponding $J_{1}$-$J_{2}$ square lattice model. It hosts a vestigial $Z_{6}$ clock order parameter instead of the Ising nematic $Z_{2}$ degree of freedom found on the square lattice. In both cases, vestigial order refers to a \emph{relative} ordering of spins on the two sublattices. In the windmill case, it breaks translation symmetry by tripling the unit cell as well as a mirror symmetry that exchanges the $A$ and $B$ sites on the honeycomb sublattice (see Fig. \ref{fig_windmill}). Interestingly, due to the higher degree of degeneracy of the composite order parameter ($Z_{n}$ with $n\geq5$), vestigial long-range order develops via a two-step process consisting of two Kosterlitz-Thouless phase transitions that enclose an intermediate critical phase. In the critical phase, the correlations of the composite degrees of freedom decay algebraically with a temperature-dependent exponent $\eta(T)$ \cite{Jeevanesan15} (see Fig. \ref{fig_phase_diagrams}f). Such behavior is reminiscent of the melting of two-dimensional solids, where (algebraic) translational order disappears via an intermediate hexatic phase \cite{Halperin78}. In all the density-wave examples discussed so far, only commensurate ordering vectors were considered. In the case of incommensurate wave-vectors, the primary order parameters become complex, which can lead to vestigial orders that break time-reversal symmetry. This was proposed for instance in Ref. \cite{Chubukov14} in the context of charge density-waves in the cuprates. Moreover, an incommensurate spin density-wave with wave-vector $\mathbf{Q}$ naturally couples to an incommensurate charge density-wave with wave-vector $2\mathbf{Q}$, intrinsically coupling charge-driven and spin-driven nematicity. A full microscopic description of this peculiar case, which may be relevant for the cuprate La$_{2-x}$Sr$_{x}$CuO$_{4}$, is still missing \cite{Nie17}. The case of incommensurate charge density-waves also highlights the key role of disorder in vestigial phases: as shown in Ref. \cite{Nie14}, in two dimensions, any amount of disorder kills incommensurate charge order at finite temperatures, while preserving its nematic vestigial phase. Overall, the impact of disorder on vestigial phases remains little explored, despite the ubiquity of disorder in realistic systems (see also Ref. \cite{Cui18}). It is also important to emphasize that the existence of degenerate ground states is a necessary, but not sufficient condition for the appearance of vestigial orders. We already mentioned this feature in the case of charge density-waves in the hexagonal lattice, which do not allow any vestigial phases. 
Another interesting example is the case of spin-dimers on a square lattice with nearest and next-nearest neighbor interactions and in the presence of an external magnetic field \cite{Loison2000}. A closely related model has been employed to describe the unusual properties of BaCuSi$_{2}$O$_{6}$ in an external magnetic field \cite{Sebastian06,Batista07,Schmalian08}. The model is equivalent to hard-core bosons on a square lattice that can undergo Bose-Einstein condensation at momenta $\mathbf{Q}_{1}=\left(\pi,0\right)$ or $\mathbf{Q}_{2}=\left(0,\pi\right)$. As the chemical potential $\mu$ of these hard-core bosons is tuned from negative to positive, there is a quantum phase transition from a disordered state to a condensate with finite momentum. The model thus shares the same properties as the case of doubly-degenerate charge density-waves on the square lattice discussed above. However, there is one important difference: at $T=0$ and at the quantum critical point ($\mu=0$), the system is empty of bosons and fluctuations are thus irrelevant. As a result, there are no fluctuations to trigger a $T=0$ vestigial phase. Formally, this is manifested in the Ginzburg-Landau free-energy expansion by the vanishing of all the coefficients in front of the squared non-trivial bilinears (see for instance Eq. \eqref{SC_free_energy1}). The resulting phase diagram therefore has a vestigial phase of the finite-momentum Bose-Einstein condensate only at finite temperatures, but a single second-order phase transition at $T=0$ (see Fig. \ref{fig_phase_diagrams}c). \section{CONCLUDING REMARKS} \label{sec:concluding-remarks} The formalism developed here demonstrates that multi-component order parameters give rise to complex phase diagrams, providing an appealing framework to understand quantum materials that goes beyond the paradigm of competing phases. The degenerate nature of the ordered state \textendash{} a necessary but not sufficient condition for the emergence of vestigial order \textendash{} leads to the condensation of fluctuations at their own transition temperature, manifested by long-range order of composite operators. In this regard, composite order not only behaves as a vestige of the primary order, but it also affects the latter by lifting its degeneracy and thus relieving the frustration of the system. In situations where the primary order cannot establish long-range order, either due to strong thermal or quantum fluctuations or due to disorder, the vestigial order is the only sharp remnant of the primary order. While symmetry arguments can efficiently be employed to classify which vestigial states are allowed in each case, they do not prove the actual existence of vestigial phases. Only via microscopic calculations of minimal models can one assess whether the vestigial and primary phase transitions take place at different temperatures or simultaneously as a first-order transition \textendash{} in which case there is no vestigial order. Particularly near a quantum phase transition, the symmetry of the order parameter is not enough to determine the final behavior of the vestigial phase, as the dynamics of the primary order parameter plays an essential role.
Theoretically, while mean-field calculations are incapable of capturing vestigial phases, a variety of controlled and uncontrolled analytical methods exist, such as the saddle-point large-$N$ approach \cite{Fernandes2012,Nie14}, the self-consistent Gaussian approximation \cite{Fischer16,Nie17}, and the renormalization-group approach \cite{Qi09,Millis10}. Numerically, vestigial order can be addressed straightforwardly by analyzing the statistical properties of the corresponding higher-order correlation functions. For example, the Ising nematic transition in the classical $J_{1}$-$J_{2}$ square lattice model has been directly observed using Monte-Carlo simulations \cite{Weber03,Batista11} (see Ref.\cite{Jeevanesan15} for a related study on the windmill lattice). An interesting further direction is to investigate vestigial order in low-dimensional quantum (spin) systems at zero temperature, where powerful numerical techniques are available\cite{Stoudenmire12}. The concept of vestigial order has thus the potential to be applied to a vast number of systems that have been partially explored or that even remain completely unexplored. An interesting issue that goes beyond broken-symmetry phases is whether topologically-driven orders may also support unusual vestigial states of matter \cite{Scheurer17}. \section*{ACKNOWLEDGMENTS} We thank C. Batista, E. Berg, P. Chandra, G. W. Chern, A. Chubukov, M. Christensen, P. Coleman, I. Eremin, R. Flint, E. Fradkin, M. Hecker, B. Jeevanesan, J. Kang, S. Kivelson, I. Mazin, J. Venderbos, and X. Wang for fruitful discussions and collaborations on topics reviewed in this work. R.M.F. is supported by the US Department of Energy, Office of Science, Basic Energy Sciences, under Award DE-SC0012336. P.P.O. acknowledges support from Iowa State University Startup Funds. J.S. is supported by the Helmholtz Program \emph{Science and Technology of Nanosystems} at the Karlsruhe Institute of Technology (KIT).
\section{Introduction}\label{intro} Multiple wide-field radio surveys have demonstrated that up to 30\% of the radio sky is variable when observing at centimeter wavelengths \citep{2006MNRAS.370.1556B, 2011ApJ...740...65O, 2011MNRAS.412..634B, 2013PASA...30....6M, 2017MNRAS.466.1944M, 2021ApJ...923...31S}. Some variability is extrinsic, a result of \textit{scintillation}, the scattering of radio waves by free electrons in the interstellar medium resulting in random flux modulations. Variability can also be intrinsic, a result of physical changes within the emitting region that can give clues to the nature of the source. A good example of this is observed in radio data sets of blazars (persistently accreting supermassive black holes (SMBHs) that launch outflows pointing towards Earth) which are highly variable and very luminous \citep{1997ARA&A..35..445U, 2009ApJ...703..802M}. These characteristics are indicative of a highly relativistic outflow with a compact emitting region. Blazars are not the only systems to produce jets, and not all jets are persistent: many stellar-mass black hole systems produce transient jets e.g. gamma-ray bursts or X-ray binaries \citep{2020MNRAS.496.3326R, 2005Ap&SS.300....1F, 2020NatAs...4..697B}. In the most energetic systems, the kinetic energy required for such an outflow is so high that the radiation must originate from a highly collimated jet as opposed to a spherical outflow \citep{1993Natur.361..236M, 1999ApJ...519L..17S, 2018Natur.561..355M}. The presence of a relativistic jet may be inferred by the detection of superluminal motion/expansion or the presence of a high brightness temperature component as a result of Doppler boosting (also known as relativistic beaming). Doppler boosting occurs when outflowing material is moving at velocities close to the speed of light (i.e. with high bulk Lorentz factors) and close to the observer's line of sight making it appear more luminous than in the rest frame of the material. If a star passes too close to a SMBH, the tidal forces of the SMBH can overcome the self-gravity keeping the star together and pull it apart creating a tidal disruption event \citep[TDE;][]{1988Natur.333..523R}. Approximately half of the disrupted material is thought to be lost while the other half falls back and is accreted onto the SMBH. In a small fraction of TDEs, the radiation at all wavelengths is dominated by luminous non-thermal emission thought to be produced by a jet. These systems are called \textit{relativistic TDEs}. The most well-studied relativistic TDE to date is \textit{Swift} J1644+57 which was first discovered due to a bright gamma-ray flash followed by a luminous, variable and decaying X-ray counterpart \citep{2011Sci...333..203B, 2011Natur.476..421B}. A bright radio counterpart of \textit{Swift} J1644+57 has been detected at low frequencies for the past decade and modelled as a narrow, highly relativistic jet pointed towards Earth \citep{2011Sci...333..203B, 2012ApJ...748...36B, 2013ApJ...767..152Z, 2014ApJ...788...32W, 2015MNRAS.450.2824M, 2018ApJ...854...86E}. The presence of a jet would be directly confirmed upon either the detection of superluminal motion/expansion \citep{2016MNRAS.462L..66Y} or the presence of a high brightness temperature as a result of Doppler boosting, but no such evidence has yet been found. 
ZTF22aaajecp is a panchromatic transient discovered by the Zwicky Transient Facility on 2022 Feb 11 10:42 UT \citep[MJD 59621.4458, $T_{0}$, ][]{2022TNSAN..38....1A} and registered on the Transient Name Server as AT2022cmc. The optical light curve shows a fast decay ($>$1\,mag/day) until day 10 post-discovery, when it settled into a plateau between 21 and 22\,magnitudes \citep[r-band, ][]{2022GCN.31846....1P, 2022GCN.31805....1D, 2022GCN.31729....1C, 2022TNSAN..40....1F}. Early optical spectroscopy detected interstellar medium absorption from a likely host galaxy at $z = 1.193$ \citep{2022GCN.31602....1T}. AT2022cmc is over four times more distant than \textit{Swift} J1644+57 \citep[assuming $H_0$ = 70\,km\,s\textsuperscript{-1}\,Mpc\textsuperscript{-1} and $\Omega_{\textrm{M}}$ = 0.3, ][]{2011Sci...333..199L}. Rapid radio, sub-mm, and X-ray follow-up observations were also performed and bright counterparts were detected in all bands \citep[e.g.][]{2022GCN.31627....1P, 2022ATel15269....1A, 2022GCN.31665....1D, 2022GCN.31641....1Y, 2022GCN.31601....1P}. The high X-ray luminosity, hour-timescale variability and spectra from NICER and NuSTAR \citep{2022ATel15349....1H, 2022ATel15230....1Y, 2022ATel15232....1P} indicated that AT2022cmc is a relativistic TDE, the first to be discovered in over a decade \citep{2011Sci...333..203B, 2012ApJ...753...77C, 2015MNRAS.452.4297B}. \begin{figure*} \centering \includegraphics[width = 0.85\textwidth]{lc.pdf} \caption{The radio light curve of AT2022cmc, combining the observations presented in this work: 3$\sigma$ upper limits at 1.3\,GHz (black downwards facing triangles), \textit{e}--MERLIN upper limits and detections at 5\,GHz (red downwards facing triangles and stars, respectively), and 15.5\,GHz detections and upper limits (blue circles and downwards facing triangles, respectively). Also shown are detections at 5 (red hexagons), 8.5 (purple narrow diamonds), 10 (gold pentagons), 15.9 (blue wide diamonds) and 33.5\,GHz (green squares) from the VLA \citep{2022arXiv221116530A}. The 15.5\,GHz light curve shows clear evidence of inter-observation variability. There is also evidence of variability at 10.5\,GHz.} \label{fig:lc} \end{figure*} \section{Methods}\label{methods} \subsection{Observations} We obtained radio observations of AT2022cmc through guaranteed and rapid response time on the Arcminute Microkelvin Imager Large Array (AMI--LA), \textit{enhanced} -- Multi-Element Radio Linked Interferometer Network (\textit{e}--MERLIN) and MeerKAT radio telescopes. \begin{figure*} \centering \includegraphics[width = 0.7\textwidth]{fig_2.pdf} \caption{\textit{a}: The AMI--LA 15.5\,GHz light curve of AT2022cmc. The fractional variability is 16.5$\pm$1.3\%. \textit{b}: the 15.5\,GHz light curve normalised by dividing through by a single power law fit to the data. The fractional variability of the whole normalised light curve is 6.1$\pm$1.3\%. \textit{c}: the fractional variability of AT2022cmc at 15.5\,GHz as a function of time: each data point is the fractional variability of 15 days of observations plotted at the time average of those 15 days. The blue-shaded region is the time average fractional variability for the whole observing campaign. The purple and green shaded regions refer to the periods of ten-day and one-day variability.} \label{fig:cmc_var} \end{figure*} \subsubsection{AMI--LA} Observations with AMI--LA \citep{amila, 2018MNRAS.475.5677H} started on 2022 February 26\textsuperscript{th} 00:40 UT (14.6\,days after the initial discovery).
For each observation, 3C 286 was used both as the interleaved phase calibrator and to set the absolute flux scale. We use the custom pipeline \textsc{reduce\_dc} (e.g. \citealt{reduce}) to flag instrumental issues, calibrate the bandpass response of the array, correct for atmospheric temperatures, and solve for phases on the interleaved calibrator, which were then applied to the target field. Because 3C 286 was also used as the complex gain calibrator, we reinitialized the sky-model for 3C 286 from the Perley-Butler 2017 standard \citep{perley2017} and solved for complex gains (both amplitudes and phases) with a solution interval of 600\,s, deriving solutions for each of the eight frequency channels. We then image all observations for both fields using the \textsc{casa} task \textit{clean}, and measure the flux density of 3C 286 and AT2022cmc using \textit{imfit}. The resulting flux density measured for 3C 286 is stable to better than 1\%. \subsubsection{\textit{e}--MERLIN} We obtained Rapid Response Time observations (PI: Rhodes, RR13002) of AT2022cmc with \textit{e}--MERLIN beginning with an initial epoch at 5\,GHz on 2022 February 27\textsuperscript{th} 02:30 UT (15.7\,days) with follow-up observations every three weeks. \textit{e}--MERLIN data are reduced using a \textsc{casa}-based (Version 5.8.0) pipeline \citep{2021ascl.soft09006M}. The pipeline averages, flags for radio frequency interference, calibrates and images the data. We do not detect the source at 5\,GHz in the first two epochs. From 40\,days onwards, a point source at around 100\,$\mu$Jy was persistently detected. \subsubsection{MeerKAT} Observations with MeerKAT were awarded through an open-time call for proposals (PI: Rhodes, MKT-20185). Three observations were made over the first 100 days, each at 1.28\,GHz with a bandwidth of 0.856\,GHz. MeerKAT data are reduced using \textsc{oxkat}, a series of semi-automated \textsc{python} scripts \citep{oxkat}. The scripts flag and calibrate the data using standard procedures in \textsc{casa} \citep{CASA}, then images are made using \textsc{wsclean} \citep{offringa-wsclean-2014}. A round of phase-only self-calibration is also performed. We do not detect any radio emission at the position of AT2022cmc in any of the three observations reported here. \subsection{Variability and brightness temperature calculations} To interpret the observations we have made of AT2022cmc, we calculate the brightness temperature using different variability timescales. Here, we present the fractional root mean square (rms) variability, which is used to check whether any observed variability is statistically significant, following the method from \citet{2003MNRAS.345.1271V}. Then we demonstrate how to use a given variability timescale to calculate the brightness temperature and from there the relativistic Doppler factor. The fractional rms variability examines the variability that is in excess of the contribution from the uncertainties associated with the measured flux densities. The fractional variability is given by \begin{equation} F_{\mathrm{var}}=\sqrt{\frac{S^{2}-\overline{\sigma_{\mathrm{err}}^{2}}}{\bar{S_{\nu}}^{2}}} \label{eq:var} \end{equation} where the variance is \begin{equation} S^{2}=\frac{1}{N-1} \sum_{i=1}^{N}\left(S_{\nu,i}-\bar{S_{\nu}}\right)^{2} \end{equation} and the mean square error is \begin{equation} \overline{\sigma_{\mathrm{err}}^{2}}=\frac{1}{N} \sum_{i=1}^{N} \sigma_{\mathrm{err}, i}^{2}\,, \end{equation} where $\bar{S_{\nu}}$ is the mean flux density and $N$ is the number of data points.
The uncertainty associated with the fractional variability is given by: \begin{equation} \operatorname{err}\left(F_{\mathrm{var}}\right)=\sqrt{\left\{\sqrt{\frac{1}{2 N}} \cdot \frac{\overline{\sigma_{\mathrm{err}}^{2}}}{\bar{S_{\nu}}^{2}F_{\mathrm{var}}}\right\}^{2}+\left\{\sqrt{\frac{\overline{\sigma_{\mathrm{err}}^{2}}}{N}} \cdot \frac{1}{\bar{S_{\nu}}}\right\}^{2}} \label{eq:var_err} \end{equation} Equations \ref{eq:var} and \ref{eq:var_err} are used to calculate whether the variability observed in the light curve is real and statistically significant. We consider any F\textsubscript{var} value greater than three times its associated uncertainty, $\operatorname{err}\left(F_{\mathrm{var}}\right)$, as statistically significant. If the observed variability is real, we can calculate the brightness temperature of a radio source starting with: \begin{equation} T_B = \frac{S_{\nu} c^2}{2 k_B \Omega \nu^2} \label{eq:TB} \end{equation} \noindent where $\Omega = R^2/D_{A}^2$ is the solid angle subtended by the source on the sky, $c$ is the speed of light, $k_B$ is the Boltzmann constant, $\nu$ is the observing frequency, $D_{A}$ is the angular diameter distance (i.e. $D_{A} = D/(1+z) = D_L/(1+z)^2$, where $D$ is the comoving distance and $D_L$ the luminosity distance) and $R$ is the radius of the source \citep{2011hea..book.....L}. All of the above variables are observable except for the radius ($R$), which can be inferred from an observed variability timescale: $R = c\Delta t_{\textrm{var}}/(1+z)$. Substituting these values into Equation \ref{eq:TB} gives \citep{1995ARA&A..33..163W}: \begin{equation} T_B = \frac{S_{\nu} D_{L}^2}{2 k_B \nu^2 t_{\textrm{var}}^2 (1+z)^2} \label{eq:TB_2} \end{equation} Brightness temperatures above 10\textsuperscript{12}\,K are not possible from synchrotron radiation \citep{1969ApJ...155L..71K}. Above 10\textsuperscript{12}\,K, the emitting region undergoes significant Compton cooling so that the brightness temperature drops back below 10\textsuperscript{12}\,K; this is known as the inverse-Compton catastrophe. Some incoherent transients have brightness temperatures above 10\textsuperscript{12}\,K, such as gamma-ray bursts, where the cause of the high brightness temperatures is most likely relativistic beaming. Therefore, any brightness temperature measurements above 10\textsuperscript{12}\,K must originate from radiation that is strongly beamed into our line of sight. \noindent We use the brightness temperature measurements to infer the relativistic Doppler factor by substituting the following into Equation \ref{eq:TB_2}: $S_{\nu} = I_{\nu}\Omega$, $\nu = \nu'/(1+z)$, the angular radius $\theta = (\delta_{\textrm{var}}c\Delta t_{\textrm{var}})/(D_{A} (1+z))$, and $I_{\nu}/\nu^{3} = I'_{\nu'}/\nu'^{3}$. Equation \ref{eq:TB_2} can be rearranged to obtain the Doppler factor in terms of the observed brightness temperature (T\textsubscript{var}) and a given rest frame temperature ($\textrm{T}'_{\textrm{var}}$): \begin{equation} \delta_{\textrm{var}} = \sqrt[3]{(1+z)\frac{T_{\textrm{var}}}{T_{\textrm{var}}'}} = \frac{1}{\Gamma(1-\beta\cos{\phi})} \label{eq:df} \end{equation} \noindent where $\Gamma$ is the bulk Lorentz factor, $\beta$ is the velocity of the outflowing material as a fraction of the speed of light and $\phi$ is the angle of the outflow to the line of sight. A result of Doppler boosting is that radiation is beamed into a cone within an opening angle, $\phi \approx 1/\Gamma$.
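
To make the variability and beaming estimates above easy to reproduce, the short \textsc{python} sketch below evaluates Equations \ref{eq:var}, \ref{eq:var_err}, \ref{eq:TB_2} and \ref{eq:df} directly. The numerical inputs in the example (flux density, luminosity distance, observing frequency and variability timescale) are illustrative placeholders rather than our measured values; they are chosen only so that the output lands at the order of magnitude discussed in Section \ref{results}.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant in J / K

def fractional_variability(flux, err):
    # Fractional rms variability and its uncertainty for flux densities
    # `flux` with 1-sigma errors `err` (both array-like).
    flux, err = np.asarray(flux, dtype=float), np.asarray(err, dtype=float)
    N, mean = flux.size, flux.mean()
    S2 = flux.var(ddof=1)           # sample variance with the 1/(N-1) factor
    mean_err2 = np.mean(err**2)     # mean squared measurement error
    F_var = np.sqrt((S2 - mean_err2) / mean**2)
    err_F_var = np.sqrt(
        (np.sqrt(1.0 / (2 * N)) * mean_err2 / (mean**2 * F_var))**2
        + (np.sqrt(mean_err2 / N) / mean)**2)
    return F_var, err_F_var

def brightness_temperature(S_nu_Jy, D_L_m, nu_Hz, t_var_s, z):
    # Variability brightness temperature, SI units throughout.
    S_nu = S_nu_Jy * 1e-26          # Jy -> W m^-2 Hz^-1
    return S_nu * D_L_m**2 / (2 * k_B * nu_Hz**2 * t_var_s**2 * (1 + z)**2)

def doppler_factor(T_var, T_rest, z):
    # Variability Doppler factor for an assumed rest-frame temperature.
    return ((1 + z) * T_var / T_rest)**(1.0 / 3.0)

# Illustrative placeholder values only (not the measured ones):
z = 1.193
D_L = 8.5e9 * 3.086e16              # roughly 8.5 Gpc in metres
T_B = brightness_temperature(S_nu_Jy=1e-3, D_L_m=D_L, nu_Hz=15.5e9,
                             t_var_s=10 * 86400, z=z)
print(T_B, doppler_factor(T_B, T_rest=1e12, z=z))   # ~3e13 K, delta ~ 4
\end{verbatim}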
For the work presented in this paper, we assume a rest frame brightness temperature of T$'$\textsubscript{var} = 10\textsuperscript{12}\,K. However, it is possible that the rest frame temperature is considerably lower \citep[e.g.][]{1994ApJ...426...51R, 1999ApJ...511..112L}. From Equation \ref{eq:df}, one can see that by decreasing T$'$\textsubscript{var}, the Doppler factor we infer would increase. Therefore, the Doppler factors we present in this work are lower limits. \section{Results}\label{results} Figure \ref{fig:lc} shows radio light curves from MeerKAT \citep[1.3\,GHz, ][]{2016mks..confE...1J}, \textit{e}--MERLIN (5\,GHz) and AMI--LA (15.5\,GHz) along with data points at 5, 8.5, 10.5, 15.9 and 33.5\,GHz from the Karl G. Jansky Very Large Array \citep{2022arXiv221116530A}. In all bands (except 1.3\,GHz, where AT2022cmc was not detected), the light curves show a slow rise over the duration of the respective observing campaigns. A radio counterpart was first detected with the AMI--LA from 14\,days post-discovery \citep{2022GCN.31667....1S} and at 5\,GHz with \textit{e}--MERLIN at 40 days post-discovery. The high time cadence of the 15.5\,GHz AMI--LA observations reveals day-timescale variability on top of an underlying slowly rising light curve. To parameterize and understand the short timescale variations occurring within the radio emitting region as observed at 15.5\,GHz, we use the fractional variability. The fractional variability examines the variability that is in excess of the contribution from the uncertainties associated with the measured flux densities \citep{2003MNRAS.345.1271V}. We measure a fractional variability of 16.5$\pm$1.3\,\% for the whole 15.5\,GHz data set, which considers both the underlying long-term flux density increase as well as the day-to-day variability. In order to determine if the short timescale variability is real, we fit a power law to the 15.5\,GHz light curve (shown in Figure \ref{fig:cmc_var}a), divide the light curve data by the fit (Figure \ref{fig:cmc_var}b), and obtain an excess variability for AT2022cmc of 6.1$\pm$1.3\,\%. The observed variability is unlikely to be a result of scintillation at 15.5\,GHz. For the position of AT2022cmc on the sky, the AMI--LA observing band is firmly in the weak scintillation regime \citep{2002astro.ph..7156C}. Any effects due to weak scintillation would cause variability on a timescale of approximately 1\,hour with an amplitude of about 30\% \citep{1997NewA....2..449G}. We measure no statistically significant variability on this timescale. Given that it is unlikely that scintillation is the origin of the observed variability, it is possible that it originates from the TDE. A significant contribution ($\sim$ one third) to the fractional variability measurement originates from the data points between 23 and 33\,days post-discovery, where the flux density increases by 50\% over 10\,days (the highlighted purple region in Figure \ref{fig:cmc_var}). A variability timescale ($\Delta t$) of 10\,days corresponds to a very small emission region ($c \Delta t/(1+z)$) of $\lesssim$ 1.2$\times$10\textsuperscript{16}\,cm and a high brightness temperature of (2.0$\pm$0.4)$\times$10\textsuperscript{13}\,K. From this brightness temperature, we use Equation \ref{eq:df}, with $\textrm{T}_{\textrm{var}}'$ = 10\textsuperscript{12}\,K, to derive a Doppler factor of 4.
There are two other rest frame brightness temperature limits that are often used in the literature: 2$\times$10\textsuperscript{11}\,K \citep{1994ApJ...430..550S} and 5$\times$10\textsuperscript{10}\,K \citep{1994ApJ...426...51R}. By using these other values in our calculations, we obtain Doppler factors of 6 and 10, respectively. The dotted blue lines in Figure \ref{fig:dopp} show the allowed values of $\Gamma$ and $\phi$ for the three values of $\delta_{\textrm{var}}$ quoted above. Between days 33 and 51 post-discovery (the green highlighted region in Figure \ref{fig:cmc_var}), the AMI--LA light curve shows evidence of day-timescale variability. This shorter variability timescale corresponds to an even higher brightness temperature of (2.0$\pm$0.4)$\times$10\textsuperscript{15}\,K. We calculate values of $\delta_{\textrm{var}}$ to be 16, 27 and 30, for T$'$\textsubscript{var} = 10\textsuperscript{12}\,K, 2$\times$10\textsuperscript{11}\,K and 5$\times$10\textsuperscript{10}\,K, respectively \citep{1994ApJ...430..550S, 1994ApJ...426...51R}. All three values are marked by solid blue lines on Figure \ref{fig:dopp}. \begin{figure} \centering \includegraphics[width = \columnwidth]{contour_new.pdf} \caption{A contour plot showing the relativistic Doppler factor ($\delta_{\textrm{var}}$) as a function of angle to the line of sight and bulk Lorentz factor. The solid blue lines correspond to Doppler factors calculated from the day-timescale variability using rest frame brightness temperatures of 10\textsuperscript{12}, 2$\times$10\textsuperscript{11} and 5$\times$10\textsuperscript{10}\,K. The dotted lines are the Doppler factors from the variability timescale of 10 days using the same three rest frame brightness temperatures.} \label{fig:dopp} \end{figure} Given the consistently high cadence data over the first 100\,days after the TDE discovery, we can also search for changes in variability amplitude. Figure \ref{fig:cmc_var}(c) shows the percentage fractional variability as a function of time using bins of 15 days. There is significant variability for the first 70 days, after which the epoch-to-epoch variability ceases. There is no evidence in the data to link the reduction in variability to any spectral index variations. For the first 100\,days, we measure a spectral index of 1.9$\pm$0.1, consistent with what is expected from self-absorbed synchrotron emission. \section{Discussion}\label{discussion} Using a model-independent analysis based on the Doppler factor calculation (Equation \ref{eq:df}), we infer from the day-timescale variability that the bulk Lorentz factor of the outflowing material is $\gtrsim$8 for a jet that is pointing directly towards Earth, i.e. an angle to the line of sight of zero degrees. \citet{2022NatAs.tmp..252P} also inferred a slightly higher bulk Lorentz factor through SED modelling. We also find that after the first 70\,days post-discovery there is a reduction in variability. This decrease could reflect a reduction in mass accretion rate \citep{1994ApJ...430L..93R, 2013MNRAS.429L..20M}, or an intrinsic reduction of variability at the shock front. In order to produce such a highly relativistic outflow, vast amounts of kinetic energy are required, so much so that the observed emission cannot originate from a spherical outflow. \citet{2022arXiv221116530A} used the X-ray emission to estimate an isotropic equivalent kinetic energy of $\approx10^{53-54}$\,erg.
For an estimate of $10^{54}$\,erg, at least 20\,M\textsubscript{\(\odot\)} of material is required (by assuming an optimistic efficiency of 10\% and considering that 50\% of the material will not be accreted onto the SMBH) to produce the observed emission if the outflow is isotropic. This problem is alleviated if instead the outflow is collimated into a jet-like outflow and beamed into our line of sight. Evidence of relativistic outflows has been found in many classes of systems, both galactic and extragalactic. Using Very Long Baseline Interferometry \citep[e.g. ][]{1981Natur.290..365P, 1985ApJ...289..109U, 2001ApJS..134..181J} and multi-wavelength monitoring programs \citep[e.g.][]{2017MNRAS.466.4625L}, Doppler factors as high as those shown in Figure \ref{fig:dopp} have been found in blazars. Gamma-ray bursts, some of the most powerful explosions known, also require high bulk Lorentz factors of at least 100 \citep{2010MNRAS.402.1854Z, 2018A&A...609A.112G}. High angular resolution observations in multiple bands support the requirement for high launch Lorentz factors, with gamma-ray burst jets showing Lorentz factors of around 5 at tens to hundreds of days post-burst \citep{2004ApJ...609L...1T, 2022Natur.610..273M}. We note that, in the case of gamma-ray bursts, it is often assumed that the angle to the line of sight is zero and Lorentz factors are quoted instead of Doppler factors. Within the Milky Way, X-ray binaries have a larger range of Doppler factors. The average jet angle to the line of sight is around 60\textsuperscript{$\circ$}, meaning that in many cases the emission we observe is actually deboosted. When Lorentz factors are calculated considering a given system's inclination angle, they are much lower than those measured for extra-galactic systems, with values around 2 \citep{2020NatAs...4..697B, 2022MNRAS.511.4826C}. The Doppler factors we infer from the ten-day-timescale variability observed in AT2022cmc are consistent with the highest superluminal velocities measured in blazar systems \citep{2005AJ....130.1418J, 2009A&A...494..527H, 2017MNRAS.466.4625L} as well as gamma-ray burst jets. The high Lorentz factor measured for AT2022cmc most likely arises from the transient nature of AT2022cmc: the short-lived energy injection resulted in a higher Doppler factor for a short period of time, followed by the deceleration of the jet. \begin{figure} \centering \includegraphics[width=\columnwidth]{lums_deboosted.pdf} \caption{The 15.5\,GHz luminosities of \textit{Swift} J1644 (blue stars) and AT2022cmc (green circles) for the first 100 days in the rest frame of the respective events. The two events show a similar long term evolution for the period in which we have overlap, i.e. between 6 and 45 rest frame days. We do not consider the Doppler boosting in calculating the luminosities.} \label{fig:luminosities} \end{figure} Figure \ref{fig:luminosities} shows a comparison of the 15.5\,GHz luminosities of \textit{Swift} J1644+57 and AT2022cmc over the first 100\,days since their respective discovery dates, corrected for their respective redshifts. The evolution of both \textit{Swift} J1644+57 and AT2022cmc shows remarkable similarities over the first 40 days in their respective rest frames, where the light curves follow a power law rise of approximately $S_{\nu} \propto t^{0.4}$ \citep{2012ApJ...748...36B}. Unlike in AT2022cmc, \textit{Swift} J1644+57 shows no statistically significant variability.
To infer the same bulk Lorentz factor in the AMI--LA data of \textit{Swift} J1644+57 as we have for AT2022cmc, we would have to observe variability on a timescale of less than one day, a timescale not sampled by the radio follow-up campaigns. \section{Conclusions} Our radio campaign to observe the newly discovered relativistic TDE AT2022cmc has produced a high-cadence, multi-frequency data set spanning the first 100 days post-discovery. The 15.5\,GHz light curve shows short-timescale variability, which corresponds to a very high brightness temperature, implying the presence of Doppler boosting and providing a model-independent confirmation of a relativistic outflow. The analysis we have performed here is vital to our long-term understanding, modelling and interpretation of AT2022cmc and other relativistic TDEs. This is the first direct evidence of Doppler beaming in any type of TDE and confirms that such systems are truly relativistic. \section*{Acknowledgements} L. R. acknowledges the support given by the Science and Technology Facilities Council through an STFC studentship. We thank the Mullard Radio Astronomy Observatory staff for scheduling and carrying out the AMI--LA observations. The AMI telescope is supported by the European Research Council under grant ERC-2012-StG-307215 LODESTONE, the UK Science and Technology Facilities Council, and the Universities of Cambridge and Oxford. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. \textit{e}--MERLIN is a National Facility operated by the University of Manchester at Jodrell Bank Observatory on behalf of STFC, part of UK Research and Innovation. This research has made use of NASA's Astrophysics Data System, and the Python packages \textsc{numpy} \citep{5725236} and \textsc{matplotlib} \citep{4160265}. \section*{Data Availability} All the data used in this paper are available in the online Appendices along with AMI--LA and MeerKAT radio maps and variability consistency checks. \bibliographystyle{mnras}
\section{Introduction} Multi-step update targets play an important role in TD learning \citep{sutton:ml88} and reinforcement learning \citep{sutton:book98, szepesvari:book09}. The core concept behind TD learning is to bootstrap the value of one state (or state-action pair) from the value of another state (or state-action pair). With one-step update targets, the state that is bootstrapped from lies one time step in the future; with multi-step update targets, bootstrapping occurs with respect to values of states that lie further in the future. Controlling from which states bootstrapping occurs is important, because it affects the fundamental trade-off between bias and variance of updates. The trade-off that produces the best performance is different from domain to domain, but for most domains the best trade-off lies somewhere in between a one-step update target (high bias, but low variance) and an update with the full return (unbiased, but high variance). This has made TD($\lambda$), where the trade-off between variance and bias of the update target can be controlled by the parameter $\lambda$, one of the most popular TD methods in linear function approximation. While TD($\lambda$) and its control variant Sarsa($\lambda$) are very popular in the case of linear function approximation, when non-linear function approximation is used to represent the value function, single-step methods are the norm. A reason could be that in many domains with non-linear function approximation TD($\lambda$) does not perform particularly well. In particular, it is very susceptible to divergence of values. We argue that the underlying reasons for this instability are not unique to non-linear function approximation; the instability is a more general phenomenon of traditional TD($\lambda$). However, the issues are more prominent when non-linear function approximation is used for two reasons. First, for table lookup or linear function approximation with binary features, an alternative version of TD($\lambda$) is available (TD($\lambda$) with replacing traces) that is less sensitive to divergence \citep{singh:ml96}. Second, value blow-ups occur especially in domains where the same feature is active (i.e., has a value $\neq 0$) for many subsequent time steps \citep{vanseijen:arxiv15b}. This is something that occurs often with non-linear function approximation, because features are typically more general in this setting and can be active over a large part of the state-space. We show that the susceptibility of TD($\lambda$) to divergence stems from a deviation of TD($\lambda$) from the general TD update rule based on gradient descent that is formalized by its forward view. Unfortunately, while the forward view is less susceptible to divergence, it is expensive to implement (both the computation time per step and required memory grow over time), so that it is not a practical alternative to TD($\lambda$). To address this, we present an alternative version of TD($\lambda$), which we call forward TD($\lambda$), that implements the gradient-descent-based update rule exactly and is computationally efficient as well. The price that is paid to achieve this is that updates occur with a delay. However, we show empirically that the advantages of having an exact implementation of the gradient-descent-based update rule substantially outweigh the disadvantages of having a delay in the updates. \section{Related Work} This work is related to true online temporal-difference learning \citep{vanseijen:icml14, vanseijen:arxiv15b}.
The non-linear, online $\lambda$-return algorithm presented in Section \ref{sec:analysis} is a direct extension of the linear, online $\lambda$-return algorithm that underlies true online TD($\lambda$). In the linear case, the computationally inefficient forward view equations can be rewritten in computationally efficient backward view equations, yielding the true online TD($\lambda$) algorithm. Unfortunately, this is not possible in the non-linear case, because the derivation of the true online equations makes use of the fact that the gradient with respect to the value function is independent of the weight vector, which does not hold in the case of non-linear function approximation. Forward TD($\lambda$) is similar to a method introduced by \citet{cichosz:jair95}. Specifically, Cichosz's method is based on the same update target as forward TD($\lambda$). Interestingly, Cichosz presents his method in the context of linear function approximation as a computationally efficient alternative to traditional TD($\lambda$). While we focus primarily on sample efficiency in the non-linear setting, like Cichosz's method, forward TD($\lambda$) also has computational advantages. In fact, forward TD($\lambda$) is more efficient than Cichosz's method. Forward TD($\lambda$) has the same computation-time complexity as TD(0); by contrast, the computation-time of Cichosz's method depends on $K$. \section{Background} Our problem setting is that of a \emph{Markov decision processes} (MDP), which can be described as a 5-tuple of the form $\langle \mathcal{S}, \mathcal{A}, p, r, \gamma \rangle$, consisting of $\mathcal{S}$, the set of all states; $\mathcal{A}$, the set of all actions; $p(s'|s,a)$, the transition probability function, giving for each state $s \in \mathcal{S}$ and action $a \in \mathcal{A}$ the probability of a transition to state $s' \in \mathcal{S}$ at the next step; $r(s,a,s')$, the reward function, giving the expected reward for a transition from $(s,a)$ to $s'$. $\gamma$ is the discount factor, specifying how future rewards are weighted with respect to the immediate reward. An MDP can contain terminal states, which terminate an episode. Mathematically, a terminal state can be interpreted as a state with a single action that results in a reward of 0 and a transition to itself. The return at time $t$ is defined as the discounted sum of rewards, observed after $t$: \begin{displaymath} G_t = R_{t+1} + \gamma\,R_{t+2} + \gamma^2\,R_{t+3}+... = \sum_{i=1}^\infty\,\gamma^{i-1}\, R_{t+i}\thinspace, \end{displaymath} where $R_{t+1}$ is the reward received after taking action $A_t$ in state $S_t$. Actions are taken at discrete time steps $t = 0,1,2,...$ according to a \emph{policy} $\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$, which defines for each action the selection probability conditioned on the state. Each policy $\pi$ has a corresponding state-value function $v_{\pi}(s)$, which maps each state $s \in \mathcal{S}$ to the expected value of the return $G_t$ from that state, when following policy $\pi$: $$v_{\pi}(s) = \mathbb{E}\{ G_t \,|\, S_t = s, \pi \}\thinspace.$$ The value of a terminal state is (by definition) 0. Temporal-Difference (TD) learning aims to learn the state-value function using a strategy based on stochastic gradient descent \citep{bertsekas:book95}. Let $\hat V(s | {\boldsymbol \theta})$ be an estimate of $v_\pi(s)$ given the weight vector ${\boldsymbol \theta} \in \mathbb{R}^n$. 
Then, the general form of the TD update is: \begin{equation} {\boldsymbol \theta}_{t+1} = {\boldsymbol \theta}_t + \alpha \Big( U_t - \hat V(S_t | {\boldsymbol \theta}_t) \Big) \nabla_\theta \hat V(S_t | {\boldsymbol \theta}_t)\,, \label{eq:non-linear update} \end{equation} where $\alpha > 0$ is the step-size parameter, $U_t$ is the update target, and $\nabla_\theta \hat V$ is the gradient of $\hat V$ with respect to the weight vector ${\boldsymbol \theta}$. The update target $U_t$ is some estimate of the value $v_\pi (S_t)$. A simple example is the TD(0) update target, which uses the estimate of the next state to bootstrap from: $$U_t = R_{t+1} + \gamma \hat V(S_{t+1}|{\boldsymbol \theta}_t)\,.$$ The update equations for TD($\lambda$) are: \begin{eqnarray*} \delta_t &=& R_{t+1} + \gamma \hat V(S_{t+1}| {\boldsymbol \theta}_t) - \hat V(S_{t}| {\boldsymbol \theta}_t) \\ {\boldsymbol e}_t &=& \gamma\lambda {\boldsymbol e}_{t-1} + \nabla_{\boldsymbol \theta} \hat V(S_{t}| {\boldsymbol \theta}_t) \\ {\boldsymbol \theta}_{t+1} &=& {\boldsymbol \theta}_t + \alpha \delta_t\,{\boldsymbol e}_{t} \end{eqnarray*} where ${\boldsymbol e}_t$ is called the \emph{eligibility-trace vector}. While these updates appear to deviate from the gradient-descent-based update rule given in (\ref{eq:non-linear update}), there is a close connection with this update rule. In the next section, we go deeper into the details of this relation. \section{Analysis of TD($\lambda$)} \label{sec:analysis} That TD($\lambda$) is a multi-step method is not immediately obvious, because its update equations are different in form than (\ref{eq:non-linear update}), making it hard to specify what the update target is. That TD($\lambda$) is a multi-step method becomes clear from the fact that the weights computed by TD($\lambda$) are similar to those computed by a different algorithm that does have a well-defined multi-step update target, called the $\lambda$-return algorithm. The $\lambda$-return algorithm is also referred to as the forward view of TD($\lambda$). While the traditional $\lambda$-return algorithm is similar to TD($\lambda$) only at the end of an episode \citep{sutton:book98, bertsekas:book96}, below we specify a more general version that is similar to TD($\lambda$) at \emph{all time steps}. We define the $\lambda$-return for time step $t$ with horizon $h \geq t+1$ as follows: \begin{equation} G^{\lambda|h}_t := (1-\lambda) \sum_{n=1}^{h-t-1} \lambda^{n-1} G_t^{(n)} + \lambda^{h-t-1} G_t^{(h-t)}\, \label{eq:interim lambda return1} \end{equation} where $G_t^{(n)}$ is the $n$-step return, defined as: \begin{displaymath} G_t^{\,(n)} := \sum_{k=1}^n \gamma^{k-1} R_{t+k} + \gamma^n\, \hat V(S_{t+n}| {\boldsymbol \theta}_{t+n-1}). \end{displaymath} Note that $G_t^{\lambda|h}$ uses information only up to the horizon $h$. We define ${\boldsymbol \theta}_t$ as the result of a sequence of updates of the form (\ref{eq:non-linear update}), based on states $S_0, \dots, S_{t-1}$ and update targets $G_0^{\lambda|t}, \dots, G_{t-1}^{\lambda|t}$, respectively. 
Formally, we define ${\boldsymbol \theta}_t := {\boldsymbol \theta}_t^t$, with ${\boldsymbol \theta}_t^t$ incrementally defined by:\footnote{Note that the sequence of updates is different for each time step, due to the different horizons, requiring the double indices for the weight vectors.} \begin{equation} {\boldsymbol \theta}_{k+1}^t := {\boldsymbol \theta}_k^t + \alpha \Big( G_k^{\lambda|t} - \hat V(S_k | {\boldsymbol \theta}_k^t) \Big) \nabla_\theta \hat V(S_k | {\boldsymbol \theta}_k^t), \qquad\mbox{for } 0 \leq k < t\,. \label{eq:interim update} \end{equation} with ${\boldsymbol \theta}_0^t := {\boldsymbol \theta}_0$ for all $t$ and ${\boldsymbol \theta}_0$ being the weight vector at the start of the episode. We call the algorithm that implements these updates the \emph{online $\lambda$-return algorithm}. Furthermore, we define the \emph{offline $\lambda$-return algorithm} as the algorithm that performs (\ref{eq:interim update}) only at the end of an episode. That is, ${\boldsymbol \theta}_t := {\boldsymbol \theta}_0$ for $0 \leq t < T$, with $T$ the time step of termination, while ${\boldsymbol \theta}_T: = {\boldsymbol \theta}_T^T$, with ${\boldsymbol \theta}_T^T$ defined incrementally by (\ref{eq:interim update}). Figure \ref{fig:offline vs online} illustrates the difference between the online and offline $\lambda$-return algorithm and TD($\lambda$), by showing the RMS error on a random walk task. The task consists of 10 states laid out in a row plus a terminal state on the left. Each state transitions with 70\% probability to its left neighbour and with 30\% probability to its right neighbour (or to itself in case of the right-most state). All rewards are 1, and $\gamma = 1$. The right-most state is the initial state. \begin{figure}[thb] \begin{center} \includegraphics[width=9cm]{./online_vs_offline4.pdf} \caption{RMS error as function of time, for the first 3 episodes of a random walk task, for $\lambda = 1$ and $\alpha = 0.2$. The error shown is the RMS error over all states, normalized by the initial RMS error.} \label{fig:offline vs online} \end{center} \end{figure} The theorem below states that for appropriately small step-sizes TD($\lambda$) behaves like the online $\lambda$-return algorithm. We provide the proof for the theorem in Appendix \ref{sec:proof}. The theorem uses the term $\Delta_i^t$, which we define as: $$\Delta_i^t := \big(\bar G_i^{\lambda|t} - \hat V(S_i | {\boldsymbol \theta}_0)\big)\nabla_{\boldsymbol \theta} \hat V(S_i | {\boldsymbol \theta}_0)\,,$$ with $\bar G_i^{\lambda|t}$ the interim $\lambda$-return for state $S_i$ with horizon $t$ that uses ${\boldsymbol \theta}_0$ for all value evaluations. Note that $\Delta_i^t$ is independent of the step-size. \begin{theorem} Let ${\boldsymbol \theta}_0$ be the initial weight vector, ${\boldsymbol \theta}_{t}^{td}$ be the weight vector at time $t$ computed by TD($\lambda$), and ${\boldsymbol \theta}_{t}^{\lambda}$ be the weight vector at time $t$ computed by the online $\lambda$-return algorithm. Furthermore, assume that $\nabla_{\boldsymbol \theta} \hat V$ is well-defined and continuous everywhere and that $\sum_{i=0}^{t-1} \Delta_i^t \neq {\boldsymbol 0}$. 
Then, for all time steps $t$: $$\frac{|| {\boldsymbol \theta}_{t}^{td} - {\boldsymbol \theta}_{t}^{\lambda} ||}{|| {\boldsymbol \theta}_{t}^{td} - {\boldsymbol \theta}_0 ||} \rightarrow 0 \qquad\mbox{as $\,\,\,\alpha \rightarrow 0$}.$$ \end{theorem} While TD($\lambda$) behaves for small step-size like the $\lambda$-return algorithm, in practice a small step-size often results in slow learning. Hence, higher step-sizes are desirable. Figure \ref{fig:offline vs online} suggests that for higher step-sizes, TD($\lambda$) has a disadvantage with respect to the online $\lambda$-return algorithm. We analyze why this is the case, using the one-state example shown in the left of Figure \ref{fig:one-state example}. \vspace{-0.1cm} \begin{figure}[thb] \begin{minipage}[c]{0.5\textwidth} \hspace{1cm} \includegraphics[width=0.5\textwidth]{./one_state_example_upright_gamma2.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.6\textwidth} \includegraphics[width=0.6\textwidth]{./TD_versus_lambda_small.pdf} \end{minipage} \vspace{-0.2cm} \caption{{\it Left: } One-state example (the square indicates a terminal state). {\it Right: } The RMS error of the state value at the end of an episode, averaged over the first 10 episodes, for $\lambda = 1$.} \label{fig:one-state example} \end{figure} The right of Figure \ref{fig:one-state example} shows the RMS error over the first 10 episodes for different step-sizes and $\lambda = 1$. While for small step-sizes TD($\lambda$) indeed behaves like the $\lambda$-return algorithm, for larger step-sizes the difference becomes huge. To understand the reason for the large difference in performance, we derive an analytical expression for the value at the end of an episode. First, we consider the $\lambda$-return algorithm. Because there is only one state involved, we indicate the value of this state simply by $\hat V$. The value at the end of an episode, $\hat V_T$, is equal to $\hat V_T^T$, resulting from the update sequence: $$ \hat V_{k+1}^T = \hat V^T_k + \alpha ( G_k^{\lambda | T} - \hat V^T_k) \qquad \mbox{ for } 0 \leq k < T$$ By substitution, we can directly express $\hat V_T$ in terms of the initial value, $\hat V_0$, and the update targets: $$ \hat V_T = (1-\alpha)^T \hat V_0 + \alpha (1-\alpha)^{T-1} G_0^{\lambda | T} + \alpha (1-\alpha)^{T-2} G_1^{\lambda | T} + \cdots + \alpha G_{T-1}^{\lambda | T} $$ Using that $G_k^{\lambda|T} = 1$ for all $k$, this can be written as a single pseudo-update: \begin{equation} \hat V_T = \hat V_0 + \beta (1 - \hat V_0) \label{eq:VT} \end{equation} with $\beta = 1 - (1-\alpha)^T$. Note that a larger $\alpha$ or $T$ results in a larger $\beta$, but its value is bounded. Specifically, $ 0 \leq \alpha \leq 1 \Rightarrow 0 \leq \beta \leq 1$. We now consider TD($\lambda$). The update at the end of an episode is $\hat V_T = \hat V_{T-1} + \alpha e_{T-1} \delta_{T-1}$. In our example, $\delta_t = 0$ for $0 \leq t < T-1$, while $\delta_{T-1} = 1 - \hat V_{T-1}$. Because $\delta_t$ is 0 for all time steps except the last, $\hat V_{T-1} = \hat V_0$. Furthermore, $\nabla_{\boldsymbol \theta} \hat V$ reduces to 1 in our example, resulting in $e_{T-1}= T$. Substituting all this in the above equation also reduces it to pseudo-update (\ref{eq:VT}), but with $\beta = \alpha T$. So for TD($\lambda$), $\beta$ can grow much larger than 1, causing divergence of values, even for $\alpha < 1$.
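
The divergence mechanism derived above is straightforward to verify numerically. The Python sketch below simulates the one-state example under the assumptions used in the text: $\gamma = \lambda = 1$, a tabular value, zero reward on every self-transition and a single reward of 1 upon termination after exactly $T$ steps (so that $G_k^{\lambda|T} = 1$ for all $k$). It compares the end-of-episode value produced by TD($\lambda$) with accumulating traces to that of the $\lambda$-return algorithm.
\begin{verbatim}
def td_lambda_one_state(V0, alpha, T):
    # Tabular TD(lambda) with accumulating traces on the one-state example
    # (gamma = lambda = 1, reward 1 only on termination after T steps).
    V, e = V0, 0.0
    for t in range(T):
        reward = 1.0 if t == T - 1 else 0.0
        next_value = 0.0 if t == T - 1 else V   # terminal state has value 0
        delta = reward + next_value - V
        e = e + 1.0                             # gradient of V w.r.t. itself is 1
        V = V + alpha * e * delta
    return V

def lambda_return_one_state(V0, alpha, T):
    # End-of-episode value of the lambda-return algorithm: T updates
    # towards the target G_k^{lambda|T} = 1.
    V = V0
    for _ in range(T):
        V = V + alpha * (1.0 - V)
    return V

alpha, T, V0 = 0.2, 10, 0.0
print(td_lambda_one_state(V0, alpha, T))      # 2.0    (beta = alpha * T)
print(lambda_return_one_state(V0, alpha, T))  # 0.89   (beta = 1 - (1-alpha)^T)
\end{verbatim}
For $\alpha = 0.2$ and $T = 10$, TD($\lambda$) overshoots to $2.0$ ($\beta = \alpha T = 2$), whereas the $\lambda$-return algorithm yields $\approx 0.89$ ($\beta = 1 - (1-\alpha)^{T}$), in line with the analysis above.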
This is the reason that TD($\lambda$) can be very sensitive to the step-size, and it explains why the optimal step-size for TD($\lambda$) is much smaller than the optimal step-size for the $\lambda$-return algorithm in Figure \ref{fig:one-state example} ($\alpha \approx 0.15$ versus $\alpha = 1$, respectively). Moreover, because the variance on $\beta$ is higher for TD($\lambda$), the performance at optimal $\alpha$ of TD($\lambda$) is worse than the performance at optimal $\alpha$ for the $\lambda$-return algorithm. In Section \ref{sec:empirical}, we show empirically that the general behaviour of TD($\lambda$) shown in Figure \ref{fig:one-state example} also occurs in more complex domains. While the online $\lambda$-return algorithm has clear advantages over TD($\lambda$), it is not a practical algorithm: the number of updates that need to be performed per time step grows over time, as well as the memory requirements. On the other hand, the offline $\lambda$-return algorithm is undesirable, because it performs no updates during an episode and cannot be applied to non-episodic tasks. In the next section, we present forward TD($\lambda$), a computationally efficient algorithm that forms a middle ground between the online and the offline $\lambda$-return algorithm. \section{Forward TD($\lambda$)} The online $\lambda$-return algorithm uses update targets that grow with the data horizon. This has the advantage that updates can be performed immediately, but also causes the computation time per time step to grow over time. In this section, we present a computationally efficient method that performs updates using a $\lambda$-return with a horizon that lies a fixed number of time steps in the future: $G_t^{\lambda|t+K}$ with $K \in \{1, 2, \dots \}$. We refer to this update target as the $K$-bounded $\lambda$-return. A consequence of using update target $G_t^{\lambda|t+K}$ with fixed $K$ is that during the first $K-1$ time steps no updates occur. In other words, ${\boldsymbol \theta}_t := {\boldsymbol \theta}_0$ for $1 \leq t < K$. The weights ${\boldsymbol \theta}_K$ through ${\boldsymbol \theta}_{T-1}$ are defined as follows: $${\boldsymbol \theta}_{t+K} := {\boldsymbol \theta}_{t+K-1} + \alpha \Big( G_{t}^{\lambda|t+K} - \hat V(S_{t} | {\boldsymbol \theta}_{t +K-1}) \Big) \nabla_\theta \hat V(S_{t} | {\boldsymbol \theta}_{t+K-1})\,, \qquad\mbox{ for } \,\,0 \leq t < T - K.$$ At the end of an episode $K$ updates occur. Following the convention of the double indices when multiple updates occur at a single time step, we define ${\boldsymbol \theta}_T := {\boldsymbol \theta}^T_K$, with ${\boldsymbol \theta}^T_K$ defined incrementally by: $${\boldsymbol \theta}_{k+1}^T := {\boldsymbol \theta}_{k}^T + \alpha \Big( G_{T-K+k}^{\lambda|T} - \hat V(S_{T-K+k} | {\boldsymbol \theta}_{k}^T) \Big) \nabla_\theta \hat V(S_{T-K+k} | {\boldsymbol \theta}_{k}^T) \qquad\mbox{ for } \,\,0 \leq k < K\,,$$ with ${\boldsymbol \theta}_0^T := {\boldsymbol \theta}_{T-1}$. The question of how to set $K$ involves a trade-off. On the one hand, larger values of $K$ bring the end-of-episode weights closer to those of the $\lambda$-return algorithm; on the other hand, smaller values of $K$ result in a shorter delay of updates. In general, $K$ should be set in such a way that $G_t^{\lambda|t+K}$ is an accurate estimate of $G_t^{\lambda|T}$, while not being unnecessarily large.
How accurately $G_t^{\lambda|t+K}$ approximates $G_t^{\lambda|T}$ depends on the value $\gamma\lambda$, because the contribution of a reward to the $K$-bounded $\lambda$-return reduces exponentially with $\gamma\lambda$ (we will show this below). While the immediate reward has a contribution of 1, the contribution of a reward $K$ time steps in the future is only $(\gamma\lambda)^K$. Hence, a sensible strategy for setting $K$ is to find the smallest value of $K$ that still ensures that the value $(\gamma\lambda)^K$ is smaller than some fraction $\eta$. This value can be computed as follows: \begin{equation} K = \mathrm{ceil}\big( \log(\eta) / \log(\gamma\lambda) \big)\,, \label{eq:K value} \end{equation} where $\mathrm{ceil}(\cdot)$ rounds up to the nearest integer. Note that for $\gamma\lambda < \eta$, $K = 1$. The value $K=1$ is special because $G_t^{\lambda|t+1}$ reduces to the TD(0) update target, independent of the value of $\lambda$. Furthermore, there is no delay in updates. Hence, forward TD($\lambda$) behaves exactly like TD(0) in this case. For $\gamma\lambda = 1$, no finite value of $K$ can ensure that an accurate estimate of $G_t^{\lambda|T}$ is obtained. The only way to resolve this is to postpone all updates to the end of an episode (which can be interpreted as $K = \infty$). In this case, the performance of forward TD($\lambda$) is equal to that of the offline $\lambda$-return algorithm. Next, we discuss how forward TD($\lambda$) can be implemented efficiently. Our implementation is based on two ways of computing the $K$-bounded $\lambda$-return. We derive the underlying equations in Appendix \ref{sec:bounded lambda-return}. The first way is based on the equation: \begin{equation} G^{\lambda | h+1}_t = G^{\lambda | h}_t + (\gamma\lambda)^{h-t} \delta'_{h}\,, \qquad\mbox{ for } h \geq t+1\,, \label{eq:Glambda h+1} \end{equation} with $$\delta'_h := R_{h+1} + \gamma \hat V(S_{h+1} | {\boldsymbol \theta}_{h}) - \hat V(S_{h} | {\boldsymbol \theta}_{h-1}) \,.$$ Note that $\delta'_i$ differs from $\delta_i$ in the index of the weight vector used for the value of $S_i$. Using (\ref{eq:Glambda h+1}) incrementally, $G_t^{\lambda|t+K}$ can be computed, starting from $G_t^{\lambda| t+1} = R_{t+1} + \gamma \hat V(S_{t+1}| {\boldsymbol \theta}_t)$, in $K-1$ updates. The second way is based on the equation: \begin{equation} G^{\lambda | h}_{t+1} = (G^{\lambda | h}_t - \rho_t )/\gamma\lambda\,, \qquad\mbox{ for } h \geq t+2\,, \label{eq:update with rho} \end{equation} with $$ \rho_t = R_{t+1} + \gamma(1-\lambda)\, \hat V(S_{t+1} | {\boldsymbol \theta}_t)\,.$$ This equation can be used to compute $G_{t+1}^{\lambda|t+K}$ from $G_{t}^{\lambda|t+K}$. Performing one more update using (\ref{eq:Glambda h+1}) results in the $K$-bounded $\lambda$-return for time step $t+1$: $G_{t+1}^{\lambda|t+1+K}$. This way of computing the $K$-bounded $\lambda$-return requires only two updates (for any value of $K$). In theory, the $K$-bounded $\lambda$-return has to be computed incrementally from scratch (using Equation \ref{eq:Glambda h+1}) only for the initial state; for the other states it can be computed efficiently using only 2 updates. Unfortunately, this approach does not work well in practice. The reason is that tiny rounding errors that occur on any computer get blown up by dividing by $\gamma\lambda$ over and over again. For example, consider $\gamma\lambda = 0.5$. Then, rounding errors in the $K$-bounded $\lambda$-return at time $t$ will be blown up by a factor $(1 / \gamma\lambda)^{100} = 2^{100}$ at time $t+100$.
Fortunately, we can avoid these blow-ups in an elegant way, by recomputing the $K$-bounded $\lambda$-return from scratch every $K$ time steps. This ensures that rounding errors will never grow by a factor larger than $(1/\gamma\lambda)^{K}$. Moreover, as we argued above, $K$ is set in such a way that the value $(\gamma\lambda)^K$ is just slightly smaller than the hyper-parameter $\eta$. Hence, rounding errors will not grow by a factor larger than approximately $1/\eta$. Because $\eta$ will typically be set to $0.01$ or larger (smaller values of $\eta$ will result in longer update delays, which is undesirable), no issues with rounding error blow-ups will occur. We now analyze the computational complexity of forward TD($\lambda$). For reference purposes, the pseudocode for implementing forward TD($\lambda$) is provided in Algorithm \ref{al:forward TD(lambda)}. First, we look at computation time. Between time step $K$ and the end of an episode, exactly one state-value evaluation and one state-value update occur at each time step. All other computations have $\mathcal{O}(1)$ cost. At the end of the episode, an additional $K-1$ value updates occur, so there is a spike in computation at the end of an episode, but because during the first $K-1$ time steps of an episode no updates occur, on average the algorithm still performs only one value update and one value evaluation per time step. This is the same as for TD(0). Hence, forward TD($\lambda$) is very efficient from a computation time perspective. In terms of memory, forward TD($\lambda$) requires the storage of the $K$ most recent feature vectors. So, if $n$ is the number of features, forward TD($\lambda$) requires additional memory of size $\mathcal{O}(nK)$ over TD(0) (note that forward TD($\lambda$) does not require storage of an eligibility-trace vector). If $n$ is large and memory is scarce, $K$ can be bounded by some value $K_{\max}$ to deal with this.
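
As a sanity check of the two recursions, the Python sketch below computes a $K$-bounded $\lambda$-return once from scratch with Equation (\ref{eq:Glambda h+1}) and then advances it to the next time step in constant time with Equation (\ref{eq:update with rho}), verifying that both routes agree. For simplicity, the sketch evaluates all states with a single fixed value function, i.e., it ignores the time-dependence of the weight vector in $\delta'_h$; this assumption is made only to keep the example self-contained.
\begin{verbatim}
import math
import random

def lambda_return_from_scratch(rewards, values, t, h, gamma, lam):
    # Builds G_t^{lambda|h} by starting from G_t^{lambda|t+1} and repeatedly
    # extending the horizon by one step; rewards[i] = R_{i+1}, values[i] = V(S_i).
    G = rewards[t] + gamma * values[t + 1]
    for j in range(t + 1, h):
        delta = rewards[j] + gamma * values[j + 1] - values[j]
        G += (gamma * lam) ** (j - t) * delta
    return G

def advance_lambda_return(G_t, rewards, values, t, gamma, lam):
    # O(1) step from G_t^{lambda|h} to G_{t+1}^{lambda|h}.
    rho = rewards[t] + gamma * (1.0 - lam) * values[t + 1]
    return (G_t - rho) / (gamma * lam)

gamma, lam, eta = 0.99, 0.9, 0.01
K = math.ceil(math.log(eta) / math.log(gamma * lam))   # K = 40 for these settings
T = 2 * K + 5
rewards = [random.random() for _ in range(T)]
values = [random.random() for _ in range(T + 1)]       # fixed value estimates

G0 = lambda_return_from_scratch(rewards, values, 0, K, gamma, lam)  # G_0^{lambda|K}
# Cheap route: advance to t = 1, then extend the horizon by one step.
G1 = advance_lambda_return(G0, rewards, values, 0, gamma, lam)
G1 += (gamma * lam) ** (K - 1) * (rewards[K] + gamma * values[K + 1] - values[K])
# Reference route: recompute G_1^{lambda|1+K} from scratch.
G1_ref = lambda_return_from_scratch(rewards, values, 1, 1 + K, gamma, lam)
print(abs(G1 - G1_ref))   # agrees up to floating-point rounding
\end{verbatim}
Forward TD($\lambda$) interleaves these recursions with the weight updates and with the periodic from-scratch recomputation discussed above, as laid out in Algorithm \ref{al:forward TD(lambda)}.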
\vspace{-0.2cm} \begin{algorithm}[!thb] \small \begin{algorithmic}[0] \STATE {\bf INPUT:} $\alpha, \lambda, \gamma, {\boldsymbol \theta}_{init}, \eta , K_{max}$ \it{(optional)} \STATE ${\boldsymbol \theta} \leftarrow {\boldsymbol \theta}_{init}$ \STATE If $\,\,\gamma\lambda > 0\,\,$ then: $\,\,K = ceil\big( \log(\eta) / \log(\gamma\lambda) \big)\,\,$, else: $\,\,\,K = 1\,\,\,$ \STATE $K = \min (K_{max}, K)$ \qquad \it{(optional)} \STATE $c_{final} \leftarrow (\gamma\lambda)^{K-1}$ \STATE Loop (over episodes): \STATE \quad $\mathcal{F} \leftarrow \emptyset$ \qquad// $\mathcal{F}$ is a FIFO queue (max length: $K$) \STATE \quad $U_{sync} \leftarrow 0; \,\,\, i \leftarrow 0; \,\,\, c \leftarrow 1; \,\,\, V_{current} \leftarrow 0; \,\,\, ready \leftarrow false $ \STATE \quad obtain initial state $S$ \qquad // or ${\boldsymbol \phi}(S)$ \STATE \quad While $S$ is not terminal, do: \STATE \quad\qquad observe reward $R$ and next state $S'$ \STATE \quad\qquad If $S'$ is terminal: \,\,$V_{next} \leftarrow 0 $\,\,, else: \,\,$V_{next} \leftarrow \hat V(S' | {\boldsymbol \theta})$ \STATE \quad\qquad $\rho \leftarrow R + \gamma (1-\lambda) V_{next}$ \STATE \quad\qquad push tuple $\langle S, \rho \rangle$ on $\mathcal{F}$ \qquad // or \,$\langle {\boldsymbol \phi}(S), \rho \rangle$ \STATE \quad\qquad $\delta' \leftarrow R + \gamma V_{next} - V_{current}$ \STATE \quad\qquad$V_{current} \leftarrow V_{next}$ \STATE \quad\qquad If $i = K -1 $ : \STATE \quad\qquad\qquad $U \leftarrow U_{sync}$ \STATE \quad\qquad\qquad $U_{sync} \leftarrow V_{current}\,;\,\,\,i \leftarrow 0;\,\,\, c \leftarrow 1; \,\,\, ready \leftarrow true$ \STATE \quad\qquad Else: \STATE \quad\qquad\qquad $U_{sync} \leftarrow U_{sync} + c \cdot \delta' $ \STATE \quad\qquad\qquad $ i \leftarrow i+1;\,\,\,c \leftarrow \gamma\lambda\cdot c$ \STATE \quad\qquad If $ready$ : \STATE \quad\qquad\qquad $U \leftarrow U + c_{final}\cdot \delta'$ \qquad\qquad// $G_t^{\lambda|t+K} \Leftarrow G_t^{\lambda|t+K-1}$ \STATE \quad\qquad\qquad pop $\langle S_p, \rho_p \rangle$ from $\mathcal{F}$ \STATE \quad\qquad\qquad update ${\boldsymbol \theta}$ using $S_p$ and $U$ \STATE \quad\qquad\qquad If $\,\, K \neq 1: \,\, U \leftarrow \big(U - \rho_p \big) / (\gamma\lambda) $ \qquad\qquad// $G_{t+1}^{\lambda|t + K} \Leftarrow G_t^{\lambda| t + K}$ \STATE \quad\qquad $S \leftarrow S'$ \STATE \quad If $ready = false$: \,\,\,$U \leftarrow U_{sync}$ \STATE \quad While $\mathcal{F}$ not empty: \STATE \quad\qquad pop $\langle S_p, \rho_p \rangle$ from $\mathcal{F}$ \STATE \quad\qquad update ${\boldsymbol \theta}$ using $S_p$ and $U$ \STATE \quad\qquad If $\,\, K \neq 1:\,\, U \leftarrow (U - \rho_p) / \gamma\lambda$ \caption{forward TD($\lambda$)} \label{al:forward TD(lambda)} \end{algorithmic} \end{algorithm} \section{Empirical Comparisons} \label{sec:empirical} In our first experiment, we evaluate the performance of TD($\lambda$), forward TD($\lambda$) and the online/offline $\lambda$-return algorithm on the standard mountain car task \citep{sutton:book98}. The state-space consists of the position and velocity of the car, scaled to numbers within the range [-1, 1]. The value function is approximated with a neural network that has the two state-variables as input, one output variable representing the state value, and a single hidden layer of 50 nodes in between. The backpropagation algorithm is used for obtaining the derivative of the value function with respect to the weights (in a similar way as done by Tesauro, 1994). The evaluation policy is a near-optimal policy. 
All rewards are drawn from a normal distribution with mean -1 and standard deviation 2. We fixed $\lambda = 0.9$ and set $\eta = 0.01$ and show the performance for different step-sizes. Our performance metric is the RMS error (over the state distribution induced by the policy) at the end of an episode, averaged over the first 50 episodes. The left graph of Figure \ref{fig:mountain car eval} shows the results. The results are averaged over 50 independent runs. TD($\lambda$) shows the same behaviour as in the one-state example (Figure \ref{fig:one-state example}): the error quickly diverges. Surprisingly, forward TD($\lambda$) outperforms the online $\lambda$-return algorithm. That delaying updates results in better performance in this case is probably related to the reason that the DQN algorithm uses a separate target network that is updated in a delayed way \citep{mnih:nature15}: most likely, it reduces instability. For our second experiment, we compared the performance of forward TD($\lambda$) with $\eta \in \{0.01, 0.1, 0.3 \}$ and no maximum $K$ value, for $\alpha = 0.015$ and different $\lambda$ values. In addition, we tested $\eta = 0.01$ with $K_{max} = 50$. The experimental settings are the same as in the first experiment, except that we average over 200 independent runs instead of 50. The right graph of Figure \ref{fig:mountain car eval} shows the results. This graph shows that the performance at the optimal $\lambda$ is hardly affected by $\eta$. Hence, in practice $\eta$ can simply be fixed to some small value. \begin{figure}[thb] \begin{minipage}[c]{0.5\textwidth} \hspace{1cm} \includegraphics[width=0.8\textwidth]{./MCar_eval_all.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=0.8\textwidth]{./forwardTD_eta2.pdf} \end{minipage} \vspace{-0.2cm} \caption{RMS error averaged over the first 50 episodes of the mountain car evaluation task, normalized by the initial RMS error. {\it Left: } RMS error for different methods at $\lambda = 0.9$. {\it Right: } RMS error of forward TD($\lambda$) for different values of $\eta$ at $\alpha = 0.015$.} \label{fig:mountain car eval} \end{figure} For our third and fourth experiments, we used control tasks. Here the goal is to improve the policy in order to maximize the return. To deal with these tasks, we used one neural network per action to represent the action-value function and used $\epsilon$-greedy action selection. Effectively, this changes TD($\lambda$) into Sarsa($\lambda$) and forward TD($\lambda$) into forward Sarsa($\lambda$). Our first control domain is the mountain car task, but now with deterministic rewards of -1. We compared the average return of Sarsa($\lambda$) and forward Sarsa($\lambda$) over the first 50 episodes for different $\lambda$. For each $\lambda$ and each method we optimized $\alpha$. We used $\eta = 0.01$ and $\epsilon = 0.05$. The left graph of Figure \ref{fig:control tasks} shows the results. Results are averaged over 200 independent runs. Forward Sarsa($\lambda$) outperforms Sarsa($\lambda$) for all $\lambda$ values, except for $\lambda = 1.0$. This can be explained by the fact that for $\lambda = 1$, all updates are delayed until the end of the episode for forward Sarsa($\lambda$), in contrast to the updates of Sarsa($\lambda$). Our second control domain is the cart-pole benchmark task, in which a pole has to be balanced upright on a cart for as long as possible \citep{barto:smc83}.
The state-space consists of the position and velocity of the cart, as well as the angle and angular velocity of the pole; there are only two actions: move left and move right. An episode ends when the angle of the pole deviates a certain number of degrees from its upright position or when the cart position exceeds certain bounds. We used $\epsilon$-greedy exploration with $\epsilon = 0.05$, and limited the episode length to 1000 steps. Again, $\eta = 0.01$. The networks we used for action-value estimation are the same as in the mountain car experiment (1 hidden layer consisting of 50 nodes), except that each network now has four input nodes, corresponding to scaled versions of the four state-space parameters. We compared the average return over the first 1000 episodes for different $\lambda$ with optimized $\alpha$. The right graph of Figure \ref{fig:control tasks} shows the results, averaged over 200 independent runs. In this domain, higher values of $\lambda$ actually reduce the performance of Sarsa($\lambda$). By contrast, the optimal performance of forward Sarsa($\lambda$) is obtained around $\lambda = 0.6$ and is substantially higher than the performance of Sarsa(0). Overall, these results convincingly show that forward Sarsa($\lambda$) outperforms Sarsa($\lambda$), as predicted by our analysis. \begin{figure}[thb] \begin{minipage}[c]{0.5\textwidth} \hspace{1cm} \includegraphics[width=0.8\textwidth]{./MountainCar_scan_500runs_nice.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=0.8\textwidth]{./CartPole_scan_200runs_nice.pdf} \end{minipage} \vspace{-0.2cm} \caption{Average return on two control tasks for different $\lambda$ and optimized $\alpha$ (and $\eta = 0.01$). {\it Left: } Mountain car task. {\it Right: } Cart-pole task. } \label{fig:control tasks} \end{figure} \section{Conclusions} We identified the reason why TD($\lambda$) often performs poorly on domains with non-linear function approximation. Deviations from the general TD update rule make TD($\lambda$) susceptible to divergence of value estimates and cause additional variance that reduces performance. While the $\lambda$-return algorithm implements the general update rule exactly, it is not a practical alternative, because its computation time per step, as well as its memory requirements, are much higher. To address this, we presented a new method, called forward TD($\lambda$), that exactly implements the general update rule (like the $\lambda$-return algorithm), but is also very efficient (like TD($\lambda$)). Specifically, its computation-time complexity is the same as that of TD(0). While forward TD($\lambda$) performs its updates with a delay, we have shown empirically that the performance increase due to exactly following the general update rule more than makes up for the performance decrease due to the update delays. In fact, one of our experiments suggests that the delay in updates could actually have a positive impact on the performance when non-linear function approximation is used. This surprising result is likely related to the same reason that DQN uses a separate target network that is updated in a delayed way, and is an interesting topic for future research. \section*{Acknowledgements} The author thanks Itamar Arel for discussions leading to the development of forward TD($\lambda$). This work was partly supported by grants from Alberta Innovates -- Technology Futures and the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} An important and still unresolved problem in fluid dynamics is the question of global regularity in three-dimensional Euclidean $R^{3}$ space for the Navier-Stokes equation for incompressible fluid: \[\frac{\partial \vec{u}}{\partial t} +(\vec{u}\cdot \nabla )\vec{u}=-\frac{\nabla p}{\rho } +\nu \Delta \vec{u}+\vec{f}\] with $\nabla \cdot \vec{u}=0$ at any position in space $\vec{x}\in R^{3} $ at any time $t\ge 0$. The fluid velocity, a physical quantity representing the rate of change of a fluid parcel's spatial position with elapsed time, could conceivably develop a singularity in finite time even under realistic boundary conditions. The literature refers to such a phenomenon as a "blow-up", since over some finite time the mathematical representation of the fluid velocity and its derivatives could reach values corresponding to physically unreasonable results. As per the Clay Mathematics Institute's official problem statement on existence and smoothness of the Navier-Stokes equation: \emph{"A fundamental problem in analysis is to decide whether such smooth, \textbf{physically reasonable} solutions exist for the Navier–Stokes equations"}. In addition, it is not known whether there exist a smooth, divergence-free vector field $\vec{u}^{0}(\vec{x})$ and a smooth $\vec{f}(\vec{x},t)$ for which there is no solution for the pressure $p(\vec{x}, t)$, given the velocity vector field $\vec{u}$, at any position in space $\vec{x}\in R^{3} $ at $t=0$. \newline In this paper we present a specific example of the fluid velocity vector field $\vec{u}^{0}(\vec{x})=\vec{u}(\vec{x})$ for fluid occupying all of $R^{3}$ space, as well as of the vector field $\vec{f}(\vec{x},t)$, for which we prove that the Navier-Stokes equation for incompressible fluid does not have a solution at any position in space $\vec{x} \in R^{3}$ at $t=0$. \newline We present a single theorem which encapsulates the following approach and results: \newline \textbf{Definition of vector fields:} The theorem statement includes the definition of the fluid velocity vector field as: \begin{wrapfigure}{r}{0.3\textwidth} \begin{center} \includegraphics[width=0.3\textwidth]{3NestedToriOrange3.jpg} \label{fig:databaseUserTable} \end{center} \end{wrapfigure} \[u_{i} (\vec{x})=2\frac{x_{h(i-1)} -x_{h(i+1)} }{\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2} } \] for $i=\{ 1,2,3\} $ where $h(l)$ is defined as: \[h(l)=\left\{\begin{array}{ccc} {l} & {;1\le l\le 3} & {} \\ {1} & {;l=4} & {} \\ {3} & {;l=0} & {} \end{array}\right. \] for any $\vec{x}\in R^{3}$. \newline Also, we define the external force related vector field as: \[\vec{f}(\vec{x},t)=(0,0,\frac{1}{1+t^{2} (\sum _{j=1}^{3}x_{j} )^{2}})\] for any $\vec{x}\in R^{3}$ and $t\ge 0$. \newline \textbf{Analysis of the $\vec{u}(\vec{x})$ vector field:} Starting from the definition of $\vec{u}(\vec{x})$, we analyze the fluid velocity vector field: \begin{itemize} \item Proving that the velocity vector field $\vec{u}(\vec{x})$ is divergence free, $\nabla\cdot\vec{u}=0$. \item Integrating the square of the fluid speed $\left|\vec{u}\right|^{2}$ over the whole $R^{3}$ space in order to verify that the energy is bounded and finite. The result of the integration is $\int _{R^{3} }\left|\vec{u}\right|^{2} dx=\pi ^{2}$, which is constant. \item Proving that the velocity vector field $\vec{u}(\vec{x})$ is continuously differentiable, $\vec{u}(\vec{x})\in C^{\infty }$. \item Deriving the general form of the partial derivative $\frac{\partial^{\alpha} \vec{u}}{\partial x_{k}^{\alpha}}$ of any order $\alpha$, $k=\{1,2,3\}$.
\item Proving that the partial derivative $\frac{\partial^{\alpha} \vec{u}}{\partial x_{k}^{\alpha}}$ of any order $\alpha$, $k=\{1,2,3\}$, is the zero vector at the coordinate origin $\vec{x}=\vec{0}$ and that it converges to the zero vector when $\left|\vec{x}\right| \to \infty$. \item Proving that the partial derivative $\frac{\partial^{\alpha} \vec{u}}{\partial x_{k}^{\alpha}}$ of any order $\alpha$, $k=\{1,2,3\}$, must be finite, $\left|\frac{\partial^{\alpha} \vec{u}}{\partial x_{k}^{\alpha}}\right|\leq C_{\alpha}$ for some $C_{\alpha} \in R$, for any position in space $\vec{x}\in R^{3}$. \end{itemize} \begin{wrapfigure}{r}{0.3\textwidth} \begin{center} \includegraphics[width=0.3\textwidth]{vftori.jpg} \label{fig:databaseUserTable2} \end{center} \end{wrapfigure} \textbf{Analysis of the $\vec{f}(\vec{x},t)$ vector field:} We perform an analysis of the external force vector field $\vec{f}(\vec{x},t)$: \begin{itemize} \item Proving that the vector field $\vec{f}(\vec{x},t)$ is continuously differentiable, $\vec{f}(\vec{x})\in C^{\infty }$. \item Deriving $ \deg _{x} (\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} }\vec{f})$ for any order $\alpha$, $m$. \item Deriving $ \deg _{t} (\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} }\vec{f})$ for any order $\alpha$, $m$. \item Proving that the partial derivative $\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} }\vec{f}$ of any order $\alpha$, $m$ is the zero vector at the coordinate origin $\vec{x}=\vec{0}$ and that it converges to the zero vector when $\left|\vec{x}\right| \to \infty$. \end{itemize} \textbf{Application of $\vec{u}(\vec{x})$ and $\vec{f}(\vec{x},t)$ in the Navier-Stokes equation and solving for the pressure $p(\vec{x}, t)$:} We apply the velocity vector field $\vec{u}(\vec{x})$ and $\vec{f}(\vec{x},t)$ in the Navier-Stokes equation for incompressible fluid, obtaining the following results: \begin{itemize} \item We apply $\vec{u}(\vec{x})$ and $\vec{f}(\vec{x},t)$ to each applicable term of the Navier-Stokes equation, and we re-arrange the equation so that $\frac{\nabla p}{\rho}$ is the only term on one side of the resulting equation. \item As $\nabla p=(\frac{\partial}{\partial x_{1}},\frac{\partial}{\partial x_{2}},\frac{\partial}{\partial x_{3}})p$, we integrate the resulting vector field components with respect to $x_{1}$, $x_{2}$, $x_{3}$, expecting that the resulting pressure $p(\vec{x}, t)$ must be the same for all three integrations performed. \item Once the three integrations are performed for $x_{1}$, $x_{2}$, $x_{3}$, we obtain three mutually different results for the pressure $p(\vec{x}, t)$ for all positions in space $\vec{x}\in R^{3}$ and for $t\ge 0$. In addition, one of the resulting pressure solutions includes the term $\frac{ArcTan(t (x_{1}+x_{2}+x_{3}))}{t}$, which at $t=0$ evaluates to $\frac{0}{0}$, which is indeterminate for all positions in space $\vec{x}\in R^{3}$ at $t=0$. \end{itemize} \textbf{Conclusion:} Based on the analysis performed and the obtained results, we conclude that for the fluid velocity $\vec{u}(\vec{x})$ and $\vec{f}(\vec{x},t)$ vector fields as specified, the Navier-Stokes equation for incompressible fluid does not have a solution for all positions in space $\vec{x}\in R^{3}$ at $t=0$.
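Although the proof below is purely analytic, the basic properties of the chosen velocity field claimed above can be spot-checked symbolically. The following Python/SymPy sketch is an illustrative verification only (not part of the proof): it confirms that $\nabla\cdot\vec{u}=0$, reproduces the quadratic form appearing in $\left|\vec{u}\right|^{2}$, and recovers the value $\pi^{2}$ for the energy integral via its radial reduction (the odd cross terms integrate to zero over $R^{3}$):
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.symbols('r', positive=True)
S = 1 + x1**2 + x2**2 + x3**2

# Velocity field from the theorem: u_i = 2*(x_{h(i-1)} - x_{h(i+1)}) / S**2
u = [2*(x3 - x2)/S**2, 2*(x1 - x3)/S**2, 2*(x2 - x1)/S**2]

# Divergence simplifies to zero.
div_u = sp.simplify(sp.diff(u[0], x1) + sp.diff(u[1], x2) + sp.diff(u[2], x3))
print(div_u)                                            # -> 0

# |u|^2 * S**4 / 8 reduces to x1^2 + x2^2 + x3^2 - x1*x2 - x2*x3 - x3*x1.
speed2 = sum(c**2 for c in u)
print(sp.expand(sp.cancel(speed2 * S**4 / 8)))

# Energy: the odd cross terms integrate to zero, so
# Integral_{R^3} |u|^2 dx = 32*pi * Integral_0^oo r^4/(1+r^2)^4 dr = pi**2.
radial = sp.integrate(r**4/(1 + r**2)**4, (r, 0, sp.oo))
print(sp.simplify(32*sp.pi*radial))                     # -> pi**2
\end{verbatim}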
\begin{thrm}\label{T0} \noindent Let $\vec{u}(\vec{x})=(u_{i} (\vec{x}))_{1\le i\le 3} \in R^{3}$ be a divergence-free ($\nabla \cdot \vec{u}=0$) velocity vector field for an incompressible fluid $\rho =const$ occupying all of $R^{3}$ space, where $p(\vec{x},t)\in R$ represents the fluid pressure and $\vec{f}(\vec{x},t)$ represents the vector field related to the external force applied to the fluid, for any position $\vec{x}\in R^{3} $ in space at $t\ge 0$. \newline Then the Navier-Stokes equation for incompressible fluid \[\frac{\partial \vec{u}}{\partial t} +(\vec{u}\cdot \nabla )\vec{u}=-\frac{\nabla p}{\rho } +\nu \Delta \vec{u}+\vec{f}\] \textbf{does not have a solution} for all positions in space $\vec{x}\in R^{3} $ at $t=0$, for the vector fields $\vec{u}(\vec{x})$ and $\vec{f}(\vec{x},t)$ defined as: \[u_{i} (\vec{x})=2\frac{x_{h(i-1)} -x_{h(i+1)} }{\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2}}\quad ;i=\{1,2,3\}\] where $h(l)$ is defined as \[h(l)=\left\{\begin{array}{ccc} {l} & {;1\le l\le 3} & {} \\ {1} & {;l=4} & {} \\ {3} & {;l=0} & {} \end{array}\right.\] \noindent and the vector field $\vec{f}(\vec{x},t)$ defined as \[\vec{f}(\vec{x},t)=(0,0,\frac{1}{(1+t^{2} (\sum _{j=1}^{3}x_{j} )^{2} )} )\] \end{thrm} \begin{proof} In order to simplify the manipulation of equations, let us define \begin{align} d_{i} =x_{h(i-1)} -x_{h(i+1)} \label{eq1.1}\end{align} for $i=\{1,2,3\}$ and \begin{align} S=1+\sum _{j=1}^{3}x_{j}^{2} =1+\left|\vec{x}\right|^{2} \label{eq1.2}\end{align} for any $\vec{x}\in R^{3}$. Once (\ref{eq1.1}) and (\ref{eq1.2}) are applied to the expression for the fluid velocity vector field $u_{i}(\vec{x})$ as defined in the theorem statement: \begin{align} u_{i} (\vec{x})=2\frac{x_{h(i-1)} -x_{h(i+1)} }{\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2} } \label{eq1.3}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$, the vector field components can be represented as: \begin{align} u_{i} (\vec{x})=2\frac{d_{i} }{S^{2} } \label{eq1.4}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$. \newline\newline \textbf{Validation of $\nabla \cdot \vec{u}(\vec{x})=0$ } \newline Let us verify that the velocity vector field $\vec{u}(\vec{x})$ is divergence-free. The divergence of the velocity vector field can be expressed as \begin{align} \nabla \cdot \vec{u}=\sum _{i=1}^{3}\frac{\partial u_{i} }{\partial x_{i} } \label{eq1.5}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$. Applying (\ref{eq1.4}) in (\ref{eq1.5}): \begin{align} \nabla \cdot \vec{u}=\sum _{i=1}^{3}\frac{\partial }{\partial x_{i} } \left(2\frac{d_{i} }{S^{2} } \right) \label{eq1.6}\end{align} \begin{align} \nabla \cdot \vec{u}=\sum _{i=1}^{3}\frac{2}{S^{4} } \left(\frac{\partial d_{i} }{\partial x_{i} } S^{2} -d_{i} 2S\frac{\partial S}{\partial x_{i} } \right) \label{eq1.7}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$. Let us analyze $\frac{\partial d_{i} }{\partial x_{i} }$ within the brackets of statement (\ref{eq1.7}): \begin{align} \frac{\partial d_{i} }{\partial x_{i} } =\frac{\partial }{\partial x_{i} } (x_{h(i-1)} -x_{h(i+1)} ) \label{eq1.8}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$. Since the indices $h(i-1)\ne i$ and $h(i+1)\ne i$ appearing in the partial differentiation in statement (\ref{eq1.8}) always differ from the index $i$, both partial derivatives must be zero: \begin{align} \frac{\partial x_{h(i-1)} }{\partial x_{i} } =\frac{\partial x_{h(i+1)} }{\partial x_{i} } =0 \label{eq1.9}\end{align} for $i=\{1,2,3\}$.
Applying (\ref{eq1.9}) in (\ref{eq1.8}): \begin{align} \frac{\partial d_{i} }{\partial x_{i} } =\frac{\partial }{\partial x_{i} } (x_{h(i-1)} -x_{h(i+1)} )=0 \label{eq1.10}\end{align} for $i=\{1,2,3\}$. Applying (\ref{eq1.10}) in (\ref{eq1.7}): \begin{align} \nabla \cdot \vec{u}=-\sum _{i=1}^{3}\frac{4d_{i} }{S^{3} } \frac{\partial S}{\partial x_{i} } \label{eq1.12}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$. As per the definition of $S$ in statement (\ref{eq1.2}), let us derive $\frac{\partial S}{\partial x_{i}}$: \begin{align} \frac{\partial S}{\partial x_{i} } =\frac{\partial }{\partial x_{i} } \left(1+\sum _{j=1}^{3}x_{j}^{2} \right)=\frac{\partial }{\partial x_{i} } \left(\sum _{j=1}^{3}x_{j}^{2} \right)=\sum _{j=1}^{3}\frac{\partial x_{j}^{2} }{\partial x_{i} } =2x_{i} \label{eq1.13}\end{align} for $i=\{1,2,3\}$. Applying (\ref{eq1.13}) in (\ref{eq1.12}): \begin{align} \nabla \cdot \vec{u}=-\frac{8}{S^{3} } \sum _{i=1}^{3}d_{i} x_{i} \label{eq1.15}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$. Applying (\ref{eq1.1}) in (\ref{eq1.15}) and expanding: \[\nabla \cdot \vec{u}=-\frac{8}{S^{3} } ((x_{3} -x_{2} )x_{1} +(x_{1} -x_{3} )x_{2} +(x_{2} -x_{1} )x_{3} )\] \[\nabla \cdot \vec{u}=-\frac{8}{S^{3} } (x_{3} x_{1} -x_{2} x_{1} +x_{1} x_{2} -x_{3} x_{2} +x_{2} x_{3} -x_{1} x_{3} )=0\] \begin{align} \nabla \cdot \vec{u}=0 \label{eq1.19}\end{align} for any $\vec{x}\in R^{3}$, which proves that $\vec{u}(\vec{x})$ is divergence-free for any $\vec{x}\in R^{3}$. \newline\newline \textbf{Validation of $\vec{u}(\vec{x})$ continuity, convergence to zero at infinity, zero at coordinate origin} \newline Based on the definition of the fluid velocity vector field $\vec{u}(\vec{x})$, as per the theorem statement, we can conclude that the vector field $\vec{u}(\vec{x})$ does not have a singularity at any position in space $\vec{x}\in R^{3}$. The denominator of each of the velocity vector field components $\left\{u_{i} \right\}_{1\le i\le 3}$, \begin{align} \left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2} \ge 1 \label{eq1.19.1}\end{align} for $i=\{1,2,3\}$, is positive definite for any position $\vec{x}\in R^{3}$. The vector field $\vec{u}(\vec{x})$ converges to the zero vector at infinity, since: \[\deg_{x} ( \vec{u}(\vec{x}))=\deg_{x}(\frac{ x_{h(i-1)} -x_{h(i+1)} }{\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2}}) =1-4=-3\] \begin{align} \lim _{\left|\vec{x}\right|\to \infty } \vec{u}(\vec{x})=\vec{0} \label{eq1.29}\end{align} Also, at the coordinate origin the velocity vector field is the zero vector: \begin{align} \vec{u}(\vec{0})=\vec{0} \label{eq1.30}\end{align} Based on statements (\ref{eq1.29}) and (\ref{eq1.30}), we can conclude that the fluid velocity vector field is the zero vector at the coordinate origin, and converges to the zero vector when the position converges to infinity. \newline\newline \textbf{Validation of bounded energy for $\vec{u}(\vec{x})$ } \newline Let us evaluate whether the fluid velocity vector field $\vec{u}(\vec{x})$ has bounded energy by integrating the square of its magnitude $\left|\vec{u}(\vec{x})\right|^{2} $ across all of $R^{3}$ space. The square of the magnitude of $\vec{u}$ can be represented as: \begin{align} \left|\vec{u}(\vec{x})\right|^{2} =\sum _{i=1}^{3}u_{i}^{2} \label{eq1.40}\end{align} for any $\vec{x}\in R^{3};i=\{1,2,3\}$.
Once the vector field components $u_{i}$, as defined in the theorem statement, are applied to statement (\ref{eq1.40}): \[\left|\vec{u}(\vec{x})\right|^{2} =\left(2\frac{(x_{h(1-1)} -x_{h(1+1)} )}{(1+\sum _{j=1}^{3}x_{j}^{2} )^{2} } \right)^{2} +\left(2\frac{(x_{h(2-1)} -x_{h(2+1)} )}{(1+\sum _{j=1}^{3}x_{j}^{2} )^{2} } \right)^{2} +\left(2\frac{(x_{h(3-1)} -x_{h(3+1)} )}{(1+\sum _{j=1}^{3}x_{j}^{2} )^{2} } \right)^{2} \] \[\left|\vec{u}(\vec{x})\right|^{2} =4\frac{(x_{h(0)} -x_{h(2)} )^{2} }{(1+\sum _{j=1}^{3}x_{j}^{2} )^{4} } +4\frac{(x_{h(1)} -x_{h(3)} )^{2} }{(1+\sum _{j=1}^{3}x_{j}^{2} )^{4} } +4\frac{(x_{h(2)} -x_{h(4)} )^{2} }{(1+\sum _{j=1}^{3}x_{j}^{2} )^{4} } \] \[\left|\vec{u}(\vec{x})\right|^{2} =4\frac{(x_{3} -x_{2} )^{2} +(x_{1} -x_{3} )^{2} +(x_{2} -x_{1} )^{2} }{(1+\sum _{j=1}^{3}x_{j}^{2} )^{4} } \] \begin{align} \left|\vec{u}(\vec{x})\right|^{2} =8\frac{x_{1}^{2} -x_{1} x_{2} +x_{2}^{2} -x_{2} x_{3} +x_{3}^{2} -x_{3} x_{1} }{(1+\sum _{j=1}^{3}x_{j}^{2} )^{4} } \label{eq1.41.1}\end{align} for any $\vec{x}\in R^{3}$. Let us integrate $\left|\vec{u}(\vec{x})\right|^{2} $ across all of $R^{3}$ space: \begin{align} \int _{R^{3} }\left|\vec{u}(\vec{x})\right|^{2} dx =\int _{R^{3} }8\frac{x_{1}^{2} -x_{1} x_{2} +x_{2}^{2} -x_{2} x_{3} +x_{3}^{2} -x_{3} x_{1} }{(1+\sum _{j=1}^{3}x_{j}^{2} )^{4} } dx \label{eq1.42}\end{align} Once the integration in statement (\ref{eq1.42}) is performed, the result is a constant: \begin{align} \int _{R^{3} }\left|\vec{u}\right|^{2} dx =\pi ^{2} \label{eq1.43}\end{align} As per (\ref{eq1.43}), the result of the integration is the constant $\pi ^{2}$. Based on this, we can conclude that the total kinetic energy of the fluid is finite and bounded. \newline\newline \textbf{Validation that vector field $\vec{u}(\vec{x})$ is continuously differentiable} \newline In this section, we confirm that the fluid velocity vector field $\vec{u}(\vec{x})$ is continuously differentiable. Let us apply the partial derivative $\frac{\partial}{\partial x_{k}}$ to the fluid velocity vector field components $u_{i}$: \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =\lim _{dx_{k} \to 0} \frac{u_{i} (\vec{x}+dx_{k} )-u_{i} (\vec{x})}{dx_{k} } \label{eq1.43.1}\end{align} for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Let us introduce the function $g(j,k) \in \{0,1\}$, which has value 1 when the index $j\in\{1,2,3\}$ is equal to the index $k\in\{1,2,3\}$ of $x_{k}$, and 0 otherwise: \begin{align} g(j,k)=\left\{\begin{array}{cc} {0} & {;j\ne k} \\ {1} & {;j=k} \end{array}\right. \label{eq1.43.3}\end{align} for $k\in\{1,2,3\}$; $j\in\{1,2,3\}$. Once the vector field components $u_{i}$, as defined in the theorem statement, are applied to statement (\ref{eq1.43.1}) together with (\ref{eq1.43.3}), statement (\ref{eq1.43.1}) can be equivalently represented as: \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =\lim _{dx_{k} \to 0} \frac{2\frac{(x_{h(i-1)} +g(h(i-1),k)dx_{k} )-(x_{h(i+1)} +g(h(i+1),k)dx_{k} )}{\left(1+\sum _{j=1}^{3}(x_{j} +g(j,k)dx_{k} )^{2} \right)^{2} } -2\frac{x_{h(i-1)} -x_{h(i+1)} }{\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2} } }{dx_{k} } \label{eq1.49.5}\end{align} for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Consider the terms in the brackets of statement (\ref{eq1.49.5}): \[x_{h(i-1)} +g(h(i-1),k)dx_{k} \] for $i=\{1,2,3\}$, $k=\{1,2,3\}$.
The function $g(h(i-1),k)$ has value 1 when $h(i-1)=k$, so in that scenario the term becomes: \[x_{h(i-1)} +dx_{k} \] for $i=\{1,2,3\}$, $k=\{1,2,3\}$. Otherwise, when $h(i-1)\ne k$, the function $g(j,k)$ returns zero, and the result is \[x_{h(i-1)} \] for $i=\{1,2,3\}$. Once we apply the same principle to the other pair of terms in the brackets of statement (\ref{eq1.49.5}): \[(x_{h(i+1)} +g(h(i+1),k)dx_{k} )\] for $i=\{1,2,3\}$, $k=\{1,2,3\}$, as well as to the denominator: \begin{align} \left(1+\sum _{j=1}^{3}(x_{j} +g(j,k)dx_{k} )^{2} \right)^{2} \label{eq1.50}\end{align} for $k=\{1,2,3\}$, we can conclude that the resulting statement (\ref{eq1.49.5}) includes $dx_{k}$ exactly for the matching $x_{j}$ with $j=k$, as required to form a correct statement for the partial differentiation of $u_{i} (\vec{x})$ by $x_{k}$. Therefore, in statement (\ref{eq1.50}) the increment $dx_{k}$ appears only when $j=k$, for which $g(j,k)=1$. With that, we expand and rearrange statement (\ref{eq1.50}), which can be equivalently represented as: \begin{align} \left(2x_{k} dx_{k} +dx_{k}^{2} +1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2} \label{eq1.50.1}\end{align} for $k=\{1,2,3\}$. Applying (\ref{eq1.2}) in (\ref{eq1.50.1}): \begin{align} \left(S+(2x_{k} dx_{k} +dx_{k}^{2} )\right)^{2} \label{eq1.52}\end{align} for $k=\{1,2,3\}$. Let us expand (\ref{eq1.52}): \begin{align} S^{2} +dx_{k} (4Sx_{k} +dx_{k} (2S+4x_{k}^{2} +4x_{k} dx_{k} +dx_{k}^{2} )) \label{eq1.53}\end{align} Applying (\ref{eq1.53}) in (\ref{eq1.49.5}): \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =\lim _{dx_{k} \to 0} \frac{1}{dx_{k} } 2\left(\frac{(x_{h(i-1)} +g(h(i-1),k)dx_{k} )-(x_{h(i+1)} +g(h(i+1),k)dx_{k} )}{S^{2} +dx_{k} (4Sx_{k} +dx_{k} (2S+4x_{k}^{2} +4x_{k} dx_{k} +dx_{k}^{2} ))} -\frac{x_{h(i-1)} -x_{h(i+1)} }{S^{2} } \right) \label{eq1.53.1}\end{align} for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. After putting the terms over a common denominator and simplifying statement (\ref{eq1.53.1}): \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =\lim _{dx_{k} \to 0} \frac{2((g(h(i-1),k)-g(h(i+1),k)))S^{2}}{(S^{2} +dx_{k} (4Sx_{k} +dx_{k} (2S+4x_{k}^{2} +4x_{k} dx_{k} +dx_{k}^{2} )))S^{2} }- \label{eq1.53.2}\end{align} \[ -\frac{(4Sx_{k} +dx_{k} (2S+4x_{k}^{2} +4x_{k} dx_{k} +dx_{k}^{2} ))(x_{h(i-1)} -x_{h(i+1)} )}{(S^{2} +dx_{k} (4Sx_{k} +dx_{k} (2S+4x_{k}^{2} +4x_{k} dx_{k} +dx_{k}^{2} )))S^{2} }\] for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Once the limit $dx_{k} \to 0$ is applied: \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =\frac{2((g(h(i-1),k)-g(h(i+1),k)))S^{2} -8Sx_{k} (x_{h(i-1)} -x_{h(i+1)} )}{S^{4} } \label{eq1.53.3}\end{align} for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Depending on the indices $i$ and $k$, there are three scenarios: \newline 1) For $k=i$, $g(h(i-1),k)=0$ and $g(h(i+1),k)=0$. Applying this in (\ref{eq1.53.3}): \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =-\frac{8Sx_{k} (x_{h(i-1)} -x_{h(i+1)} )}{S^{4} } \label{eq1.54}\end{align} for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. \newline 2) For $k=h(i+1)$, $g(h(i-1),k)=0$ and $g(h(i+1),k)=1$. Applying this in (\ref{eq1.53.3}): \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =-\frac{2S+8x_{k} (x_{h(i-1)} -x_{h(i+1)} )}{S^{3} } \label{eq1.55}\end{align} 3) For $k=h(i-1)$, $g(h(i-1),k)=1$ and $g(h(i+1),k)=0$.
Applying this in (\ref{eq1.53.3}): \begin{align} \frac{\partial u_{i} (\vec{x})}{\partial x_{k} } =\frac{2S-8x_{k} (x_{h(i-1)} -x_{h(i+1)} )}{S^{3} } \label{eq1.56}\end{align} for any $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. In all three cases above, (\ref{eq1.54}), (\ref{eq1.55}) and (\ref{eq1.56}), the denominators are of the general form $S^{n} =\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{n} $ for $n \in \{3,4\}$, which is positive definite and must be greater than or equal to 1: \begin{align} \left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{n} \ge 1 \label{eq1.56.1}\end{align} for $n \in \{3,4\}$. Statement (\ref{eq1.56.1}) has its minimal value at the coordinate origin: \begin{align} \left. \left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{n} \right|_{\vec{x}=\vec{0}} =1 \label{eq1.56.2}\end{align} Based on this, we can conclude that the first partial derivatives of the velocity vector field $\vec{u}(\vec{x})$ exist and are continuous. \noindent Since the denominator of the first partial derivative $\frac{\partial u_{i} (\vec{x})}{\partial x_{k}}$ is of the form $S^{n} =\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{n} $ for $n \in \{3,4\}$, which has the same general form as the denominator of the velocity vector field itself, $S^{2} =\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{2}$, repeating the same process of partial differentiation as above shows that for repeated partial derivatives $\frac{\partial ^{\alpha } u_{i} (\vec{x})}{\partial x_{k}^{\alpha } } $ the resulting denominators must have the same general form $\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{n} \ge 1$ ; $\{n>2\} \in N$, which is positive definite and cannot create singularities. \noindent In addition, partial differentiation can be repeated an unlimited number of times. We can therefore conclude that the velocity vector field $\vec{u}(\vec{x})$ is continuously differentiable, $\vec{u}(\vec{x})\in C^{\infty } $, for any position $\vec{x}\in R^{3} $. \newline\newline \textbf{Derivation of the partial derivative $\frac{\partial^{\alpha} u_{i}}{\partial x_{k}^{\alpha}}$ of any order $\alpha$} \newline Let us explore the partial derivatives of the velocity vector field $\vec{u}(\vec{x})=(u_{i} (\vec{x}))_{1\le i\le 3} \in R^{3} $ for any order $\alpha$ of partial differentiation by $x_{k} $ ; $k\in \{ 1,2,3\} $. \newline First partial derivative: \begin{align} \frac{\partial u_{i} }{\partial x_{k} } =\frac{\partial }{\partial x_{k} } \left(2\frac{d_{i} }{S^{2} } \right) \label{eq1.58.1}\end{align} \begin{align} \frac{\partial u_{i} }{\partial x_{k} } =\frac{2}{S^{4} } \left(\frac{\partial d_{i} }{\partial x_{k} } S^{2} -d_{i} 2S\frac{\partial S}{\partial x_{k} } \right) \label{eq1.59}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Per (\ref{eq1.1}): \begin{align} \frac{\partial d_{i} }{\partial x_{k} } =\frac{\partial }{\partial x_{k} } x_{h(i-1)} -\frac{\partial }{\partial x_{k} } x_{h(i+1)} =\left\{\begin{array}{ccc} {1} & {\qquad ;k=h(i-1)} & {} \\ {-1} & {\qquad ;k=h(i+1)} & {} \\ {0} & {;k=i} & {} \end{array}\right. \label{eq1.60}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. In the most general case, for the non-zero result when $k\ne i$, as per (\ref{eq1.60}), \begin{align} \frac{\partial d_{i} }{\partial x_{k} } =\pm 1 \label{eq1.60.1}\end{align} Once (\ref{eq1.60.1}) and (\ref{eq1.13}) are applied in (\ref{eq1.59}): \begin{align} \frac{\partial u_{i} }{\partial x_{k} } =\pm \frac{2}{S^{2} } -8\frac{d_{i} x_{k} }{S^{3} } \label{eq1.60.2}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$.
Once (\ref{eq1.2}) is applied in (\ref{eq1.60.2}): \begin{align} \frac{\partial u_{i} }{\partial x_{k} } =\pm \frac{2}{\left(1+\left|\vec{x}\right|^{2} \right)^{2} } -8\frac{d_{i} x_{k} }{\left(1+\left|\vec{x}\right|^{2} \right)^{3} } \label{eq1.61.1}\end{align} Let us determine the degree for $x$ of the first partial derivative: \begin{align} \deg _{x} (\frac{\partial u_{i} }{\partial x_{k} } )=\deg (\pm \frac{2}{\left(1+\left|\vec{x}\right|^{2} \right)^{2} } -8\frac{d_{i} x_{k} }{\left(1+\left|\vec{x}\right|^{2} \right)^{3} } ) \label{eq1.62}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. \begin{align} \deg _{x} (\frac{\partial u_{i} }{\partial x_{k} } )=-4 \label{eq1.62.1}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Note that both terms in the brackets of (\ref{eq1.62}) are of the same degree. \newline Second partial derivative: Continuing the partial differentiation from (\ref{eq1.60.2}): \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } =\frac{\partial }{\partial x_{k} } \left(\pm \frac{2}{S^{2} } -8\frac{d_{i} x_{k} }{S^{3} } \right) \label{eq1.62.2}\end{align} \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } =\pm \frac{\partial }{\partial x_{k} } \left(\frac{2}{S^{2} } \right)-\frac{\partial }{\partial x_{k} } \left(8\frac{d_{i} x_{k} }{S^{3} } \right) \label{eq1.62.3}\end{align} \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } =\mp \frac{4}{S^{3} } \frac{\partial S}{\partial x_{k} } -8\frac{\partial d_{i} }{\partial x_{k} } \frac{x_{k} }{S^{3} } -8d_{i} \frac{\partial }{\partial x_{k} } \left(\frac{x_{k} }{S^{3} } \right) \label{eq1.62.4}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. In the most general case, for a non-zero result when $k\ne i$, as per (\ref{eq1.60}), \begin{align} \frac{\partial d_{i} }{\partial x_{k} } =\pm 1 \label{eq1.62.5}\end{align} Applying (\ref{eq1.62.5}) in (\ref{eq1.62.4}): \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } =\mp 8\frac{x_{k} }{S^{3} } \mp 8\frac{x_{k} }{S^{3} } -8d_{i} \frac{1}{S^{6} } \left(\frac{\partial x_{k} }{\partial x_{k} } S^{3} -x_{k} 3S^{2} \frac{\partial S}{\partial x_{k} } \right) \label{eq1.63}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Once (\ref{eq1.13}) and (\ref{eq1.2}) are applied in (\ref{eq1.63}): \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } =\frac{\mp 16x_{k} -8d_{i} }{\left(1+\left|\vec{x}\right|^{2} \right)^{3} } +48\frac{d_{i} x_{k}^{2} }{\left(1+\left|\vec{x}\right|^{2} \right)^{4} } \label{eq1.64}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. The degree for $x$ of the second partial derivative, as per statement (\ref{eq1.64}), is: \begin{align} \deg _{x} (\frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } )=\deg (\frac{\mp 16x_{k} -8d_{i} }{\left(1+\left|\vec{x}\right|^{2} \right)^{3} } +48\frac{d_{i} x_{k}^{2} }{\left(1+\left|\vec{x}\right|^{2} \right)^{4} } ) \label{eq1.65}\end{align} \begin{align} \deg _{x} (\frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} } )=-5 \label{eq1.65.1}\end{align} Note that both terms in the brackets of statement (\ref{eq1.64}) are of the same degree for $x$.
In case the first partial derivative is by $x_{k}$ and the second partial derivative is by $x_{j} $ with $j\ne k$, starting from statement (\ref{eq1.60.2}): \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } =\frac{\partial }{\partial x_{j} } \left(\pm \frac{2}{S^{2} } -8\frac{d_{i} x_{k} }{S^{3} } \right) \label{eq1.65.2}\end{align} \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } =\pm \frac{\partial }{\partial x_{j} } \left(\frac{2}{S^{2} } \right)-8\frac{\partial }{\partial x_{j} } \left(\frac{d_{i} x_{k} }{S^{3} } \right) \label{eq1.65.3}\end{align} \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } =\mp \frac{4}{S^{3} } \frac{\partial S}{\partial x_{j} } -\frac{8}{S^{6} } \left(\frac{\partial (d_{i} x_{k} )}{\partial x_{j} } S^{3} -d_{i} x_{k} 3S^{2} \frac{\partial S}{\partial x_{j} } \right) \label{eq1.66}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$, $j\ne k$. As per (\ref{eq1.1}): \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =\frac{\partial }{\partial x_{j} } \left((x_{h(i-1)} -x_{h(i+1)} )x_{k} \right)=\frac{\partial }{\partial x_{j} } \left(x_{h(i-1)} x_{k} -x_{h(i+1)} x_{k} \right) \label{eq1.66.1}\end{align} There are three scenarios to consider: Scenario 1: $j=h(i-1)$: \newline \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =\frac{\partial }{\partial x_{j} } \left(x_{j} x_{k} -x_{h(i+1)} x_{k} \right) \label{eq1.66.2}\end{align} \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =x_{k} \label{eq1.66.3}\end{align} Scenario 2: $j=h(i+1)$: \newline \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =\frac{\partial }{\partial x_{j} } \left(x_{h(i-1)} x_{k} -x_{j} x_{k} \right) \label{eq1.66.4}\end{align} \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =-x_{k} \label{eq1.66.5}\end{align} Scenario 3: $j\ne h(i\pm 1)\wedge j\ne k$: \newline \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =\frac{\partial }{\partial x_{j} } \left(x_{h(i-1)} x_{k} -x_{h(i+1)} x_{k} \right) \label{eq1.66.6}\end{align} \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =0 \label{eq1.66.7}\end{align} Based on the three scenarios (\ref{eq1.66.3}), (\ref{eq1.66.5}) and (\ref{eq1.66.7}), statement (\ref{eq1.66}) has the largest degree for $x$ when \begin{align} j=h(i\pm 1)\vee j=k \label{eq1.66.8}\end{align} as otherwise the result of differentiation by $x_{j} $ is zero. As the scenario $j=k$ is already covered by $\frac{\partial ^{2} u_{i} }{\partial x_{k}^{2} }$ in statement (\ref{eq1.65}), let us select the scenario $j=h(i\pm 1)$, for which, as per (\ref{eq1.66.3}) and (\ref{eq1.66.5}), \begin{align} \frac{\partial (d_{i} x_{k} )}{\partial x_{j} } =\pm x_{k} \label{eq1.67}\end{align} Once (\ref{eq1.67}) and (\ref{eq1.13}) are applied in statement (\ref{eq1.66}): \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } =\mp \frac{4}{S^{3} } 2x_{j} -\frac{8}{S^{6} } \left(\pm x_{k} S^{3} -3d_{i} x_{k} S^{2} 2x_{j} \right) \label{eq1.67.1}\end{align} \begin{align} \frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } =\mp \frac{8}{S^{3} } \left(x_{j} +x_{k} \right)+\frac{48}{S^{4} } d_{i} x_{k} x_{j} \label{eq1.68}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$.
The degree for $x$ as per (\ref{eq1.68}) is \begin{align} \deg _{x} (\frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } )=\deg _{x} (\mp \frac{8}{S^{3} } \left(x_{j} +x_{k} \right)+\frac{48}{S^{4} } d_{i} x_{k} x_{j} ) \label{eq1.68.1}\end{align} As per (\ref{eq1.2}) and (\ref{eq1.68.1}): \begin{align} \deg _{x} (\frac{\partial ^{2} u_{i} }{\partial x_{j} \partial x_{k} } )=-5 \label{eq1.68.2}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. Let us represent partial derivatives by $x$ in a more generic fashion: \begin{align} \frac{\partial ^{\alpha } u_{i} }{\partial x^{\alpha } } \label{eq1.68.3}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$, where $\alpha$ represents the order of the partial derivative, taken with respect to an $\alpha$-tuple of the variables $x_{i} $ ; $i=\{ 1,2,3\}$. Also, instead of the specific numerical values that appear with the various terms, let us define $C \in R$ as a generic constant, standing for numerical values which do not contribute to this analysis. Based on this, let us express the first partial derivative of the velocity vector field components $u_{i}$; $i=\{ 1,2,3\}$, as per statement (\ref{eq1.61.1}), in a more general form: \begin{align} \frac{\partial u_{i} }{\partial x} =\frac{C}{\left(1+\left|\vec{x}\right|^{2} \right)^{2} } +\frac{C }{\left(1+\left|\vec{x}\right|^{2} \right)^{3} } x^{2} \label{eq1.69}\end{align} The second partial derivative, as per (\ref{eq1.64}) and (\ref{eq1.68}), can be represented as \begin{align} \frac{\partial ^{2} u_{i} }{\partial x^{2} } =\frac{C }{\left(1+\left|\vec{x}\right|^{2} \right)^{3} } x+\frac{C }{\left(1+\left|\vec{x}\right|^{2} \right)^{4} } x^{3} \label{eq1.70}\end{align} The third partial derivative can be represented as: \begin{align} \frac{\partial ^{3} u_{i} }{\partial x^{3} } =C\frac{\partial }{\partial x} \left(\frac{x}{S^{3} } \right)+C\frac{\partial }{\partial x} \left(\frac{x^{3} }{S^{4} } \right) \label{eq1.71}\end{align} \begin{align} \frac{\partial ^{3} u_{i} }{\partial x^{3} } =\frac{C}{S^{6} } \left(\frac{\partial x}{\partial x} S^{3} -x3S^{2} \frac{\partial S}{\partial x} \right)+\frac{C}{S^{8} } \left(3x^{2} S^{4} -x^{3} 4S^{3} \frac{\partial S}{\partial x} \right) \label{eq1.72}\end{align} Once (\ref{eq1.13}) is applied in (\ref{eq1.72}): \begin{align} \frac{\partial ^{3} u_{i} }{\partial x^{3} } =\frac{C}{S^{6} } \left(S^{3} -3xS^{2} 2x\right)+\frac{C}{S^{8} } \left(3x^{2} S^{4} -4x^{3} S^{3} 2x\right) \label{eq1.74}\end{align} \begin{align} \frac{\partial ^{3} u_{i} }{\partial x^{3} } =\frac{C}{S^{3} } +\frac{C}{S^{4} } x^{2} +\frac{C}{S^{5} } x^{4} \label{eq1.77}\end{align} for $\vec{x}\in R^{3}$, $k=\{1,2,3\}$, $i=\{1,2,3\}$. The degree for $x$ as per (\ref{eq1.77}) is: \begin{align} \deg _{x} (\frac{\partial ^{3} u_{i} }{\partial x^{3} } )=\deg _{x} (\frac{C}{S^{3} } +\frac{C}{S^{4} } x^{2} +\frac{C}{S^{5} } x^{4} ) \label{eq1.78}\end{align} \begin{align} \deg _{x} (\frac{\partial ^{3} u_{i} }{\partial x^{3} } )=-6 \label{eq1.79}\end{align} Notably, all terms in statement (\ref{eq1.78}) are of the same degree. Let us perform one additional partial derivative.
The fourth partial derivative can be represented as \begin{align} \frac{\partial }{\partial x} \left(\frac{\partial ^{3} u_{i} }{\partial x^{3} } \right)=\frac{\partial }{\partial x} \left(\frac{C}{S^{3} } +\frac{C}{S^{4} } x^{2} +\frac{C}{S^{5} } x^{4} \right) \label{eq1.80}\end{align} \begin{align} \frac{\partial ^{4} u_{i} }{\partial x^{4} } =\frac{\partial }{\partial x} \left(\frac{C}{S^{3} } +\frac{C}{S^{4} } x^{2} +\frac{C}{S^{5} } x^{4} \right) \label{eq1.81}\end{align} The partial derivatives of each individual term in the brackets of statement (\ref{eq1.81}) are: \begin{align} \frac{\partial }{\partial x} \left(\frac{C}{S^{3} } \right)=-3\frac{C}{S^{6} } S^{2} \frac{\partial S}{\partial x} =\frac{C}{S^{4} } \frac{\partial S}{\partial x} =\frac{C}{S^{4} } 2x=\frac{C}{S^{4} } x \label{eq1.82}\end{align} \begin{align} \frac{\partial }{\partial x} \left(\frac{C}{S^{4} } x^{2} \right)=\frac{C}{S^{8} } 2xS^{4} -\frac{C}{S^{8} } x^{2} 4S^{3} \frac{\partial S}{\partial x} =\frac{C}{S^{4} } x-\frac{C}{S^{5} } 4x^{2} 2x=\frac{C}{S^{4} } x+\frac{C}{S^{5} } x^{3} \label{eq1.83}\end{align} \begin{align} \frac{\partial }{\partial x} \left(\frac{C}{S^{5} } x^{4} \right)=\frac{C}{S^{10} } 4x^{3} S^{5} -\frac{C}{S^{10} } x^{4} 5S^{4} \frac{\partial S}{\partial x} =\frac{C}{S^{5} } x^{3} -\frac{C}{S^{6} } x^{4} 2x=\frac{C}{S^{5} } x^{3} +\frac{C}{S^{6} } x^{5} \label{eq1.84}\end{align} Once all three partial derivatives as per statements (\ref{eq1.82}), (\ref{eq1.83}) and (\ref{eq1.84}) are summed: \begin{align} \frac{\partial ^{4} u_{i} }{\partial x^{4} } =\frac{C}{S^{4} } x+\frac{C}{S^{5} } x^{3} +\frac{C}{S^{6} } x^{5} \label{eq1.85}\end{align} for any $\vec{x}\in R^{3}$. The degree for $x$ as per (\ref{eq1.85}) is: \begin{align} \deg _{x} (\frac{\partial ^{4} u_{i} }{\partial x^{4} } )=\deg _{x} (\frac{C}{S^{4} } x+\frac{C}{S^{5} } x^{3} +\frac{C}{S^{6} } x^{5} )=-7 \label{eq1.86}\end{align} Also, notably, all terms above are of the same degree. \newline Based on statements (\ref{eq1.62.1}), (\ref{eq1.65.1}), (\ref{eq1.79}), (\ref{eq1.86}), we can conclude that the degree for $x$ of the $\alpha $-th order partial derivative of the fluid velocity vector field component $u_{i}$ can be expressed as: \begin{align} \deg _{x} (\frac{\partial ^{\alpha } u_{i} }{\partial x^{\alpha } } )=-\alpha -3 \label{eq1.86.1}\end{align} for any $\{\alpha > 0\}\in N$, $\vec{x}\in R^{3}$, $i=\{1,2,3\}$.
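As an illustrative cross-check of this degree pattern (not part of the proof), one can differentiate a component of $\vec{u}$ repeatedly in SymPy and verify that, along a generic ray $\vec{x}=t\,(1,2,3)$, the derivative rescaled by $t^{\alpha+3}$ tends to a finite non-zero limit, consistent with $\deg_{x}=-\alpha-3$. The particular ray and order of differentiation variables below are arbitrary assumptions made only for the check:
\begin{verbatim}
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t', positive=True)
S = 1 + x1**2 + x2**2 + x3**2
u1 = 2*(x3 - x2)/S**2                       # first component of the velocity field

ray = {x1: t, x2: 2*t, x3: 3*t}             # generic ray x = t*(1, 2, 3)
expr = u1
for alpha in range(1, 5):
    expr = sp.diff(expr, [x1, x2, x3][(alpha - 1) % 3])   # mixed alpha-th derivative
    scaled = sp.simplify(expr.subs(ray) * t**(alpha + 3))
    print(alpha, sp.limit(scaled, t, sp.oo))              # finite non-zero limits
\end{verbatim}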
Based on statements (\ref{eq1.62.1}), (\ref{eq1.65.1}), (\ref{eq1.79}), (\ref{eq1.86}), the partial derivative by $x$ of any order $\alpha$ can be represented as: \begin{align} \frac{\partial ^{\alpha } u_{i} }{\partial x^{\alpha } } =\frac{C }{\left(1+\left|\vec{x}\right|^{2} \right)^{\left\lfloor \frac{\left|\alpha \right|}{2} \right\rfloor +2} } +\sum _{j=1}^{\left\lfloor \frac{\left|\alpha \right|-1}{2} \right\rfloor +2}C\frac{ x^{k_{j} } }{\left(1+\left|\vec{x}\right|^{2} \right)^{m_{j} } } \label{eq1.94}\end{align} for any $\{\alpha > 0\}\in N$, $x \in R^{3}; i=\{ 1,2,3\}$, where $x^{k_{j} }$ represents a $k_{j} $-tuple of any of the $x_{i} $ ; $i=\{ 1,2,3\}$. \newline Also, notably, each of the terms in statement (\ref{eq1.94}) is of the same degree: \begin{align} \deg _{x} (C\frac{ x^{k_{j} } }{\left(1+\left|\vec{x}\right|^{2} \right)^{m_{j} } } )=-\alpha -3 \label{eq1.95}\end{align} Then, for each of the terms of the sum in statement (\ref{eq1.94}), the limit for $|\vec{x}|\to \infty$ is: \begin{align} \lim _{\left|\vec{x}\right|\to \infty } C \frac{ x^{k_{j} } }{\left(1+\left|\vec{x}\right|^{2} \right)^{m_{j} } }=0 \label{eq1.96}\end{align} At the coordinate origin $\vec{x}=\vec{0}$, each of the terms of the sum in statement (\ref{eq1.94}) is zero: \begin{align} \left[C \frac{x^{k_{j} } }{\left(1+\left|\vec{x}\right|^{2} \right)^{m_{j} } } \right]_{\vec{x}=\vec{0}} =0 \label{eq1.97}\end{align} Let us name $C_{\alpha } (i,j)\in R$ such that \begin{align} C_{\alpha } (i,j)=C \frac{ x^{k_{j} } }{\left(1+\left|\vec{x}\right|^{2} \right)^{m_{j} } } \label{eq1.100}\end{align} In order to verify that $C_{\alpha } (i,j)$ has to be finite, let us assume the opposite: \begin{align} C_{\alpha } (i,j)=\infty \label{eq1.100.1}\end{align} As per statement (\ref{eq1.96}), we concluded that for $\left|\vec{x}\right| \to \infty$ the value of each term of statement (\ref{eq1.94}) converges to zero, and at the coordinate origin, as per (\ref{eq1.97}), each of the terms has value zero. Then, for any selected position $\vec{x}\in R^{3} $, a term could be infinite only if its denominator were equal to zero: \begin{align} 1+\left|\vec{x}\right|^{2} =0 \label{eq1.102}\end{align} However, $1+\left|\vec{x}\right|^{2} $ is positive definite and cannot be zero; its minimal value is 1. Based on this, we can conclude that statement (\ref{eq1.100.1}) is impossible, and therefore $C_{\alpha } (i,j)$ must be finite. \newline As $1+\left|\vec{x}\right|^{2} $ is positive definite, the first term of statement (\ref{eq1.94}), \begin{align} \frac{C }{\left(1+\left|\vec{x}\right|^{2} \right)^{\left\lfloor \frac{\left|\alpha \right|}{2} \right\rfloor +2} } \label{eq1.104}\end{align} also has to be finite for any $\{\alpha>0\} \in N$.
\newline Based on these two conclusions, the sum of all terms of statement (\ref{eq1.94}) must be finite as well: \begin{align} \frac{\partial ^{\alpha } u_{i} }{\partial x^{\alpha } } \ne \infty \label{eq1.105}\end{align} Based on (\ref{eq1.105}), there must be some finite $\{ C_{\alpha } (i)\ge 0\} \in R$ such that \begin{align} \left|\frac{\partial ^{\alpha } u_{i} }{\partial x^{\alpha } } \right|=\left|\frac{C }{\left(1+\left|\vec{x}\right|^{2} \right)^{\left\lfloor \frac{\left|\alpha \right|}{2} \right\rfloor +2} } +\sum _{j=1}^{\left\lfloor \frac{\left|\alpha \right|-1}{2} \right\rfloor +2}\frac{C x^{k_{j} } }{\left(1+\left|\vec{x}\right|^{2} \right)^{m_{j} } } \right|\le C_{\alpha } (i) ; i=\{ 1,2,3\} \label{eq1.106}\end{align} Then, for \begin{align} \left|\partial _{x}^{\alpha } \vec{u}(\vec{x})\right|=\sqrt{\sum _{i=1}^{3}\left|\frac{\partial ^{\alpha } u_{i} (\vec{x})}{\partial x^{\alpha } } \right|^{2} } \label{eq1.107}\end{align} as per (\ref{eq1.106}) and (\ref{eq1.107}): \begin{align} \sum _{i=1}^{3}\left|\frac{\partial ^{\alpha } u_{i} (\vec{x})}{\partial x^{\alpha } } \right|^{2} \le \sum _{i=1}^{3}C_{\alpha } (i)^{2} \label{eq1.108}\end{align} Based on this, there must exist $C_{\alpha } \in R$ such that \begin{align} C_{\alpha } \ge \sqrt{\sum _{i=1}^{3}C_{\alpha } (i)^{2} } \label{eq1.109}\end{align} Based on (\ref{eq1.109}), (\ref{eq1.108}) and (\ref{eq1.107}): \begin{align} \left|\partial _{x}^{\alpha } \vec{u}(\vec{x})\right|\le C_{\alpha } \label{eq1.110}\end{align} for any $\{\alpha > 0\}\in N$, $x \in R^{3}$. \newline\newline \textbf{Continuous differentiability of force field $\vec{f}(\vec{x},t)$} \newline As per the theorem statement, $f_{3}$, the third component of $\vec{f}$, is \begin{align} f_{3} (\vec{x},t)=\frac{1}{1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2} } \label{eq1.120}\end{align} for any $x \in R^{3}$, $t \ge 0$.
Let us define the denominator of statement (\ref{eq1.120}) as: \begin{align} B=1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2} \label{eq1.121}\end{align} Applying (\ref{eq1.121}) in (\ref{eq1.120}): \begin{align} f_{3} (\vec{x},t)=\frac{1}{B} \label{eq1.122}\end{align} Let us apply the partial derivative with respect to $t$ to (\ref{eq1.122}): \begin{align} \frac{\partial }{\partial t} f_{3} (\vec{x},t)=\frac{\partial }{\partial t} \frac{1}{B} \label{eq1.123}\end{align} \begin{align} \frac{\partial }{\partial t} f_{3} (\vec{x},t)=-\frac{1}{B^{2} } \frac{\partial B}{\partial t} \label{eq1.124}\end{align} As per (\ref{eq1.121}): \begin{align} \frac{\partial B}{\partial t} =\frac{\partial }{\partial t} \left(1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2} \right)=2t\left(\sum _{j=1}^{3}x_{j} \right)^{2} \label{eq1.125}\end{align} Applying (\ref{eq1.125}) in (\ref{eq1.124}): \begin{align} \frac{\partial }{\partial t} f_{3} (\vec{x},t)=-\frac{2t}{B^{2} } \left(\sum _{j=1}^{3}x_{j} \right)^{2} \label{eq1.126}\end{align} Let us define \begin{align} H=\sum _{j=1}^{3}x_{j} \label{eq1.127}\end{align} Applying (\ref{eq1.127}) in (\ref{eq1.126}): \begin{align} \frac{\partial }{\partial t} f_{3} (\vec{x},t)=-\frac{2t}{B^{2} } H^{2} \label{eq1.128}\end{align} Let us find the second partial derivative by $t$: \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=\frac{\partial }{\partial t} \left(-\frac{2t}{B^{2} } H^{2} \right) \label{eq1.129}\end{align} Since $H=\sum _{j=1}^{3}x_{j}$ does not depend on time: \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=-2H^{2} \frac{\partial }{\partial t} \left(\frac{t}{B^{2} } \right) \label{eq1.130}\end{align} \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=-2H^{2} \frac{1}{B^{4} } \left(B^{2} \frac{\partial t}{\partial t} -t2B\frac{\partial B}{\partial t} \right) \label{eq1.132}\end{align} As per statements (\ref{eq1.121}) and (\ref{eq1.127}), $\frac{\partial B}{\partial t}$ is: \begin{align} \frac{\partial B}{\partial t} =2t\left(\sum _{j=1}^{3}x_{j} \right)^{2} =2tH^{2} \label{eq1.133}\end{align} Once (\ref{eq1.133}) is applied in (\ref{eq1.132}): \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=-2H^{2} \frac{1}{B^{4} } \left(B^{2} -t2B2tH^{2} \right) \label{eq1.134}\end{align} \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=-2H^{2} \frac{1}{B^{4} } \left(B^{2} -4t^{2} BH^{2} \right) \label{eq1.135}\end{align} \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=-2\frac{H^{2} }{B^{2} } +8\frac{t^{2} H^{4} }{B^{3} } \label{eq1.136}\end{align} Similarly, the third partial derivative by $t$ is: \begin{align} \frac{\partial ^{3} }{\partial t^{3} } f_{3} (\vec{x},t)=-48\frac{t^{3} H^{6} }{B^{4} } +24\frac{tH^{4} }{B^{3} } \label{eq1.137}\end{align} The general form of the $m$-th partial derivative by $t$ can be expressed as: \begin{align} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}(-1)^{\left\| m-\left\lfloor \frac{m}{2} +1\right\rfloor +l\right\| _{2} } \frac{t^{\left\| m\right\| _{2} +2(l-1)} H^{2\left(m-\left\lfloor \frac{m}{2} +1\right\rfloor +l\right)} }{B^{\left\lfloor \frac{m-1}{2} \right\rfloor +l+1} } \label{eq1.138}\end{align} for any $x \in R^{3}$, $t \ge 0$, where \begin{align} h=\left\| m\right\| _{2} +2(l-1) \label{eq1.139}\end{align} \begin{align} e=2\left(m-\left\lfloor \frac{m}{2} +1\right\rfloor +l\right) \label{eq1.140}\end{align} \begin{align} s=\left\lfloor \frac{m-1}{2} \right\rfloor +l+1 \label{eq1.141}\end{align} Once (\ref{eq1.139}), (\ref{eq1.140}), (\ref{eq1.141}) are applied in (\ref{eq1.138}): \begin{align} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}(-1)^{\left\| \frac{e}{2} \right\| _{2} } \frac{t^{h} H^{e} }{B^{s} } \label{eq1.142}\end{align} for any $x \in R^{3}$, $t \ge 0$, $\{m>0\} \in N$. In order to demonstrate that statement (\ref{eq1.142}) correctly represents the partial derivative for any order of differentiation $\{m>0\} \in N$, let us evaluate a few orders of partial derivatives by $t$ using the statement above: \begin{align} \frac{\partial }{\partial t} f_{3} (\vec{x},t)=-2\frac{tH^{2} }{B^{2} } \label{eq1.143}\end{align} \begin{align} \frac{\partial ^{2} }{\partial t^{2} } f_{3} (\vec{x},t)=-2\frac{H^{2} }{B^{2} } +8\frac{t^{2} H^{4} }{B^{3} } \label{eq1.144}\end{align} \begin{align} \frac{\partial ^{3} }{\partial t^{3} } f_{3} (\vec{x},t)=-48\frac{t^{3} H^{6} }{B^{4} } +24\frac{tH^{4} }{B^{3} } \label{eq1.145}\end{align} \begin{align} \frac{\partial ^{4} }{\partial t^{4} } f_{3} (\vec{x},t)=384\frac{t^{4} H^{8} }{B^{5} } -288\frac{t^{2} H^{6} }{B^{4} } +24\frac{H^{4} }{B^{3} } \label{eq1.146}\end{align} \begin{align} \frac{\partial ^{5} }{\partial t^{5} } f_{3} (\vec{x},t)=-3840\frac{t^{5} H^{10} }{B^{6} } +3840\frac{t^{3} H^{8} }{B^{5} } -720\frac{tH^{6} }{B^{4} } \label{eq1.147}\end{align} Statements (\ref{eq1.143}) to (\ref{eq1.147}) match the partial derivatives by $t$ obtained incrementally from the previous order of the partial derivative, confirming that (\ref{eq1.142}) represents the general form of the partial derivative by $t$ of any order $\{m>0\} \in N$. \newline Since in statement (\ref{eq1.142}) $B$, as per (\ref{eq1.121}), is positive definite, $\frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)$ is continuous for any $\{ m > 0\} \in N$. As $\vec{f}(\vec{x},t)=(0,0,f_{3} )$, we can conclude that $\frac{\partial ^{m} }{\partial t^{m} } \vec{f}(\vec{x},t)$ is also continuous for any $\{ m > 0\} \in N$. Therefore, we can conclude that the vector field $\vec{f}(\vec{x},t)$ is continuously differentiable, $\vec{f}(\vec{x},t)\in C^{\infty } $, for any position $\vec{x}\in R^{3} $ and $t\ge 0$. \newline\newline \textbf{Determining $\deg_{x} (\frac{\partial ^{m} }{\partial t^{m}}\vec{f})$ and $\deg_{t} (\frac{\partial ^{m} }{\partial t^{m}}\vec{f})$} \newline Let us start with an analysis of $\deg _{x}(\frac{\partial ^{m}}{\partial t^{m}}\vec{f})$.
Per (\ref{eq1.142}), expanding $H$ and $B$ as per (\ref{eq1.127}) and (\ref{eq1.121}): \begin{align} \deg _{x} (\frac{\partial ^{m}f_{3}}{\partial t^{m} } )=Max(\deg _{x} (\frac{t^{h} \left(\sum _{j=1}^{3}x_{j} \right)^{e} }{\left(1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2} \right)^{s} } ))_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.150}\end{align} \begin{align} \deg _{x} (\frac{\partial ^{m}f_{3}}{\partial t^{m} })=Max(\deg _{x} (\frac{\left(\sum _{j=1}^{3}x_{j} \right)^{e} }{\left(\sum _{j=1}^{3}x_{j} \right)^{2s} } ))_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.150.2}\end{align} \begin{align} \deg _{x} (\frac{\partial ^{m}f_{3} }{\partial t^{m}})=Max\left(e-2s\right)_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.150.3}\end{align} expanding $e$ and $s$, as per (\ref{eq1.140}) and (\ref{eq1.141}), in (\ref{eq1.150.3}): \begin{align} \deg _{x} (\frac{\partial ^{m}f_{3} }{\partial t^{m}})=Max\left(2\left(m-\left\lfloor \frac{m}{2} +1\right\rfloor +l\right)-2(\left\lfloor \frac{m-1}{2} \right\rfloor +l+1)\right)_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.152}\end{align} \begin{align} \deg _{x} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max\left(2m-2\left\lfloor \frac{m}{2} +1\right\rfloor -2\left\lfloor \frac{m-1}{2} \right\rfloor -2\right)_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.154}\end{align} Statement (\ref{eq1.154}) does not depend on $l$; therefore, (\ref{eq1.154}) is equivalent to \begin{align} \deg _{x} (\frac{\partial ^{m}f_{3}}{\partial t^{m} })=2m-2\left\lfloor \frac{m}{2} +1\right\rfloor -2\left\lfloor \frac{m-1}{2} \right\rfloor -2 \label{eq1.155}\end{align} Let us evaluate statement (\ref{eq1.155}) for a few consecutive values of $m$: \newline \[m=1 \Longrightarrow \deg _{x}(\frac{\partial f_{3}}{\partial t })=2-2-0-2=-2\] \[m=2 \Longrightarrow \deg _{x}(\frac{\partial ^{2}f_{3}}{\partial t^{2}})=4-4-0-2=-2\] \[m=3 \Longrightarrow \deg _{x}(\frac{\partial ^{3}f_{3}}{\partial t^{3}})=6-4-2-2=-2\] \[m=4 \Longrightarrow \deg _{x}(\frac{\partial ^{4}f_{3}}{\partial t^{4}})=8-6-2-2=-2\] \[m=5 \Longrightarrow \deg _{x}(\frac{\partial ^{5}f_{3}}{\partial t^{5}})=10-6-4-2=-2\] Therefore, for any order $m$ of partial differentiation by time, the degree by $x$ of $\frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)$ can be expressed as \begin{align} \deg _{x} (\frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=-2 \label{eq1.156}\end{align} for any $\{m>0\} \in N$. \newline As already concluded, the degree for $x$ does not depend on $l$ as used in the sum of statement (\ref{eq1.142}), meaning that each term $\frac{t^{h} H^{e} }{B^{s} }$ of the sum is of the same degree; in other words: \begin{align} \deg _{x} (\frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=\deg _{x}( \frac{t^{h} H^{e} }{B^{s}})=-2 \label{eq1.156.1}\end{align} \newline As the degree for $x$ is $-2$ regardless of $m$, for $\vec{x}$ approaching infinity, $\left|\vec{x}\right|\to \infty $, \begin{align} \lim _{\left|\vec{x}\right|\to \infty } \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=0 \label{eq1.157}\end{align} the limit of the $m$-th order partial derivative by time must be equal to zero.
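The closed form (\ref{eq1.142}) and the degree result (\ref{eq1.156}) can be spot-checked symbolically. The following is a minimal sketch, assuming Python with the SymPy library is available; the variable names are illustrative only:
\begin{verbatim}
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam', real=True)
t = sp.symbols('t', positive=True)
H = x1 + x2 + x3
B = 1 + t**2 * H**2
f3 = 1 / B

# Closed forms (1.143)-(1.145) quoted above, compared with direct differentiation.
closed = {1: -2*t*H**2/B**2,
          2: -2*H**2/B**2 + 8*t**2*H**4/B**3,
          3: -48*t**3*H**6/B**4 + 24*t*H**4/B**3}
for m, expr in closed.items():
    assert sp.cancel(sp.diff(f3, t, m) - expr) == 0

# x-degree of -2: along the ray x1 = x2 = x3 = lam (t fixed at 1),
# lam**2 times the m-th time derivative should stay finite and nonzero
# as lam -> oo.
for m in (1, 2, 3):
    ray = sp.diff(f3, t, m).subs({x1: lam, x2: lam, x3: lam, t: 1})
    print(m, sp.limit(lam**2 * ray, lam, sp.oo))
\end{verbatim}
If the identities hold, the assertions pass silently and the printed limits are finite and nonzero, consistent with (\ref{eq1.156}).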
\noindent Also, at the coordinate origin $\vec{x}=\vec{0}$ : \begin{align} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)|_{\vec{x}=\vec{0}} =\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}(-1)^{\left\| \frac{e}{2} \right\| _{2} } \frac{t^{h} 0^{e} }{B^{s} } =0 \label{eq1.158}\end{align} the $m$-th order partial derivative by time is zero as well. \newline Now, let us check the degree of $t$, following the same expansion as in statement (\ref{eq1.150}): \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max(\deg _{t} (\frac{t^{h} \left(\sum _{j=1}^{3}x_{j} \right)^{e} }{\left(1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2} \right)^{s} } ))_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.159}\end{align} \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max(\deg _{t} (\frac{t^{h} }{t^{2s} } ))_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.160}\end{align} expanding $h$ and $s$, as per (\ref{eq1.139}) and (\ref{eq1.141}), in (\ref{eq1.160}): \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max(\deg _{t} (\frac{t^{\left\| m\right\| _{2} +2(l-1)} }{t^{2(\left\lfloor \frac{m-1}{2} \right\rfloor +l+1)} } ))_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.161}\end{align} \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max(\left\| m\right\| _{2} +2(l-1)-2(\left\lfloor \frac{m-1}{2} \right\rfloor +l+1))_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.162}\end{align} \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max(\left\| m\right\| _{2} +2l-2-2\left\lfloor \frac{m-1}{2} \right\rfloor -2l-2)_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.163}\end{align} \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})=Max(\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4)_{l=\{ 1,\left\lfloor \frac{m}{2} \right\rfloor +1\} } \label{eq1.164}\end{align} As per (\ref{eq1.164}), the degree of $t$ does not depend on $l$. Therefore, (\ref{eq1.164}) is equivalently represented as: \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}} )=\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4 \label{eq1.165}\end{align} for any $\{m>0\} \in N$. Let us evaluate statement (\ref{eq1.165}) for a few consecutive values of $m$: \[\deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})_{m=1} =\left[\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4\right]_{m=1} =1-0-4=-3\] \[\deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})_{m=2} =\left[\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4\right]_{m=2} =0-0-4=-4\] \[\deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})_{m=3} =\left[\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4\right]_{m=3} =1-2-4=-5\] \[\deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})_{m=4} =\left[\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4\right]_{m=4} =0-2-4=-6\] \[\deg _{t} (\frac{\partial ^{m}f_{3}}{\partial t^{m}})_{m=5} =\left[\left\| m\right\| _{2} -2\left\lfloor \frac{m-1}{2} \right\rfloor -4\right]_{m=5} =1-4-4=-7\] Based on this, the general form of $\deg _{t}(\frac{\partial ^{m}}{\partial t^{m}}\vec{f})$ can be expressed as: \begin{align} \deg _{t} (\frac{\partial ^{m}f_{3} }{\partial t^{m} })=-m-2 \label{eq1.166}\end{align} for any $\{m>0\} \in N$.
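Both (\ref{eq1.158}) and (\ref{eq1.166}) lend themselves to a quick symbolic spot-check. A minimal sketch, again assuming SymPy is available:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
t = sp.symbols('t', positive=True)
f3 = 1 / (1 + t**2 * (x1 + x2 + x3)**2)

for m in (1, 2, 3):
    d = sp.diff(f3, t, m)
    # (1.158): the m-th time derivative vanishes at the coordinate origin.
    print(m, sp.simplify(d.subs({x1: 0, x2: 0, x3: 0})))
    # (1.166): t**(m+2) times the derivative should stay finite and nonzero
    # as t -> oo at a fixed point with H = x1+x2+x3 != 0.
    print(m, sp.limit(t**(m + 2) * d.subs({x1: 1, x2: 1, x3: 1}), t, sp.oo))
\end{verbatim}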
As per statements (\ref{eq1.142}) and (\ref{eq1.166}) for $t$ approaching infinity $t\to \infty $ \begin{align} \lim _{t\to \infty } \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=0 \label{eq1.167}\end{align} for any $\{m>0\} \in N$. The limit of the $m$-th order of the partial derivative by $t$ must be equal to zero. Also, when $t=0$ : \begin{align} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)|_{t=0} =0 \label{eq1.168}\end{align} for any $\{m>0\} \in N$. \newline\newline \textbf{Determining $ \deg _{x} (\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} }\vec{f})$} \newline Now, beginning with statement (\ref{eq1.142}), let us perform the partial differentiation by $x$ : \begin{align} \frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=\frac{\partial }{\partial x} \sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}(-1)^{\left\| \frac{e}{2} \right\| _{2} } \frac{t^{h} H^{e} }{B^{s} } \label{eq1.169}\end{align} \begin{align} \frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}(-1)^{\left\| \frac{e}{2} \right\| _{2} } t^{h} \frac{\partial }{\partial x} \frac{H^{e} }{B^{s} } \label{eq1.169.1}\end{align} let us expand $\frac{\partial }{\partial x} \frac{H^{e} }{B^{s} }$ in statement (\ref{eq1.169.1}) \begin{align} \frac{\partial }{\partial x} \frac{H^{e} }{B^{s} } =\frac{1}{B^{2s} } \left(B^{s} \frac{\partial }{\partial x} H^{e} -H^{e} \frac{\partial }{\partial x} B^{s} \right) \label{eq1.170}\end{align} \begin{align} \frac{\partial }{\partial x} \frac{H^{e} }{B^{s} } =\frac{1}{B^{2s} } \left(B^{s} eH^{e-1} \frac{\partial H}{\partial x} -H^{e} sB^{s-1} \frac{\partial B}{\partial x} \right) \label{eq1.171}\end{align} applying partial derivative $ \frac{\partial}{\partial x}$ on $H$, as defined per (\ref{eq1.127}): \begin{align} \frac{\partial H}{\partial x} =\frac{\partial }{\partial x} \sum _{j=1}^{3}x_{j} =1 \label{eq1.172}\end{align} applying partial derivative $ \frac{\partial}{\partial x}$ on $B$ as defined per (\ref{eq1.121}): \begin{align} \frac{\partial B}{\partial x} =\frac{\partial}{\partial x}(1+t^{2} \left(\sum _{j=1}^{3}x_{j}\right)^{2})=2t^{2} \sum _{j=1}^{3}x_{j} =2t^{2} H \label{eq1.173}\end{align} then applying (\ref{eq1.173}) and (\ref{eq1.172}) on (\ref{eq1.171}): \begin{align} \frac{\partial }{\partial x} \frac{H^{e} }{B^{s} } =\frac{1}{B^{2s} } \left(B^{s} eH^{e-1} -H^{e} sB^{s-1} 2t^{2} H\right) \label{eq1.174}\end{align} \begin{align} \frac{\partial }{\partial x} \frac{H^{e} }{B^{s} } =\frac{1}{B^{2s} } \left(eB^{s} H^{e-1} -2st^{2} B^{s-1} H^{e+1} \right) \label{eq1.175}\end{align} \begin{align} \frac{\partial }{\partial x} \frac{H^{e} }{B^{s} } =\frac{eH^{e-1} }{B^{s} } -\frac{2st^{2} H^{e+1} }{B^{s+1} } \label{eq1.177}\end{align} once (\ref{eq1.177}) is applied in (\ref{eq1.169.1}): \begin{align} \frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t)=\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}(-1)^{\left\| \frac{e}{2} \right\| _{2} } \left(e\frac{t^{h} H^{e-1} }{B^{s} } -2s\frac{t^{h+2} H^{e+1} }{B^{s+1} } \right) \label{eq1.179}\end{align} Let us determine the degree for $x$ as per statement (\ref{eq1.179}). 
\begin{align} \deg _{x} (\frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=\deg _{x} (\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}\left(e\frac{t^{h} H^{e-1} }{B^{s} } -2s\frac{t^{h+2} H^{e+1} }{B^{s+1} } \right) ) \label{eq1.180}\end{align} Let us determine the degree of the first term in the brackets of sum above (\ref{eq1.180}). As per definition of $H$ by statement (\ref{eq1.127}): \begin{align} \deg _{x}(H)=\deg _{x}(\sum _{j=1}^{3}x_{j}) = 1 \label{eq1.180.1}\end{align} based on (\ref{eq1.180.1}) and the first term of the sum in statement (\ref{eq1.180}): \begin{align} \deg _{x} (\frac{t^{h} H^{e-1} }{B^{s} } )=\deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )-\deg _{x} (H)=\deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )-1 \label{eq1.181}\end{align} as per statement (\ref{eq1.156.1}): \begin{align} \deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )=-2 \label{eq1.182}\end{align} therefore \begin{align} \deg _{x} (\frac{t^{h} H^{e-1} }{B^{s} } )=\deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )-1=-3 \label{eq1.183}\end{align} Let us analyze the degree of the second term $\frac{t^{h} H^{e+1} }{B^{s+1} }$ in the brackets of the sum in statement (\ref{eq1.180}): \begin{align} \deg _{x} (\frac{t^{h} H^{e+1} }{B^{s+1} } )= \deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )+\deg_{x}(H)-\deg_{x}(B) \label{eq1.184}\end{align} As per definition of $B$ by statement (\ref{eq1.121}): \begin{align} \deg _{x}(B)=\deg _{x}(1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2})=\deg _{x}(\left(\sum _{j=1}^{3}x_{j} \right)^{2})=2 \label{eq1.184.1}\end{align} applying (\ref{eq1.184.1}) and (\ref{eq1.180.1}) in (\ref{eq1.184}), we can conclude: \begin{align} \deg _{x} (\frac{t^{h} H^{e+1} }{B^{s+1} } )=\deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )+1-2=-3 \label{eq1.184.2}\end{align} As per (\ref{eq1.184.2}) and (\ref{eq1.183}), we can conclude that both terms within brackets of the sum in statement (\ref{eq1.180}) are of same degree $-3$ for $x$. In addition, the degree for $x$ for statement (\ref{eq1.180}) is not in function of $l$ as used in the sum of the statement. 
Based on this, we conclude that the degrees for $x$ of all terms of the sum in statement (\ref{eq1.180}) are mutually equal, so the derived degree applies to the whole statement (\ref{eq1.180}): \begin{align} \deg _{x} (\frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=-3 \label{eq1.186}\end{align} Equivalently, for the partial derivative by $x$ of second order: \begin{align} \deg _{x} (\frac{\partial ^{2} }{\partial x^{2} } \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=(\deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )-1)-1=-4 \label{eq1.187}\end{align} and for the partial derivative by $x$ of third order: \begin{align} \deg _{x} (\frac{\partial ^{3} }{\partial x^{3} } \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=((\deg _{x} (\frac{t^{h} H^{e} }{B^{s} } )-1)-1)-1=-5 \label{eq1.188}\end{align} Based on (\ref{eq1.186}), (\ref{eq1.187}), and (\ref{eq1.188}), the general form for the degree of $x$ for any order $\alpha$ of the partial derivative by $x$ can be expressed as \begin{align} \deg _{x} (\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=-\alpha -2 \label{eq1.189}\end{align} \newline\newline \textbf{Determining $ \deg _{t} (\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} }\vec{f})$} \newline Now that we have determined the degree for $x$ for any order $\{\alpha>0\} \in N$ of the partial derivative by $x$ and any order $\{m>0\} \in N$ of the partial derivative by $t$, let us determine the degree for $t$ as well. Proceeding in the same incremental fashion, let us determine the degree for $t$, starting from statement (\ref{eq1.179}): \begin{align} \deg _{t} (\frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=\deg _{t} (\sum _{l=1}^{\left\lfloor \frac{m}{2} \right\rfloor +1}\left(e\frac{t^{h} H^{e-1} }{B^{s} } -2s\frac{t^{h+2} H^{e+1} }{B^{s+1} } \right) ) \label{eq1.190}\end{align} Let us determine the degree of the first term in the brackets of the sum in (\ref{eq1.190}).
As per definition of $H$ by statement (\ref{eq1.127}): \begin{align} \deg _{t}(H)=\deg _{t}(\sum _{j=1}^{3}x_{j}) = 0 \label{eq1.190.1}\end{align} based on (\ref{eq1.190.1}) and first term of sum in statement (\ref{eq1.190}): \begin{align} \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} } )=\deg _{t} (\frac{t^{h}}{B^{s}})+(e-1)\deg _{t} (H)=\deg _{t} (\frac{t^{h}}{B^{s}}) \label{eq1.190.2}\end{align} As per definition of $B$ by statement (\ref{eq1.121}): \begin{align} \deg _{t}(B)=\deg _{t}(1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2})=2 \label{eq1.190.3}\end{align} as per (\ref{eq1.190.3}) and (\ref{eq1.190.2}) \begin{align} \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} } )= \deg _{t} (\frac{t^{h}}{B^{s}}) = \deg _{t}({t^{h}}) - \deg _{t}({B^{s}})= \deg _{t}({t^{h}}) -s \deg _{t}(B)=h-2s \label{eq1.190.4}\end{align} once $h$ and $s$ are expanded in (\ref{eq1.190.4}) as per (\ref{eq1.139}) and (\ref{eq1.141}): \begin{align} \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} } )=h-2s=\left\| m\right\| _{2} +2(l-1) - 2(\left\lfloor \frac{m-1}{2} \right\rfloor +l+1) \label{eq1.190.5}\end{align} \begin{align} \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} } )=h-2s=\left\| m\right\| _{2} +2l-2- 2\left\lfloor \frac{m-1}{2} \right\rfloor -2l-2 \label{eq1.190.6}\end{align} \begin{align} \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })=h-2s=\left\| m\right\| _{2} - 2\left\lfloor \frac{m-1}{2} \right\rfloor -4 \label{eq1.190.7}\end{align} Let us evaluate statement (\ref{eq1.190.7}) for a few consecutive values of $m$: \[\deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })_{m=1}=1-0-4=-3\] \[\deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })_{m=2}=0-0-4=-4\] \[\deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })_{m=3}=1-2-4=-5\] \[\deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })_{m=4}=0-2-4=-6\] \[\deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })_{m=5}=1-4-4=-7\] which could be in general form expressed as: \begin{align} \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s} })=-m-2 \label{eq1.190.8}\end{align} Let us determine the degree of the second term in the brackets of the sum in (\ref{eq1.190}): \begin{align} \deg _{t} (2s\frac{t^{h+2} H^{e+1} }{B^{s+1}})= \deg _{t} (\frac{t^{h} H^{e-1} }{B^{s}})+\deg _{t} (\frac{t^{2} H^{2} }{B^{1}}) \label{eq1.192}\end{align} once (\ref{eq1.190.8}), (\ref{eq1.190.1}) and (\ref{eq1.190.3}) are applied on (\ref{eq1.192}): \begin{align} \deg _{t} (2s\frac{t^{h+2} H^{e+1} }{B^{s+1}})= -m-2+2+0-2=-m-2 \label{eq1.192.1}\end{align} As per (\ref{eq1.192.1}) and (\ref{eq1.190.8}), we can conclude that both terms within the brackets of the sum in statement (\ref{eq1.190}) are of same degree $-m-2$ for $t$. In addition, the degree for $t$ is not in function of $l$ used in the sum of statement (\ref{eq1.190}). Based on this, we can conclude that the degree derived for $t$ is applicable to the whole statement (\ref{eq1.190}): \begin{align} \deg _{t} (\frac{\partial }{\partial x} \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=-m-2 \label{eq1.193}\end{align} For each next derivative by x, the same process can be applied. Therefore, the degree of $t$ can be generalized as: \begin{align} \deg _{t} (\frac{\partial ^{\alpha } }{\partial x^{\alpha } } \frac{\partial ^{m} }{\partial t^{m} } f_{3} (\vec{x},t))=-m-2 \label{eq1.194}\end{align} for any $\{\alpha > 0\} \in N$, $\{m > 0\} \in N$, $\vec{x} \in R^{3}$. 
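The mixed-derivative degrees (\ref{eq1.189}) and (\ref{eq1.194}) can be spot-checked in the same way. A minimal sketch, assuming SymPy and taking the derivative by $x$ to act on $x_{1}$:
\begin{verbatim}
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam', real=True)
t = sp.symbols('t', positive=True)
f3 = 1 / (1 + t**2 * (x1 + x2 + x3)**2)

for alpha in (1, 2):
    for m in (1, 2):
        d = sp.diff(f3, t, m, x1, alpha)
        # (1.189): lam**(alpha+2) * d stays finite along x1 = x2 = x3 = lam.
        ray = d.subs({x1: lam, x2: lam, x3: lam, t: 1})
        deg_x_check = sp.limit(lam**(alpha + 2) * ray, lam, sp.oo)
        # (1.194): t**(m+2) * d stays finite as t -> oo at a fixed point.
        deg_t_check = sp.limit(t**(m + 2) * d.subs({x1: 1, x2: 1, x3: 1}),
                               t, sp.oo)
        print(alpha, m, deg_x_check, deg_t_check)
\end{verbatim}
Finite, nonzero outputs for both limits are consistent with $\deg_{x}=-\alpha-2$ and $\deg_{t}=-m-2$.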
Statements (\ref{eq1.194}) and (\ref{eq1.189}) can be more compactly expressed as: \begin{align} \deg _{x} (\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t))=-\alpha -2 \label{eq1.195}\end{align} \begin{align} \deg _{t} (\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t))=-m-2 \label{eq1.196}\end{align} as per (\ref{eq1.195}) and (\ref{eq1.196}): \begin{align} \lim _{\left|\vec{x}\right|\to \infty } \partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)=0 \label{eq1.197}\end{align} \begin{align} \lim _{t\to \infty } \partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)=0 \label{eq1.198}\end{align} As $H$ is defined as: \begin{align} H=\sum _{j=1}^{3}x_{j} \label{eq1.199}\end{align} at the coordinate origin, where $x_{j}=0$ for $j=\{1,2,3\}$, their sum must be zero, $H=0$. As the numerator of each of the resulting terms of the partial derivative $\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)$ must include $H$ to at least the first power, we can conclude that all resulting terms of the partial derivative $\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)$ must be zero at the coordinate origin $\vec{x}=\vec{0}$: \begin{align} \partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)|_{\vec{x}=\vec{0}} =0 \label{eq1.200}\end{align} for any $\{\alpha > 0\} \in N$, $\{m > 0\} \in N$, $\vec{x} \in R^{3}$. Also, as each of the resulting terms of the partial derivative $\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)$ must include $t$ to at least the first power, we can conclude that all resulting terms of the partial derivative $\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)$ must be zero when $t=0$: \begin{align} \partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)|_{t=0} =0 \label{eq1.201}\end{align} for any $\{\alpha > 0\} \in N$, $\{m > 0\} \in N$, $\vec{x} \in R^{3}$. The force vector field $\vec{f}$, as per the theorem definition, is: \begin{align} \vec{f}=(0,0,f_{3} (\vec{x},t)) \label{eq1.202}\end{align} for any $\vec{x} \in R^{3}$, $t \geq 0$. Then applying the partial derivatives $\partial _{x}^{\alpha } \partial _{t}^{m}$ to $\vec{f}$: \begin{align} \partial _{x}^{\alpha } \partial _{t}^{m} \vec{f}=(0,0,\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)) \label{eq1.203}\end{align} as per (\ref{eq1.203}), and since $f_{3}>0$: \begin{align} \left|\vec{f}\right|=f_{3} (\vec{x},t) \label{eq1.203.1}\end{align} therefore \begin{align} \left|\partial _{x}^{\alpha } \partial _{t}^{m} \vec{f}\right|=\left|\partial _{x}^{\alpha } \partial _{t}^{m} f_{3} (\vec{x},t)\right| \label{eq1.204}\end{align} as per (\ref{eq1.197}): \begin{align} \lim _{\left|\vec{x}\right|\to \infty } \left|\partial _{x}^{\alpha } \partial _{t}^{m} \vec{f}\right|=0 \label{eq1.207.1}\end{align} also, as per (\ref{eq1.200}): \begin{align} \left|\partial _{x}^{\alpha } \partial _{t}^{m} \vec{f}\right|_{\vec{x}=\vec{0}}=0 \label{eq1.207.2}\end{align} \newline\newline \textbf{Deriving solution for pressure $p(\vec{x},t)$} \newline The Navier-Stokes equation for incompressible fluid is expressed in the following form: \begin{align} \frac{\partial \vec{u}}{\partial t} +(\vec{u}\cdot \nabla )\vec{u}=-\frac{\nabla p}{\rho } +\nu \Delta \vec{u}+\vec{f} \label{eq1.230}\end{align} for $\nabla\cdot\vec{u}=0$ at any position in space $\vec{x} \in R^{3}$ and any time $t\geq0$. \newline As per statement (\ref{eq1.19}), the vector field $\vec{u}$, as defined by this theorem, is divergence free, $\nabla\cdot\vec{u}=0$.
Once the fluid velocity vector field $\vec{u}$ and force field $\vec{f}$, as defined in the theorem statement, are applied in the Navier-Stokes equation (\ref{eq1.230}), the terms of the Navier-Stokes equation can be expressed in the following way: \begin{align} \frac{\partial \vec{u}}{\partial t} =\vec{0} \label{eq1.231}\end{align} for any $\vec{x} \in R^{3}$, $t\geq0$. \newline Let us define the convection (advection) term vector field components as \begin{align} (\vec{u}\cdot \nabla)\vec{u}=(c_{1} ,c_{2} ,c_{3} ) \label{eq1.233.1}\end{align} once $\vec{u}$, as per the statement of this theorem, is applied in (\ref{eq1.233.1}), the convection term related vector field components $c_{i}$ are: \begin{align} c_{i} =4\frac{\sum _{j=1}^{3}x_{j} -3x_{i} }{\left(1+\sum _{j=1}^{3}x_{j}^{2} \right)^{4} } \label{eq1.233.1.1}\end{align} for $i=\{ 1,2,3\}$ and any $\vec{x} \in R^{3}$, $t\geq0$. \newline Let us define the viscosity related term vector field components as \begin{align} \nu \Delta \vec{u}=(l_{1} ,l_{2} ,l_{3} ) \label{eq1.233.2}\end{align} once $\vec{u}$, as per the statement of this theorem, is applied in (\ref{eq1.233.2}), the viscosity term related vector field components $l_{i}$ are: \begin{align} l_{i} =8\nu \frac{d_{i} \left(\left(\sum _{j=1}^{3}x_{j}^{2} +1\right)-6\right)}{\left(\sum _{j=1}^{3}x_{j}^{2} +1\right)^{4} } \label{eq1.233.3}\end{align} for $i=\{ 1,2,3\}$ and any $\vec{x} \in R^{3}$, $t\geq0$. \newline Let us express $\frac{\nabla p}{\rho}$ in the form of vector field components as: \begin{align} \frac{\nabla p}{\rho}=(g_{1} ,g_{2} ,g_{3} )=\vec{g} \label{eq1.235}\end{align} The terms of the Navier-Stokes equation (\ref{eq1.230}), using (\ref{eq1.235}), can be rearranged in the following way: \begin{align} \vec{g}=\frac{\nabla p}{\rho }=\nu \Delta \vec{u}+\vec{f}-\frac{\partial \vec{u}}{\partial t} -(\vec{u}\cdot \nabla )\vec{u} \label{eq1.235.1}\end{align} Applying (\ref{eq1.231}), (\ref{eq1.233.1.1}), (\ref{eq1.233.3}), and $\vec{f}(\vec{x},t)$, as per the statement of this theorem, in (\ref{eq1.235.1}), the resulting vector field components of $\vec{g}=\frac{\nabla p}{\rho}$ are: \begin{align} g_{i} =\frac{8\nu d_{i} \left(\left(\sum _{j=1}^{3}x_{j}^{2} +1\right)-6\right)-4\left(\sum _{j=1}^{3}x_{j} -3x_{i} \right)}{\left(\sum _{j=1}^{3}x_{j}^{2} +1\right)^{4} } +\frac{I(i)}{1+t^{2} \left(\sum _{j=1}^{3}x_{j} \right)^{2} } \label{eq1.236}\end{align} for $i=\{ 1,2,3\} $, where the last term is the force component $f_{3}$ of (\ref{eq1.122}) and $I(n)=\left\{\begin{array}{cc} {1} & {;n=3} \\ {0} & {;n\ne 3} \end{array}\right.$ \newline Once $g_{1} $ is integrated by $x_{1} $, the resulting pressure is: \begin{align} p(\vec{x},t) = 4 \frac{12 \nu x_{1} x_{2}- x_{1} x_{2}-12 \nu x_{1} x_{3}- x_{1} x_{3}-2 x_{2}^2-2 x_{3}^2-2}{6 \left(x_{2}^2+ x_{3}^2+1\right) \left(1+\sum _{i=1}^{3}x_{i}\right)^3}- \label{eq1.237}\end{align} \[4 \frac{x_{1}\left(12 \nu x_{2}^3-12 \nu x_{2}^2 x_{3}-48 \nu x_{2}+12 \nu x_{2} x_{3}^2+5 x_{2}-12 \nu x_{3}^3+48 \nu x_{3}+5 x_{3}\right)}{16 \left(x_{2}^2+ x_{3}^2+1\right)^3 \left(1+\sum _{i=1}^{3}x_{i}\right)}-\] \[4 \frac{ x_{1}\left(12 \nu x_{2}^3-12 \nu x_{2}^2 x_{3}-48 \nu x_{2}+12 \nu x_{2} x_{3}^2+5 x_{2}-12 \nu x_{3}^3+48 \nu x_{3}+5 x_{3}\right)}{24 \left(x_{2}^2+ x_{3}^2+1\right)^2 \left(1+\sum _{i=1}^{3}x_{i}\right)^2}+\] \[4 \frac{\left(-12 \nu x_{2}^3+12 \nu x_{2}^2 x_{3}+48 \nu x_{2}-12 \nu x_{2} x_{3}^2-5 x_{2}+12 \nu x_{3}^3-48 \nu x_{3}-5 x_{3}\right) \arctan \left(\frac{ x_{1}}{\sqrt{ x_{2}^2+ x_{3}^2+1}}\right)}{16 \left(x_{2}^2+ x_{3}^2+1\right)^{7/2}}+\] \[C(x_{2} ,x_{3} )+C\] for any $\vec{x} \in R^{3}$,
$t\geq0$. Once $g_{2} $ is integrated by $x_{2} $, the pressure is: \begin{align} p(\vec{x},t) = -\frac{2 \left(2 x_{1}^2+12 \nu x_{1} x_{2}+x_{1} x_{2}-12 \nu x_{2} x_{3}+x_{2} x_{3}+2 x_{3}^2+2\right)}{3 \left(x_{1}^2+x_{3}^2+1\right) \left(1+\sum _{i=1}^{3}x_{i}\right)^3}+ \label{eq1.238}\end{align} \[\frac{x_{2} \left(12 \nu x_{1}^3-12 \nu x_{1}^2 x_{3}-48 \nu x_{1}+12 \nu x_{1} x_{3}^2-5 x_{1}-12 \nu x_{3}^3+48 \nu x_{3}-5 x_{3}\right)}{4 \left(x_{1}^2+x_{3}^2+1\right)^3 \left(1+\sum _{i=1}^{3}x_{i}\right)}+\] \[\frac{x_{2} \left(12 \nu x_{1}^3-12 \nu x_{1}^2 x_{3}-48 \nu x_{1}+12 \nu x_{1} x_{3}^2-5 x_{1}-12 \nu x_{3}^3+48 \nu x_{3}-5 x_{3}\right)}{6 \left(x_{1}^2+x_{3}^2+1\right)^2 \left(1+\sum _{i=1}^{3}x_{i}\right)^2}+\] \[\frac{\left(12 \nu x_{1}^3-12 \nu x_{1}^2 x_{3}-48 \nu x_{1}+12 \nu x_{1} x_{3}^2-5 x_{1}-12 \nu x_{3}^3+48 \nu x_{3}-5 x_{3}\right) \arctan \left(\frac{x_{2}}{\sqrt{x_{1}^2+x_{3}^2+1}}\right)}{4 \left(x_{1}^2+x_{3}^2+1\right)^{7/2}}+\] \[C(x_{1} ,x_{3} )+C\] for any $\vec{x} \in R^{3}$, $t\geq0$. Once $g_{3} $ is integrated by $x_{3} $, the pressure is: \begin{align} p(\vec{x},t) = \frac{\arctan (t (x_{1}+x_{2}+x_{3}))}{t}+ \label{eq1.239}\end{align} \[\frac{2 \left(-2 x_{1}^2+12 \nu x_{1} x_{3}-x_{1} x_{3}-2 x_{2}^2-12 \nu x_{2} x_{3}-x_{2} x_{3}-2\right)}{3 \left(x_{1}^2+x_{2}^2+1\right) \left(1+\sum _{i=1}^{3}x_{i}\right)^3}-\] \[\frac{x_{3} \left(12 \nu x_{1}^3-12 \nu x_{1}^2 x_{2}-48 \nu x_{1}+12 \nu x_{1} x_{2}^2+5 x_{1}-12 \nu x_{2}^3+48 \nu x_{2}+5 x_{2}\right)}{4 \left(x_{1}^2+x_{2}^2+1\right)^3 \left(1+\sum _{i=1}^{3}x_{i}\right)}-\] \[\frac{x_{3} \left(12 \nu x_{1}^3-12 \nu x_{1}^2 x_{2}-48 \nu x_{1}+12 \nu x_{1} x_{2}^2+5 x_{1}-12 \nu x_{2}^3+48 \nu x_{2}+5 x_{2}\right)}{6 \left(x_{1}^2+x_{2}^2+1\right)^2 \left(1+\sum _{i=1}^{3}x_{i}\right)^2}+\] \[\frac{\left(-12 \nu x_{1}^3+12 \nu x_{1}^2 x_{2}+48 \nu x_{1}-12 \nu x_{1} x_{2}^2-5 x_{1}+12 \nu x_{2}^3-48 \nu x_{2}-5 x_{2}\right) \arctan \left(\frac{x_{3}}{\sqrt{x_{1}^2+x_{2}^2+1}}\right)}{4 \left(x_{1}^2+x_{2}^2+1\right)^{7/2}}+\] \[C(x_{1} ,x_{2} )+C\] for any $\vec{x} \in R^{3}$, $t\geq0$. Each of the statements (\ref{eq1.237}), (\ref{eq1.238}), and (\ref{eq1.239}) represents a solution for the pressure $p(\vec{x},t)$. All three results are mutually different. In addition, the first term in statement (\ref{eq1.239}) is: \begin{align} \frac{\arctan (t (x_{1}+x_{2}+x_{3}))}{t} \label{eq1.239.1}\end{align} which, once the point in time $t=0$ is applied in (\ref{eq1.239.1}), results in \begin{align} \frac{\arctan \left(0\right)}{0} =\frac{0}{0} \label{eq1.240}\end{align} which cannot be determined for any $\vec{x} \in R^{3}$ at $t=0$. \newline Based on the three mutually different resulting equations for pressure (\ref{eq1.237}), (\ref{eq1.238}), and (\ref{eq1.239}), one of which, (\ref{eq1.239}), includes the term (\ref{eq1.240}) that cannot be determined at any position $\vec{x} \in R^{3}$ at $t=0$, we can conclude that the Navier-Stokes equation for incompressible fluid, for the velocity vector field $\vec{u}(\vec{x})$ and the external force related vector field $\vec{f}(\vec{x},t)$ as specified by the statement of this theorem, does not have a solution at any position in space $\vec{x}\in R^{3} $ at $t=0$, which proves this theorem.
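For completeness, the first term of (\ref{eq1.239}) discussed above can be reproduced symbolically by integrating the force component over $x_{3}$. A minimal sketch, assuming SymPy and leaving the constant density factor $\rho$ implicit as in the text:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
t = sp.symbols('t', positive=True)
H = x1 + x2 + x3
f3 = 1 / (1 + t**2 * H**2)          # force component, statement (1.122)

# Antiderivative of f3 with respect to x3: the arctan term of (1.239).
P_force = sp.integrate(f3, x3)
print(P_force)                                  # expected: atan(t*(x1+x2+x3))/t
print(sp.simplify(sp.diff(P_force, x3) - f3))   # expected: 0
\end{verbatim}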
\end{proof} \section{Discussion} The results of the analysis performed demonstrate that there exist smooth vector fields $\vec{u}(\vec{x})$ and $\vec{f}(\vec{x},t)$ such that the Navier-Stokes equation for incompressible fluid does not have a solution at any position in $R^3$ space at $t=0$. Such a result strongly indicates that the Navier-Stokes equation for incompressible fluid has to be better understood in order to determine the root causes of the obtained results. \newline As is well known, many fluids are not very compressible. Therefore, incompressibility as a mathematical approximation might appear to be a logical and reasonable approach for simplifying the mathematical modeling of fluid behavior over space and time. However, the interpretation of \textbf{physical reasonability}, as referred to in the Clay Mathematics Institute's official problem statement on the existence and smoothness of the Navier-Stokes equation, implicitly includes the understanding that such mathematical model(s) must be in alignment with the recognized laws of physics. \newline All material bodies, as per the laws of physics, are to some degree compressible, regardless of how small such compressibility is. Incompressibility, on the other hand, does not mathematically allow for any compressibility at all. The question is whether the behavior of incompressible fluids, mathematically modelled and imposed in the form of the incompressibility condition, represents behavior which cannot be supported by the recognized laws of physics. \newline The hypothesis which could usefully be explored further is that fluid incompressibility as a mathematical approximation may be beyond the boundaries of what is 'physically reasonable' on a macroscopic scale in conjunction with the recognized laws of physics. If so, this might account for the unreasonable fluid velocities and blow-ups that are encountered, which would be worth exploring further. \newpage
\section{Introduction} In almost all Galactic globular clusters (GGC), the abundance variations of individual elements have been reported. Since the study of \citet{Sim68,Sim70}, the significance of the variation in C, N, and O abundances has been emphasized owing to their high abundances in chemical compositions \citep{Ren77,Dem80,Roo85}. Stellar models, in which chemical composition is adopted from observations, have been constructed for comparison studies (e.g., \citet{Van85}). Because ‘chemical anomalies’ of various other elements and anti-correlations of $CNONa$ and $MgAl$ in GGCs have been reported \citep{Osb71,Can98,Gra01,Coh05,Car09a,Car09b,Lee13}, it has become more important to consider observational results. For various other elements, the effects on stars of the abundance variations have been investigated (e.g., \citet{Dot07,Van12}). Furthermore, some studies suggest scenarios for explaining the general abundance pattern present in the stars of GGCs, including chemical compositions with positive variations in C, N, Na, negative variations in O \citep{Sal06,Cas08,Pie09,Ven09}, and positive variations in Al and negative variations in Mg content \citep{Cas13}. \begin{table*}[ht!] \centering \caption{Input Physics and Parameters} \label{tbl1} \begin{tabular}{cc} \hline \hline Input Physics & Source \\ \hline \multirow{2}{*}{Solar mixture} & \citet{GS98}\\ & ($Z$/$X$ = 0.0230) \\ Abundances and ratio of $\alpha$ elements & $[\alpha/Fe] = 0.3$ \citep{Van00} \\ OPAL Rosseland mean opacities & \citet{OPALOPAC}\footnotemark[1] \\ Low temperature opacities & \citet{Fer05}\footnotemark[1] \\ Equations of state & OPAL EOS \citep{OPALEOS} \\ Helium diffusion & \citet{Tho94} \\ Mixing length parameter & ${\it l}/H_{p} = 1.859$ (solar calibrated value) \\ \hline \end{tabular} \footnotetext[1]{The opacity tables for the mixtures have been generated for this work.} \end{table*} Spectroscopic data of some GCCs show substantially more variations in the contents of a few elements than those previously recognized. For example, abundance variations of Al above $1.0~dex$ in [Al/Fe] have been reported in the stars of NGC 2808 and NGC 6752. It has been generally considered that changes in the content of less abundant elements such as Na and Al cause only minor effects on stellar evolution (e.g. \citet{Van12}). However, their abundance variations up to extreme values reported by recent observations may have considerable impacts on stellar evolution. We have chosen eight elements, including C, N, O, Na, Mg, Al, Si, and Fe, to investigate cases in which their extreme values are reached, based on the values reported in previous studies. For various contents of these individual metals, stellar models and isochrones are constructed to analyze their effects and physical changes. In this study, we focus on the variation in total particle number of metal ions, and we present the changes and shifts of isochrones in a systematic way. The changes in the isochrone shape versus the relative number variation of metal ions are investigated and the results indicate that the behaviors of Na, Mg, and Al are similar in the stellar interior. Thus, these three elements can be considered as a group, which is similar to the well-recognized group of C, N, and O. Moreover, the influence of Fe on isochrones differs from those of other metal elements. The details of the stellar model constructions are given in section 2. The method of presenting the isochrone characteristics for systematic comparison is discussed in section 3. 
Section 4 presents the results and analysis based on the isochrone shape change in terms of the changes in the total numbers of metal ions. Further discussion and a summary are given in sections 5 and 6, respectively. \section{Stellar Model Construction} \subsection{Code and input physics} Stellar evolutionary models were constructed by using a version of the Yale Stellar Rotational Evolution Code (YREC), which is a standard stellar evolutionary code utilized for $Y^{2}$ isochrones \citep{Yi01,Yi08,Kim02}. The models in this study are compatible with the $Y^{2}$ isochrone set, although two updates have been made in the computation. First, the reference solar mixture of \citet{GS98} was used rather than that of \citet{GN93} used for the $Y^{2}$ isochrone projects. Although recent studies on the solar mixture have been conducted \citep{Asp05,Asp09,Lod09,Caf11,Ste16}, studies of helioseismology and neutrinos prefer the higher metallicity of mixtures reported in older studies. Thus, the solar mixture of \citet{GS98} was chosen for this work. As \citet{Van12} stated, the results in this type of differential study are not affected by the selection of the reference mixture. Second, the OPAL equation of state (OPAL EOS) was updated to its later version \citep{OPALEOS}. The change associated with the two updates is less than the typical observational error \citep{Beom14}. Table \ref{tbl1} summarizes the input physics employed for the computations in this work. For clarity, the following two points are worth mentioning. First, the cosmic chemical enrichment, specifically $\Delta Y/ \Delta Z$, was considered in the computation of the reference isochrones. In the comparison isochrones, however, to focus on the effects of abundance changes of the individual elements, the He content was kept constant even though the total metal abundance $Z$ was changed. This treatment is discussed further in the following subsection. Second, the stellar models in this work were constructed assuming that the ratio between metal contents in a mixture does not change during the evolution of a star. The surface chemical mixture, which is accessible through spectroscopic observations, is the integrated result of various physical mechanisms such as dredge-up, diffusion, and dynamics in the stellar atmosphere. However, our knowledge of these physical processes for each metal element is still rather limited. Because these physical processes have not been included, the ratio between metal contents in a mixture did not change during the evolution of the stellar models. \subsection{Construction of stellar models and isochrones} \begin{table}[t!] \centering \caption{The chemical abundance grids for the reference stellar models} \label{tbl2} \begin{tabular}{lccc} \hline \hline Globular Cluster & $Z$\footnotemark[1] & $Y$\footnotemark[2] & [Fe/H] \\ \hline Metal poor & 0.0002 & 0.2304 & -2.173\\ & 0.0005 & 0.2310 & -1.775\\ Intermediate & 0.0010 & 0.2320 & -1.473\\ & 0.0020 & 0.2340 & -1.170\\ & 0.0040 & 0.2380 & -0.866\\ Metal rich & 0.0070 & 0.2440 & -0.618\\ Extremely metal rich & 0.0120 & 0.2540 & -0.375 \\ \hline \end{tabular} \footnotetext[1]{The mass fraction of the elements heavier than the helium} \footnotetext[2]{The mass fraction of the helium} \end{table} \begin{table*}[ht!] 
\centering \caption{The observed values of $[m/Fe]$ for the metal elements} \label{tbl3} \begin{tabular}{ccccc} \hline \hline \multirow{2}{*}{Elements} & Number Fraction & \multicolumn{2}{c}{$\Delta[m/Fe]$} & \multirow{2}{*}{References} \\ & in the Reference Mixture & \multicolumn{2}{c}{in Observations} & \\ \hline C & 0.14824 & -0.5 & +0.5 & \citet{Car05}\\ N & 0.03724 & \multicolumn{2}{c}{+1.6} & \citet{Car05,Coh05}\\ O & 0.60388 & \multicolumn{2}{c}{-0.6} & \citet{Car09a}\footnotemark[1]\\ Na & 0.00187 & -0.3 & +0.5 & \citet{Car09a}\footnotemark[1]\\ Mg & 0.03396 & -0.8 & +0.2 & \citet{Lee13}\\ Al & 0.00069 & \multicolumn{2}{c}{+1.5} & \citet{Car09b}\footnotemark[1]\\ Si & 0.03243 & \multicolumn{2}{c}{+0.4} & \citet{Car09b}\footnotemark[1]\\ Fe & 0.01416 & \multicolumn{2}{c}{+0.4} & \\ \hline \end{tabular} \footnotetext[1]{\citet{Car09a,Car09b}, “Na-O anti-correlation and HB VII and VIII.”} \end{table*} The ranges of mass and metallicity for the stellar model construction were set for globular clusters in our Galaxy. The metallicity range of $[Fe/H] = -2.173 \sim -0.375$ ($Z = 0.0002 \sim 0.012$), which extends slightly beyond the usual range, was adopted. Including a cosmic primordial He abundance of 0.23 and a $\Delta Y/\Delta Z$ value of two, the chemical compositions for the reference models are summarized in Table \ref{tbl2}. For isochrones of 9 Gyr and older with this metallicity, the mass range was chosen as $0.7\sim1.1~M_{\odot}$ with $0.05~M_{\odot}$ increments. Considering the globular clusters in our Galaxy, the mixture of \citet{GS98} with $\alpha$ elements enhanced by 0.3 dex was adopted as the reference mixture for this work. The ratios of $\alpha$ elements were taken from \citet{Van00}. \begin{table*}[ht!] \centering \caption{The list of the metal compositions for each cases} \label{tbl4} \begin{tabular}{lcccccccccc} \hline \hline \multirow{2}{*}{Case} & \multicolumn{2}{c}{Metallicity\footnotemark[1]} & \multicolumn{8}{c}{Abundance of Individual Elements ($\log$~$N_{m}$)\footnotemark[2]} \\ & $Z$\footnotemark[3] & [Fe/H] & C & N & O & Na & Mg & Al & Si & Fe \\ \hline $Reference$ & 0.0010 & -1.473 & 7.166 & 6.566 & 7.776 & 5.266 & 6.526 & 4.836 & 6.506 & 6.146 \\ $C-0.5$ & 0.0009 & -1.473 & 6.666 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $C+0.5$ & 0.0012 & -1.473 & 7.666 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $N+0.8$ & 0.0012 & -1.473 & $\cdots$ & 7.366 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $N+1.6$ & 0.0022 & -1.472 & $\cdots$ & 8.166 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $O-0.3$ & 0.0007 & -1.473 & $\cdots$ & $\cdots$ & 7.476 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $O-0.6$ & 0.0006 & -1.473 & $\cdots$ & $\cdots$ & 7.176 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $Na-0.3$ & 0.0010 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & 4.966 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $Na+0.5$ & 0.0010 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & 5.766 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ $Mg-0.8$ & 0.0010 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & 5.726 & $\cdots$ & $\cdots$ & $\cdots$ \\ $Mg+0.2$ & 0.0010 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & 6.726 & $\cdots$ & $\cdots$ & $\cdots$ \\ $Al+0.8$ & 0.0010 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & 5.636 & $\cdots$ & $\cdots$ \\ $Al+1.5$ & 0.0010 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & 
$\cdots$ & 6.336 & $\cdots$ & $\cdots$ \\ $Si+0.4$ & 0.0011 & -1.473 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & 6.906 & $\cdots$ \\ $Fe+0.4$ & 0.0011 & -1.073 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & 6.546 \\ \hline \end{tabular} \footnotetext[1]{The metallicities for the cases of $Z_{ref.}=0.0010$ $([Fe/H]_{ref.}=-1.473)$. The abundance variation is so slight that the metallicities seems to be unchanged for most of the cases.} \footnotetext[2]{The abundances for the cases of $Z_{ref.}=0.0010$ on the scale $\log N_{He}=11.00$; $A_{m} = \log N_{m}/N_{He} +11.00$.} \footnotetext[3]{The mass fraction for the heavier elements than He.} \end{table*} To focus on the effects of each element clearly, the abundance of each specific element was changed individually. We chose to fix all of the other elements except for H. For enhancement of a specific element, for example, the content of H is decreased by the increment of the target element. This causes the stellar model to have slightly higher $Z$ and $[Fe/H]$. However, any change shown in a model was confirmed beforehand to be attributed primarily to the abundance change of the target element rather than to the associated change in the $\Delta X/X$ value. Because the relative variation in $X$, i.e., $\Delta X/X$, is very small, the changes in the value of $[Fe/H]$ in this work are also very small, at less than 0.01. The net effect of the increased metal and the decreased H combined is an overall enhancement of opacity throughout the stellar interior, which causes the tracks to shift toward the lower temperature. Even though H is the most important opacity source in the stellar interior, the changes in the $\Delta X/X$ value are not large enough to compensate for the opacity change due to the increased metal content. The treatment of this work produces tracks comparable to those produced by \citet{Van12}, in which $[Fe/H]$ and $Y$ were fixed. The isochrones were generated by utilizing the method developed for the third version of $Y^{2}$ isochrone projects. Each track was divided into several parts based on the key equivalent evolutionary phases (EEP) in the evolution, the zero-age main sequence (ZAMS), the main-sequence turnoff point (MSTO), the base of the giant branch (the base of GB), the bumps on the GB, and the GB tip. In each part between the key EEPs, numerous secondary EEPs were assigned on the basis of the track length on the $\log$ $T_{eff}$ versus $\log$ $L$ diagram. Isochrones were generated by interpolating tracks at the same secondary EEPs. \subsection{Abundance variations} The eight elements selected for this study, C, N, O, Na, Mg, Al, Si, and Fe, are either abundant metals in the stellar chemical makeup or elements having significant abundance variation according to recent observations of the stars in GGCs. Their abundance variations were taken from the extreme values, or the mean values of sub-population stars in the observed globular clusters. To compare the effects of Fe and Si directly, the abundance variation for Fe was chosen to be the same as that of Si. Table \ref{tbl3} shows the target elements, their variation, and the references. For N, O, and Al, of which the variations are quite large, the intermediate value cases were included. The various cases in this study are summarized in Table \ref{tbl4}. 
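The abundance entries of Table \ref{tbl4} follow directly from the reference mixture: each case simply shifts the reference log-abundance (on the $\log N_{He}=11.00$ scale) by the quoted $\Delta[m/Fe]$. A minimal bookkeeping sketch in plain Python, with element names and values taken from Tables \ref{tbl3} and \ref{tbl4}, illustrates this before the table's columns are described:
\begin{verbatim}
# Reference log-abundances on the log N_He = 11.00 scale (Table 4, first row).
reference = {'C': 7.166, 'N': 6.566, 'O': 7.776, 'Na': 5.266,
             'Mg': 6.526, 'Al': 4.836, 'Si': 6.506, 'Fe': 6.146}
# Each case is (element, Delta[m/Fe]); e.g. 'Al+1.5' shifts Al by +1.5 dex.
cases = {'C-0.5': ('C', -0.5), 'N+1.6': ('N', +1.6), 'O-0.6': ('O', -0.6),
         'Na+0.5': ('Na', +0.5), 'Mg-0.8': ('Mg', -0.8),
         'Al+1.5': ('Al', +1.5), 'Si+0.4': ('Si', +0.4),
         'Fe+0.4': ('Fe', +0.4)}

for name, (elem, shift) in cases.items():
    print(f"{name}: log N_{elem} = {reference[elem] + shift:.3f}")
# e.g. Al+1.5 -> 4.836 + 1.5 = 6.336, as listed in Table 4.
\end{verbatim}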
The first column in the table lists the cases, with the name consisting of the atomic symbol for the element changed and the amount of the change in $[m/Fe]$ with respect to the reference mixture. For example, $C-0.5$ indicates that the abundance of C is decreased by 0.5 dex in [C/Fe] with respect to the reference mixture. For example, the remaining columns of the table present the chemical composition set in the case of $Z_{ref.} = 0.001$, where the subscript $ref.$ refers to the reference model. For the second and the third columns, the metallicities of each case are expressed in $Z$ and $[Fe/H]$. As mentioned in section 2.2, the total metallicities are slightly modified as well. The last eight columns show the abundances by number for the eight target elements; these values were calculated on the scale of $\log N_{He}=11.00$. \section{Analysis method: a systematic presentation of the changes in isochrone shape} In previous studies, such as \citet{Van12} and \citet{Pie09}, the effects of abundance variation of individual elements have been discussed extensively. The abundance variations of an element mainly influence the opacity and energy generation, resulting in shape changes of evolutionary tracks and isochrones. To present these effects quantitatively, we examined the differences in the locations of MSTO and GB, and the length of sub-GB between isochrones. \begin{figure}[ht!] \figurenum{1} \includegraphics[width=\columnwidth]{f1} \caption{12 Gyr isochrones for various $Na$, $Mg$, $Al$, $Si$, or $Fe$ abundances. The colored lines present the 12 Gyr isochrones of the different mixtures as indicated in the legend. The order of the mixture in the legend was chosen according to the shift of the isochrones with respect to the reference isochrone. The spread of isochrones is significant only at the RGB at low metallicity (upper panel), whereas the spread is significant even at the MS and at the RGB at high metallicity (bottom panel).} \label{NaFe_iso} \end{figure} The net effect on isochrones is measured by considering an isochrone as a whole, from the main sequence (MS) to the red GB (RGB) tip. It is common knowledge that the influence of C, N, and O is mainly on the MS phase and that of Na, Mg, Al, Si, and Fe is mainly on the RGB. This simple description, however, is not sufficient. When the metallicity is sufficiently high in the MS stars to create effective temperatures of less than about 3.80 in $\log T_{eff}$, Na, Mg, Al, Si, or Fe can affect the opacity significantly in the outermost parts of these stars. As a result, a metal-rich star can be influenced by the effect of this high surface opacity during its entire lifetime. In Figure \ref{NaFe_iso}, this phenomenon is clearly indicated by a comparison of the upper and bottom panels, which show isochrones with metallicities of $[Fe/H] = -1.473$ and $-0.618$, respectively. The colored lines represent isochrones with various abundance variations. Contrary to that in the upper panel, the high-metallicity isochrones in the bottom panel show noticeable shifts of the MS that are comparable to those of the RGB. Moreover, such modifications on the MS phase are comparable to, or more than, those caused by C, N, or O. Therefore, the changes in the locations of both the MS and RGB were estimated for all elements considered. Because the effect of the abundance variation can differ depending on the locations along an RGB, the shifts in various locations were estimated from the RGB base to the tip. 
In this study, the changes in the shapes of isochrones are presented in terms of three parameters: the MS location, the RGB location, and the length of the sub-GB. In determining the difference in MS location between isochrones, the hottest points were selected as the MSTOs. The RGB locations were compared at four different levels set at points 1.5, 3.0, 4.5, and 6.0 mag brighter than the MSTO. The dimmest and the brightest levels among these locations represent the RGB base and tip, respectively. Except for the cases of C, N, and O variations, the location differences are considered only in terms of effective temperature rather than luminosity because the changes in the latter were small in the mass and metallicity ranges of this study. For the length of the sub-GB, the difference in effective temperature between the MSTO and the base of the RGB was estimated. The base of the RGB was set to the location 1.5 mag brighter than that of the MSTO. It should be noted that the length of the sub-GB in this study is commonly used for age estimation of globular clusters \citep{Dem90,Sar90,Van13}. \section{Results} \begin{figure}[ht!] \figurenum{2} \includegraphics[width=\columnwidth]{f2} \caption{The 12 Gyr isochrones from the sub-GB to the RGB bump are given for six cases, namely, $reference$, $Na+0.5$, $Mg+0.2$, $Al+1.5$, $Si+0.4$, and $Fe+0.4$. The order of the mixture in the legend was chosen according to the shift of the isochrones with respect to the reference isochrone. The isochrones of $Mg+0.2$, $Al+1.5$, and $Fe+0.4$ show almost the same shape, so their lines nearly overlap.} \label{Al_iso} \end{figure} In this section, we analyze the changes in isochrone shape for mixtures with abundance variations of individual elements. We focus on a single parameter, namely, the relative change in the total number of metal ions introduced by the variation of the individual elements. Because these changes are approximately the same for $Mg+0.2$, $Fe+0.4$, and $Al+1.5$, the isochrones of these three mixtures almost overlap in the Hertzsprung–Russell diagram (HRD). When the isochrone shape change is reviewed in terms of this parameter, it is found that Na, Mg, and Al have similar effects. It is widely accepted that C, N, and O can be considered as one group and that the total sum of their content can be utilized for describing the effect on stellar structure and evolution (e.g. \citet{Sim70,Ren77,Dem80,Van85}). The newly identified group of Na, Mg, and Al in this paper can be considered similarly. In addition, Fe shows characteristics distinct from those of the other elements. As with Si, the influence of Fe near the RGB tip in metal-rich stars is smaller than that of Na, Mg, and Al. In MS stars, however, the influence of Fe is much larger than that of Na, Mg, Al, and even Si. \subsection{Influence of extreme variation in abundance} It has been considered that the less abundant metals such as Al have limited influence on stellar evolution. However, the large increases or decreases reported by recent observations may have considerable impacts. For $Al+1.5$, which is the extreme case based on the values from observations, the isochrone shows noticeable changes with respect to the reference in Figure \ref{Al_iso}. The isochrone for $Al+1.5$ shows the same degree of change as those for $Mg+0.2$ and $Fe+0.4$. In fact, the three isochrones are difficult to distinguish in the figure. On the contrary, the isochrone for Na does not show significant change in the figure.
Although Na has about twice the abundance of Al in the reference mixture, the variation of $Na+0.5$ is not as large as that of $Al+1.5$. The amount of Na increase that makes a noticeable change in isochrone shape will be discussed in section 5.1. For the three cases of $Al+1.5$, $Mg+0.2$, and $Fe+0.4$, whose isochrones overlap in Figure \ref{Al_iso}, the actual changes in the total number of metal ions are similar even though the abundance variations presented in dex differ significantly, namely +1.5, +0.2, +0.4. Thus, we focused on the relative change in the total number of metal ions ($\Delta N_{Z}/N_{Z,ref.}$) to describe the abundance variation in the study of the effect of individual elements on isochrones. The unit of dex ($[m/Fe]$) is generally used to express abundance variation. However, when comparing cases involving different elements, values expressed in $[m/Fe]$ can be misleading because $[m/Fe]$ is scaled to the solar mixture. The mixture with C enhanced by 0.5 dex, for example, shows more variation in the number of metal ions than the mixture with N enhanced by 0.8 dex because C is more abundant than N in the solar mixture. For this reason, the abundances of abundant elements such as O, Mg, and Si have mainly been considered in the chemical composition of stars \citep{Roo85,Van12}. Another issue is that the value is log scaled. For example, mixtures with N enhanced by 1.6 and by 0.8 dex differ by only a factor of two in dex, whereas the corresponding changes in the number of N particles differ by roughly a factor of seven (+144.5\% versus +19.8\%; see Table \ref{tbl5}). Thus, to express the abundance variation using a single parameter, the relative change in the total number of metal ions is examined in this study. \begin{table}[t!] \centering \caption{The relative variation in total number of metal ions according to each case} \label{tbl5} \begin{tabular}{lr} \hline \hline \multirow{2}{*}{Mixture} & $\Delta N_{Z}/N_{Z,ref.}$\footnotemark[1] \\ & \multicolumn{1}{c}{(\%)} \\ \hline $Reference$ & 0.00 \\ $C-0.5$ & -10.14 \\ $C+0.5$ & +32.05 \\ $N+0.8$ & +19.77 \\ $N+1.6$ & +144.50 \\ $O-0.3$ & -30.12 \\ $O-0.6$ & -45.22 \\ $Na-0.3$ & -0.09 \\ $Na+0.5$ & +0.40 \\ $Mg-0.8$ & -2.86 \\ $Mg+0.2$ & +1.99 \\ $Al+0.8$ & +0.37 \\ $Al+1.5$ & +2.12 \\ $Si+0.4$ & +4.90 \\ $Fe+0.4$ & +2.14 \\ \hline \end{tabular} \footnotetext[1]{The relative variation in total number of metal ions, expressed as a percentage. Furthermore, this parameter is independent of the total metallicity.} \end{table} \subsection{Abundance variation and net effect} \begin{figure*} \centering \figurenum{3} \includegraphics[width=120mm]{f3} \caption{For isochrones of various $Na$, $Mg$, or $Al$ contents, the number variation of metal elements versus the degree of net effect, represented by the change in the length of the sub-GB, is given at three different metallicity conditions. The x axis is the number variation of metal ions, whereas the y axis is the length of the sub-GB. The seemingly linear relation may indicate that $Na$, $Mg$, and $Al$ have similar behavior in the stellar structure.} \label{len_SGB_NaMgAl} \end{figure*} \begin{figure*} \centering \figurenum{4} \includegraphics[width=120mm]{f4} \caption{Same as Figure \ref{len_SGB_NaMgAl} except for isochrones of various $C$, $N$, and $O$ contents.
Linearity can be found in each panel, which means $C$, $N$, and $O$ have similar behavior in stellar models.} \label{len_SGB_CNO} \end{figure*} \begin{figure*} \centering \figurenum{5} \includegraphics[width=120mm]{f5} \caption{For isochrones of various $C$, $N$, or $O$ contents, the number variation of metal elements versus the degree of net effect represented by the shifts of the MSTO is shown for three different metallicity conditions. The x axis is the number variation of metal ions, and the y axis is the shift of the MSTO on the HRD. A linear relation can be seen in all panels, as is shown in Figure \ref{len_SGB_CNO}.} \label{l_MSTO_CNO} \end{figure*} \begin{figure} \figurenum{6} \includegraphics[width=\columnwidth]{f6} \caption{Upper RGB of isochrones for various $Na$, $Mg$, $Al$, $Si$, and $Fe$ mixtures. The colored lines represent the 12 Gyr isochrones of the different mixtures as indicated in the legend. The order of the mixture in the legend was chosen according to the shift of the isochrones with respect to the reference isochrone. Upward along the RGB, the isochrones of $Si$ and $Fe$ approach the reference isochrones, which means that the influence of $Si$ and $Fe$ decreases more than that of the other elements as the effective temperature decreases.} \label{SiFe1} \end{figure} \begin{figure*} \centering \figurenum{7} \includegraphics[width=155mm]{f7} \caption{ For isochrones of various $Na$, $Mg$, and $Al$ mixtures, the number variation of metal elements versus the degree of net effect represented by the shifts of the RGB is given for three different metallicity conditions. The x axis is the number variation of metal ions, and the y axis presents the shifts of the RGB measured at the various locations on the HRD. The influence of $Si$ and $Fe$ is reduced more than that of other elements as the metallicity increases and the upper location of the RGB is approached.} \label{l_RGB_NaFe} \end{figure*} To express the abundance variation of various elements in the same parameter domain, the chosen parameter is the relative change in particle number with respect to the reference mixture. Because its value is relative to the reference mixture, this parameter is independent of the total metallicity. For example, when the parameter is +0.3, the total number of metal ions in the comparison model is enhanced by $+30~\%$ over that in the reference model. The parameter is presented as \begin{eqnarray} \Delta N_{Z}/N_{Z,ref.} \nonumber &=& \frac{N_{Z,mod.}-N_{Z,ref.}}{N_{Z,ref.}} \nonumber \\ \label{eq:def} \end{eqnarray} where $N_{Z}$ is the abundance in number for all metal elements, and the subscripts $ref.$ and $mod.$ indicate the stellar models before and after the modification, respectively. The parameter can be calculated in the following equation with variation in dex and the number fraction in the reference mixture: \begin{eqnarray} \Delta N_{Z}/N_{Z,ref.} \nonumber &=& f_{m,ref.} (10^{\Delta[m/Fe]} - 1) \nonumber \\ \label{eq:cal} \end{eqnarray} where $m$ is a target element having a change in abundance, and $f_{m,ref.}$ and $\Delta[m/Fe]$ are the number fraction in the reference mixture and the variation in $dex$, respectively; the other variables are the same as those in Equation~(\ref{eq:def}). The values of the parameter for the cases in this work are summarized in Table \ref{tbl5}. When the abundance change is expressed with the parameter, the degree of the shape change of the isochrones appears to be linearly related to the abundance variation. 
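Equation~(\ref{eq:cal}) is straightforward to evaluate with the number fractions of Table \ref{tbl3}; the following minimal sketch (plain Python, names illustrative) reproduces the percentages listed in Table \ref{tbl5}:
\begin{verbatim}
# Number fractions f_m,ref of the reference mixture (Table 3).
fraction = {'C': 0.14824, 'N': 0.03724, 'O': 0.60388, 'Na': 0.00187,
            'Mg': 0.03396, 'Al': 0.00069, 'Si': 0.03243, 'Fe': 0.01416}
# (element, Delta[m/Fe]) pairs corresponding to the cases of Tables 4 and 5.
cases = [('C', -0.5), ('C', +0.5), ('N', +0.8), ('N', +1.6),
         ('O', -0.3), ('O', -0.6), ('Na', -0.3), ('Na', +0.5),
         ('Mg', -0.8), ('Mg', +0.2), ('Al', +0.8), ('Al', +1.5),
         ('Si', +0.4), ('Fe', +0.4)]

for elem, dex in cases:
    dn = fraction[elem] * (10**dex - 1)    # f_m,ref * (10^Delta[m/Fe] - 1)
    print(f"{elem}{dex:+.1f}: {100 * dn:+.2f} %")
# e.g. Mg+0.2 -> +1.99 %, Al+1.5 -> +2.12 %, Fe+0.4 -> +2.14 %, as in Table 5.
\end{verbatim}
This makes the near-coincidence of the $Mg+0.2$, $Al+1.5$, and $Fe+0.4$ isochrones discussed above immediately visible from the mixture data alone.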
\subsection{Grouping of elements and their abundance sum} The impact of abundance variation in the Na, Mg, and Al contents is shown in Figure \ref{len_SGB_NaMgAl}. The figure presents the difference in sub-GB length for three different metallicities in terms of the abundance variation, which was introduced in the previous section. In the figure, the points appear to form a straight line. This indicates that the changes in isochrone shape are linearly correlated with the relative changes in the number of metal ions regardless of the choice of element among Na, Mg, or Al. That is, when the number variations are the same, the isochrones for the three elements have similar shapes. The same is true for the shifts of the MS and RGB, which is discussed further in the following subsection. This correlation indicates that the three elements affect stellar interior models in a similar manner. The abundance change for these elements affects the interior models mainly through the opacity. As reported by \citet{Van12}, the abundance change of these elements increases the opacity in similar temperature ranges of the stellar interior. These elements have similar atomic structures; therefore, their ionization potentials are similar, and they donate a similar number of electrons under given physical conditions. Therefore, the opacity is changed similarly when the abundance changes of these elements are the same. Their effect on the energy generation, however, is very limited. Unlike C, N, and O, these elements affect the energy generation only through the changes in the physical conditions associated with the structural changes. It is widely accepted that C, N, and O have similar roles and influence in a star and that the degree of the effects depends on $[C+N+O/Fe]$. Figure \ref{len_SGB_CNO} is the same as Figure \ref{len_SGB_NaMgAl} except for the elements. As expected, a linear relation can be detected in each panel even though the linearity appears to be weaker in Figure \ref{len_SGB_CNO}. This may have occurred because these elements are related to nucleosynthesis as well as opacity, whereas the group of Na, Mg, and Al is directly related only to opacity. Figure \ref{l_MSTO_CNO} plots the shifts of the MSTO in both effective temperature (upper panels) and brightness (lower panels). In all panels, a linear relation can be found. This again means that the degree of the net effect is the same when the number of the target element is changed by the same amount regardless of which element is chosen among C, N, and O. In summary, it appears that Na, Mg, and Al can be considered as one group. Similar to $[C+N+O/Fe]$, $[Na+Mg+Al/Fe]$ determines the degree of the effect on stellar models. Therefore, no significant change can be expected in stars in which the sum of the Na, Mg, and Al contents is not significantly changed. For example, in stars of the AGB phase, the $Mg-Al$ reaction reduces the Mg content and increases the Al content. Thus, next-generation stars polluted by the winds from AGB stars show the $Mg-Al$ anti-correlation. Because the sum of the Mg and Al contents is not significantly changed, however, the effect associated with the abundance variation in these contents may be very limited. \subsection{Distinct characteristics of Si and Fe \\ from other elements} In this section, we focus on the characteristics of Fe and Si that distinguish them from the other elements considered in this work.
It is considered that Si and Fe generally behave in a manner similar to other metals, especially Mg, in stellar models (e.g., \citet{Van12}). In particular, Si has been considered to be similar to Mg owing to their roles in stellar models and their high abundance. However, \citet{Dot07} reported that the influence of Si on opacity becomes smaller than that of Mg at low temperatures. In the present study, the effect of Fe, as well as Si, was shown to be lower than that of other metals at low temperatures. In addition, near the MSTO the influence of Fe is much larger than that of Na, Mg, Al, and even Si. In high-metallicity conditions, the influence of Fe, as well as that of Si, decreases near the RGB tip, where the effective temperature is lower. Figure \ref{SiFe1} shows the RGB of the 12 Gyr isochrones with $[Fe/H] = -0.866$. In the right panel, unlike the others, the isochrones of $Si+0.4$ and $Fe+0.4$ are inclined toward the reference isochrone as they approach the RGB tip. The same is shown in Figure \ref{l_RGB_NaFe}, which also shows the location differences of the RGBs with respect to the abundance variation. The four columns of the panels represent the four different values of metallicity, and the three rows represent cases with different locations of the RGB shift measurement. In the left panels of the figure, the points of Si and Fe are located near the line formed by the points of Na, Mg, and Al. In the upper and right panels, however, which represent the shift of the RGB measured at brighter points along the RGB and at higher metallicity, the two points appear not to be associated with the linear relation formed by the other points. For the abundance variations of $Si+0.4$ and $Fe+0.4$, the shift of the RGB was expected to be larger, with the points located farther below. The fact that this did not occur can be interpreted as indicating the lesser influence of Si and Fe in stars with low effective temperature, i.e., for higher metallicity and positions closer to the RGB tip. Furthermore, this effect is stronger for Si than for Fe. The influence of each element on the opacity varies with temperature and density according to its atomic structure. Thus, the influences of Na, Mg, and Al are expected to differ from each other only at higher metallicity. However, such metallicity is beyond the usual metallicity range of globular clusters in our Galaxy (above about $[Fe/H] = -0.5$). In comparison, the reduced influence of Si and Fe can be seen in the RGB stars of a metal-rich cluster above $[Fe/H] = -1.0$. \begin{figure*} \centering \figurenum{8} \includegraphics[width=120mm]{f8} \caption{Same as the upper panels of Figure \ref{l_MSTO_CNO}, except for isochrones of various $Na$, $Mg$, $Al$, $Si$ and $Fe$ cases. The influence of $Fe$ near the MSTO is noticeably higher than that of the other elements. However, $Si$ appears to follow the trend of $Na$, $Mg$, and $Al$.} \label{l_MSTO_NaFe} \end{figure*} \begin{figure*}[ht!] \centering \figurenum{9} \includegraphics[width=120mm]{f9} \caption{Same as Figure \ref{len_SGB_NaMgAl}, but including the points of $Si+0.4$ and $Fe+0.4$. It should be noted that $Si+0.4$ and $Fe+0.4$ show less change than $Mg+0.2$ and $Al+1.5$ in the length of the sub-GB.} \label{len_SGB_NaFe} \end{figure*} The influence of Fe near the MSTO is greater than that of Na, Mg, Al, and even Si. Figure \ref{l_MSTO_NaFe} shows the shift of the MSTO as a function of the abundance variation. The point of $Fe+0.4$ is located far below the line formed by the other points.
This indicates that a larger change of the MSTO location than for the other elements is expected when the degree of the abundance variation is the same. Although the abundance variation of $Fe+0.4$ is similar to that of $Mg+0.2$ and $Al+1.5$, the MSTO shift of the $Fe+0.4$ isochrone is twice that of the others. On the contrary, the point of $Si+0.4$ appears to be on the line formed by the points of the cases for Na, Mg, and Al. In the inner part of a stellar model, the opacity peak associated with Fe is at a different temperature range from that of the other elements; the opacity peaks of Na, Mg, Al, and Si occur almost in the same region \citep{Van12}. These characteristics are shown in Figure \ref{len_SGB_NaFe}, which is the same as Figure \ref{len_SGB_NaMgAl} except for the addition of the Fe and Si cases. In all panels of the figure, the points of $Fe+0.4$ and $Si+0.4$ do not appear to be associated with the linear relation of Na, Mg, and Al. The length of the sub-GB for $Fe+0.4$ and $Si+0.4$ is modified less because of the lower influence of Fe and Si on the RGB. Additionally, $Fe+0.4$ shows a significant shift of the MSTO, which is in the same direction as that of the RGB; thus, the change in the length of the sub-GB is less than that in the other cases. \section{Discussion} The first subsection discusses abundance variations of less abundant metals. The previous results show that their extreme variations can cause considerable change in isochrone shape. The lower limits of abundance variations are discussed based on the abundance variations and the associated shape changes in the isochrone. Furthermore, the results of section 4 indicate that the two parameters $[C+N+O/Fe]$ and $[Na+Mg+Al/Fe]$ are the key parameters in stars whose mixtures have the $CNONa$ and $MgAl$ anti-correlations. Thus, the last subsection discusses the implication of the new group of Na, Mg, and Al from an evolutionary point of view. \subsection{Lower limits of abundance variation} To determine the limiting value of the abundance variation that causes noticeable changes in the isochrones, the length of the sub-GB was investigated. It was assumed that the typical error of the age estimation utilizing the length of the sub-GB for a globular cluster is about $0.5~Gyr$ \citep{Van13}. For C and N, an enhancement of about $0.5~dex$ in $[C/Fe]$ or about $1.0~dex$ in $[N/Fe]$ causes a noticeable increase in the age estimation. Because of their low abundances in the reference mixture, however, any amount of depletion in the abundance of these elements cannot produce isochrones that resemble those that are $0.5~Gyr$ younger. Conversely, both enhancement and depletion are attainable for O. For Na and Al, this factor depends on the metallicity. For metallicities lower than $[Fe/H] = -1.0$, $[Na/Fe] = 1.3~dex$ and above produces an isochrone appearing $0.5~Gyr$ younger, which corresponds to an enhancement of $\Delta[Na/Fe] = +1.0$ with respect to the reference mixture. For Al, the value is $[Al/Fe] = 1.2~dex$ and above, which is the same as $\Delta[Al/Fe] = +1.5$ with respect to the reference mixture. For a metallicity higher than $[Fe/H] = -1.0$, the value is about $[Na/Fe] = 1.1~dex$ and above, which corresponds to $\Delta[Na/Fe] = +0.8$; for Al, the value is $[Al/Fe] = 0.9~dex$ and above, which is $\Delta[Al/Fe] = +1.2$. Because of their low abundances, an isochrone resembling one $0.5~Gyr$ older cannot be generated by any amount of depletion in the Na and Al contents. For Mg, however, both enhancement and depletion may be considered.
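A rough consistency check of these limits can be made with Equation~(\ref{eq:cal}): within each group, the quoted single-element limits correspond to similar relative changes in the total number of metal ions. The sketch below uses the same illustrative number fractions as above (back-solved from Table \ref{tbl5}, not the adopted mixture values), so the percentages are only indicative.
\begin{verbatim}
# Rough check: the quoted limits translate into similar values of
# Delta N_Z/N_Z,ref within each group (illustrative number fractions).
F_REF = {"C": 0.148, "N": 0.037, "Na": 0.0019, "Al": 0.0007}
LIMITS_DEX = [("C", +0.5), ("N", +1.0),    # CNO group
              ("Na", +0.8), ("Al", +1.2)]  # NaMgAl group, higher metallicity
for element, d_dex in LIMITS_DEX:
    change = F_REF[element] * (10.0 ** d_dex - 1.0)
    print(f"{element}{d_dex:+.1f}: {100 * change:+.1f} %")
# The C and N limits both correspond to roughly +30 %,
# while the Na and Al limits both correspond to roughly +1 %.
\end{verbatim}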
As discussed in section 4.3, for elements in a group, the variation of the total sum, rather than the individual variations, is the important parameter. An abundance variation of about $\Delta[C+N+O/Fe]^{+0.15}_{-0.20}$ and greater can produce recognizable changes in the age estimation. Depending on the metallicity, variations beyond $\Delta[Na+Mg+Al/Fe]^{+0.20}_{-0.35}$ for metallicities lower than $[Fe/H] = -1.0$ and beyond $\Delta[Na+Mg+Al/Fe]^{+0.11}_{-0.14}$ for metallicities higher than $[Fe/H] = -1.0$ are considerable. \subsection{Implication of the general abundance pattern} The fact that Na, Mg, and Al can be considered as a group, as in the case of $CNO$, has an important implication for the studies of the GGCs that show the $CNONa$ and $MgAl$ anti-correlations. According to the anti-correlations, the contents of C and N of second-generation stars are enhanced, whereas that of O is depleted. Similarly, the abundance variation of Na and Al is opposite to that of Mg. When the changes in the contents of the individual elements in a group cancel out, the resulting effect may be less than expected otherwise. For example, some second-generation stars have a mixture enhanced by $0.8~dex$ in Na and $1.0~dex$ in Al and depleted by $0.3~dex$ in Mg \citep{Car09b}. The total abundance change is -0.0008 in $\Delta N_{Z}/N_{Z,ref}$ and -0.010 in terms of $\Delta[Na+Mg+Al/Fe]$. Therefore, no significant change is expected in the shape of its isochrone. This expectation, similar to that reported by \citet{Cas13}, can serve as an example of the case mentioned in the last part of section 4.3. For the case of \citet{Sal06}, second-generation stars have a mixture with a 0.6 dex increase of C, a $1.8~dex$ increase of N, and a $0.8~dex$ decrease of O. The actual changes in total are 2.24 in $\Delta N_{Z}/N_{Z,ref}$ and 0.585 in $\Delta[C+N+O/Fe]$. Because of the huge increase in N, the total variation is very large even after some cancellation. Therefore, the isochrones for these second-generation stars differ significantly from those for first-generation stars. For an isochrone shape on the theoretical HRD, O is commonly taken as the representative among $CNO$ because it is the most abundant element. Similarly, Mg can represent Na, Mg, and Al. Thus, instead of all six elements C, N, O, Na, Mg, and Al, only the two elements O and Mg need to be considered. For the first example, the isochrone for $\Delta[Mg/Fe] = -0.010$ is the same as that with variations in the contents of all three elements; the case of $\Delta[O/Fe] = 0.674$ is the same as that in the second example. \section{Summary} The abundance variations of individual elements have been reported in stars of GGCs; their effects on stellar evolution and isochrones have been investigated for extreme cases. In this study, stellar models and isochrones were constructed with mass and metallicity ranges for GGCs of $[Fe/H] = -2.173 \sim -0.375$ ($Z = 0.0002 \sim 0.012$) and $0.7 \sim 1.1 M_{\odot}$. In the analysis of the effect of an individual element, the change in the total number of metal ions introduced by the abundance variation is shown to be a key parameter. The change of the isochrone shape is expressed in terms of the change in the total number of metal ions. Analysis of the isochrones for various elements revealed that a few elements have similar roles in stellar models. In particular, Na, Mg, and Al can be considered as one group as in the case of C, N, and O.
Therefore, the abundance variations can be considered as variations of $[Na+Mg+Al/Fe]$ and $[C+N+O/Fe]$. The influence of Si and Fe on the MS and RGB differed from that of Na, Mg, and Al. Specifically, the influence of Si and Fe on the RGB became smaller than that of Na, Mg, and Al in stars with low effective temperature. Thus, the influence of Si and Fe decreases as the metallicity increases and at points higher along the RGB. Additionally, the influence of Fe on the MS is larger than that of Na, Mg, Al, and even Si. To produce a noticeable change in the isochrones, the variation in the content of the elements should be larger than certain values. Depending on the metallicity, these values are about $[Na/Fe] = 1.1~dex$ or more and $[Al/Fe] = 0.9~dex$ or more, together with enhancements of about $0.5~dex$ in $[C/Fe]$ and about $1.0~dex$ in $[N/Fe]$. This can also be presented as $\Delta[C+N+O/Fe]^{+0.15}_{-0.20}$ and $\Delta[Na+Mg+Al/Fe]^{+0.11}_{-0.14}$. According to the general abundance pattern of second-generation stars, the abundances of the relatively less abundant elements such as C, N, Na, and Al were highly enhanced while those of the abundant elements such as O and Mg were depleted (e.g., \citet{Car09a,Car09b}). However, because Na, Mg, and Al can be considered as a single group similar to the case of C, N, and O, the abundance sum of $NaMgAl$ can also be utilized in studies examining the effect caused by their abundance variation in stellar models. \acknowledgments This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2011-0025296). We thank Y.-W.~Lee for the initial suggestion and helpful discussions during this research.\\
\section{Introduction} Given an integer $n \in \Zp$ and some numbers $\alpha,\beta \in \R$ such that $\alpha<\beta$, a sequence of real numbers $(a_i)_{i = 1}^k$ is said to \textbf{fluctuate at least $n$ times} across the interval $(\alpha,\beta)$ if there are indexes $1 \leq i_0 < i_1 < \dots < i_n \leq k$ such that \begin{aufziii} \item if $j$ is odd, then $a_{i_j} < \alpha$; \item if $j$ is even, then $a_{i_j} > \beta$. \end{aufziii} In this case it is clear that for every even $j$ with $j+1 \leq n$ we have \[ a_{i_j} > \beta \quad \text{and} \quad a_{i_{j+1}} < \alpha, \] i.e., $(a_i)_{i=1}^k$ has at least $\lceil \frac n 2 \rceil$ \textbf{downcrossings} from $\beta$ to $\alpha$ and at least $\lfloor \frac n 2 \rfloor$ \textbf{upcrossings} from $\alpha$ to $\beta$. If $(a_i)_{i \geq 1}$ is an infinite sequence of real numbers, we use the same terminology and say that $(a_i)_{i \geq 1}$ fluctuates at least $n$ times across the interval $(\alpha,\beta)$ if some initial segment $(a_i)_{i=1}^k$ of the sequence fluctuates at least $n$ times across $(\alpha,\beta)$. We denote the sets of all real-valued sequences having at least $n$ fluctuations across an interval $(\alpha, \beta)$ by $\mathcal{F}_{(\alpha,\beta)}^n$, and it will be clear from the context if we are talking about finite or infinite sequences. The main result of this article is the following theorem, which generalizes the results in \cite{kw1999} about fluctuations of averages of nonnegative functions. \begin{thmn} Let $\Gamma$ be a group of polynomial growth and let $(\alpha, \beta) \subset \Rps$ be some nonempty interval. Then there are some constants $c_1,c_2 \in \Rps$ with $c_2<1$, which depend only on $\Gamma$, $\alpha$ and $\beta$, such that the following assertion holds. For any probability space $\prX=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\prX$ and any measurable $f \geq 0$ on $X$ we have \[ \mu(\{ x: (\avg{g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq 1$. \end{thmn} The paper is structured as follows. We provide some background on groups of polynomial growth in Section \ref{ss.grouppolgr}, discuss some special properties of averages on groups of polynomial growth and a transference principle in Section \ref{ss.avgongrp}, and prove an effective Vitali covering theorem in Section \ref{ss.vitcov}. The main theorem of this paper is Theorem \ref{t.expdec}, which is proved in Section \ref{s.upcrineq}. This research was done during the author's PhD studies under the supervision of Markus Haase. I would like to thank him for his support and advice. \section{Preliminaries} \subsection{Groups of Polynomial Growth} \label{ss.grouppolgr} Let $\Gamma$ be a finitely generated group and $\{ \gamma_1,\dots,\gamma_k\}$ be a fixed generating set. Each element $\gamma \in \Gamma$ can be represented as a product $\gamma_{i_1}^{p_1} \gamma_{i_2}^{p_2} \dots \gamma_{i_l}^{p_l}$ for some indexes $i_1,i_2,\dots,i_l \in \{1,\dots,k\}$ and some integers $p_1,p_2,\dots,p_l \in \mathbb{Z}$. We define the \textbf{norm} of an element $\gamma \in \Gamma$ by \[ \| \gamma \|:=\inf\{ \sum\limits_{i=1}^l |p_i|: \gamma = \gamma_{i_1}^{p_1} \gamma_{i_2}^{p_2} \dots \gamma_{i_l}^{p_l} \}, \] where the infimum is taken over all representations of $\gamma$ as a product of the generating elements. The norm $\| \cdot \|$ on $\Gamma$, in general, does depend on the generating set.
However, it is easy to show \cite[Corollary 6.4.2]{ceccherini2010} that two different generating sets produce equivalent norms. We will always say what generating set is used in the definition of a norm, but we will omit an explicit reference to the generating set later on. For every $n \in \Rp$ let \[ \bl(n):= \{ \gamma \in \Gamma: \| \gamma \| \leq n\} \] be the closed ball of radius $n$. The norm $\| \cdot \|$ yields a right invariant metric on $\Gamma$ defined by \[ d_R(x,y):=\| x y^{-1}\| \quad (x,y \in \Gamma), \] and a left invariant metric on $\Gamma$ defined by \[ d_L(x,y):=\| x^{-1} y\| \quad (x,y \in \Gamma), \] which we call the \textbf{word metrics}. The right invariance of $d_R$ means that the right multiplication \[ R_g: \Gamma \to \Gamma, \quad x \mapsto x g \quad ( x \in \Gamma) \] is an isometry for every $g \in \Gamma$ with respect to $d_R$. Similarly, the left invariance of $d_L$ means that the left multiplications are isometries with respect to $d_L$. We let $d:=d_R$ and view $\Gamma$ as a metric space with the metric $d$. For $x\in \Gamma$, $r \in \Rp$ let \[ \bl(x,r):=\{ y \in \Gamma: d(x,y) \leq r\} \] be the closed ball of radius $r$ with center $x$. Using the right invariance of the metric $d$, it is easy to see that \[ \cntm{\bl(x,r)} = \cntm{\bl(y,r)} \quad \text{ for all } x,y \in \Gamma. \] Let $\mathrm{e} \in \Gamma$ be the neutral element. It is clear that \[ \bl(n) = \{ \gamma: d_R(\mathrm{e},\gamma) \leq n\} = \{ \gamma: d_L(\mathrm{e},\gamma) \leq n\}, \] i.e., the ball $\bl(n)$ is precisely the ball $\bl(\mathrm{e}, n)$ with respect to the left and the right word metric. It is important to understand how fast the balls $\bl(n)$ in the group $\Gamma$ grow as $n \to \infty$. The \textbf{growth function} $\gamma: \mathbb{N} \to \mathbb{N}$ is defined by \[ \gamma(n):=\cntm{\bl(n)} \quad (n \in \mathbb{N}). \] We say that the group $\Gamma$ is of \textbf{polynomial growth} if there are constants $C,d>0$ such that for all $n \geq 1$ we have \[ \gamma(n) \leq C(n^d+1). \] \begin{exa} \label{ex.zdex} Consider the group $\mathbb{Z}^d$ for $d \in \mathbb{N}$ and let $\gamma_1,\dots,\gamma_d \in \mathbb{Z}^d$ be the standard basis elements of $\mathbb{Z}^d$. That is, $\gamma_i$ is defined by \[ \gamma_i(j):=\delta_i^j \quad (j=1,\dots, d) \] for all $i=1,\dots,d$. We consider the generating set given by elements $\sum\limits_{k \in I} (-1)^{\varepsilon_k}\gamma_k$ for all subsets $I \subseteq [1,d]$ and all functions $\varepsilon_{\cdot} \in \{ 0,1\}^I$. Then it is easy to see by induction on dimension that $\bl(n) = [-n,\dots,n]^d$, hence \[ \cntm{\bl(n)} = (2n+1)^d \quad \text{ for all } n \in \mathbb{N} \] with respect to this generating set, i.e., $\mathbb{Z}^d$ is a group of polynomial growth. \end{exa} Let $d \in \Zp$. We say that the group $\Gamma$ has \textbf{polynomial growth of degree $d$} if there is a constant $C>0$ such that \[ \frac 1 C n^d \leq \gamma(n) \leq C n^d \quad \text{ for all } n \in \mathbb{N}. \] It was shown in \cite{bass1972} that, if $\Gamma$ is a finitely generated nilpotent group, then $\Gamma$ has polynomial growth of some degree $d \in \Zp$. Furthermore, one can show \cite[Proposition 6.6.6]{ceccherini2010} that if $\Gamma$ is a group and $\Gamma' \leq \Gamma$ is a finite index, finitely generated nilpotent subgroup, having polynomial growth of degree $d \in \Zp$, then the group $\Gamma$ has polynomial growth of degree $d$ as well. A surprising fact is that the converse is true as well. 
Namely, it was proved in \cite{gromov1981} that, if $\Gamma$ is a group of polynomial growth, then there is a finite index, finitely generated nilpotent subgroup $\Gamma' \leq \Gamma$. It follows that if $\Gamma$ is a group of polynomial growth with the growth function $\gamma$, then there is a constant $C>0$ and an integer $d\in \Zp$, called the \textbf{degree of polynomial growth}, such that \[ \frac 1 C n^d \leq \gamma(n) \leq C n^d \quad \text{ for all } n \in \mathbb{N}. \] An even stronger result was obtained in \cite{pansu1983}, where it is shown that, if $\Gamma$ is a group of polynomial growth of degree $d \in \Zp$, then the limit \begin{equation} \label{eq.pansu} c_{\Gamma}:=\lim\limits_{n \to \infty} \frac{\gamma(n)}{n^d} \end{equation} exists. As a consequence, one can show that groups of polynomial growth are amenable. \begin{prop} Let $\Gamma$ be a group of polynomial growth. Then $(\bl(n))_{n \geq 1}$ is a F{\o}lner sequence in $\Gamma$. \end{prop} \begin{proof} We want to show that for every $g \in \Gamma$ \[ \lim\limits_{n \to \infty} \frac{\cntm{g \bl(n) \sdif \bl(n)}}{\cntm{\bl(n)}} = 0. \] Let $m:=d(g,e) \in \Zp$. Then $g \bl(n) \subseteq \bl(n+m)$, hence \[ \frac{\cntm{g \bl(n) \sdif \bl(n)}}{\cntm{\bl(n)}} \leq \frac{\cntm{\bl(n+m)} - \cntm{ \bl(n)}} {\cntm{\bl(n)}} \to 0, \] where we use the existence of the limit in Equation \eqref{eq.pansu}. \end{proof} It will be useful later to have a special notion for the points which are `close enough' to the boundary of a ball in $\Gamma$. Let $W:=\bl(y,s)$ be some ball in $\Gamma$. For a given $r \in \Rps$ the \textbf{$r$-interior} of $W$ is defined as \[ \intr{r}(W):=\bl(y,(1-5/r)s). \] The \textbf{$r$-boundary} of $W$ is defined as \[ \bdr{r}(W):=W \setminus \intr{r}(W). \] If a set $\mathcal{C}$ is a disjoint collection of balls in $\Gamma$, we define the $r$-interior and the $r$-boundary of $\mathcal{C}$ as \[ \intr{r}(\mathcal{C}):=\bigsqcup\limits_{W \in \mathcal{C}} \intr{r}(W) \] and \[ \bdr{r}(\mathcal{C}):=\bigsqcup\limits_{W \in \mathcal{C}} \bdr{r}(W) \] respectively. It will be essential to know that the $r$-boundary becomes small (respectively, the $r$-interior becomes large) for large enough balls and large enough $r$. More precisely, we state the following lemma, whose proof follows from the result of Pansu (see Equation \eqref{eq.pansu}). \begin{lemma} \label{l.smallbdr} Let $\Gamma$ be a group of polynomial growth and $\delta \in (0,1)$ be some constant. Then there exist constants $n_0, r_0 \in \mathbb{N}$, depending only on $\Gamma$ and $\delta$, such that the following holds. If $\mathcal{C}$ is a finite collection of disjoint balls with radii greater than $n_0$, then for all $r > r_0$ \[ \cntm{\intr{r} ( \mathcal{C} )} > (1-\delta) \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W} \] and \[ \cntm{\bdr{r} ( \mathcal{C} )} < \delta \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W}. \] \end{lemma} \subsection{Averages on Groups of Polynomial Growth and a Transference Principle} \label{ss.avgongrp} We collect some useful results about averages on groups of polynomial growth in this subsection. At the end of the subsection we will discuss a transference principle, which will become essential later in Section \ref{s.upcrineq}. We start with a preliminary lemma, whose proof is straightforward. \begin{lemma} \label{l.ballgrowth} Let $f$ be a nonnegative function on a group of polynomial growth $\Gamma$. 
Let $\{ B_1, \dots, B_k\}$ be some disjoint balls in $\Gamma$ such that \[ \avg{g \in B_i} f(g) > \beta \quad \text{for each } i=1,\dots,k. \] Let $B$ be a ball in $\Gamma$, containing all $B_i$'s, such that \begin{equation*} \avg{g \in B} f(g) < \alpha. \end{equation*} Then \[ \frac{\sum\limits_{i=1}^k \cntm{B_i}}{\cntm{B}} < \frac{\alpha}{\beta}. \] \end{lemma} \noindent We refine this result as follows. \begin{lemma} \label{l.uskip} Let $\varepsilon \in (0,1)$. There is $n_0 \in \mathbb{N}$, depending only on the group of polynomial growth $\Gamma$ and $\varepsilon$, such that the following assertion holds. Given a nonnegative function $f$ on $\Gamma$, the condition \begin{equation} \label{eq.fluctcond} \avg{g \in \bl(n)} f( g ) > \beta \quad \text{and} \quad \avg{g \in \bl(m)} f( g ) < \alpha \end{equation} for some $n_0 \leq n < m$ and an interval $(\alpha,\beta) \subset \Rps$ implies that \[ \frac m n > (1-\varepsilon) \left(\frac {\beta}{\alpha} \right)^{1/d}. \] \end{lemma} \begin{proof} First of all, note that condition \eqref{eq.fluctcond} implies that \[ \frac{\cntm{\bl(m)}}{\cntm{\bl(n)}} > \frac{\beta}{\alpha} \] for \emph{all} indexes $n<m$ (see the previous lemma). Using the result of Pansu (Equation \eqref{eq.pansu}), we deduce that there is $n_0$ depending only on $\Gamma$ and $\varepsilon$ such that for all $n_0 \leq n<m$ we have \[ \frac{m^d}{n^d}>(1-\varepsilon)^d \frac{\cntm{\bl(m)}}{\cntm{\bl(n)}}. \] This implies that \[ \frac m n > (1-\varepsilon) \left( \frac{\beta}{\alpha}\right)^{1/d}, \] and the proof of the lemma is complete. \end{proof} \noindent Lemma \ref{l.uskip} has the following straightforward corollary. \begin{cor} \label{c.skip} For a constant $\varepsilon \in (0,1)$ and a group of polynomial growth $\Gamma$ let $n_0:=n_0(\varepsilon)$ be given by Lemma \ref{l.uskip}. Given a measure-preserving action of $\Gamma$ on a probability space $\prX$, a nonnegative function $f$ on $X$ and $x \in X$, the condition that the sequence \[ \left( \avg{g \in \bl(i)} f(g \cdot x) \right)_{i=n}^m \] fluctuates at least $k$ times across an interval $(\alpha, \beta) \subset \Rps$ with $n>n_0$ implies that \[ \frac m n > (1-\varepsilon)^{\lceil \frac k 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac k 2 \rceil} \cdot \frac 1 d} \] \end{cor} Finally, we will need an adapted version of the `easy direction' in Calder\'{o}n's transference principle for groups of polynomial growth. Suppose that a group $\Gamma$ of polynomial growth acts on a probability space $\prX=(X,\mathcal{B},\mu)$ by measure-preserving transformations and that we want to estimate the size of a measurable set $E$. Fix an integer $m \in \Zp$. For an integer $L \in \mathbb{N}$ and a point $x \in X$ we define the set \[ B_{L,m,x}:= \{ g: \ g \cdot x \in E \text{ and } \| g \| \leq L-m \} \subseteq \bl(L). \] The lemma below tells us that each universal upper bound on the density of $B_{L,m,x}$ in $\bl(L)$ bounds the measure of $E$ from above as well. \begin{lemma}[Transference principle] \label{l.caldtrans} Suppose that for a given constant $t \in \Rp$ the following holds: there is some $L_0 \in \mathbb{N}$ such that for all $L \geq L_0$ and for $\mu$-almost all $x \in X$ we have \[ \frac 1 {\cntm{\bl(L)}} \cntm{B_{L,m,x}} \leq t. \] Then \[ \mu(E) \leq t. \] \end{lemma} \begin{proof} Indeed, since $\Gamma$ acts on $\prX$ by measure-preserving transformations, we have \[ \sum\limits_{g \in \bl(L)} \int\limits_{\prX} \indi{E}(g \cdot x) d \mu = \cntm{\bl(L)} \mu(E). 
\] Then \begin{align*} \mu(E) &= \int\limits_{\prX} \left( \frac 1 {\cntm{\bl(L)}} \sum\limits_{g \in \bl(L)} \indi{E} (g \cdot x) \right) d \mu \\ &\leq \int\limits_{\prX} \left( \frac{\cntm{B_{L,m,x}} + \cntm{\bl(L) \setminus \bl(L-m)}}{\cntm{\bl(L)}} \right) d \mu, \end{align*} and the proof is complete since $L$ can be arbitrarily large and $\Gamma$ is a group of polynomial growth. \end{proof} \subsection{Vitali Covering Lemma} \label{ss.vitcov} In this section we discuss the generalization of the effective Vitali covering lemma from \cite{kw1999} to groups of polynomial growth. We fix some notation first. Given a number $t \in \Rp$ and a ball $B=\bl(x,r) \subseteq \prX$ in a metric space $\prX$, we denote by $t \cdot B$ the $t$-enlargement of $B$, i.e., the ball $\bl(x,rt)$. We state the basic finitary Vitali covering lemma first, whose proof is well-known. \begin{lemma} \label{l.fvc} Let $\mathcal{B}:=\{ B_1,\dots,B_n \}$ be a finite collection of balls in a metric space $\prX$. Then there is a finite subset $\{ B_{j_1},\dots, B_{j_m}\} \subseteq \mathcal{B}$ consisting of pairwise disjoint balls such that \[ \bigcup\limits_{i=1}^n B_i \subseteq \bigcup\limits_{l=1}^m 3 \cdot B_{j_l}. \] \end{lemma} An infinite version of this lemma is used, for example, in the proof of the standard Vitali covering theorem, which can be generalized to arbitrary doubling measure spaces. However, the standard Vitali covering theorem is not sufficient for our purposes. It was shown in \cite{kw1999} that the groups $\mathbb{Z}^d$ for $d \in \mathbb{N}$, which are of course doubling measure spaces when endowed with the counting measure and the word metric, enjoy a particularly useful `effective' version of the theorem. We prove a generalization of this result to groups of polynomial growth below. \begin{thm}[Effective Vitali covering] \label{t.evc} Let $\Gamma$ be a group of polynomial growth of degree $d$. Let $C \geq 1$ be a constant such that \[ \frac 1 C m^d \leq \gamma(m) \leq C m^d \quad \text{ for all } m \in \mathbb{N} \] and let $c:=3^d C^2$. Let $R,n,r>2$ be some fixed natural numbers and $X \subseteq \bl(R)$ be a subset of the ball $\bl(R) \subset \Gamma$. Suppose that to each $p \in X$ there are associated balls $A_1(p),\dots,A_n(p)$ such that the following assertions hold: \begin{aufzi} \item $p \in A_i(p) \subseteq \bl(R)$ for $i=1,\dots,n$; \item For all $i=1,\dots,n-1$ the $r$-enlargement of $A_i(p)$ is contained in $A_{i+1}(p)$. \end{aufzi} Let \[ S_i:=\bigcup\limits_{p \in X} A_i(p) \quad (i=1,\dots,n). \] There is a disjoint subcollection $\mathcal{C}$ of $\{ A_i(p) \}_{p \in X, i=1,\dots,n}$ such that the following conclusions hold: \begin{aufzi} \item The union of $\left( 1+ \frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ together with the set $S_n \setminus S_1$ covers all but at most $\left( \frac {c-1} c \right)^n$ of $S_n$; \item The measure of the union of $\left( 1+ \frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ is at least $(1 - \left( \frac {c-1} c \right)^n)$ times the measure of $S_1$. \end{aufzi} \end{thm} \begin{rem} \label{r.maxball} Prior to proceeding to the proof of the theorem we make the following remarks. Firstly, we do not require the balls $A_i(p)$ from the theorem to be centered around $p$. Secondly, the balls of the form $A_i(p)$ for $i=1,\dots,n$ and $p \in X$ will be called \textbf{$i$-th level balls}. An $i$-th level ball $A_i(p)$ is called \textbf{maximal} if it is not contained in any other $i$-th level ball.
It is clear that each $S_i$ is the union of maximal $i$-level balls as well. It will follow from the proof below that the balls in $\mathcal{C}$ can be chosen to be maximal. \end{rem} \begin{proof} To simplify the notation, let \[ s:=1+\frac 4 {r-2} \] be the scaling factor that is used in the theorem. The main idea of the proof is to cover a positive fraction of $S_n$ by a disjoint union of $n$-level balls via Lemma \ref{l.fvc}, then cover a positive fraction of what remains in $S_{n-1}$ by a disjoint union of $(n-1)$-level balls and so on. Thus we begin by covering a fraction of $S_n$ by $n$-level balls. Let $\mathcal{C}_n \subseteq \{ A_n(p) \}_{p \in X}$ be the collection of disjoint balls, obtained by applying Lemma \ref{l.fvc} to the collection of all $n$-th level \emph{maximal} balls. For every ball $B=\bl(p,m) \in \mathcal{C}_n$ we have \[ \cntm{3 \cdot B} \leq C (3m)^d \leq C^2 3^d \cntm{B}, \] hence \[ \cntm{S_n} \leq \cntm{\bigcup\limits_{B \in \mathcal{C}_n} 3 \cdot B} \leq \sum\limits_{B \in \mathcal{C}_n} c \cntm{B} \] and so \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_n} B} \geq \frac 1 c \cntm{S_n}. \] Let $U_n:=\bigsqcup\limits_{B \in \mathcal{C}_n} B$. The computation above shows that \begin{equation} \label{eq.stepa} U_n \text{ covers at least } \frac 1 c \text{-fraction of } S_n \end{equation} and \begin{equation} \label{eq.stepb} \cntm{S_1} - \cntm{U_n} \leq \cntm{S_1} - \frac 1 c \cntm{S_1} = \frac{c-1} c \cntm{S_1}. \end{equation} We proceed by restricting to $(n-1)$-level balls. Assume for the moment that the following claim is true. \begin{claimn} If a ball $A_{n-1}(p)$ has a nonempty intersection with $U_n$, then $A_{n-1}(p)$ is contained in the $s$-enlargement of the ball in $\mathcal{C}_n$ that it intersects. \end{claimn} \noindent Let \begin{align*} \widetilde \mathcal{C}_{n-1}:=\{ A_{n-1}(p): \ &A_{n-1}(p) \text{ is a maximal } (n-1)-\text{level ball} \\ &\text{ such that } A_{n-1}(p) \cap U_n = \varnothing\} \end{align*} be the collection of all maximal $(n-1)$-level balls disjoint from $U_n$ and let $\widetilde U_{n-1}$ be its union. We apply Lemma \ref{l.fvc} once again to obtain a collection $\mathcal{C}_{n-1} \subseteq \widetilde \mathcal{C}_{n-1}$ of pairwise disjoint maximal balls such that \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_{n-1}} B} \geq \frac 1 c \cntm{\widetilde U_{n-1}}. \] Let $U_{n-1}:=\bigsqcup\limits_{B \in \mathcal{C}_{n-1}} B$. In order to show that \begin{equation} \label{eq.s1est} \cntm{S_1} - \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}} \leq \left( \frac{c-1} c \right)^2 \cntm{S_1} \end{equation} it suffices to prove that \begin{equation} \label{eq.snm1} \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}} \geq \cntm{U_n} + \frac 1 c \cntm{S_{n-1} \setminus U_n}, \end{equation} due to the obvious inequalities \begin{align*} \cntm{S_{n-1} \setminus U_n} \geq &\cntm{S_{n-1}} - \cntm{U_n} \geq \cntm{S_1} - \cntm{U_n},\\ &\cntm{U_n} \geq \frac 1 c \cntm{S_1}. \end{align*} We decompose the set $S_{n-1} \setminus U_n$ as follows \[ S_{n-1} \setminus U_n = \widetilde U_{n-1} \sqcup \left( S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})\right). \] The part $S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})$ is covered by the $(n-1)$-level balls intersecting $U_n$. Hence, if Claim 1 above is true, the set $S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})$ is covered by the $s$-enlargements of balls in $\mathcal{C}_n$. 
Next, $U_{n-1}$ covers at least a $\frac 1 c$-fraction of $\widetilde U_{n-1}$. It follows that the set $\bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}$ covers the set $U_n$ and at least a $\frac 1 c$-fraction of the set $S_{n-1} \setminus U_n$. Thus we have proved inequalities \eqref{eq.snm1} and \eqref{eq.s1est}. A similar argument shows that \begin{align} \label{eq.2ndstepa} \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup & \bigcup\limits_{B \in \mathcal{C}_{n-1}} \left(s \cdot B \right) \cup (S_n \setminus S_{n-1}) \text{ covers all but} \\ &\text{ at most } \left( 1 - \frac 1 c\right)^2 \text{ of } S_n. \nonumber \end{align} Comparing Equations \eqref{eq.2ndstepa} and \eqref{eq.s1est} to the statements $\mathrm{(a)}$ and $\mathrm{(b)}$ of the theorem, we see that the proof would be complete apart from Claim 1 if $n$ were equal to $2$. So we proceed further to $(n-2)$-level balls and use the following claim. \begin{claimn} If a ball $A_{n-2}(p)$ has a nonempty intersection with $U_n \cup U_{n-1}$, then $A_{n-2}(p)$ is contained in the $s$-enlargement of the ball in $\mathcal{C}_n \cup \mathcal{C}_{n-1}$ that it intersects. \end{claimn} We let $\widetilde{\mathcal{C}}_{n-2}$ be the collection of all maximal $(n-2)$-level balls disjoint from $U_n \cup U_{n-1}$ and let $\widetilde U_{n-2}$ be its union. We apply Lemma \ref{l.fvc} once again to obtain a collection $\mathcal{C}_{n-2} \subseteq \widetilde \mathcal{C}_{n-2}$ of pairwise disjoint balls such that \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_{n-2}} B} \geq \frac 1 c \cntm{\widetilde U_{n-2}} \] and let $U_{n-2}:=\bigsqcup\limits_{B \in \mathcal{C}_{n-2}} B$. Similar arguments show that \begin{equation*} \cntm{S_1} - \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup \bigcup\limits_{B \in \mathcal{C}_{n-1}} \left(s \cdot B \right) \cup U_{n-2}} \leq \left( \frac{c-1} c \right)^3 \cntm{S_1} \end{equation*} and that the union of $s$-enlargements of balls in $\mathcal{C}_n$, $\mathcal{C}_{n-1}$ and $\mathcal{C}_{n-2}$, together with $S_n \setminus S_{n-2}$, covers all but at most $\left( 1 - \frac 1 c\right)^3$ of $S_n$. It is obvious that one can continue in this way down to the $1$-st level balls, using the obvious generalization of Claim 2. This would yield a collection of maximal balls \[ \mathcal{C}:=\bigcup\limits_{i=1}^n \mathcal{C}_i \] so that the union of $s$-enlargements of balls in $\mathcal{C}$ together with $S_n \setminus S_1$ covers all but at most $\left( 1 -\frac 1 c\right)^n$ of $S_n$ and that the measure of the union of these $s$-enlargements is at least $\left( 1- \left( 1 - \frac 1 c\right)^n\right)$ times the measure of $S_1$. We conclude that the proof is complete once we prove the claims above and their generalizations. For this it suffices to prove the following statement: \begin{claimn} If $1 \leq i < j \leq n$ and $A_j(q)$ is a maximal ball, then for all $p \in X$ \[ A_i(p) \cap A_j(q) \neq \varnothing \Rightarrow A_{i}(p) \subseteq s \cdot A_j(q). \] \end{claimn} Suppose this is not the case. Let $x,y$ be the centers and $r_1,r_2$ be the radii of $A_{i}(p)$ and $A_j(q)$ respectively. Recall that $s = 1+\frac 4 {r-2}$. Since the $s$-enlargement of $A_j(q)$ does not contain $A_i(p)$, it follows that $\frac {4 r_2} {r-2} \leq 2 r_1$, hence \[ r r_1 \geq 2 r_1 + 2 r_2. \] The intersection of $A_i(p)$ and $A_j(q)$ is nonempty, hence $d(x,y) \leq r_1 + r_2$. This implies that \[ r r_1 \geq d(x,y)+r_1+r_2, \] so the $r$-enlargement of the ball $A_i(p)$ contains $A_j(q)$.
Since $r \cdot A_i(p) \subseteq A_{i+1}(p)$, we conclude that the ball $A_j(q)$ is not maximal. Contradiction. \end{proof} \begin{cor} \label{c.evccor} Suppose that in addition to all the assumptions of Theorem \ref{t.evc} we have \[ \cntm{S_n} \leq (c+1) \cntm{S_1}, \] where $c$ is the constant defined in Theorem \ref{t.evc}. Then there is a disjoint subcollection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+ \frac 4 {r-2}\right)$-enlargements of balls in $\mathcal{C}$ covers at least $\left( 1-(c+1) \left( \frac{c-1}{c}\right)^n \right)$ of $S_1$. \end{cor} \begin{proof} From the proof of Theorem \ref{t.evc} it follows that one can find a disjoint collection $\mathcal{C}$ of maximal balls satisfying assertions \textrm{(a)} and \textrm{(b)} of the theorem. The statement of the corollary is an easy consequence of \textrm{(a)}. \end{proof} As the main application we will use the corollary above in the proof of Theorem \ref{t.expdec}. It will be essential to know that one can ensure that the extra $\left( 1+\frac 4 {r-2}\right)$-enlargement does not change the size of the union of the balls too much. \begin{lemma} \label{l.smallenl} Let $\Gamma$ be a group of polynomial growth and $\delta \in (0,1)$ be some constant. Then there exist integers $n_0, r_0 > 2$, depending only on $\Gamma$ and $\delta$, such that the following assertion holds. If $\mathcal{C}$ is a finite collection of disjoint balls with radii greater than $n_0$, then for all $r \geq r_0$ we have \[ \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W} \geq (1-\delta) \cntm{\bigcup\limits_{W\in \mathcal{C}} \left( 1+\frac 4 {r-2} \right) \cdot W}. \] \end{lemma} \noindent The proof of the lemma follows from the result of Pansu (see Equation \eqref{eq.pansu}). \section{Fluctuations of Averages of Nonnegative Functions} \label{s.upcrineq} The purpose of this section is to prove the following theorem. \begin{thm} \label{t.expdec} Let $\Gamma$ be a group of polynomial growth of degree $d \in \Zp$ and let $(\alpha, \beta) \subset \Rps$ be some nonempty interval. Then there are some constants $c_1,c_2 \in \Rps$ with $c_2<1$, which depend only on $\Gamma$, $\alpha$ and $\beta$, such that the following assertion holds. For any probability space $\prX=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\prX$ and any measurable $f \geq 0$ on $X$ we have \[ \mu(\{ x: (\avg{g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq 1$. \end{thm} To simplify the presentation we use the adjective \textbf{universal} to talk about constants determined by $\Gamma$ and $(\alpha,\beta)$. When a constant $c$ is determined by $\Gamma, (\alpha, \beta)$ and a parameter $\delta$, we say that $c$ is \textbf{$\delta$-universal}. Prior to proceeding to the proof of Theorem \ref{t.expdec}, we make some straightforward observations. \begin{rem} \label{r.fluctbd} It is easy to see how one can generalize the theorem above for arbitrary functions bounded from below. If a measurable function $f$ on $X$ is greater than $-m$ for some constant $m \in \Rp$, then \[ \mu(\{ x: (\avg{ g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < \widetilde{c}_1 \widetilde{c}_2^N, \] where the constants $\widetilde{c}_1, \widetilde{c}_2$ are given by applying Theorem \ref{t.expdec} to the function $f+m$ and the interval $(\alpha+m,\beta+m)$. \end{rem} \begin{rem} \label{r.abclose} Recall that $\gamma: \Zp \to \Zp$ is the growth function of the group $\Gamma$.
Let $C \geq 1$ be a constant such that \[ \frac 1 C r^d \leq \gamma(r) \leq C r^d \quad \text{ for all } r\in \mathbb{N} \] and let $c:=3^d C^2$. Then it suffices to prove Theorem \ref{t.expdec} only for intervals $(\alpha,\beta)$ such that \[ \frac{\beta}{\alpha} \leq \frac{c+1}{c}. \] If the interval does not satisfy this condition, we replace it with a sufficiently small subinterval and apply Theorem \ref{t.evc}. The importance of this observation will be apparent later. \end{rem} \begin{rem} \label{r.largen} Instead of proving the original assertion of Theorem \ref{t.expdec}, we will prove the following weaker assertion, which is clearly sufficient to deduce Theorem \ref{t.expdec}. \emph{There is a universal integer $\widetilde N_0 \in \mathbb{N}$ such that for any probability space $\prX=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\prX$ and any measurable $f \geq 0$ on $\prX$ we have \[ \mu(\{ x: (\avg{g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq \widetilde N_0$. }\end{rem} The upcrossing inequalities given by Theorem \ref{t.expdec} and Remark \ref{r.fluctbd} allow for a short proof of the pointwise ergodic theorem on $\Ell{\infty}$ for actions of groups of polynomial growth. \begin{thm} \label{t.polergthm} Let $\Gamma$ be a group of polynomial growth acting on a probability space $\prX=(X, \mathcal{B}, \mu)$ by measure-preserving transformations. Then for every $f \in \Ell{\infty}(\prX)$ the limit \[ \lim\limits_{n \to \infty} \avg{g \in \bl(n)} f(g \cdot x) \] exists almost everywhere. \end{thm} \begin{proof} Let \[ X_0:= \{x \in X: \lim\limits_{n \to \infty} \avg{g \in \bl(n)} f(g \cdot x) \text{ does not exist} \} \] be the set of points in $\prX$ where the ergodic averages do not converge. Let $((a_i,b_i))_{i \geq 1}$ be a sequence of nonempty intervals such that each nonempty interval $(u,v) \subset \R$ contains some interval $(a_i,b_i)$. Then it is clear that if $x \in X_0$, then there is some interval $(a_i,b_i)$ such that the sequence of averages $\left( \avg{g \in \bl(n)} f(g \cdot x) \right)_{n \geq 1}$ fluctuates over $(a_i,b_i)$ infinitely often, i.e., \[ X_0 \subseteq \{ x \in X: \left(\avg{g \in \bl(n)} f(g \cdot x)\right)_{n \geq 1} \in \bigcup\limits_{i \geq 1} \bigcap\limits_{k \geq 1} \mathcal{F}_{(a_i,b_i)}^k\}. \] By Theorem \ref{t.expdec} and Remark \ref{r.fluctbd} we have for every interval $(a_i,b_i)$ that \[ \mu(\{ x \in X: \left(\avg{g \in \bl(n)} f(g \cdot x)\right)_{n \geq 1} \in \bigcap\limits_{k \geq 1} \mathcal{F}_{(a_i,b_i)}^k\}) = 0, \] hence $\mu(X_0) = 0$ and the proof is complete. \end{proof} We now begin the proof of Theorem \ref{t.expdec}, namely we will prove the assertion in Remark \ref{r.largen}. Assume from now on that the group $\Gamma$ of polynomial growth of degree $d \in \Zp$ and the interval $(\alpha, \beta) \subset \Rps$ are \emph{fixed}. Given a measure-preserving action of $\Gamma$ on a probability space $\prX =(X,\mathcal{B},\mu)$, let \[ E_N:=\{ x: \left( \avg{g \in \bl(k)} f(g \cdot x) \right)_{k \geq 1} \in \mathcal{F}_{(\alpha, \beta)}^N\} \] be the set of all points $x \in X$ where the ergodic averages fluctuate at least $N \geq \widetilde N_0$ times across the interval $(\alpha,\beta)$. Here $\widetilde N_0$ is a universal constant, which will be determined later.
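In numerical experiments it can be convenient to test membership in $\mathcal{F}_{(\alpha,\beta)}^{N}$ directly. The following sketch counts the fluctuations of a finite sequence by a greedy scan; it merely illustrates the definition from the introduction and plays no role in the proof.
\begin{verbatim}
def fluctuation_count(seq, alpha, beta):
    """Greedy count of fluctuations of seq across (alpha, beta).

    Returns the largest n such that there are indices i_0 < ... < i_n
    with seq[i_j] > beta for even j and seq[i_j] < alpha for odd j
    (and 0 if no such indices exist).
    """
    matched = 0  # number of indices i_0, ..., i_{matched-1} found so far
    for value in seq:
        need_high = (matched % 2 == 0)  # even positions must exceed beta
        if (need_high and value > beta) or (not need_high and value < alpha):
            matched += 1
    return max(matched - 1, 0)

# Example: this sequence fluctuates twice across (1, 2).
print(fluctuation_count([3.0, 0.5, 2.5, 1.5], alpha=1.0, beta=2.0))  # -> 2
\end{verbatim}
A finite sequence of averages belongs to $\mathcal{F}_{(\alpha,\beta)}^{N}$ exactly when the value returned by this count is at least $N$.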
For $m \geq 1$ define, furthermore, the set \[ E_{N,m}:=\{ x: \left( \avg{g \in \bl(k)} f(g \cdot x) \right)_{k=1}^m \in \mathcal{F}_{(\alpha, \beta)}^N\} \] of all points such that the finite sequence $\left(\avg{\bl(k)} f(g \cdot x) \right)_{k=1}^m$ fluctuates at least $N$ times across $(\alpha,\beta)$. Then, clearly, $(E_{N,m})_{m \geq 1}$ is a monotone increasing sequence of sets and \[ E_N = \bigcup\limits_{m \geq N} E_{N,m}. \] We will complete the proof by giving a universal estimate for $\mu(E_{N,m})$ for all $m \geq N$. For that we use the transference principle (Lemma \ref{l.caldtrans}), i.e., for an integer $L > m$ and a point $x \in X$ we let \[ B_{L,m,x}:=\{g: \ g \cdot x \in E_{N,m} \text{ and } \| g \| \leq L-m \}. \] The goal is to show that the density of the set \[ B_0:=B_{L,m,x} \subset \bl(L) \] can be estimated by $c_1 c_2^N$ for some universal constants $c_1,c_2$. The main idea is as follows. For every point $z \in B_0$ the sequence of averages \[ k \mapsto \avg{g \in \bl(k)} f( (gz) \cdot x), \quad k = 1,\dots,m \] fluctuates at least $N$ times. Since the word metric $d=d_R$ on $\Gamma$ is right-invariant, the set $\bl(k)z$ is in fact a ball of radius $k$ centered at $z$ for each $k=1,\dots,m$. Given a parameter $\delta \in (0, 1-\sqrt{{\alpha}/{\beta}})$, we will pick some of these balls and apply effective Vitali covering theorem (Theorem \ref{t.evc}) multiple times to replace $B_0$ by a sequence \[ B_1,B_2,\dots, B_{\lfloor (N-N_0)/T \rfloor} \] of subsets of $\bl(L)$ for some $\delta$-universal integers $T, N_0 \in \mathbb{N}$ which satisfies the assumption \begin{equation} \label{eq.oddstep} B_{2i+1} \text{ covers at least } \left( 1-\delta \right)-\text{fraction of } B_{2i} \quad \text{ for all indices } i \geq 0 \end{equation} at `odd' steps and the assumption \begin{equation} \label{eq.evenstep} \cntm{B_{2i}} \geq \frac{\beta}{\alpha}(1-\delta) \cntm{B_{2i-1}} \quad \text{ for all indexes } i \geq 1 \end{equation} at `even' steps. Each $B_i$ is, furthermore, a union \[ \bigsqcup\limits_{B \in \mathcal{C}_i} B \] of some family $\mathcal{C}_i$ of disjoint balls with centers in $B_0$. If such a sequence of sets $B_1,\dots,B_{\lfloor (N-N_0)/T \rfloor}$ exists, then \begin{align*} \cntm{\bl(L)} \geq \cntm{B_{\lfloor (N-N_0)/T \rfloor}} \geq \left( \frac{\beta}{\alpha}(1-\delta)^2 \right)^{\lfloor \frac {N-N_0}{2 T} \rfloor } \cntm{B_0}, \end{align*} which gives the required exponential bound on the density of $B_0$ with \[ c_2:=\left( \frac{\alpha}{\beta}(1-\delta)^{-2} \right)^{ 1 / 2T} \] and a suitable $\delta$-universal $c_1$. To ensure that conditions \eqref{eq.oddstep} and \eqref{eq.evenstep} hold, one has to pick sufficiently large $\delta$-universal parameters $r$ and $n$ for the effective Vitali covering theorem. We make it precise at the end of the proof, for now we assume that $r$, $n$ are `large enough'. In order to force the sufficient growth rate of the balls (condition \textrm{(b)} of Theorem \ref{t.evc}), we employ the following argument. Let $K>0$ be the smallest integer such that \[ \left(1-\frac{1-(\alpha/\beta)^{1/d}}{2} \right)^{\lceil \frac K 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac K 2 \rceil} \cdot \frac 1 d} \geq r. 
\] Then, applying Corollary \ref{c.skip}, we obtain a universal integer $n_0 \in \mathbb{N}$ such that if a sequence \[ (\avg{g \in B(i)} f((gz) \cdot x))_{i=n}^m \quad \text{ for some } n>n_0, z \in B_0 \] fluctuates at least $K$ times across the interval $(\alpha,\beta)$, then \begin{equation} \label{eq.evccondb} \frac m n > \left(1-\frac{1-(\alpha/\beta)^{1/d}}{2}\right)^{\lceil \frac K 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac K 2 \rceil} \cdot \frac 1 d} \geq r. \end{equation} Let $n$ be large enough for use in effective Vitali covering theorem. We define $T:=2nK$ and let $N_0 \geq n_0$ be sufficiently large (this will be made precise later). The first $N_0$ fluctuations are skipped to ensure that the balls have large enough radius, and the rest are divided into $\lfloor (N-N_0) / T \rfloor$ groups of $T$ consecutive fluctuations. The $i$-th group of consecutive fluctuations is used to construct the set $B_i$ for $i=1,\dots,\lfloor (N-N_0)/T \rfloor$ as follows. We distinguish between the `odd' and the `even' steps. \noindent \textbf{Odd step:} First, let us describe the procedure for odd $i$'s. For each point $z \in B_{i-1}$ we do the following. By induction we assume that $z \in B_{i-1}$ belongs to some unique ball $\bl(u,s)$ from $(i-1)$-th step with $u \in B_0$. If $i=1$, then $z \in B_0$. Let $A_1(z)$ be the $(K+1)$-th ball $\bl(u,s_1)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_1(z)} f(g \cdot x) > \beta, \] $A_2(z)$ be the $(2K+1)$-th ball $\bl(u,s_2)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_2(z)} f(g \cdot x) > \beta \] and so on up to $A_n(z)$. It is clear that the $r$-enlargement of $A_j(z)$ is contained in $A_{j+1}(z)$ for all indexes $j<n$ and that the balls defined in this manner are contained in $\bl(L)$. Thus the assumptions of Theorem \ref{t.evc} are satisfied. There are two further possibilities: either this collection satisfies the additional assumption in Corollary \ref{c.evccor}, i.e., \begin{equation} \label{eq.corcond} \cntm{S_n} \leq (c+1) \cntm{S_1} \end{equation} or not. If \eqref{eq.corcond} holds, then by the virtue of Corollary \ref{c.evccor} we obtain a disjoint collection $\mathcal{C}$ of maximal balls such that the measure of the union of $\left( 1+\frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ covers at least $\left( 1 - (c+1) \left(\frac{c-1}{c} \right)^n \right)$ of $S_1$. We let \[ B_{i}:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and $\mathcal{C}_i:=\mathcal{C}$. Condition \eqref{eq.oddstep} is satisfied if $r$ and $n$ are large enough, and we proceed to the following `even' step. If, on the contrary, \[ \cntm{S_n} > (c+1) \cntm{S_1}, \] then we apply the standard Vitali covering lemma to the collection of maximal $n$-th level balls and obtain a disjoint subcollection $\mathcal{C}$ such that \begin{equation} \label{eq.10cincr} \cntm{\bigsqcup\limits_{B \in \mathcal{C}} B} \geq \frac{1}{c}\cntm{S_n} > \frac{c+1}{c}\cntm{S_1} \end{equation} We assume without loss of generality that $\frac{\beta}{\alpha} \leq \frac{c+1}{c}$ (see Remark \ref{r.abclose}). We let \begin{align*} B_i&:=B_{i-1}, \\ B_{i+1}&:=\bigsqcup\limits_{B \in \mathcal{C}} B \end{align*} and \begin{align*} \mathcal{C}_i&:=\mathcal{C}_{i-1}, \\ \mathcal{C}_{i+1}&:=\mathcal{C}. \end{align*} The conditions \eqref{eq.oddstep}, \eqref{eq.evenstep} are satisfied and we proceed to the next `odd' step. \noindent \textbf{Even step:} We now describe the procedure for even $i$'s. 
For each point $z \in B_{i-1}$ we do the following. By induction we assume that $z \in B_{i-1}$ belongs to some unique ball $\bl(u,s)$ from $(i-1)$-th step with $u \in B_0$. Let $A_1(z)$ be the $(K+1)$-th ball $\bl(u,s_1)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_1(z)} f(g \cdot x) < \alpha, \] $A_2(z)$ be the $(2K+1)$-th ball $\bl(u,s_2)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_2(z)} f(g \cdot x) < \alpha \] and so on up to $A_n(z)$. It is clear that the $r$-enlargement of $A_j(z)$ is contained in $A_{j+1}(z)$ for all indexes $j<n$ and that the balls defined in this manner are contained in $\bl(L)$. Thus the assumptions of Theorem \ref{t.evc} are satisfied. There are two further possibilities: either this collection satisfies the additional assumption in Corollary \ref{c.evccor}, i.e., \begin{equation} \label{eq.corcond1} \cntm{S_n} \leq (c+1) \cntm{S_1} \end{equation} or not. If \[ \cntm{S_n} > (c+1) \cntm{S_1}, \] then we apply the standard Vitali covering lemma to the collection of maximal $n$-th level balls and obtain a disjoint subcollection $\mathcal{C}$ such that \begin{equation} \label{eq.10cincr1} \cntm{\bigsqcup\limits_{B \in \mathcal{C}} B} \geq \frac{1}{c}\cntm{S_n} > \frac{c+1}{c}\cntm{S_1} \end{equation} We assume without loss of generality that $\frac{\beta}{\alpha} \leq \frac{c+1}{c}$ (see Remark \ref{r.abclose}). We let \[ B_i:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and proceed to the following `odd' step. If \eqref{eq.corcond1} holds, then by the virtue of Corollary \ref{c.evccor} we obtain a disjoint collection $\mathcal{C}$ of maximal balls such that the measure of the union of $\left( 1+\frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ covers at least $\left( 1 - (c+1) \left(\frac{c-1}{c} \right)^n \right)$ of $S_1$. We let \[ B_{i}:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and $\mathcal{C}_i:=\mathcal{C}$. The goal is to prove that condition \eqref{eq.evenstep} is satisfied. If the balls from $\mathcal{C}_{i-1}$ were completely contained in the balls from $\mathcal{C}_i$, the proof would be completed by applying Lemma \ref{l.ballgrowth}. This, in general, might not be the case, so we argue as follows. First, we prove the following lemma. \begin{lemma} \label{l.bdrint} If a ball $W_1$ from $\mathcal{C}_{i-1}$ intersects $\intr{r}(W_2)$ for some ball $W_2 \in \mathcal{C}_i$, then $W_1 \subseteq W_2$. \end{lemma} \begin{proof} Let $W_1 = \bl(y_1,s_1)$ and $W_2=\bl(y_2,s_2)$ for some $y_1,y_2 \in B_0$. Since $W_1$ intersects $\intr{r}(W_2)$, we have \[ d(y_1,y_2) \leq s_2(1-5/r)+s_1. \] If $W_1$ is not contained in $W_2$, then $d(y_1,y_2) > s_2-s_1$. From these inequalities it follows that \[ s_1 \geq d(y_1,y_2)-s_2(1-5/r) >s_2-s_1-s_2+\frac{5s_2}{r}, \] hence $s_2<\frac{2rs_1}{5}$. We deduce that the $r$-enlargement of $W_1$ contains $W_2$. This is a contradiction since $W_2$ is maximal and the $r$-enlargement of $W_1$ is contained in $n$-th level ball $A_n(y_1)$. \end{proof} From the lemma above it follows that the set $B_{i-1}$ can be decomposed as \begin{align*} B_{i-1} = \left( \bigsqcup\limits_{W \in \mathcal{C}_{i-1}'} W \right) \sqcup (\bdr{r}(\mathcal{C}_i) \cap B_{i-1}) \sqcup (B_{i-1} \setminus B_{i}), \end{align*} where \[ \mathcal{C}_{i-1}':=\{ W \in \mathcal{C}_{i-1}: \ W \cap \intr{r}(V) \neq \varnothing \text{ for some } V \in \mathcal{C}_i \}. 
\] The rest of the argument depends on how much of $B_{i-1}$ is contained in $\bdr{r}(\mathcal{C}_i)$, so let \[ \Delta:=\frac{\cntm{\bdr{r}(\mathcal{C}_i) \cap B_{i-1}}}{\cntm{B_{i-1}}}. \] There are two possibilities. First, suppose that $\Delta>\frac{\delta} 3$. Then $\cntm{B_{i-1}} \leq \frac{\cntm{\bdr{r}(\mathcal{C}_i)}}{\delta/3}$. Let $r$ and the radii of the balls in $\mathcal{C}_i$ be large enough (see Lemma \ref{l.smallbdr}) so that \[ \frac{\cntm{\bdr{r}(\mathcal{C}_i)}}{\cntm{B_i}} < \frac{\alpha}{\beta} \frac{\delta} 3 (1-\delta)^{-1}. \] It is then easy to see that condition \eqref{eq.evenstep} is satisfied. Suppose, on the other hand, that $\Delta \leq \frac{\delta} 3$. Then, if $n$ and $r$ are large enough so that $\cntm{B_{i-1} \setminus B_i}$ is small compared to $\cntm{B_{i-1}}$, we obtain \begin{align*} \cntm{B_{i-1}} &\leq \frac{\alpha}{\beta}\cntm{B_i}+\cntm{\bdr{r}(\mathcal{C}_i) \cap B_{i-1}}+\cntm{B_{i-1} \setminus B_i} \leq \\ &\leq \frac{\alpha}{\beta}\cntm{B_i}+\frac{\delta}{3} \cntm{B_{i-1}}+\frac{\delta} 3 \cntm{B_{i-1}}, \end{align*} which implies that \[ \cntm{B_i} \geq \frac{\beta}{\alpha}(1-\frac{2 \delta} 3) \cntm{B_{i-1}}, \] i.e., condition \eqref{eq.evenstep} is satisfied as well. We proceed to the following `odd' step. The proof of the theorem is essentially complete. To finish it we only need to say how one can choose the constants $N_0, r, n$ and $\widetilde N_0$. Recall that $\delta \in (0, 1- \left( \alpha / \beta \right)^{1/2})$ is an arbitrary parameter. First, the integer $n \in \mathbb{N}$ is chosen so that \[ (c+1) \left( 1- \frac 1 c\right) ^n \leq 1-\sqrt{1-\delta / 4}. \] Next, we choose $r$ as the maximum of \begin{aufziii} \item the integer $r_0$ given by Lemma \ref{l.smallbdr} with the parameter $\frac{\alpha}{\beta}\frac{\delta} 3 (1-\delta)^{-1}$; \item the integer $r_0$ given by Lemma \ref{l.smallenl} with the parameter $1-\sqrt{1-\delta/4}$. \end{aufziii} The integer $K>0$ is picked so that condition \eqref{eq.evccondb} is satisfied. We choose $N_0$ as the maximum of \begin{aufziii} \item the integer $n_0$ given by Lemma \ref{l.smallbdr} with the parameter $\frac{\alpha}{\beta}\frac{\delta} 3 (1-\delta)^{-1}$; \item the integer $n_0$ given by Lemma \ref{l.smallenl} with the parameter $1-\sqrt{1-\delta/4}$; \item the integer $n_0$ given by Corollary \ref{c.skip} with the parameter $\frac{1-(\alpha/\beta)^{1/d}}{2}$; \end{aufziii} Finally, we define $\widetilde N_0$ as $\widetilde N_0:=N_0+4nK+1$. A straightforward computation shows that this choice of constants satisfies all requirements. We do not assert, however, that this choice yields \emph{optimal} constants $c_1$ and $c_2$. \qed \printbibliography[] \end{document}
\section{Introduction} \label{introduction} Traffic modeling has been extensively studied since the seminal work by~\cite{wardrop1952road}: the economic and technical elements at stake justify the need for a fine understanding of the mechanisms ruling traffic flows at different scales. Many approaches with different purposes coexist today, among which dynamic micro-simulation models, generally opposed to equilibrium-based techniques. Whereas the validity of micro-based models has been widely discussed and their application often questioned, the literature is relatively poor on empirical studies assessing the stationary equilibrium assumption in the Static User Equilibrium (SUE) framework. Various more realistic developments have been documented in the literature, such as the Dynamic Stochastic User Equilibrium (DSUE) (see e.g. a description by~\cite{han2003dynamic}). An intermediate between static and stochastic frameworks is the Restricted Stochastic User Equilibrium, for which route choice sets are constrained to be realistic (\cite{rasmussen2015stochastic}). Extensions that incorporate user behavior with choice models have more recently been proposed, such as~\cite{zhang2013dynamic}, which takes into account both the influence of road pricing and of congestion on user choice with a Probit model. Relaxations of other restricting assumptions such as pure user utility maximization have also been introduced, such as the Boundedly Rational User Equilibrium described by~\cite{mahmassani1987boundedly}. In this framework, users have a range of satisfying utilities and equilibrium is achieved when all users are satisfied. It produces more complex features such as the existence of multiple equilibria, and allows accounting for specific stylized facts such as irreversible network change, as developed by~\cite{guo2011bounded}. Other models for traffic assignment, inspired by other fields, have also recently been proposed: in~\cite{puzis2013augmented}, an extended definition of betweenness centrality, linearly combining free-flow betweenness with travel-time weighted betweenness, yields a high correlation with effective traffic flows, thus acting as a traffic assignment model. It provides direct practical applications such as the optimization of the spatial distribution of traffic monitors. Despite all these developments, some studies and real-world applications still rely on the Static User Equilibrium. The Parisian region, for example, uses a static model (MODUS) for traffic management and planning purposes. \cite{leurent2014user} introduce a static model of traffic flow including parking cruising and parking lot choice: it is legitimate to ask, specifically at such small scales, whether the stationary distribution of flows is a reality. An example of empirical investigation of classical assumptions is given in~\cite{zhu2010people}, in which revealed route choices are studied. Their conclusions question ``Wardrop's first principle'', which implies that users choose among a well-known set of alternatives. In the same spirit, we investigate the possible existence of the equilibrium in practice. More precisely, the SUE assumes a stationary distribution of flows over the whole network. This assumption remains valid in the case of local stationarity, as long as the time scale of parameter evolution is considerably greater than typical travel time scales. The latter case, which is more plausible and furthermore compatible with dynamical theoretical frameworks, is tested empirically here.
The rest of the paper is organized as follows: the data collection procedure and dataset are first described; we then present an interactive application for the exploration of the dataset, aimed at giving intuitive insights into data patterns; we then present the results of various quantitative analyses that give convergent evidence for the non-stationarity of traffic flows; we finally discuss the implications of these results and possible developments. \section{Data collection} \subsection{Dataset Construction} We propose to work on the case study of the Parisian Metropolitan Region. An open dataset was constructed for highway links within the region, collecting public real-time open data for travel times (available at www.sytadin.fr). As stated by~\cite{bouteiller2013open}, the availability of open datasets for transportation is far from being the rule, and we thus contribute to opening data through the construction of our dataset. Our data collection procedure consists of the following simple steps, executed every two minutes by a \texttt{python} script: \begin{itemize} \item fetch the raw webpage giving traffic information \item parse the html code to retrieve traffic link ids and their corresponding travel times \item insert all links into a \texttt{sqlite} database with the current timestamp. \end{itemize} The automated data collection script continues to enrich the database as time passes, allowing future extensions of this work on a larger dataset and potential reuse by scientists or planners. The latest version of the dataset is available online (sqlite format) under a Creative Commons License\footnote{at \texttt{http://37.187.242.99/files/public/sytadin{\_}latest.sqlite3}}. \subsection{Data Summary} A time granularity of 2 minutes was obtained for a three-month period (February 2016 to April 2016 inclusive). Spatial granularity is on average 10km, as travel times are provided for major links. The dataset contains 101 links. The raw data we use are effective travel times, from which we can construct travel speed and relative travel speed, defined as the ratio between optimal travel time (travel time without congestion, taken as the minimal travel time over all time steps) and effective travel time. Congestion is constructed by inversion of a simple BPR function with exponent 1, i.e., we take $c_i = 1 - \frac{t_{i,min}}{t_i}$ with $t_i$ the travel time on link $i$ and $t_{i,min}$ the minimal travel time. \section{Methods and Results} \subsection{Visualization of spatio-temporal congestion patterns} As our approach is fully empirical, a good knowledge of existing patterns for traffic variables, and in particular of their spatio-temporal variations, is essential to guide any quantitative analysis. Taking inspiration from the empirical model validation literature, more precisely the Pattern-oriented Modeling techniques introduced by~\cite{grimm2005pattern}, we are interested in macroscopic patterns at given temporal and spatial scales: in the same way that, in this approach, stylized facts are extracted from a system before trying to model it, we need to interactively explore the data in space and time to find relevant patterns and associated scales. We therefore implemented an interactive web-application for data exploration using the \texttt{R} packages \texttt{shiny} and \texttt{leaflet}\footnote{source code for the application and analyses is available on the project open repository at\\ \texttt{https://github.com/JusteRaimbault/TransportationEquilibrium}}.
It allows dynamic visualization of congestion across the whole network or in a particular area when zoomed in. The application is accessible online at \texttt{http://shiny.parisgeo.cnrs.fr/transportation}. A screenshot of the interface is presented in Figure~\ref{fig:fig-1}. The main conclusion from interactive data exploration is that strong spatial and temporal heterogeneity is the rule. The most frequently recurring temporal pattern, the alternation of peak and off-peak hours, is perturbed on a non-negligible proportion of days. To a first approximation, off-peak hours may be approximated by a locally stationary distribution of flows, whereas peaks are too narrow to allow the validation of the equilibrium assumption. Spatially, we can observe that no clear pattern emerges. This means that, if the static user equilibrium is valid, the meta-parameters ruling its establishment must vary at time scales smaller than one day. We argue that the traffic system must on the contrary be far from equilibrium, especially during peak hours when the critical phase transitions at the origin of traffic jams occur. \begin{figure} \vspace{1cm} \centering \includegraphics[width=\textwidth]{gr1} \caption{Screenshot of the web-application to explore spatio-temporal traffic data for the Parisian region. It is possible to select date and time (precision of 15min over one month, reduced from the initial dataset for performance purposes). A plot summarizes congestion patterns for the current day.} \label{fig:fig-1} \end{figure} \subsection{Spatio-temporal Variability of Travel Path} Following the interactive exploration of the data, we propose to quantify the spatial variability of congestion patterns, to validate or invalidate the intuition that if an equilibrium does exist in time, it is strongly dependent on space and localized. The variability in time and space of travel-time shortest paths is a first way to investigate flow stationarity from a game-theoretic point of view. Indeed, the Static User Equilibrium is the stationary distribution of flows under which no user can improve their travel time by changing route. A strong spatial variability of shortest paths at short time scales is thus evidence of non-stationarity, since a similar user will, a short time later, take a totally different route and not contribute to the same flow as a previous user. Such a variability is indeed observed on a non-negligible number of paths on each day of the dataset. We show in Figure~\ref{fig:fig-2} an example of extreme spatial variation of the shortest path for a particular Origin-Destination (OD) pair. The systematic exploration of travel time variability across the whole dataset, together with the associated travel distance, confirms, as shown in Figure~\ref{fig:fig-3}, that the maximum over OD pairs of the absolute travel time variability often reaches high values, up to 25 minutes with a local temporal mean around 10min. The corresponding spatial variability produces detours of up to 35km.
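For readers wishing to reproduce this type of analysis, a minimal sketch is given below of how the congestion index and the travel-time shortest paths used in this subsection could be recomputed from the collected travel times. It is an illustration only: the graph construction, the record format and the OD pairs are hypothetical placeholders, and it is not the code used to produce the figures (our analysis scripts are written in \texttt{R} and available in the project repository).
\begin{verbatim}
# Minimal sketch (Python + networkx); record format and names are hypothetical.
import networkx as nx

def congestion(travel_time, free_flow_time):
    """Congestion index c_i = 1 - t_min / t, obtained by inverting a
    BPR function with exponent 1."""
    return 1.0 - free_flow_time / travel_time

def build_graph(link_records):
    """Directed graph weighted by current travel times.
    link_records: iterable of (origin_node, destination_node, travel_time)."""
    g = nx.DiGraph()
    for origin, destination, travel_time in link_records:
        g.add_edge(origin, destination, weight=travel_time)
    return g

def max_path_variability(records_t0, records_t1, od_pairs):
    """Maximum absolute change of the shortest travel time over OD pairs
    between two consecutive time steps."""
    g0, g1 = build_graph(records_t0), build_graph(records_t1)
    deltas = []
    for o, d in od_pairs:
        t0 = nx.shortest_path_length(g0, o, d, weight="weight")
        t1 = nx.shortest_path_length(g1, o, d, weight="weight")
        deltas.append(abs(t1 - t0))
    return max(deltas)
\end{verbatim}
The same routine, applied to the full node sequence (\texttt{nx.shortest\_path}) rather than to the path length, yields the spatial detours discussed above.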
\begin{figure} \centering \vspace{1.5cm} \includegraphics[width=0.47\textwidth]{gr21}\hfill \includegraphics[width=0.47\textwidth]{gr22} \caption{Spatial variability of the travel-time shortest path (shortest path trajectory in dotted blue). In an interval of only 10 minutes, between 11/02/2016 00:06 (left) and 11/02/2016 00:16 (right), the shortest path between \emph{Porte d'Auteuil} (West) and \emph{Porte de Bagnolet} (East) increases in effective distance by $\simeq 37$km (with an increase in travel time of only 6min), due to a strong disruption on the Paris ring road.} \label{fig:fig-2} \end{figure} \begin{figure}[t]\vspace*{4pt} \centering \centerline{\includegraphics[width=0.8\textwidth]{gr31}} \centerline{\includegraphics[width=0.8\textwidth]{gr32}} \caption{Maximal variability of travel time (top, in min) and of the corresponding travel distance (bottom) on a two-week sample. We plot the maximum over all OD pairs of the absolute variability between two consecutive time steps. Peak hours imply a high travel time variability of up to 25 minutes and a path length variability of up to 35km.} \label{fig:fig-3} \end{figure} \subsection{Stability of Network measures} The variability of potential trajectories observed in the previous section can be confirmed by studying the variability of network properties. In particular, network topological measures capture global patterns of a transportation network. Centrality and node connectivity measures are classical indicators for transportation network description, as recalled in~\cite{bavoux2005geographie}. The transportation literature has developed elaborate and operational network measures, such as network robustness measures to identify critical links and measure overall network resilience to disruptions (an example among many is the Network Trip Robustness index introduced in~\cite{sullivan2010identifying}). More precisely, we study the betweenness centrality of the transportation network, defined for a node as the proportion of shortest paths going through the node, i.e., by the equation \begin{equation} b_i = \frac{1}{N(N-1)}\cdot \sum_{o\neq d \in V}\mathbbm{1}_{i\in p(o\rightarrow d)} \end{equation} where $V$ is the set of network vertices of size $N$, and $p(o\rightarrow d)$ is the set of nodes on the shortest path between vertices $o$ and $d$ (the shortest path being computed with effective travel times). This index is more relevant to our purpose than other centrality measures such as closeness centrality, which does not account for potential congestion as betweenness centrality does. We show in Figure~\ref{fig:fig-4} the relative absolute variation of the maximal betweenness centrality for the same time window as for the previous empirical indicators. More precisely, we plot the value of \begin{equation} \Delta b(t) = \frac{\left|\max_i (b_i(t + \Delta t)) - \max_i (b_i(t))\right|}{\max_i (b_i(t))} \end{equation} where $\Delta t$ is the time step of the dataset (the smallest time window on which we can capture variability). This absolute relative variation has a direct meaning: a variation of 20\% (which is attained a significant number of times, as shown in Fig.~\ref{fig:fig-4}) means that, in the case of a negative variation, at least this proportion of potential travels have changed route and the local potential congestion has decreased by the same proportion. In the case of a positive variation, a single node has captured at least 20\% of travels.
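As an illustration of how this indicator can be evaluated in practice, a minimal sketch is given below. It recomputes $b_i$ and the relative variation $\Delta b(t)$ from a travel-time-weighted graph, assuming a single shortest path per OD pair and counting endpoints so as to match the indicator above; it is a hypothetical re-implementation for illustration purposes, not the code used to produce Fig.~\ref{fig:fig-4}.
\begin{verbatim}
# Illustrative sketch (Python + networkx); not the original analysis code.
import networkx as nx

def betweenness(g):
    """b_i: fraction of travel-time shortest paths (over ordered OD pairs)
    that pass through node i, endpoints included."""
    nodes = list(g.nodes)
    n = len(nodes)
    counts = {i: 0.0 for i in nodes}
    for origin in nodes:
        # single-source shortest paths weighted by effective travel time
        paths = nx.single_source_dijkstra_path(g, origin, weight="weight")
        for destination, path in paths.items():
            if origin == destination:
                continue
            for node in path:
                counts[node] += 1.0
    return {i: c / (n * (n - 1)) for i, c in counts.items()}

def max_betweenness_variation(g_t0, g_t1):
    """Relative absolute variation of the maximal betweenness centrality
    between two consecutive time steps."""
    b0 = max(betweenness(g_t0).values())
    b1 = max(betweenness(g_t1).values())
    return abs(b1 - b0) / b0
\end{verbatim}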
Under the assumption that users rationally take the shortest path (an assumption that we do not try to verify in this work, that is likely not verified as shown by~\cite{zhu2010people}, but that we use as a tool to give an idea of the concrete meaning of betweenness variability), and assuming that a majority of travels are realized, such a variation in centrality implies a similar variation in effective flows, leading to the conclusion that flows can be stationary neither in time (at least at a scale larger than $\Delta t$) nor in space. \begin{figure} \includegraphics[width=\textwidth]{gr4} \caption{Temporal stability of the maximal betweenness centrality. We plot in time the normalized derivative of the maximal betweenness centrality, which expresses its relative variation at each time step. Maximal values of up to 25\% correspond to very strong network disruptions on the concerned link, as they mean that at least this proportion of travelers assumed to take this link under previous conditions should take a totally different path.} \label{fig:fig-4} \end{figure} \subsection{Spatial heterogeneity of equilibrium} To obtain a different insight into the spatial variability of congestion patterns, we propose to use an index of spatial autocorrelation, the Moran index (defined e.g. in~\cite{tsai2005quantifying}). More generally used in spatial analysis, with diverse applications from the study of urban form to the quantification of segregation, it can be applied to any spatial variable. It allows establishing neighborhood relations and unveils the local spatial consistency of an equilibrium when applied to a localized traffic variable. At a given point in space, the local autocorrelation of a variable $c$ is computed by \begin{equation} \rho_i = \frac{1}{K}\cdot \sum_{i\neq j}{w_{ij}\cdot (c_i - \bar{c})(c_j - \bar{c})} \end{equation} where $K$ is a normalization constant equal to the sum of spatial weights times the variance of the variable, and $\bar{c}$ is the mean of the variable. In our case, we take spatial weights of the form $w_{ij} = \exp{\left(\frac{-d_{ij}}{d_0}\right)}$ with $d_0$ a typical decay distance, and compute the autocorrelation of link congestion localized at link centers. We therefore capture spatial correlations within a radius of the same order as the decay distance around point $i$. The mean over all points yields the spatial autocorrelation index $I$. Stationarity in flows should yield some temporal stability of the index. Figure~\ref{fig:fig-5} presents the temporal evolution of the spatial autocorrelation of congestion. As expected, autocorrelation decreases strongly with the distance decay parameter, both in amplitude and in temporal average. The high temporal variability implies short time scales for potential stationarity windows. When comparing with congestion (fitted to the plot scale for readability) for a 1km decay, we observe that high correlations coincide with off-peak hours, whereas peaks involve vanishing correlations. Our interpretation, combined with the observed variability of spatial patterns, is that peak hours correspond to chaotic behaviour of the system, as jams can emerge in any link: correlation thus vanishes, as the feasible phase space of a chaotic dynamical system is filled by trajectories in a uniform way, which is equivalent to apparently independent random relative speeds. \begin{figure} \includegraphics[width=\textwidth,height=0.6\textheight]{gr5} \caption{Spatial auto-correlations of relative travel speed over two weeks. We plot, for two values of the decay parameter (1 and 10km), the values of the auto-correlation index in time.
Intermediate values of the decay parameter yield a rather continuous deformation between the two curves. Points are smoothed with a 2h span to ease reading. Vertical dotted lines correspond to midnight of each day. The purple curve is the relative speed, rescaled so as to show the correspondence between auto-correlation variations and peak hours.} \label{fig:fig-5} \end{figure} \section{Discussion} \subsection{Theoretical and practical implications of empirical conclusions} We argue that our empirical findings do not imply a total discarding of the Static User Equilibrium framework, but rather unveil a need for stronger connections between the theoretical literature and empirical studies. While each newly introduced theoretical framework is generally tested on one or more case studies, there are no systematic comparisons of frameworks on large and diverse datasets and on various objectives (prediction of traffic, reproduction of stylized facts, etc.), in the way systematic reviews are the rule in therapeutic evaluation for example. This would however imply broader data and model sharing practices than the current ones. The precise knowledge of the application potentialities of a given framework may induce unexpected developments such as its integration into larger models. The example of Land-use and Transportation Interaction studies (LUTI models) is a good illustration of how the SUE can still be used for purposes larger than transportation modeling alone. \cite{kryvobokov2013comparison} describe two LUTI models, one of which includes two equilibria, for a four-step transportation model and for land-use evolution (households and firms relocation), the other being more dynamical. The conclusion is that each model has its own advantages regarding the pursued objective, and that the static model can be used for long-term policy purposes, whereas the dynamic model provides more precise information at smaller time scales. In the first case, a more sophisticated transportation module would have been complicated to include, which is an advantage of the static user equilibrium. Concerning practical applications, it seems natural that static models should not be used for traffic forecasting and management at small time scales (week or day), and efforts should be made to implement more realistic models. However, the use of models by the planning and engineering community is not necessarily directly related to academic concerns and the state of the art. For the particular case of France and mobility models, \cite{commenges2013invention} showed that engineers had gone to the point of constructing nonexistent problems and implementing corresponding models that they had imported from a totally different geographical context (planning in the United States). The use of a given framework or type of model has historical reasons that may be difficult to overcome. \subsection{Towards explanative interpretations of non-stationarity} An assumption we formulate regarding the origin of the non-stationarity of network flows, in view of the data exploration and of the quantitative analysis of the database, is that the network is, at least half of the time, highly congested and in a critical state. Off-peak hours are the largest potential time windows of spatial and temporal stationarity, but account for less than half of the time. As already interpreted through the behavior of the autocorrelation indicator, a chaotic behavior may be at the origin of such variability during congested hours.
In the same way that a supercritical fluid may condense under the smallest external perturbation, the state of a link may qualitatively change after a small incident, producing a network disruption that may propagate and even amplify. The direct effect of traffic events (reported incidents or accidents) cannot be studied without external data, and it could be interesting to enrich the database in that direction. It would allow establishing the proportion of disruptions that do appear to have a direct effect and quantifying a level of criticality of network congestion in time, or investigating more precise effects such as the consequences of an incident on the traffic of the opposite lane. \subsection{Possible developments} Further work may be planned towards a more refined assessment of temporal stability on a region of the network, i.e., the quantitative investigation of the peak-hour stationarity considerations given above. To do so, we propose to numerically compute the Liapounov stability of the dynamical system ruling traffic flows, using numerical algorithms such as the one described by~\cite{goldhirsch1987stability}. The value of the Liapounov exponents provides the time scale over which the unstable system runs out of equilibrium. Its comparison with peak duration and average travel time, across different spatial regions and scales, should provide more information on the possible validity of the local stationarity assumption. This technique has already been introduced at another scale in transportation studies, e.g. by~\cite{tordeux2016jam}, who study the stability of speed regulation models at the microscopic scale to avoid traffic jams. Other research directions may consist of testing other assumptions of the static user equilibrium (such as rational shortest path choice, which would however be difficult to test on such an aggregated dataset, implying the use of simulation models calibrated and cross-validated on the dataset to compare assumptions, without necessarily a direct clear validation or invalidation of the assumption), or of the empirical computation of parameters in stochastic or dynamic user equilibrium frameworks. \section{Conclusion} We have described an empirical study aimed at a simple but, from our point of view, necessary investigation of the existence of the static user equilibrium, more precisely of its stationarity in space and time on a metropolitan highway network. We constructed, by data collection, a traffic congestion dataset for the highway network of Greater Paris over 3 months with a two-minute temporal granularity. The interactive exploration of the dataset with a web application allowing spatio-temporal data visualization helped to guide the quantitative studies. The spatio-temporal variability of shortest paths and of network topology, in particular betweenness centrality, revealed that stationarity assumptions do not hold in general, which was confirmed by the study of the spatial autocorrelation of network congestion. We suggest that our findings highlight a general need for stronger connections between theoretical and empirical studies, as our work can dispel misunderstandings about the theoretical static user equilibrium framework and guide the choice of potential applications.
\section{A Transmission Mechanism for LTE-LAA SBS Operation}\label{sec:channelaccess} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.25]{3_bands_combined.pdf} \caption{Illustration of the proposed SBS transmission mechanism on the licensed and unlicensed bands. The two possible states upon sensing the unlicensed channel (idle and busy) are demonstrated. SBS will remain in a sensing state when it encounters a busy channel. The three states of SBS (i.e., transmission on the licensed, unlicensed and both bands) are also shown.}\label{channel_access} \vspace{-0.35cm} \end{center} \end{figure} LTE is designed for the exclusive use of the spectrum, and hence, when operating on the unlicensed band, a new channel access scheme is needed to coexist with other devices having different air interfaces. Therefore, in this section, we propose a transmission mechanism for the operation of an LTE-LAA small cell on the licensed and unlicensed bands. This mechanism builds upon the problem formulation from Section~\ref{sec:formulation} and incorporates a channel access scheme on the unlicensed channel that allows the LTE-LAA SBS to transmit on the unlicensed band without disrupting any ongoing WiFi transmissions. For our proposed mechanism, we divide the time domain into epochs of duration $T$, where in each epoch we aim at finding the optimal values of $\alpha$ and $\beta$ using the results of Section~\ref{sec:formulation}. Taking into account that LTE transmits only at the beginning of a subframe, our proposed transmission mechanism is aligned with the LTE frame structure, where $(1-\alpha) T$ and $\beta T$ are rounded to an integer multiple of an LTE subframe duration (1 msec). Moreover, we define $\delta$ as the duration of time the SBS senses the unlicensed channel before attempting to transmit. Let $\delta$ be such that $\mathrm{SIFS < \delta < DIFS}$; this guarantees that the ACK of any previous WiFi transmission is received at the sender and that the SBS gets access to the unlicensed channel before any other WiFi STA that would be sensing the channel at the same time. The proposed LTE-LAA transmission mechanism is illustrated in Figure~\ref{channel_access}, where the two possible states upon sensing the channel (idle and busy) are demonstrated. Moreover, the steps of the proposed mechanism are summarized as follows: \begin{enumerate} \item SBS calculates the values of $\alpha$ and $\beta$ before the beginning of a $T$ period based on the throughput values and number of active nodes of the previous $T$ period and using the results of Section~\ref{sec:formulation}. \item At the beginning of a $T$ period, SBS remains silent for the period $\alpha T$ on the unlicensed band and transmits for the period $\beta T$ on the licensed band. \item SBS senses the unlicensed channel for $\delta$ sec before $\alpha T$ expires in order to detect any ongoing WiFi transmissions and guarantee alignment with the LTE frame structure. \item If the channel is idle, SBS transmits for a period of $(1-\alpha)T$. \item If the channel is busy, SBS keeps listening to the channel until it detects a silent period of duration $\delta$ sec, in order to avoid disrupting any ongoing WiFi transmission. After detecting a silent period of $\delta$ sec, SBS sends a clear-to-send (CTS) with the duration of the remaining time of the $(1-\alpha)T$ period to reserve the channel for SBS transmission on the unlicensed band.
It is important to note that the maximum channel occupancy time is limited to 10 msec, after which the unlicensed channel must be released and the LBT process repeated. Therefore, for the cases where $(1-\alpha)T$ is less than 10 msec, there is a risk that the SBS will not be able to get access to the unlicensed band when the WLAN burst is larger than $(1-\alpha)T$. For such scenarios, the WLAN transmission period for the next $T$ period is shortened accordingly to maintain the average time allocated to LTE-LAA and WLAN. \end{enumerate} \section{Discussion} In this section, we briefly discuss a couple of issues that warrant detailed exploration in future work. \subsection{Multiple Channels} Although we focus on a single unlicensed channel, our traffic balancing scheme can be extended to multiple unlicensed channels, each with a different muting variable \{$\alpha_1$, ..., $\alpha_c$\}, provided that the WiFi networks occupy disjoint (non-overlapping) channels. Note that in such scenarios the computational complexity increases due to the larger number of variables, which would make it hard to obtain an online solution. An efficient extension to multiple channels is a key aspect for future work, where one could potentially combine channel selection (as studied in \cite{channel_selection_1, channel_selection_2}) with the work in this paper in a joint framework. \subsection{Hidden Terminals} LTE's use of unlicensed bands in the SDL mode gives rise to hidden terminal situations that need to be handled. In WLAN, this issue is addressed via the request-to-send/clear-to-send (RTS/CTS) messages; however, this method cannot be used for LTE-LAA since only DL transmissions are supported and hence SUEs are not able to transmit the CTS on the unlicensed spectrum. Therefore, to solve the hidden node problem, device-assisted enhancements need to be considered along with other existing mechanisms of the LTE system, such as the periodic transmission of UE CSI/interference measurements over the licensed band. On the unlicensed band, a hidden terminal can be detected if the SBS senses a good channel while the CSI report from the SUE shows a high interference value. This allows the SBS to perform scheduling changes prior to and during its operation on the unlicensed channel, i.e., exclude the victim SUE from scheduling until its channel becomes idle and schedule other SUEs meanwhile. Alternatively, the SBS may select another unlicensed channel to operate on~\cite{hidden_node}. \section{Conclusion}\label{sec:conclusion} In this paper, we have presented a formulation of the holistic LTE-LAA SBS traffic balancing across the licensed and unlicensed bands as an optimization problem that seeks to achieve a proportional fair coexistence of WiFi STAs, SUEs and MUEs. We have derived a closed-form solution for the aforementioned optimization problem and proposed a transmission mechanism for the operation of the LTE-LAA SBS on both bands. Results show that an LTE-LAA SBS aided by our solution switches between or aggregates the licensed and unlicensed bands based on the interference/traffic level and the number of active UEs in each band. It also provides better performance for WLAN when coexisting with LTE and a more efficient utilization of the radio resources compared to alternative approaches from the literature, as it allows a better tradeoff between maximizing the total network throughput and achieving fairness among all network flows.
\section{Acknowledgement} We would like to thank Paul Patras for his discussions during the early stages of this research work. \section{Evaluation}\label{sec:evaluation} In this section, we examine the behavior of our proposed holistic traffic balancing scheme in various scenarios using a combination of numerical and simulation results. We also conduct a comparative study of our holistic traffic balancing approach with respect to \cite{imp_extended} and other alternative approaches representing other proposed techniques from the literature. In simulations, for WiFi we consider the 802.11 distributed coordination function (DCF) medium access mechanism based on carrier sense multiple access with collision avoidance (CSMA/CA). We assume randomly located STAs that transmit and receive packets according to an independent Poisson process. For simplicity, we consider that all WiFi STAs use the same physical layer parameters, 64-QAM modulation with a 5/6 coding rate when using a 20 MHz channel, which provides a 65 Mbps MAC layer throughput. The simulation parameters for the 802.11 network are the same as those used in~\cite{cano}. For the LTE and LTE-LAA networks, we assume the same channel conditions for all RBs on both bands, and hence the same modulation and coding scheme (MCS), i.e., 64-QAM with a 5/6 coding rate, is applied to all RBs of the given 20 MHz channel. The maximum MAC layer throughput for LTE with the above settings is 75 Mbps. These simulation parameters are similar to the ones used in \cite{imp_extended}. We assume a Round Robin (RR) scheduler and equal transmit power for all OFDM symbols in a Transmission Time Interval (TTI), due to the fact that all RBs have the same MCS and thus an equal number of bits is allocated to each subcarrier. The maximum transmit power for MBS and SBS is 43 dBm and 23 dBm, respectively. We consider an urban area characterized by the path loss model (for outdoor and indoor locations of the base station and UEs) given in \cite{pathloss}. A constant payload size of 1500 bytes is assumed for MUEs, SUEs and WiFi STAs. Simulation results are provided as the average of 1000 runs with a 95\% confidence interval. \subsection{Behavior of $\alpha$ and $\beta$ in different scenarios} In this subsection, we study the effect of the variation of the traffic arrival rate, as well as of the number of active UEs, on the values of $\alpha$ and $\beta$ by conducting numerical and simulation studies for different practical deployment scenarios. For the numerical results, we consider three different scenarios with different numbers of MUEs, SUEs and WiFi STAs. Figure~\ref{numerical} shows the optimal values of $(1-\alpha)$ and $\beta$ as a function of the MBS to SUE interference level on the licensed band, for a fixed value of the SBS to MUE interference level (-85 dBm) and two different WLAN traffic loads ($\overline{R}_w$=0.5 and 0.9). Note that the MBS to SUE interference and the SBS to MUE interference levels are relevant during the non-ABS period only. For the simulation results (shown in Figure~\ref{simulation_1}), we consider only scenario (a) of Figure~\ref{numerical} due to space limitations. Figure~\ref{simulation_1} shows the variation of the proportion of time the SBS is transmitting on the licensed and unlicensed bands during the period $T$ as a function of the WLAN traffic arrival rate ($\mathrm{\lambda_{WLAN}}$ (packets/sec)) and for low and high MUE traffic arrival rates, i.e., $\mathrm{\lambda_{MUE}}$ = 0.5 and 2 (packets/sec), respectively.
Note that $\mathrm{\lambda_{WLAN}}$ and $\mathrm{\lambda_{MUE}}$ correspond, respectively, to $\overline{R}_w$ and to the inter-tier interference level of Figure~\ref{numerical}. Each data point in the simulation results is obtained from 1000 runs, each of length 200 msec and with $T$ set to 20 msec. We can make the following observations from Figures~\ref{numerical} and~\ref{simulation_1}. First, comparing the three considered scenarios of Figure~\ref{numerical}, we conclude that \emph{our proposed traffic balancing scheme provides per-node airtime fairness among each of the MUEs, SUEs and WiFi STAs.} For example, considering -60 dBm for the MBS to SUE interference level and $\overline{R}_w$=0.5 for the WLAN load, we observe that in scenario (c) the SBS transmits more on the unlicensed band (80\%) and less on the licensed band (20\%) compared to scenario (b), where the SBS transmits 50\% on the unlicensed band and 50\% on the licensed band. This is because the numbers of MUEs and of SUEs are each larger than the number of WiFi STAs in scenario (c), while in scenario (b) the number of WiFi STAs is larger than both the number of MUEs and the number of SUEs. \begin{figure}[t!] \centering \includegraphics[scale=0.2]{simulation_same_UEs.pdf}\\ \caption{Simulation results for the variation of the proportion of time the SBS transmits on the licensed ($\beta$) and unlicensed bands ($1-\alpha$) as a function of the WLAN traffic arrival rate ($\mathrm{\lambda_{WLAN}}$) and for low and high MUE traffic arrival rates, i.e., $\mathrm{\lambda_{MUE}}$= 0.5 and 2 (packets/sec) respectively, for a scenario with equal numbers of MUEs, SUEs and WLAN STAs.}\label{simulation_1} \end{figure} Second, \emph{our proposed scheme copes with the interference level on both bands by adapting the values of $\alpha$ and $\beta$.} This can be observed for high values of inter-tier interference in Figure~\ref{numerical} or high values of $\mathrm{\lambda_{MUE}}$ in Figure~\ref{simulation_1}. In those scenarios, the WLAN shares the unlicensed band with the SBS for a proportion of time larger than its idle period, i.e., larger than (1-$\overline{R}_w$), in order to decrease the effect of inter-tier interference on the UE throughput on the licensed band. For example, in Figure~\ref{numerical}, for scenario (a) and $\overline{R}_w$=0.9, the SBS transmits for 55\% of the time on the unlicensed band when the MBS to SUE interference level is -60 dBm, as compared to 10\% when the MBS to SUE interference level is -95 dBm. This can also be noted from Figure~\ref{simulation_1}, where (1-$\alpha$) is equal to 20\% for $\mathrm{\lambda_{MUE}}$= 0.5 (packets/sec) but increases to 55\% for $\mathrm{\lambda_{MUE}}$= 2 (packets/sec), for $\mathrm{\lambda_{WLAN}}$=1.5.
For example, in Figure~\ref{numerical}, for scenario (b), $\overline{R}_w$=0.5 and MBS to SUEs interference level of -90 dBm, $\alpha$=0.5 and $\beta$=0.75 and thus SBS transmits on both bands simultaneously for 25\% of the $T$ period. This can also be shown in Figure~\ref{simulation_1} where $\alpha$=0.6 and $\beta$=0.9 for $\mathrm{\lambda_{WLAN}}$=1 and $\mathrm{\lambda_{MUE}}$=0.5 and hence SBS transmits on both bands simultaneously for 30\% of the $T$ period. Fourth, for all the considered scenarios of Figures~\ref{numerical} and~\ref{simulation_1}, we notice that \emph{the unlicensed band is always utilized by either WLAN or SBS and hence this avoids its under-utilization.} In other words, SBS is always transmitting on the unlicensed band for at least the portion of time that it is not utilized by WLAN i.e., (1-$\alpha$) is always greater than or equal to (1-$\overline{R}_w$), consistent with constraint (\ref{cons_1}) in the optimization problem, irrespective of the value of inter-tier interference on the licensed band. For example, for $\overline{R}_w$=0.5 and 0.9, (1-$\alpha$) is always greater than or equal to 0.5 and 0.1 respectively. Fifth, for all the studied scenarios, \emph{there exists an upper limit for the value of (1-$\alpha$) which corresponds to the maximum proportion of time that WLAN would share its unlicensed band with LTE.} This can be observed in the cases of high inter-cell interference on the licensed band where a minimum airtime portion for WLAN, that is a function of the number of active UEs and WLAN activity, is guaranteed and thus allowing a fair LTE-WiFi coexistence. For example, in Figure~\ref{numerical}, for an equal number of SBS and WLAN UEs (i.e. scenario (a)), the upper limit for (1-$\alpha$) is approximately 0.5. \emph{Overall, the results demonstrate that our traffic balancing scheme performs as per expectations by steering SBS traffic from one band to another or using both bands simultaneously depending on the level of inter-tier interference on the licensed band, WiFi offered load and number of UEs in each band.} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.21]{comparison_LIU_2.pdf} \caption{Simulation results for (i) the optimal value of the transmission ratio of SBS on the unlicensed band i.e., (1-$\alpha$) and (ii) the total achieved network throughput as a function of the MBS traffic arrival rate ($\lambda_\mathrm{{MUE}}$) for our proposed traffic balancing scheme (Our scheme) and the scheme in \cite{imp_extended} (Liu (2014)). For the comparative study, we consider moderate and high WLAN offered load i.e., $\overline{R}_w$=0.5 and 0.9 respectively.}\label{comparison_LIU} \end{center} \end{figure} \subsection{Comparison with existing traffic balancing scheme~\cite{imp_extended}} In this subsection, we compare the performance of our proposed scheme with that of~\cite{imp_extended} which also studies the problem of SBS traffic balancing across licensed and unlicensed bands. Unlike our scheme that jointly optimizes the muting pattern on both bands, the work in~\cite{imp_extended} takes a sequential approach adapting the power level in the licensed band first followed by adjusting the muting pattern on the unlicensed channel. Figure~\ref{comparison_LIU} shows simulation results for (i) the value of (1-$\alpha$) and (ii) the total network throughput for the two schemes as a function of the MBS traffic arrival rate for two different values of the WLAN traffic load ($\overline{R}_w$=0.5 and $\overline{R}_w$=0.9). 
We can make the following high-level observations from Figure~\ref{comparison_LIU}: \textbf{Observation 1:} \emph{Overall, our proposed traffic balancing scheme achieves better LTE-WiFi coexistence.} \textbf{Observation 2:} \emph{For all the studied network scenarios, our proposed traffic balancing scheme achieves higher total network throughput.} In what follows, we examine the reasons behind these observations. First, for scenarios of high WLAN load and when MBS is not in a full buffer state (i.e. $\lambda_\mathrm{{MUE}} <$ 2.5 (packets/sec)), corresponding to candidate solutions 2 or 6, our proposed scheme provides better LTE-WiFi coexistence while also achieving higher total network throughput as compared to~\cite{imp_extended}. This gain is due to the use of subframe muting instead of power adaptation, optimizing the MBS and SBS in a coordinated fashion instead of having a fixed level of performance for MBS, and optimizing the licensed and unlicensed bands in a holistic (joint) manner instead of adopting a sequential approach (see all aspects for \cite{imp_extended} discussed in Section~\ref{sec:related}). The gain for solving the problem holistically as compared to sequentially is characterized separately in Section~\ref{sec:evaluation_comparison} where we consider a variant of our scheme that adopts an independent muting strategy on both bands. On the other hand, the gain due to the other two differences between our scheme and that of~\cite{imp_extended} can be clearly seen from the value of $\alpha$ for candidate solutions 2 or 6 with $N_f$=1 \begin{equation}\label{alpha_comparison} \alpha=\frac{N_w(T_f^l + s_f^u)}{s_f^u(N_w+1)} \end{equation} where $T_f^l$ is the throughput achieved by SBS on the licensed band and corresponds to $\beta \cdot s_f^L$ for our proposed scheme and $s_f^L(P_f^*)$ (i.e., a function of the optimal allocated power) for the proposed algorithm of~\cite{imp_extended}. Therefore, from Equation (\ref{alpha_comparison}), we can note that higher values of $T_f^l$ result in higher values for $\alpha$ and thus less utilization of the unlicensed band. Given that ABS muting achieves better macro-layer performance at less degradation of the SBS layer performance as compared to power adaptation, for a specified level of performance for MUEs (e.g., minimum outage level, minimum interference level from SBSs to MUEs), ABS muting causes less degradation in the performance of the SBS layer as compared to power control, i.e., $\beta \cdot s_f^L > s_f^L(P_f^*)$. Following Equation (\ref{alpha_comparison}), our proposed scheme results in less utilization of the unlicensed band and thus allows more WLAN transmission opportunities as compared to~\cite{imp_extended} while maximizing the total network performance. On the other hand, in the case of a full-buffer MBS (i.e. $\lambda_\mathrm{{MUE}} \geq$ 2.5 (packets/sec)) and at high WLAN load, corresponding to candidate solution 5, we can notice that the value of (1-$\alpha$) for our proposed scheme (0.51) is slightly higher than that of~\cite{imp_extended} (0.49). This is due to the high interference level on the licensed band and thus the need to steer more traffic on the unlicensed band in order to guarantee that the SBS is transmitting on at least one of the two bands at a given time (see constraint (\ref{cons_2}) in the optimization problem). Note, however, that (1-$\alpha$) would converge to its upper limit (i.e., $\sim$ 0.5 for the studied scenarios) and thus allowing a fair LTE-WiFi coexistence. 
Second, our proposed scheme achieves similar performance on the \emph{unlicensed} band as that of~\cite{imp_extended} for the case of moderate WLAN load ($\overline{R}_w=0.5$) but it results in a higher total network throughput. For these scenarios, the value of $\alpha$ is limited by $\overline{R}_w$ (corresponding to candidate solutions 1, 3 or 4) and thus the increase in the total network throughput is due to the improvement in the performance on the licensed band i.e., due to the use of subframe muting instead of power adaptation and optimizing the MBS and SBS in a coordinated fashion instead of having a fixed level of performance for MBS (i.e., see aspects (1) and (2) of \cite{imp_extended} discussed in Section~\ref{sec:related}). In summary, our proposed scheme achieves better utilization of the available resources compared to \cite{imp_extended} (an increase of 28.3\% in the total network throughput for the studied scenarios) while increasing the transmission opportunities for WiFi on the unlicensed band. \subsection{Comparison with alternative approaches}\label{sec:evaluation_comparison} In this subsection, we compare the performance of our proposed traffic balancing approach with a broad spectrum of alternative approaches. As performance metrics, we consider throughput and fairness obtained using each of the various different approaches. Denote by $\eta(s_i)$ the efficiency of a resource allocation scheme where $\eta(s_i)$ is defined as the sum of all the UEs throughput i.e., $\eta(s_i)$= $\sum_{i=1}^Ns_i$ (where $i$ is $m$, $f$, or $w$ and $N$=$N_m$+$N_f$+$N_w$), and its fairness is given by the Jain's index defined below~\cite{jain}: \begin{equation} \mathcal{J}(s_i)=\frac{\big(\sum_{i=1}^N s_i\big)^2}{N\cdot\sum_{i=1}^N s_i^2} \end{equation} The value of the Jain's fairness index lies in [$\frac{1}{N}$, 1] where the value of ($\frac{1}{N}$) corresponds to the least fair allocation in which only one UE attains a non-zero throughput and the value of (1) corresponds to the most fair allocation in which all UEs achieve equal rates. Therefore, an efficient allocation of the radio resources seeks to provide a tradeoff between $\eta(s_i)$ and $\mathcal{J}(s_i)$~\cite{jain}. We compare the throughput and fairness of our proposed scheme with the following set of approaches: \begin{itemize} \item \textit{Case 1 - No Muting on Licensed}: SBS operates on both bands, however, considering a PF muting strategy on the unlicensed band only and hence providing a coexistence technique with WLAN only. On the licensed band, MBS and SBS transmit simultaneously, and hence inter-tier interference is not eliminated. \item \textit{Case 2 - No Muting on Unlicensed}: SBS operates on both bands, however, considering a PF muting strategy on the licensed band only and hence providing a coexistence technique with MBS only. On the unlicensed band, SBS is transmitting all the time, and hence excluding any opportunity for WiFi transmissions. \item \textit{Case 3 - No Transmission on Licensed}: SBS operates on the unlicensed band only and shares the spectrum with WLAN by muting adaptively. This corresponds to previously suggested approaches such as the work proposed in~\cite{ICC, xing, cano, dyspan, cu_LTE}. For this case, we specifically consider a muting pattern based on PF coexistence of SBS and WLAN on the unlicensed band which is similar to \cite{cano}. \item \textit{Case 4 - No Transmission on Unlicensed}: SBS operates on the licensed band only and shares the spectrum with MBS by muting adaptively. 
This corresponds to previously suggested approaches in the area of ICIC such as the work proposed in~\cite{ABS_1} based on muting (ABS). For this case, we specifically consider a muting pattern based on PF coexistence of MBS and SBS on the licensed band. \item \textit{Case 5 - Independent Muting}: SBS operates on both bands, however, an independent mechanism is applied on each band for its coexistence with LTE and WLAN i.e., the coexistence of SBS and MBS on the licensed band and the coexistence of SBS and WLAN on the unlicensed band are solved separately. To realize this case, we consider two independent PF coexistence formulations for the muting of SBS on each of the licensed and unlicensed bands. In other words, when solving for $\alpha$, we consider the WLAN and SBS throughput on the unlicensed band only, and when solving for $\beta$, we consider the MBS and SBS throughput on the licensed band only. \end{itemize} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.2]{comparison_simulations.pdf} \caption{The aggregate throughput of the WLAN, MBS, SBS and total network for our proposed traffic balancing scheme in comparison with other approaches.}\label{simulation_comparison} \vspace{-0.3cm} \end{center} \end{figure} Note that cases 1 and 2, respectively, do not consider coexistence mechanisms on the licensed and unlicensed bands and thus are not practical solutions; however, we include them in our study for the sake of completeness. Figure~\ref{simulation_comparison} shows the throughput achieved by WLAN, MBS, SBS and the total network for our proposed scheme as well as the other five studied approaches; the corresponding Jain's fairness index $\mathcal{J}(s_i)$ values are given in Table~\ref{jain_index}. We can make the following observations from these results. First, the WLAN throughput can be improved when coexisting with LTE-LAA small cells on the unlicensed band by taking into account the transmission of LTE-LAA small cells on the licensed and unlicensed bands and considering a holistic approach for the allocation of the resources on both bands i.e., optimizing both bands jointly. This can be observed from Figure~\ref{simulation_comparison} by comparing the total achieved throughput of WLAN for our proposed scheme with that of cases 1, 2, 3 and 5. Similarly, MBS throughput is higher with our proposed scheme compared to cases 1, 2, 4 and 5. Note that the WLAN and MBS throughputs will be, respectively, maximum when they exclusively use the unlicensed (case 4) and licensed bands (case 3), due to the absence of inter-technology interference in the former and lack of inter-tier interference in the latter. However, the total network throughput is the lowest for case 4; and case 3 results in a relatively unfair sharing of the radio resources as compared to our proposed scheme. Second, considering an independent muting mechanism on the licensed and unlicensed bands (case 5) leads to performance degradation in terms of throughput and fairness, indicating that the effectiveness of our proposed traffic balancing scheme stems from its holistic nature. This is validated from Figure~\ref{simulation_comparison} and Table~\ref{jain_index} by comparing the total network throughput and Jain's fairness index of our approach to that of case 5 i.e., $\mathcal{J}(s_i)$=0.82 and 0.57 respectively and 5.5\% improvement in the total network throughput. 
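For completeness, a minimal sketch of the two comparison metrics defined above, the efficiency $\eta(s_i)$ and Jain's fairness index $\mathcal{J}(s_i)$, is given below; the per-UE throughput values in the example are hypothetical and only illustrate the extreme values of the index.
\begin{verbatim}
# Illustrative sketch of the comparison metrics (hypothetical inputs).
def efficiency(throughputs):
    """eta(s) = sum of all UE throughputs."""
    return sum(throughputs)

def jain_index(throughputs):
    """J(s) = (sum s_i)^2 / (N * sum s_i^2), which lies in [1/N, 1]."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(s * s for s in throughputs))

# Equal rates give J = 1; a single non-zero rate gives the minimum J = 1/N.
print(jain_index([10.0, 10.0, 10.0, 10.0]))  # -> 1.0
print(jain_index([40.0, 0.0, 0.0, 0.0]))     # -> 0.25
\end{verbatim}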
As another observation, the independent muting approach provides very close performance for MBS to case 4 due to the fact that $\alpha$=1 and hence the optimization problem would be a function of the variable $\beta$ only and would correspond to the sub-problem of the coexistence on the licensed band of case 5. Similar argument applies for the WLAN throughput of case 5 which is similar to that of case 3 (where $\beta$=0). Third, our proposed traffic balancing scheme utilizes the radio resources in the most efficient way compared to the other studied schemes as it provides a better tradeoff between efficiency (throughput) $\eta(s_i)$ and fairness $\mathcal{J}(s_i)$. In terms of efficiency, case 2 provides the maximum total network throughput since SBS will be transmitting on both bands simultaneously, however, WLAN would not be given opportunities for transmission and hence this would result in the least value of $\mathcal{J}(s_i)$ (0.45) as the radio resources are not shared fairly among the different technologies. Note also that our proposed scheme provides similar throughput as case 3; the major contribution to overall throughput in case 3 comes from MBS throughput which is maximum due to its exclusive use of the licensed band. However, comparing Jain's index fairness of our approach to that of case 3, we observe that our scheme allocates the radio resources in a more fair way unlike case 3 that causes a degradation in the WLAN and SBS throughputs. In terms of fairness, case 4 provides the most fair allocation of the licensed and the unlicensed bands as $\mathcal{J}(s_i)$ is the closest to 1 but it comes at the expense of throughput efficiency; total network throughput is the lowest with case 4. The reason for this high value of $\mathcal{J}(s_i)$ is because WLAN would have more transmission opportunities and hence its throughput would increase when using the channel exclusively as compared to sharing it with LTE-LAA SBS. On the other hand, the decrease in the value of $\eta(s_i)$ is due to the difference in the MAC layer throughputs with WiFi and LTE (65 Mbps and 75 Mbps respectively in our simulation setup) and the inter-tier interference level on the licensed band which results in the degradation of the SBS and MBS throughput. \begin{table}[t!] \renewcommand{\arraystretch}{1} \caption{Jain's fairness index for the UEs achieved throughput of our proposed scheme and the other five cases.}\label{jain_index} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Cases} & \textbf{Our scheme} & \textbf{(1)} & \textbf{(2)} & \textbf{(3)} & \textbf{(4)} & \textbf{(5)}\\ \hline \textbf{$\mathcal{J}(s_i)$} & 0.82 & 0.55 & 0.45 & 0.73 & 0.92 & 0.57 \\ \hline \end{tabular} \end{center} \end{table} \section{Holistic Traffic Balancing}\label{sec:formulation} In order to maximize the total network throughput while coexisting fairly with other LTE and WiFi cells, we aim in this section at proposing a traffic balancing approach that aims at providing a proportional fair coexistence of WiFi STAs, SUEs and MUEs. The rationale behind this approach is to allow SBS to either switch between or aggregate the unlicensed and licensed bands based on the interference level on each band. This will allow higher throughput for MUEs that are in the vicinity of the SBS when SBS is not transmitting on the licensed band, and similarly, more transmission opportunities for WiFi nodes when SBS is not transmitting on the unlicensed band. 
Therefore, the utility function can be expressed as the product of the throughputs obtained by SUEs, MUEs and WiFi STAs: \begin{equation} \mathcal{U}= \prod_{m=1}^{N_m} s_m \prod_{f=1}^{N_f} s_f \prod_{w=1}^{N_w} s_w \end{equation} Maximizing $\mathcal{U}$ is equivalent to maximizing the sum of the logarithms of the achieved rates, given below: \begin{eqnarray} \mathcal{U}_{\mathrm{log}}= && \sum_{m=1}^{N_m} \mathrm{log} (s_m) + \sum_{f=1}^{N_f} \mathrm{log} (s_f) + \sum_{w=1}^{N_w} \mathrm{log} (s_w) \nonumber \\ = && \sum_{m=1}^{N_m} \mathrm{log} \Big[ \beta s_m^{\mathrm{noABS}} + (1-\beta) s_m^{\mathrm{ABS}} \Big] \nonumber \\ &&+ \sum_{f=1}^{N_f} \mathrm{log} \Big[ \beta s_f^\mathrm{l} + (1-\alpha)s_f^\mathrm{u} \Big] \nonumber\\ &&+ \sum_{w=1}^{N_w} \mathrm{log} \Big[ \alpha \hat{s}_{w} \Big] \end{eqnarray} The proposed utility function $\mathcal{U}_{\mathrm{log}}$ corresponds to a proportional fair coexistence of MUEs, SUEs and WiFi STAs. The PF scheduling algorithm has been an attractive allocation criterion in wireless networks since it maintains a balance between maximizing the total network throughput and achieving good fairness among network flows~\cite{PF}. Therefore, our optimization problem is formulated as follows: \begin{equation} \max_{\substack{\alpha, \beta}} \; \; \mathcal{U}_{\mathrm{log}} \label{obj} \end{equation} subject to \hspace{-0.33cm} \begin{equation} \alpha \leq \overline{R}_{w} \hspace{1cm} \label{cons_1} \end{equation} \begin{equation} \alpha \leq \beta \hspace{1cm} \label{cons_2} \end{equation} \begin{equation} 0 \leq \alpha \leq 1, 0 \leq \beta \leq 1 \label{cons_3} \end{equation} where $\overline{R}_{w} (\leq 1)$ corresponds to the normalized offered load across all WiFi stations; it can be obtained via long-term channel sensing where SBS would monitor the WLAN activity on the unlicensed band and estimate the average WLAN traffic load. In the above formulation, constraint (\ref{cons_1}) limits the fraction of time SBS is muted on the unlicensed band to the time it is busy due to WiFi activity. In other words, it ensures that the unlicensed band is not underutilized. The purpose of constraint (\ref{cons_2}) is to ensure that SBS transmits on either the licensed or the unlicensed channel at any given point in time. Constraints (\ref{cons_3}) limit the range of values that the variables $\alpha$ and $\beta$ can take. \begin{lem} $\mathrm{log}(x)$ is concave, and a concave function composed with an affine mapping remains concave. It follows that the utility function $\mathcal{U}_{\mathrm{log}}$, being a sum of such terms, is concave in $(\alpha,\beta)$. Therefore, the optimization problem defined by~(\ref{obj})-(\ref{cons_3}) is a convex program, since the objective function is concave and the feasible region defined by the linear constraints is convex; hence a closed form solution can be obtained using the Karush-Kuhn-Tucker (KKT) conditions at optimality~\cite{KKT}. \end{lem} Based on the above lemma, we now aim to derive a closed form solution for the optimization problem~(\ref{obj})-(\ref{cons_3}) using the KKT conditions at optimality. The KKT conditions are necessary and sufficient for convex optimization problems and consist of the stationarity, primal and dual feasibility, and complementary slackness conditions~\cite{KKT}.
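As a sanity check on this formulation, $\mathcal{U}_{\mathrm{log}}$ can also be maximized numerically by brute force. The following sketch evaluates the objective on a grid over $(\alpha,\beta)$ subject to constraints (\ref{cons_1})-(\ref{cons_3}); the per-UE rates and the offered load $\overline{R}_{w}$ are hypothetical placeholders rather than values from our system model.

\begin{verbatim}
import numpy as np

# Hypothetical per-UE rates (Mbps); placeholders, not simulation outputs.
s_m_noABS, s_m_ABS = np.array([8.0, 6.0]), np.array([12.0, 10.0])
s_f_l, s_f_u = np.array([20.0, 15.0]), np.array([40.0, 35.0])
s_w_hat = np.array([30.0, 25.0, 20.0])
R_w_bar = 0.6  # normalized WiFi offered load (assumed)

def U_log(alpha, beta):
    return (np.log(beta * s_m_noABS + (1 - beta) * s_m_ABS).sum()
            + np.log(beta * s_f_l + (1 - alpha) * s_f_u).sum()
            + np.log(alpha * s_w_hat).sum())

best = None
for alpha in np.linspace(0.01, 1.0, 200):
    for beta in np.linspace(0.01, 1.0, 200):
        if alpha <= R_w_bar and alpha <= beta:   # constraints (cons_1), (cons_2)
            val = U_log(alpha, beta)
            if best is None or val > best[0]:
                best = (val, alpha, beta)
print(best)  # (U_log value, alpha*, beta*)
\end{verbatim}

In the rest of this section we instead derive the optimal pair $(\alpha^*,\beta^*)$ analytically.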
Therefore, the Lagrangian of the optimization problem~(\ref{obj})-(\ref{cons_3}) can be written as follows: \begin{multline} \mathcal{L}(\alpha, \beta, \lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5, \lambda_6)=-\mathcal{U}_{\mathrm{log}} + \lambda_1(\alpha - \overline{R}_{w}) \\+ \lambda_2(\alpha - \beta) - \lambda_3 \alpha + \lambda_4(\alpha-1) - \lambda_5 \beta + \lambda_6(\beta-1) \end{multline} where $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$, $\lambda_5$ and $\lambda_6$ correspond to the Lagrange multipliers of constraints~(\ref{cons_1})-(\ref{cons_3}). In the {\em first step}, we compute the candidates for an optimal solution pair ($\alpha^*$, $\beta^*$) from the possible combinations of feasible solutions satisfying the stationarity and complementary slackness conditions. Note that the total number of possible combinations for the Lagrange multipliers is 64 (i.e., $2^6$), where a given multiplier could be either zero (Z) or non-zero (NZ) at an optimal solution. However, for our optimization formulation, only 6 combinations are possible candidates for an optimal solution, since the remaining combinations are infeasible or redundant. For instance, the combinations that have $\lambda_4$ and $\lambda_5$ as NZ can be omitted since their corresponding solution is ($\alpha^*$, $\beta^*$) = (1,0), which violates constraint (\ref{cons_2}). Similarly, if a constraint has finite values for both lower and upper bounds, one only needs to consider the combinations in which at most one of the Lagrange multipliers for that constraint is NZ; since only one of the bounds can be active at a time, at least one of the two multipliers is always equal to zero. Therefore, the combinations that have both $\lambda_3$ and $\lambda_4$ or both $\lambda_5$ and $\lambda_6$ as NZ can be omitted. Moreover, we impose a non-zero muting period on the unlicensed band (i.e., restrict $\alpha$ to be greater than 0) in order to allow the small cell to sense the WiFi activity and the number of stations, and thus we omit the combinations having $\lambda_3$ as NZ.
Based on the above, the 6 candidate solutions for $\alpha^*$, $\beta^*$ and $(\lambda_1^*, \lambda_2^*, \lambda_3^*, \lambda_4^*, \lambda_5^*, \lambda_6^*)$ are as follows: \emph{Candidate solution 1: $\lambda$=(NZ,0,0,0,0,NZ)} \begin{equation}\nonumber \alpha_1=\overline{R}_{w} \;\;\; \mathrm{and} \;\;\; \beta_1=1 \end{equation} \begin{equation}\nonumber \label{sol1_lambda1} \lambda_1=- \sum_{f=1}^{N_f} \frac{s_{f}^{u}}{\beta_1 s_{f}^{l} + (1-\alpha_1) s_{f}^{u}} + \frac{N_w}{\alpha_1} \end{equation} \begin{multline}\nonumber \label{sol1_lambda6} \lambda_6= \sum_{m=1}^{N_m} \frac{(s_{m}^{\mathrm{noABS}}-s_{m}^{\mathrm{ABS}})}{\beta_1 s_{m}^{\mathrm{noABS}} + (1-\beta_1)s_{m}^{\mathrm{ABS}}} \\+ \sum_{f=1}^{N_f}\frac{s_{f}^{l}}{\beta_1 s_{f}^{l} + (1-\alpha_1)s_{f}^{u}} \end{multline} \emph{Candidate solution 2: $\lambda$=(0,0,0,0,0,NZ)} $\alpha_2$ corresponds to the solution of the following equation: \begin{equation}\nonumber \sum_{f=1}^{N_f} \frac{s_{f}^{u}}{s_{f}^{l} + (1-\alpha_2) s_{f}^{u}} - \frac{N_w}{\alpha_2}=0 \end{equation} \begin{equation}\nonumber \beta_2=1 \end{equation} \begin{multline}\nonumber \lambda_6= \sum_{m=1}^{N_m} \frac{(s_{m}^{\mathrm{noABS}}-s_{m}^{\mathrm{ABS}})}{\beta_2 s_{m}^{\mathrm{noABS}} + (1-\beta_2)s_{m}^{\mathrm{ABS}}} \\+ \sum_{f=1}^{N_f}\frac{s_{f}^{l}}{\beta_2 s_{f}^{l} + (1-\alpha_2)s_{f}^{u}} \end{multline} \emph{Candidate solution 3: $\lambda$=(NZ,NZ,0,0,0,0)} \begin{equation}\nonumber \alpha_3=\overline{R}_{w} \;\;\; \mathrm{and} \;\;\; \beta_3=\overline{R}_{w} \end{equation} \begin{multline}\nonumber \lambda_2= -\sum_{m=1}^{N_m} \frac{(s_{m}^{\mathrm{noABS}}-s_{m}^{\mathrm{ABS}})}{\beta_3 s_{m}^{\mathrm{noABS}} + (1-\beta_3)s_{m}^{\mathrm{ABS}}} \\- \sum_{f=1}^{N_f}\frac{s_{f}^{l}}{\beta_3 s_{f}^{l} + (1-\alpha_3)s_{f}^{u}} \end{multline} \begin{equation}\nonumber \lambda_1=- \sum_{f=1}^{N_f} \frac{s_{f}^{u}}{\beta_3 s_{f}^{l} + (1-\alpha_3) s_{f}^{u}} + \frac{N_w}{\alpha_3}-\lambda_2 \end{equation} \emph{Candidate solution 4: $\lambda$=(NZ,0,0,0,0,0)} \begin{equation}\nonumber \alpha_4=\overline{R}_{w} \end{equation} $\beta_4$ corresponds to the solution of the following equation: \begin{equation}\nonumber -\sum_{m=1}^{N_m} \frac{(s_{m}^{\mathrm{noABS}}-s_{m}^{\mathrm{ABS}})}{\beta_4 s_{m}^{\mathrm{noABS}} + (1-\beta_4)s_{m}^{\mathrm{ABS}}} - \sum_{f=1}^{N_f}\frac{s_{f}^{l}}{\beta_4 s_{f}^{l} + (1-\alpha_4)s_{f}^{u}}=0 \end{equation} \begin{equation}\nonumber \lambda_1=-\sum_{f=1}^{N_f} \frac{s_{f}^{u}}{\beta_4 s_{f}^{l} + (1-\alpha_4) s_{f}^{u}} + \frac{N_w}{\alpha_4} \end{equation} \emph{Candidate solution 5: $\lambda$=(0,NZ,0,0,0,0)} $\alpha_5$ is equal to $\beta_5$ and their corresponding value is the solution of the following equation: {\small\begin{multline}\nonumber -\sum_{m=1}^{N_m} \frac{(s_{m}^{\mathrm{noABS}}-s_{m}^{\mathrm{ABS}})}{\alpha_5 s_{m}^{\mathrm{noABS}} + (1-\alpha_5)s_{m}^{\mathrm{ABS}}} - \sum_{f=1}^{N_f}\frac{s_{f}^{l}}{\alpha_5 s_{f}^{l} + (1-\alpha_5)s_{f}^{u}} \\+ \sum_{f=1}^{N_f} \frac{s_{f}^{u}}{\alpha_5 s_{f}^{l} + (1-\alpha_5) s_{f}^{u}} - \frac{N_w}{\alpha_5} = 0 \end{multline}} \begin{equation}\nonumber \lambda_2=-\sum_{f=1}^{N_f} \frac{s_{f}^{u}}{\beta_5 s_{f}^{l} + (1-\alpha_5) s_{f}^{u}} + \frac{N_w}{\alpha_5} \end{equation} \emph{Candidate solution 6: $\lambda$=(0,0,0,0,0,0)} $\alpha_6$ and $\beta_6$ correspond to the solution of the following two equations: \begin{equation}\nonumber \sum_{f=1}^{N_f} \frac{s_{f}^{u}}{\beta_6 s_{f}^{l} + (1-\alpha_6) s_{f}^{u}} - \frac{N_w}{\alpha_6}=0 \end{equation} 
\begin{multline}\nonumber -\sum_{m=1}^{N_m} \frac{(s_{m}^{\mathrm{noABS}}-s_{m}^{\mathrm{ABS}})}{\beta_6 s_{m}^{\mathrm{noABS}} + (1-\beta_6)s_{m}^{\mathrm{ABS}}} - \sum_{f=1}^{N_f}\frac{s_{f}^{l}}{\beta_6 s_{f}^{l} + (1-\alpha_6)s_{f}^{u}}=0 \end{multline} Note that two more candidate solutions exist for $\lambda$= (NZ,NZ,0,NZ,0,NZ) and $\lambda$= (0,NZ,0,NZ,0,NZ) where $\alpha$ and $\beta$ are both equal to 1. However, we can avoid checking these two candidate solutions as they exist only in the case when $\overline{R}_{w}$=1 and hence their solution matches with that of candidate solution 1. In the {\em second step}, we check the primal and dual feasibility conditions for each of the 6 candidate solution pairs and the pair satisfying these conditions is the optimal solution. Note that all the candidate solutions are independent of the WiFi throughput $s_{w}$ and hence SBS needs to know only the normalized WiFi offered load as well as the number of active WiFi STAs; the SBS can learn the number of active WiFi STAs based on their corresponding MAC addresses during the sensing period~\cite{imp_extended}. The number of MUEs and their throughput can be conveyed to the SBS through the X2 interface. Using this information, SBS can determine the optimal values for $\alpha$ and $\beta$ {\em locally} when needed. \section{Introduction} The 3rd Generation Partnership Project (3GPP) is considering the deployment of LTE in the 5 GHz unlicensed bands, an approach known as Licensed-Assisted Access using LTE (LTE-LAA)~\cite{edinburgh}, as one of the key mechanisms to cope with the dramatic growth in mobile data traffic as well as the spectrum scarcity problem, especially below 6 GHz. LTE-LAA is an attractive solution for small cells due to the limits on maximum transmit power in unlicensed bands. It will allow the opportunistic use of the unlicensed spectrum as a complement to the licensed spectrum for offloading best-effort traffic via the LTE carrier aggregation (CA) framework, while critical control signalling, mobility, voice and control data will always be transmitted on licensed bands. Therefore, the performance experienced by mobile UEs as well as the utilization of the unlicensed spectrum will be enhanced. LTE-LAA, however, introduces new and inter-dependent challenges of LTE-WiFi coexistence in unlicensed bands, traffic offloading from licensed to unlicensed spectrum, and inter-operator spectrum sharing in unlicensed bands~\cite{overview}. LTE-WiFi coexistence depends on the extent to which LTE-LAA small cells (operating in both licensed and unlicensed bands) rely on unlicensed spectrum to meet their traffic demand, and this in turn is dependent on the nature of inter-tier interference in the licensed spectrum shared by a macro cell and small cells in its coverage area. This link between LTE small cell operation in the unlicensed band and inter-tier/inter-cell interference in the licensed spectrum is essentially the traffic balancing problem\footnote{Traffic balancing can be seen as addressing LTE-WiFi coexistence and LTE traffic offloading challenges together.} and the focus of this paper. The transmission of the small cell base station (SBS) on the unlicensed band can disrupt WiFi transmissions as the latter relies on a contention-based channel access and hence starvation may occur when co-existing with LTE. 
On the other hand, LTE-LAA SBS transmission on the licensed band can cause inter-tier/inter-cell interference to the macro cell and other small cell users, potentially degrading their throughput. Thus addressing the traffic balancing problem is challenging as it entails a LTE-LAA small cell base station to adaptively decide on how to steer its traffic between the licensed and unlicensed bands while optimizing the overall network performance and achieving fair coexistence among the technologies operating on both bands. Though the above discussion highlights the importance of traffic balancing for optimizing the performance of co-located networks based on different technologies (LTE and WiFi) sharing same unlicensed bands, and for more effective LTE-WiFi coexistence, this problem has till date received little attention in the research literature with \cite{imp_extended} as the only notable work. Nevertheless, the work in \cite{imp_extended} leads to an inefficient utilization of the available resources due to the inefficient coexistence mechanism on the licensed band as well as the sequential adaptation approach for optimizing both bands, which we further discuss in Section 2. In this paper, we take a holistic approach for LTE-LAA small cell traffic balancing across licensed and unlicensed bands. In other words, we aim to jointly address the LTE-LAA small cell operation in licensed and unlicensed bands by determining its transmission behavior on both bands in a coordinated fashion depending on the interference/traffic levels on each of the bands. Specifically, we make the following key contributions: \begin{itemize} \item We present a formulation of the optimization problem for holistic traffic balancing that seeks proportional fair coexistence of WiFi, small cell and macro cells by deciding on the transmission probability of LTE-LAA small cell in the licensed and unlicensed bands. The intention behind this formulation is for the LTE-LAA SBS to switch between or aggregate licensed and unlicensed bands depending on the interference/traffic level and number of active UEs in each cell. We derive a closed form solution for the aforementioned optimization problem. An attractive aspect of our solution is that it can be applied online by each LTE-LAA SBS, adapting its transmission behavior in each of the bands, and without explicit communication with WiFi nodes. (Section~\ref{sec:formulation}) \item We also propose a transmission mechanism for the operation of SBS on the licensed and unlicensed bands. Our mechanism leverages the above mentioned traffic balancing solution and aims at avoiding the disruption to on-going WiFi transmissions while adhering to the LTE frame structure. (Section~\ref{sec:channelaccess}) \item We provide extensive numerical and simulation results using several scenarios to highlight the main capabilities of our proposed scheme. Results show that LTE-LAA SBS, aided by our scheme, would adaptively steer its traffic from one band to another or transmit on both bands simultaneously depending on the interference/traffic levels and number of active UEs on each of the bands. Simulation results additionally demonstrate the effectiveness of our proposed scheme in comparison with \cite{imp_extended} and other approaches, representing the state-of-the-art. They reveal that approaches focusing on coexistence in one band while ignoring the other cause load imbalance and a decrease in the total network throughput and/or fairness. 
On the other hand, our approach, aided by its holistic nature, results in improved network performance as it achieves a better tradeoff between maximizing the total network throughput and attaining fairness among all network flows while also providing better LTE-WiFi coexistence. (Section~\ref{sec:evaluation}) \end{itemize} \section{Related Work}\label{sec:related} LTE use of unlicensed bands has been receiving growing amount of attention within the research community in recent years. The authors in~\cite{overview} provide an overview of LTE-LAA as well as the benefits and challenges it brings. Several papers have looked at the performance impact of LTE operating in unlicensed bands on WiFi. In a recent paper~\cite{ICC_experimental_evaluation}, the authors conduct an experimental evaluation for characterizing the interference impact of LTE-LAA on WiFi under various network conditions; it is shown that the impact of LTE-LAA on WiFi throughput depends on the channel bandwidth, center frequency and MIMO and can be heavily degraded for some scenarios. Concerning mechanisms for LTE-WiFi coexistence, most of the previous work uses muting (adaptive duty cycling)~\cite{ICC, xing, cano, dyspan, cu_LTE}. More crucially, much of the existing work does not consider the operation of LTE-LAA SBS in the licensed band while optimizing its use in the unlicensed bands alongside WiFi. This can however lead to a suboptimal resource allocation when seen globally. For instance, it can result in an over-utilization of the unlicensed band by LTE-LAA SBS and a decrease in WLAN performance, as it will be shown later in Section~\ref{sec:evaluation}. LTE-LAA small cells enable efficient and flexible use of the unlicensed spectrum, leveraging the LTE-Advanced carrier aggregation feature. Nevertheless, early work on traffic balancing across licensed and unlicensed bands (e.g., \cite{saad,fujitsu}) focused on dual-access small cells (with both LTE and WiFi air interfaces) and thus lacking these benefits. To the best of our knowledge, \cite{imp_extended} is the only notable traffic balancing work in the literature that applies to LTE-LAA small cells. The proposed traffic balancing technique in \cite{imp_extended} is based on adjusting the power level in the licensed spectrum and the number of muted subframes in the unlicensed bands. We identify three aspects of the work in \cite{imp_extended} discussed below, which together result in a lower WLAN performance and a degradation in the overall network performance compared to our proposed scheme, as shown later in Section~\ref{sec:evaluation}. \begin{enumerate} \item {\em Use of power control in the licensed band.} In the context of inter-cell interference coordination (ICIC) management in HetNets, 3GPP Release 10 introduced almost blank subframes (ABS) as an efficient way to enhance the network performance. In~\cite{ABS_1}, the authors evaluate the 3GPP enhanced ICIC (eICIC) techniques through realistic system-level simulations where it is shown that the ABS eICIC time method provides the best macrocell UE (MUE) protection as compared to other eICIC power methods. There is other work (e.g., \cite{muting_power_2}) which also shows that ABS muting achieves better macro-layer performance at less degradation of the SBS layer performance as compared to power adaptation. 
Therefore, the use of power control on the licensed band in~\cite{imp_extended} leads to a sub-optimal performance on both the licensed and the unlicensed bands given the fact that the coexistence mechanism in the licensed spectrum directly influences the optimization process in the unlicensed band. \item {\em Considering a fixed level of performance for macrocell base station (MBS).} The use of a fixed and predefined interference threshold value for MBS in~\cite{imp_extended} results in prioritizing the MBS performance irrespective of the degradation level caused to the SBS layer. This uncoordinated optimization approach on the licensed band would result in an unfair share of that band which in turn could lead to an over-utilization of the unlicensed band by the SBS and thus a degradation in the WLAN performance. \item {\em Sequential approach to optimizing the licensed band first then the unlicensed band.} The authors in~\cite{imp_extended} consider a sequential approach for optimizing both bands i.e., the output of the power allocation sub-problem in the licensed spectrum serves as an input to the muting sub-problem for the unlicensed bands. This results in prioritizing the licensed band and potentially over-utilizing the unlicensed band by SBS as well as degrading the total network performance. \end{enumerate} \section{System Model}\label{sec:model} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.21]{system_model.pdf} \caption{Illustration of the system model.}\label{system_model} \end{center} \end{figure} We consider a system model (depicted in Figure~\ref{system_model}) similar to that in \cite{imp_extended, cano} consisting of a macrocell base station, a small cell and multiple independently operated WiFi networks. We assume a dual band small cell that transmits on both licensed and unlicensed bands via the LTE carrier aggregation feature. The licensed band is shared between MBS and SBS where smaller portions of the spectrum, referred to as Resource Blocks (RBs), are allocated to UEs. On the other hand, SBS and WiFi networks share an unlicensed channel in the time domain and hence at a particular time, the unlicensed channel is occupied by either SBS or WiFi. This represents a dense WiFi deployment scenario where SBS and WiFi may need to time share the same channel. Let $N_m$, $N_f$ and $N_w$, respectively, denote the number of macro-cell UEs, small cell UEs (SUEs) and WiFi stations (STAs) in a given time period $T$. We assume the supplemental downlink (SDL) mode for the transmission of the small cell in the unlicensed band. On the other hand, traffic for WiFi STAs can be in either uplink or downlink directions. A full-buffer traffic model is assumed for the SBS, consistent with the motivation for SBS to use both licensed and unlicensed bands to meet its traffic demand. In order to coexist with MBS on the licensed band and WLAN on the unlicensed band, we adopt in our model a holistic traffic balancing approach where SBS adjusts the proportion of time it transmits on both licensed and unlicensed bands. Therefore, at a particular time, the small cell would adaptively choose to transmit on the licensed, unlicensed or both bands depending on the interference level and traffic load of MUEs and WiFi nodes. The proposed scheme can be implemented at the MAC layer and hence the traffic assignment would be transparent to applications on the UEs. 
SBS would defer from transmission on the unlicensed band in order to allow WiFi transmission opportunities and on the licensed band in order to avoid inter-tier interference. Therefore, to decide on the proportion of time the small cell transmits on the licensed and unlicensed bands, the following decision variables are defined: \begin{itemize} \item $\alpha \epsilon [0,1]$: the fraction of time SBS is \emph{muted} on the unlicensed channel. \item $\beta \epsilon [0,1]$: the fraction of time SBS is \emph{transmitting} on the licensed band. \end{itemize} Note that upon muting on the licensed band, SBS would defer from sending data on the physical channels, however, would still send control and reference signals, an approach known as almost blank subframe~\cite{ABS_1}. On the other hand, the use of unlicensed band by the small cell is limited to data plane traffic while control and reference signals are transmitted by the SBS on a licensed carrier, which is essentially the license assisted access (LAA) aspect of LTE-LAA. Concerning the LTE-WiFi coexistence mechanism in the unlicensed band, even though the work of 3GPP LTE-LAA study group is in the direction of standardizing the listen-before-talk (LBT) mechanism, we choose muting as the coexistence mechanism in this work influenced by two observations: (i) most of the LTE-WiFi coexistence literature focuses on adaptive muting; (ii) recent work in~\cite{cano_comparison} shows that conceptually both LBT and adaptive duty cycling (muting) provide the same level of fairness to WiFi transmissions when properly configured. \subsection{Throughput Modeling}\label{sec:throughput} In order to assess the network performance for the coexistence of LTE MBS, LTE-LAA small cell and WiFi, we define the throughput for each of the MUEs, SUEs and WiFi STAs. Upon the transmission on the licensed band, SBS would share the frequency band with MBS. In LTE, the downlink RB allocation among UEs is via OFDMA, implying no intra-cell interference. However, frequency reuse in LTE can be one where macro and adjacent small cells may transmit on the same frequency leading to inter-cell interference. On the other hand, when SBS is transmitting on the unlicensed channel, it shares the channel with WLAN. Therefore, the downlink SINR at SUE $f$, served by SBS $F$, in our model assuming a single MBS and SBS, during the transmission of SBS on the licensed and unlicensed channels respectively, can be expressed as follows: \begin{equation} \Gamma_{F,f}^l=\frac{P_{F,f}}{\sigma^{2}+ I_{M,f}} \qquad\text{and}\qquad \Gamma_{F,f}^u=\frac{P_{F,f}}{\sigma^{2}+ I_{W,f}} \end{equation} where $P_{F,f}$ denotes the received signal power for SUE $f$ from its serving SBS $F$, $\sigma^{2}$ is the thermal noise power, $I_{M,f}$ represents the interference power from MBS $M$ on SUE $f$ and $I_{W,f}$ corresponds to the aggregate interference power from neighboring WLAN APs/STAs on SUE $f$. Note that upon the transmission of SBS on the unlicensed channel, WLAN would defer from transmission since WiFi STAs sense the carrier, i.e. listen to the channel before transmissions, and transmit only if the channel is idle. Therefore, $I_{W,f}$ corresponds to the interference power due to WLAN hidden terminals. 
Similarly, the downlink SINR at MUE $m$, served by MBS $M$, during the non-ABS and ABS periods of SBS on the licensed band respectively, can be expressed as follows: \begin{equation} \Gamma_{M,m}^{\mathrm{noABS}}=\frac{P_{M,m}}{\sigma^{2}+ I_{F,m}} \qquad\text{and}\qquad \Gamma_{M,m}^{\mathrm{ABS}}=\frac{P_{M,m}}{\sigma^{2}} \end{equation} where $P_{M,m}$ denotes the received signal power for MUE $m$ from its serving MBS $M$, and $I_{F,m}$ represents the interference power from SBS $F$ on MUE $m$. We denote by $s_k$ the total throughput attained by an LTE UE $k$ (where $k$ is $m$ or $f$). An upper bound for the downlink UE throughput, based on Shannon's capacity, is computed as follows: \begin{equation} s_k(\mathrm{bps})=\mathrm{BW}_k\cdot\mathrm{log}_2(1+\Gamma_{k}) \end{equation} where $\mathrm{BW}_k$ is the channel bandwidth allocated to UE $k$ and $\Gamma_{k}$ is the SINR value of UE $k$. To derive the throughput attained by a WiFi STA $w$ when using the unlicensed band exclusively, we consider a slotted channel, as per the IEEE 802.11 modus operandi \cite{802.11:2012}. Let $\tau_w$ denote the stationary probability that station $w$ is attempting transmission in a randomly chosen slot time. The total throughput $\hat{s}_{w}$ attained by a WiFi STA $w$ when using the channel {\em exclusively} is: \begin{equation} \hat{s}_{w}(\mathrm{bps})=\frac{P_{w,succ}\cdot E[D_w]}{P_{w,idle} \cdot \sigma + P_{w,busy} \cdot T_b}, \end{equation} where $E[D_w]$ is the expected payload size for station $w$, $P_{w,succ}$ is the probability of a successful transmission by station $w$, expressed as $P_{w,succ}=\tau_w \prod_{i=1, i\neq w}^{N_w} (1-\tau_i)$, $P_{w,idle}$ is the probability of an idle slot, expressed as $P_{w,idle}=\prod_{i=1}^{N_w}(1-\tau_i)$, and $P_{w,busy}$ is the probability of a busy slot, regardless of whether it corresponds to a collision or a successful transmission, expressed as $P_{w,busy}=1-\prod_{i=1}^{N_w}(1-\tau_i)$ \cite{Duffy:2010}. $\sigma$ and $T_b$ correspond to the average durations of an idle and a busy slot respectively (here $\sigma$ denotes a slot duration, not the noise power $\sigma^{2}$ above), and thus the denominator corresponds to the mean duration of a WiFi MAC slot. Therefore, during an epoch $T$, the throughput attained by a macro, small cell and WiFi UE respectively can be expressed as follows: \begin{equation} s_m=\beta s_m^{\mathrm{noABS}} + (1-\beta) s_m^{\mathrm{ABS}} \end{equation} \begin{equation} s_f=\beta s_f^\mathrm{l} + (1-\alpha)s_f^\mathrm{u} \end{equation} and \begin{equation} s_w=\alpha \hat{s}_{w} \end{equation} where $s_m$, $s_f$ and $s_w$ are the achieved throughputs of MUEs, SUEs and WiFi STAs respectively during a given period of time $T$. $s_m^{\mathrm{noABS}}$ and $s_m^{\mathrm{ABS}}$ correspond to the throughput achieved by MUE $m$ during the transmission of the SBS on the licensed band and during the ABS period of SBS, respectively. $s_f^\mathrm{l}$ and $s_f^\mathrm{u}$ correspond to the throughput of SUE $f$ during the transmission of SBS on the licensed band and an unlicensed channel, respectively.
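To make the throughput model concrete, the sketch below computes the Shannon bound for LTE UEs and combines the per-band components according to the three equations above. All numerical inputs (bandwidths, SINRs, the WiFi exclusive-use rate and the split factors $\alpha$, $\beta$) are illustrative assumptions.

\begin{verbatim}
import numpy as np

def shannon_rate(bw_hz, sinr_linear):
    """Upper bound s_k = BW_k * log2(1 + Gamma_k)."""
    return bw_hz * np.log2(1.0 + sinr_linear)

def epoch_throughputs(alpha, beta, s_m_noABS, s_m_ABS, s_f_l, s_f_u, s_w_hat):
    """Per-epoch throughputs of an MUE, an SUE and a WiFi STA."""
    s_m = beta * s_m_noABS + (1 - beta) * s_m_ABS   # MUE
    s_f = beta * s_f_l + (1 - alpha) * s_f_u        # SUE
    s_w = alpha * s_w_hat                           # WiFi STA
    return s_m, s_f, s_w

# Illustrative values only (10/20 MHz bandwidths, assumed linear SINRs).
print(epoch_throughputs(alpha=0.4, beta=0.7,
                        s_m_noABS=shannon_rate(10e6, 2.0),
                        s_m_ABS=shannon_rate(10e6, 8.0),
                        s_f_l=shannon_rate(10e6, 5.0),
                        s_f_u=shannon_rate(20e6, 15.0),
                        s_w_hat=30e6))
\end{verbatim}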
\section{Introduction} Neurodegenerative diseases are a category of disorders characterised by progressive loss of structure or function of neurons, including Alzheimer's Disease (AD), Parkinson's disease (PD), Huntington's disease (HD) and Spinal muscular atrophy (SMA). Neuroimaging has advanced the analysis of neurodegenerative processes profoundly in the past two decades with large-scale group studies \cite{sdreview1, shen2014genetic, sdreview2}. The large-scale neuroimaging computing methods for neurodegenerative diseases can be generally categorised into cross-sectional and longitudinal. Though the majority of recent neuroimaging studies have focused on cross-sectional group comparisons with regional measurements \cite{sdmbk, sdpgf, sqisbi14, sqtbme}, the longitudinal analysis of brain tissue changes is effective in describing an evolving neurodegenerative process \cite{sqisbi15, scahill2003longitudinal,ho2003progressive}. A number of investigations have attempted to analyse the longitudinal changes from neuroimaging biomarkers at serial time-points and to make predictions of the underlying process \cite{Davatzikos20011361,HBM:HBM22511,Sabuncu20149}. However, it is difficult to evaluate any such predictions in practice due to the lack of ground truth. We suggest that simulation of follow-up brain images can be useful for validating the predictions obtained from neuroimaging biomarkers. The longitudinal simulation of the follow-up structural magnetic resonance (sMR) images can be performed by resampling the baseline sMR image with a weighted average of the longitudinal non-linear transformations from a template population \cite{modat2014simulating}. When the template population is sufficiently large to include a majority of neurodegenerative changes, the confidence in simulated results would heavily depend on the metrics used to measure the affinity between the target patient and the template population. Previous frameworks of longitudinal sMR simulation generally depended on two assumptions about the progress of neurodegeneration: (1) a common rate of atrophy is shared across all subjects with the same diagnostic label; (2) brains with similar morphology evolve in a similar way \cite{Davatzikos20011361,sharma2013estimation,modat2014simulating}. Alzheimer's disease (AD), which is the most common neurodegenerative disorder, progresses in a special pattern which has multiple stages. The neurofibrillary tangles (NFT), which are thought to contribute to local atrophy, spread from memory related areas towards areas in the medial temporal lobe, the parietal cortex and the prefrontal cortex \cite{braak2011stages}. Thus, patients at different stages of progression might not have similar atrophy rates across all brain regions. It may also be difficult to compare the local structural morphology accurately between different patients based only on the sMR intensities, due to the inter-subject variance in the original structural appearances. Besides the sMR intensities, the historical progression of the same patient (longitudinal) as well as the difference between the current state of the patient and a normal brain template may inform the simulation from an alternative perspective. In this study, we present a proof-of-concept framework to simulate neurodegeneration at follow-up from sMR data. An overview of the framework is illustrated in Fig.~\ref{fig:framework}.
We hypothesise that brains with similar detected cross-sectional and longitudinal atrophic deformations would have similar follow-up evolution. The cross-sectional changes are derived by a symmetric non-linear registration from a standard template to the subjects; the longitudinal registration between two serial sMR images of the same patient is used to extract the intra-subject serial changes. We apply voxel-based morphometry (VBM) to measure both types of structural brain deformations. Each subject recruited in the template population is expected to have at least 3 serial sMR visits. Both the longitudinal VBM map and the cross-sectional VBM map are collected for each template subject. The simulation of the future sMR volume of a target patient is performed by resampling the latest available sMR volume with an average weighted sum of the longitudinal transformations in the following period in the template population. Only a minority of the best-matched templates in both VBM maps are selected to contribute to the simulation results. The cut-off threshold depends on the size of the recruited template population. \begin{figure*}[!htb] \centering \includegraphics[width=1\textwidth]{framework.png} \caption{An illustration of the proposed framework of the longitudinal MR simulation. } \label{fig:framework} \end{figure*} \section{Methods} \subsection{Preprocessing} Each template subject $S_i$ is expected to have at least 3 serial sMR visits taken at time-points $a,b,c$ with $a<b<c$ and $b-a \approx c-b$. Each template MR volume is skull-stripped using the Brain Extraction Tool with a standard space pre-masking applied \cite{HBM:HBM10062}. Then all template MR images are affine registered to the MNI152 template brain space with FLIRT \cite{Jenkinson2002825}. A symmetric diffeomorphic registration is performed between each pair of the standardised MR images $[I_i^{(a)}, I_i^{(b)}]$ and $[I_i^{(b)}, I_i^{(c)}]$ to obtain the longitudinal brain tissue deformations $T_i^{(ab)}$ and $T_i^{(bc)}$. The symmetric registration firstly registers the two adjacent MR volumes to a mid-way space to ensure the inverse consistency of the forward and backward transformations, which is known to be important for atrophy calculation \cite{leung2012consistent,christensen2001consistent}. The MNI152 template $M$ is symmetrically registered to each $I_i^{(b)}$ image to obtain the cross-sectional deformation $T_i^{(Mb)}$. \subsection{Template Weighting with Voxel-Based Morphometry} The voxel-based morphometry (VBM) maps \cite{ashburner2000voxel,chung2001unified} were calculated on $T_i^{(ab)}$ and $T_i^{(Mb)}$ respectively as the Jacobian determinant of the spatial transforms. $U$ denotes the displacement field of $T$. The displacement gradient tensor of $U$ at time $t$ is represented as \begin{equation} \frac{\partial U}{\partial x}(x, t)= \begin{pmatrix} \frac{\partial U_1}{\partial x_1} & \frac{\partial U_1}{\partial x_2} & \frac{\partial U_1}{\partial x_3} \\ \frac{\partial U_2}{\partial x_1} & \frac{\partial U_2}{\partial x_2} & \frac{\partial U_2}{\partial x_3} \\ \frac{\partial U_3}{\partial x_1} & \frac{\partial U_3}{\partial x_2} & \frac{\partial U_3}{\partial x_3} \end{pmatrix} \end{equation} The Jacobian determinant of the corresponding transformation, $\det(I+\nabla U)$, is calculated at each voxel and forms the VBM map.
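As an illustration, the voxel-wise Jacobian-determinant map can be obtained from a dense displacement field with finite differences. The sketch below assumes the displacement field is stored as a NumPy array of shape $(3, X, Y, Z)$ in voxel units; it is a schematic of $\det(I+\nabla U)$ rather than the exact routine used in our pipeline.

\begin{verbatim}
import numpy as np

def jacobian_determinant(U):
    """U: displacement field of shape (3, X, Y, Z), in voxel units.
    Returns det(I + grad U) at every voxel (the VBM map)."""
    grads = np.empty((3, 3) + U.shape[1:])
    for i in range(3):            # displacement component U_i
        for j in range(3):        # derivative along axis x_j
            grads[i, j] = np.gradient(U[i], axis=j)
    J = np.eye(3).reshape(3, 3, 1, 1, 1) + grads   # I + grad U, per voxel
    J = np.moveaxis(J, (0, 1), (-2, -1))           # put the 3x3 tensor last
    return np.linalg.det(J)

# Tiny synthetic check: a uniform 1% expansion along the first axis.
U = np.zeros((3, 8, 8, 8))
U[0] = 0.01 * np.arange(8).reshape(8, 1, 1)
print(jacobian_determinant(U).mean())   # approximately 1.01
\end{verbatim}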
To measure the distance between two detected tissue deformation $T_i$ and $T_j$, the squared Euclidean distance of the VBMs is calculated within the area of a dilated brain mask $\wp$ in the standardised space as $D(J_i, J_j) = \sum_{v \in \wp}{(d_v^{(i)} - d_v^{(j)})^2}/|\wp|$, where $d_v^{(i)}$ is the Jacobian determinant of $T_i$ at voxel $v$. The area near the outer boundary of the dilated brain mask is used to capture the deformation on or near the cortical surface. \begin{table*}[t] \begin{center} \caption{The visual check of a successful simulation and a failed simulation. The column Year 1 is the MR image at the second time-point; the column Year 2 is the real follow-up MR image of this patient and the column Simulated is the predicted Year 2 MR image. The VBM maps are overlaid on the Year 2 and Simulated images. The failure was probably caused by the spatial mismatch introduced in the automatic affine normalisation.} \bgroup \begin{tabular}{c c c c} \toprule & Year 1 & Year 2 & Simulated \\ \hline \multirow{14}*{Successful} & \raisebox{-\totalheight}{\includegraphics[width=0.28\textwidth]{successyear1.png}} & \raisebox{-\totalheight}{\includegraphics[width=0.28\textwidth]{successrealyear2-jd-annotated.png}} & \raisebox{-\totalheight}{\includegraphics[width=0.28\textwidth]{successpredyear2-jd-annotated.png}}\\\hline \multirow{14}*{Failed} & \raisebox{-\totalheight}{\includegraphics[width=0.28\textwidth]{failyear1.png}} & \raisebox{-\totalheight}{\includegraphics[width=0.28\textwidth]{failrealyear2-jd-annotated.png}} & \raisebox{-\totalheight}{\includegraphics[width=0.28\textwidth]{failpredyear2-jd-annotated.png}} \\ \bottomrule \end{tabular} \egroup \\ \label{fig:visualcheck} \end{center} \end{table*} \subsection{Follow-Up MR Simulation} To simulate the follow-up MR volume $I_\star^c$ of the target patient $S_\star$, we require the patient to have two MR volumes available $I_\star^a$ and $I_\star^b$ and the interval $(b-a)$ to approximately equal the intervals in the templates. The symmetric longitudinal and cross-sectional deformations are obtained as $T_\star^{ab}$ and $T_\star^{Mb}$. The VBMs of both transformations are correspondingly computed as $J_\star^{ab}$ and $J_\star^{Mb}$ which are used for collecting the masked longitudinal distances $D_i^{long}(\star,i)=D(J_\star^{ab}, J_i^{ab})$ and the cross-sectional distances $D_i^{cross}(\star,i)=D(J_\star^{Mb}, J_i^{Mb})$. Two types of collected distances are respectively normalised within [-1, 1] as $\tilde{D}=(D-\bar{D})/max(D-\bar{D})$ where $\bar{D}$ is the mean value of $D$. Then both distances are summed with relative weights to obtain the combined $D_i^{\prime}=\alpha \bar{D}_i^{long} + \beta \bar{D}_i^{cross}$, $\alpha+\beta=1$. k nearest neighbours of $S_\star$ are selected to form a new template set K. The majority of the templates are dropped out for one simulation because the distant subjects would introduce bias rather than contributing to the simulation accuracy, even Gaussian distributed weights are used. 
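The masked squared Euclidean distance $D(J_i, J_j)$ defined above admits a direct implementation; the sketch below assumes both Jacobian-determinant maps and the dilated brain mask have already been resampled to the same standard grid.

\begin{verbatim}
import numpy as np

def vbm_distance(J_i, J_j, mask):
    """D(J_i, J_j): mean squared difference of the Jacobian determinants
    over the voxels of the dilated brain mask."""
    m = mask.astype(bool)
    diff = J_i[m] - J_j[m]
    return float((diff ** 2).sum() / m.sum())

# Toy example with random maps (illustrative only).
rng = np.random.default_rng(0)
J_a, J_b = 1.0 + 0.05 * rng.standard_normal((2, 16, 16, 16))
mask = np.ones((16, 16, 16))
print(vbm_distance(J_a, J_b, mask))
\end{verbatim}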
Based on $D_i^{\prime}$, the follow-up deformation $T_\star^bc$ of subject $S_\star$ is computed as an average weighted sum of the template transformations $T_i^{bc}$ as \begin{equation} T_\star^{bc} = \frac {\sum_{i\in |K|}{T_i^{a\star} \times e^{-D^\prime_i / g}}} {\sum_{i\in |K|}{e^{-D^\prime_i / g}}} \end{equation} where $T_i^{a\star}$ is the forward transformation from the template image $I_i^a$ to the target image $I_\star^a$; $g$ is the Gaussian kernel density which is set to 0.5 \cite{modat2014simulating}. The follow-up image $I_\star^c$ can be simulated by resampling $I_\star^b$ with $T_\star^bc$. Notably, in this proposed framework, subjects with different diagnostic labels are not required to be computed separately, since there might not be clear boundaries between the atrophic progression patterns from different diagnostic labels. For example, some follow-up transformations of early AD patients may contribute to simulate the future neurodegeneration of a late MCI patient. With the drop-out threshold $|K|$, the distant subjects are expected to be filtered out before the template merging. \begin{figure*}[!htb] \centering \includegraphics[width=0.9\textwidth]{3err-merge.png} \caption{The plot of the three types of MR image distances considered for this evaluation: The distances between the predicted images and the real follow-ups (P-B); The distances between the predicted images and the registered 1 year MR images (P-rA); The distances between the registered 1 year MR images and the real follow-up MR images (REAL) } \label{fig:3errors} \end{figure*} \section{Evaluation and Results} \begin{figure*}[!htb] \centering \begin{minipage}[b]{1\textwidth} \centering \centerline{\includegraphics[width=1\linewidth]{p-b.png}} \centerline{(a) Sorted distances between the predicted MR and the real follow-up MR (P-B)}\medskip \end{minipage} \hfill \begin{minipage}[b]{1\textwidth} \centering \centerline{\includegraphics[width=1\linewidth]{p-ra.png}} \centerline{(b) Sorted distances between the predicted and the registered Year 1 (P-rA)}\medskip \end{minipage} \caption{The plot of different template weighting methods: cross-sectional VBM weighting (cross), longitudinal VBM weighting (long), combined weighting (combine), intensity-based weighting (intensity). All distances were zero-mean rescaled into $[-1,1]$ and the subjects were sorted according to the intensity-based weighting. Lower values indicate better simulation results.} \label{fig:compare} \end{figure*} We recruited GradWarped MR images with N3 Correction from the publicly available ADNI 1 and ADNI GO dataset (http://adni.loni.usc.edu/) \cite{jack2010update}. The slice thickness of all MR volumes is 1.3mm. We kept 60 subjects who had at least three continuous MR visits with an interval of 12 months available, resulting in 180 MR volumes and 120 MR longitudinal pairs in total to be registered. After transforming each MR volume to the standard MNI152 space, skull stripping was conducted with a fraction threshold of 0.3. We used the MNI152 template with 2mm slice thickness for the image normalisation. We used the leave-one-out strategy to evaluate the framework. The $I^a$ and $I^b$ of each selected patient were used as the inputs to simulate the unknown follow-up $I^c$. The rest of the patients were used as the simulation templates. All the transformations and images belong to the testing target are excluded from the template construction. 
The simulated $I^c$ is then compared with the real follow-up image as well as the image derived by registering $I^b$ to the real follow-up image. The symmetric non-linear registration was implemented with the Advanced Normalization Tools (ANTS) \cite{Avants20112033}. The integration of the entire framework was implemented based on the Nipype framework \cite{gorgolewski2011nipype}. Examples of a successful simulation on an AD patient and a failed simulation of a normal control patient are presented in Table.~\ref{fig:visualcheck}. In each case, the Year 1 MR image ($I^b$) is shown as a reference; The real follow-up Year 2 MR image ($I^c$) and the simulated follow-up MR image ($I^c_\star$) are overlaid on the VBM map extracted between them and the Year 1 MR image to visualise the longitudinal deformations. Comparing the simulated MR image and the real follow-up of the successful case, the tendency for the ventricles to increase in size, reflecting more cortical atrophy was successfully simulated. The VBM map also showed an approximately matched tissue deformation along the ventricle as well as the cortical areas. The example that failed showed a mismatch of the atrophy in the MR images and the VBM maps. The increase in size of the lateral ventricles was neglected and the cortical atrophy was overestimated. Such simulation failure can be generally attributed to the affine normalisation errors and the insufficiency of the longitudinal template collection. To evaluate the strengths of different template weighting strategies, we considered the square distances between the simulated volume and the real follow-up volume (P-B) as well as the distances between the simulated volume and the registered Year 1 volume (P-rA), which is the output of registering the Year 1 image to its real follow-up MR image (Fig.~\ref{fig:3errors}). In Fig.~\ref{fig:3errors}, unlike P-rA, the P-B curve is correlated with the original registration error. It might indicate that P-rA could be a more unbiased evaluation criterion for such simulations. All the distances in Fig.~\ref{fig:compare} are sorted according to the image distances of the voxel intensity based simulation (intensity). The distances were zero-meaned and rescaled to make the individual differences identifiable. In Fig.~\ref{fig:compare}, we compared the longitudinal (long) and cross-sectional (cross) simulations as well as the combined weights of both (combine) respectively according to the P-B distances (Fig.~\ref{fig:compare}-(a)) and the P-rA distances (Fig.~\ref{fig:compare}-(b)). It is noticeable that the longitudinal information (long) outperformed the cross-sectional information (cross) in most cases. Since neither method achieved the lowest simulation errors in all trials, the combined weights (combine) could be used to balance two perspectives. Taking the intensity-based method (intensity) as a reference, at least one of the proposed morphometry-based methods (long, cross, combine) achieved lower simulation errors in most cases regarding both criteria. \section{Conclusion} We present a framework which automatically simulates future neurodegeneration from the longitudinal atrophic changes as well as the cross-sectional difference from a statistical average normal template. This framework expects the patient to have at least 2 historical MR images to increase the confidence of the simulated 3D MR volume. The brain tissue deformation was represented by the voxel-based morphometry (VBM). 
Our evaluation showed that the intra-subject longitudinal information enhances the simulation accuracy. The results from at least one of the proposed morphometry-based methods outperformed the state-of-the-art intensity-based method in most evaluated cases. With a sufficient template collection, our proposed framework can be used for validating the prediction made by neuroimaging measurements extracted from MR data. \section*{Acknowledgment} This work was supported by ARC, AADRF, NA-MIC (NIH U54EB005149), and NAC (NIH P41RR013218). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Analysis on Relation Between Phase and Displacement} \section{Discussions on Pulse Modulation and Detection} \subsection{Detailed Description on Pulse Modulation} \begin{figure}[htpb] \begin{subfigure}[b]{0.5\textwidth} \begin{center} \includegraphics[width=2.7in]{fillthetagood-crop} \end{center} \vspace{-0.1in} \caption{Measured Displacement} \label{fig:fthetagood} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{center} \includegraphics[width=2.7in]{fmtgood.pdf} \end{center} \vspace{-0.1in} \caption{Detected Pulses} \label{fig:fmtgood} \end{subfigure} \caption{Calculated displacement and pulses from the same signal. } \label{fig:thetam} \vspace{-0.1in} \end{figure} Figure \ref{fig:thetam} in section \ref{sec:aaaa} shows an example of detected pulses when the user moves the phone forward and backward twice and then stops. We modulate three adjacent pulses per $T_2=0.25s$. Three adjacent pulses can be seen as a compensated periodical pulse with the period $T=T_2=0.25s$. The time difference of the adjacent pulses is $T_3=0.03s$. In Figure \ref{fig:fthetagood}, the estimated displacement is smooth and has no jitters whether the phone is static or moving. We zoom in on the calculated phase to see the performance of PLL when there are pulses in $s_2(t)$. The calculated phase is not locked to the real phase. Instead, it seems as if the PLL has not detected the pulses at all, since the tracked phase remains very smooth. Specifically, while the maximum variation of the real phase is $\pi$, the corresponding variation computed by PLL is less than 0.4rad, which corresponds to a displacement of about $1mm$. The reason is that the parameters ($k_1$, $k_2$) of PLL are very small, so the loop does not track the fast-changing phase. Moreover, as the phase at the beginning of a pulse equals the one at the end and the variation by PLL is small, the tracked phase finally becomes stable and returns to the same value as at the beginning. \subsection{Proof on Properties of Modulated Pulse} We prove that the pulse $s_2$ does not take much acoustic bandwidth and has little effect on the result of displacement tracking by PLL. First, the central frequency of $s_2$ is the same as the one of $s_1$, except that the phase changes when there is a pulse. Hence, $s_1$ and $s_2$ share the same frequency band. Second, since the bandwidth of the pulse is about $\frac{\pi}{T_p}$ \cite{DBLP:journals/tim/SahuG08}, we set $T_p=0.007s$ so that the bandwidth is about 460Hz. As the minimum frequency is 17000Hz for the acoustic signal to be non-audible, and the maximum frequency supported by the phone is 24000Hz, the maximum number of concurrent signals that \ourprotocol supports in one place is $(24000-17000)/460 \approx 15$. Actually, if the pulse has a narrower bandwidth, \ourprotocol will support more concurrent signals, whereas the pulse becomes harder to detect. How to modulate signals with narrower bandwidth and demodulate the signal more accurately is left for future work. Third, the component $s_3(t)=\pi\sin\frac{\pi(t-\tau_i)}{T_p}$ is the phase shift of the sine signal. Furthermore, $s_3(t)$ starts and ends at the same value $0$, and the maximum value of $s_3$ is $\pi$. Hence, the displacement will not be affected by the pulse theoretically.
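For concreteness, the carrier around a single pulse can be synthesised as below. The carrier frequency, pulse width and pulse shape follow the values stated above, while the sampling rate is an assumption; this is a sketch of one pulse of $s_2(t)$, not the full transmitter.

\begin{verbatim}
import numpy as np

fs = 48000.0     # sampling rate (assumed)
f  = 18000.0     # carrier frequency within the 17-24 kHz band
Tp = 0.007       # pulse duration

def s2_one_pulse(t, tau_i):
    """Carrier with the phase pulse s_3(t) = pi*sin(pi*(t - tau_i)/Tp)
    applied on [tau_i, tau_i + Tp]; s_3 starts and ends at 0, peaks at pi."""
    s3 = np.where((t >= tau_i) & (t < tau_i + Tp),
                  np.pi * np.sin(np.pi * (t - tau_i) / Tp), 0.0)
    return np.cos(2 * np.pi * f * t + s3)

t = np.arange(0, 0.02, 1.0 / fs)
x = s2_one_pulse(t, tau_i=0.005)
print(x.shape)
\end{verbatim}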
\subsection{Choosing Parameters} There is a trade-off in choosing the parameters $T_p$, $T_1$, $T_2$, $T_3$; we analyse the choice of each parameter as follows: \begin{itemize} \item{$T_p$:} As the bandwidth of the pulses equals $\frac{\pi}{T_p}$, a smaller $T_p$ would require more bandwidth and support fewer acoustic anchors in the same room. If we increase $T_p$, the duration of pulses in $s_2(t)$ increases. Since the pulses are seen as noise in displacement tracking, the accuracy of displacement tracking decreases. \item{$T_1$:} Since there are 3 adjacent pulses in one compensated periodical pulse, $T_1=T_2-3T_3$. \item{$T_2$:} Recall that $T_2=T$, which is the period of the compensated pulses. A smaller $T_2$ will enhance the accuracy of measuring the receiving time of pulses, since we have more pulses for matching. However, if we choose a smaller $T_2$, we may face an ambiguity problem. Specifically, denote the receiving time of a pulse as $t_r$ and the sending time of the periodical pulses as $t_s+kT_2$. The calculated distance is $v_a(t_r-t_s-kT_2)$, where $k$ is an undetermined integer which also makes the distance undetermined. To resolve $k$, we further leverage the maximum distance from the smart device to the speaker, denoted as $l_m$. Since $v_a(t_r-t_s-kT_2)<l_m$, to get a unique solution for the distance, $v_aT_2$ should be greater than $l_m$. In our paper, we assume that $l_m=85m$, which gives $T_2=0.25s$. \item{$T_3$:} $T_3$ has a lower limit. Firstly, to avoid overlaps of adjacent pulses, $T_3>T_p$. Secondly, there should also be intervals between adjacent pulses. We zoom in on Figure \ref{fig:fthetagood} and find that the PLL needs a time longer than the duration of a pulse to lock the displacement back to the real value after the pulse terminates. Hence, if adjacent pulses are too close, the PLL may become very unstable. If $T_3$ increases, then since $T_1=T_2-3T_3>0$, $T_2$ may also have to increase, which again affects the performance of \ourprotocol. \end{itemize} \subsection{Pulse Detection} We discuss how we detect the receiving time $\tau_i'=\tau_i+t_l$ of the $i$th pulse by leveraging the component $s_3(t)$. Assuming the locked phase by PLL is $\phi_r$ before the pulse starts, the expected pulse is $\tilde r(t)=\cos(2\pi f t+\phi_r+\pi \sin \frac{\pi(t-\tau_i')}{T_p})$. Hence, for the received sample $r(kT_s)$, we compute the likelihood $m(kT_s)=\sum_{i=k}^{k+T_p/T_s}r(iT_s)\tilde r (iT_s) $, \ie, when $m(kT_s)$ reaches the maximum, the corresponding $kT_s$ is the starting time of the received pulse. Note that, if we set the expected pulse $\hat r(t)=\cos(2\pi f t+\phi_r+\pi)$ and there is no pulse in the next $T_p$ so that $r(t)=\cos(2\pi f t+\phi_r)$, then $\hat m(kT_s)=\sum_{i=k}^{k+T_p/T_s}r(iT_s) \hat r (iT_s) $ will reach the minimum. Actually, $s_3(t)$ is a filtered version of the pulse $\hat s_3(t)=\pi$, in the sense that $s_3(t)$ has a narrower bandwidth. Accordingly, $\tilde r(t) \approx \hat r(t)$, which means $m(t)$ will reach a value close to the minimum when there is no pulse in the next $T_p$. Hence, the arrival time $\tau_i'$ of the pulse can be detected from $m(t)$. \subsection{Analysis on Design of Pulse Detection} As mentioned earlier in section \ref{sec:sync}, our PLL takes $s_2(t)$ as noise and only tracks $s_1(t)$. There are two advantages based on the above results: 1) the pulses have very small effects on the tracked displacement; 2) since the variation is very small and the variation of $\phi_r$ is stable when there are pulses, the peaks of $m(t)$ become clear to detect.
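The likelihood $m(kT_s)$ described above is a sliding correlation of the received samples against the expected pulse shape; a minimal sketch is shown below (see the discussion of Figure~\ref{fig:fmtgood} that follows). The sampling rate, carrier frequency and locked phase $\phi_r$ are illustrative values.

\begin{verbatim}
import numpy as np

fs, f, Tp, phi_r = 48000.0, 18000.0, 0.007, 0.0   # illustrative values

def pulse_likelihood(r, fs, f, Tp, phi_r):
    """m(k*Ts) = sum_{i=k}^{k+Tp/Ts} r(i*Ts) * cos(2*pi*f*i*Ts + phi_r + s_3)."""
    Ts = 1.0 / fs
    n = int(Tp / Ts)
    t = np.arange(len(r)) * Ts
    m = np.zeros(len(r) - n)
    for k in range(len(m)):
        tt = t[k:k + n]
        s3 = np.pi * np.sin(np.pi * (tt - tt[0]) / Tp)
        m[k] = np.dot(r[k:k + n], np.cos(2 * np.pi * f * tt + phi_r + s3))
    return m

# Synthetic received signal with a single pulse at 5 ms and no noise.
t = np.arange(0, 0.02, 1.0 / fs)
s3 = np.where((t >= 0.005) & (t < 0.005 + Tp),
              np.pi * np.sin(np.pi * (t - 0.005) / Tp), 0.0)
r = np.cos(2 * np.pi * f * t + s3)
print(np.argmax(pulse_likelihood(r, fs, f, Tp, phi_r)) / fs)  # ~0.005 s
\end{verbatim}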
In Figure \ref{fig:fmtgood}, $m(t)$ reaches the peak value (\ie, 150) when there is a pulse at $t$, and the bottom value (\ie, -50) when there are almost no pulses. As a whole, this shows an interesting result: when demodulating $s(t)$, the peaks of $m(t)$ are very clear for synchronization in Figure \ref{fig:fmtgood}, while the corresponding calculated phase is very smooth for displacement tracking in Figure \ref{fig:fthetagood}. We can also find that when the phone is static, the peaks corresponding to the pulses are clear. However, they are unclear when the phone is moving. Furthermore, when the signal is weak, the periodical peaks cannot be detected from $m(t)$ in Figure \ref{fig:fmtbad} due to noise. Hence, we take further steps to make the peaks clearer in case the phone moves or the signal is weak. The solution is based on the observation that the expected peaks still appear at the expected times, even though they are buried in noise, whereas random peaks are unlikely to appear periodically. Hence, we compute $m_1(t)=m(t-T_3)+m(t)+m(t+T_3)$ in Figure \ref{fig:fmtbad1}, where the peaks are easier to identify in $m_1(t)$. Then, we compute $m_2(t)=m_1(t-T_2)+m_1(t)+m_1(t+T_2)$ in Figure \ref{fig:fmtbad2}, where the peaks can be easily detected. Moreover, when the phone is moving and the corresponding phase is shown in Figure \ref{fig:fthetagood}, the peaks are also very clear in Figure \ref{fig:fmtgood2}. \subsection{Dealing With Multipath Effects} As mentioned in section \ref{sec:synchronization}, the result of synchronization is affected by multipath effects, especially when the smart device is static. We therefore further study and improve the pulse detection. \begin{figure}[htpb] \begin{subfigure}[b]{0.23\textwidth} \begin{center} \includegraphics[width=1.7in]{fmultipath_good} \end{center} \caption{Good Case.} \label{fig:multipathgood} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \begin{center} \includegraphics[width=1.7in]{fmultipath_bad} \end{center} \caption{Bad Case (Multipath).} \label{fig:multipathbad} \end{subfigure} \caption{Pulse detection in case of multipath effects.} \label{fig:multipath} \end{figure} We find that when the phone is static, there is another property which can be leveraged: the distance from the smart device to the dummy speaker is constant. Hence, we can use $m_3(kT_2)=\sum_{i\in\{x|x=k \mod T_2\}}m(iT_2)$, which sums the periodic values of $m(t)$ and makes the detected pulse times clearer. The result of the summed $m_3(kT_2)$ is shown in Figure \ref{fig:multipath}. In Figure \ref{fig:multipathgood}, when there is no multipath effect, there are 3 pulses in a period $T_2$. However, in Figure \ref{fig:multipathbad}, which is gathered from the shopping mall, there are at least 9 pulses, which means there are 2 additional paths reflected from walls or other objects. In this case, any of the 3 paths could carry the pulses received directly from the dummy speaker. After recognizing the possible multipaths, we take a further step to identify the direct path. Specifically, we use the result of PLL, which corresponds to the displacement. As displacement tracking is less affected by multipath effects, we compare the results of PLL and pulse detection when a user walks from one position and stops at another. In this case, denote the displacement measured by PLL as $d$, and the receiving times of pulses at the start point and the end point as the sets $T_a=\{t_{a1},t_{a2},t_{a3},\dots\}$ and $T_b=\{t_{b1}, t_{b2},t_{b3},\dots\}$ respectively.
Hence we obtain the receiving times $(t_a, t_b)=\underset{t_a \in T_a, t_b \in T_b}{\arg \min} |(t_a-t_b)v_a-d|$. \subsection{Avoiding Signal Conflicts from Multiple Signals} To avoid signal conflicts, we discuss the solutions for different scenarios: \\ \begin{itemize} \item Virtual business card sharing: In this case, users are usually close to each other, and we can choose to narrow the bandwidth of the pulses used for synchronization. Hence, \ourprotocol can support a larger number of users who broadcast signals simultaneously, at the cost of reduced accuracy of pulse detection and long-distance positioning. \item Virtual shopping guide: Note that we further evaluate the performance using multiple speakers, and only 2 co-existing anchors per position are enough for absolute positioning (as in normal localization schemes). Hence, \ourprotocol also supports an unlimited number of shopping guides through simple and sparse deployment that avoids conflicts. Therefore, we suggest that if more shopping guides are required, only a few speakers be used, serving as anchors for normal indoor localization instead of just relative positioning. \end{itemize} \section{Conclusion} \label{sec:conclusion} We propose and implement \ourprotocol, a localization scheme that calculates the relative position from a smart device to a dummy speaker. The dummy speaker only needs to emit acoustic signals at a non-audible frequency, so that COTS speakers can serve as anchors. Furthermore, \ourprotocol directly obtains both the distance and the direction from the smart device to the speaker, which is quite different from existing localization systems that are capable of obtaining only the distance or the direction. As a result, \ourprotocol only requires one anchor for localization, while others need multiple anchors for calculating the final position, for example through trilateration. By pushing the limit on the number of required anchors, \ourprotocol is not only capable of indoor localization, but also has potential for wider applications, such as augmented-reality or mobile social applications. \section{Discussion} \subsection{Limitations} In the preceding sections, we have presented and evaluated the \ourprotocol system. \ourprotocol proposes a novel method to calculate the relative position, and it currently has some limitations, listed as follows, which we will address in future work. Since the relative displacement tracking of \ourprotocol is sensitive to NLoS effects, a user has to hold the phone in hand to get the relative position, and cannot put the phone in a pocket. Hence, it is more practical to install \ourprotocol on wearable devices, such as smartwatches \cite{samunggear,sonysmartwatch} or smartglasses \cite{GoogleGlass}. A possible improvement is to replace acoustic signals with WiFi signals, since WiFi signals suffer much less attenuation in case of NLoS effects. \ourprotocol can immediately calculate the precise distance from the user to the source when the distance from the phone to the source is within 8m, but needs precise historical measurements for ranging when the distance increases. (Note that for direction estimation, there is no such limitation.) For indoor localization, an easy solution is to place anchors at places that users always pass through. When users try to find a nearby friend with \ourprotocol, since direction estimation is accurate without the requirement of synchronization, they can use only direction estimation when the friend is far away, and additionally use ranging when they are getting closer.
\subsection{Privacy Issue}
\ourprotocol reveals new potential privacy attacks in mobile applications. Recently, thousands of Android malware samples have been identified by the security community \cite{Zhou2012}. Most of the attacks are based on invalid use of permissions, such as SEND\_SMS (FakePlayer \cite{Fakeplayer}) or ACCESS\_FINE\_LOCATION (Tapsnake \cite{tapsnake}). Hence, many approaches aim at detecting and preventing undesirable usage of permissions \cite{Zhang2013,Yang2013}. However, observe that a phone does not need any permission to act as an acoustic source or to count walking steps using IMU sensors. Furthermore, as our PLL tracks the relative displacement from the acoustic source to the receiver, \ourprotocol can also be used in a setting where the receivers, \eg, microphones, act as anchors. Therefore, by emitting non-audible signals on the phone without user awareness and performing relative positioning with \ourprotocol, the phone's fine-grained location can be obtained without the ACCESS\_FINE\_LOCATION permission.
\section{Tracking Displacement}
\label{sec:pll}
In this section, we show how we design the acoustic wave emitted by the dummy speaker and infer the displacement from the acoustic wave when the user walks.
\subsection{Brief Design of the Acoustic Wave}
The modulated wave in the audio is used for two purposes: displacement tracking and synchronization. Hence, the wave $s(t)$ contains two corresponding parts. More specifically, we formally define the wave in the following equation,
\begin{equation} s(t)=\begin{cases} s_{1}(t) & kT_2\leq t<kT_2+T_{1}\\ s_{2}(t) & kT_2+T_{1}\leq t<(k+1)T_2 \end{cases} \end{equation}
where $T_2=0.25s$ is the period of the wave and $k$ is a natural number. $T_1=0.16s$ is the duration of $s_1(t)$ in each period. We mainly use $s_1(t)$ for tracking the displacement. Intuitively, $s_1(t)$ is a sine wave, and the phase of the corresponding received signal $r_1(t)$ changes when the distance changes. We prove in the following subsection that the phase of $r_1(t)$ is proportional to the relative displacement, so by tracking the phase of $r_1(t)$, the displacement is tracked. The displacement tracking algorithm (PLL) is detailed in this section. The relation between phase and displacement is illustrated in section \ref{sec:placedis}, and we explain how to preprocess the signal and track the phase in sections \ref{sec:preprocess} and \ref{sec:subpll} respectively. Note that $s_2(t)$ is not only used for synchronization, but is also capable of displacement tracking, like $s_1(t)$. As a result, the measurement of displacement is rarely affected by the additional synchronization function. In section \ref{sec:sync}, we will discuss the use of $s_2(t)$.
\subsection{Phase \& Displacement}
\label{sec:placedis}
In order to track displacement, we define $s_1(t)$ as follows:
\begin{equation} s_1(t)=\cos(2\pi ft) \label{eq:s1} \end{equation}
where $f$ is the frequency. To make the audio inaudible and keep the frequency supported by commercial speakers, we set $17000Hz <f< 24000Hz$. On receiving the signal $r_1(t)$, there is a phase shift $\phi$ compared with $s_1(t)$, such that $r_1(t)=\cos(2\pi ft+\phi)$. For instance, in Figure \ref{fig:example}, the displacement \cite{DBLP:journals/corr/HuangXLLMYL13} is
\begin{equation} d=l_1-l_2=\frac{v_a}{2\pi f}(\phi_2-\phi_1) \label{eq:relationphi} \end{equation}
where $\phi_1$ and $\phi_2$ are the calculated phases at $O_1$ and $O_2$ respectively and $v_a$ is the propagation speed of the acoustic wave.
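As a quick numeric illustration of \eqref{eq:relationphi} (a sketch with assumed values $v_a=340$ m/s and $f=19$ kHz), one full cycle of phase corresponds to roughly 1.8 cm of displacement:
\begin{verbatim}
# Displacement inferred from the phase shift of the received tone.
import math

v_a = 340.0      # speed of sound in air (m/s), illustrative value
f = 19000.0      # carrier frequency (Hz), within the 17-24 kHz band

def displacement(phi_1, phi_2):
    """Relative displacement d = l1 - l2 from the phases at O1 and O2 (radians)."""
    return v_a / (2 * math.pi * f) * (phi_2 - phi_1)

# One full cycle of phase corresponds to one wavelength (~1.8 cm at 19 kHz):
print(displacement(0.0, 2 * math.pi))   # ~0.0179 m
\end{verbatim}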
\subsection{Preprocessing Received Signal}
\label{sec:preprocess}
Before tracking the phase $\phi$ from $r_1(t)$, we have to preprocess the received signal. For the sent signal $s_1(t)$, the actual received signal $r_{raw}(t)$ does not equal $r_1(t)$: its amplitude $A(t)$ varies over time and it is mixed with noise $\sigma(t)$. We denote $r_{raw}(t)=A(t)\cos(2\pi ft+\phi(t))+\sigma(t)$. Hence, we first need to eliminate $A(t)$ and $\sigma(t)$ before tracking the phase $\phi(t)$. To eliminate the noise $\sigma(t)$, we let $r_{raw}(t)$ pass through a Band Pass Filter (BPF). The processed signal is $r_{filter} \approx A(t)\cos(2\pi ft+\phi(t))$. $r_{filter}$ is then processed by Automatic Gain Control (AGC) \cite{rice2008digital}. After that, $A(t)$ is removed and the signal can be treated as $r_1(t)$ \cite{DBLP:journals/corr/HuangXLLMYL13}.
\subsection{Tracking the Phase}
\label{sec:subpll}
To track the phase for inferring displacement, we adopt a second-order Phase Locked Loop (PLL), rather than the ordinary first-order PLL, to track the phase when the smart device moves. The PLL is a classical method in signal processing and can be regarded as a device that tracks the phase and frequency of a sinusoid. In our design, it is implemented purely in software due to the limited capabilities of the smartphone platform.
\begin{figure}[htpb] \begin{center} \includegraphics[width=2.8in]{fpll-crop} \end{center} \vspace{-0.1in} \caption{Design of the Second-Order Phase Locked Loop. } \label{fig:pll} \vspace{-0.1in} \end{figure}
\begin{figure*}[t] \begin{subfigure}[b]{0.245\textwidth} \begin{center} \includegraphics[width=1.8in]{fmtbad.pdf} \end{center} \caption{$m(t)$ of Weak Signal} \label{fig:fmtbad} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \begin{center} \includegraphics[width=1.8in]{fmtbad1.pdf} \end{center} \caption{$m_1(t)$ of Weak Signal} \label{fig:fmtbad1} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \begin{center} \includegraphics[width=1.8in]{fmtbad2.pdf} \end{center} \caption{$m_2(t)$ of Weak Signal} \label{fig:fmtbad2} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \begin{center} \includegraphics[width=1.8in]{fmtgood-pulsematch.pdf} \end{center} \caption{$m_2(t)$ of Good Signal} \label{fig:fmtgood2} \end{subfigure} \caption{Detection of the arrival time of the pulse. } \label{fig:syncgood} \vspace{-0.2in} \end{figure*}
We show our design of the PLL in Figure \ref{fig:pll}. The PLL contains three main components: the phase detector, the loop filter and the direct digital synthesizer (DDS). The phase detector detects the difference $\phi_e=\phi-\hat \phi$, where $\hat \phi$ is the estimate of $\phi$. According to $\phi_e$, the loop filter predicts the offset $\gamma_2+\gamma_3$ of $\hat \phi$ for the next cycle of the loop, where $\gamma_2$ and $\gamma_3$ are controlled by the parameters $k_1$ and $k_2$ respectively. The DDS updates the next $\hat \phi$ by adding the offset and prepares $\gamma_1$ for the next phase detection. In the phase detector of Figure \ref{fig:pll}, for the $n$th input $r_1(nT_s)$, $r_1 \gamma_1= \sin(\phi-\hat \phi) -\sin(4\pi f T_s n +\phi + \hat \phi)$. Here we denote $T_s$ as the sampling period of the received signal. As $\phi_e=\textrm{LPF}(r_1 \gamma_1)$, where LPF is the low pass filter, the high-frequency component of $r_1\gamma_1$ is eliminated and $\phi_e\approx \sin(\phi - \hat \phi)$. If the phase is locked (\ie, $\hat \phi$ is close to $\phi$), $\sin (\phi- \hat \phi) \approx \phi- \hat \phi $.
Hence $\phi_e \approx \phi - \hat \phi$. In Figure \ref{fig:pll}, the loop filter is the key part of the PLL. There have been many proposals on the design of loop filters \cite{best2003phase}, and the type and parameters of the loop filter should be chosen carefully for different purposes. Here we adopt a second-order filter, \ie, the proportional-plus-integrator filter \cite{rice2008digital}, as the loop filter. It uses two state variables $\gamma_2$, $\gamma_3$ and two constant parameters $k_1$, $k_2$. In particular, if $k_2=0$, it degenerates to a first-order PLL. We now explain why the first-order PLL cannot be used in our case. When the phone is static and the PLL becomes stable after several loop iterations, $\phi_e\approx 0$ and $\hat \phi$ is close to constant. Hence, in Figure \ref{fig:pll}, $\gamma_2 \approx 0$ and $\gamma_2+\gamma_3 \approx 0$, which implies $\gamma_3 \approx 0$. It means that $k_2$ can be eliminated and the first-order PLL is sufficient. However, if the phone moves and we still use the first-order PLL, the performance is good at high signal-to-noise ratio (SNR) but is also limited by the SNR. For instance, assume the user moves at a constant relative speed and $\phi$ increases by $\Delta \phi$ per $T_s$, \ie, per loop iteration. When the PLL becomes close to stable, $\phi_e \approx \Delta \phi +\phi_s$, where $\phi_s$ is the error caused by random noise. If the SNR is high enough that $\phi_s \ll \Delta \phi $, we can set $k_1>\Delta\phi$ to let $\hat \phi$ catch up with the variation of $\phi$. However, when the value of $k_1$ increases, the magnitude of $\gamma_2$ increases, so $\hat \phi$ becomes unstable and tends to be affected by noise. In the bad case, the loop locks (or tends to lock) to $\phi+2\pi$ or $\phi-2\pi$ instead of the true phase $\phi$. We call this phenomenon the \textit{jitter} for convenience. The corresponding displacement error is $\frac{v_a}{2\pi f}2\pi \approx 1.8cm$, which affects the accuracy of position estimation in section \ref{sec:posest}. To show this limitation experimentally, we let the user hold the phone for a while, move the phone toward the speaker and backward three times, and finally stop at the starting point. In Figure \ref{fig:displacement}, we show the result of the PLL with different parameters when the acoustic signal is weak, \ie, $l=32m$. In Figure \ref{fig:plla}, $k_1$ is large enough to catch up with the real displacement. However, it is affected by the noise and sometimes cannot lock while moving, which results in occasional jitters on the up-and-down curve. The calculated displacement from the start to the end, which should be close to 0, then accumulates to $17cm$ after a total moving length of about $100cm$. On the contrary, in Figure \ref{fig:pllb} where $k_1$ is small, the calculated phase cannot catch up with the real phase and jitters frequently while moving. Hence, the first-order PLL cannot support both fast movement and high noise at the same time. Therefore, to solve the above problem, the updated component $\gamma_3$ is added, which turns the first-order PLL into the second-order one. $\gamma_3$ can be seen as the phase variation $\Delta \phi$ per $T_s$, which corresponds to the relative speed from the phone to the speaker. If the PLL becomes stable, in each loop iteration the loop filter predicts the next phase with the added $\gamma_3$, which results in $\phi_e \approx \phi_s$ instead of $\phi_e \approx \Delta \phi +\phi_s$.
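To make the loop concrete, the following minimal sketch implements the proportional-plus-integrator update of Figure \ref{fig:pll} on a synthetic tone. All gains, frequencies and the crude one-pole low-pass filter are illustrative assumptions, not the parameters of our implementation.
\begin{verbatim}
# Minimal second-order (proportional-plus-integrator) PLL sketch.
import numpy as np

fs = 44100.0                 # sampling rate of the recorded audio (Hz)
f = 19000.0                  # carrier frequency (Hz), illustrative
Ts = 1.0 / fs
k1, k2 = 0.05, 2e-4          # loop gains (illustrative values)
alpha = 0.05                 # one-pole LPF coefficient in the phase detector

# Synthetic received tone whose phase varies slowly (the phone is moving).
n = np.arange(int(0.5 * fs))
true_phi = 0.3 * np.sin(2 * np.pi * 1.0 * n * Ts)
r1 = np.cos(2 * np.pi * f * n * Ts + true_phi)

phi_hat, gamma3, lpf = 0.0, 0.0, 0.0
phi_track = np.empty_like(true_phi)
for i in n:
    carrier = 2 * np.pi * f * i * Ts
    # Phase detector: the product contains sin(phi - phi_hat) plus a 2f term.
    mixed = -2.0 * r1[i] * np.sin(carrier + phi_hat)
    lpf += alpha * (mixed - lpf)          # crude LPF suppresses the 2f term
    phi_e = lpf                           # ~ phi - phi_hat when nearly locked
    gamma2 = k1 * phi_e                   # proportional branch
    gamma3 += k2 * phi_e                  # integrator: tracks Delta_phi per Ts
    phi_hat += gamma2 + gamma3            # DDS update
    phi_track[i] = phi_hat

# The tracked phase maps to displacement via d = v_a / (2*pi*f) * delta_phi.
\end{verbatim}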
This means that it is no longer necessary to set a large $k_1$ to let $\hat \phi$ catch up with the dynamic $\phi$. Hence, $k_1$ can be much smaller, so the PLL is more robust to noise. In Figure \ref{fig:pllc}, we choose the second-order PLL by setting $k_2\not = 0$. Meanwhile, $k_1$ is much smaller than the one in Figure \ref{fig:plla}, so the PLL is more robust to noise and does not cause observable jitters. $k_1$ equals the value used in Figure \ref{fig:pllb}, yet the loop has no problem catching up with the fast displacement because $k_2\not = 0$. The accumulated displacement error is less than $2cm$, which is about 9 times more accurate than the result in Figure \ref{fig:plla}.
\section{Estimation of Relative Position}
\label{sec:posest}
In this section, we propose the method for estimating the distance and direction from the smart device to the speaker, \ie, the relative position. The intuition is that when the user walks, there is a unique pattern of displacements for each relative position. Hence, we use the displacements (calculated in section \ref{sec:pll}) to deduce the user's positions.
\begin{figure}[htpb] \centering \begin{center} \includegraphics[height=1.3in]{fdistance1} \end{center} \caption{Positioning when the user walks along a line.} \label{fig:distance} \vspace{-0.2in} \end{figure}
\subsection{Positioning When User Walks along a Line}
To estimate the distance, we first consider a simple scenario where a user starts walking from $O_1$ and steps at $O_2$, $O_3, \dots, O_n$, as shown in Figure \ref{fig:distance}. $A$ is the position of the speaker. Denote the height of the speaker relative to the smart device as $h=|\overline{AG}|$. Assume the stride length is close to a constant $s=|\overline{O_iO_{i+1}}|$. Both $h$ and $s$ are assumed to be given and are used for distance and direction estimation. The other inputs are the displacements of all the steps, \ie, $d_i=l_i-l_{i+1}$ for the step $\overline{O_iO_{i+1}}$, which are calculated using the PLL. Observe that the distance from the speaker to the line $\overline{O_iO_{i+1}}$ is a constant $y=|\overline{AH}|$, where $\overline{AH} \perp \overline{O_1O_n}$. Hence, we first estimate $x=|\overline{HO_1}|$ and $y$ from those inputs and then estimate the position at each step point $O_i$ according to $x$ and $y$. Intuitively, $x$ and $y$ can be found by searching over the candidate positions and minimizing the squared errors. Specifically, as $|HO_i|= x+(i-1)s$, $i=1,2,3,\dots$, we denote
\begin{eqnarray} \label{eq:mine0} l_i'&=&\sqrt{y^2+(x+(i-1)s)^2}\\ e_i&=&l_i'-l_{i+1}'-d_i \label{eq:mine} \end{eqnarray}
Then $e_i=0$ if $d_i$ is accurate. Hence, for $n$ displacements $d_1, d_2, \dots, d_n$, $x$ and $y$ can be solved from the above $n$ equations by $(x,y)=\underset{x,y}{\arg\min}\sum_{i=1}^ne_i^2$. Here we use Newton's method \cite{Madsen04immmethods} to reduce the computation overhead. Observing that $L_i=|\overline{GO_i}|$ and $\psi_i'=\angle GO_iO_n$, instead of $l_i$ and $\psi_i$, are the horizontal distance and direction used for positioning, once $x$ and $y$ are estimated we obtain the distance and direction results as:
\begin{equation} \begin{cases} L_i=&\sqrt{(x+(i-1)s)^2 + y^2-h^2} \\ \cos\psi_i'=& - \frac{x+(i-1)s}{L_i} \end{cases} \label{eq:estposition} \end{equation}
\subsection{Synthesizing When User Turns Direction}
When a user turns direction while walking, we can still calculate the relative position, as described below.
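Before detailing the turning case, the following minimal sketch illustrates the straight-line estimator of \eqref{eq:mine0}--\eqref{eq:mine}. It is a sketch under assumptions: a generic least-squares solver and synthetic, noise-free displacements stand in for the Newton iteration and real measurements, and all numeric values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def estimate_xy(d, s, x_init=0.0, y_init=2.0):
    """Solve (x, y) of the straight-line case from per-step displacements d_i."""
    n = len(d)
    def residuals(p):
        x, y = p
        l = np.sqrt(y**2 + (x + np.arange(n + 1) * s)**2)  # l'_1 ... l'_{n+1}
        return (l[:-1] - l[1:]) - d                        # e_i = l'_i - l'_{i+1} - d_i
    return least_squares(residuals, x0=[x_init, y_init]).x

# Synthetic walk: x = |HO_1| = 1.5 m, perpendicular distance y = 4 m, stride 0.6 m.
s, x_true, y_true, h = 0.6, 1.5, 4.0, 0.3
l = np.sqrt(y_true**2 + (x_true + np.arange(11) * s)**2)
x, y = estimate_xy(l[:-1] - l[1:], s)          # noise-free displacements d_i

L1 = np.sqrt(x**2 + y**2 - h**2)               # horizontal distance at O_1
psi1 = np.degrees(np.arccos(-x / L1))          # direction at O_1
print(x, y, L1, psi1)
\end{verbatim}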
Assume that the user starts from $O_a$ and walks along the linear segments $\overline{O_aO_b}$, $\overline{O_bO_c}$, $\overline{O_cO_d}$, $\overline{O_dO_e}$ in Figure \ref{fig:posuserturns}. We use the calculated displacements in this case. We also use the step counter to estimate the segment lengths $n_as$, $n_bs$, $n_cs$, where $s$ is the stride length and $n_a$ is the number of steps when the user walks from $O_a$ to $O_b$.
\noindent \textbf{Calculating the angle of turning direction:} We calculate the user's rotation angle mainly by using the gyroscope when the user turns. Though Zee \cite{2012-MOBICOM-Zeezeroeffort} can directly calculate the walking direction, it is mainly affected by the inaccurate compass and usually cannot distinguish whether the user is walking forward or backward along a direction. \ourprotocol does not require knowledge of the absolute direction of the user's walking, because we only need the relative position from the user to the target. It only requires the user's rotation angle when the user turns, which is used for calculating the position in this section. For instance, assume the initial walking direction is $\zeta_a$ and the following direction is $\zeta_b$. We do not calculate $\zeta_a$ or $\zeta_b$ with the magnetic sensor, but directly calculate the difference of the walking directions, \ie, $\zeta_b-\zeta_a$, from the gyroscope. The purpose is to avoid errors caused by the magnetic sensor of the smart device, which can be large in indoor environments. Note that $\zeta_a$ can be eliminated because we convert the position in the WCS (World Coordinate System) into the one in the RCS (relative coordinate system), as mentioned in section \ref{sec:exp}. Hence, our problem is easier: the gyroscope can accurately measure the user's rotation angle. By using this rotation angle together with step detection, we can obtain the walking trace without knowing the relative position. Furthermore, once the relative position has been obtained by additionally using the acoustic signal, a coarse-grained position can still be maintained with the same technique using only the inertial sensors if no signal is received (\ie, under NLoS effects).
\begin{figure}[tpb] \centering \includegraphics[width=2.7in]{fmultipleWalkingLines} \caption{Positioning when the user walks and turns.} \label{fig:posuserturns} \vspace{-0.2in} \end{figure}
\noindent \textbf{Calculating position:} Now we calculate the relative position $G(g_x,g_y)$, which is the projection of the acoustic speaker $A$ onto the horizontal plane at the same height as the receiver. Denoting $O_a$ as the origin $(0,0)$, we estimate the subsequent positions $O_c, O_d,\dots$ from the step counter and gyroscope. For example, $O_c$ is at the position $(c_x,c_y)=(n_as\cos(\zeta_a)+n_bs\cos(\zeta_b), n_as\sin(\zeta_a)+n_bs\sin(\zeta_b))$, and so forth.
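This waypoint computation can be sketched as simple dead reckoning (the headings, step counts and stride length below are made-up values for illustration); the resulting points are then used in the least-squares fit described next.
\begin{verbatim}
import numpy as np

def waypoints(zetas, steps, s):
    """Positions of the segment start points O_a, O_b, O_c, ... in the plane.

    zetas  -- walking direction of each linear segment (radians, from gyroscope)
    steps  -- number of steps walked along each segment (from the step counter)
    s      -- stride length (m)
    """
    pts = [np.zeros(2)]                      # O_a is the origin
    for zeta, n in zip(zetas, steps):
        step_vec = s * np.array([np.cos(zeta), np.sin(zeta)])
        pts.append(pts[-1] + n * step_vec)
    return np.array(pts)

# Example: three segments with two turns measured by the gyroscope.
print(waypoints(zetas=[0.0, np.pi / 3, -np.pi / 6], steps=[8, 6, 10], s=0.6))
\end{verbatim}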
Given the calculated displacements $d_{c_1},d_{c_2}, \dots, d_{c_{n_c}}$, similar to \eqref{eq:mine0} and \eqref{eq:mine}, the distance from each stride point to $G$ is
\begin{equation} l_{c_i}=\sqrt{[c_x+(i-1)s\cos(\zeta_c)]^2+[c_y+(i-1)s\sin(\zeta_c)]^2+h^2} \label{eq:userturns} \end{equation}
Denote the error at the $i$th step along the segment $\overline{O_cO_d}$ as
\begin{equation} e_{c,i}=l_{c_i}-l_{c_{i+1}}-d_{c_{i}} \end{equation}
Hence, we obtain the position of $G$ using the following equation:
\begin{equation} (g_x,g_y)=\underset{g_x,g_y}{\arg\min}\sum_{i\in\{a,b,c,d,e\}}\sum_{j=1}^{n_i-1}e_{i,j}^2 \end{equation}
\section{Performance Evaluation}
\label{sec:exp}
\begin{figure*}[t] \centering \begin{subfigure}[b]{0.16\textwidth} \begin{center} \includegraphics[width=1.2in]{fXLerror} \end{center} \vspace{-0.1in} \caption{ Ranging } \label{fig:perrorlx} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \includegraphics[width=1.2in]{fYLerror} \end{center} \vspace{-0.1in} \caption{Ranging } \label{fig:perrorly} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \includegraphics[width=1.2in]{fXaerror} \end{center} \vspace{-0.1in} \caption{Direction} \label{fig:perrorax} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \includegraphics[width=1.2in]{fYaerror} \end{center} \vspace{-0.1in} \caption{Direction} \label{fig:perroray} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \includegraphics[width=1.2in]{steplerror} \end{center} \vspace{-0.1in} \caption{ Ranging } \label{fig:steplerror} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \includegraphics[width=1.2in]{stepaerror} \end{center} \vspace{-0.1in} \caption{Direction} \label{fig:stepaerror} \end{subfigure} \caption{The accuracy of ranging and direction finding 1) when the user starts walking at different positions (a)(b)(c)(d), 2) when the user walks for a smaller number of steps (e)(f). } \label{fig:steperror} \vspace{-0.2in} \end{figure*}
In this section, we evaluate the system using two types of speakers: a Samsung Galaxy Note 2 and normal dummy speakers. The speaker merely broadcasts acoustic waves and does not perform any communication. We mainly use a Google Nexus 4 to receive the acoustic signals. We do not make any modifications to the phone or jailbreak the operating system, and all the components, such as the BPF, AGC and PLL, are implemented in software. We evaluate the performance in an empty room, an office and a shopping mall. We first run micro-benchmarks for position estimation and synchronization, and then evaluate the overall performance with all components combined. Note that in our system, we do not measure the walking direction in the World Coordinate System (WCS). The main reason is that this measurement is not needed in our system: we only calculate the relative position between the walker and the speaker. Furthermore, accurately measuring the walking direction in the WCS is still challenging \cite{2012-MOBICOM-Zeezeroeffort}, mainly because of unpredictable compass errors, especially in indoor environments. Therefore, to evaluate the accuracy of our system, we build and rely on a relative coordinate system (RCS) instead of the classical WCS. In the RCS, we set the direction of the piecewise linear segment as the X axis and the starting point of the segment as the origin.
For instance, assuming $Y=\sqrt{y^2-h^2}$ and $X=-x$ in Figure \ref{fig:distance}, $(X,Y)$ is the position of the speaker when the user starts walking.
\begin{figure*}[htpb] \centering \begin{subfigure}[b]{0.155\textwidth} \begin{center} \includegraphics[height=1.1in]{fstepperson} \end{center} \vspace{-0.1in} \caption{ Stride Length} \label{fig:personstep} \end{subfigure} \begin{subfigure}[b]{0.145\textwidth} \begin{center} \includegraphics[height=1.05in]{fLperson} \end{center} \vspace{-0.1in} \caption{ Ranging } \label{fig:personL} \end{subfigure} \begin{subfigure}[b]{0.145\textwidth} \begin{center} \includegraphics[height=1.1in]{fAperson} \end{center} \vspace{-0.1in} \caption{Direction } \label{fig:personA} \end{subfigure} \begin{subfigure}[b]{0.12\textwidth} \begin{center} \includegraphics[height=1.1in]{fLplacement} \end{center} \vspace{-0.1in} \caption{ Ranging } \label{fig:placementL} \end{subfigure} \begin{subfigure}[b]{0.12\textwidth} \begin{center} \includegraphics[height=1.1in]{fAplacement} \end{center} \vspace{-0.1in} \caption{Direction } \label{fig:placementA} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \begin{center} \includegraphics[height=1.1in]{fLdevice} \end{center} \vspace{-0.1in} \caption{ Ranging } \label{fig:deviceL} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \begin{center} \includegraphics[height=1.1in]{fAdevice} \end{center} \vspace{-0.1in} \caption{Direction } \label{fig:deviceA} \end{subfigure} \vspace{-0.1in} \caption{The mean and standard deviation of ranging and direction estimation for different users (a)(b)(c), placements (d)(e), and smart devices (f)(g). } \label{fig:expperson} \vspace{-0.2in} \end{figure*}
\subsection{Position Estimation}
We evaluate position estimation under several types of cases, \ie, different relative positions from the phone to the speaker, numbers of walking steps, users, device orientations, device diversity and environments, all of which may affect the accuracy of the estimation.
\subsubsection{Positions}
\label{sec:casepos}
We first evaluate the performance at different places in an empty room. In this experiment, the speaker is placed at different locations, \ie, $X=2,4,6,8m$ and $Y=2,4,6,8m$. We let the user walk for $9\sim 10$ steps with a walking length of about $6m$. The relative height $h$ is about $0.3m$. For each location, the user holds the phone in hand and walks 35 times to gather samples, \ie, we get 560 samples in this micro-benchmark. Note that by using other smart devices, such as smart glasses or smart watches, the user can have a more comfortable experience. Since we only use the IMU sensors and the microphone, which are common in smart devices, we use a smartphone as the smart device in the experiments. Then, we calculate the relative position to evaluate the accuracy of the calculated distance $L=\sqrt{X^2+Y^2}$ and direction $\psi'=\arccos(X/L)$. Note that since the user walks for only several steps and the walking distance is short, we only evaluate the accuracy of the initial position $(X,Y)$. In Figure \ref{fig:perrorlx}, the accuracy of distance estimation is very similar for different $X$. We further study the distribution of large errors in Figure \ref{fig:perrorly}. We find an interesting fact: the errors are nearly proportional to $Y$. Hence, when $Y=2,4,6,8m$, the corresponding errors are within $0.35m$, $0.55m$, $0.97m$, $1.88m$ at the 80th percentile. The result is acceptable in our case, since the user requires a higher level of accuracy when s/he is close to the speaker.
Furthermore, for longer distances we rely on the synchronization and synthesizing scheme, rather than this position estimation scheme, to achieve accurate ranging. Direction estimation remains very accurate as $X$ or $Y$ increases, as shown in Figures \ref{fig:perrorax} and \ref{fig:perroray}. Overall, the mean ranging and angle errors are $0.63m$ and $2.46^o$ respectively.
\subsubsection{Number of Steps}
The accuracy of the position estimation depends on the number of walking steps. We compare the results when the user walks for a smaller number of steps $n_s$ in Figures \ref{fig:steplerror} and \ref{fig:stepaerror}. The samples are the same ones gathered in section \ref{sec:casepos}; the only difference is that we use only part of each sample, corresponding to fewer walking steps. The results show that the ranging errors increase quickly when $n_s$ decreases. The reasons are: 1) the user's stride length varies occasionally; 2) the user's phone also shifts left and right regularly, \ie, it does not move strictly along a line, when the user holds the phone and walks. As these factors have less effect on the accuracy when $n_s$ is larger, it can be foreseen that the accuracy will continue to improve when $n_s>10$, though it is already very accurate when $n_s=10$. The estimated direction is also affected by a smaller $n_s$ in Figure \ref{fig:stepaerror}, but it is still acceptable: the angle errors are under $8^o$ at the 80th percentile when $n_s=6$. As a whole, when $n_s$ is small, the accuracy is sufficient for estimating the direction of a surrounding speaker, while the later synthesizing scheme is required to obtain an accurate distance.
\subsubsection{Users}
Different users have different stride lengths and motions when they walk, which causes variations in the displacement pattern $d_i$ and might affect the positioning result. Hence, we recruit 8 volunteers in this experiment: each user walks along a line of about $6m$ 35 times, where $(X,Y)=(4,4)$. We have the following observations from Figure \ref{fig:expperson}: the standard deviations (std) of the ranging and direction are small for most users. In Figure \ref{fig:personstep}, persons 1, 2, 4, 6 and 7 have small stride lengths while the others have larger ones, but the results are similar for all users (except persons 6 and 7). The results imply that each user's stride length is very stable and the positioning accuracy is not much affected by the variation of stride length, even though the stride lengths of different users may differ considerably.
\subsubsection{Orientation of Speaker and Microphone}
We consider the cases when the speaker or the microphone faces different directions: (1) (default) the microphone faces the sky, and the speaker faces the walking line; (2) microphone facing the front; (3) microphone perpendicular to the walking direction and facing the speaker; (4) microphone facing the ground; (5) microphone perpendicular to the walking direction with the speaker behind the microphone; (6) speaker facing the ground. The results in Figures \ref{fig:placementL} and \ref{fig:placementA} show that the std is small in all cases and the result is very stable. We also find that the mean value of the distance increases when the signal is weaker in cases (2) and (4), and decreases when the signal is stronger in case (3). The reason is that when the signal is weak, the PLL loses part of the signal and the tracked displacement shrinks, which makes the calculated distance larger.
Hence, based on our measurements in displacement tracking, we calibrate the displacement calculated by the PLL. More specifically, in case (1) the displacement is $d= 1.22 \frac{v_a}{2\pi f}\Delta\phi$ if $d>0$, and $d= 1.69 \frac{v_a}{2\pi f}\Delta\phi$ if $d<0$, where $\Delta \phi$ is the tracked phase shift. Note that we calibrate with a constant factor (\eg, 1.22), since the environment has limited effect on the result of the PLL when the signal is strong enough. However, when $d<0$, which means the speaker is behind the walking user, $d$ is usually not used for position estimation if the tracked phase is abnormal (\eg, when \ourprotocol cannot detect pulses from the phase).
\subsubsection{Device Diversity}
We test several Commercial Off-the-Shelf (COTS) smart devices as acoustic receivers: (1) Nexus 4, (2) Samsung Galaxy Note 2, and (3) Nexus 7. We choose $(X,Y)=(4,4)$ as the start point of walking, and the errors of position estimation are shown in Figures \ref{fig:deviceL} and \ref{fig:deviceA}. The results show that these smart devices have similar performance. We also use normal dummy speakers as acoustic sources in the experiment in a large shopping mall, where they serve as virtual shopping guides.
\noindent \textbf{Calibration of clock drift:} We find an interesting phenomenon: unlike the previous smart devices, the normal speaker has serious clock drift and needs to be calibrated. For instance, when a speaker is supposed to broadcast a signal at 19000Hz, the actual received signal is at 19007Hz. Even if the frequency drift is only 0.1Hz, the distance measurement error is about $600\times340\times0.1/19000=1.07m$ after the smart device has performed synchronization for 10 minutes. To solve this problem, our design of the PLL measures the precise clock offset while the receiver is static for only a few seconds. In this case, $\gamma_3$ in Figure \ref{fig:pll} rapidly converges to a constant value. As $\gamma_3$ equals the phase shift per sampling period $T_s$, the frequency offset equals $\frac{\gamma_3}{2\pi T_s}$. Hence, once we let the smart device stay static for a few seconds, the precise frequency offset is obtained. Afterward, we calibrate the clock drift in real time using this constant frequency offset, and there is no observable clock drift after calibration.
\subsubsection{Environments}
We compare the accuracy of position estimation in the empty room and at different locations in the office, and find similar results. We further evaluate the effects in a shopping mall in a later subsection.
\subsection{Synchronization}
\label{sec:synchronization}
In Figure \ref{fig:syncexp}, we choose 8 locations in an empty room and the office to evaluate the performance of synchronization. For example, E32 means that the experiment is in the room and the distance from the smart device to the speaker is $32m$, and O16 means that it is in the office and the distance is $16m$. In each position we test two cases: the phone is static, or it is moving back and forth without stopping. For each case, the phone records the audio for 100 seconds, which means there are 400 signals for synchronization in the samples. Then, we evaluate the accuracy of pulse detection. For easier interpretation of our results, the error of the arrival time $t_e$ is converted to a distance measurement error $l_e=v_at_e$. For instance, if the error is the time interval of 1 acoustic sample, \ie, $t_e=\frac{1}{44100}s$, the corresponding distance error is $l_e\approx0.8cm$.
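For reference, the two conversions quoted in this subsection can be reproduced directly (a trivial sketch, assuming the 44.1 kHz sampling rate and 340 m/s sound speed used in the text):
\begin{verbatim}
v_a = 340.0                 # speed of sound (m/s)
fs = 44100.0                # audio sampling rate (Hz)
print(v_a / fs)             # distance error per sample of timing error: ~0.8 cm

# Ranging drift caused by an uncalibrated 0.1 Hz carrier offset at 19 kHz,
# accumulated over 10 minutes of synchronization (as in the example above):
f, df, t = 19000.0, 0.1, 600.0
print(t * v_a * df / f)     # ~1.07 m
\end{verbatim}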
\begin{figure}[htpb] \vspace{-0.15in} \begin{subfigure}[b]{0.235\textwidth} \begin{center} \includegraphics[width=1.7in]{fsyncpercentage} \end{center} \vspace{-0.1in} \caption{ $80cm$ criterion. } \label{fig:syncexp1} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \begin{center} \includegraphics[width=1.7in]{fsyncstd} \end{center} \vspace{-0.1in} \caption{Standard Deviation} \label{fig:syncexp2} \end{subfigure} \caption{(a) Percentage of successful experiments at different locations; (b) standard deviation.} \label{fig:syncexp} \vspace{-0.2in} \end{figure}
Since we find that there are occasional significant errors ($>3m$), we first set a threshold $l_t=80cm$ and evaluate the ratio of successful detections, \ie, those with $l_e<l_t$. In Figure \ref{fig:syncexp1}, the successful detection rate is above $80\%$ in most cases when the phone is static. When the phone is moving, the performance is good as well if the distance is within $24m$ in the empty room and $8m$ in the office. In some cases the rate is close to $100\%$. There is also an exception: at location E8, when the phone is static, the rate is only $61.0\%$, while it reaches $100\%$ at the same place when the phone is moving. So, we conduct the experiment again at the same place, and the result is close to the previous one. We suppose it is caused by multipath effects: the phase $\phi_r$ is determined by the mixed signals and becomes stable when the phone is static, which affects the result of pulse matching. The reason for the high success rate when the phone is moving is that, although it is also affected by multipath, the phases of the reflected signals at different positions are irregular. In other words, the PLL locks onto the phase of the signal coming directly from the speaker, \ie, the multipath signal is regarded as noise by the PLL. Hence, the performance is better when the phone is moving. We find that locations E4, E8 and E16 show the same phenomenon, which supports our hypothesis. Actually, this is a good result for \ourprotocol: when the user is walking, the synchronization result is very good and can be directly used for synthesizing; when the user is static, since the successful detection rate is still above $60\%$, \ourprotocol collects enough samples and then determines the most likely receiving time. In Figure \ref{fig:syncexp2}, we show the standard deviation of the results in the case of successful detection. The std in most cases is around $10cm$, except that it is $30.9cm$ and $49.2cm$ when the phone is moving at O4 and O16 respectively.
\subsection{Positioning after Synchronization}
\label{sec:putit}
We evaluate the performance of \ourprotocol using both position estimation and synchronization in the following steps:
\begin{enumerate}
\item The user walks in a line where the initial coordinate of the speaker is $(4,4)$. In this step, we calculate the distance through position estimation and then calculate the sending time of the periodic signals $s_2(t)$ by synchronization.
\item The user then turns, walks and stops at the position where the relative coordinate of the speaker is $(X,Y)$.
\item The user walks again for about $6m$. The position, which is supposed to be $(X,Y)$, is then computed according to the sending time and the samples received during this short walk.
\end{enumerate}
\begin{figure}[htpb] \vspace{-0.2in} \begin{subfigure}[b]{0.235\textwidth} \begin{center} \includegraphics[width=1.8in]{fLerrortotal} \end{center} \vspace{-0.1in} \caption{ Ranging } \label{fig:totallerror} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \begin{center} \includegraphics[width=1.8in]{fAerrortotal.pdf} \end{center} \vspace{-0.1in} \caption{Direction Estimation} \label{fig:totalaerror} \end{subfigure} \caption{Accuracy of positioning by synchronization. } \vspace{-0.15in} \label{fig:totalerror} \end{figure}
We conduct the experiment in the empty room and the office. Specifically, we set $(X,Y)=(4,12)$ and $(4,20)$ in the empty room, and $(4,8)$ and $(4,16)$ in the office, to gather the samples. In Figure \ref{fig:totallerror}, the ranging errors are under $0.32m$ and $0.66m$ at the 80th and 90th percentiles for most cases, which means that both position estimation and synchronization achieve considerable accuracy. There are also occasional errors greater than $2m$ in each case, caused by multipath effects in synchronization. In particular, for the case of $Y=12m$ in the empty room, such large errors account for about $12\%$ of the samples. The corresponding results at E8 and E16 in Figure \ref{fig:syncexp1} show that the successful detection rate there is also much lower than in the other synchronization cases. Actually, since the successful detection rate in synchronization is above $80\%$ in most cases, the result would converge to the correct value given enough time and the abnormal results would be eliminated. Hence, we conclude that the ranging results are very good in these cases.
\subsection{Putting it All Together in a Severe Environment}
\label{subsec:all}
We evaluate \ourprotocol in a shopping mall, where the environment is quite severe for acoustic-based systems: the shopping mall itself broadcasts loud audio, and there are always people walking around who block the line of sight to the speakers or block the path so that we have to change walking direction. Furthermore, as it might affect the business if we set up speakers on the ceiling and conducted frequent debugging (which might yield better results), we only put the speakers at the side of the aisles, as shown in Figures \ref{fig:map} and \ref{fig:shoppingmall}. Hence, our system has to deal with serious NLoS effects.
\begin{figure}[htpb] \vspace{-0.15in} \centering \includegraphics[width=3in]{fmap} \caption{Map of the shopping mall.} \label{fig:map} \vspace{-0.1in} \end{figure}
We evaluate the performance of positioning in two cases: a) \textit{relative} positioning using \textit{one speaker}; b) \textit{absolute} positioning using \textit{5 speakers} (like normal indoor localization). We choose a $35m \times 17m$ area (about $600 m^2$) in Figure \ref{fig:map} and put 5 normal dummy speakers in this area. Each speaker broadcasts signals at a different central frequency; the signals are inaudible and not noticed by surrounding customers. We emulate the behavior of normal shopping users in the evaluation: the experimenter stands at a test point and walks for a few steps (less than $6m$) in a line; then he stops or turns and continues walking, and so on. We gather 8 samples per point. Hence, we can evaluate the performance when leveraging all the walking segments to obtain the position. We set the central frequencies of the speakers to 17000Hz, 18000Hz, 19000Hz, 20000Hz and 21000Hz, respectively. The smart device differentiates the signals by using the BPF subcomponent shown in Figure \ref{fig:overview}.
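As an illustration of this per-carrier separation, a minimal band-pass sketch is given below (assuming scipy.signal and a 44.1 kHz recording; the 400 Hz-wide pass band and filter order are assumptions, not the filter used in our implementation).
\begin{verbatim}
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100.0                                      # recording sampling rate (Hz)

def isolate(audio, fc, half_bw=200.0):
    """Keep only the band around carrier fc, suppressing the other speakers."""
    sos = butter(4, [(fc - half_bw) / (fs / 2), (fc + half_bw) / (fs / 2)],
                 btype="bandpass", output="sos")
    return sosfiltfilt(sos, audio)

# Example: a synthetic mix of two carriers; recover the 18 kHz speaker only.
t = np.arange(int(0.1 * fs)) / fs
mix = np.cos(2 * np.pi * 18000 * t) + np.cos(2 * np.pi * 20000 * t)
speaker2 = isolate(mix, 18000.0)
\end{verbatim}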
For example, to analyze the signal of the second speaker (18000Hz), we set the pass band of the BPF around 18000Hz so that the signals of the other speakers are blocked.
\begin{figure}[htpb] \vspace{-0.25in} \begin{subfigure}[b]{0.235\textwidth} \begin{center} \includegraphics[height=1.2in]{fshoppingmall} \end{center} \vspace{-0.1in} \caption{Shopping mall.} \label{fig:shoppingmall} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \begin{center} \includegraphics[height=1.4in]{fbnherror} \end{center} \vspace{-0.15in} \caption{Position Errors} \label{fig:shoppingmallerror} \end{subfigure} \caption{(a) Shopping mall and the dummy speaker. (b) Result of relative positioning (using 1 speaker), and absolute positioning (using 5 speakers).} \label{fig:shopptingmallpic} \vspace{-0.1in} \end{figure}
The results show that these 5 speakers have quite different performance in relative positioning, even though they are the same product model. The signal of the speaker at 17000Hz covers only $13\%$ of the area, while the signals of the speakers at 19000Hz and 20000Hz cover about $54\%$ and $51\%$ of the area. This diversity may be caused by several factors: anchor positions, quality of the individual anchor speakers, etc. We leave the study of speaker configuration to future work. In total, the average coverage per speaker is $38\%$, which is about $222 m^2$ in our specific area. We show the relative position errors when using one speaker in Figure \ref{fig:shoppingmallerror}. Note that we only calculate the accuracy of the relative position when the starting point is covered by the signal of the speaker. Though we can still estimate the position according to historical positioning results when there is no signal, we exclude the results of this case and report only the direct results. The results show that, for one speaker, the position errors are under $1.2m$ and $2m$ at the 50th and 80th percentiles. The mean error of relative positioning is $1.28m$. We also explore the localization capability when all 5 speakers are used as anchors. We evaluate the errors at all points, and the results show that the position errors are under $1.5m$ at the 90th percentile. Since the average coverage per speaker is $38\%$, the smart device can receive audio from $38\%\times 5\approx 2$ speakers on average. The accuracy is intuitively better when using multiple signals for localization.
\subsection{Overhead}
The computation overhead is mainly caused by 3 components: displacement tracking (including BPF, AGC and PLL), pulse detection and position estimation. We run \ourprotocol using MATLAB R2013a on Mac OS, with a 3.1GHz Intel Core i5 CPU. For 1 second of received samples, phase tracking, pulse detection and position estimation take 0.09s, 0.12s and 0.05s respectively. In fact, there is a trade-off between overhead and accuracy. For example, we can use an infinite impulse response (IIR) BPF instead of a finite impulse response (FIR) one, which reduces the computation overhead significantly but incurs larger errors. For smart devices, it is recommended to send the recorded samples to a cloud server and obtain the result from the cloud, which requires much less local computation and lower energy consumption.
\section{Introduction}
Many approaches to ranging \cite{2013-MobiSys-Guoguoenablingfine,Priyantha:2000:CLS:345910.345917,1097833,6566822, DBLP:conf/mobicom/HarterHSWW99} and direction estimation \cite{Joshi:2013:PLI:2482626.2482651,Xiong:2013:AFI:2482626.2482635, 4509717} have been proposed for indoor localization.
They achieve considerable accuracy but require special hardware as anchors or receivers. A scheme that implements both techniques is an even more challenging problem, and its absence has postponed many attractive applications. For example, in mobile social networks, a user entering a cafe may want to make friends with a nearby person whose online ID is bound to a position, or firefighters may need to search for and localize people who cannot yell for help because of heavy smoke. Recently, some applications have attempted similar functions, such as Google's Latitude \cite{GoogleLatitude} and Facebook's Friendshake \cite{friendshake}, but they are mainly based on GPS and cannot track people in indoor environments. This paper introduces \ourprotocol, the first real-time ranging and direction estimation method that locates an acoustic source using smartphones. The only requirement for the source is that the audio it broadcasts is mixed with non-audible signals. Unlike methods that range between phones \cite{2007-SenSys-BeepBeephighaccuracy,2011-SenSys-feasibilityrealtime}, the source does not need communication or computation capabilities. Hence, commercial off-the-shelf (COTS) speakers can serve as anchors. Compared to Swadloon \cite{DBLP:journals/corr/HuangXLLMYL13}, \ourprotocol achieves real-time direction estimation: the user does not need to shake the phone to learn the direction. Moreover, \ourprotocol implements both ranging and direction estimation using only one anchor node. The key insight of our paper is that when the user walks in a line, the pattern of the relative displacement from the user to the source is related to the position of the source. For example, in Figure \ref{fig:example}, the user walks and steps at $O_1$, $O_2$, and $O_3$. The relative displacements $d_1=l_1-l_2$ and $d_2=l_2-l_3$ are measured and the user's walking length, \eg, $|\overline{O_1O_2}|$, is given. Intuitively, $d_1\approx0$ implies that $O_1$ and $O_2$ are close to $H$, where $\overline{AH} \bot \overline{O_1O_2}$, and $d_2<0$ implies that the source is behind the walking user. Together they give the coarse-grained direction. Another intuition is that when the distance $|\overline{AH}|$ becomes larger, $|d_2-d_1|$ becomes relatively smaller, which gives the coarse-grained distance. Hence, the position of the source $A$ relative to the walking line $\overline{O_1O_3}$ can be estimated. Moreover, if the source serves as an anchor and its coordinates in the room are given, the real-time position of the user (\eg, $O_1$, $O_2$, $O_3$,\dots) can be inferred.
\begin{figure}[htpb] \vspace{-0.1in} \begin{center} \includegraphics{fexample-crop.pdf} \end{center} \caption{Example of relations between relative displacement and positions.} \label{fig:example} \vspace{-0.15in} \end{figure}
As the displacement is small for each of the user's steps and the step length is not strictly constant, the practical challenge is to obtain the position when the user walks for very few steps in a line. Especially when the user is far from the source, the measured displacements are more sensitive to noise and $|d_2-d_1|$ becomes so small that even a tiny measurement error would cause large errors in positioning. Hence, accurate displacement measurement and a robust locating algorithm are required.
To track the relative displacement, the source audio is mixed with a sine signal, and we use a phase locked loop (PLL) to track the phase of the signal, which corresponds to the displacement. Unlike \cite{DBLP:journals/corr/HuangXLLMYL13}, which uses a first-order PLL for tracking the phase, we adopt a second-order PLL, which avoids jitters and shows improved accuracy when the signal is weak. Then the distance and direction are estimated from the relative displacement when the user walks in a line. When the user walks along longer paths consisting of several line segments, there is a chance to further improve the accuracy and robustness. In this case, we synthesize all the estimations with the proposed synchronization scheme. Specifically, the sine wave in the audio is modulated with periodic pulses. The phone detects and records the arrival times of these pulses. In synthesizing, all estimations are used to compute the sending time of the pulses. Hence, when the computed sending time is accurate enough, the distance can be derived from the receiving time and the predicted sending time of the pulse, instead of from the previous distance estimation. Finally, the real-time position is computed. We evaluate all the components under several types of cases and then the overall performance of \ourprotocol. The extensive experiments show that in position estimation, the ranging and angle errors are under $0.99m$ and $3.7^o$ at the 80th percentile respectively. In synchronization, the successful detection rate is above $80\%$ in most cases and the corresponding standard deviation is around $10cm$. For the overall performance, the ranging errors are under $0.16m$, $0.32m$, $0.54m$ at the 50th, 80th and 90th percentiles respectively, and the mean error is $0.71m$. The angle errors are under $1.05^o$, $2.81^o$, $5.02^o$ at the 50th, 80th and 90th percentiles respectively, and the mean error is $2.20^o$. The rest of the paper is organized as follows: we review the related work in Section \ref{sec:relatedwork} and present the overview of \ourprotocol in Section \ref{sec:overview}. We propose the displacement tracking method in Section \ref{sec:pll} and then present the position estimation based on displacement tracking in Section \ref{sec:posest}. We present the synchronization and synthesizing in Section \ref{sec:sync}. We report our extensive experimental results in Section \ref{sec:exp}. We conclude the paper in Section \ref{sec:conclusion}.
\section{Introduction}
With the rapid development of smartphones and wearable devices, attractive Augmented Reality (AR) apps have been developed, \eg, Sky Map, Wikitude, and Augmented Car Finder. One of AR's key features is to display useful information about a person's surroundings, which relies on location information. For example, Wikitude uses GPS and inertial sensors to provide interactive information about objects that are seen through the camera of smart devices. In this paper, we explore localization techniques to enable more kinds of AR applications on smart devices. For instance, a person walking in a large shopping mall can be guided by a virtual shopping guide that recommends surrounding goods that are new arrivals or on sale, or can share her/his virtual business card with people walking around at a party. Such applications require knowledge of the \textit{relative position} between targets (\eg, goods, persons) and inquiring users.
However, current localization systems cannot be applied directly to \textit{relative positioning} in a satisfactory way due to various limitations. For instance, these systems can only be used in places where dedicated infrastructure has been deployed, or require feature-rich hardware to serve as the target. More specifically, GPS can calculate the location of outdoor users, but is unavailable in indoor environments. Pure WiFi-based indoor localization can achieve 3$\sim$4 meter accuracy in absolute positioning, and large errors (\eg, 6$\sim$8m) always exist \cite{2012-MOBICOM-PushlimitWiFi}. So the errors are much greater than 4 meters when inferring the \textit{relative position} from the \textit{absolute positions} of the smartphone and the target. Other indoor localization schemes \cite{Xiong:2013:AFI:2482626.2482635,2013-MobiSys-Guoguoenablingfine} are accurate enough (\eg, sub-meter accuracy), but require special-purpose infrastructure or hardware. There are schemes calculating the relative direction (\eg, Swadloon \cite{DBLP:journals/corr/HuangXLLMYL13}) and distance (\eg, BeepBeep \cite{2007-SenSys-BeepBeephighaccuracy}). However, Swadloon requires unusual behavior from the querying user (\ie, a phone-shaking movement) before obtaining the direction of a target. More importantly, Swadloon cannot obtain the distance to the target. Though BeepBeep can be added for calculating the distance, it requires the target to have rich functions, such as broadcasting and receiving acoustic signals, communication for exchanging data and computation for processing data. This is feasible when the target is a smartphone, which has all these functions, but other applications, such as a shopping guide, may prefer a cheaper target device with far fewer functions. We propose and implement \ourprotocol, which calculates the \textit{relative position} from a user with a smart device to a target for Augmented Reality. \ourprotocol does not need any deployed infrastructure, and its application is not limited to particular places. The only requirement of \ourprotocol is that the target is attached to a dummy speaker that broadcasts audio, which can be received by the smart device and then processed to directly infer the relative position. The dummy speaker merely broadcasts audio without requiring any other features, \eg, audio recording, communication or computation. Hence, such speakers are widely available, and some of them are cheap and simple, such as the speakers embedded in users' smart devices, or even a loudspeaker originally used for sales promotion in a shopping mall. Moreover, the broadcast audio is inaudible, so the loudspeaker, which used to be a noisy tool for sales promotion, can now do the same job ``silently'' by ``broadcasting'' its relative position. Our work is based on the observation that when a user walks, the distance between the object and the user changes, and the pattern of \textit{displacement} (variation of distance) relates to the relative position. In other words, by letting a device receive and analyze the signal (audio mixed with a non-audible signal) broadcast by a target (speaker), we are able to track the displacement and further compute the relative position accurately and efficiently, \ie, performing both ranging and direction estimation at the same time. However, we have to solve a number of issues in our scheme. First, since the displacement is relatively small, the practical challenge is how to obtain the position precisely when a user walks for only very few steps.
Second, when the user is far from the speaker, the measured displacement is prone to noise and the differences between displacements become harder to distinguish, such that even a tiny error in the measurement could cause large errors in positioning. Hence, both an accurate displacement measurement scheme and a robust positioning strategy are needed. For accurate displacement measurement, we track the phase of the signal (corresponding to the displacement) using a second-order Phase Locked Loop (PLL), which avoids jitters and retains high accuracy when the signal is weak. Hence, the distance and direction can be computed accurately when a user is close to a speaker. Next, as mentioned above, the estimated position may have a larger error when a user is far from a speaker. In this case, we adopt two strategies to further improve the accuracy and robustness. One is to utilize the measurement results obtained when the user was close to the speaker, if available. Otherwise, we synthesize all the estimations (over the longer path the user has passed) using the synchronization scheme. The main idea is based on the following observation: the distance can be obtained from the difference between the sending time and the receiving time of a signal, where the receiving time is computed directly but the sending time is unknown to the receiver; we therefore add a periodic pulse to the audio to obtain the sending time in a novel way. Specifically, when a good estimation is obtained, the distance along with the sending time of the pulse is calculated. Hence, the sending times of the later periodic pulses are predicted, which in turn yield the distance from the sending and receiving times of the pulses. \ourprotocol also addresses a number of practical issues based on the main solution:\\
\textbf{The user frequently turns the walking direction:} we provide an enhanced algorithm that combines all the short linear segments in different directions to obtain the position.\\
\textbf{Multipath effects on pulse detection:} \ourprotocol detects the arrival times of all the pulses, including the pulse directly from the sender as well as the reflected ones. It then eliminates the false pulses by leveraging the result of the PLL.\\
\textbf{Non-Line-of-Sight (NLoS):} \ourprotocol uses historical position results and infers the current position by additionally using inertial sensors; once the smart device is within the coverage of the signal, \ourprotocol updates the accurate position by synchronization.\\
\textbf{Device diversity:} The main problem is the serious clock drift of normal dummy speakers; without calibration the receiver obtains wrong receiving times of the periodic pulses and hence wrong distances in synchronization. We leverage the result of the PLL and calibrate the clock precisely while the receiver is static for only a few seconds. \\
\textbf{Device orientation:} Different orientations of the speaker or receiver affect the quality of the received signal. We find that the quality mainly affects the result of the PLL and hence the displacement. More specifically, when the signal quality is poor for a certain orientation, the tracked displacement becomes smaller than the real displacement. To enhance the accuracy, we calibrate the tracked displacement based on our measurements. \\
\textbf{Conflicts of multiple signals:} In \ourprotocol, the periodic pulses used in synchronization occupy bandwidth, which limits the number of co-existing signals.
We carefully design the pulse so that it occupies a narrow bandwidth, and also show how to support a larger number of co-existing speakers. \\
\textbf{Noisy environment:} \ourprotocol uses a Band Pass Filter to eliminate the noise, and it works well in the noisy shopping mall. We implement \ourprotocol, evaluate the performance of all the components separately under several types of cases, and then evaluate the overall performance of \ourprotocol.
\\\textbf{a).} When a user is within 8 meters of a speaker, the mean errors of ranging and direction estimation are $0.63m$ and $2.45^o$. This shows considerable accuracy for a user sharing a virtual business card with surrounding users.
\\\textbf{b).} When the user is within $20m$ and uses synchronization for positioning, the ranging and direction estimation errors are less than $0.32m$ and $2.81^o$ at the 80th percentile respectively. Note that the results in this case only reflect the accuracy of the synchronization subcomponent, not the total accuracy of \ourprotocol.
\\\textbf{c).} We combine all the work together and evaluate \ourprotocol in a severe environment, \ie, the noisy shopping mall. We conduct the experiment in 2 cases:
\begin{itemize}
\item \textbf{c1).} \textit{Relative} positioning of \textit{one} speaker;
\item \textbf{c2).} \textit{Absolute} positioning using \textit{multiple} speakers (\ie, ordinary indoor localization).
\end{itemize}
We put 5 dummy speakers in a $600m^2$ area, and the deployment positions of the speakers are constrained (only at the side of the aisles, rather than on the ceiling). Even in this case, by using only \textit{one} of these speakers, \ourprotocol achieves a mean error of $1.28m$ for the \textit{relative} position. Hence, a user knows the accurate relative position of a virtual shopping guide attached to a dummy speaker. To explore the possibility of \textit{absolute} positioning, we also use all 5 speakers as anchors for localization, and the mean error is $0.89m$, where each position is covered by the signals of fewer than 2 speakers on average. Since normal anchor-based systems only obtain the distance or the direction but cannot calculate both metrics, they require at least 3 anchors for trilateration. This shows another advantage of \ourprotocol: it remains robust when the anchors are sparsely deployed. The rest of the paper is organized as follows. We first present the overview of \ourprotocol in Section \ref{sec:overview}. We propose the position estimation based on displacement tracking in Section \ref{sec:posest} and the displacement tracking method in Section \ref{sec:pll}. We give the details of the synchronization in Section \ref{sec:sync}. We report our extensive experimental results in Section \ref{sec:exp}. We review related work in Section \ref{sec:relatedwork}. We conclude the paper in Section \ref{sec:conclusion}.
\section{Architecture Overview DELETED}
\label{sec:overview-old}
The key insight of our paper is that when a user walks along a line, the pattern of displacements from the user to an acoustic speaker is directly related to the relative position. For example, in Figure \ref{fig:example}, a user walks and steps at $O_1$, $O_2$, and $O_3$. The displacements $d_1$(=$l_1-l_2$) and $d_2$(=$l_2-l_3$) are measured and the user's stride length ($|\overline{O_1O_2}|$) is given.
\section{Overview} \label{sec:overview} \subsection{Problem Description} \ourprotocol calculates the \textit{relative position} between a user holding a smart device and a target attached to a dummy speaker, where the relative position can be decomposed into the distance and direction from the smart device to the dummy speaker. The dummy speaker merely broadcasts inaudible audio and requires no other features. The smart device has a microphone and inertial sensors (\ie, compass, accelerometer, gyroscope), which are common components in almost all smart devices. \subsection{Intuitive Solution} \label{sec:subsolution} The key insight of our paper is that when a user walks along a line, the pattern of displacements from the user to a dummy speaker is directly related to the relative position. We illustrate the intuitive solution with a simple case in Figure \ref{fig:example}, where a user walks and steps at $O_1$, $O_2$, and $O_3$. Suppose the displacements $d_1$(=$l_1-l_2$) and $d_2$(=$l_2-l_3$) have been measured and the user's stride length ($|\overline{O_1O_2}|$) is given. Intuitively, $d_1\approx0$ implies that $O_1$ and $O_2$ are close to $H$, the point where $\overline{AH} \bot \overline{O_1O_2}$, while $d_2<0$ tells us that the speaker is behind the walking user; this yields the coarse-grained direction. Another observation is that as the distance $|\overline{AH}|$ increases, the value of $|d_2-d_1|$ decreases, which likewise yields the coarse-grained distance. Hence, the relative position between $O_1$ and $A$ can be estimated. \subsection{Main Technical Issues} From the above example, the following main technical issues need to be solved: \noindent \textbf{Formal solution of relative positioning (Section \ref{sec:posest}).} Given the real-time relative displacement, we need to calculate the precise relative position rather than a coarse-grained one. \noindent \textbf{Tracking the relative displacement (Section \ref{sec:pll}).} The relative displacement needs to be tracked before relative positioning. Note that, to the best of our knowledge, current approaches cannot directly obtain the distance $l_1$ without synchronization between the receiver and the dummy speaker; they require additional capabilities of the speaker, such as a communication channel to exchange synchronization information \cite{2007-SenSys-BeepBeephighaccuracy}. Instead, we calculate the displacement $d_1 (=l_1-l_2)$ with the PLL. \noindent \textbf{Extended solution for longer distances (Section \ref{sec:sync}).} When $|\overline{AH}|$ becomes much longer, $|d_2-d_1|$ becomes much smaller. Since there are errors in tracking the displacements $d_1$ and $d_2$, the ideal case is that a small change in distance corresponds to a large value of $|d_2-d_1|$, which yields a high accuracy of the calculated distance.
However, when $|\overline{AH}|$ becomes much larger, the opposite holds: a tiny error in measuring $d_1$ or $d_2$ results in a large error in the calculated $|\overline{AH}|$. Hence, the ranging accuracy declines as $|\overline{AH}|$ grows, and we need an extended solution in this case. Note that the accuracy of direction finding is not much affected, so we mainly propose the extended solution for ranging. \begin{figure}[t] \begin{subfigure}[h]{0.14\textwidth} \begin{center} \includegraphics[width=1.0in]{fexample.pdf} \end{center} \caption{Brief Example. } \label{fig:example} \end{subfigure} \begin{subfigure}[h]{0.24\textwidth} \begin{center} \includegraphics[width=2.6in]{farchitecture} \end{center} \vspace{-0.1in} \caption{Architecture of \ourprotocol.} \label{fig:overview} \end{subfigure} \caption{Example of relations between displacement and relative positions, and architecture of \ourprotocol.} \vspace{-0.2in} \end{figure} \subsection{Architecture} To address these technical issues, we divide \ourprotocol into 3 main components, shown in Figure \ref{fig:overview}: input from the smart device, acoustic processing, and the positioning scheme. \noindent \textbf{Input:} The microphone and inertial sensors are used in \ourprotocol. The microphone records audio for acoustic processing. The inertial sensors mainly serve as a step counter, recording the times when the user steps on the ground. When the user turns, the angle of the user's rotation is also computed by the gyroscope. \noindent \textbf{Acoustic processing:} This component generates intermediate results for the positioning scheme. One result is the \textit{relative displacement}, which is tracked by analyzing the recorded audio (Section \ref{sec:pll}). The audio first passes through a Band Pass Filter (BPF), which keeps the signal at the specified frequency and eliminates other signals, including human voice and background noise. The filtered signal is then processed by Automatic Gain Control (AGC), so that its amplitude becomes nearly constant. The signal then passes through our carefully designed Phase Locked Loop (PLL), which tracks the phase of the signal; this phase is proportional to the relative displacement. Another intermediate result provides additional information for the extended solution. More specifically, we encode periodic pulses in the sent signal, and the smart device detects the corresponding pulses to determine the \textit{receiving time of the pulses} (Section \ref{sec:sync}). The difficulty is that each pulse should occupy very little bandwidth, otherwise the number of concurrent speakers is severely limited. We carefully encode the signal to solve this problem and design the pulse detection algorithm to determine the receiving times of the pulses precisely. Note that pulse detection analyzes the tracked phase output by the PLL, because we modulate the phase directly, rather than the raw audio, to encode the pulses and save bandwidth. \noindent \textbf{Positioning scheme:} This scheme computes the position from the intermediate results. It first estimates the position using the relative displacements and the user's step times, leveraging the intuitive solution of Section \ref{sec:subsolution}, which is formalized in Section \ref{sec:posest}. Then, if the computed distance is short ($<8$\,m), the calculated position is accurate enough and is accepted as a valid result. Otherwise, the calculated direction is accurate, but the calculated distance is not.
In this case, the scheme invokes the synchronization of Section \ref{sec:sync} to compute the relative position. Synchronization uses the historical relative-position results to infer the distance. By additionally using the historical receiving times of the pulses, the sending time of the periodic pulses is then calculated. The accurate distance is finally inferred from the detected receiving time and the predicted sending time of the current pulse. \section{Related Work} \label{sec:relatedwork} \subsection{Ranging} There have been many localization systems based on ranging \cite{2013-MobiSys-Guoguoenablingfine,Priyantha:2000:CLS:345910.345917,6566822, DBLP:conf/mobicom/HarterHSWW99}. They achieve considerable ranging accuracy, but require special hardware for synchronization. Specifically, the sender records the sending time of the signal used for ranging, while the receiver detects its arrival time. Each device determines the sending or arrival time independently, without reference to timing information on the other devices; hence, synchronization among devices is needed. In the Bat system \cite{DBLP:conf/mobicom/HarterHSWW99}, the base station uses a radio channel and communication for synchronization. Cricket \cite{Priyantha:2000:CLS:345910.345917} uses a special device to send an RF signal together with an ultrasound signal at the same time; the receiver then obtains the distance from the different traveling times of the two signals. Guoguo \cite{2013-MobiSys-Guoguoenablingfine} uses RF signals to synchronize all the acoustic anchors, so that the location can be obtained from the differences in receiving times at the phone. BeepBeep \cite{2007-SenSys-BeepBeephighaccuracy} calculates the distance between two phones; it solves the synchronization problem by letting the two phones emit acoustic signals and exchange the sending and receiving times via a wireless channel. \ourprotocol uses a dummy speaker to implement synchronization and ranging. The synchronization information is obtained through a novel position estimation method that does not need any special hardware or additional communication channels. The other difference is that these systems rely only on ranging results from anchors, which requires multiple speakers ($\ge 3$), whereas \ourprotocol also implements direction estimation from the phone to the speaker, so only one speaker is needed for localization. \subsection{Direction Estimation} Most direction estimation methods also require specialized hardware, such as directional antennas \cite{4711074,4509717,Niculescu:2004:VBS:1023720.1023727} or antenna arrays \cite{Joshi:2013:PLI:2482626.2482651,Xiong:2013:AFI:2482626.2482635, 4509717}. For example, by rotating the beam of a directional antenna, a receiver can pinpoint the direction of the AP as the direction that provides the highest received strength \cite{4509717}. For antenna arrays \cite{Joshi:2013:PLI:2482626.2482651,Xiong:2013:AFI:2482626.2482635, 4509717}, the receiving time of the signal differs across antennas, and the magnitude of the difference corresponds to the angle of arrival of the signal. There have also been proposals that do not require specialized hardware. \cite{2011-MOBICOM-Iamantenna} emulates the functionality of a directional antenna by rotating the phone around the user's body to locate outdoor APs.
\cite{2011-SenSys-feasibilityrealtime} leverages the multiple microphones of a smartphone and communication channels for positioning within 4 meters, targeting short-distance positioning and phone-to-phone games. Some other methods leverage Doppler effects by swinging \cite{2012-MobiQuitous2011-ProposalDirectionEstimation} or shaking \cite{DBLP:journals/corr/HuangXLLMYL13} the phone. \cite{DBLP:conf/mobicom/ZhangLHLZJFJL14} calculates direction from head nodding or shaking using smart glasses. These approaches are based on the different frequency shifts observed when the phone moves in different directions. Compared to \cite{2012-MobiQuitous2011-ProposalDirectionEstimation,DBLP:journals/corr/HuangXLLMYL13}, \ourprotocol goes a step further: a user can obtain the direction without any additional actions on the phone, so that s/he gets the real-time direction while walking. Furthermore, \cite{DBLP:journals/corr/HuangXLLMYL13} also requires only speakers as anchors, but does not address the ranging problem, whereas \ourprotocol can compute both the direction and the distance from the phone to the speaker. \section{Positioning by Synchronization} \label{sec:sync} Although we synthesize all the walking segments as the user walks and turns, the method accumulates errors when we estimate a later position from the previous position, the estimated walking directions, and the walking steps. Especially when the user is far away and loses the signal from the speaker for a long time, the error grows and the historical measured position can no longer be used. To solve this problem, we propose a synchronization mechanism that leverages historical measurements to improve the robustness of \ourprotocol. In synchronization, we additionally encode periodic pulses $s_2(t)$ in the sent signal and propose a demodulation method to detect the receiving times of the pulses. Since the pulses are periodic, the sending times of later pulses can be predicted once we accurately estimate the sending time of one pulse. Hence, using samples from which an accurate position can be computed directly, we obtain the estimated distance, which gives the traveling time $t_l$ from the speaker to the phone. We then detect the receiving time $\tau'$ of one pulse in these samples and obtain the accurate sending time of the pulse, $\tau=\tau'-t_l$. Furthermore, the sending time of the later $i$th pulse equals $\tau_i=\tau+iT$, where $T$ denotes the period of the pulses. Hence, upon obtaining the receiving time $\tau_i'$ of the $i$th pulse, we finally obtain the real-time distance from $\tau_i$ and $\tau_i'$ instead of the estimation method of Section \ref{sec:posest}. \subsection{Pulse Modulation} \label{sec:aaaa} To design the synchronization pulses $s_2(t)$ and the detection algorithm, several problems must be addressed: \begin{itemize} \item Each speaker should not occupy much acoustic bandwidth, so that more speakers can be supported in the same room. Hence, $s_1(t)$ and $s_2(t)$ should share the same frequency band, otherwise additional bandwidth for $s_2(t)$ is needed. Moreover, the bandwidth of $s_2(t)$ needs to be narrow. This is challenging, because $s_2(t)$ would normally need to occupy more bandwidth to be detected reliably. \item $s_2(t)$ must still be usable for displacement tracking by the PLL; otherwise, the PLL would lose phase lock when processing $s_2(t)$.
\end{itemize} Based on these requirements, we design $s_2(t)$ as: \begin{equation} s_2(t)=\begin{cases} \cos(2\pi ft+ \pi\sin\frac{\pi(t-\tau_i)}{T_p}) & \tau_i \le t \le \tau_i+ T_p \\ \cos(2\pi ft) & \textrm{otherwise} \end{cases} \end{equation} where we construct pulses starting at $\tau_1, \dots, \tau_i$, and the duration of each pulse is $T_p$. \begin{figure}[htpb] \begin{subfigure}[b]{0.5\textwidth} \begin{center} \includegraphics[width=2.5in]{fillthetagood-crop} \end{center} \vspace{-0.1in} \caption{Measured Displacement} \label{fig:fthetagood} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{center} \includegraphics[width=2.5in]{fmtgood.pdf} \end{center} \vspace{-0.1in} \caption{Detected Pulses} \label{fig:fmtgood} \end{subfigure} \caption{Calculated displacement and pulses from the same signal. } \label{fig:thetam} \vspace{-0.2in} \end{figure} More specifically, Figure \ref{fig:thetam} shows an example of pulse detection when the user moves the phone forward and backward twice and then stops. We encode three adjacent pulses every $T_2=0.25$\,s. The three adjacent pulses can be regarded as one compensated periodic pulse with period $T=T_2=0.25$\,s. The time difference between adjacent pulses is $T_3=0.03$\,s. In Figure \ref{fig:fthetagood}, the estimated displacement is smooth and shows no jitter, whether the phone is static or moving. We zoom in on the calculated phase to show the behavior of the PLL when pulses are present in $s_2(t)$: the calculated phase does not lock onto the real phase; instead, it is as if the PLL has not noticed the pulses at all, since the tracked phase remains very smooth. Specifically, while the maximum variation of the real phase is $\pi$, the corresponding variation computed by the PLL is less than 0.4\,rad, which corresponds to a displacement of about $1$\,mm. The cause of this phenomenon is that the PLL parameters ($k_1$, $k_2$) are very small, so the loop does not track the fast-changing phase. Moreover, since the phase at the beginning of a pulse equals the phase at its end and the variation tracked by the PLL is small, the tracked phase eventually settles back to the same value it had at the beginning. \subsubsection{Proof of Properties of the Modulated Pulse} We show that the pulse $s_2$ does not occupy much acoustic bandwidth and has little effect on the displacement tracked by the PLL. First, the central frequency of $s_2$ is the same as that of $s_1$, except that the phase changes when there is a pulse. Hence, $s_1$ and $s_2$ share the same frequency band. Second, since the bandwidth of the pulse is about $\frac{\pi}{T_p}$ \cite{DBLP:journals/tim/SahuG08}, we set $T_p=0.007$\,s so that the bandwidth is about 460\,Hz. As the minimum frequency is 17000\,Hz for the signal to be inaudible, and the maximum frequency supported by the phone is 24000\,Hz, the maximum number of concurrent signals that \ourprotocol supports in one place is $(24000-17000)/460 \approx 15$. If the pulse had a narrower bandwidth, \ourprotocol would support more concurrent signals, but the pulse would become harder to detect. How to modulate signals with a narrower bandwidth and demodulate them more accurately is left for future work. Third, the component $s_3(t)=\pi\sin\frac{\pi(t-\tau_i)}{T_p}$ is the phase shift of the sine signal. $s_3(t)$ starts and ends at the same value $0$, and its maximum value is $\pi$. Hence, the displacement is theoretically unaffected by the pulse.
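To make the pulse construction above concrete, the following minimal Python sketch (our own illustration; the carrier frequency and sample rate are assumptions, since only $T_p$ and the 17--24\,kHz band are fixed by the text) synthesizes one pulse segment of $s_2(t)$, confirms that the phase shift $s_3(t)$ starts and ends near zero with a maximum of $\pi$, and reproduces the back-of-the-envelope count of roughly 15 concurrent signals.
\begin{verbatim}
import numpy as np

# Assumed illustration values: the text fixes T_p = 0.007 s and the 17-24 kHz
# inaudible band; the carrier f and the sample rate fs are our own choices here.
fs, f, T_p = 48000, 20000, 0.007

t = np.arange(0.0, T_p, 1.0 / fs)
s3 = np.pi * np.sin(np.pi * t / T_p)     # phase shift added during a pulse
s2 = np.cos(2 * np.pi * f * t + s3)      # one pulse segment of s_2(t)

# The phase excursion starts and ends near 0 and peaks at pi, so the slow PLL
# sees no net phase change once the pulse has passed.
print(round(s3[0], 3), round(s3[-1], 3), round(s3.max(), 3))   # 0.0, ~0.03, ~3.14

# Back-of-the-envelope count of concurrent speakers from the quoted bandwidth:
bw = np.pi / T_p                          # ~449 Hz ("about 460 Hz" in the text)
print(int((24000 - 17000) // bw))         # ~15 co-existing signals
\end{verbatim}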
\subsubsection{Discussion of Pulse Modulation} \textbf{Choosing Parameters:} There is a trade-off in choosing the parameters $T_p$, $T_1$, $T_2$, $T_3$; we analyze the choice of each parameter as follows: \begin{inparaenum}[\itshape a\upshape)] \\\indent\item{$T_p$:} Since the bandwidth of the pulses equals $\frac{\pi}{T_p}$, a smaller $T_p$ results in a wider bandwidth requirement and fewer simultaneous signals in the same room. On the other hand, a larger $T_p$ reduces the accuracy of displacement tracking, because the pulses are treated as noise during displacement tracking. \\\indent\item{$T_1$:} Since there are 3 adjacent pulses in one compensated periodic pulse, $T_1=T_2-3T_3$. \\\indent\item{$T_2$:} Recall that $T_2=T$, the period of the compensated pulses. A smaller $T_2$ improves the accuracy of measuring the receiving times of the pulses, since more pulses are available for matching. However, if we choose a smaller $T_2$, we may face an ambiguity problem. Specifically, denote the receiving time of a pulse by $t_r$ and the sending times of the periodic pulses by $t_s+kT_2$. The calculated distance is $v_a(t_r-t_s-kT_2)$, where $k$ is an undetermined integer, which makes the distance undetermined as well. To determine $k$, we further leverage the maximum distance from the speaker to the receiver, denoted as $l_m$. Since $v_a(t_r-t_s-kT_2)<l_m$, the distance has a unique solution only if $v_aT_2$ is greater than $l_m$. In our paper, we assume $l_m=85$\,m, which implies $T_2=0.25$\,s. \\\indent\item{$T_3$:} $T_3$ has a lower bound. First, to avoid overlaps of adjacent pulses, $T_3>T_p$. Second, there should also be intervals between adjacent pulses. Zooming in on Figure \ref{fig:fthetagood}, we find that the PLL needs longer than the duration of a pulse to lock the displacement back to the real value after the pulse terminates. Hence, if adjacent pulses are too close, the PLL may become very unstable. If $T_3$ increases, then for $T_1=T_2-3T_3>0$ to hold, $T_2$ may also have to increase, which affects the performance of \ourprotocol. \end{inparaenum} \noindent\textbf{Reducing Signal Conflicts:} As explained earlier, due to the bandwidth limitation, our default pulse modulation parameters support 15 concurrent signals. To reduce signal conflicts, further optimizations can be made for different applications: \begin{inparaenum}[\itshape a\upshape)] \\\indent\item Virtual business card sharing: In this case, users are usually close to each other, and we can choose to narrow the bandwidth of the synchronization pulses. Hence, \ourprotocol can support more users broadcasting signals simultaneously, at the cost of only the accuracy of pulse detection and long-distance positioning, which are not much needed here. \\\indent\item Virtual shopping guide: If more shopping guides are required, we can use only a few speakers for ordinary indoor localization instead of just relative positioning. Our evaluations in Section \ref{subsec:all} show that \ourprotocol supports an unlimited number of shopping guides with a simple and sparse deployment of speakers, \ie, the smart device receives signals from only 2 speakers on average, yet achieves 1-meter accuracy. \end{inparaenum} \subsection{Pulse Detection} We now discuss how to detect the receiving time $\tau_i'=\tau_i+t_l$ of the $i$th pulse by leveraging the component $s_3(t)$.
Assuming the phase locked by the PLL is $\phi_r$ before the pulse starts, the expected pulse is $\tilde r(t)=\cos(2\pi f t+\phi_r+\pi \sin \frac{\pi(t-\tau_i')}{T_p})$. Hence, for the received samples $r(kT_s)$, we compute the likelihood $m(kT_s)=\sum_{i=k}^{k+T_p/T_s}r(iT_s)\tilde r (iT_s)$; when $m(kT_s)$ reaches its maximum, the corresponding $kT_s$ is the starting time of the received pulse. Note that if we set the expected pulse to $\hat r(t)=\cos(2\pi f t+\phi_r+\pi)$ and there is no pulse in the next $T_p$, so that $r(t)=\cos(2\pi f t+\phi_r)$, then $\hat m(kT_s)=\sum_{i=k}^{k+T_p/T_s}r(iT_s) \hat r (iT_s)$ reaches its minimum. In fact, $s_3(t)$ is a filtered version of the pulse $\hat s_3(t)=\pi$, with a narrower bandwidth. Accordingly, $\tilde r(t) \approx \hat r(t)$, which means $m(t)$ reaches a value close to the minimum when there is no pulse in the next $T_p$. Hence, the arrival time $\tau_i'$ of the pulse shape can be detected from $m(t)$. \subsubsection{Analysis of the Pulse Detection Design} As mentioned earlier, our PLL treats $s_2(t)$ as noise and only tracks $s_1(t)$. This yields two advantages: 1) the pulses have very little effect on the tracked displacement; 2) since the variation is very small and $\phi_r$ remains stable when pulses are present, the peaks of $m(t)$ become easy to detect. In Figure \ref{fig:fmtgood}, $m(t)$ reaches its peak value (\ie, 150) when there is a pulse at $t$ and its bottom value (\ie, -50) when there is almost no pulse. Overall, this shows an interesting result: when demodulating $s(t)$, the peaks of $m(t)$ are very clear for synchronization in Figure \ref{fig:fmtgood}, while the corresponding calculated phase is very smooth for displacement tracking in Figure \ref{fig:fthetagood}. We also find that when the phone is static, the peaks corresponding to the pulses are clear, whereas they become unclear when the phone is moving. Furthermore, when the signal is weak, the periodic peaks cannot be detected from $m(t)$ in Figure \ref{fig:fmtbad} because of noise. Hence, we provide a further solution to make the peaks clearer when the phone moves or the signal is weak. The solution is based on the observation that the expected peaks still appear at the expected times, even though they are buried in the noise, whereas random peaks rarely appear periodically. Hence, we compute $m_1(t)=m(t-T_3)+m(t)+m(t+T_3)$ in Figure \ref{fig:fmtbad1}, where the peaks become easier to identify, and then $m_2(t)=m_1(t-T_2)+m_1(t)+m_1(t+T_2)$ in Figure \ref{fig:fmtbad2}, where the peaks can be detected easily. Moreover, when the phone is moving and the corresponding phase is as in Figure \ref{fig:fthetagood}, the peaks are also very clear in Figure \ref{fig:fmtgood2}.
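As a rough, self-contained illustration of this detection statistic (not the authors' implementation), the Python sketch below simulates a static phone receiving a single pulse and recovers its arrival time by correlating each window with the expected pulse; the carrier frequency, sampling rate, and locked phase are assumed values.
\begin{verbatim}
import numpy as np

# Minimal sketch of the detection statistic m(kTs); carrier f, sample rate fs
# and the locked phase phi_r are assumptions, while T_p follows the text.
fs, f, T_p = 48000, 20000, 0.007
Ts, phi_r, tau_rx = 1.0 / fs, 0.3, 0.040

t = np.arange(0.0, 0.1, Ts)
n_p = int(T_p / Ts)
s3_win = np.pi * np.sin(np.pi * np.arange(n_p) * Ts / T_p)   # pulse phase shape

def s3(t, tau):
    """Phase shift of a pulse starting at tau; zero outside the pulse."""
    inside = (t >= tau) & (t <= tau + T_p)
    return np.where(inside, np.pi * np.sin(np.pi * (t - tau) / T_p), 0.0)

# Simulated received carrier on a static phone with one pulse arriving at tau_rx.
r = np.cos(2 * np.pi * f * t + phi_r + s3(t, tau_rx))

# Correlate every window with the expected pulse starting at k*Ts.
m = np.array([np.dot(r[k:k + n_p],
                     np.cos(2 * np.pi * f * t[k:k + n_p] + phi_r + s3_win))
              for k in range(len(r) - n_p)])

print(round(t[int(np.argmax(m))], 4))   # ~0.04 s, the detected arrival time
# m_1(t) and m_2(t) in the text additionally sum m at offsets T_3 and T_2
# to make weak or noisy peaks periodic and hence easier to pick out.
\end{verbatim}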
\subsubsection{Dealing With Multipath Effects} We also find that the result of synchronization is affected by multipath effects, especially when the smart device is static. Hence, we study and improve pulse detection further. \begin{figure}[htpb] \vspace{-0.1in} \begin{subfigure}[b]{0.23\textwidth} \begin{center} \includegraphics[width=1.7in]{fmultipath_good} \end{center} \caption{Good Case.} \label{fig:multipathgood} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \begin{center} \includegraphics[width=1.7in]{fmultipath_bad} \end{center} \caption{Bad Case (Multipath).} \label{fig:multipathbad} \end{subfigure} \caption{Pulse detection in case of multipath effects.} \label{fig:multipath} \vspace{-0.2in} \end{figure} We find that when the phone is static, there is another property that can be leveraged: the distance from the smart device to the dummy speaker is constant. Hence, we can fold $m(t)$ over the pulse period, $m_3(t)=\sum_{j} m(t+jT_2)$ for $0 \le t < T_2$, which sums the contributions of all the pulses and makes the detected pulse times clearer. The result of the folded $m_3(t)$ is shown in Figure \ref{fig:multipath}. In Figure \ref{fig:multipathgood}, where there is no multipath effect, there are 3 pulses in one period $T_2$. However, in Figure \ref{fig:multipathbad}, which was gathered in the shopping mall, there are at least 9 pulses, which means there are 2 additional paths reflected from walls or other objects. In this case, each of the 3 paths is a candidate for the pulse received directly from the dummy speaker. After recognizing the possible multipath components, we take a further step to identify the direct path. Specifically, we use the output of the PLL, which corresponds to the displacement. As displacement tracking is less affected by multipath, we compare the PLL result with the pulse detection result when a user walks from one position and stops at another. Denote the displacement measured by the PLL by $d$, and the receiving times of the pulses at the start and end points by the sets $T_a=\{t_{a1},t_{a2},t_{a3},\dots\}$ and $T_b=\{t_{b1}, t_{b2},t_{b3},\dots\}$, respectively. We then select the receiving times $(t_a, t_b)=\underset{t_a \in T_a, t_b \in T_b}{\arg \min} |(t_a-t_b)v_a-d|$. \subsection{Positioning after Synchronization} Assume the sending time of the next pulse is $t_s$, obtained from the synthesis above, and the detected receiving time is $t_r$. Then the distance is $l=v_a(t_r-t_s)$ and the distance in the horizontal plane is $L=\sqrt{l^2-h^2}$. For direction estimation, we first calculate $x$ and $y$ using the newly obtained $L$ and the previous $s$ and $d_i$. For example, to calculate $\psi_1'$ in Figure \ref{fig:distance}, assume $l_1$ is obtained from the synthesis. Since $x=-l_1\cos\psi_1$ and $y=l_1\sin\psi_1$, $l_i'$ in \eqref{eq:mine0} takes the form \begin{equation} l_i'=\sqrt{l_1^2 \sin^2 \psi_1+(-l_1\cos \psi_1 +(i-1)s)^2} \label{eq:mine1} \end{equation} Hence, $\psi_1$ is obtained by $\underset{\psi_1}{\arg \min} \sum_{i=1}^ne_i^2$, where $e_i$ is calculated from \eqref{eq:mine} and \eqref{eq:mine1}. Then $\cos\psi_1'= \frac{l_1 \cos \psi_1}{\sqrt{l_1^2-h^2}}$.
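To summarize the ranging step in code, the short sketch below (a minimal illustration with assumed values for the speed of sound, the height offset $h$, and the example times; it is not the system implementation) converts a predicted sending time $\tau_i=\tau+iT$ and a detected receiving time into the slant distance $l$ and horizontal distance $L$ used above.
\begin{verbatim}
import math

# Illustrative constants only: the speed of sound and the height offset h are
# assumptions here (h is defined earlier in the paper), as is the example data.
v_a, h, T = 343.0, 2.0, 0.25

def range_from_pulse(t_r, tau, i):
    """Slant and horizontal distance from the sending time tau of pulse 0 and
    the detected receiving time t_r of the i-th periodic pulse."""
    l = v_a * (t_r - (tau + i * T))                  # l = v_a (t_r - tau_i)
    return l, math.sqrt(max(l * l - h * h, 0.0))     # L = sqrt(l^2 - h^2)

# Example: pulse 0 inferred to have been sent at tau = 1.000 s from a good
# historical fix; the 40th pulse is detected at t_r = 11.035 s.
l, L = range_from_pulse(11.035, 1.000, 40)
print(round(l, 2), round(L, 2))   # ~12.0 m slant range, ~11.84 m horizontal
\end{verbatim}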
\section*{Preamble}\label{Int} Philosophical atomism had its origin in the flourishing period of Greek philosophy, more precisely with the contributions of Leucippus and Democritus. According to the latter, ``nothing exists except atoms and void; everything else is opinion''. Owing to space limitations, we will not address here the more speculative aspects concerning the atom, referring the reader to references \cite{FM}-\cite{Chalmers}. Our narrative will begin with Newton's contribution to the development of a ``new'' Chemistry. By reunifying Physics, proposing that celestial motions were described by the same law that governed the fall of bodies on Earth -- the law of Universal Gravitation -- Newton, in a sense, attributed a special character to the force of ``weight''. Not by chance, some historians of science argue that the revolution that the Frenchman Antoine Laurent Lavoisier introduced into eighteenth-century Chemistry had to do with the faith he placed in the precision balance. Indeed, for the French chemist, every change could and should be explained and measured. Looking closely, the main scientific program of nineteenth-century Chemistry, starting with John Dalton, was to measure systematically the {\em atomic weights of the chemical elements}, which ultimately allowed the Russian chemist Dmitri Mendeleev to build his famous {\em Periodic Table} \cite{Jensen}. All the symmetries and regularities contained in it had to wait decades before they could be effectively understood on the basis of an atomistic view of matter~\cite{FM}. \section*{The ambiguous frog and electrolysis} Around 1780, the Italian anatomist and physician Luigi Galvani had discovered that when the two ends of a muscle of a dissected frog were touched with different metals, the muscle contracted. Galvani attributed this phenomenon to properties of the muscle itself, postulating the existence of an {\em animal electricity} which, in some way, was related to {\em life}. The Italian physicist Alessandro Volta debated with Galvani for decades \cite{Pera}. According to Volta, the experiment with the frog had nothing to do with the frog itself but rather with the two different metals. At the end of 1799, to prove his thesis, Volta completed his experiment with what he called, perhaps not without irony, an {\em organ of artificial electricity}, known today as the {\em voltaic pile}. In 1800, the English scientists William Nicholson and Anthony Carlisle built a pile and performed the first electrolysis of water. This was an experimental landmark in the understanding of the atom. With it, it was shown for the first time that electricity can be used to break chemical bonds. Until then, it had been thought that chemical transformations were due to ``chemical forces''. With electrolysis it was seen that electric forces are capable of inducing chemical reactions. By direct association, one may imagine that the forces of chemical bonds are of an electrical nature. This is the beginning of {\em electrochemistry}. The quantitative study of electrolysis was undertaken by the English physicist and chemist Michael Faraday, who arrived at the well-known {\em Faraday laws}. These laws, together with the atomic hypothesis, allow one to anticipate an atomic structure for electricity. Indeed, decades later, the Irishman George Johnstone Stoney estimated the value of the elementary charge as something of the order of $10^{-20}$~C. This charge would later be identified as the charge of the {\em electron}, the name given to the {\em atoms of electricity} by Stoney himself. This interpretation created the conditions for a better understanding of the atomic nature of electricity, mainly owing to observations of phenomena resulting from electric discharges in rarefied gases. Speaking in homage to Faraday, the German physicist and physician Hermann von Helmholtz highlighted what would be the most important result of the studies on electrolysis in the following words: \begin{quotation} \noindent {\em If we accept the hypothesis that the elementary substances are composed of atoms, we cannot avoid concluding that electricity also, positive as well as negative, is subdivided into elementary portions which behave like {\rm atoms of electricity.}} \end{quotation} James Clerk Maxwell referred to the relevance of the studies of electrolysis as follows: \begin{quotation} \noindent {\em Of all electrical phenomena, electrolysis appears the most likely to furnish us with a real insight into the true nature of the electric current, because we find currents of ordinary matter and currents of electricity forming essential parts of the same phenomenon.} \end{quotation} \section*{Spectroscopy} Many of the ideas about atomic and molecular structure that emerged at the beginning of the twentieth century were, in a certain way, intimately connected to the development of the investigation of the radiation emitted by solid or gaseous matter, thanks to the pioneering work of the Germans Robert Wilhelm Bunsen and Gustav Kirchhoff \cite{Kirchhoff-Bunsen}, starting with the invention of the optical spectrograph and the development, between 1855 and 1863, of what came to be called {\em spectroscopy}. Studies in this area allowed the discovery of new chemical elements and also led the American physicist Albert Abraham Michelson to define a new standard for the metre: 1 m = 1\,553\,163.5 wavelengths of the red line of cadmium. They also gave rise to a new area of astrophysical investigation, which seeks to determine the chemical composition of the stars. Each chemical element gives rise to a characteristic emission spectrum, as if it were a kind of ``fingerprint'', unique to each element. For monatomic gases these spectra, projected onto a screen or viewed through a microscope, appear in general as a set of spaced, parallel {\em lines}, and for gases containing two or more atoms, as continuous {\em bands}. Without doubt, the most famous spectrum turned out to be that of the hydrogen atom, the simplest element found in Nature. The regularities of this spectrum, described mathematically by the Swiss teacher of Mathematics and Latin Johann Jakob Balmer, were of fundamental importance for the success of the model of the atom proposed by the Danish physicist Niels Bohr, already in the twentieth century. Ultimately, they were the key to understanding the quantum nature of the atom \cite{FM}. Observing the absorption spectrum of the electric discharge between carbon electrodes, illuminated with sunlight, Foucault concluded in 1849 that a substance which emits light of a given frequency also absorbs light best at that frequency. This conclusion seems to reinforce the idea that the phenomena of emission and absorption would be due to a kind of resonance between the radiation and the atoms of a substance; that is, it suggests that {\em atoms} would be {\em composite systems}. According to Maxwell, \begin{quotation} \noindent {\em it was these observations that first led to the conclusion that the spectrum implied that atoms had structure, that is, that they were a system capable of executing internal vibratory motions.} \end{quotation} \section*{Thomson's experiment} Further evidence of a substructure of the atom was obtained with the appearance of the so-called {\em Geissler tubes}, {\em Crookes tubes}, or {\em cathode-ray tubes}, and from the studies of new phenomena discovered with these tubes \cite{FM}. We will refer here only to the work of the English physicist Joseph John Thomson. In his words, \begin{quotation} \noindent {\em We have in the cathode rays matter in a new state, a state in which the subdivision of matter is carried very much further than in the ordinary gaseous state: a state in which all matter -- that is, matter derived from different sources, such as hydrogen, oxygen, etc. -- is of one and the same kind; this matter being the substance from which all the chemical elements are built up.} \end{quotation} Thomson had established that cathode rays were deflected both by magnetic fields and by electrostatic fields. Using this fact and a cathode-ray tube, he was able to calculate the ratio $e/m$ between the electric charge and the mass of these universal corpuscles (the {\em electrons}). With these measurements, he found that the ratio $e/m$ for the cathode rays was approximately 1\,840 times larger than the same ratio for ionized hydrogen. The establishment of the electron as a subatomic constituent led Thomson himself to propose a physical model of the atom based on electrostatic forces. It was thus definitively shown that the atom was {\em not indivisible}, as proposed by the Greeks and until then accepted by chemists. There remained, however, the task of arriving at a coherent and stable atomic model. The road was long \cite{FM}-\cite{Pullman}. \section*{The X-ray spectrum and the number of electrons} Another discovery that resulted from the empirical study involving cathode-ray tubes was that of X-rays, by the German Wilhelm R\"{o}ntgen, when physicists began to ask whether cathode rays would propagate outside the tubes.
In 1894, Philipp Lenard, then an assistant of Heinrich Hertz, devised an apparatus with which he studied what would happen to cathode rays when they propagated in the air, outside the tube. With this device, Lenard was able to observe that the cathode rays propagated up to a distance of a few centimetres from the tube, not only in air but also in other gases. He further verified that the rays were capable of exposing photographic plates and of making certain materials fluorescent, such as barium platinocyanide, a crystalline solid that shows green and yellow hues depending on the light falling upon it. It was using a Lenard tube that R\"{o}ntgen set out to study, in November 1895, the fluorescence of certain substances. To eliminate unwanted effects, R\"{o}ntgen placed the tube he would work with inside a black cardboard box, so as to block visible and ultraviolet rays coming from the tube. In this way, only the cathode rays would pass through the Lenard window, being collimated towards the objects containing the fluorescent substances. With the room completely dark, R\"{o}ntgen observed that a card covered with a solution of barium platinocyanide was glowing. However, cathode rays propagate in air for only a few centimetres, and the illuminated card was located much farther away than that; about 2~m. With the tube isolated, what could be the origin of the fluorescence? Even more surprising was the fact that the card was not in the line of the cathode-ray beam. What, then, was causing that luminescence? Intrigued and perplexed by their unknown origin, R\"{o}ntgen gave these rays the provisional name of X-rays -- based on the letter usually assigned to the unknown of a problem to be solved -- a name that ended up being definitively adopted. Few discoveries of basic physics have had such early and spectacular practical applications. X-rays allowed physicians to ``see'' inside bodies without having to open them. They also gave an enormous boost to the area of {\em Crystallography} and allowed the confirmation that matter in the crystalline state is a regular arrangement of atoms and molecules disposed in layers, thereby supporting the atomistic view of matter. Still from the point of view of basic Physics, in 1904 Charles Barkla set out to determine the number of electrons contained in target atoms in X-ray scattering processes. Only in 1911, with more precise data, was Barkla able to show that, for light atoms, the number of electrons is half the mass number of the corresponding chemical element. Finally, in the two-year period 1913-1914, the work of Henry Moseley with X-ray spectra confirmed some of the ideas of Rutherford and Bohr about the atomic constitution of matter \cite{FM}. X-rays were also relevant in the study of the so-called {\em Compton effect}, which will be discussed later in the text. \section*{Radioactivity} The French physicist Henri Becquerel, after learning of R\"{o}ntgen's work, began to investigate whether certain substances that became phosphorescent under incident light were capable of emitting some kind of penetrating radiation, like X-rays. He thus discovered the {\em uranic rays}. What most intrigued him was the {\em spontaneous} nature of the emission of these rays, apparently without external causes. This question mobilized many physicists and was understood only decades later, with Quantum Mechanics. The Curies devoted much effort to the study of these emissions, and Marie Curie coined the term {\em Radioactivity}. In very summarized form, we can say of these studies, with the discovery of the $\alpha$, $\beta$ and $\gamma$ rays, that natural radioactivity possessed two components made of charged particles and a third ($\gamma$) of an electromagnetic nature. According to Rutherford, the ``great similarity of the changes in radium, thorium and actinium is very remarkable, and points to some peculiarity of atomic constitution yet to be elucidated''. Moreover, there was a relevant question without an answer: why do all the atoms of a certain element not decay at the same time, since they are all identical to one another? The answer to this question would only come from a new look at the microcosm that was still to arrive with Quantum Mechanics. But even without the answer, the {\em law of radioactive decays} allowed Ernest Rutherford to calculate the {\em Avogadro number} in a hitherto unexpected way, confirming the molecular nature of matter. In this connection, it is also worth recalling the success of the statistical interpretation of the Kinetic Theory of Gases \cite{Brush}. \section*{Brownian motion} Brownian motion became, at the beginning of the twentieth century, one of the most convincing proofs of the reality of molecules, that is, of the corpuscular hypothesis of matter. This is because the French physicist Jean Perrin, in a series of systematic works, measured the Avogadro number in many different ways, always finding compatible results. If there was still any doubt about the molecular nature of matter (as there was for the German chemist Friedrich Wilhelm Ostwald), it was dissipated by the results obtained by Perrin. Probably no other fundamental constant attracted the interest of so many physicists of the stature of Amp\`{e}re, Loschmidt, Maxwell, Boltzmann, Thomson, Rayleigh, Planck, Einstein, Rutherford, Millikan, Perrin and others as the {\em Avogadro number}. This fact by itself already suggests the strength of the atomic conception of matter and its foundational role in the construction of modern scientific knowledge which, in conclusion, may very well be summarized in the words of the American physicist Richard Feynman: \begin{quotation} \noindent {\em If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the {\rm atomic hypothesis} (...) that {\em all things are made of atoms} (...). In that one sentence, you will see, there is an {\rm enormous} amount of information about the world.} \end{quotation} \section*{Rutherford's experiment} The experiment on the scattering of $\alpha$ particles by thin targets allowed Rutherford to introduce the important concept of the {\em atomic nucleus}. His conclusion was that the atom has an enormous empty region. There is a nucleus occupying a small region of the order of $10^{-12}$~cm, while the electron in the closest orbit describes a circular trajectory with a radius of the order of $10^{-8}$~cm. In other words, on a scale in which the atomic nucleus had a radius of about 1~m, the nearest electron would be 10~km away. This new vision opened the way to a new area of research in basic Physics: {\em Nuclear Physics}. The history of the attempts to understand and master the atomic nucleus is known to everyone. On the one hand, there is the sad invention of the atomic bomb and of other previously unthinkable weapons; on the other hand, it also brought nuclear medicine, with great social impact. \section*{The Compton effect} The {\em Compton effect} was, in fact, the experimental evidence that was missing for the scientific community to admit the existence of the {\em photon} as a constituent of light, putting an end to a question that had been the subject of great controversy. What had been accepted until then by most physicists was the outcome of the discussions at the 1911 Solvay Conference, where only the discontinuity in the emission and absorption of light was accepted, and not that of the energy of light itself, as Einstein had proposed. Indeed, Compton showed that the scattering of X-rays by matter results from the collision of photons with practically free electrons inside matter. Its understanding constituted a definitive argument in favor of the idea of the quantization of radiation, that is, of the existence of \textit{photons}. \section*{Final remarks}\label{conclusions} By 1932 a conceptual framework to explain matter and light had been reached, consisting of four {\em elementary particles}: the electron, the proton and the neutron (the constituents of the atom) and the photon (the {\em quantum} of light). Physics had therefore led us to a new atom, endowed with a physical structure and no longer indivisible and immutable. Its description, from the theoretical point of view, required the development of a new theory: {\em Quantum Mechanics}, which includes new concepts such as that of {\em spin} \cite{Tomonaga}. Without this concept and without this new theoretical framework, one cannot finally understand the regularities of the Periodic Table in terms of the electronic distribution of the atoms. We suggest that the reader interested in learning more about this history consult our book cited in reference \cite{FM}, in which the foundations of this new mechanics are presented in a historical perspective. What we have sought to do here, reconciling the topic suggested by the organizers of the event with the limitation of space, was simply to show how rich and intricate the history of the understanding of the {\em atom} has been, and that, to a large extent, several experimental results were crucial in this process of inquiry, for which no horizon is yet in sight.
\section{Introduction}\label{sec:intro} It is well known that stellar black holes (BHs) are formed by gravitational collapse or fallback from core-collapse supernovae of high-mass stars (\citealt{1939PhRv...56..455O}; \citealt{1966PhRv..141.1232M,1990RvMP...62..801B,1999ApJ...522..413F,fryer2001theoretical,2012ARNPS..62..407J}). A key open question is their mass-function and its relation to the Salpeter stellar initial mass function (IMF) (\citealt{salpeter1955luminosity}), \begin{equation} \label{eq:imf} dN \propto M^{-\alpha_S}\,dM, \end{equation} where $M$ is the mass in units of $M_{\odot}$, $dN$ is the number of stars in the corresponding mass range and $\alpha_{S}$ = 2.35 (\citealt{salpeter1955luminosity}). For $M$\,$>$\,5 $M_{\odot}$, Eq.~(\ref{eq:imf}) appears to be remarkably universal and robust against variations in metallicity, as shown by studies of star-forming regions of the Large Magellanic Cloud with Z\,$=$\,0.008 (\citealt{dario2009complete,gouliermis2006low}), though some dependencies may exist more generally (\citealt{2012ApJ...760...71C,2013ApJ...771...29G,2013ApJ...763..110K}). Through a complex process of massive stellar binary evolution, Eq.~(\ref{eq:imf}) is expected to leave an imprint on the mass-function of their offspring, the binary black holes (BBHs) recently observed by LIGO\footnote{The Laser Interferometer Gravitational-Wave Observatory}-Virgo (Fig.~\ref{fig:catalog}). Starting from Eq.~(\ref{eq:imf}) normalized to $\xi\,=\,M/M_{\odot}$, the cumulative count $\propto M^{-\alpha +1}$ and mass $\propto M^{-\alpha +2}$ depend on the lower cut-off $\xi_{c}\,=\,M_{c}/M_{\odot}$, where $M_{c}$ denotes the minimum mass for hydrogen ignition. This cut-off lies slightly above the Jupiter mass, $0.01<\xi_{c}<0.1$, and its precise value is uncertain. We assume $\xi \sim 20$ for the low-mass cut-off to produce BHs \citep[e.g.][]{1999ApJ...522..413F}, whereby \begin{equation} \begin{array}{l} {M_{\rm{BH}}}/{M_{\rm{tot}}} = (\xi_c/\xi)^{0.35} \sim 7-16\,\%, \\ {N_{\rm{BH}}}/{N_{\rm{tot}}} = (\xi_c/\xi)^{1.35} \sim 0.004-0.08\,\%. \end{array} \end{equation} Observational support for these percentages is found in population studies of the Milky Way \citep{elbert2018counting}, showing a number fraction of BHs of about 0.09\,$\%$ with the aforementioned $\xi_{c}\simeq0.1$. This implies an expected mean BH-progenitor stellar mass $\bar{M}_{\rm{BH}} = M_{\rm{BH}}/N_{\rm{BH}}$ = $\left(M_{\rm{BH}}/M_{\rm{tot}}\right)$ $\left(N_{\rm{BH}}/N_{\rm{tot}}\right)^{-1}$ $\bar{M}_{*}$, where $\bar{M}_{*} = 0.1\--0.5\,M_{\odot}$ is the mean stellar mass. For the Milky Way, $\bar{M}_{*}$ can be estimated from the total stellar mass $5\times10^{10}M_{\odot}$ \citep{licquia2015improved} and star count $(1-5)\times10^{11}$. Based on $\bar{M}_{\rm{BH}}$, the typical BH-progenitor masses lie somewhere between 20 and $900\,M_{\odot}$, which we summarize by the geometric mean \begin{equation} {M}_{0}\!\,=\sqrt{20\times900} \approx 137\,M_{\odot}. \label{EQN_geo} \end{equation} As a progenitor stellar mass, Eq.~(\ref{EQN_geo}) is subject to mass-loss during the evolution of the progenitor binaries of massive stars. Preserving the binary association limits mass-loss, both in winds from the upper atmosphere of a star and in the core-collapse supernova producing the BHs.
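As a quick numerical cross-check of these fractions (a standalone illustration, not part of the original analysis), the Python snippet below evaluates the Salpeter scalings at the two cut-off values used above and the resulting mass scales.
\begin{verbatim}
import numpy as np

alpha, xi_bh = 2.35, 20.0            # Salpeter index and BH-progenitor cut-off

for xi_c in (0.01, 0.1):             # lower IMF cut-off in solar masses
    mass_frac = (xi_c / xi_bh) ** (alpha - 2.0)   # M_BH/M_tot, exponent 0.35
    num_frac = (xi_c / xi_bh) ** (alpha - 1.0)    # N_BH/N_tot, exponent 1.35
    print(xi_c, round(100 * mass_frac, 1), round(100 * num_frac, 4))
# xi_c = 0.01: mass fraction ~7.0 %,  number fraction ~0.0035 %
# xi_c = 0.1 : mass fraction ~15.7 %, number fraction ~0.078 %

print(round(np.sqrt(20.0 * 900.0), 1))   # ~134, the geometric mean M_0
                                         # (quoted as ~137 M_sun in the text)
print(round(137.0 / 3.0, 1))             # ~45.7 M_sun: (1 - 2/3) * M_0,
                                         # i.e. the ~46 M_sun estimate below
\end{verbatim}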
Regarding mass-loss in winds, simulations predict a positive correlation of mass loss with stellar mass, which tends to suppress the population of heavier stars \citep[ameliorated by their relatively shorter lifetimes,][]{2012ApJ...757...91B} and restricts the maximum BH mass produced in the final core-collapse supernova \citep{fryer2001theoretical}. To illustrate mass-loss in the supernovae, consider two (consecutive) spherically symmetric explosions in circular binaries, neglecting kick velocities and induced ellipticity \citep{1975ApJ...200..145W}. Mass-loss in each is bounded by one-half the total mass for the binary to survive. Let $0<\epsilon<1$ denote the fractional mass loss in each supernova. As the more massive star explodes ($M_2^\prime \le M_1^\prime$), it blows off a stellar mass fraction $\epsilon$, leaving a BH mass $M_{1} \le (1-\epsilon)\,M'_{1}$ and a total binary mass $\frac{1}{2}\left( M_1^\prime + M_2^\prime\right)< M''\le M'_{2} + (1-\epsilon)\,M'_{1}$, where the single prime now refers to masses just prior to the supernova event, after mass-loss in winds. The condition for the binary to survive implies $\epsilon<\frac{1}{2}(1+M_2^\prime/M_1^\prime)$. The explosion of the secondary similarly must satisfy $\epsilon\,M'_{2} < \frac{1}{2}M^{\prime\prime} \leq \frac{1}{2} (M'_{2} + (1-\epsilon)\,M'_{1})$, giving the more restrictive condition $\epsilon < \min\left\{\frac{2}{3},\frac{1}{2}\left(1+{M_2^\prime}/{M_1^\prime}\right)\right\}$. In the approximation that the mass-loss fraction $\epsilon$ in the progenitor stars suppresses the mass of their BH remnants, approximately equal-mass stellar binaries, with $\epsilon$ at its limiting value of $2/3$, are expected to produce \begin{eqnarray} \bar{M}\simeq \left(1-\epsilon\right)M_0 \simeq 46\,M_\odot. \label{EQN_geo1} \end{eqnarray} A very similar result derives from a putative stellar progenitor binary of, say, 100\,$M_{\odot}$ and 200\,$M_{\odot}$ with the geometric mean of Eq.~(\ref{EQN_geo}). Mass-loss by the supernova explosions at the limit $\epsilon=2/3$ produces a binary with intermediate masses $M_1=33\,M_{\odot}$ and $M_2=67\,M_{\odot}$ and geometric mean $47\,M_\odot$, illustrative of the intermediate BBH range in GWTC-2 (Fig.~\ref{fig:catalog}). \begin{figure} \centering \includegraphics[width = 0.9\linewidth]{f1.jpg} \caption{Overview of the primary $(M_1)$ and secondary $(M_2)$ mass estimates of the intermediate BBHs in GWTC-2. Grey crosses indicate uncertainties. A linear fit shows a slope of 0.67 with a correlation coefficient $r=0.93\pm0.06$ (\S\ref{IBMF}).} \label{fig:catalog} \end{figure} To date, intermediate-mass black holes (IMBHs) with masses around that of Eq.~(\ref{EQN_geo1}) have remained elusive in electromagnetic (EM) observations for reasons not well understood. However, this may derive from a mass correlation in their binary stellar progenitors. If so, BBH systems form rather promptly, with high-mass BHs accompanied by high-mass stellar companions that are practically undetectable in the electromagnetic spectrum owing to the short lifetime of the latter. Indeed, the observed black hole masses of X-ray binary systems are $(3-18)\times M_{\odot}$ (e.g., \citealt{remillard2006x}; \citealt{miller2014astrometric}), \emph{much} smaller than the geometric mean in Eq.~(\ref{EQN_geo}) predicted by the Salpeter IMF. Over the past few years, however, a new window has opened to probe BBH systems by gravitational radiation (Fig.~\ref{fig:catalog}). The GW150914 event, the first detected by LIGO, is a merger of a BBH system with masses of 36\,$M_\odot$ and 29\,$M_\odot$ (\citealt{abbott2016observation}).
The second Gravitational-Wave Transient Catalog (GWTC-2; \citealt{abbott2020gwtc}) of the LIGO-Virgo collaboration provides a unique opportunity for us to study this high-mass population of BBHs. The most massive BH merger event, GW190521, comprises 91.4\,$M_{\odot}$ and 66.8\,$M_{\odot}$ - \emph{not} surprising given Eq.~(\ref{EQN_geo}). A key question is the cosmological origin in binary stellar progenitors of the BBHs, notably relative to the peak in the cosmic star formation rate around redshift $z\simeq1.9$ (\citealt{2014ARAA..52..415M}). To address whether the majority of the progenitor systems are found before or after, we study the binary mass-function of BHs of GWTC-2 in the context of Eq.~(\ref{eq:imf}) and possible effects of cosmic time-dilation. To start, we consider some statistical properties and mass-function of the intermediate-mass BBHs in GWTC-2 (\S\ref{IBMF}), focused on power-law behavior in the tail of this population. Cosmic time-dilation effects on the power-law index is studied in \S\ref{sec:cosm}. We summarize our results in \S\ref{sec:summary}. \section{Mass-function index in GWTC-2}\label{IBMF} We consider LIGO compact binary mergers excluding events involving neutron stars (GW170817, GW190425 and GW190814) based on estimated masses, and three more by limited False Alarm Rates (\citealt{abbott2019gwtc}; \citealt{abbott2020gwtc}). In the BBHs in GWTC-2 (Fig.~\ref{fig:catalog}), the primary ($M_{1}$) and secondary masses ($M_{2}$) are strongly correlated with Pearson correlation coefficient $r$ and slope $s$, \begin{equation} r = 0.93\,\pm\,0.06,~~s=0.67, \label{EQN_r} \end{equation} representing a mean $\bar{q}$ of their mass-ratio $M_2/M_1$. The minimum and maximum of the mean binary masses are 6.9\,$M_{\odot}$ and 79.1\,$M_{\odot}$, respectively. Fig.~\ref{fig:norm_delm} shows the normalized mass difference $\delta M/\mu$, $\delta M = M_1-\mu$, {where \begin{eqnarray} \mu=\frac{1}{2}\left(M_1+M_2\right) \label{EQN_mudef} \end{eqnarray} refers to the mean binary mass.} Shown is a distinctly narrow skewed Gaussian mass-distribution, evidencing a relatively small mass-difference between primary and secondary in these binaries. The relatively modest standard deviation of 22\% in the signed mass-differences (Fig.~\ref{fig:norm_delm}) shows the BBHs are composed of roughly equal-mass BHs, leaving the mean mass as the primary parameter in their mass-distribution. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{f2.jpg} \caption{(Top panel.) Histogram of normalized mass differences in the BBHs of GWTC-2, $\left| \delta M/\mu\right|$. (Bottom panel.) Symmetric histogram of signed normalized mass difference (over all individual BH masses) in three bins with standard deviation 22\%.} \label{fig:norm_delm} \end{figure} Figs. \ref{fig:catalog}-\ref{fig:norm_delm} highlight a rather tight correction between the two masses, see also \cite{2020ApJ...891L..27F}. This permits parameterizing BBHs by their mean mass $\mu$. After all, their observation is determined by their luminosity in gravitational-wave emission, primarily as a function of chirp mass, satisfying \begin{eqnarray} {\cal M} = \frac{M_1^{3/5}M_2^{3/5}}{\left(M_1+M_2\right)^{1/5}}\simeq 2^{-{1}/{5}}\mu\left(1-\frac{3}{5}x^2\right), \label{EQN_Mc} \end{eqnarray} where $x=\delta M/\mu$. The distribution of $\delta M/\mu$ shown in Fig.~\ref{fig:norm_delm} shows $x^2\simeq 0.04$. 
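As a short check of the approximation in (\ref{EQN_Mc}), write $M_{1,2}=\mu\left(1\pm x\right)$, so that \begin{eqnarray} {\cal M} = \frac{\left[\mu^2\left(1-x^2\right)\right]^{3/5}}{\left(2\mu\right)^{1/5}} = 2^{-1/5}\mu\left(1-x^2\right)^{3/5}\simeq 2^{-1/5}\mu\left(1-\frac{3}{5}x^2\right) \end{eqnarray} to first order in $x^2$. For GW150914 ($M_1=36\,M_\odot$, $M_2=29\,M_\odot$, whence $\mu=32.5\,M_\odot$ and $x\simeq0.11$), both the exact and the approximate expression give ${\cal M}\simeq28\,M_\odot$.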
By (\ref{EQN_Mc}), the chirp mass ${\cal M}$ of our BBH binaries in Fig.~\ref{fig:catalog} is tracked by the mean mass $\mu$ to better than 10\%. In light of this and aforementioned slope in Fig.~\ref{fig:catalog}, $\mu$ provides a statistically equivalent parameterization to primary mass $M_1$ \citep{2021ApJ...913L...7A}, satisfying \begin{eqnarray} \mu = \kappa\, M_1, \label{EQN_kappa} \end{eqnarray} where $\kappa = \frac{1}{2}\left( s + 1\right)=0.8350$, ignoring any potential mass-dependence in this correlation. We next turn to the observed and true mass-functions of the BBH mergers \citep{2021ApJ...913L...7A}. \section{Broken power law mass distribution} \begin{figure*} \centering \includegraphics[width =0.89\textwidth]{plinearB.jpg} \caption{(Top panel.) The true and observed BBH mass-functions of GWTC-2 modeled by a broken power distribution \citep{2021ApJ...913L...7A}, shown in a loglog plot rescaled to $\mu$ according to (\ref{EQN_kappa}) in units of Gpc$^{-3}$yr$^{-1}M_\odot^{-1}$ and, respectively, $M_\odot^{-1}$. Breaks shown are at $\mu_{b1}=4.64M_\odot$ and $\mu_{b2}=31.4M_\odot$ (solid black dots). (By \ref{EQN_kappa}, equivalent values $M_1=5.56M_\odot$ and $M_1=37.6M_\odot$ obtain, the latter consistent with $m_{break}=39.7_{-9.1}^{20.3}M_\odot$ in \cite{2021ApJ...913L...7A}.) Section A refers to a model extension, below the observed BBH mass-range $6.9M_\odot \le \mu \le 79.1M_\odot$. (Lower panel.) The tail of the distribution corresponds (Section C) shows a power-law index $-4.98$ of the true distribution steeper by about 17\% than $-4.26$ of the observed distribution.} \label{fig:plaw} \end{figure*} \begin{figure*} \centering \includegraphics[width =0.49\textwidth]{f3c.jpg} \includegraphics[width =0.49\textwidth]{f3b.jpg} \caption{(Left panel.) The binned PDF of the BBH mass-function by mean mass $\mu$ of BBH systems in GWTC-2. The tail of the $\mu$-distribution beyond a peak at $\gtrsim\,31.4\,M_\odot$ (solid green) is fitted to derive a power-law index $\alpha_B$ indicated by the straight line (red with black uncertainties by counting statistics) in the bins with 13, 5, 1, and 1 element(s). (Right panel.) $\mu$-redshift relation (circles) with elements in the tail highlighted (filled). Mean values $(\bar{\mu},\bar{z})=(41.9\,M_\odot,0.51)$ of mass and redshift in the tail (+) is indicated with $1\sigma$ uncertainty (ellipse). } \label{fig:hist} \end{figure*} LIGO BBHs mergers show a power law behavior in their true (astrophysical, per unit volume and observation time) mass distribution inferred from the observed distribution, here shown in Fig.~\ref{fig:plaw} - a loglog plot of data from Figs. 3-4 of \cite{2021ApJ...913L...7A}. Here, our focus is on the tail $\mu \geq \mu_{b2}=31.4M_\odot$, shown in Section C of Fig.\, \ref{fig:plaw}, rather than the intermediate mass range of Section B ($\mu_{b1}=4.64M_\odot < \mu < \mu_{b2}$). The location of the second break is consistent with $\mu_{break}=33.15M_\odot$ estimated by (\ref{EQN_kappa}) from $m_{break}=39.7_{-9.1}^{20.3}M_\odot$ in \cite{2021ApJ...913L...7A} with power law indices of Section B-C reported to be $\alpha_1\simeq 1.58_{-0.86}^{+0.82}$ and, respectively, $\alpha_2\simeq 5.6_{-2.6}^{+4.1}$. Fig.\,\ref{fig:plaw} quantifies the conversion of observed-to-true power-law indices in GWTC-2. For the tail of interest, it shows \begin{equation} \frac{\alpha_{B,true}}{\alpha_{B,obs}} \simeq \frac{4.98}{4.26}\simeq 1.17, \label{EQN_alphaB} \end{equation} ignoring systematic uncertainties in this conversion process. 
Revisiting uncertainty by scatter given the relatively modest number of events in the tail of the present survey, Fig.\,\ref{fig:hist} repeats the estimate of $\alpha_B=\alpha_{B,obs}$ using four bins in a loglog plot of event count versus $\mu$. By (\ref{EQN_alphaB}), we infer \begin{eqnarray} \alpha_B = 4.08\pm 0.73,~\alpha_{B,true}=4.77\pm 0.73. \label{alphaB} \end{eqnarray} Our estimate is hereby consistent with but slightly less than $\alpha_2=\alpha_{B,true}$ of \cite{2021ApJ...913L...7A}. In particular, $\alpha_{B,true}$ satisfies the astrophysical bound discussed in the following section. \section{Astrophysical bounds on power-laws} The observed steep index in Eq.~(\ref{EQN_alphaB}) satisfies some priors derived from aforementioned Salpeter IMF as follows. From Eq.~(\ref{eq:imf}), the progenitor binary mass-function $M_1^\prime$ and $M_2^\prime$ of the BBH can be expressed as a function of mean $\mu^\prime$ and normalized mass difference $x=\nu^\prime/\mu^\prime$, $\nu^\prime=(M_1^\prime-M_2^\prime)/2$. (Equivalently, consider $\mu^\prime$ and $Q=2x^2/(1-x^2)$, $x=\nu^\prime/\mu^\prime$ in symmetrized mass-ratio $Q=(q+1/q)/2-1, q=M_2^\prime/M_1^\prime$, $\nu^\prime=\delta M^\prime$.) The two-parameter binary mass-function $\psi_B(\mu^\prime,x)$ has two natural limits derived from essentially {\em correlated} or {\em uncorrelated} stellar masses. On the first, we note that nearby Galactic open stellar clusters show a large fraction of O-type stars born as binaries \citep{sana2012binary}. Radiation-hydrodynamic simulation suggests fragmentation of rotating gas disk by gravitational instabilities leads to the formation of massive binary stars with similar masses \citep{krumholz2009formation,sana2012binary,moe2017mind,2022arXiv220101905L}. On the other hand, such mass-correlation is expected to weaken as binary separation becomes large. Accordingly, therefore, the binary stellar mass-function is expected to be bounded by either of the two limits { \begin{align} \psi_B \propto \Bigg\{ \begin{array}{ll} (\mu^\prime)^{-\alpha_S}, \\\\ \left[ (\mu^\prime)^2-(\nu^\prime)^2\right]^{-\alpha_S} \end{array} \label{eq:psi} \end{align} for the correlated and, respectively, uncorrelated case with $M_{1,2}^\prime=\mu^\prime \pm \nu^\prime$. } {The IMF in $\mu^\prime$ of uncorrelated binary masses assumes a power-law index distinctly steeper than $\alpha_S$ in the correlated case upon marginalization over $x=\nu^\prime/\mu^\prime =(1-q)/(1+q)$ given fluctuations in mass ratio $q$.} {The result of integrating out $\nu^\prime$ depends on the expected astrophysical range of $q$. Following a change of variables $dM_1^\prime dM_2^\prime =2d\mu^\prime d\nu^\prime = 2 \mu^\prime d\mu^\prime dx$, $\nu^\prime=\mu^\prime x$, $M_{1,2}^\prime=\mu^\prime\left(1\pm x\right)$, an index $2\alpha_S-1$ obtains after integration over $0\le x \le 1$ covering all of $0\le q \le 1$. However, such includes extreme mass ratios $q=0$, i.e., $M_2=0$, $\nu=\mu$ ($x=1$). This limit is ruled out by a lower bound on the mass of black hole progenitor stars. Excluding this suggests, alternatively, the approximation $(\mu^\prime)^2-(\nu^\prime)^2\simeq (\mu^\prime)^2$. Integration over a finite strip in $\nu^\prime$, uncorrelated to $\mu^\prime$, produces an index $2\alpha_S$.} {According to the above,} we expect $\psi_B \propto \left(\mu^\prime\right)^{-\alpha_{B,true}^\prime}$ with \begin{equation} \alpha_S \lesssim \alpha_{B,true}^\prime \lesssim 2\alpha_S. 
\label{EQN_AS} \end{equation} The index Eq.~(\ref{EQN_alphaB}) is significantly steeper than aforementioned Salpeter IMF with $\alpha_{S}$ = 2.35 of their progenitor stellar mass-function. In fact, Eq.~(\ref{EQN_alphaB}) shows an index essentially equal to {\em twice the Salpeter value of a binary IMF with uncorrelated masses}. Some of the steepening in Eq.~(\ref{EQN_alphaB}) might alternatively be attributed to cosmological time-dilation. \section{Steepening by cosmic time-dilation}\label{sec:cosm} For a given cosmological background evolution, redshifts $z^\prime$ of BBH progenitors can be traced back by times of coalescence (\citealt{peters1964gravitational,celoria2018lecture,mapelli2018astrophysics}), $t_{c} = (5/256)c^{-5}G^{-3}a^4 \left(M_1 M_2\,(M_1 + M_2)\right)^{-1}$, where $c$ is the speed of light, $G$ is the gravitational constant, $a$ is the initial separation, and $M_1$ and $M_2$ are mass of the primary BH and that of secondary BH. That is, \begin{equation} {t_{c}} \approx 0.1576\,\left[ \frac{ a } { 0.2 AU } \right]^{4}\,\left[ \frac{\mu} {50 M_{\odot}} \right]^{-3} {t_{H}}, \label{eq:tcoal} \end{equation} where $\mu$ is the mean mass, and $t_{H}=13.7\,$Gyr is the Hubble time. The PDF in the source frame, $P^\prime\left(\mu,z^\prime\right)$ = $P\left(\mu,z\right)$ $(\partial z/\partial z^\prime)\left(1+z^\prime\right)$, derives from invariance of count $N$ and $\mu$, where $dN = {\psi}d{\mu}dT$ for an IMF $\psi$ and time interval $T$, whereby $P({\mu},z)d{\mu}dzdt = P'({\mu},z')d{\mu}dz'dt'$. Consequently, the transformation $(z,\mu)\rightarrow(z^\prime,\mu)$ with $z^\prime=z^\prime(z,\mu)$ gives $P({\mu},z^\prime) (\partial({\mu},z)/\partial({\mu},z^\prime)) d{\mu}dz^\prime=P^\prime({\mu},z^\prime)Jd{\mu}dz^\prime$ with Jacobian $J = \partial z/\partial z'$, where $z$ and $z^\prime$ are related on given cosmological background evolution. Hence, $P^\prime({\mu},z^\prime) = P({\mu},z)(\partial z/\partial z^\prime)(dt/dt^\prime)$, i.e.: \begin{equation} P^\prime({\mu},z^\prime) = P({\mu},z) J_z(z) \label{EQN_P} \end{equation} as asserted, where $dt/dt^\prime=1+z^\prime$, with the cosmic steepening factor \begin{equation} J_z = \frac{\partial z} {\partial z^\prime}\left(1+z^\prime\right). \label{EQN_P2} \end{equation} For a concrete illustration, we consider the following Ansatz for progenitor redshift $z'$: \begin{equation} a = f \left(\frac{\mu}{\mu_0}\right)^{\gamma} \label{eq:m_a} \end{equation} in the two parameters $f> 0$ and $\gamma>0$. Here, $\mu_0=31.4\,M_{\odot}$ is the minimum of the tail of the BBH mass-function (Fig.~\ref{fig:plaw}). For an initial separation $a$, the merging time $t_{c}$ satisfies \begin{equation} t_c \propto \frac{a^4}{\mu^3}\,\,{\displaystyle \propto }\,{\mu}^{4\gamma-3} \label{eq:f_gamma} \end{equation} with positive ($\gamma>0.75$), negative ($\gamma<0.75$) or neutral ($\gamma=0.75$) correlation between $t_c$ and $\mu$. For the PDF of the observed mergers, we consider \begin{equation} P(\mu)\, {\displaystyle \propto }\, {\mu}^{-\alpha_{B}}, \label{eq:pdf} \end{equation} where $\alpha_{B}$ is the observed value Eq.~(\ref{EQN_alphaB}), about the mean redshift $\bar{z}=0.51$ in light of the modest standard deviation $\sigma_z=0.2$. Based on Eq.~(\ref{EQN_P}) and Eq.~(\ref{eq:f_gamma}), we numerically solve for $z^\prime$ of the progenitor given a merger event at $z$ on a three-flat $\Lambda$CDM background. 
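A minimal sketch of this inversion, assuming an implementation in terms of the \textsc{astropy.cosmology} package (the package, function names and unit conventions below are illustrative and not a description of the actual pipeline), is the following.
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM, z_at_value
import astropy.units as u

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.307)  # flat LambdaCDM background
t_H = 13.7                                 # Hubble time in Gyr

def coalescence_time(mu, a):
    # merger time t_c in Gyr for mean mass mu [M_sun], separation a [AU],
    # following t_c ~ 0.1576 (a/0.2 AU)^4 (mu/50 M_sun)^(-3) t_H above
    return 0.1576 * (a / 0.2)**4 * (mu / 50.0)**(-3) * t_H

def progenitor_redshift(z_merge, mu, f, gamma, mu0=31.4):
    # formation redshift z' of a binary merging at z_merge with mean
    # mass mu, using the separation ansatz a = f (mu/mu0)^gamma
    a = f * (mu / mu0)**gamma
    t_form = cosmo.lookback_time(z_merge) + coalescence_time(mu, a) * u.Gyr
    # no finite solution (z' -> infinity) when t_form exceeds the
    # lookback time to arbitrarily high redshift
    return z_at_value(cosmo.lookback_time, t_form)
\end{verbatim}
The bound $f\lesssim0.15$ quoted below corresponds to requiring a finite $z^\prime$ for the events in the tail.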
We highlight three model relationships between the initial separation $a$ and coalescence time $t_c$ with mass parameterized by $\gamma$ color-coded with blue, orange, and green, obtained with Python package {\sc cosmology} with Hubble parameter $H_0 = 67.8$\,km\,s$^{-1}$\,Mpc$^{-1}$ and matter density $\Omega_{m} = 0.307$. For illustrative purposes, Fig.~\ref{figR1} shows the change $\Delta \alpha_B=\alpha_B-\alpha_B^\prime$ by $J_z(z)$ (\ref{EQN_P2}) in (\ref{EQN_P}) for the data $(\mu,z)$ at hand, including, for illustrative purposes, the same for a uniform distribution in mass, $\mu_0\le \mu \le \mu_1$ ($\mu_0=31.4M_\odot$, $\mu_1=59.5M_\odot$) and redshift, $z_1\le z\le z_2$ ($z_1=0.42$, $z_2=0.52$), preserving similar mean values to those of the tail of the BBH distribution (Fig.~\ref{fig:plaw}). \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{f4.jpg} \caption{Steepening in the power-law index $\Delta\alpha_B$ reaches about 8\% of (\ref{EQN_alphaB}) in progenitor redshift $z^\prime$ by cosmic time-dilation factor $J_z$ in the ansatz (\ref{eq:m_a}) over the index range $0.5\le \gamma\le1$ on a three-flat $\Lambda$CDM background, shown are for $f=0.03, 0.049, 0.068$. Results for a fiducial uniform distribution ($U$) and the BBH data in the tail of the GWTC-2 catalogue ($D$) are rather similar. } \label{figR1} \end{figure} \begin{figure} \vskip0.2in \includegraphics[width=0.94\columnwidth]{fig6.jpg} \caption{ Progenitor redshifts $z^\prime$ for $\gamma=0.30$ (blue), $\gamma=0.75$ (orange) and $\gamma=1.00$ (green) in Eq.~(\ref{eq:m_a}) on a three-flat $\Lambda$CDM cosmological background. Horizontal dashed lines highlight $z^\prime = 2$ and 10. The associated mean of $z^\prime$ versus model parameter $\gamma$ increases above the mean redshift $z=0.51$ in the tail of the BBHs of GWTC-2 for $\gamma>0.75$. $\alpha_B^\prime$ of their PDFs is summarized in Table~\ref{tab:three_cases}, showing steepening when $\mu-t_c$ satisfies a positive correlation ($\gamma > 0.75$). } \label{figR2} \end{figure} Figs.~\ref{figR2} shows the results for (\ref{EQN_P}). Long merger times push the origin of high-mass binaries to high $z^\prime$. In treating the (tail of the) BBH of GWTC-2 as a uniform population, the condition $z^\prime<\infty$ sets a bound $f\lesssim 0.15$. Shown is $z^\prime$ versus $\mu$ for selected values $\gamma$ = 0.30, 0.75, and 1.00 (Case 1-3), in blue, orange, and green, respectively. Table~\ref{tab:three_cases} summarizes the steepening in the PDF$^\prime$ in the source frame compared to the observed PDF. While subtle (hard to discern by eye), the steepening present in (\ref{EQN_P}) is consistent with the anticipated modest change due to $J_z$ shown in Fig.~\ref{figR1}. Steepening appears for $\gamma>0.75$ (Case 3), otherwise absent for $\gamma\le 0.75$ (Case 1-2). 
\begin{table*} \centering \begin{tabular}{cccccc} \hline \hline \multirow{2}{*}{Case} & \multirow{2}{*}{Value} & \multirow{2}{*}{$\mu\--t_{c}$ relation} & \multicolumn{3}{c}{$f = 0.150$} \\ \cline{4-6} & & & $\alpha_B^\prime$ & $\alpha_{B,true}^\prime$ & $\bar{z^\prime}$ \\ \hline 1 & $\gamma < 0.75$ & negatively correlated & $4.10_{-0.73}^{+0.74}$ & $4.80_{-0.73}^{+0.74}$ & 0.86 \\ 2 & $\gamma = 0.75$ & constant & $4.10_{-0.74}^{+0.73}$ & $4.80_{-0.74}^{+0.73}$ & 1.07 \\ 3 & $\gamma > 0.75$ & positively correlated & $4.28_{-0.74}^{+0.73}$ & $5.00_{-0.74}^{+0.73}$ & 2.38\\ \hline \end{tabular} \caption{Estimated power-law index $\alpha_B^\prime$ in the source frame for a model relation of merging time scale $t_c$ and mass parameter $\mu$ of the binaries, for three parameter values of $\gamma$ at $f=0.150$. The case $\gamma > 0.75$ demonstrates steepening in $\alpha_B^\prime$ of the progenitor systems relative to the index $\alpha_B$ of the mass-function at coalescence. } \label{tab:three_cases} \end{table*} Steepening by cosmic time-dilation is noticeable when progenitor and merger redshifts differ by order unity $\left(z^\prime-z\gtrsim 1\right)$. Such may push $\alpha_B^\prime$ across the upper bound $2\alpha_S$ in Eq.~(\ref{EQN_AS}) in the binary IMF of progenitor stellar systems. Avoiding this limits the origin of the BBH progenitors to relatively low-$z$ late-time cosmology, effectively posterior to the peak in the cosmic star formation rate (\citealt{hopkins2006normalization,2014ARAA..52..415M}). \section{Discussion and conclusions}\label{sec:summary} The BBH mass-function of LIGO BBH mergers is identified with the Salpeter IMF of their stellar progenitors in the power-law tail $\mu \geq 31.4 \,M_\odot$ (Fig.~\ref{fig:plaw}), effectively parameterized by mean mass $\mu$ due to the implied tight correlation with chirp mass, Eq.~(\ref{EQN_kappa}). The mass scale Eq.~(\ref{EQN_geo1}) expected from the Salpeter IMF, subject to preserving binary association in the final phase of binary evolution, is consistent with the observed masses in the tail shown in Figs.~\ref{fig:plaw}-\ref{fig:hist}. Following a detailed consideration of GWTC-2 data, our main findings indicate \begin{enumerate} \item A tight correlation Eq.\,(\ref{EQN_r}) between primary and secondary masses, consistent with the paucity of intermediate-mass X-ray binaries; \item A broken power-law mass-function with a tail beyond $\mu\gtrsim 31.4M_\odot$. The power-law index $\alpha_{B,true}\simeq 4.77\pm 0.73$ is consistent with uncorrelated stellar progenitor masses at birth, Eq.~(\ref{eq:psi}), by approaching the limit $2\alpha_S=4.7$ defined by the Salpeter index $\alpha_S$; \item The power-law index in $\mu$ is subject to steepening due to cosmological time-dilation (Fig.~\ref{figR1}), e.g., when mass and orbital separation are positively correlated. The condition Eq.\,(\ref{EQN_AS}) hereby bounds the mean of the progenitor redshift $\bar{z}^\prime$. \item Subject to Eq.\,(\ref{EQN_AS}), Table \ref{tab:three_cases} suggests that a progenitor origin at $z^\prime\gg1$ is excluded, assuming the tail of the BBH population to derive from a uniform population. Progenitors hereby appear to lie in the relatively recent epoch of cosmic star formation, about or posterior to the peak in the star formation rate. \end{enumerate} Conceivably, the last finding can be made more rigorous with BBH surveys from upcoming O4-O5 observations.
Such bounds hold promise to distinguish between an origin related to the peak in the cosmic star formation rate and an association with Pop III stars \citep[e.g.][]{yajima2015can,kulkarni2014chemical,kinugawa2014possible}, previously considered for their relatively high mass of several tens of $M_\odot$ \citep{kinugawa2014possible,hosokawa2011protostellar,kinugawa2016the,kinugawa2021gravitational}. The coexistence, in our findings, of a tight correlation between primary and secondary BH masses and uncorrelated binary progenitor masses is perhaps paradoxical. However, the pathway to black hole binaries from stellar progenitor systems is a complex process of binary stellar evolution including (uncertain) mass-losses in stellar winds \citep[e.g.][]{kriticka2014mass,chen2015parsec,belczynski2012missing,elbert2018counting} and mass-transfer potentially equalizing masses \citep[e.g.][]{kinugawa2014possible}, terminating in two core-collapse supernovae. These processes are subject to a stringent selection criterion of preserving binary association, i.e., a 50\% mass-loss limit (in the idealized case of circular binary motion). Equalizing masses in the progenitor systems offers some hints at their stellar evolution, perhaps with further contributions to steepening in the power-law index from wind mass-loss \citep[cf.][]{vink2001mass,vink2011wind,chen2015parsec}, e.g., in binary association around cold red giants \citep{decin2020stellar}. While a detailed study of these complex radiation-hydrodynamical processes is outside the scope of this work, these processes and their down-selection effects in binary black hole formation are probably instrumental to understanding the detailed nature of the progenitor systems, here identified with ab initio uncorrelated progenitor stellar binary masses in view of $\alpha_{B,true}\simeq 2\alpha_S$ in Eq.~(\ref{alphaB}). \mbox{}\\ {\bf Data Availability.} The data underlying this article were accessed from LIGO. \mbox{}\\ {\bf Acknowledgments.} The authors thank the anonymous reviewer for a detailed reading and constructive comments. This research is supported, in part, by NRF of Korea Nos. 2015R1D1A1A01059793, 2016R1A5A1013277 and 2018044640. Shinna Kim and Shin-Jeong Kim acknowledge support from the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT: MSIT) (No. NRF-2022R1A2C1008706). \bibliographystyle{mnras}
\section{Introduction} All graphs considered in this paper are simple, without loops. Let $G=(V(G),E(G))$ be a graph. We use $\delta(G)$ and $\Delta(G)$ to denote its minimum and maximum degree, respectively. For $S\subseteq V(G)$, $G|_S$ denotes the subgraph of $G$ induced by $S$. Let $\mathsf{Path}_n$, $\mathsf{Cycle}_n$, $\mathsf{Star}_n$ and $\mathsf{Complete}_n$ denote a path, a cycle, a star and a complete graph of order $n$, respectively. Let $\mathsf{Lollipop}_{n-k,k}$ be a lollipop graph of order $n$ obtained by identifying one end of a $\mathsf{Path}_{n-k+1}$ with a vertex of a $\mathsf{Complete}_k$. A graph $G$ is $\ell$-connected if the graph obtained by removing any $\ell-1$ vertices from $G$ is still connected. For two graphs $G$ and $H$, we let $G\cup H$ denote the disjoint union of $G$ and $H$. The friends-and-strangers graphs were introduced by Defant and Kravitz \cite{DK}, and are defined as follows. \begin{defn} Let $X$ and $Y$ be two graphs, each with $n$ vertices. The friends-and-strangers graph $\mathsf{FS}(X,Y)$ of $X$ and $Y$ is a graph whose vertex set consists of all bijections from $V(X)$ to $V(Y)$; two such bijections $\sigma$, $\sigma'$ are adjacent if and only if they differ precisely on two adjacent vertices, say $a,b\in V(X)$ with $\{a,b\}\in E(X)$, and the corresponding images are adjacent in $Y$, i.e. \begin{itemize} \item $\{\sigma(a), \sigma(b)\} \in E(Y)$; \item $\sigma(a)=\sigma'(b)$, $\sigma(b)=\sigma'(a)$ and $\sigma(c)=\sigma'(c)$ for all $c\in V(X)\backslash \{a,b\}.$ \end{itemize} \end{defn} The friends-and-strangers graph $\mathsf{FS}(X,Y)$ can be interpreted as follows. View $V(X)$ as $n$ positions and $V(Y)$ as $n$ people. Two people are friends if and only if they are adjacent in $Y$, and two positions are adjacent if and only if they are adjacent in $X$. A bijection from $V(X)$ to $V(Y)$ represents $n$ people standing on these $n$ positions such that each person stands on precisely one position. At any point in time, two people can swap their positions if and only if they are friends and the two positions they stand on are adjacent. A natural question is how various configurations can be reached from other configurations when multiple such swaps are allowed. This is precisely the information that is encoded in $\mathsf{FS}(X,Y)$. Note that the components of $\mathsf{FS}(X,Y)$ are the equivalence classes of mutually-reachable (by the multiple swaps described above) configurations, so connectivity is the basic aspect of interest in friends-and-strangers graphs. The questions and results in the literature on the friends-and-strangers graph $\mathsf{FS}(X,Y)$ roughly fall into three types: \begin{itemize} \item The structure of $\mathsf{FS}(X,Y)$ when at least one of $X,Y$ is a specific graph, such as a path, a cycle, a lollipop graph, a spider graph and so on, \cite{DDLW}, \cite{DK}, \cite{L}, \cite{J1}, \cite{W}. \item The structure of $\mathsf{FS}(X,Y)$ when neither of $X,Y$ is a specific graph, such as minimum degree conditions on $X$ and $Y$, the case when $X$ has a Hamiltonian path, non-polynomially bounded diameters and so on, \cite{ADK}-\cite{DK}, \cite{J1}, \cite{J2}. \item The structure of $\mathsf{FS}(X,Y)$ when both $X$ and $Y$ are random graphs, \cite{ADK}, \cite{M}, \cite{Wang}. \end{itemize} We note that Milojevic \cite{M} also studied a new model of friends-and-strangers graphs.
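For a minimal illustration of the definition, take $n=2$ and $X=\mathsf{Path}_2$. Then $\mathsf{FS}(X,Y)$ has exactly two vertices, namely the two bijections from $V(X)$ to $V(Y)$, and these are adjacent if and only if the two vertices of $Y$ are adjacent in $Y$. Hence $\mathsf{FS}(\mathsf{Path}_2,Y)$ is connected if and only if $Y=\mathsf{Complete}_2$, in accordance with Theorem \ref{path} below.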
The structure of $\mathsf{FS}(X,Y)$ when $X,Y$ belong to the first type is a basic question on the topic related to friends-and-strangers graphs, and the results on this type can also be used to study the other two types. For example, Alon, Defant and Kravitz \cite{ADK} used the structure of $\mathsf{FS}(\mathsf{Star}_n,Y)$ and $\mathsf{FS}(\mathsf{Lollipop}_{n-3,3},\mathsf{Star}_n)$ in researching the random aspect of friends-and-strangers graphs and minimum degree conditions on $X, Y$ for the connectedness of $\mathsf{FS}(X,Y)$, respectively; Jeong \cite{J2} used the structure of $\mathsf{FS}(\mathsf{Cycle}_n,Y)$ to investigate the connectedness of $\mathsf{FS}(X,Y)$ when $X$ is 2-connected. Fix $X=\mathsf{Path}_n$ or $\mathsf{Complete}_n$, the connectedness of $\mathsf{FS}(X,Y)$ is characterized as follows. \begin{theorem}\cite{DK}\label{path} Let $Y$ be a graph on $n$ vertices. Then $\mathsf{FS}(\mathsf{Path}_n,Y)$ is connected if and only if $Y$ is complete. \end{theorem} \begin{theorem}\cite{GR} \label{complete} Let $Y$ be a graph on $n$ vertices. Then $\mathsf{FS}(\mathsf{Complete}_n,Y)$ is connected if and only if $Y$ is connected. \end{theorem} Defant and Kravitz \cite{DK} started to consider the connectedness of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$. They first established a necessary condition in terms of $\delta(Y)$ for $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ to be connected. \begin{theorem}\cite{DK}\label{t1} Let $Y$ be a graph on $n$ vertices. If $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is connected, then $\delta(Y)\ge n-k+1$. \end{theorem} \noindent Moreover, they tried to characterize the connectedness of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ in the case when $k=3,5$, and obtained the following. \begin{theorem}\cite{DK}\label{L1} Let $Y$ be a graph on $n$ vertices. Then $\mathsf{FS}(\mathsf{Lollipop}_{n-3,3},Y)$ is connected if and only if $\delta(Y)\ge n-2$. \end{theorem} \begin{theorem}\cite{DK}\label{L2} Let $Y$ be a graph on $n$ vertices. Then $\mathsf{FS}(\mathsf{Lollipop}_{n-5,5},Y)$ is connected if $\delta(Y)\ge n-3$, and $\mathsf{FS}(\mathsf{Lollipop}_{n-5,5},Y)$ is disconnected if $\delta(Y)\le n-5$. \end{theorem} Obviously, the connectedness of $\mathsf{FS}(\mathsf{Lollipop}_{n-3,3},Y)$ is completely determined by Theorem \ref{L1}. However, there is a ``gap'' between the lower and upper bounds of $\delta(Y)$ that guarantee $\mathsf{FS}(\mathsf{Lollipop}_{n-5,5},Y)$ to be connected or not by Theorem \ref{L2}. So, Defant and Kravitz \cite{DK} raised a question: Characterize the graphs $Y$ with $\delta(Y)=n-4$ for which $\mathsf{FS}(\mathsf{Lollipop}_{n-5,5},Y)$ is connected. \vskip 2mm In this paper, we give a sufficient and necessary condition for $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ to be connected for all $2\leq k\leq n$, and the main result is as below. \begin{theorem}\label{main} Let $2\le k \le n$ be integers and $Y$ be a graph on $n$ vertices. Then the graph $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is connected if and only if every $k$-vertex induced subgraph of $Y$ is connected, which is equivalent to $Y$ is $(n-k+1)$-connected. \end{theorem} One can see that Theorem \ref{main} strengthens Theorem \ref{t1} and extends Theorems \ref{path} and \ref{complete}. In addition, by Theorem \ref{main}, we can easily deduce the following. \begin{corollary}\label{co} Let $Y$ be a graph on $n$ vertices with $\delta(Y)=n-4$. 
Then the graph $\mathsf{FS}(\mathsf{Lollipop}_{n-5,5},Y)$ is connected if and only if $Y$ does not contain any induced subgraph isomorphic to $\mathsf{Complete}_3\cup \mathsf{Path}_2$ or $\mathsf{Path}_3\cup \mathsf{Path}_2$. \end{corollary} It is clear that Corollary \ref{co} characterizes the graphs $Y$ with $\delta(Y)=n-4$ for which $\mathsf{FS}(\mathsf{Lollipop}_{n-5,5},Y)$ is connected. On the other hand, it is not difficult to check that there are many graphs $Y$ on $n$ vertices with $\delta(Y)=n-4$ which do or do not contain $\mathsf{Complete}_3\cup \mathsf{Path}_2$ or $\mathsf{Path}_3\cup \mathsf{Path}_2$ as an induced subgraph. \section{Proof of Theorem \ref{main}} In order to prove Theorem \ref{main}, we assume that the $\mathsf{Path}_{n-k+1}$ in $\mathsf{Lollipop}_{n-k,k}$ has vertex set $[n-k+1]$ and edge set $\{ \{1,2\}, \{2,3\},\dots, \{n-k,n-k+1\} \}$ and the $\mathsf{Complete}_k$ is on the set $\{n-k+1,\dots,n\}$, where $2\le k \le n$. We divide the proof of Theorem \ref{main} into two parts: connectedness and disconnectedness. \subsection{Connectedness} \begin{proposition}\label{p1} Let $2\le k \le n$ be integers and $Y$ be a graph on $n$ vertices. If every $k$-vertex induced subgraph of $Y$ is connected, then $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is connected. \end{proposition} We need the following result due to Defant, Dong, Lee and Wei \cite{DDLW}. \begin{lemma}\cite{DDLW} \label{any} Let $X$ and $Y$ be connected graphs on $n$ vertices with $\Delta(X)=k\geq 2$. Suppose every induced subgraph of $Y$ with $k$ vertices is connected. Let $\sigma$ be a vertex of $\mathsf{FS}(X,Y)$, and fix $x\in V(X)$ and $y\in V(Y)$. Then there exists a vertex $\sigma'$ in the same component of $\mathsf{FS}(X,Y)$ as $\sigma$ such that $\sigma'(x) = y$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{p1}] We proceed by induction on $n$. Note that $n\geq k$. If $n=k$, then Proposition \ref{p1} holds by Theorem \ref{complete}. Assume that Proposition \ref{p1} is true for $n-1$. We now show it also holds for $n$. Fix an arbitrary vertex $y_0\in V(Y)$ and a vertex $\sigma_0$ of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ satisfying $\sigma_0(1)=y_0$. We claim that any other vertex $\sigma$ of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is in the same component of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ as $\sigma_0$, which implies that $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is connected. Let $\sigma$ be any vertex of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ other than $\sigma_0$. Applying Lemma \ref{any} to $\mathsf{FS}(X,Y)$ with $X=\mathsf{Lollipop}_{n-k,k}$, $x=1$, $y=y_0$ and $\sigma$, there exists a vertex $\sigma'$ in the same component of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ as $\sigma$ such that $\sigma'(1)=y_0=\sigma_0(1)$. The graph $\mathsf{Lollipop}_{n-k,k}|_{\{2,3,\dots,n\}}$ is isomorphic to $\mathsf{Lollipop}_{n-1-k,k}$ and the graph $Y|_{V(Y)\backslash \{y_0\}}$ satisfies the property that every $k$-vertex induced subgraph of $Y|_{V(Y)\backslash \{y_0\}}$ is connected. By the induction hypothesis, $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k}|_{\{2,3,\dots,n\}},Y|_{V(Y)\backslash \{y_0\}})$ is connected, which guarantees that $\sigma'$ and $\sigma_0$ are in the same component of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$, and hence $\sigma$ and $\sigma_0$ are in the same component of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$. \end{proof} \subsection{Disconnectedness} \begin{proposition}\label{p2} Let $Y$ be a graph on $n$ vertices.
If there exists a disconnected induced subgraph $Y_0$ of $Y$ with $k$ vertices, then $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is disconnected. \end{proposition} \begin{proof} Since $Y_0$ is disconnected, we may write $V(Y_0)=A\cup B$ with $A,B\neq\varnothing$, $A\cap B= \varnothing$ and no edges between $A$ and $B$. Let $X=\mathsf{Lollipop}_{n-k,k}$. We say a vertex $\sigma$ of $\mathsf{FS}(X,Y)$ is {\it special} if there exists an $a_0\in A$ such that $\sigma^{-1}(a_0)\in [n-k+1]$ and $\sigma^{-1}(B)\cap \{1,2,\dots,\sigma^{-1}(a_0)\}=\varnothing$, where we write $\sigma^{-1}(B)=\{\sigma^{-1}(y)~|~y\in B\}$. Such a vertex $a_0\in A$ is called a {\it timid} vertex for $\sigma$. It is easy to see that there exist both special and non-special vertices in $\mathsf{FS}(X,Y)$, because a vertex $\sigma'$ is special if $\sigma'(1)\in A$ and non-special if $\sigma'(1)\in B$. We claim that no special vertex is adjacent to a non-special vertex, which implies that $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is disconnected. Suppose to the contrary that a special vertex $\sigma$ is adjacent to a non-special vertex $\tau$. Then there must be two adjacent vertices $a,c$ in $V(X)=[n]$ such that $\tau=\sigma \circ (a \ c)$, where $(a \ c)$ denotes the transposition on $[n]$ that swaps $a$ and $c$. If neither $a$ nor $c$ is $\sigma^{-1}(a_0)$, then $\tau$ is also special with timid vertex $a_0$. So we may assume that $a=\sigma^{-1}(a_0)$, which implies $\sigma(c)\notin B$, since $\sigma(a)=a_0\in A$ is adjacent to $\sigma(c)$ in $Y$ but there are no edges between $A$ and $B$. In addition, $\sigma(c)$ does not belong to $A$ and $c$ does not equal $a-1$, since otherwise $\tau$ would still be special, with timid vertex $\sigma(c)$ or $a_0$, respectively. We further claim that $a$ equals $n-k+1$ and $\sigma^{-1}(A)\cap [n-k]=\varnothing$. Suppose that $a \neq n-k+1$, that is, $a\in [n-k]$; then $c$ equals $a+1$. Since $\tau^{-1}(B)=\sigma^{-1}(B)$, $\tau^{-1}(a_0)=c$ and $\sigma^{-1}(B)\cap \{1,2, \dots, c\}=\varnothing$ (recall that $\sigma^{-1}(B)\cap\{1,\dots,a\}=\varnothing$ and $\sigma(c)\notin B$), we have $\tau^{-1}(B)\cap \{1,2,\dots,\tau^{-1}(a_0) \}=\varnothing$, i.e., $a_0$ is a timid vertex for $\tau$, contradicting that $\tau$ is non-special. So we conclude that $a=n-k+1$. Similarly, if there is a $\sigma^{-1}(a_0') \in \sigma^{-1}(A)\cap [n-k]$, then $\tau^{-1}(B)\cap \{1,2,\dots,\tau^{-1}(a_0') \}=\varnothing$, since $\tau^{-1}(a_0')=\sigma^{-1}(a_0')\le n-k<a$, and so $a_0'$ is a timid vertex for $\tau$, again a contradiction; hence $\sigma^{-1}(A)\cap [n-k]=\varnothing$. The final contradiction arises since the $k+1$ positions in the set $\{c\} \cup \sigma^{-1}(A)\cup \sigma^{-1}(B)$ are pairwise distinct and all contained in $\{n-k+1,\dots,n\}$, which is a set with only $k$ elements. \end{proof} Combining Propositions \ref{p1} and \ref{p2}, we complete the proof of Theorem \ref{main}. \vskip 2mm \noindent{\bf Remark.} Let $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_k$ be positive integers. The \emph{spider} $\mathsf{Spider}(\lambda_1,\lambda_2,\ldots,\lambda_{k})$ is a graph on $n=1+\sum_{i=1}^k \lambda_i$ vertices obtained by connecting one end of each of $k$ disjoint paths of orders $\lambda_1, \lambda_2, \dots, \lambda_k$ to a new common vertex. Defant, Dong, Lee and Wei \cite{DDLW} showed the following. \begin{theorem}\cite{DDLW}\label{spider} Let $\lambda_1\geq\cdots\geq\lambda_k$ be positive integers and $n=\lambda_1+\cdots+\lambda_k+1$. Let $Y$ be a graph on $n$ vertices. If there exists a disconnected induced subgraph of $Y$ with $n-\lambda_1$ vertices, then $\mathsf{FS}(\mathsf{Spider}(\lambda_1,\ldots,\lambda_k),Y)$ is disconnected.
\end{theorem} \noindent We can see that Proposition \ref{p2} strengthens Theorem \ref{spider} since $\mathsf{Spider}(\lambda_1,\ldots,\lambda_k)$ is isomorphic to a spanning subgraph of $\mathsf{Lollipop}_{\lambda_1,n-\lambda_1}$, where $n=\lambda_1+\cdots+\lambda_k+1$. \section{Open problem} Let $2\le k \le n$ be integers. The {\it dandelion} graph $\mathsf{Dand}_{n-k,k}$ is a spider of order $n$ with parameters $\lambda_1=n-k$ and $\lambda_2=\dots=\lambda_k=1$, that is, $\mathsf{Dand}_{n-k,k}$ is obtained by identifying one end of a $\mathsf{Path}_{n-k+1}$ with the center of a $\mathsf{Star}_k$. It is clear that $\mathsf{Dand}_{n-2,2}$ is precisely $\mathsf{Lollipop}_{n-2,2}$, and $\mathsf{Dand}_{n-k,k}$ is a proper spanning subgraph of $\mathsf{Lollipop}_{n-k,k}$ for $k\ge 3$. Defant and Kravitz \cite{DK} showed that $\mathsf{FS}(\mathsf{Lollipop}_{n-3,3},Y)$ is connected if and only if $\mathsf{FS}(\mathsf{Dand}_{n-3,3},Y)$ is connected. This leads us to ask when the edges not in the spanning subgraph $\mathsf{Dand}_{n-k,k}$ of $\mathsf{Lollipop}_{n-k,k}$ are not necessary for the connectedness of $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$. More precisely, we have the following. \begin{problem}\label{p} For what $k$ and $n$, it holds that $\mathsf{FS}(\mathsf{Lollipop}_{n-k,k},Y)$ is connected if and only if $\mathsf{FS}(\mathsf{Dand}_{n-k,k},Y)$ is connected? \end{problem} \noindent By Theorem \ref{main} and a result due to Defant, Dong, Lee and Wei \cite{DDLW}, one can see that the statement holds for $n\ge 2k-1$. On the other hand, by Theorem \ref{complete} and a result of Wilson \cite{W}, the statement is false for $n=k$. \section*{Acknowledgments} This research was supported by NSFC under grant numbers 12161141003 and 11931006.
\section{Wigner representation} \label{appx:wigner_rep} For any $n$-qubit state $\rho$, we define the following complex-valued function $W_\rho : {\cal P} _n \mapsto \mathbb{C}$ as \begin{align} W_\rho( {\bm{u}} ) \coloneqq \frac{1}{2^n} \mathrm{tr} [A^\dagger_ {\bm{u}} \rho], \end{align} where $\{ A_ {\bm{u}} \}$ are the set of $2^{2n}$ \textit{phase point operators} on $n$-qubits, which are defined as follows \begin{align} A_{\bm{0}} \coloneqq \frac{1}{2^n} \sum_{ {\bm{u}} \in {\cal P} _n} D_ {\bm{u}} , \quad A_{ {\bm{u}} } \coloneqq D_ {\bm{u}} A_{\bm{0}} D^\dagger_ {\bm{u}} . \label{eq:A_phase_point} \end{align} As a consequence of \eqref{eq:displacement_reorder}, these phase-point operators can alternatively be expressed as \begin{align} A_ {\bm{u}} = \frac{1}{2^n} \sum_{ {\bm{v}} \in {\cal P} _n} (-1)^{[ {\bm{u}} , {\bm{v}} ]} D_ {\bm{v}} , \end{align} which further reveals that every $A_ {\bm{u}} $ is real in the computational basis. Despite being complex-valued, $W_\rho$ transforms covariantly under the displacement operators - informally speaking, $\rho$ is shifted by the displacement operators around phase space - just like Gross' representation in odd dimensions. Concretely, we consider the Wigner representation of $D_ {\bm{v}} \rho D_ {\bm{v}} ^\dagger$ for an arbitrary phase space displacement $ {\bm{v}} $, which is \begin{align} W_{D_ {\bm{v}} \rho D^\dagger_ {\bm{v}} }( {\bm{u}} ) &= \frac{1}{2^n} \mathrm{tr} [A^\dagger_ {\bm{u}} D_ {\bm{v}} \rho D_ {\bm{v}} ^\dagger]\\ &= \frac{1}{2^n} \mathrm{tr} [D_ {\bm{v}} ^\dagger A^\dagger_ {\bm{u}} D_ {\bm{v}} \rho ]\\ & = \frac{1}{2^n} \mathrm{tr} \left[D_ {\bm{v}} ^\dagger \left(\sum_{ {\bm{a}} \in {\cal P} _n} (-1)^{[ {\bm{u}} , {\bm{a}} ]} D^\dagger_ {\bm{a}} \right) D_ {\bm{v}} \rho\right]\\ & = \frac{1}{2^n} \mathrm{tr} \left[\left(\sum_{ {\bm{a}} \in {\cal P} _n} (-1)^{[ {\bm{u}} , {\bm{a}} ]} (D_ {\bm{a}} D_ {\bm{v}} )^\dagger D_ {\bm{v}} \right) \rho\right]. \end{align} Since $-[ {\bm{a}} , {\bm{v}} ] = -( {\bm{a}} _z \cdot {\bm{v}} _x - {\bm{a}} _x \cdot {\bm{v}} _z) = {\bm{u}} _z \cdot {\bm{a}} _x - {\bm{u}} _x \cdot {\bm{a}} _z = [ {\bm{u}} , {\bm{a}} ]$, we have \begin{align} D_ {\bm{v}} D_ {\bm{a}} = (-1)^{[ {\bm{a}} , {\bm{v}} ]} D_ {\bm{a}} D_ {\bm{v}} \Rightarrow (-1)^{-[ {\bm{a}} , {\bm{v}} ]} D_ {\bm{v}} D_ {\bm{a}} = D_ {\bm{a}} D_ {\bm{v}} \Rightarrow (-1)^{[ {\bm{v}} , {\bm{a}} ]} D_ {\bm{v}} D_ {\bm{a}} = D_ {\bm{a}} D_ {\bm{v}} \end{align} which means for all $ {\bm{u}} , {\bm{v}} \in {\cal P} _n$ \begin{align} W_{D_ {\bm{v}} \rho D^\dagger_ {\bm{v}} }( {\bm{u}} ) &= \frac{1}{2^n} \mathrm{tr} \left[\left(\sum_{ {\bm{a}} \in {\cal P} _n} (-1)^{[ {\bm{u}} , {\bm{a}} ]} [(-1)^{[ {\bm{v}} , {\bm{a}} ]} D_ {\bm{v}} D_ {\bm{a}} ]^\dagger D_ {\bm{v}} \right) \rho\right]\\ &= \frac{1}{2^n} \mathrm{tr} \left[\left(\sum_{ {\bm{a}} \in {\cal P} _n} (-1)^{[ {\bm{u}} , {\bm{a}} ]} (-1)^{[ {\bm{v}} , {\bm{a}} ]} D_ {\bm{a}} ^\dagger D^\dagger_ {\bm{v}} D_ {\bm{v}} \right) \rho\right]\\ &= \frac{1}{2^n} \mathrm{tr} \left[\left(\sum_{ {\bm{a}} \in {\cal P} _n} (-1)^{[ {\bm{u}} + {\bm{v}} , {\bm{a}} ]} D^\dagger_ {\bm{a}} \right) \rho\right]\\ &= W_\rho( {\bm{u}} + {\bm{v}} ), \end{align} which confirms that $W_\rho$ transforms covariantly under the action of the displacement operators. 
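As a one-qubit illustration, the four displacement operators are $\{ \mathbbm{1} , X, Z, iY\}$, so that \begin{align} A_{\bm{0}} = \frac{1}{2}\left( \mathbbm{1} + X + Z + iY\right) = \begin{pmatrix} 1 & 1\\ 0 & 0\end{pmatrix}, \end{align} which is indeed real in the computational basis and of unit trace, but neither Hermitian nor positive semi-definite; accordingly, $W_\rho$ is in general complex-valued already for a single qubit.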
\subsection{Properties of qubit phase point operators and Wigner function} \label{appx:A_and_W_rho_props} We first establish that the phase point operators of a joint system are simply tensor products of phase point operators on its subsystems: \begin{enumerate}[label=\normalfont \textbf{(A\arabic*)}] \label{A_property:qubit_phase_point_op_tensor_product} \item \textit{(Factorization).} On a bipartite system with subsystems $X$ and $Y$, we have that $A_{ {\bm{u}} _X \oplus {\bm{u}} _Y} = A_{ {\bm{u}} _X} \otimes A_{ {\bm{u}} _Y}$. \end{enumerate} \begin{proof} From the definition of $D_ {\bm{u}} $ in \eqref{eq:Dx}, it is clear that \begin{align} D_ {\bm{u}} = D_{ {\bm{u}} _X} \otimes D_{ {\bm{u}} _Y}. \end{align} Let $n_X$ and $n_Y$ be the numbers of qubits in subsystems $A$ and $B$ respectively. Then the zero phase point operator on the bipartite system, $A_{\bm{0}}$, is \begin{align} A_{\bm{0}} &\coloneqq \frac{1}{2^{n_X+n_Y}} \sum_{ {\bm{u}} \in {\cal P} _{AB}} D_ {\bm{u}} = \frac{1}{2^{n_X+n_Y}} \sum_{ {\bm{u}} _X \in {\cal P} _A, {\bm{u}} _Y \in {\cal P} _B} D_{ {\bm{u}} _X \oplus {\bm{u}} _Y} = \frac{1}{2^{n_X+n_Y}} \sum_{ {\bm{u}} _X \in {\cal P} _A} \sum_{ {\bm{u}} _Y \in {\cal P} _B} D_{ {\bm{u}} _X} \otimes D_{ {\bm{u}} _Y} = A_{\bm{0}_X} \otimes A_{\bm{0}_Y}, \end{align} which in turn implies that any phase point operator $A_ {\bm{u}} \coloneqq A_{ {\bm{u}} _X \oplus {\bm{u}} _Y}$ for some $ {\bm{u}} _X \in {\cal P} _A$ and $ {\bm{u}} _Y \in {\cal P} _B$ is \begin{align} A_{ {\bm{u}} } \coloneqq A_{ {\bm{u}} _X \oplus {\bm{u}} _Y} = D_{ {\bm{u}} _X \oplus {\bm{u}} _Y} A_{\bm{0}} D^\dagger_{\bm{u}_A \otimes \bm{u}_B} = \left(D_{ {\bm{u}} _X} A_{\bm{0}_X} D_{ {\bm{u}} _X}^\dagger\right) \otimes \left(D_{ {\bm{u}} _Y} A_{\bm{0}_Y} D_{ {\bm{u}} _Y}^\dagger\right) = A_{ {\bm{u}} _X} \otimes A_{ {\bm{u}} _Y} . \end{align} \end{proof} Property \ref{A_property:qubit_phase_point_op_tensor_product} enables us to break down any $n$-qubit phase-point operator $A_ {\bm{u}} $ as a tensor product of single-qubit phase-point operators, \begin{align} A_ {\bm{u}} = \bigotimes_{i=1}^n A_{ {\bm{u}} _j}, \quad {\bm{u}} = \bigoplus_{i=1}^n {\bm{u}} _j, \label{eq:A_u_breakdown} \end{align} where $ {\bm{u}} _j \in \mathbb{Z}_2 \times \mathbb{Z}_2$ is a co-ordinate in the phase space of the $j$th qubit \emph{only}. It is therefore instructive to calculate the single-qubit phase point operators, which are \begin{align} A_{0,0} &= \frac{1}{2}( \mathbbm{1} + X + Z + iY), A_{0,1} = \frac{1}{2}( \mathbbm{1} - X + Z - iY), \notag \\ A_{1,0} &= \frac{1}{2}( \mathbbm{1} + X - Z - iY), A_{1,1} = \frac{1}{2}( \mathbbm{1} - X - Z + iY). \label{eq:single_qubit_A} \end{align} We next demonstrate how the explicit forms of single-qubit phase point operators can then be leveraged, via \eqref{eq:A_u_breakdown}, to prove two further properties for general $n$-qubit phase point operators. In particular, we show how distinct $n$-qubit phase point operators are orthogonal under the Hilbert-Schmidt inner product: \begin{enumerate}[label=\normalfont \textbf{(A\arabic*)}] \setcounter{enumi}{1} \item \textit{(Orthogonality).} \label{A_property:orthogonality} Let $A_ {\bm{u}} $ and $A_ {\bm{v}} $ be two $n$-qubit phase point operators. Then $ \mathrm{tr} [A^\dagger_{ {\bm{u}} } A_{ {\bm{v}} }] = 2^n \delta_{ {\bm{u}} , {\bm{v}} }$. 
\end{enumerate} \begin{proof} Let us first decompose $ {\bm{u}} = \bigoplus_{i=1}^n {\bm{u}} _j$ and $ {\bm{v}} = \bigoplus_{i=1}^n {\bm{v}} _j$, where $ {\bm{u}} _j$ and $ {\bm{v}} _j$ are phase point co-ordinates on the $j$th qubit only. By \eqref{eq:single_qubit_A}, we have that \begin{align} \mathrm{tr} [A^\dagger_{ {\bm{u}} _j} A_{ {\bm{v}} _j}] = 2 \delta_{ {\bm{u}} _j, {\bm{v}} _j}. \end{align} Therefore, \begin{align} \label{eq:A_orthogonal} \mathrm{tr} [A^\dagger_ {\bm{v}} A_ {\bm{u}} ] = \Pi_{j=1}^n \mathrm{tr} [A^\dagger_{ {\bm{u}} _j} A_{ {\bm{v}} _j} ] = \Pi_{j=1}^n 2\delta_{ {\bm{v}} _j, {\bm{u}} _j} = 2^n \delta_{ {\bm{u}} , {\bm{v}} }. \end{align} \end{proof} Since there is one phase point operator defined per point in phase space, there are $\abs{P_n} = \abs{\mathbb{Z}_2^n \times \mathbb{Z}_2^n} = 4^n$ phase point operators on $n$-qubits. Property \ref{A_property:orthogonality} thus implies $\{A_ {\bm{u}} \}_{ {\bm{u}} \in {\cal P} _n}$ forms an orthogonal complex basis for $\mathbb{C}^{2^n \times 2^n}$, i.e. the complex vector space of $2n \times 2n$ complex matrices under the Hilbert-Schmidt inner product. Therefore, $W_\rho$ is an \emph{informationally complete} representation of $n$-qubit states. More precisely, any $n$-qubit quantum state $\rho$ can be uniquely decomposed as \begin{equation} \rho = \sum_{ {\bm{u}} } W_\rho( {\bm{u}} ) A_{ {\bm{u}} }, \end{equation} where $W_\rho ( {\bm{u}} ) \coloneqq \frac{1}{2^n} \mathrm{tr} [A^\dagger( {\bm{u}} ) \rho]$ is a complex function on $ {\cal P} _n$. Furthermore, every phase point operator has trace 1: \begin{enumerate}[label=\normalfont \textbf{(A\arabic*)}] \setcounter{enumi}{2} \item \textit{(Unit trace).} \label{A_property:unit_trace} Let $A_ {\bm{u}} $ be any $n$-qubit phase point operator. Then we have $ \mathrm{tr} [A_{ {\bm{u}} }] = 1$. \end{enumerate} \begin{proof} Let us first decompose $ {\bm{u}} = \bigoplus_{j=1}^n {\bm{u}} _j$, where $ {\bm{u}} _j$ is a point in the phase space of the $j$th qubit. We see that $ \mathrm{tr} [A_{ {\bm{u}} _j}] = 1$ from From \eqref{eq:single_qubit_A}. Therefore, \begin{align} \mathrm{tr} [A_ {\bm{u}} ] = \Pi_{j=1}^n \mathrm{tr} [A_{ {\bm{u}} _j}] = \Pi_{j=1}^n 1 = 1. \end{align} \end{proof} Property \ref{A_property:unit_trace} implies that all $n$-qubit functions are \emph{normalized}. Since any $n$-qubit state $\rho$ has trace 1, we have that \begin{align} 1 = \mathrm{tr} [\rho] = \mathrm{tr} \left[\sum_{ {\bm{u}} \in {\cal P} _n} W_\rho( {\bm{u}} ) A_ {\bm{u}} \right] = \sum_{ {\bm{u}} \in {\cal P} _n} W_\rho( {\bm{u}} ) \mathrm{tr} [A_ {\bm{u}} ] = \sum_{ {\bm{u}} \in {\cal P} _n} W_\rho( {\bm{u}} ), \end{align} where the last equality is established by the unit trace of phase point operators. We will also find it useful to identify the following property of phase point operators: \begin{enumerate}[label=\normalfont \textbf{(A\arabic*)}] \setcounter{enumi}{3} \item \textit{(Resolution of the identity).} \label{A_property:identity} \begin{align} \sum_{ {\bm{u}} \in {\cal P} _n} A_ {\bm{u}} = 2^n \mathbbm{1} _n \end{align} \end{enumerate} \begin{proof} Adopting the decomposition of each $A_ {\bm{u}} $ in \eqref{eq:A_u_breakdown}, we see that \begin{align} \sum_{ {\bm{u}} \in {\cal P} _n} A_ {\bm{u}} = \sum_{ {\bm{u}} _1 \in {\cal P} _1},\dots,\sum_{ {\bm{u}} _n \in {\cal P} _1} \left(\bigotimes_{j=1}^n A_{ {\bm{u}} _j}\right) = \bigotimes_{i=1}^n \left(\sum_{ {\bm{u}} _j \in {\cal P} _1} A_{ {\bm{u}} _j} \right). 
\end{align} Using the explicit forms of single-qubit phase-point operators in \eqref{eq:single_qubit_A}, we calculate that \begin{align} \sum_{ {\bm{v}} \in {\cal P} _1} A_ {\bm{v}} = 2 \mathbbm{1} , \end{align} from which the result immediately follows. \end{proof} In the next section we focus in on qubit states that have real components in the computational basis. \subsection{Wigner representation for rebits} \label{appx:rebit_rep} Any $n$-qubit state $\rho$ can be decomposed as \begin{equation} \rho = \left[\frac{1}{2} \left(\rho + \rho^T\right)\right] + i\left[\frac{-i}{2}\left(\rho - \rho^T\right)\right], \end{equation} where the transposition is taken with respect to the computational basis. Because $\rho^* = \rho^T$ in any basis, we can identify \begin{align} \rho^{(0)} \coloneqq \frac{1}{2} (\rho + \rho^T) = \mathfrak{Re}(\rho),\ \rho^{(1)} = i\left[\frac{-i}{2}(\rho - \rho^T)\right] = \mathfrak{Im}(\rho), \label{eq:rho_real_im_defn} \end{align} i.e. $\rho^{(0)}$ and $\rho^{(1)}$ are respectively the real and imaginary components of the density matrix of $\rho$ in the computational basis. We will first prove Lemma \ref{lemma:rho_W_real_im_correspondence}, which shows there is a direct correspondence between the real/imaginary components of a state's Wigner representation and the real/imaginary components of its density matrix in the computational basis: \rhoWrealim* \begin{proof} Using the identification $\rho^{(0)} = \mathfrak{Re}(\rho)$ and $\rho^{(1)} = \mathfrak{Re}(\rho)$, we can then decompose $W_\rho( {\bm{u}} )$ as \begin{align} W_\rho( {\bm{u}} ) = \frac{1}{2^n} \mathrm{tr} \left[A^\dagger_ {\bm{u}} \rho^{(0)}\right] + i \frac{1}{2^n} \mathrm{tr} \left[A^\dagger_ {\bm{u}} \rho^{(1)}\right]. \end{align} We have noted in the main text that $A_ {\bm{u}} $ is always real. Since $\rho^{(0)}$ and $\rho^{(1)}$ are themselves real by their definition in \eqref{eq:rho_real_im_defn}, we conclude that $ \mathrm{tr} [A^\dagger_ {\bm{u}} \rho^{(0)}]$ and $ \mathrm{tr} [A^\dagger_ {\bm{u}} \rho^{(1)}]$ are both real for all $ {\bm{u}} \in {\cal P} _n$. Therefore, \begin{align} &\mathfrak{Re}(W_\rho( {\bm{u}} )) = \frac{1}{2^n} \mathrm{tr} \left[A^\dagger_ {\bm{u}} \rho^{(0)}\right] = W_{\rho^(0)}( {\bm{u}} )\\ &\mathfrak{Im}(W_\rho( {\bm{u}} )) = \frac{1}{2^n} \mathrm{tr} \left[A^\dagger_ {\bm{u}} \rho^{(1)}\right] = W_{\rho^{(1)}}( {\bm{u}} ) \end{align} \end{proof} An $n$-rebit Wigner representation $W^{(0)}_\rho$ was introduced by Delfosse et al.~\cite{delfosse2015}, which is defined as which is \begin{align} W^{(0)}_\rho( {\bm{u}} ) \coloneqq \frac{1}{2^n} \mathrm{tr} [A^{(0)}_ {\bm{u}} \rho], \label{eq:W_delfosse} \end{align} for all $ {\bm{u}} \in {\cal P} _n$, where \begin{align} A_{ {\bm{u}} }^{(0)} &\coloneqq \frac{1}{2^n} \sum_{ {\bm{a}} \in {\cal P} ^0} (-1)^{[ {\bm{u}} , {\bm{a}} ]} D_{ {\bm{a}} } \label{eq:ax_rebit} \end{align} for \begin{align} {\cal P} ^0_n \coloneqq \left\{ {\bm{a}} \, : \, {\bm{a}} _x \cdot {\bm{a}} _z = 0 \right\}. \end{align} By comparison with \eqref{eq:W_rho} and \eqref{eq:A_phase_point}, we see that the difference between $W_\rho( {\bm{u}} )$ and $W^{(0)}_ {\bm{u}} $ comes down to the fact that $A_ {\bm{u}} $ sums over all displacement operators defined over the phase space $ {\cal P} _n$, whereas $A^{(0)}_ {\bm{u}} $ only sums over displacement operators defined over the \emph{subspace} $ {\cal P} ^{(0)}_n$. 
By \eqref{eq:Dx}, we see that the condition $ {\bm{a}} _x \cdot {\bm{a}} _z = 0$ implies $D_ {\bm{a}} ^\dagger = D_ {\bm{a}} ^T = D_ {\bm{a}} $, i.e. that $D_ {\bm{a}} $ is \emph{real symmetric}, from which $A^{(0)\dagger}_ {\bm{u}} = A^{(0)T}_ {\bm{u}} = A^{(0)}_ {\bm{u}} $, i.e. that $A^{(0)}_ {\bm{u}} $ is real symmetric, immediately follows. Each $D_ {\bm{a}} \in {\cal P} ^{(0)}_n$ lies inside the (real) span of $n$-rebit states, since that is the vector space of all real $2^n \times 2^n$ symmetric matrices. It will be helpful to introduce the complement of $ {\cal P} ^{(0)}_n$ in $ {\cal P} _n$, \begin{align} {\cal P} ^{(1)}_n &\coloneqq \{ {\bm{a}} \, : \, {\bm{a}} _x \cdot {\bm{a}} _z = 1 \} \end{align} and the set of \emph{real anti-symmetric} phase point operators \begin{align} A^{(1)}_{ {\bm{u}} } \coloneqq \frac{1}{2^n} \sum_{ {\bm{a}} \in {\cal P} ^1} (-1)^{[ {\bm{u}} , {\bm{a}} ]} D_{ {\bm{a}} } \text{ for any } {\bm{u}} \in {\cal P} _n. \label{eq:A_IH} \end{align} The anti-symmetry ($A_ {\bm{u}} ^{(1)\dagger}=A_ {\bm{u}} ^{(1)T} = -A_ {\bm{u}} $) follows from the fact that $D_ {\bm{x}} ^\dagger = D_ {\bm{x}} ^T = - D_ {\bm{x}} $ whenever $ {\bm{x}} \in {\cal P} ^{(1)}_n$. We have seen that every displacement operator is either symmetric and anti-symmetric. The fact that $A^{(0)}_ {\bm{u}} $ only sums over \emph{symmetric} displacement operators defined on $ {\cal P} _n$ is the reason why, unlike $A_ {\bm{u}} $, it fails to be locally tomographic. For example, while ${A_{(0,0)} = \frac{1}{2}( \mathbbm{1} + X + Z + iY)}$, ${A^{(0)}_{(0,0)} = \frac{1}{2}( \mathbbm{1} + X + Z)}$. However, some global symmetric displacement operators are formed as a tensor product of \emph{anti-symmetric} local displacement operators. For instance, $A_{(0,0) \oplus (0,0)} = A_{(0,0)} \otimes A_{(0,0)}$ sums over the global two-qubit symmetric displacement operator $(iY) \otimes (iY)$, which is formed from the anti-symmetric local displacement operator $(iY)$ in each $A_{(0,0)}$. Because $A^{(0)}_{(0,0)\oplus (0,0)}$ also sums over $(iY) \otimes (iY)$, yet $A^{(0)}_{(0,0)}$ does not contain an $(iY)$ component, it is not locally tomographic. By \eqref{eq:ax_rebit} and \eqref{eq:A_IH}, we see that each $A_ {\bm{u}} $ splits up as \begin{align} A_ {\bm{u}} = A^{(1)}_ {\bm{u}} + A^{(1)}_ {\bm{u}} \end{align} We can correspondingly split up the Wigner representation of $\rho$ as \begin{align} W_\rho( {\bm{u}} ) &= \frac{1}{2^n} \mathrm{tr} [A^\dagger_ {\bm{u}} \rho ] =\frac{1}{2^n} \mathrm{tr} \left[\left(A^{(0)}_{ {\bm{u}} } + A^{(1)}_ {\bm{u}} \right)^\dagger\rho \right] = \frac{1}{2^n} \mathrm{tr} \left[A^{(0)\dagger}_{ {\bm{u}} } \rho \right] + \frac{1}{2^n} \mathrm{tr} \left[A^{(1)\dagger}_{ {\bm{u}} }\rho \right] \eqqcolon W_{\rho}^{(0)}( {\bm{u}} ) + W_{\rho}^{(1)}( {\bm{u}} ), \end{align} where \begin{equation} W_{\rho}^{(1)}( {\bm{x}} ) :=\frac{1}{2^n} \mathrm{tr} \left[A^{(1)\dagger}_{\bm{x}}\rho \right]=-\frac{1}{2^n} \mathrm{tr} \left[A^{(1)}_{\bm{x}}\rho \right]. 
\end{equation} We can then prove \begin{lemma} \label{lemma:ours_delfosse_relationship} Given any $n$-qubit state $\rho$, we have that \begin{align} &\mathfrak{Re}(W_\rho( {\bm{u}} )) = W^{(0)}_\rho( {\bm{u}} ) \\ &i\mathfrak{Im}(W_\rho( {\bm{u}} )) = W^{(1)}_\rho( {\bm{u}} ) \end{align} \end{lemma} \begin{proof} For $k=0,1$, \begin{align} \left[W^{(k)}_\rho( {\bm{u}} )\right]^* &= \frac{1}{2^n} \mathrm{tr} \left[A^{(k)\dagger}_{ {\bm{u}} }\rho \right]^*\\ &= \frac{1}{2^n} \mathrm{tr} \left[(A^{(k)\dagger}_{ {\bm{u}} }\rho)^\dagger \right]\\ &= \frac{1}{2^n} \mathrm{tr} \left[(-1)^k(A^{(k)}_{ {\bm{u}} }\rho)\right]= (-1)^k W^{(k)}_\rho( {\bm{u}} ), \end{align} which implies $W^{(0)}_\rho( {\bm{u}} ) = \mathfrak{Re}[W_\rho( {\bm{u}} )]$ is the real component while $W^{(1)}_\rho( {\bm{u}} ) = i\mathfrak{Im}[W_\rho( {\bm{u}} )]$. \end{proof} By combining Lemmas \ref{lemma:rho_W_real_im_correspondence} and \ref{lemma:ours_delfosse_relationship}, we arrive at \begin{align} W_{\mathfrak{Re}[\rho]}( {\bm{u}} ) = W^{(0)}_ {\bm{u}} (\rho). \end{align} When $\rho$ is an $n$-rebit state, $\mathfrak{Re}(\rho) = \rho$, so $W_\rho( {\bm{u}} ) = W^{(0)}_ {\bm{u}} (\rho)$. \subsection{Wigner representation of qubit channels} \label{appx:wigner_channel_rep} We recall from the main text that the Wigner representation of a channel $ {\cal E} : {\cal B} ( {\cal H} ^n_2) \rightarrow {\cal B} ( {\cal H} ^m_2)$ is the linear map $W_ {\cal E} : {\cal P} _n \rightarrow {\cal P} _m$ defined as \begin{align} W_{ {\cal E} }( {\bm{v}} | {\bm{u}} ) \coloneqq 2^{2n} W_{ {\cal J} ( {\cal E} )}( {\bm{u}} \oplus {\bm{v}} ),\ \forall {\bm{v}} \in {\cal P} _m, {\bm{u}} \in {\cal P} _n, \end{align} where the Choi state~\cite{watrous_2018} of $ {\cal E} $, $ {\cal J} ( {\cal E} )$, is defined as $ {\cal J} ( {\cal E} ) = ( \mathcal{I} \otimes {\cal E} ) \ketbra{\phi^+_n}$ for the canonical maximally entangled state $\ket{\phi^+_n}$ on two sets of $n$ qubits, \begin{align} \ket{\phi^+_n}\coloneqq \frac{1}{\sqrt{2n}} \left(\sum_{ {\bm{k}} \in \{0,1\}^n} \ket{ {\bm{k}} } \otimes \ket{ {\bm{k}} }\right). \end{align} We note that for $i = 1,2,\dots,n$, we have that \begin{align} Z_i Z_{n+i} \ket{\phi^+_n} = \frac{1}{\sqrt{2n}} \left(\sum_{ {\bm{k}} \in \{0,1\}^n} Z_i \ket{ {\bm{k}} } \otimes Z_i \ket{ {\bm{k}} }\right) = \frac{1}{\sqrt{2n}} \left(\sum_{ {\bm{k}} \in \{0,1\}^n} (-1)^{k_i} \ket{ {\bm{k}} } \otimes (-1)^{k_i} \ket{ {\bm{k}} }\right) = \ket{\phi^+_n}. \end{align} Therefore, $\ket{\phi^+_n}$ is stabilized by $Z_i Z_{n+i}$ for $i = 1,2,\dots,n$. Furthermore, we have that $X_i\ket{ {\bm{k}} } = \ket{ {\bm{k}} '}$, where $ {\bm{k}} '$ is identical to $ {\bm{k}} $ except that its $i$th bit has been flipped. Since the set of $n$-bit binary strings and the set of $n$-bit binary strings whose $i$th bit has been flipped are identical, we conclude that \begin{align} X_i X_{n+i} \ket{\phi^+_n} = \frac{1}{\sqrt{2n}} \left(\sum_{ {\bm{k}} \in \{0,1\}^n} X_i\ket{ {\bm{k}} } \otimes X_i\ket{ {\bm{k}} }\right) = \frac{1}{\sqrt{2n}} \left(\sum_{ {\bm{k}} \in \{0,1\}^n} \ket{ {\bm{k}} '} \otimes \ket{ {\bm{k}} '} \right) = \frac{1}{\sqrt{2n}} \left(\sum_{ {\bm{k}} ' \in \{0,1\}^n} \ket{ {\bm{k}} '} \otimes \ket{ {\bm{k}} '} \right) = \ket{\phi^+_n}. \end{align} Therefore, $\ket{\phi^+_n}$ is stabilized by $X_i X_{n+i}$ for $i=1,2,\dots,n$. As $[Z_i Z_{n+i}, X_j X_{n+j}] = 0$ for all $i,j=1,2,\dots,n$, we have now found $2n$ commuting and independent stabilizers for $\ket{\phi^+_n}$. 
We therefore conclude that the stabilizer group of $\ket{\phi^+_n}$ is $\langle Z_i Z_{n+i}, X_i X_{n+i} \rangle_{i=1,\dots,n}$, which means $\ket{\phi^+_n}$ is a \emph{CSS state}. The factorization property (\ref{A_property:qubit_phase_point_op_tensor_product}) of phase point operators implies that
\begin{align}
W_{{\cal E}}({\bm{y}}|{\bm{x}}) &\coloneqq 2^{2n} W_{{\cal J}({\cal E})}({\bm{x}} \oplus {\bm{y}}) = \frac{2^{2n}}{2^{n+m}} \mathrm{tr}\left[\left(A^\dagger_{{\bm{x}} \oplus {\bm{y}}}\right) {\cal J}({\cal E})\right] = \frac{2^n}{2^m} \mathrm{tr}\left[\left(A^\dagger_{{\bm{x}}} \otimes A^\dagger_{{\bm{y}}}\right) {\cal J}({\cal E})\right].
\end{align}
Using the identity ${{\cal E}(X) = 2^n \mathrm{tr}_n\left[(X^T \otimes \mathbbm{1}) {\cal J}({\cal E})\right]}$ for transposition taken with respect to the computational basis, and recalling that $A_{\bm{u}}$ is real in the computational basis, we then conclude
\begin{align}
W_{{\cal E}}({\bm{y}}|{\bm{x}}) = \frac{1}{2^m} \mathrm{tr}[A^\dagger_{{\bm{y}}} {\cal E}((A_{{\bm{x}}}^\dagger)^T)] = \frac{1}{2^m} \mathrm{tr}[A^\dagger_{{\bm{y}}} {\cal E}(A^*_{{\bm{x}}})] = \frac{1}{2^m} \mathrm{tr}[A^\dagger_{{\bm{y}}} {\cal E}(A_{{\bm{x}}})]. \label{eq:W_E_defn}
\end{align}
Therefore, if $\sigma = {\cal E}(\rho)$, then we obtain \eqref{eq:W_E_in_action} from the main text, i.e.
\begin{align}
W_{\sigma}({\bm{x}}) &= \frac{1}{2^m} \mathrm{tr}\left[A^\dagger_{{\bm{x}}} {\cal E}(\rho)\right] \\
&= \frac{1}{2^m} \mathrm{tr}\left[{\cal E}\left(\sum_{{\bm{y}} \in {\cal P}_n} W_{\rho}({\bm{y}}) A_{{\bm{y}}}\right) A^\dagger_{{\bm{x}}}\right] \\
&= \frac{1}{2^m} \sum_{{\bm{y}} \in {\cal P}_n} \mathrm{tr}\left[A^\dagger_{{\bm{x}}} {\cal E}\left(A_{{\bm{y}}}\right)\right] W_{\rho}({\bm{y}}) \\
&= \sum_{{\bm{y}} \in {\cal P}_n} W_{\cal E}({\bm{x}}|{\bm{y}}) W_{\rho}({\bm{y}}).
\end{align}
We thereby see that if ${\cal E}$ maps $\rho$ to $\sigma$, then $W_{\cal E}$ is a matrix that maps $W_\rho$ to $W_\sigma$, and is justifiably regarded as the Wigner representation of ${\cal E}$ because it achieves at the level of phase space representation what ${\cal E}$ achieves at the level of states. By property \ref{A_property:identity} of the phase point operators, we have that $\sum_{{\bm{v}} \in {\cal P}_m} A_{\bm{v}} = 2^m \mathbbm{1}_m$. By applying this to the alternative formulation of $W_{\cal E}$ in \eqref{eq:W_E_defn}, we see that
\begin{align}
\sum_{{\bm{v}} \in {\cal P}_m} W_{\cal E}({\bm{v}}|{\bm{u}}) = \frac{1}{2^m} \sum_{{\bm{v}} \in {\cal P}_m} \mathrm{tr}\left[A^\dagger_{{\bm{v}}} {\cal E}(A_{{\bm{u}}})\right] = \frac{1}{2^m} \mathrm{tr}\left[\left(\sum_{{\bm{v}} \in {\cal P}_m} A_{{\bm{v}}}\right)^\dagger {\cal E}(A_{{\bm{u}}})\right] = \frac{1}{2^m} \mathrm{tr}[2^m \mathbbm{1}_m {\cal E}(A_{\bm{u}})] = \mathrm{tr}[{\cal E}(A_{\bm{u}})].
\end{align}
Then, since ${\cal E}$ is trace preserving and $\mathrm{tr}[A_{\bm{u}}] = 1$ (property \ref{A_property:unit_trace}), we obtain \eqref{eq:W_normalization} from the main text, i.e.,
\begin{align}
\sum_{{\bm{v}} \in {\cal P}_m} W_{\cal E}({\bm{v}}|{\bm{u}}) = \mathrm{tr}[A_{\bm{u}}] = 1.
\end{align}
This means every column of $W_{\cal E}$ sums up to 1.
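As a quick numerical sanity check of \eqref{eq:W_E_defn} and of this column normalization, the following short Python sketch builds the four single-qubit phase point operators from the displacement operators $\{\mathbbm{1}, X, Z, iY\}$ together with an assumed symplectic form $[{\bm{u}},{\bm{a}}] = u_x a_z + u_z a_x\ (\mathrm{mod}\ 2)$ (a convention chosen here only so that $A_{(0,0)} = \frac{1}{2}(\mathbbm{1}+X+Z+iY)$ as quoted above), assembles $W_{\cal E}$ for an illustrative single-qubit dephasing channel, and confirms that its columns sum to 1 and that it maps $W_\rho$ to $W_{{\cal E}(\rho)}$. The specific channel and input state are illustrative assumptions, not fixed by the text.
\begin{verbatim}
import numpy as np
from itertools import product

# Single-qubit displacement operators; the symplectic form below is an
# assumed convention chosen to reproduce A_(0,0) = (1/2)(I + X + Z + iY).
I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)
iY = np.array([[0, 1], [-1, 0]], dtype=complex)
D  = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): iY}

def symp(u, a):                      # [u, a] = u_x a_z + u_z a_x (mod 2)
    return (u[0] * a[1] + u[1] * a[0]) % 2

def A(u):                            # single-qubit phase point operator A_u
    return 0.5 * sum((-1) ** symp(u, a) * D[a] for a in D)

points = list(product((0, 1), repeat=2))

def W_state(rho):                    # W_rho(u) = (1/2) tr[A_u^dag rho]
    return np.array([0.5 * np.trace(A(u).conj().T @ rho) for u in points])

def W_channel(E):                    # W_E(y|x) = (1/2) tr[A_y^dag E(A_x)]
    return np.array([[0.5 * np.trace(A(y).conj().T @ E(A(x)))
                      for x in points] for y in points])

p = 0.3                              # illustrative dephasing channel
E = lambda M: (1 - p) * M + p * Z @ M @ Z

WE = W_channel(E)
print(np.allclose(WE.sum(axis=0), 1))                       # columns sum to 1
rho = np.array([[0.5, 0.25], [0.25, 0.5]], dtype=complex)   # a rebit state
print(np.allclose(WE @ W_state(rho), W_state(E(rho))))      # W_sigma = W_E W_rho
\end{verbatim}
The same construction extends to several qubits by taking tensor products of the $A_{\bm{u}}$, as guaranteed by the factorization property (\ref{A_property:qubit_phase_point_op_tensor_product}).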
Finally, we show that the representation we have chosen respects sequential and parallel composition of processes. Let ${\cal E}: {\cal B}({\cal H}^n_2) \rightarrow {\cal B}({\cal H}^m_2)$ and ${\cal F}: {\cal B}({\cal H}^k_2) \rightarrow {\cal B}({\cal H}^l_2)$ be two multiqubit channels. Since $\{A_{\bm{y}}\}_{{\bm{y}} \in {\cal P}_m}$ form an orthogonal basis for the $2^m \times 2^m$ complex matrices under the Hilbert-Schmidt inner product, we have that ${{\cal E}(A_{\bm{x}}) = \frac{1}{2^m} \sum_{{\bm{y}} \in {\cal P}_m} \mathrm{tr}[A^\dagger_{\bm{y}} {\cal E}(A_{\bm{x}})] A_{\bm{y}}}$. Therefore, when $m = k$, we have that
\begin{align}
W_{[{\cal F} \circ {\cal E}]}({\bm{z}}|{\bm{x}}) &= \frac{1}{2^l} \mathrm{tr}[A^\dagger_{\bm{z}} {\cal F} \circ {\cal E}(A_{\bm{x}})] \\
&= \frac{1}{2^l} \mathrm{tr}\left[A^\dagger_{\bm{z}} {\cal F}\left(\frac{1}{2^m} \sum_{{\bm{y}} \in {\cal P}_m} \mathrm{tr}[A^\dagger_{\bm{y}} {\cal E}(A_{\bm{x}})] A_{\bm{y}}\right)\right] \\
&= \sum_{{\bm{y}} \in {\cal P}_m} \frac{1}{2^l} \mathrm{tr}[A^\dagger_{\bm{z}} {\cal F}(A_{\bm{y}})] \frac{1}{2^m} \mathrm{tr}[A^\dagger_{\bm{y}} {\cal E}(A_{\bm{x}})] \\
&= \sum_{{\bm{y}} \in {\cal P}_m} W_{{\cal F}}({\bm{z}}|{\bm{y}}) W_{\cal E}({\bm{y}}|{\bm{x}}),
\end{align}
or in matrix notation,
\begin{align}
W_{{\cal F} \circ {\cal E}} = W_{\cal F} W_{\cal E}.
\end{align}
Furthermore, due to the factorization property of the phase point operators, we have that
\begin{align}
W_{{\cal E} \otimes {\cal F}}({\bm{u}} \oplus {\bm{v}}|{\bm{x}} \oplus {\bm{y}}) = \frac{1}{2^{(m+l)}} \mathrm{tr}[(A^\dagger_{\bm{u}} \otimes A^\dagger_{\bm{v}})\, {\cal E} \otimes {\cal F}(A_{\bm{x}} \otimes A_{\bm{y}})] = \frac{1}{2^m} \mathrm{tr}[A^\dagger_{\bm{u}} {\cal E}(A_{\bm{x}})] \frac{1}{2^l} \mathrm{tr}[A^\dagger_{\bm{v}} {\cal F}(A_{\bm{y}})] = W_{\cal E}({\bm{u}}|{\bm{x}}) W_{\cal F}({\bm{v}}|{\bm{y}}),
\end{align}
or in matrix notation,
\begin{align}
W_{{\cal E} \otimes {\cal F}} = W_{\cal E} \otimes W_{\cal F}.
\end{align}
\section{Completely CSS-preserving operations}
\subsection{Completely CSS-preserving unitaries}
\label{appx:CCSSP_vs_CSSP}
The $n$-qubit CSS-preserving unitaries are~\cite{delfosse2015}
\begin{equation}
{\cal G}_+(n) \coloneqq \langle H(n), CNOT_{i,j}, X_i, Z_i\rangle_{i,j=1,\dots,n, i \neq j},
\end{equation}
where we have defined the \emph{collective Hadamard gate} on $n$ qubits, $H(n) \coloneqq H^{\otimes n}$. Any unitary $U_+ \in {\cal G}_+(n)$ can be written as $U_+ = [H(n)]^a U$ for some $a \in \{0,1\}$ and $U \in {\cal G}(n)$, where we recall
\begin{equation}
{\cal G}(n) \coloneqq \langle CNOT_{i,j}, Z_i, X_i \rangle_{i,j = 1,\dots,n, i \neq j}.
\end{equation}
This follows from the commutation relations satisfied by the generators of ${\cal G}(n)$ and the fact that $H(n)$ is self-inverse. In particular, we have that $H(n) CNOT_{i,j} = CNOT_{j,i} H(n)$ and $H(n) X_i = Z_i H(n)$. Since every generator of ${\cal G}(n)$ is in turn a generator of ${\cal G}_+(n')$ for all $n' \ge n$, every member of ${\cal G}(n)$ is \emph{completely} CSS-preserving. Therefore, the only members of ${\cal G}_+(n)$ that are \emph{not} also completely CSS-preserving are those of the form $H(n) U$ for some $U \in {\cal G}(n)$, since $H(n)$ is not a member of ${\cal G}(n')$ for any $n' > n$. We conclude that ${\cal G}(n)$ must be the group of \emph{completely CSS-preserving unitaries} on $n$ qubits.
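The commutation relations invoked above can be checked directly. The following minimal Python sketch (an illustrative check, not part of the argument) verifies on two qubits that the collective Hadamard exchanges the control and target of a $CNOT$ and maps $X_i$ to $Z_i$, which is what allows any $U_+ \in {\cal G}_+(n)$ to be pushed into the form $[H(n)]^a U$ with $U \in {\cal G}(n)$.
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# CNOT_{1,2} (control qubit 1) and CNOT_{2,1} (control qubit 2),
# with qubit 1 taken as the first tensor factor.
CNOT_12 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_21 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
H2 = np.kron(H, H)                              # collective Hadamard H(2)

print(np.allclose(H2 @ CNOT_12, CNOT_21 @ H2))  # H(2) CNOT_{1,2} = CNOT_{2,1} H(2)
print(np.allclose(H @ X, Z @ H))                # H X = Z H
print(np.allclose(H @ Z, X @ H))                # H Z = X H
\end{verbatim}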
\subsection{Completely CSS-preserving measurements}
The projective measurement of any $n$-qubit Pauli observable $S$ is carried out using two projectors ${P_\pm(S) \coloneqq \frac{1}{2}(\mathbbm{1}_n \pm S)}$ corresponding to the outcomes $\pm 1$. We can then denote the post-selection channel for the $\pm 1$ outcome as ${\cal P}_\pm(S) \coloneqq P_\pm(S) (\cdot) P_\pm(S)$. We next establish which Pauli observables can be projectively measured in CSS-preserving and \emph{completely} CSS-preserving ways:
\begin{lemma} \label{lemma:CSS_proj}
Let $S$ be a Pauli observable on $n$ qubits, and $\rho$ be a CSS state on $(n+m)$ qubits for any $m\ge 0$. Then the state $\sigma_{\pm}$ obtained by projectively measuring $S$ on the final $n$ qubits of $\rho$ and post-selecting on the $\pm 1$ outcome,
\begin{align}
\sigma_\pm \coloneqq \frac{[\mathcal{I}_m \otimes {\cal P}_\pm(S)](\rho)}{p_\pm}, \quad p_\pm \coloneqq \mathrm{tr}([\mathbbm{1}_m \otimes P_\pm(S)]\rho) \label{eq:outout_Pauli_proj_conditional}
\end{align}
is a CSS state if and only if $S$ is $X$-type or $Z$-type. Moreover, if $S$ is neither $X$-type nor $Z$-type, then even in the case of $m = 0$, there exists a CSS state $\rho$ for which $\sigma_\pm$ are not CSS.
\end{lemma}
\begin{proof}
We first establish that if $S = X({\bm{a}})$ and $\rho$ is a pure CSS state $\ketbra{\psi}$, then $\sigma_\pm$ are CSS. Let the stabilizer group of $\ket{\psi}$ be $\langle (-1)^{b_1} X({\bm{a}}_1),\dots, (-1)^{b_r} X({\bm{a}}_r), (-1)^{b_{r+1}} Z({\bm{a}}_{r+1}), \dots, (-1)^{b_{n+m}} Z({\bm{a}}_{n+m})\rangle$, where each ${\bm{a}}_i$ is a non-zero $(n+m)$-bit string and each $b_i$ is a binary digit. Then the (unnormalised) state obtained by projectively measuring $S$ on the final $n$ qubits of $\ket{\psi}$ and post-selecting on the $\pm 1$ outcome is
\begin{align}
[\mathbbm{1}_m \otimes P_\pm(X({\bm{a}}))] \ket{\psi} = \left[\frac{1}{2}(\mathbbm{1}_{n+m} \pm \mathbbm{1}_m \otimes X({\bm{a}}))\right] \ket{\psi} = P_\pm(X(\bm{0}_m \oplus {\bm{a}})) \ket{\psi} \coloneqq P_\pm(X({\bm{a}}')) \ket{\psi}
\end{align}
where $\bm{0}_m$ is the $m$-dimensional zero vector and we have defined ${\bm{a}}' \coloneqq \bm{0}_m \oplus {\bm{a}}$. There are now two possibilities:
\begin{enumerate}
\item $X({\bm{a}}')$ commutes with $Z({\bm{a}}_i)$ for all $i$ in the range $r+1 \le i \le n+m$. Therefore, either $X({\bm{a}}')$ or $-X({\bm{a}}')$ stabilizes $\ket{\psi}$, so either $P_+(X({\bm{a}}'))\ket{\psi} = \ket{\psi}$ and $P_-(X({\bm{a}}'))\ket{\psi} = 0$, or $P_-(X({\bm{a}}'))\ket{\psi} = \ket{\psi}$ and $P_+(X({\bm{a}}'))\ket{\psi} = 0$.
\item Without loss of generality, $X({\bm{a}}')$ does \emph{not} commute with $Z({\bm{a}}_i)$ for $i$ in the range ${r+1 \le i \le r+c}$ for some $c \ge 1$. We then have that $P_\pm(X({\bm{a}}')) \ket{\psi} = \frac{1}{\sqrt{2}}\ket{\phi_\pm}$, where $\ket{\phi_\pm}$ are respectively stabilized by $\langle (-1)^{b_1} X({\bm{a}}_1),\dots, (-1)^{b_r} X({\bm{a}}_r), \pm X({\bm{a}}'), (-1)^{(b_{r+1} \oplus_2 b_{r+2})} {Z({\bm{a}}_{r+1} \oplus_2 {\bm{a}}_{r+2})}, \dots, (-1)^{(b_{r+1} \oplus_2 b_{r+c})} {Z({\bm{a}}_{r+1} \oplus_2 {\bm{a}}_{r+c})},\\ (-1)^{b_{r+c+1}} Z({\bm{a}}_{r+c+1}), \dots, (-1)^{b_{n+m}} Z({\bm{a}}_{n+m}) \rangle$, where $\oplus_2$ denotes addition modulo 2, from which we see that $\ket{\phi_\pm}$ are both CSS states.
\end{enumerate}
Summarising these possibilities, we conclude that, given any pure CSS state $\ket{\psi}$ on $(n+m)$ qubits, the state obtained from projectively measuring $X({\bm{a}})$ on the last $n$ qubits and post-selecting on the $\pm 1$ outcome,
\begin{align}
\ket{\varphi_\pm} \coloneqq \frac{1}{\sqrt{p_\pm}} [P_\pm(X({\bm{a}}')) \ket{\psi}] = \frac{1}{\sqrt{p_\pm}} [\mathbbm{1}_m \otimes P_\pm(X({\bm{a}}))] \ket{\psi},\ p_\pm \coloneqq \mathrm{tr}[\mathbbm{1}_m \otimes P_\pm(X({\bm{a}})) \ketbra{\psi}]
\end{align}
is always CSS. Since the projective measurement of $-X({\bm{a}})$ requires the same projectors as that of $X({\bm{a}})$, the above argument carries over directly for $S = -X({\bm{a}})$ and $\rho = \ketbra{\psi}$. By reversing the roles of $X$ and $Z$, the argument also carries over to $S = \pm Z({\bm{a}})$ and $\rho = \ketbra{\psi}$. Therefore, by decomposing an \emph{arbitrary} CSS state $\rho$ on $n+m$ qubits into a statistical mixture of pure CSS states, we can show that the state $\sigma_{\pm}$ obtained from projectively measuring a Pauli observable $S$ on the final $n$ qubits and post-selecting on the $\pm 1$ outcome is CSS if $S = \pm X({\bm{a}})$ or $S = \pm Z({\bm{a}})$. We next prove that, if $S \neq \pm X({\bm{a}})$ and $S \neq \pm Z({\bm{a}})$ for all $n$-bit strings ${\bm{a}}$, then it is always possible to find a pure CSS state $\rho$ on $n$ qubits \emph{alone} such that $\sigma_\pm$ are \emph{not} CSS. This argument will only be made for positive Pauli observables, since the projective measurements for $\pm S$ involve the same projectors. Every positive $n$-qubit Pauli observable $S$ can be represented as
\begin{align}
S = \bigotimes_{i=1}^n T_i, \quad T_i = \mathbbm{1}, X, Y, Z.
\end{align}
When $S \neq X({\bm{a}})$ and $S \neq Z({\bm{a}})$ for all possible ${\bm{a}}$, there are three possibilities:
\begin{enumerate}
\item $\forall i: T_i \neq Y$. In that case, $T_i = \mathbbm{1}, X, Z$, and there must exist $j$ and $k$ for which $T_j = X$ while $T_k = Z$. Let us define the sets ${\cal X} \coloneqq \{i \neq j | T_i = X\}$, ${\cal Z} \coloneqq \{i \neq k | T_i = Z\}$ and ${\cal N} \coloneqq \{i | T_i = \mathbbm{1}\}$. Consider the CSS state $\ket{\psi}$ defined by the stabilizer group
\begin{align}
{\text{STAB}}(\ket{\psi}) \coloneqq \langle \{Z_i | i \in {\cal N}\}, \{Z_i | i \in {\cal Z}\}, \{X_i | i \in {\cal X}\}, Z_j, Z_j X_k \rangle.
\end{align}
By construction, $S$ commutes with every generator of ${\text{STAB}}(\ket{\psi})$ \emph{except} $Z_j$. Therefore, ${P_\pm(S) \ket{\psi} = \frac{1}{\sqrt{2}} \ket{\phi_\pm}}$, where $\ket{\phi_\pm}$ are stabilizer states defined by stabilizer groups
\begin{align}
{\text{STAB}}(\ket{\phi_\pm}) &= \langle \{Z_i | i \in {\cal N}\}, \{Z_i | i \in {\cal Z}\}, \{X_i | i \in {\cal X}\}, \pm S, Z_j X_k \rangle \\
&= \langle \{Z_i | i \in {\cal N}\}, \{Z_i | i \in {\cal Z}\}, \{X_i | i \in {\cal X}\}, \pm X_j Z_k, Z_j X_k \rangle,
\end{align}
where we obtained the second equality by multiplying $\pm S$ by every other generator except for $Z_j X_k$. This shows explicitly that $\ket{\phi_\pm}$ are \emph{not} CSS states.
\item There is an odd number $w$ of values for $i$ at which $T_i = Y$. Let us define the sets ${\cal X} \coloneqq \{i | T_i = X\}, {\cal Y} \coloneqq \{i | T_i = Y\}, {\cal Z} \coloneqq \{i | T_i = Z\}$ and ${\cal N} \coloneqq \{i | T_i = \mathbbm{1}\}$. Let $j$ be a member of ${\cal Y}$.
Where $ {\cal Y} $ contains more than one member, let $k$ be another member of $ {\cal Y} $ besides $j$. Consider the CSS state $\ket{\psi}$ defined by the stabilizer group \begin{align} {\text {STAB}} (\ket{\psi}) = \left\langle \{Z_i | i \in {\cal N} \}, \{Z_i | i \in {\cal Z} \}, \{X_i | i \in {\cal X} \}, X_\star, \{Z_i Z_k | i \in {\cal Y} , i \neq j,k\}, Z_j \right\rangle. \end{align} where we have defined $X_\star \coloneqq \left((-1)^{\left(\frac{w-1}{2}\right)}\prod_{i \in {\cal Y} , i \neq j} X_i\right)$. By construction, $S$ commutes with every generator of $ {\text {STAB}} (\ket{\psi})$ \emph{except} $Z_j$. Therefore, $P_\pm(S)\ket{\psi} = \frac{1}{\sqrt{2}} \ket{\phi_\pm}$, where $\ket{\phi_\pm}$ are stabilizer states defined by the stabilizer groups \begin{align} {\text {STAB}} (\ket{\phi_\pm}) &= \langle \{Z_i | i \in {\cal N} \}, \{Z_i | i \in {\cal Z} \}, \{X_i | i \in {\cal X} \}, X_\star, \{Z_i Z_k | i \in {\cal Y} , i \neq j,k\}, \pm S \rangle\\ &= \langle \{Z_i | i \in {\cal N} \}, \{Z_i | i \in {\cal Z} \}, \{X_i | i \in {\cal X} \}, X_\star, \{Z_i Z_k | i \in {\cal Y} , i \neq j,k\}, \pm Y_j \rangle, \end{align} where we obtained the second equality by multiplying $\pm S$ by every other generator. This explicitly shows that $\ket{\phi_\pm}$ are \emph{not} CSS states. \item There is an \emph{even} number $w$ of values for $i$ at which $T_i = Y$. Let us define the sets $ {\cal X} \coloneqq \{i |T_i = X\}, {\cal Y} \coloneqq \{i|T_i = Y\}, {\cal Z} \coloneqq \{i| T_i = Z\}$ and $ {\cal N} \coloneqq \{i | T_i = \mathbbm{1} \}$. Let $j$ and $k$ be two distinct members of $ {\cal Y} $. Where $ {\cal Y} $ contains more than two members, let $l$ be another member of $ {\cal Y} $ besides $j$ and $k$. Consider the CSS state $\ket{\psi}$ defined by the stabilizer group \begin{align} {\text {STAB}} (\ket{\psi}) = \left\langle \{Z_i | i \in {\cal N} \}, \{Z_i | i \in {\cal Z} \}, \{X_i | i \in {\cal X} \}, X_\star, \{Z_i Z_l | i \in {\cal Y} , i \neq j,k,l\}, Z_j X_k, X_k \right\rangle. \end{align} where we have defined $X_\star \coloneqq \left((-1)^{\left(\frac{w-2}{2}\right)}\prod_{i \in {\cal Y} , i \neq j,k} X_i\right)$. By construction, $S$ commutes with every generator of $ {\text {STAB}} (\ket{\psi})$ \emph{except} for $X_k$, which means ${P_\pm(S)\ket{\psi} = \frac{1}{\sqrt{2}} \ket{\phi_\pm}}$, where $\ket{\phi_\pm}$ are defined by the stabilizer groups \begin{align} {\text {STAB}} (\ket{\phi_\pm}) &= \langle \{Z_i | i \in {\cal N} \}, \{Z_i | i \in {\cal Z} \}, \{X_i | i \in {\cal X} \}, X_\star, \{Z_i Z_l | i \in {\cal Y} , i \neq j,k,l\}, Z_j X_k, \pm S \rangle\\ &= \langle \{Z_i | i \in {\cal N} \}, \{Z_i | i \in {\cal Z} \}, \{X_i | i \in {\cal X} \}, X_\star, \{Z_i Z_l | i \in {\cal Y} , i \neq j,k,l\}, Z_j X_k, \pm X_j Z_k\rangle \end{align} where the second equality was obtained by multiplying $\pm S$ by every other generator. This explicitly shows that $\ket{\phi_\pm}$ are \emph{not} CSS states. \end{enumerate} \end{proof} We therefore group Pauli observables of the forms $\pm X( {\bm{a}} )$ or $\pm Z( {\bm{a}} )$ together as \emph{CSS observables}. Furthermore, we highlight the fact that Lemma \ref{lemma:CSS_proj} shows CSS observables to be the only Pauli observables whose projective measurement can be carried out in a \emph{completely} CSS-preserving way ($m > 0$). 
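A concrete instance of Lemma \ref{lemma:CSS_proj} can also be checked numerically, using the fact (used throughout this work) that CSS states are non-negatively represented. The following Python sketch is illustrative only; it assumes the same single-qubit phase point convention as the earlier sketch, with $A_{(0,0)} = \frac{1}{2}(\mathbbm{1}+X+Z+iY)$. It measures the CSS observable $X_1 X_2$ and the non-CSS Pauli $X_1 Z_2$ on the CSS state $\ket{0}\otimes\ket{+}$: the first leaves a non-negatively represented state, while the second produces negative Wigner entries, so the post-measurement state cannot be CSS.
\begin{verbatim}
import numpy as np
from itertools import product

# Phase point operators (same assumed convention as the earlier sketch).
I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)
iY = np.array([[0, 1], [-1, 0]], dtype=complex)
D  = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): iY}
A1 = {u: 0.5 * sum((-1) ** ((u[0]*a[1] + u[1]*a[0]) % 2) * D[a] for a in D)
      for u in product((0, 1), repeat=2)}
A2 = {u + v: np.kron(A1[u], A1[v]) for u in A1 for v in A1}  # factorization property

def wigner(rho):                      # two-qubit W_rho(u) = (1/4) tr[A_u^dag rho]
    return np.array([np.trace(A.conj().T @ rho) / 4 for A in A2.values()])

ket0, ketp = np.array([1, 0]), np.array([1, 1]) / np.sqrt(2)
psi = np.kron(ket0, ketp)             # CSS state |0>|+>, stabilized by Z_1 and X_2

def post_measure(S):                  # project onto the +1 eigenspace of S, renormalize
    phi = 0.5 * (np.eye(4) + S) @ psi
    phi = phi / np.linalg.norm(phi)
    return np.outer(phi, phi.conj())

W_css = wigner(post_measure(np.kron(X, X)))   # CSS observable X_1 X_2
W_bad = wigner(post_measure(np.kron(X, Z)))   # non-CSS Pauli X_1 Z_2
print(np.min(W_css.real) >= -1e-12)           # True: still non-negatively represented
print(np.min(W_bad.real))                     # approx -0.125: negative, hence not CSS
\end{verbatim}
This example matches case 1 in the proof above with $j=1$ and $k=2$, for which the input state is defined by the stabilizer group $\langle Z_1, Z_1 X_2\rangle$ (equivalently $\langle Z_1, X_2\rangle$), i.e. $\ket{0}\otimes\ket{+}$.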
\subsection{CSS Operations}
\label{appx:CCSSP_discussion}
Essential to the application of majorization to magic state protocols is identifying a sufficiently large subset of stabilizer operations that can be stochastically represented. From the main text, we have seen that all \emph{completely CSS-preserving channels} are stochastically represented. We then show in Lemma \ref{lemma:CCSSP_powers} that the completely CSS-preserving channels overlap with a substantial subset of the stabilizer operations, which we restate and prove here:
\CCSSPpowers*
\begin{proof}
Let $\{{\cal E}_i\}$ be a set of completely CSS-preserving operations from $n$ to $m$ qubits, and let $\rho$ be a CSS state on $(a + n)$ qubits. Given any probability distribution $\{p_i\}$, we then have that
\begin{align}
\sigma \coloneqq \left(\sum_i p_i \mathcal{I}_a \otimes {\cal E}_i\right)\rho = \sum_i p_i [(\mathcal{I}_a \otimes {\cal E}_i) \rho] \in {\cal D}_{css},
\end{align}
since $(\mathcal{I}_a \otimes {\cal E}_i)\rho$ is a CSS state on $(a+m)$ qubits due to ${\cal E}_i$ being completely CSS-preserving. Therefore, any statistical mixture of completely CSS-preserving channels (between the same input and output systems) is also completely CSS-preserving. Furthermore, let ${\cal E}$ be a completely CSS-preserving channel from $n$ to $m$ qubits, and let ${\cal F}$ be a completely CSS-preserving channel from $m$ to $l$ qubits. Letting $\rho$ once again be a CSS state on $(a+n)$ qubits, we have that $(\mathcal{I}_a \otimes {\cal F}) \circ (\mathcal{I}_a \otimes {\cal E})\rho$ is another CSS state, since $(\mathcal{I}_a \otimes {\cal E})\rho$ must be a CSS state due to ${\cal E}$ being completely CSS-preserving, which $(\mathcal{I}_a \otimes {\cal F})$ would map onto another CSS state since ${\cal F}$ is also completely CSS-preserving. Therefore, any sequential composition of completely CSS-preserving channels (where the input of one channel matches the output of the other) is also completely CSS-preserving. All that remains is to prove that the four stabilizer operations listed in the theorem are completely CSS-preserving.
\begin{enumerate}
\item Any tensor product of two CSS states is also a CSS state.
\item These gates were shown to be completely CSS-preserving in Appendix \ref{appx:CCSSP_vs_CSSP}.
\item Let $\{{\cal P}_\pm(S) \coloneqq P_\pm(S)(\cdot)P_\pm(S)\}$ be the projective channels carrying out a measurement of the CSS observable $S$, and $\rho$ be a CSS state on $(n+m)$ qubits. From Lemma \ref{lemma:CSS_proj}, we see that $(\mathcal{I}_m \otimes {\cal P}_\pm(S))\rho = p_\pm \sigma_\pm$, where $p_\pm \coloneqq \mathrm{tr}[(\mathbbm{1}_m \otimes P_\pm(S))\rho]$ and $\sigma_\pm$ are CSS states. Therefore, for any completely CSS-preserving channels ${\cal E}_\pm$ conditioned on the measurement outcome, we have that $\sigma' \coloneqq \left[\sum_{\pm} (\mathcal{I}_m \otimes {\cal E}_\pm) \circ (\mathcal{I}_m \otimes {\cal P}_\pm(S))\right](\rho) = \sum_\pm p_\pm [\mathcal{I}_m \otimes {\cal E}_\pm](\sigma_\pm)$ is also CSS.
\item Consider tracing out $m$ qubits from $n$. Since we can freely relabel subsystems, we can, without loss of generality, only consider tracing out the \emph{last} $m$ qubits of $n$.
Since tracing out is unaffected by first performing a computational basis measurement on the last $m$ qubits, letting $\ket{\psi}$ be a pure CSS state on $(a+n)$ qubits, we have that
\begin{align}
\mathcal{I}_{a} \otimes \mathrm{tr}_{n-m,\dots,n}(\ketbra{\psi}) = \sum_{{\bm{k}} \in \{0,1\}^m} \mathrm{tr}_{a+n-m,\dots,a+n}\left[\mathbbm{1}_{a+n-m} \otimes \ketbra{{\bm{k}}} \ketbra{\psi} \mathbbm{1}_{a+n-m} \otimes \ketbra{{\bm{k}}}\right].
\end{align}
We then observe that
\begin{align}
\mathbbm{1}_{a+n-m} \otimes \ketbra{{\bm{k}}} = {\cal P}_+((-1)^{k_1} Z_{a+n-m+1}) \circ \dots \circ {\cal P}_+((-1)^{k_m} Z_{a+n}),
\end{align}
and so by Lemma \ref{lemma:CSS_proj}, $(\mathbbm{1}_{a+n-m} \otimes \ketbra{{\bm{k}}})\ket{\psi}$ is a (subnormalised) pure CSS state of the form $\sqrt{p_{\bm{k}}} \ket{\phi_{\bm{k}}} \otimes \ket{{\bm{k}}}$, where $\ket{\phi_{\bm{k}}}$ must be a CSS state on the first $(a+n-m)$ qubits in order to keep the complete state CSS, and $p_{{\bm{k}}}$ is the probability of getting the $\ket{{\bm{k}}}$ outcome in the computational basis measurement. Therefore,
\begin{align}
\mathcal{I}_{a} \otimes \mathrm{tr}_{n-m,\dots,n}(\ketbra{\psi}) = \sum_{{\bm{k}} \in \{0,1\}^m} \mathrm{tr}_{a+n-m, \dots, a+n}[p_{\bm{k}} \ketbra{\phi_{\bm{k}}} \otimes \ketbra{{\bm{k}}}] = \sum_{{\bm{k}} \in \{0,1\}^m} p_{\bm{k}} \ketbra{\phi_{\bm{k}}},
\end{align}
which is a CSS state on $(a+n-m)$ qubits. Therefore, by decomposing an arbitrary CSS state $\rho$ on $(a+n)$ qubits as a statistical mixture of pure CSS states, we can show that $\mathcal{I}_{a} \otimes \mathrm{tr}_{n-m,\dots,n}[\rho]$ must be CSS, which implies tracing out subsystems is completely CSS-preserving.
\end{enumerate}
\end{proof}
Consider any quantum circuit ${\cal E}$ formed from preparing CSS states, performing gates from the larger group of \emph{CSS-preserving unitaries}, discarding qubits and projectively measuring CSS observables. Let ${\cal U}_H(n) \coloneqq H(n) (\cdot) H(n)$. We now note that
\begin{itemize}
\item Given any $n$-bit string ${\bm{a}}$, because $H(n) X({\bm{a}}) H(n) = Z({\bm{a}})$, and every CSS observable $S$ on $n$ qubits is of the form $\pm X({\bm{a}})$ or $\pm Z({\bm{a}})$, we have the conjugation relation $P_\pm(S) H(n) = H(n) P_\pm(S')$, where $S' \coloneqq H(n) S H(n)$ is another CSS observable. Therefore, letting ${\cal E}$ be the projective measurement of a CSS observable $S$, we conclude that ${\cal E} \circ {\cal U}_H(n) = {\cal U}_H(n) \circ {\cal E}'$, where ${\cal E}'$ is the projective measurement of the CSS observable $S'$.
\item Given any $n$-qubit state $\rho$ and any CSS state $\sigma$ on $m$ qubits, we have that $H(n+m)(\rho \otimes \sigma)H(n+m) = [H(n) \rho H(n)] \otimes \sigma'$, where $\sigma' \coloneqq H(m) \sigma H(m)$ is also a CSS state since $H(m)$ is CSS-preserving on $m$ qubits. Therefore, letting ${\cal E}$ be the channel introducing a CSS state $\sigma$, we conclude that ${\cal U}_H(n+m) \circ {\cal E} = {\cal E}' \circ {\cal U}_H(n)$, where ${\cal E}'$ is the channel introducing the CSS state $\sigma'$.
\item Let ${\cal R}$ be any subset of $m$ qubits out of $n$. We then have $\mathrm{tr}_{\cal R} \circ {\cal U}_H(n) = {\cal U}_H(n-m) \circ \mathrm{tr}_{\cal R}$.
\item As we have already seen in Appendix \ref{appx:CCSSP_vs_CSSP}, given any \emph{completely} CSS-preserving $n$-qubit unitary channel ${\cal U}$, we have that ${\cal U}_H(n) \circ {\cal U} = {\cal U}' \circ {\cal U}_H(n)$, where ${\cal U}'$ is another \emph{completely} CSS-preserving unitary channel.
\end{itemize}
We also saw in Appendix \ref{appx:CCSSP_vs_CSSP} that any CSS-preserving unitary channel can be written as $[{\cal U}_H(n)]^b \circ {\cal U}_+$, where ${\cal U}_+$ is a \emph{completely} CSS-preserving unitary channel, and $b$ is a binary digit. After decomposing every CSS-preserving unitary in ${\cal E}$ into this form, we can conjugate any collective Hadamard gate to the end of the circuit as described above. Therefore, ${\cal E}$ is operationally equivalent to a CSS protocol ${\cal F}$, followed by a collective Hadamard gate conditioned upon some of the measurement outcomes obtained during the protocol. We conclude that ${\cal E}$ can be converted reversibly into a CSS protocol using Clifford post-processing, which implies CSS-preserving and completely CSS-preserving unitaries generate equally powerful distillation protocols.
\subsection{CSS code projections}\label{appx:CSS_code_proj}
We first demonstrate that any CSS code can be decoded using a completely CSS-preserving unitary.
\begin{lemma} \label{lemma:CNOT_gaussian_elim}
Let ${\cal S} \coloneqq \langle (-1)^{b_i} S_i \rangle_{i=1,\dots,n-k}$ be the stabilizer group defining an $[[n,k]]$ CSS code, where
\begin{align}
S_i = \begin{cases} X({\bm{u}}_i) & \text{ for } 1 \le i \le r \\ Z({\bm{v}}_i) & \text{ for } r+1 \le i \le n-k \end{cases}
\end{align}
in which each ${\bm{u}}_i, {\bm{v}}_i$ is a non-zero $n$-dimensional binary vector and each $b_i$ is a binary digit. Then there exists a completely CSS-preserving unitary $U$ such that
\begin{align}
U [(-1)^{b_i} S_i] U^\dagger = \begin{cases} X_i & \text{ for } 1 \le i \le r \\ Z_i & \text{ otherwise.} \end{cases}
\end{align}
\end{lemma}
\begin{proof}
The proof proceeds by construction. Let us first consider the $X$-type generators of ${\cal S}$ \emph{without their signs}, i.e. ${\cal X} \coloneqq \{X({\bm{u}}_i)\}_{i=1,\dots,r}$. We will prove that there exists a sequence of $CNOT$ operations that transforms $X({\bm{u}}_i)$ to $X_i$ for all $1 \le i \le r$. Let $G$ be the group formed by the set of positive $X$-type $n$-qubit CSS observables under matrix multiplication, i.e. ${G \coloneqq (\{X({\bm{a}}) | {\bm{a}} \in \{0,1\}^n\},\cdot)}$, and let $G'$ be the group formed by the set of $n$-dimensional binary strings under binary addition of corresponding entries, i.e. $G' \coloneqq (\{0,1\}^n,\oplus_2)$. Then $G \cong G'$ under the intuitive isomorphism $X({\bm{a}}) \leftrightarrow {\bm{a}}$. We can represent an $m$-tuple ${\cal A}$ of $m$ positive $X$-type $n$-qubit CSS observables, ${\cal A} \coloneqq (X({\bm{a}}_1),\dots,X({\bm{a}}_m))$ where ${\bm{a}}_i \in \{0,1\}^n$ for $1 \le i \le m$, as the columns of an $n \times m$ matrix $M_{{\cal A}}$, i.e.
\begin{align}
M_{\cal A} \coloneqq \begin{bmatrix} \vert & \dots & \vert \\ {\bm{a}}_1 & \dots & {\bm{a}}_m \\ \vert & \dots & \vert \end{bmatrix}.
\end{align}
Because no element of ${\cal X}$ can be formed by multiplying other elements together, their images in $G'$, $\{{\bm{u}}_i\}_{i=1,\dots,r}$, form a set of $r$ \emph{linearly independent elements} of $V_n$, the $n$-dimensional vector space over $F_2$. Therefore, $M_{\cal X}$ has rank $r$.
We now demonstrate that the following two operations on $M_{\cal A}$ can be accomplished by performing sequences of $CNOT$ operations on all the members of ${\cal A}$ simultaneously:
\begin{enumerate}
\item \textit{Swapping rows $j$ and $k$.} The unitary operation ${\cal U}_{SWAP}(\cdot) \coloneqq U_{SWAP}(\cdot)U_{SWAP}^\dagger$, where ${U_{SWAP} \coloneqq CNOT_{j,k} CNOT_{k,j} CNOT_{j,k}}$, swaps qubits $j$ and $k$. Therefore, the matrix $M_{{\cal A}'}$ representing the $m$-tuple ${\cal A}' \coloneqq ({\cal U}_{SWAP}(X({\bm{a}}_1)),\dots,{\cal U}_{SWAP}(X({\bm{a}}_m)))$ can be obtained from $M_{\cal A}$ by swapping rows $j$ and $k$.
\item \textit{Adding row $j$ to row $k$.} The action of $CNOT_{j,k}$ on $X_i$ is
\begin{align}
CNOT_{j,k} X_i CNOT_{j,k} = \begin{cases} X_j X_k & \text{ for } i = j \\ X_i & \text{ otherwise.} \end{cases} \label{eq:CNOT_workings}
\end{align}
Therefore,
\begin{align}
CNOT_{j,k} X({\bm{a}}) CNOT_{j,k} = X({\bm{a}} \oplus_2 a_j {\bm{e}}_k) \coloneqq X({\bm{a}}'),
\end{align}
where ${\bm{e}}_k$ is an $n$-bit string with 1 in the $k$th entry and 0 everywhere else. In words, ${\bm{a}}'$ is formed from adding the $j$th entry of ${\bm{a}}$ to its $k$th entry. Therefore, the matrix $M_{{\cal A}'}$ representing the $m$-tuple ${{\cal A}' \coloneqq (CNOT_{j,k} X({\bm{a}}_1) CNOT_{j,k},\dots,CNOT_{j,k} X({\bm{a}}_m) CNOT_{j,k})}$ can be obtained from $M_{\cal A}$ by entrywise binary addition of row $j$ to row $k$.
\end{enumerate}
We further demonstrate that Gauss-Jordan elimination reduces to a sequence of such row swaps and additions on $V_n$. For a vector space over a general field $F$, Gauss-Jordan elimination is a sequence of three moves:
\begin{enumerate}
\item Swap the positions of two rows.
\item Add to one row a non-zero scalar multiple of another.
\item Multiply any row by a non-zero scalar.
\end{enumerate}
Since the only scalars available in $F_2$ are 0 and 1, the third move has no effect when $F = F_2$ and can be neglected, while the second move reduces to the entrywise binary addition of rows. We now note that we can perform Gauss-Jordan elimination on $M_{\cal X}$ and convert it into its unique reduced row echelon form,
\begin{align}
D_{\cal X} \coloneqq \begin{bmatrix} I_{r,r} \\ 0_{n-r,r} \end{bmatrix},
\end{align}
where $I_{r,r}$ is an $r \times r$ identity matrix while $0_{n-r,r}$ is an $(n-r) \times r$ null matrix. The $CNOT$ sequence corresponding to this Gauss-Jordan elimination, which we can denote by the unitary operation ${\cal U}_{\cal X}(\cdot) \coloneqq U_{\cal X}(\cdot)U^\dagger_{\cal X}$, accomplishes ${\cal U}_{\cal X}(X({\bm{u}}_i)) = X_i$. We next consider the $Z$-type generators of ${\cal S}$ \emph{without their signs}, i.e. ${\cal Z} \coloneqq \{Z({\bm{v}}_{r+1}), \dots, Z({\bm{v}}_{n-k})\}$. The action of $CNOT_{j,k}$ on $Z_i$ is
\begin{align}
CNOT_{j,k} Z_i CNOT_{j,k} = \begin{cases} Z_j Z_k & \text{ for } i = k \\ Z_i & \text{ otherwise,} \end{cases} \label{eq:CNOT_workings_Z}
\end{align}
so ${\cal U}_{\cal X}$ only transforms positive $Z$-type CSS observables into other positive $Z$-type CSS observables, i.e. we can find $n$-bit binary strings $\{{\bm{v}}'_i\}_{i=r+1,\dots,n-k}$ such that ${\cal U}_{\cal X}(Z({\bm{v}}_i)) = Z({\bm{v}}'_i)$ for all $i$ in the range $r+1 \le i \le n-k$. However, since the $X$-type generators of ${\cal S}$ commute with the $Z$-type generators, $Z({\bm{v}}'_i)$ must commute with $X_1$ through $X_r$ for all $i$ in the range $r+1 \le i \le n-k$.
Therefore, $Z( {\bm{v}} '_i)$ acts non-trivially on qubits $r+1$ through $n$ \emph{only}. Everything that we have done for the $X$-type generators can then be repeated for the $Z$-type generators. The only thing that needs to be checked is that, when we represent the $m$-tuple $ {\cal B} \coloneqq (Z( {\bm{a}} _1),\dots,Z( {\bm{a}} _m))$ of $m$ positive $Z$-type $n$-qubit CSS observables as the columns of a matrix $M_ {\cal B} $, row addition on $M_ {\cal B} $ can be performed by executing a $CNOT$ operation on all elements of $ {\cal B} $ simultaneously, just as in the case of $X$-type CSS observables. This is confirmed via \eqref{eq:CNOT_workings_Z}, which implies \begin{align} CNOT_{j,k} Z( {\bm{a}} ) CNOT_{j,k} = Z( {\bm{a}} \oplus_2 a_k {\bm{e}} _j) \coloneqq Z( {\bm{a}} ') \end{align} where we see that $ {\bm{a}} '$ is formed by adding entry $k$ in $ {\bm{a}} $ to entry $j$. Therefore, the matrix $M_{ {\cal B} '}$ representing the $m$-tuple $ {\cal B} ' \coloneqq (CNOT_{j,k}Z( {\bm{a}} _1)CNOT_{j,k},\dots, CNOT_{j,k}Z( {\bm{a}} _m)CNOT_{j,k})$ can be obtained from $M_ {\cal B} $ by adding row $k$ to row $j$. We conclude that there also exists a sequence of $CNOT$ operations ${\cal U}_ {\cal Z} \coloneqq U_ {\cal Z} (\cdot)U^\dagger_ {\cal Z} $ such that ${\cal U}_ {\cal Z} (Z( {\bm{v}} '_i)) = Z_i$ for all $i$ in the range $r+1 \le i \le n-k$. Since $Z( {\bm{v}} '_i)$ acts non-trivially on qubits $r+1$ through $n$ \emph{only} for all $i$ in the range $r+1 \le i \le n-k$, we can choose ${\cal U}_ {\cal Z} $ to act \emph{only} on qubits $r+1$ through $n$, and so ${\cal U}_Z(X_i) = X_i$ for all $i$ in the range $1 \le i \le r$. We now have \begin{align} ({\cal U}_ {\cal Z} \circ {\cal U}_ {\cal X} ) [(-1)^{b_i} S_i] = \begin{cases} (-1)^{b_i} X_i& \text{ for } 1 \le i \le r\\ (-1)^{b_i} Z_i& \text{ for } r+1 \le i \le n-k. \end{cases} \end{align} Defining ${\cal U}_C(\cdot) \coloneqq U_C(\cdot)U^\dagger_C$, where $U_C \coloneqq \left[\prod_{i=r+1}^{n-k} X_i^{b_i}\right] \left[ \prod_{i=1}^r Z_i^{b_i}\right]$, we see that \begin{align} ({\cal U}_C \circ {\cal U}_Z \circ {\cal U}_ {\cal X} ) [(-1)^{b_i} S_i] = \begin{cases} X_i& \text{ for } 1 \le i \le r\\ Z_i& \text{ for } r+1 \le i \le n-k. \end{cases} \end{align} Since $U_ {\cal X} , U_ {\cal Z} , U_C$ are completely CSS-preserving, as they are sequences of $CNOT$ gates or local qubit $X$ and $Z$ gates, we have constructed a completely CSS-preserving unitary $U \coloneqq U_C U_ {\cal Z} U_ {\cal X} $ that accomplishes the Lemma's claim. \end{proof} We next identify a collection of primitive channels that are stochastically represented and which can be arbitrarily combined to construct magic protocols. \begin{lemma}\label{lemma:projectors} Let $\{ {\cal P} _k\}$ be a set of projective channels on $n$ qubits defined as $ {\cal P} _k(X) \coloneqq p_k P_k X P_k$, where $p_k$ is a probability and $P_k$ is a product of commuting projectors onto the eigenspaces of CSS observables. Furthermore, let $\{ {\cal E} _k\}$ be a set of completely CSS-preserving channels from $n$ to $m$ qubits. Then $\sum_k {\cal E} _k \circ {\cal P} _k$ is stochastically represented whenever $\sum_k p_k P_k = \mathbbm{1} _n$. \end{lemma} \begin{proof} Let $\rho$ be a CSS state on $(n+a)$ qubits. 
By repeated applications of \lemref{lemma:CSS_proj} to each projection forming the product $P_k$, we obtain
\begin{align}
(\mathcal{I}_a \otimes {\cal P}_k)\rho = p_k \mathrm{tr}[(\mathbbm{1}_a \otimes P_k) \rho] \sigma_k,
\end{align}
where $\sigma_k$ is a CSS state on $(a+n)$ qubits. Therefore,
\begin{align}
\sigma \coloneqq \mathcal{I}_a \otimes \left(\sum_k {\cal E}_k \circ {\cal P}_k\right)\rho = \sum_k p_k \mathrm{tr}[(\mathbbm{1}_a \otimes P_k) \rho] (\mathcal{I}_a \otimes {\cal E}_k)(\sigma_k).
\end{align}
Since ${\cal E}_k$ is completely CSS-preserving, $(\mathcal{I}_a \otimes {\cal E}_k)(\sigma_k)$ is a CSS state. When $\sum_k p_k P_k = \mathbbm{1}_n$, we have that ${\sum_k p_k \mathrm{tr}[(\mathbbm{1}_a \otimes P_k) \rho] = 1}$, which means $\sigma$ is a CSS state. We conclude that, if $\sum_k p_k P_k = \mathbbm{1}_n$, then $\sum_k {\cal E}_k \circ {\cal P}_k$ is completely CSS-preserving and thus stochastically represented.
\end{proof}
We are now in a position to prove the following, restated from the main text:
\ntoonestoch*
\begin{proof}
Let ${\cal{C}}$ be an $[[n,k]]$ CSS code. The trace-preserving version of its code projection can be represented as
\begin{equation}
{\cal E}(\rho) \coloneqq \mathrm{tr}_{[1,n-k]}[{\cal U} \circ {\cal P}(\rho)] \otimes \ketbra{0} + \mathrm{tr}[\overline{{\cal P}}(\rho)] \sigma \otimes \ketbra{1},
\end{equation}
where ${\cal U} \coloneqq U (\cdot) U^\dagger$ and ${\cal P} \coloneqq P_{\bm{0}} (\cdot) P_{\bm{0}}$ for the decoding unitary $U$ and codespace projector $P_{\bm{0}}$ of ${\cal{C}}$, and $\overline{{\cal P}} \coloneqq \overline{P}_{\bm{0}}(\cdot)\overline{P}_{\bm{0}}$ for $\overline{P}_{\bm{0}} \coloneqq \mathbbm{1}_n - P_{\bm{0}}$. Let the codespace of ${\cal{C}}$ be stabilized by ${\cal S} \coloneqq \langle S_i \rangle_{i=1,\dots,n-k}$, where $\{S_i\}_{i=1,\dots,n-k}$ are a set of $(n-k)$ commuting and independent $n$-qubit CSS observables, of which the first $r \le n-k$ are $X$-type while the rest are $Z$-type. Let us define channels ${\cal E}_0, {\cal E}_1$ as
\begin{align}
{\cal E}_0(\cdot) &\coloneqq (\mathrm{tr}_{[1,n-k]} \circ {\cal U})(\cdot) \otimes \ketbra{0}, \text{ and} \\
{\cal E}_1(\cdot) &\coloneqq \sigma \otimes \ketbra{1} \mathrm{tr}(\cdot).
\end{align}
We can then express ${\cal E}$ as ${\cal E} = {\cal E}_0 \circ {\cal P} + {\cal E}_1 \circ \overline{{\cal P}}$. By Lemma \ref{lemma:CNOT_gaussian_elim}, ${\cal{C}}$ can be decoded by a unitary $U = \left[H^{\otimes r} \otimes \mathbbm{1}_{n-r}\right] V$ for some $V \in {\cal G}(n)$. Since the Hadamard gates in $U$ act only on qubits $1$ through $r \le n-k$, all of which are traced out, we have by the cyclic property of the partial trace that
\begin{align}
{\cal E}_0(\cdot) = (\mathrm{tr}_{[1,n-k]} \circ {\cal V})(\cdot) \otimes \ketbra{0},
\end{align}
where ${\cal V}(\cdot) \coloneqq V(\cdot) V^\dagger$. We thereby see from Lemma \ref{lemma:CCSSP_powers} that ${\cal E}_0$ and ${\cal E}_1$ are both completely CSS-preserving. Let ${\bm{s}}$ be an $(n-k)$-bit string denoting the outcome of the syndrome measurement for ${\cal{C}}$, where $s_i$ is the outcome of measuring the observable $S_i$. By definition, $P_{\bm{0}}$ projects onto the subspace corresponding to the zero syndrome outcome, and is therefore the product of commuting CSS projectors that successively project onto the $+1$ eigenspaces of each $S_i$, i.e.
\begin{align}
P_{\bm{0}} = \prod_{i=1}^{n-k} P_0(S_i), \quad P_0(S_i) \coloneqq \frac{1}{2}(\mathbbm{1}_n + S_i).
\end{align} Furthermore, we have that \begin{align} {\cal E} _1 \circ \overline{ {\cal P} }(X) = \sigma \otimes \ketbra{1} \mathrm{tr} (\overline{P}_{\bm{0}} X \overline{P}_{\bm{0}}) = \sigma \otimes \ketbra{1} \mathrm{tr} (\overline{P}_{\bm{0}} X) \label{eq:local_1} \end{align} Now $\overline{P}_{\bm{0}} = \sum_{ {\bm{s}} \neq \bm{0}} P_ {\bm{s}} $, where each $P_ {\bm{s}} $ is a product of commuting projectors onto the $(-1)^{s_i}$ eigenspace of each $S_i$, i.e. \begin{align} P_ {\bm{s}} = \prod_{i=1}^{n-k} P_{s_i}(S_i), \quad P_{s_i}(S_i) \coloneqq \frac{1}{2}( \mathbbm{1} _n + (-1)^{s_i} S_i). \end{align} Substituting into \eqref{eq:local_1}, and defining $ {\cal P} _{ {\bm{s}} } \coloneqq P_ {\bm{s}} (\cdot) P_ {\bm{s}} $, we obtain \begin{align} {\cal E} _1 \circ \overline{ {\cal P} }(X) &= \sigma \otimes \ketbra{1} \mathrm{tr} \left[\left(\sum_{ {\bm{s}} \neq \bm{0}} P_ {\bm{s}} \right) X\right]\\ &= \sum_{ {\bm{s}} \neq \bm{0}} \sigma \otimes \ketbra{1} \mathrm{tr} (P_ {\bm{s}} X P_ {\bm{s}} ) \\ &= \sum_{ {\bm{s}} \neq \bm{0}} {\cal E} _1 \circ {\cal P} _ {\bm{s}} . \end{align} We have thereby shown that $ {\cal E} = {\cal E} _0 \otimes {\cal P} _{\bm{0}} + \sum_{ {\bm{s}} \neq \bm{0}} {\cal E} _1 \otimes {\cal P} _{ {\bm{s}} }$, in which $P_{ {\bm{s}} }$ is a product of commuting projectors onto the eigenspaces of CSS observables such that $\sum_{ {\bm{s}} } P_ {\bm{s}} = \mathbbm{1} _n$, and $ {\cal E} _0$ and $ {\cal E} _1$ are both completely CSS-preserving channels. Therefore, by Theorem \ref{lemma:projectors}, we conclude that $ {\cal E} $ can be stochastically represented. \end{proof} \section{Bounds for generic completely CSS-preserving operations} \label{appx:general_bound} We now present the lemma that forms the basis of this work, which is a generalisation of Theorem~11 of \cite{koukoulekidis2022constraints}. We first define the set of positively represented and real represented quantum states, respectively, \begin{align} {\cal W} ^+ &\coloneqq \{\rho : W_\rho( {\bm{u}} ) >0 , W_\rho( {\bm{u}} )\in \mathbb{R} , \forall {\bm{u}} \in {\cal P} \}, \\ {\cal W} ^\mathbb{R} &\coloneqq \{\rho : W_\rho( {\bm{u}} )\in \mathbb{R} , \forall {\bm{u}} \in {\cal P} \}, \end{align} in any given representation $ {\cal W} $. \begin{lemma} \label{lemma:nick_restatement} Let $\rho$ and $\tau$ be qudit states on a $d$-dimensional Hilbert space $ {\cal H} _{d}$, such that $\rho \in {\cal W} ^{\mathbb{R}}$ and $\tau \in {\cal W} ^+$ in some Wigner representation $ {\cal W} $. Let $ {\cal E} : {\cal B} ( {\cal H} _{d}) \mapsto {\cal B} ( {\cal H} _{d'})$ be a stochastically represented channel in $ {\cal W} $. Then the $\alpha$-R\'{e}nyi divergence $D_\alpha (\cdot || \cdot)$ is well-defined and satisfies the following properties for all $\alpha\in {\cal A} $: \begin{enumerate} \item \label{property:D_nonnegative} $D_\alpha(W_\rho || W_\tau) \ge 0$. \item \label{property:D_zero} $D_\alpha(W_\rho || W_\tau)=0$ if and only if $\rho =\tau$. \item \label{property:D_multiplicative} $D_\alpha (W_{\rho^{\otimes n}} ||W_{\tau^{\otimes n}}) = n D_\alpha(W_\rho || W_\tau) $ for all $n\in \mathbb{N}$. \item \label{property:D_dataprocessing} $ \Delta D_\alpha \ge 0$ where $\Delta D_\alpha \coloneqq D_\alpha(W_\rho|| W_\tau) -D_\alpha(W_{ {\cal E} (\rho)}|| W_{ {\cal E} (\tau)})$ for all stochastically represented $ {\cal E} $ such that $ {\cal E} (\tau) \in {\cal W} ^+$. 
\end{enumerate}
\end{lemma}
\begin{proof}
In general, $W_\rho$ is a quasiprobability distribution, but for $\alpha \in {\cal A}$ we see that $W_\rho({\bm{u}})^\alpha \ge 0$. Therefore $D_\alpha(W_\rho || W_\tau)$ is always well-defined and real-valued. The proofs of Properties \ref{property:D_nonnegative}-\ref{property:D_dataprocessing} are then identical to those given for Theorem~11 of \cite{koukoulekidis2022constraints} (see supplementary note 5).
\end{proof}
Importantly for our purposes, this abstract but general result applies to input and output systems of any (even or odd) finite dimension. With this in hand, we can now give a proof of \thmref{thrm:general_qubit_bound}, which we restate for clarity:
\generalBound*
\begin{proof}
Since $\tau$ and $\tau'$ are in the interior of ${\cal D}_{css}$, they are positively represented. Moreover, every CSS distillation protocol is stochastically represented. Therefore, the results of \lemref{lemma:nick_restatement} apply, and in particular properties \ref{property:D_multiplicative} and \ref{property:D_dataprocessing} give
\begin{align}
n D_\alpha(W_\rho || W_\tau) - D_\alpha(W_{\rho'} || W_{\tau'}) \ge 0, \label{appeq:ineq}
\end{align}
for all $\alpha \in {\cal A}$. Since $\rho$ is magic whereas $\tau \in {\cal D}_{css}$ we have $\rho \neq \tau$, which implies $D_\alpha(W_\rho || W_\tau) > 0$ (properties \ref{property:D_nonnegative} and \ref{property:D_zero}). Therefore we can rearrange \eqref{appeq:ineq} to give the result as stated.
\end{proof}
\section{Upper and lower bounds on CSS code protocols}
\subsection{Structure of the necessary conditions}
\label{appx:structure_necessary_condition}
We recall from \secref{sec:structure_CSS} that when $\rho^{\otimes n}$ and $\left[\frac{\mathbbm{1}}{2}\right]^{\otimes n}$ are passed through a CSS code projection channel, the output states are $\rho_p$ and $\tau_{n,k}$, respectively, where
\begin{align}
\rho_p &\coloneqq p \rho' \otimes \ketbra{0} + (1-p) \sigma \otimes \ketbra{1}, \\
\tau_{n,k} &\coloneqq 2^{k-n} \frac{\mathbbm{1}_k}{2^k} \otimes \ketbra{0} + (1-2^{k-n}) \sigma \otimes \ketbra{1}.
\end{align}
For any given $p$ and rebit state $\rho$, we have the following continuous function of $n$ defined on the interval $n \in [k,\infty)$ for all $\alpha \in {\cal A}$,
\begin{align}
\Delta D_\alpha &\coloneqq D_\alpha(W_{\rho^{\otimes n}} || W_{\left[\frac{\mathbbm{1}}{2}\right]^{\otimes n}}) - D_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right) \\
&= n D_\alpha(W_\rho || W_{\frac{\mathbbm{1}}{2}}) - D_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right), \label{eq:Delta_alpha_n}
\end{align}
which gives the change in distance (as measured by the $\alpha$-R\'{e}nyi divergence) between the two quasiprobability distributions and their reference probability distributions. This allows us to express the necessary condition in \thmref{thrm:general_qubit_bound}, specialized to the transition $\rho^{\otimes n} \rightarrow p \rho'$ under an $n$-to-$k$ CSS code reduction, via
\begin{align}
\Delta D_\alpha \ge 0.
\end{align}
It will also be quite useful to introduce the following general mean $Q_\alpha(\cdot || \cdot)$ on the quasidistributions ${\bm{w}} \coloneqq (w_1, \dots, w_N)^T$, ${\bm{r}} \coloneqq (r_1, \dots, r_N)^T$, which we define via
\begin{align}
Q_\alpha({\bm{w}} || {\bm{r}}) \coloneqq 2^{(\alpha - 1) D_\alpha({\bm{w}} || {\bm{r}})} = \sum_{i=1}^N w_i^\alpha r_i^{1-\alpha}.
\end{align} We now have the following lemma, which will allow us to simplify the expression of our constraint functions $\Delta D_\alpha$. \begin{lemma} \label{lemma:general_mean} Consider the rebit quantum states $\rho_0, \rho_1 , \tau_0,\tau_1$, where $\tau_i$ for $i \in \{ 0,1\}$ lie in the interior of $ {\cal D} _{css}$. Moreover let $\psi_0,\psi_1$ be two perfectly distinguishable register states in $ {\cal D} _{css}$ such that $ \mathrm{tr} [\psi_0\psi_1] =0$. Then we have the following identity \begin{align} Q_\alpha( W_{p_0 \rho_0 \otimes \psi_0 +p_1 \rho_1 \otimes \psi_1} || W_{q_0 \tau_0 \otimes \psi_0 +q_1 \tau_1 \otimes \psi_1} ) = p_0^\alpha q_0^{1-\alpha} Q_\alpha(W_{\rho_0} || W_{\tau_0} ) + p_1^\alpha q_1^{1-\alpha} Q_\alpha(W_{\rho_1} || W_{\tau_1} ), \label{eq:Q_alpha_equality} \end{align} which in turn implies the following inequality \begin{align} Q_\alpha( W_{p_0 \rho_0 \otimes \psi_0 +p_1 \rho_1 \otimes \psi_1} || W_{q_0 \tau_0 \otimes \psi_0 +q_1 \tau_1 \otimes \psi_1} ) \ge p_0^\alpha q_0^{1-\alpha} Q_\alpha(W_{\rho_0} || W_{\tau_0} ) . \label{eq:Q_alpha_inequality} \end{align} \end{lemma} \begin{proof} By assumption the register states $\psi_0, \psi_1$ have zero overlap: \begin{equation} \label{eq:overlappp} \mathrm{tr} [\psi_0 \psi_1] = 2 \sum_{ {\bm{u}} } W_{\psi_0}( {\bm{u}} )W_{\psi_1}( {\bm{u}} ) =0. \end{equation} Now since $\psi_i \in {\cal D} _{css}$ we must have $W_{\psi_i}( {\bm{u}} ) \ge 0$ for all $ {\bm{u}} \in {\cal P} $ and for each $i\in \{0,1\}$. We can thus conclude from \eqref{eq:overlappp} that \begin{align} V_0 &\coloneqq \supp(W_{\psi_0}) \subseteq \ker(W_{\psi_1}) ; \\ V_1 &\coloneqq \supp(W_{\psi_1}) \subseteq \ker(W_{\psi_0}). \end{align} With this in hand, we can explicitly evaluate: \begin{align} &Q_\alpha( W_{p_0 \rho_0 \otimes \psi_0 +p_1 \rho_1 \otimes \psi_1} || W_{q_0 \tau_0 \otimes \psi_0 +q_1 \tau_1 \otimes \psi_1} ) = \\ &\sum_{ {\bm{u}} \in {\cal P} } \left[\sum_{ {\bm{v}} \in V_0} \left( p_0 W_{\rho_0}( {\bm{u}} ) W_{\psi_0}( {\bm{v}} ) \right)^\alpha \left( q_0 W_{\tau_0}( {\bm{u}} ) W_{\psi_0}( {\bm{v}} ) \right)^{1-\alpha} + \sum_{ {\bm{v}} \in V_1} \left( p_1 W_{\rho_1}( {\bm{u}} ) W_{\psi_1}( {\bm{v}} ) \right)^\alpha \left( q_1 W_{\tau_1}( {\bm{u}} ) W_{\psi_1}( {\bm{v}} ) \right)^{1-\alpha} \right] \\ &=\sum_{ {\bm{u}} \in {\cal P} } \left[ p_0^\alpha q_0^{1-\alpha } W_{\rho_0}( {\bm{u}} )^\alpha W_{\tau_0}( {\bm{u}} )^{1-\alpha} \sum_{ {\bm{v}} \in V_0} W_{\psi_0}( {\bm{v}} ) + p_1^\alpha q_1^{1-\alpha } W_{\rho_1}( {\bm{u}} )^\alpha W_{\tau_1}( {\bm{u}} )^{1-\alpha} \sum_{ {\bm{v}} \in V_1} W_{\psi_1}( {\bm{v}} ) \right] \\ &= p_0^\alpha q_0^{1-\alpha } \sum_{ {\bm{u}} \in {\cal P} } W_{\rho_0}( {\bm{u}} )^\alpha W_{\tau_0}( {\bm{u}} )^{1-\alpha} + p_1^\alpha q_1^{1-\alpha } \sum_{\bm{u'} \in {\cal P} } W_{\rho_1}(\bm{u'} )^\alpha W_{\tau_1}(\bm{u'} )^{1-\alpha} \\ &=p_0^\alpha q_0^{1-\alpha} Q_\alpha(W_{\rho_0} || W_{\tau_0} ) + p_1^\alpha q_1^{1-\alpha} Q_\alpha(W_{\rho_1} || W_{\tau_1} ), \end{align} where in the third equality we used the normalisation of the representation, which gives the equality. 
The inequality in the statement of the lemma then follows from the fact that both terms on the right hand side of \eqref{eq:Q_alpha_equality} must be non-negative for all $\alpha \in {\cal A}$, completing the proof.\end{proof}
With this property in hand, we now have the following lemma, which makes the non-trivial $n$-dependence in $\Delta D_\alpha$ more explicit and moreover highlights the fact that the choice of CSS state the system is left in following a failed run of the protocol is arbitrary.
\begin{lemma} \label{lemma:f__n_expanded}
Define the function $Q_\alpha(\cdot || \cdot) \coloneqq 2^{(\alpha -1) D_\alpha(\cdot || \cdot)}$ and write the maximally mixed state on $k$ qubits as $\frac{\mathbbm{1}_k}{d_L}$, where $d_L \coloneqq 2^k$. Then we have:
\begin{align}
\Delta D_\alpha = n \left(1 - H_\alpha[W_\rho]\right) + k - \frac{1}{\alpha -1}\log \left[ p^\alpha Q_\alpha\left(W_{\rho'} \, \bigg|\bigg| \, W_{\frac{\mathbbm{1}_k}{d_L}}\right) + (1-p)^\alpha \left(\frac{1}{2^{n-k} -1}\right)^{\alpha -1} \right] \label{eq:f__n_expanded}
\end{align}
and in particular the function $\Delta D_\alpha$ is independent of the choice of state $\sigma$.
\end{lemma}
\begin{proof}
By \lemref{lemma:general_mean}, we have the following expansion
\begin{align}
Q_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right) &= p^\alpha \left(2^{k-n}\right)^{1-\alpha} Q_\alpha\left(W_{\rho'} \, || \, W_{\frac{\mathbbm{1}_k}{d_L}}\right) + (1-p)^\alpha \left(1 - 2^{k-n}\right)^{1-\alpha} Q_\alpha\left(W_{\sigma} \, || \, W_{\sigma}\right) \\
&= \left(2^{n-k}\right)^{\alpha -1} \left[ p^\alpha Q_\alpha\left(W_{\rho'} \, || \, W_{\frac{\mathbbm{1}_k}{d_L}}\right) + (1-p)^\alpha \left(\frac{1}{2^{n-k} -1}\right)^{\alpha -1} \right],
\end{align}
where in the last equality we have used $Q_\alpha({\bm{p}} || {\bm{p}}) = 1$ for all probability distributions ${\bm{p}}$. Therefore, since $D_\alpha(\cdot || \cdot) = \frac{1}{\alpha -1} \log Q_\alpha(\cdot || \cdot)$ we have
\begin{align}
D_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right) = n - k + \frac{1}{\alpha -1}\log \left[ p^\alpha Q_\alpha\left(W_{\rho'} \, || \, W_{\frac{\mathbbm{1}_k}{d_L}}\right) + (1-p)^\alpha \left(\frac{1}{2^{n-k} -1}\right)^{\alpha -1} \right]. \label{eq:DDD}
\end{align}
Substituting \eqref{eq:DDD} and $D_\alpha\left(W_\rho \bigg|\bigg| W_{\frac{\mathbbm{1}}{2}}\right) = 2 - H_\alpha[W_\rho]$ into \eqref{eq:Delta_alpha_n} gives the result as claimed.
\end{proof}
\subsection{Bounds are independent of choice of preparation state on failed post-selection}
\label{appx:bounds_sigma_indep}
By inspection, the form for $\Delta D_\alpha$ given in \lemref{lemma:f__n_expanded} has no $\sigma$-dependence. We therefore immediately have the following corollary.
\begin{corollary}
The bounds in \thmref{thrm:lower_bound_n_CSS_code} are independent of the choice of preparation state $\sigma \in {\cal D}_{css}$.
\end{corollary}
This result also follows from resource theoretic arguments. In particular, we observe that the following quantum channel is straightforwardly stochastically represented for any $\omega \in {\cal D}_{css}$
\begin{align}
{\cal E}_\omega(\cdot) \coloneqq \mathcal{I} \otimes P_0(\cdot) + \omega \mathrm{tr} \otimes P_1(\cdot),
\end{align}
where $P_k(\cdot) \coloneqq \ketbra{k}(\cdot)\ketbra{k}$, since it is a composite channel formed from elements of ${\cal O}$.
It then follows that for any $\omega$ in the interior of ${\cal D}_{css}$ we have
\begin{align}
D_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right) \ge D_\alpha\left(W_{{\cal E}_\omega[\rho_p]} \, || \, W_{{\cal E}_\omega[\tau_{n,k}]}\right) \ge D_\alpha\left(W_{{\cal E}_\sigma \circ {\cal E}_\omega[\rho_p]} \, || \, W_{{\cal E}_\sigma \circ {\cal E}_\omega[\tau_{n,k}]}\right) = D_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right).
\end{align}
We can therefore conclude that
\begin{align}
D_\alpha\left(W_{{\cal E}_\omega[\rho_p]} \, || \, W_{{\cal E}_\omega[\tau_{n,k}]}\right) = D_\alpha\left(W_{\rho_p} \, || \, W_{\tau_{n,k}}\right),
\end{align}
and thus the corresponding bounds will be unaffected by varying the choice of $\sigma$ in the interior of ${\cal D}_{css}$.
\subsection{Proof of \lemref{lemma:properties_fn}}
\label{appx:fn_properties_proof}
We now establish the properties of the constraint function $\Delta D_\alpha$ stated in the following lemma, reproduced from the main text:
\fnProperties*
\begin{proof}
In this proof, we simplify notation by defining the following non-negative constants $c_1 \coloneqq p^\alpha Q_\alpha\left(W_{\rho'} \, \bigg|\bigg| \, W_{\frac{\mathbbm{1}_k}{d_L}}\right)$ and $c_2 \coloneqq (1-p)^\alpha$.

\textbf{\textit{Proof of \ref{fproperty_concavity}:}} Let us define the following function
\begin{align}
g(n) \coloneqq \left[ c_1 + c_2 \left(\frac{1}{2^{n-k} -1}\right)^{\alpha -1} \right].
\end{align}
This means that from \lemref{lemma:f__n_expanded} we can write
\begin{equation}
\Delta D_\alpha = n \left(1 - H_\alpha[W_{\rho}]\right) + k - \frac{1}{\alpha -1} \log g(n),
\end{equation}
and since the first term is linear we need only check the second derivative of the second term to establish that $\Delta D_\alpha$ is concave. We have
\begin{align}
\partial_n^2 \Delta D_\alpha &= -\frac{1}{\alpha-1} \partial_n^2 \log g(n) \\
&= - \left[ \frac{\ln 2 \, c_2 2^{k+n} \left(c_1 \left(2^k + (\alpha -1) 2^n\right) \left(2^{n-k}-1\right)^{\alpha} + c_2 \left(2^n - 2^k\right)\right)}{\left(2^n - 2^k\right) \left(c_1 2^k \left(2^{n-k}-1\right)^{\alpha} + c_2 \left(2^n - 2^k\right)\right)^2} \right].
\end{align}
Since $c_1, c_2 \ge 0$ for all $\rho'$ and $p$, the term in square brackets is non-negative for all $n > k, \alpha > 1, \rho'$ and $p$ (strictly positive for $p < 1$), which implies $\partial_n^2 \Delta D_\alpha$ is non-positive everywhere on our restricted domain. Therefore $\Delta D_\alpha$ is concave, as claimed.

\textbf{\textit{Proof of \ref{fproperty_lim_one}:}} Recalling that $\alpha > 1$, we have from \lemref{lemma:f__n_expanded} that
\begin{align}
\lim_{n\rightarrow k^+} \Delta D_\alpha &= \lim_{n\rightarrow k^+} \left\{ n \left(1 - H_\alpha[W_\rho]\right) + k - \frac{1}{\alpha -1}\log \left[c_1 + c_2 \left(\frac{1}{2^{n-k}-1}\right)^{\alpha -1} \right] \right\} \notag \\
&= k\left(2 - H_\alpha[W_\rho]\right) - \frac{1}{\alpha -1}\lim_{n\rightarrow k^+} \left\{\log \left[c_1 + c_2 \left(\frac{1}{2^{n-k}-1}\right)^{\alpha -1} \right]\right\} = -\infty < 0,
\end{align}
so long as $c_2 > 0$, which is true if and only if $p < 1$.
\textbf{\textit{Proof of \ref{fproperty_lim_infty}:}} We have:
\begin{align}
\lim_{n\rightarrow \infty} \Delta D_\alpha &= \lim_{n\rightarrow \infty} \left\{ n \left(1 - H_\alpha[W_\rho]\right) + k - \frac{1}{\alpha -1}\log \left[c_1 + c_2 \left(\frac{1}{2^{n-k} -1}\right)^{\alpha -1} \right] \right\} \\
&= k - \frac{1}{\alpha -1} \log[c_1] + \lim_{n \rightarrow \infty}\left\{ n \left(1 - H_\alpha[W_\rho]\right) \right\} \\
&= k - D_\alpha\left(W_{p \rho'} \, || \, W_{\frac{\mathbbm{1}_k}{2^k}}\right) + \lim_{n \rightarrow \infty}\left\{ n \left(1 - H_\alpha[W_{\rho}]\right) \right\} \\
&= H_\alpha[W_{p \rho'}] - k + \lim_{n \rightarrow \infty}\left\{ n \left(1 - H_\alpha[W_{\rho}]\right) \right\} \\
&= \begin{cases} -\infty, & H_\alpha[W_\rho] > 1, \\ +\infty, & H_\alpha[W_\rho] < 1, \\ H_\alpha[W_{p \rho'}] - k, & \mathrm{otherwise}. \end{cases}
\end{align}
Therefore, if $H_\alpha[W_\rho] > 1$ then $\lim_{n\rightarrow \infty} \Delta D_\alpha < 0$, as claimed. This completes the proof.\end{proof}
\subsection{Analytic upper and lower bounds for arbitrary dimension}
\label{appx:extension_n_m}
Here we consider the more general case of an $[[n,k]]$ code projection, for distilling $n$ copies of a \textit{qudit} state (on a $d$-dimensional system), that can be described by a stochastically represented channel. This includes the qubit CSS code projections considered here or, more generally, stabilizer code projections for odd dimensional systems. Let the quantum channel that describes an arbitrary $n$-to-$k$ code reduction be
\begin{equation}
{\cal E}(\cdot) \coloneqq \mathrm{tr}_{1,\dots,n-k}[{\cal U} \circ {\cal P}(\cdot)] \otimes \ketbra{0} + \mathrm{tr}[\overline{{\cal P}}(\cdot)] \sigma \otimes \ketbra{1},
\end{equation}
where $\sigma$ is an arbitrary positively represented state of the theory (e.g. CSS), the choice of which we have previously shown does not affect the obtained bounds. Now, recall that $n$ copies of our input noisy magic state $\rho^{\otimes n}$ will transform under the action of this channel as
\begin{align}
{\cal E}\left[\rho^{\otimes n}\right] = p \rho' \otimes \ketbra{0} + (1-p) \sigma \otimes \ketbra{1} \eqqcolon \rho_p,
\end{align}
where we assume the output state following successful post-selection, $\rho'$, is $\epsilon'$-close to $k$ copies of our pure magic target state $\psi$. Closeness is measured by the trace distance $\norm{\rho - \sigma}_1$, where $\norm{X}_1 \coloneqq \mathrm{tr}\abs{X} = \mathrm{tr}\sqrt{X^\dagger X}$ is the trace norm (also known as the Schatten-1 norm). In other words, we assume
\begin{equation}
\norm{\rho' - \psi^{\otimes k}}_1 \le \epsilon' < \norm{\rho - \psi}_1.
\end{equation}
We also define the Frobenius norm (also known as the Schatten-2 norm) as
\begin{align*}
\norm{X}_2 \coloneqq \sqrt{\mathrm{tr}[X^\dagger X]}.
\end{align*}
Analogously, we define the $\ell_1$- and $\ell_2$-norms, respectively, of a vector ${\bm{w}} \in \mathbb{R}^d$ via
\begin{align}
\norm{{\bm{w}}}_1 &\coloneqq \sum_{i=1}^d \abs{w_i}, \\
\norm{{\bm{w}}}_2 &\coloneqq \left[\sum_{i=1}^d \abs{w_i}^2\right]^{\frac{1}{2}}.
\end{align}
We will need the following known result (see e.g.~\cite{bhatia2013matrix}), which is a consequence of the Cauchy-Schwarz inequality.
\begin{lemma} \label{lemma:L1_upper_bound}
For all ${\bm{w}} \in \mathbb{R}^{d}$
\begin{align}
\norm{{\bm{w}}}_1 \le \sqrt{d} \norm{{\bm{w}}}_2.
\end{align} \end{lemma} This allows us to prove the following lemma, which shows that small variations in quantum states correspond to small variations in their Wigner representations: \begin{lemma} \label{lem:trace_distance_wigner} If $ \norm{\rho - \sigma}_1 \le \epsilon$ then for any well-defined Wigner representation defined by a set of phase-point operators that satisfy $ \mathrm{tr} [A_ {\bm{x}} ^\dagger A_ {\bm{y}} ] = d \delta_{ {\bm{x}} , {\bm{y}} } , \forall {\bm{x}} , {\bm{y}} \in {\cal P} $ we have \begin{align} \label{eq:trace_distance_general} \norm{W_\rho - W_\sigma}_1 \le \sqrt{d} \epsilon. \end{align} Furthermore, if the phase point operators satisfy $A_ {\bm{x}} \ge 0 $ for all $ {\bm{x}} \in {\cal P} $ then this can be tightened to \begin{align} \norm{W_\rho - W_\sigma}_1 \le \epsilon. \label{appeq:trace_distance_A_x_posdef} \end{align} \end{lemma} \begin{proof} To simplify notation, we first define the state difference $\Delta \coloneqq \rho - \sigma$ such that $W_\Delta = W_\rho - W_\sigma$. Since the Schatten-$p$ norms are non-increasing with respect to $p$~\cite{bhatia2013matrix} we have \begin{align} \norm{\rho - \sigma}_1 &\ge \norm{\rho - \sigma}_2 = \norm{\sum_ {\bm{x}} W_\Delta( {\bm{x}} ) A_ {\bm{x}} }_2 = \sqrt{ \sum_{ {\bm{x}} , {\bm{y}} } W_\Delta^*( {\bm{x}} ) W_\Delta( {\bm{y}} ) \mathrm{tr} [A_ {\bm{x}} ^\dagger A_ {\bm{y}} ]} \\ &=\sqrt{ \sum_{ {\bm{x}} , {\bm{y}} } W_\Delta^*( {\bm{x}} ) W_\Delta( {\bm{y}} ) d \delta_{ {\bm{x}} , {\bm{y}} }} = \sqrt{d} \norm{W_\Delta}_2 \ge \frac{1}{\sqrt{d}} \norm{W_\Delta}_1, \end{align} where in the second inequality we employ \lemref{lemma:L1_upper_bound}. Therefore $\norm{W_\rho - W_\sigma}_1 \le \sqrt{d} \norm{\rho - \sigma}_1$, which completes the proof of \eqref{eq:trace_distance_general}. We now prove the second statement in the Lemma, by further assuming that $A_ {\bm{x}} \ge 0$. Let us write the Hermitian matrix $\Delta$ in its spectral decomposition as $\Delta = \sum_i s_i \ketbra{s_i}$. Then we can evaluate the following: \begin{align} \norm{W_\rho - W_\sigma}_1 &\coloneqq \sum_{ {\bm{x}} } \abs{ W_\rho( {\bm{x}} ) - W_\sigma( {\bm{x}} ) } = \frac{1}{d} \sum_{ {\bm{x}} } \abs{ \mathrm{tr} [ \Delta A_ {\bm{x}} ^\dagger]} \\ &= \frac{1}{d} \sum_{ {\bm{x}} } \abs{\sum_i s_i \bra{s_i}A_ {\bm{x}} ^\dagger\ket{s_i}} \le \frac{1}{d} \sum_{ {\bm{x}} ,i}\abs{ s_i } \abs{ \bra{s_i}A_ {\bm{x}} ^\dagger\ket{s_i}} \label{appeq:cauchy}, \end{align} where to obtain the inequality we have employed the triangle inequality. Now, $A_ {\bm{x}} \ge 0$ implies $A_ {\bm{x}} ^\dagger \ge 0$ and therefore $\bra{\psi} A_ {\bm{x}} ^\dagger \ket{\psi} \ge 0 $ for all pure states $\psi$. Thus continuing from \eqref{appeq:cauchy} we obtain \begin{align} \norm{W_\rho - W_\sigma}_1 &\le \frac{1}{d} \sum_{ {\bm{x}} ,i}\abs{ s_i } \bra{s_i}A_ {\bm{x}} ^\dagger\ket{s_i} = \sum_i \abs{s_i} \bra{s_i} \left[ \frac{1}{d} \sum_ {\bm{x}} A_ {\bm{x}} ^\dagger\right]\ket{s_i} = \sum_i \abs{s_i} = \norm{ \rho - \sigma}_1, \end{align} where we have used the resolution of the identity in~\ref{A_property:identity} in the final line. This completes the proof of \eqref{appeq:trace_distance_A_x_posdef}, as required.\end{proof} \begin{lemma} \label{lem:continuity_renyi_entropy} Let $\rho$ and $\sigma$ be two quantum states on $ {\cal H} _d$ such that $\norm{\rho - \sigma}_1 \le \epsilon$. 
Then \begin{align} H_\alpha[W_\rho] - H_\alpha [W_\sigma] \le \frac{\alpha}{\alpha -1} \begin{cases} \log [1 + \epsilon d^{2}], & \text{if } A_ {\bm{x}} \ge 0, \forall {\bm{x}} \in {\cal P} , \\ \log [1 + \epsilon d^{\frac{5}{2}}],& \text{otherwise}. \end{cases} \end{align} \end{lemma} \begin{proof} Theorem 7 (2) of \cite{woods2019resource} applies to quasiprobability distributions and tells us that for two $d^2$-dimensional distributions $ {\bm{w}} , {\bm{w}} '$, we have the following continuity statement on the $\alpha$-R\'{e}nyi entropies \begin{align} \abs{H_\alpha( {\bm{w}} ) - H_\alpha( {\bm{w}} ')} \le \frac{\alpha}{\alpha-1} \log [1 + \norm{ {\bm{w}} - {\bm{w}} '}_1 d^2 ]. \end{align} The proof of this is essentially reliant on the monotonicity of the $p$-norms $\norm{ {\bm{w}} }_p \coloneqq \left( \sum_{i=1}^{d^2} \abs{w_i}^p \right)^{1/p}$, i.e., for $1\le p<q\le \infty$, $\norm{ {\bm{w}} }_{p} \ge \norm{ {\bm{w}} }_q$, which also holds for quasi-distributions. The result then follows immediately from \lemref{lem:trace_distance_wigner}. \end{proof} We highlight that our qubit phase point operators \eqref{eq:single_qubit_A} have eigenvalues in $\{0,1\}$ and therefore so do multi-qubit phase point operators, which implies all elements of the set $\{A_ {\bm{x}} \}$ are positive semidefinite. However, phase point operators for Gross' odd dimensional representation are both unitary and Hermitian and therefore have eigenvalues in $\{-1,1\}$. We now prove the following more general theorem, from which \thmref{thrm:upper_bound_n_m} follows. \begin{theorem} Let $ \rho$ and $\psi$ be a noisy and pure magic state respectively on $ {\cal H} _d$ with real Wigner distributions. If $\rho^{\otimes n } \rightarrow p \rho'$, where $\norm{\rho' - \psi^{\otimes k} }_1\le \epsilon'$, under a stochastically-represented distillation protocol that is the code projection for an $[[n,k]]$ stabilizer code, then we have the following family of upper bounds on $n$ \begin{align} n \le \frac{ k \left[ H_\alpha (W_{\psi}) - \log d \right] + \frac{\alpha}{1-\alpha}\log( \frac{p}{1 + \epsilon' g(d) }) }{\left[ H_\alpha (W_{\rho}) - \log d \right]}, \label{appeq:analytic_upper_bound} \end{align} for all $\alpha \in {\cal A} $ for which $H_\alpha(W_\rho) > \log d $, and the family of lower bounds on $n$: \begin{align} n \ge \frac{ k \left[ \log d - H_\alpha (W_{\psi}) \right] - \frac{\alpha}{1-\alpha}\log( \frac{p}{1 + \epsilon' g(d) }) }{\left[ \log d -H_\alpha (W_{\rho}) \right]}, \label{appeq:analytic_lower_bound} \end{align} for all $\alpha \in {\cal A} $ for which $H_\alpha(W_\rho) < \log d $, where we have defined the function \begin{align} g(d) \coloneqq \begin{cases} d^{2} & \text{if }A_ {\bm{x}} \ge0, \, \forall {\bm{x}} \in {\cal P} , \\ d^{\frac{5}{2}} & \text{otherwise.} \end{cases} \end{align} \end{theorem} \begin{proof} From \lemref{lemma:general_mean} we have \begin{align} Q_\alpha(W_{\rho_p} || W_{\tau_{n,k}}) \ge p^\alpha \left( \frac{1}{d^{n-k}}\right)^{1-\alpha} Q_{\alpha} \left( W_{\rho'} \Big|\Big| W_{\frac{ \mathbbm{1} }{d^k} } \right) . 
\end{align} Since for $\alpha >1$, $\log(\cdot)/(\alpha-1)$ is a monotonically increasing function it therefore follows that \begin{align} D_\alpha(W_{\rho_p} || W_{\tau_{n,k}}) &\ge \frac{1}{\alpha -1 } \log \left[p^\alpha \left( \frac{1}{d^{n-k}}\right)^{1-\alpha} Q_{\alpha} \left( W_{\rho'} \Big|\Big| W_{\frac{ \mathbbm{1} }{d^k} } \right) \right] \\ &= D_\alpha (W_{\rho'} || W_{\frac{ \mathbbm{1} }{d^k} } ) + \frac{\alpha}{\alpha -1} \log p + (n-k)\log d \\ &= k \log d -H_\alpha(W_{\rho'})+ \frac{\alpha}{\alpha -1} \log p + n\log d , \end{align} where in the final equality we have used the identity $D_\alpha(W_\rho || W_{\frac{ \mathbbm{1} _d}{d}}) = 2 \log d - H_\alpha(W_\rho) $. We can now make use of the continuity of the R\'{e}nyi entropy as stated in \lemref{lem:continuity_renyi_entropy} to further lower bound this divergence as \begin{align} D_\alpha(W_{\rho_p} || W_{\tau_{n,k}}) &\ge k \log d - H_\alpha (W_{\psi^{\otimes k}}) - \frac{\alpha}{\alpha -1}\log [1 + \epsilon' g(d)] + \frac{\alpha}{\alpha -1} \log p+n\log d \\ &= k \left[\log d - H_\alpha (W_{\psi}) \right]- \frac{\alpha}{1 -\alpha}\log \frac{p}{1+ \epsilon' g(d)} + n \log d, \end{align} where the equality follows from the multiplicativity of the $\alpha$-R\'{e}nyi entropy. This gives rise to the following upper bound on the relative entropy difference $\Delta D_\alpha \coloneqq n D_\alpha(W_\rho || \frac{ \mathbbm{1} }{d}) -D_\alpha(W_{\rho_p} || W_{\tau_{n,k}})$ as follows \begin{align} 0 \le \Delta D_\alpha &\le n[ \log d- H_\alpha (W_\rho)] + k \left[ H_\alpha (W_{\psi}) -\log d \right] + \frac{\alpha}{1 -\alpha}\log \frac{p}{1+ \epsilon' g(d)}. \end{align} This gives a weaker but still necessary constraint on the transformation $\rho^{\otimes n } \mapsto \rho_p$ and $\left(\frac{ \mathbbm{1} }{d}\right)^{\otimes n} \mapsto \tau_{n,k}$, which we rearrange to write as \begin{align} n[H_\alpha (W_\rho)-\log d ] \le k \left[ H_\alpha (W_{\psi}) - \log d \right] +\frac{\alpha}{1 -\alpha}\log \frac{p}{1+ \epsilon' g(d)}. \label{apeq:upperbound_unarranged} \end{align} Therefore for $H_\alpha(W_\rho) > \log d$ we can rearrange \eqref{apeq:upperbound_unarranged} to obtain \eqref{appeq:analytic_upper_bound}, whereas for $H_\alpha(W_\rho) < \log d$ we obtain \eqref{appeq:analytic_lower_bound}, which completes the proof.\end{proof} \section{Paired CSS code projections for Hadamard distillation} \label{appx:Hadamard_output_form} \begin{lemma} If there exists an $n$-to-1 CSS code projection $ {\cal{C}}$ whose successful run sends $n$ copies of $\rho(\epsilon)$ to an arbitrary rebit state $\rho'$ with acceptance probability $p$, then there also exists another $n$-to-1 CSS code projection $\tilde{ {\cal K} }$ whose successful run sends $n$ copies of $\rho(\epsilon)$ to $H \rho' H$ with the same acceptance probability. \end{lemma} \begin{proof} We can represent $ {\cal{C}}$ as \begin{align} {\cal{C}}(\rho(\epsilon)^{\otimes n}) = \mathrm{tr} _{[1,n-1]}(UP(\rho(\epsilon)^{\otimes n})PU^\dagger) = p\rho' \end{align} where $U$ and $P$ are the decoding unitary and codespace projector respectively of an $[[n,1]]$ CSS code $K$. Let the stabilizer group defining the codespace of $ {\cal{C}}$ be $\langle (-1)^{b_i} S_i \rangle_{i=1,\dots, n-1}$, where \begin{align} S_i = \begin{cases} X( {\bm{u}} _i)& \text{ for } 1 \le i \le r\\ Z( {\bm{u}} _i)& \text{ otherwise,} \end{cases} \end{align} in which each $ {\bm{u}} _i$ is a non-zero $n$-dimensional binary vector and each $b_i$ is a binary digit. 
By \lemref{lemma:CNOT_gaussian_elim}, $U$ may be represented as $U = [H^{\otimes r} \otimes \mathbbm{1} _{n-r}] V$ for some $V \in {\cal G}(n)$, where \begin{align} V [(-1)^{b_i} S_i] V^\dagger = \begin{cases} X_i& \text{ for } 1 \le i \le r\\ Z_i& \text{ for } r + 1 \le i \le n-1 \end{cases} \end{align} Since all unitaries in $U$ other than $V$ act only on qubits 1 through $n-1$, which are subsequently traced out, we have that \begin{align} {\cal{C}}(\rho(\epsilon)^{\otimes n}) = \mathrm{tr} _{[1,n-1]}[VP(\rho(\epsilon)^{\otimes n})PV^\dagger] = p \rho', \end{align} which implies \begin{align} p H \rho' H &= H \mathrm{tr} _{[1,n-1]}[VP(\rho(\epsilon)^{\otimes n})PV^\dagger] H\\ &= H \mathrm{tr} _{[1,n-1]}[\left[H^{\otimes (n-1)} \otimes \mathbbm{1}\right] VP(\rho(\epsilon)^{\otimes n})PV^\dagger \left[H^{\otimes (n-1)} \otimes \mathbbm{1}\right] ] H\\ &= \mathrm{tr} _{[1,n-1]}[H(n) VP(\rho(\epsilon)^{\otimes n})PV^\dagger H(n)]\\ &= \mathrm{tr} _{[1,n-1]}[(H(n) V H(n)) (H(n) P H(n))(H(n) \rho(\epsilon)^{\otimes n} H(n)) (H(n) P H(n)) (H(n) V^\dagger H(n))]\\ &\coloneqq \mathrm{tr} _{[1,n-1]}[\tilde{V} \tilde{P} (\rho(\epsilon)^{\otimes n}) \tilde{P} \tilde{V}^\dagger], \end{align} where we have defined $\tilde{V} \coloneqq H(n) V H(n)$ and $\tilde{P} \coloneqq H(n) P H(n)$, and used that $H(n) \rho(\epsilon)^{\otimes n} H(n) = \rho(\epsilon)^{\otimes n}$, since $H \rho(\epsilon) H = \rho(\epsilon)$. We can represent $P$ as \begin{align} P = \prod_{i=1}^{n-1} P((-1)^{b_i} S_i), \end{align} where we recall $P((-1)^{b_i}S_i) = \frac{1}{2}( \mathbbm{1} _n + (-1)^{b_i}S_i)$ is the projector onto the $+1$ eigenspace of $(-1)^{b_i} S_i$. Therefore \begin{align} \tilde{P} = \prod_{i=1}^{n-1} H(n) P((-1)^{b_i} S_i) H(n) = \prod_{i=1}^{n-1} P((-1)^{b_i} \tilde{S}_i), \end{align} where we have defined \begin{align} \tilde{S}_i \coloneqq H(n) S_i H(n) = \begin{cases} Z( {\bm{u}} _i)& \text{ for } 1 \le i \le r\\ X( {\bm{u}} _i)& \text{ for } r+1 \le i \le n-1. \end{cases} \end{align} Therefore, $\tilde{P}$ is the codespace projector of another $[[n,1]]$ CSS code $\tilde{K}$ defined by the stabilizer group $\langle (-1)^{b_i} \tilde{S}_i \rangle$. Since $V$ is a member of ${\cal G}(n)$, it must be a product of $CNOT_{j,k}, Z_j, X_j$, where $j,k$ may be any integer in the range $1, \dots, n$ so long as $j \neq k$. Therefore, $\tilde{V}$ is a product of $H(n) CNOT_{j,k} H(n) = CNOT_{k,j}$, $H(n) Z_j H(n) = X_j$ and $H(n) X_j H(n) = Z_j$, and thus also a member of ${\cal G}(n)$. Furthermore, \begin{align} \tilde{V} [(-1)^{b_i} \tilde{S}_i] \tilde{V}^\dagger = H(n) \left[(-1)^{b_i} V S_i V^\dagger\right] H(n) = \begin{cases} Z_i& \text{ for } 1 \le i \le r\\ X_i& \text{ otherwise.} \end{cases} \end{align} Therefore, $\tilde{U} \coloneqq ( \mathbbm{1} _r \otimes H^{\otimes n-1-r} \otimes \mathbbm{1} ) \tilde{V}$ is the decoding unitary of $\tilde{ {\cal{C}}}$. We thus conclude \begin{align} p H \rho' H &= \mathrm{tr} _{[1,n-1]}\left(\tilde{V} \tilde{P} (\rho(\epsilon)^{\otimes n}) \tilde{P} \tilde{V}^\dagger\right)\\ &= \mathrm{tr} _{[1,n-1]}\left(( \mathbbm{1} _r \otimes H^{\otimes n-1-r} \otimes \mathbbm{1} ) \tilde{V} \tilde{P} (\rho(\epsilon)^{\otimes n}) \tilde{P} \tilde{V}^\dagger ( \mathbbm{1} _r \otimes H^{\otimes n-1-r} \otimes \mathbbm{1} )\right)\\ &= \mathrm{tr} _{[1,n-1]}\left(\tilde{U} \tilde{P} (\rho(\epsilon)^{\otimes n}) \tilde{P} \tilde{U}^\dagger\right)\\ & \coloneqq \tilde{ {\cal{C}}}(\rho(\epsilon)^{\otimes n}), \end{align} where we have defined $\tilde{ {\cal{C}}}(\cdot) \coloneqq \mathrm{tr} _{[1,n-1]}(\tilde{U} \tilde{P} (\cdot) \tilde{P} \tilde{U}^\dagger)$ as the code projection for the $[[n,1]]$ CSS code $\tilde{K}$. 
\end{proof} We thus find that $ {\cal E} \coloneqq \frac{1}{2} {\cal{C}} + \frac{1}{2} \tilde{ {\cal{C}}}$ transforms $n$ copies of $\rho(\epsilon)$ as \begin{align} {\cal E} (\rho(\epsilon)^{\otimes n}) = p \left(\frac{1}{2} \rho' + \frac{1}{2} H \rho' H\right) \otimes \ketbra{0} + (1-p) \sigma \otimes \ketbra{1} \eqqcolon p\rho(\epsilon') \otimes \ketbra{0} + (1-p)\sigma \otimes \ketbra{1}, \end{align} where we have defined $\rho(\epsilon') \coloneqq \frac{1}{2}(\rho' + H \rho' H)$, which has the same fidelity $f$ with respect to $\ket{H}$ as $\rho'$, thereby implying $\epsilon' = 2(1-f)$. We further observe that, as a mixture of stochastically-represented operations, $ {\cal E} $ must itself be stochastically represented, and also maps $n$ copies of $\frac{ \mathbbm{1} }{2}$ to $\tau_{n,k}$. Therefore, if there exists an $n$-to-1 CSS code projection that transforms $n$ copies of $\rho(\epsilon)$ to $\rho'$ with acceptance probability $p$, then there exists a stochastically-represented operation that transforms $n$ copies of $\rho(\epsilon)$ to $\rho(\epsilon')$ with acceptance probability $p$ while also mapping $n$ copies of $\frac{ \mathbbm{1} }{2}$ to $\tau_{n,k}$, where $\rho(\epsilon')$ has the same fidelity with respect to $\ket{H}$ as $\rho'$. \section{Decomposition of CSS magic distillation protocols into code projections} \label{sec:campbell_browne_reduction} \begin{theorem} Every $n$-to-$k$ CSS magic distillation protocol $ {\cal E} $ can be decomposed as a sum of CSS code projections, followed by preparation of CSS states and completely CSS-preserving post-processing. Thus one can write \begin{align} {\cal E} (\rho) = \sum_j p_j {\cal E} _j(\rho),\ {\cal E} _j(\rho) \coloneqq {\cal U}_j \left( {\cal{C}}_j(\rho) \otimes \ketbra{\varphi_j} \right), \end{align} where $p_j$ is a probability, ${\cal U}_j$ is a completely CSS-preserving unitary channel on $k$ qubits, $ {\cal{C}}_j$ is the codespace projection of an $[[n,k_j]]$ CSS code for some integer $k_j$ in the range $0 \le k_j \le k$, and $\ket{\varphi_j}$ is a CSS state on $(k-k_j)$ qubits. \label{theorem:CSS_Campbell_Browne} \end{theorem} \begin{proof} To bring $ {\cal E} $ into the desired form, we proceed in four steps: \begin{enumerate} \item Any $n$-to-$k$ CSS magic distillation protocol $ {\cal E} $ can be decomposed as a sum of channels \begin{align} {\cal E} (\rho) = \sum_i q_i {\cal E} _i(\rho),\ {\cal E} _i(\rho) \coloneqq \mathrm{tr} _{[1,\dots,n+m-k]}\left[K_i(\rho \otimes \ketbra{\psi_i}) K^\dagger_i\right] \end{align} where $q_i$ is a probability, $\ket{\psi_i}$ is a CSS state on $m$ ancillary qubits, and $K_i$ is a Kraus operator. Furthermore, each Kraus operator $K_i$ can be written as a product \begin{align} K_i = \tilde{U}_i \left(\prod_{l=1}^M P(S_{i,l})\right) \end{align} where $\tilde{U}_i$ is a completely CSS-preserving unitary and $P(S_{i,l})$ is the projector onto the $+1$ eigenspace of a CSS observable $S_{i,l}$. \item By considering an effective computational basis measurement on the first $n+m-k$ qubits before they are discarded, each channel $ {\cal E} _i$ can be further decomposed into a sum \begin{align} {\cal E} _i(\rho)=\sum_{ {\bm{s}} \in \{0,1\}^{n+m-k}} {\cal E} _{i, {\bm{s}} }(\rho), \end{align} where each channel $ {\cal E} _{i, {\bm{s}} }$ post-selects on the $+1$ outcome in a sequence of CSS measurements, and then performs a CSS code projection on the input and ancillary qubits. 
Thus one can write (note that $ {\cal{C}}$ depends on $i$ and $ {\bm{s}} $) \begin{align} {\cal E} _{i, {\bm{s}} }(\rho) = {\cal{C}}(\rho \otimes \ketbra{\psi_i}) \circ \Pi(S_{i,1}) \circ \dots \circ \Pi(S_{i,M}), \end{align} in which $\Pi(S_{i,l})(\cdot) \coloneqq P(S_{i,l})(\cdot) P(S_{i,l})$ is the post-selection channel for the $+1$ outcome in a projective measurement of the CSS observable $S_{i,l}$, and $ {\cal{C}}$ is the code projection of an $[[n+m,k]]$ CSS code. \item Each $ {\cal E} _{i, {\bm{s}} }$ can be converted into a CSS code projection on the input and ancillary qubits, followed by preparing a CSS state and completely CSS-preserving post-processing. Thus one can write \begin{align} {\cal E} _{i, {\bm{s}} }(\rho) = q'\ {\cal U}' \circ ( {\cal{C}}'(\rho \otimes \ketbra{\psi_i})\otimes \ketbra{\varphi'}), \end{align} where $q'$ is a probability, ${\cal U}'$ is a completely CSS-preserving unitary channel on $k$ qubits, $\ket{\varphi'}$ is a CSS state on $k-k'$ qubits for some integer $k'$ in the range $0 \le k' \le k$, and $ {\cal{C}}'$ is a code projection for an $[[n+m,k']]$ CSS code. \item Each CSS code projection $ {\cal{C}}'$ on the input and ancillary qubits can be further reduced to a CSS code projection on the input qubits \emph{alone}, followed by preparing a CSS state and completely CSS-preserving post-processing. Thus one can write \begin{align} {\cal{C}}'(\rho \otimes \ketbra{\psi_i}) = q''\ {\cal U}'' \circ ( {\cal{C}}''(\rho) \otimes \ketbra{\varphi''}) \end{align} where $q''$ is a probability, ${\cal U}''$ is a completely CSS-preserving unitary channel on $k'$ qubits, $ {\cal{C}}''$ is the code projection for an $[[n,k'']]$ CSS code for some integer $k''$ in the range $0 \le k'' \le k'$, and $\ket{\varphi''}$ is a CSS state on $k'-k''$ qubits. \end{enumerate} Substituting back immediately yields the theorem result. \qedhere \end{proof} \end{appendices} \subsection{Main results} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{00regionsplot.pdf} \caption{\textbf{(Finite bounds on CSS code lengths for magic state distillation protocols).} We plot upper and lower bounds on the number of copies $n$ of the noisy $H$-state $(1-\epsilon) \ketbra{H} + \epsilon \frac{ \mathbbm{1} }{2}$ required to distill a single output qubit $\ket{H}$ with output noise $\epsilon'=10^{-9}$ via projection onto the codespace of an $[[n,1]]$ CSS code. The shaded purple region shows the accessible region of parameter space prescribed by the intersection of our numeric upper bound $n_U$ (red curve) defined in \thmref{thrm:lower_bound_n_CSS_code} and the lower bound from projective robustness (PR) introduced in~\cite{Regula2022Probabilistic} (blue curve). The analytic upper bound $n^*$ (dashed yellow curve) defined in \eqref{eq:n_star} is shown to form a good approximation to the numeric bound $n_U$. \textbf{(a)} When the acceptance probability $p$ is low ($p=0.1$) the upper bounds are less constraining; \textbf{(b)} whereas on increasing $p$ to $p=0.9$ the upper bounds become considerably tighter. \label{fig:bound_comparison}} \end{figure} We show that distillation protocols based on the class of completely CSS-preserving operations $ {\cal O} $ (henceforth ``CSS protocols'') can be represented by stochastic maps on a well-defined multiqubit phase space. 
By exploiting techniques from majorization, in \thmref{thrm:general_qubit_bound} we extend the statistical mechanical framework of~\cite{koukoulekidis2022constraints} to qubit quantum computation under the restriction of operations drawn from $ {\cal O} $. We find that protocols based on projection onto the codespace of an $[[n,k]]$ CSS code can always be implemented via a series of elements from $ {\cal O} $, and therefore this framework applies to all such protocols. In fact, akin to~\cite{campbell_browne}, it turns out that every protocol in $ {\cal O} $ can be decomposed into a sum of effective code projections, and so such protocols are at the heart of all CSS-based distillation protocols. For CSS code projection protocols, we obtain novel upper bounds on the physical size of codes $n$, as a function of the number of logical qubits $k$, acceptance probabilities and error rates (which can be related to the code distance $D$ via $\epsilon' = O(\epsilon^D)$). These upper bounds are not simply the consequence of the data-processing inequality, but exploit the underlying stochastic representation of the protocol. Combined with new lower bounds and with prior work on projective robustness of magic~\cite{Regula2022Probabilistic}, we obtain finite bounds on the physical size of the codes, as shown in~\figref{fig:bound_comparison}. For example, we find simple analytic upper bounds on the number of copies of a noisy Hadamard eigenstate $(1-\epsilon) \ketbra{H} + \epsilon \frac{ \mathbbm{1} }{2}$ necessary to distill a single noiseless $H$-state as a function of output error rate $\epsilon'$ and acceptance probability $p$: \begin{align} n\le \log_{f(\epsilon)} \left[\frac{1+4\epsilon'}{p}\right]^2 \eqqcolon n^*, \end{align} where the base of the logarithm is given by $f(\epsilon) \coloneqq [1-\epsilon +\frac{\epsilon^2}{2}]^{-1} \ge 1$. More generally, this upper bound is actually one of a family of upper and lower bounds parameterized by $\alpha$, which applies to arbitrary $d$-dimensional systems, and takes the following form: \begin{customthm}{9} Let $ \rho$ and $\psi$ be a noisy and pure magic state respectively on $ {\cal H} _d$ with real Wigner distributions. If $\rho^{\otimes n } \rightarrow p \rho'$, where $\norm{\rho' - \psi^{\otimes k} }_1\le \epsilon'$, under a stochastically-represented distillation protocol that is the code projection for an $[[n,k]]$ stabilizer code, then we have the following family of upper bounds on $n$ \begin{align} n \le \frac{ k \left[ H_\alpha (W_{\psi}) - \log d \right] + h_\alpha( \frac{p}{1 + \epsilon' d^{5/2} }) }{\left[ H_\alpha (W_{\rho}) - \log d \right]}, \end{align} for all $\alpha \in {\cal A} $ for which $H_\alpha(W_\rho) > \log d $, and the family of lower bounds on $n$: \begin{align} n \ge \frac{ k \left[ \log d - H_\alpha (W_{\psi}) \right] - h_\alpha( \frac{p}{1 + \epsilon' d^{5/2} }) }{\left[ \log d -H_\alpha (W_{\rho}) \right]}, \end{align} for all $\alpha \in {\cal A} $ for which $H_\alpha(W_\rho) < \log d $. \end{customthm} Here $d$ is the dimension of the qudit systems with Hilbert space $ {\cal H} _d$, and $H_\alpha(W_\rho)$ is a R\'{e}nyi entropy measure computed on a quasi-distribution representation of quantum state $\rho$. We also obtain numeric entropic lower bounds on all CSS protocols, which are seen in \figref{fig:lower_bounds_comparison} to outperform state monotones such as mana~\cite{veitch2014resource}, and outperform projective robustness in certain parameter regimes. 
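As a purely illustrative aside, the analytic bound $n^*$ above is straightforward to evaluate numerically. The following minimal Python sketch (assuming only NumPy; the parameter values are illustrative choices of ours, and the square in the bound is read as acting on the argument of the logarithm) computes $n^*$ for a given input error rate $\epsilon$, output error rate $\epsilon'$ and acceptance probability $p$:
\begin{verbatim}
import numpy as np

def n_star(eps_in, eps_out, p):
    """Analytic upper bound n* for n-to-1 H-state distillation:
    n <= log_{f(eps)}[ (1 + 4*eps')/p ]^2, with f(eps) = 1/(1 - eps + eps^2/2)."""
    f = 1.0 / (1.0 - eps_in + 0.5 * eps_in ** 2)   # base of the logarithm, f(eps) >= 1
    return 2.0 * np.log((1.0 + 4.0 * eps_out) / p) / np.log(f)

# Illustrative values (not those used in the figures of this work)
print(n_star(eps_in=0.1, eps_out=1e-9, p=0.9))
\end{verbatim}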
\section{Stochastic representation for CSS operations on qubits} \label{sec:basic_theory} In this section, we review the qubit representation introduced in Ref.~\cite{Catani2018CCSSP_universal} that forms the backbone of our work. We expand upon its properties and confirm that it respects all sequential and parallel composition of processes, which crucially means that the representation of product states factorizes over subsystems, a property that Ref.~\cite{koukoulekidis2022constraints} requires for applying entropies meaningfully to the state representations. Furthermore, we show that all magic distillation protocols that form CSS circuits are stochastically represented. It is this stochasticity that allows us to analyse magic protocols using tools from majorization theory, and to describe them as a form of classical statistical mechanics with quasi-distributions. \subsection{Phase space representation of qubit states} \label{subsec:our_wigner_rep} We first establish some convenient notation. Let $ {\bm{u}} \coloneqq (u_1, \dots, u_n)\in \mathbb{Z}^n_2$ denote a binary vector. Furthermore, given any single qubit operator $O$, let us denote \begin{equation} O( {\bm{u}} ) \coloneqq O^{u_1} \otimes \dots \otimes O^{u_n}. \end{equation} With this in place, consider an $n$-qubit quantum system with total Hilbert space $ {\cal H} _2^n := {\cal H} _2^{\otimes n}$. We associate to this system a phase space $ {\cal P} _n := \mathbb{Z}_2^{n} \times \mathbb{Z}_2^n$, where $ {\cal P} _n$ consists of all vectors $( {\bm{u}} _x, {\bm{u}} _z)$, and has a symplectic inner product $[ {\bm{u}} , {\bm{v}} ]$ defined as \begin{align} [ {\bm{u}} , {\bm{v}} ] \coloneqq {\bm{u}} _z \cdot {\bm{v}} _x - {\bm{v}} _z \cdot {\bm{u}} _x \equiv {\bm{u}} _z \cdot {\bm{v}} _x + {\bm{v}} _z \cdot {\bm{u}} _x, \end{align} where arithmetic is carried out modulo 2. We are now in a position to define our chosen representation over $ {\cal P} _n$ (we refer the reader to \appref{appx:wigner_rep} for further details and proofs of the properties presented in the following). We first define $n$-qubit displacement operators $D_ {\bm{u}} $, with $ {\bm{u}} \coloneqq ( {\bm{u}} _x, {\bm{u}} _z) \in \mathbb{Z}_2^n \times \mathbb{Z}_2^n$ via strings of single qubit Pauli operators $X$ and $Z$ \begin{align} D_{ {\bm{u}} } \coloneqq Z( {\bm{u}} _z) X( {\bm{u}} _x), \label{eq:Dx} \end{align} which generate the Heisenberg-Weyl group ${H(2)^{\times n}}$ on $n$-qubits modulo phase factors~\cite{Zyczkowski2006Geometry}. The displacement operators $D_ {\bm{u}} $ satisfy \begin{align} D_ {\bm{u}} D_ {\bm{a}} = (-1)^{[ {\bm{a}} , {\bm{u}} ]} D_ {\bm{a}} D_ {\bm{u}} . \label{eq:displacement_reorder} \end{align} Using these displacement operators, we can construct the following representation $ \rho \mapsto W_\rho ( {\bm{u}} )$ \begin{align} \label{eq:W_rho} W_\rho( {\bm{u}} ) \coloneqq \frac{1}{2^n} \mathrm{tr} [A^\dagger_ {\bm{u}} \rho], \end{align} for any $n$-qubit state $\rho$, where $\{ A_ {\bm{u}} \}$ are the set of $2^{2n}$ \textit{phase point operators} on $n$-qubits, which can be defined as \begin{align} A_ {\bm{u}} = \frac{1}{2^n} \sum_{ {\bm{v}} \in {\cal P} _n} (-1)^{[ {\bm{u}} , {\bm{v}} ]} D_ {\bm{v}} . \end{align} $W_\rho$ provides an informationally complete representation of general $n$-qubit states that is normalized, i.e., \begin{equation} \sum_{ {\bm{u}} } W_\rho ( {\bm{u}} ) = 1, \end{equation} for any quantum state $\rho$. 
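To make these definitions concrete, the following short Python sketch (an illustrative aside assuming only NumPy; all helper names are ours) builds the displacement operators, the phase point operators and the representation $W_\rho$ for a small number of qubits, and evaluates it on the Hadamard eigenstate $\ketbra{H}$ introduced below, whose quasi-distribution has exactly one negative entry:
\begin{verbatim}
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def D(ux, uz):
    """Displacement operator D_u = Z(u_z) X(u_x)."""
    op = np.array([[1.0]])
    for x, z in zip(ux, uz):
        op = np.kron(op, np.linalg.matrix_power(Z, z) @ np.linalg.matrix_power(X, x))
    return op

def symp(u, v):
    """Symplectic product [u, v] = u_z.v_x + v_z.u_x (mod 2)."""
    (ux, uz), (vx, vz) = u, v
    return (np.dot(uz, vx) + np.dot(vz, ux)) % 2

def phase_point(u, n):
    """A_u = 2^{-n} sum_v (-1)^{[u, v]} D_v."""
    pts = list(itertools.product([0, 1], repeat=n))
    A = np.zeros((2 ** n, 2 ** n))
    for vx in pts:
        for vz in pts:
            A = A + (-1) ** symp(u, (vx, vz)) * D(vx, vz)
    return A / 2 ** n

def W(rho, n):
    """W_rho(u) = 2^{-n} tr[A_u^dagger rho]; real whenever rho is a rebit state."""
    pts = list(itertools.product([0, 1], repeat=n))
    return {(ux, uz): np.trace(phase_point((ux, uz), n).conj().T @ rho).real / 2 ** n
            for ux in pts for uz in pts}

H_state = 0.5 * (I2 + (X + Z) / np.sqrt(2))   # |H><H|
w = W(H_state, 1)
print(w)                    # the entry at u = ((1,), (1,)) is (1 - sqrt(2))/4 < 0
print(sum(w.values()))      # normalization: the entries sum to 1
\end{verbatim}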
Like Gross' representation, the representation $W_\rho$ transforms covariantly under the displacement operators, namely, \begin{equation} W_{D_{ {\bm{v}} } ^\dagger \rho D_{ {\bm{v}} }} ( {\bm{u}} ) = W_\rho ( {\bm{u}} + {\bm{v}} ), \end{equation} for all $ {\bm{u}} , {\bm{v}} \in {\cal P} _n$. In fact, everything in the construction of this representation has proceeded in direct analogy to Gross', except for the lack of phase factors ensuring the Hermiticity of the displacement operators. As a result, the phase point operators are no longer Hermitian, which in turn implies that $W_\rho$ is in general complex and not a valid Wigner quasiprobability representation on the full set of qubit states. However, it turns out that the real and imaginary parts of $W_\rho( {\bm{u}} )$ are related to the quantum state in the following simple way (a proof of which is given in \appref{appx:rebit_rep}). \begin{restatable}[]{lemma}{rhoWrealim} \label{lemma:rho_W_real_im_correspondence} Given an $n$-qubit quantum state $\rho$, we have that \begin{align} & \mathfrak{Re}[W_\rho( {\bm{u}} )]=W_{\mathfrak{Re}(\rho)}( {\bm{u}} ) \\ & \mathfrak{Im}[W_\rho( {\bm{u}} )]=W_{\mathfrak{Im}(\rho)}( {\bm{u}} ) \end{align} for all $ {\bm{u}} \in {\cal P} _n$, where $\mathfrak{Re}(\rho)$ and $\mathfrak{Im}(\rho)$ are respectively the real and imaginary parts of the density matrix of $\rho$ in the computational basis. \end{restatable} This immediately implies \begin{corollary} \label{corollary:quasiprob_rebit} The representation $W_\rho$ of an $n$-qubit state $\rho$ is a quasi-probability distribution if and only if $\rho$ is an $n$-rebit state, namely the density matrix of $\rho$ is real in the computational basis. \end{corollary} We will thus focus on rebit states for the remainder of this work, since in those cases $W_\rho$ constitutes a valid quasiprobability distribution; however, one could potentially handle the full complex case by treating the real and imaginary components separately. Typically, we will consider the case of distillation of the (+1) eigenstate $\ket{H}$ of the Hadamard operator \begin{align} \ket{H} \coloneqq \cos \frac{\pi}{8} \ket{0} + \sin \frac{\pi}{8} \ket{1}, \end{align} which is equivalent to the canonical magic state $\ket{A} \coloneqq T \ket{+} = \frac{1}{\sqrt{2}} (\ket{0} + e^{i \frac{\pi}{4}} \ket{1})$ up to a Clifford unitary~\cite{Bravyi2016CliffordEquiv}, where $T\coloneqq \text{diag}(1,e^{i \frac{\pi}{4}})$ is the $T$-gate. Therefore, the $\ket{H}$ magic state can be used in a stabilizer gadgetisation circuit to implement the $T$-gate~\cite{original_magic_states}. \subsection{Phase space representation of channels} \label{subsec:channel_rep} The representation of qubit states induces a corresponding representation of qubit channels. Let $ {\cal E} $ be an arbitrary channel from $n$ to $m$ qubits, and \begin{equation} {\cal J} ( {\cal E} ) = ( \mathcal{I} \otimes {\cal E} )(\ketbra{\phi^+_n}) \end{equation} be its associated Choi state~\cite{watrous_2018}, where $\ket{\phi^+_n}$ is the canonical maximally entangled state on two copies of the input system. We now define a channel representation~\cite{Wang_2019} of a quantum channel $ {\cal E} $ as \begin{align} W_{ {\cal E} }( {\bm{v}} | {\bm{u}} ) \coloneqq 2^{2n} W_{ {\cal J} ( {\cal E} )}( {\bm{u}} \oplus {\bm{v}} ) \label{eq:Wigner-Choi} \end{align} for all $ {\bm{v}} \in {\cal P} _m$, and $ {\bm{u}} \in {\cal P} _n$. This channel representation has the property that it acts as a matrix on the state representation written as vectors. 
More precisely, if $\sigma = {\cal E} (\rho)$, then \begin{align} W_\sigma( {\bm{v}} ) = \sum_{ {\bm{u}} \in {\cal P} _n } W_ {\cal E} ( {\bm{v}} | {\bm{u}} ) W_{\rho}( {\bm{u}} ), \label{eq:W_E_in_action} \end{align} for all $ {\bm{v}} \in {\cal P} _m$. Furthermore, we note that the representation we have chosen also respects sequential and parallel composition of processes, i.e., \begin{align} W_{ {\cal E} \circ {\cal F} } &= W_ {\cal E} W_ {\cal F} \label{eq:sequential_respect},\\ W_{ {\cal E} \otimes {\cal F} } &= W_ {\cal E} \otimes W_ {\cal F} , \label{eq:parallel_respect} \end{align} respectively. One useful implication of \eqref{eq:parallel_respect} is that when $ {\cal E} $ and $ {\cal F} $ prepare states $\rho$ and $\sigma$ respectively, we arrive at \begin{equation} W_{\rho \otimes \sigma} = W_\rho \otimes W_\sigma, \end{equation} which tells us that our chosen representation of product states factorizes over subsystems. The transition matrix formed by $W_ {\cal E} ( {\bm{v}} | {\bm{u}} )$ preserves normalization since \begin{align} \sum_{ {\bm{v}} \in {\cal P} _m} W_ {\cal E} ( {\bm{v}} | {\bm{u}} )=1, \label{eq:W_normalization} \end{align} for any $ {\bm{u}} \in {\cal P} _n$ and any quantum channel $ {\cal E} $. (Proofs of Eqs.~(\ref{eq:W_E_in_action}) through (\ref{eq:W_normalization}) can be found in Appendix \ref{appx:wigner_channel_rep}). It therefore follows that a quantum channel $ {\cal E} $ from $n$ qubits to $m$ qubits is represented as a stochastic map if and only if $W_{ {\cal E} } ( {\bm{v}} | {\bm{u}} ) \ge 0$ for all $ {\bm{u}} , {\bm{v}} $. By inspection of~\eqref{eq:Wigner-Choi} we equivalently have that the quantum channel $ {\cal E} $ is stochastically represented if and only if the Choi state $ {\cal J} ( {\cal E} )$ on $n+m$ qubits is represented by a genuine probability distribution on the phase space $ {\cal P} _{n+m}$. Unlike the odd-dimensional case, the channel $ {\cal E} $ is not guaranteed a stochastic representation whenever $ {\cal J} ( {\cal E} )$ is a stabilizer state. This is an immediate consequence of the sequential and parallel composition rules of Eqs.~(\ref{eq:sequential_respect}) and (\ref{eq:parallel_respect}), which imply that our qubit representation cannot be nonnegative over the full stabilizer subtheory~\cite{schmid2021stabilizer}. However, we will show in the next section that $ {\cal E} $ is stochastically represented if $ {\cal J} ( {\cal E} )$ belongs to an important subset of stabilizer states known as CSS states. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{stochastic.pdf} \caption{ \textbf{(Schematic of our approach)}. We find that the completely CSS-preserving operations $ {\cal O} $, which include the family of CSS code projections as a subset, are stochastically represented. Elements of $ {\cal O} $ include the $7$-to-$1$ and $23$-to-$1$ CSS distillation protocols based on the Steane $[[7,1]]$ and Golay $[[23,1]]$ codes, respectively~\cite{reichardt2005quantum}. \label{fig:stochastic_schematic} } \end{figure} \subsection{CSS states and operations} \label{subsec:CSS_states_ops} We now identify a class of protocols that arises naturally in quantum computing, is sufficiently large to enable universal quantum computation, and admits a stochastic representation. This relies on Corollary \ref{corollary:quasiprob_rebit} together with the representation of channels via their Choi states. We see that if the Choi state of a channel is CSS then it is stochastically represented. 
A pure CSS state on $n$ qubits is any stabilizer state whose stabilizer group can be generated by $n$ Pauli observables that are individually of $X$-type or $Z$-type only. For instance, $\ket{\phi^+}\coloneqq \frac{1}{\sqrt{2}} (\ket{00} + \ket{11})$ has the stabilizer group \begin{align} {\cal S} (\ket{\phi^+}) = \langle X_1 X_2, Z_1 Z_2 \rangle , \end{align} and is therefore CSS, while $\ket{\psi} \coloneqq \mathbbm{1} \otimes H \ket{\phi^+}$ is stabilized by \begin{align} {\cal S} (\ket{\psi}) = \langle X_1 Z_2, Z_1 X_2 \rangle , \end{align} and therefore is \emph{not} CSS, due to the stabilizer generators mixing $X$ and $Z$ operators. Letting the set of all pure CSS states be denoted $\Omega_{css}$, we then define the set of all CSS states $ {\cal D} _{css}$ as the convex hull of $\Omega_{css}$. The representation we have chosen coincides on all rebit states with an earlier one introduced by Ref.~\cite{delfosse2015} (see Appendix \ref{appx:rebit_rep}), in which a discrete Hudson's theorem can be recovered by restricting to the CSS subtheory of quantum mechanics. More precisely, it is shown that any pure $n$-rebit state is non-negatively represented if and only if it is CSS. Therefore, $W_\rho$ is a valid probability distribution for all $\rho \in {\cal D} _{css}$, and we conclude that \begin{theorem} \label{theorem:J_CSS_stoch_rep} A quantum channel $ {\cal E} $ from $n$ qubits to $m$ qubits is stochastically represented if $ {\cal J} ( {\cal E} )$ is a CSS state on $n+m$ qubits. \end{theorem} \thmref{theorem:J_CSS_stoch_rep} can be leveraged to identify stochastically-represented qubit stabilizer operations in a systematic way. Firstly, a channel $ {\cal E} $ is CSS-preserving if $ {\cal E} (\rho)$ is a CSS state for all $\rho \in {\cal D} _{css}$. Any channel $ {\cal E} $ from a system of qubits $B$ is completely CSS-preserving if, given any CSS state $\rho_{AB}$ on two systems of qubits $A$ and $B$, $ \mathcal{I} _A \otimes {\cal E} _B(\rho_{AB})$ is always CSS. We now note that the maximally entangled state $\ket{\phi^+_n}$ over two sets of $n$ qubits is CSS for all $n$ (see Appendix \ref{appx:wigner_channel_rep}). Therefore, if $ {\cal E} $ is completely CSS-preserving, $ {\cal J} ( {\cal E} )$ must also be a CSS state. By \thmref{theorem:J_CSS_stoch_rep}, it follows that every completely CSS-preserving operation is stochastically represented. To motivate the class of completely CSS-preserving operations as operationally significant, we first highlight that they cover at least the following subset of stabilizer operations (see Appendix \ref{appx:CCSSP_discussion} for proof): \begin{restatable}[]{lemma}{CCSSPpowers} \label{lemma:CCSSP_powers} (CSS operations). Any sequential composition of the following stabilizer operations: \begin{enumerate} \item Introducing a CSS state on any number of qubits, \item Any gate from the group of completely CSS-preserving gates on any number of qubits $n$, \begin{equation} {\cal G}(n) \coloneqq \langle CNOT_{i,j}, Z_i, X_i \rangle_{i,j = 1,\dots,n, i \neq j} \end{equation} \item Projective measurement of any $X$- or $Z$-type Pauli observable, followed by a completely CSS-preserving operation $ {\cal E} _\pm$ conditioned on the outcome $\pm 1$. \item Discarding any number of qubits, \end{enumerate} as well as statistical mixtures of such processes, is completely CSS-preserving. \end{restatable} It follows that every stabilizer operation covered by Lemma \ref{lemma:CCSSP_powers} is stochastically represented. 
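As an illustrative numerical check of these statements (a sketch only, assuming NumPy; the \texttt{wigner} helper simply re-implements the state representation of \secref{subsec:our_wigner_rep}), one can test the stochasticity criterion directly on the Choi states of single-qubit gates. The $X$ gate lies in ${\cal G}(1)$ and its Choi state is CSS, so its representation is non-negative, whereas the Hadamard gate has a stabilizer but non-CSS Choi state, and negative entries appear:
\begin{verbatim}
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def wigner(rho, n):
    """Flattened array of W_rho over the 2^{2n} phase-space points."""
    pts = list(itertools.product([0, 1], repeat=n))
    def D(vx, vz):
        op = np.array([[1.0]])
        for x, z in zip(vx, vz):
            op = np.kron(op, np.linalg.matrix_power(Z, z) @ np.linalg.matrix_power(X, x))
        return op
    vals = []
    for ux in pts:
        for uz in pts:
            A = sum((-1) ** ((np.dot(uz, vx) + np.dot(vz, ux)) % 2) * D(vx, vz)
                    for vx in pts for vz in pts) / 2 ** n
            vals.append(np.trace(A.conj().T @ rho).real / 2 ** n)
    return np.array(vals)

def choi(U):
    """Choi state of the single-qubit unitary channel rho -> U rho U^dagger."""
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)          # |phi^+>
    psi = np.kron(I2, U) @ phi
    return np.outer(psi, psi.conj())

print(wigner(choi(X), 2).min())   # >= 0: the X gate is stochastically represented
print(wigner(choi(H), 2).min())   # <  0: the Hadamard gate is not
\end{verbatim}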
We note that for the purpose of distillation, the set of (completely CSS-preserving) gates covered by \lemref{lemma:CCSSP_powers} is just as powerful as the full set of CSS-preserving gates. This motivates calling this set of operations ``CSS operations'', by way of analogy to the operational definition of stabilizer operations. We emphasise the power of the set of CSS operations. Firstly, they can be promoted to universal quantum computing when supplemented with rebit magic states~\cite{delfosse2015}. Moreover, the gate set ${\cal G}(n)$ constitutes the gates that can be implemented fault tolerantly using defect braiding in surface code constructions~\cite{defect_braiding_Raussendorf2007}. Furthermore, we will see in \secref{sec:structure_CSS} that they form the basis of many existing protocols for magic state distillation based on CSS codes. \section{Entropic bounds on CSS protocols} \label{sec:majorization} The standard approach to obtaining distillation bounds is by tracking a magic monotone~\cite{veitch2014resource,Howard2018Application,Wang_2020}, which is any property of a quantum system that cannot be increased under stabilizer operations. The paradigmatic example is mana~\cite{veitch2014resource}, the total negativity in the Wigner representation of a state. However, this approach operates at the single-state level and does not incorporate any additional structure of distillation protocols. In contrast, the recent work~\cite{koukoulekidis2022constraints} considers how a class of magic distillation protocols transforms a \emph{pair} of quantum states -- one a noisy magic state, the other a stabilizer state distinguished by the characteristic physics of those protocols. Here we briefly review the approach taken in Ref.~\cite{koukoulekidis2022constraints} to extend relative majorization to quasiprobability distributions, and how that leads to the extension of a dense subset of $\alpha$-R\'{e}nyi divergences from classical statistical mechanics to quantify the non-classical order in magic states under distillation. We then adapt this work for rebit magic state distillation under CSS protocols. \subsection{Statistical mechanics of quasi-distributions} \label{subsec:maj_intro} At the heart of statistical mechanics are the notions of disorder and deviations from equilibrium. In classical statistical mechanics in the macroscopic regime, this leads to the thermodynamic entropy ${H(\mathbf{p}) = - \sum_i p_i \log p_i}$, which is the essentially unique measure of disorder of a statistical distribution ${\mathbf{p}= (p_1, \dots, p_N)}$. In odd dimensional systems~\cite{negativity_resource} or restricted qubit models~\cite{delfosse2015,Catani2018CCSSP_universal,RaussendorfDan2017}, magic states that promote an efficiently simulable part of quantum mechanics to universal quantum computation must have negativities in their representation within a phase space model. Despite this negativity, it is still possible to arrive at a well-defined statistical mechanical description that circumvents the failure of the Boltzmann entropy to be well-defined. The key observation we exploit is that the framework of majorization remains well-defined when extended to quasi-distributions, and is a more fundamental concept than the traditional entropy. Given two probability distributions $\mathbf{p} = (p_1, \dots, p_N)$ and ${\mathbf{p}' = (p_1', \dots, p_M')}$ we would like to determine which of them is ``more disordered'' than the other. 
This can be done by comparing $\mathbf{p}$ to some reference probability distribution ${\mathbf{r}=(r_1, \dots ,r_N)}$ of our choice, and $\mathbf{p}'$ to some other reference probability distribution $\mathbf{r}' =(r'_1, \dots ,r'_M)$, also of our choice. We then say that $(\mathbf{p}, \mathbf{r})$ relatively majorizes $(\mathbf{p}', \mathbf{r}')$ and write $(\mathbf{p},\mathbf{r}) \succ (\mathbf{p}', \mathbf{r}')$ precisely if there exists a stochastic map $A$ that sends the first pair of distributions into the second, namely \begin{align} (A \mathbf{p} , A\mathbf{r}) &= (\mathbf{p}', \mathbf{r}'). \end{align} It was shown in Ref.~\cite{koukoulekidis2022constraints} that this definition can be extended to the case of quasiprobability distributions, and the following result was established to provide an entropic measure in terms of the $\alpha$-R\'{e}nyi divergences. \begin{theorem}[\cite{koukoulekidis2022constraints}] Let $\mathbf{w}= (w_1, \dots , w_N)$ and ${\mathbf{w}' = (w'_1 ,\dots , w'_M)}$ be any two quasiprobability distributions and let $\mathbf{r} = (r_1 ,\dots , r_N)$ and $\mathbf{r}' = (r'_1, \dots, r'_M)$ be any two probability distributions with non-zero components. If $(\mathbf{w}, \mathbf{r}) \succ (\mathbf{w}',\mathbf{r}')$ then \begin{equation} D_\alpha( \mathbf{w} || \mathbf{r}) \ge D_\alpha (\mathbf{w}' || \mathbf{r}'), \end{equation} for all $ \alpha \in {\cal A} =\left\{\frac{2a}{2b-1} \, : \, a,b \in \mathbb{N}, a \ge b \right\} \cup \left\{ \infty \right\}$. \label{thm:koukouD_alpha_monotone} \end{theorem} Here $D_\alpha(\mathbf{w} || \mathbf{r})$ is an extension of the classical $\alpha$-R\'{e}nyi divergence to the case of $\mathbf{w}$ being a quasiprobability distribution. This extension requires $\alpha \in {\cal A} $ in order for the expression \begin{equation}\label{eq:D} D_\alpha(\mathbf{w}||\mathbf{r}) \coloneqq \frac{1}{\alpha-1} \log \sum_{i=1}^N w_i^\alpha r_i^{1-\alpha}, \end{equation} to be well-defined\footnote{Note that since the set $ {\cal A} $ is dense in $\alpha \in (1, \infty)$, one could extend the definition of $D_\alpha(\mathbf{w} || \mathbf{r})$ to all $\alpha >1$ by continuity.}. In the case of $\mathbf{r}$ being the uniform distribution $\mathbf{r} = (1/N, 1/N, \dots, 1/N)$, we have that \begin{equation} D_\alpha(\mathbf{w} || \mathbf{r}) = \log N - H_\alpha(\mathbf{w}), \end{equation} where $H_\alpha(\mathbf{w})$ is the R\'{e}nyi entropy evaluated on $\mathbf{w}$. Another result of Ref.~\cite{koukoulekidis2022constraints} is that $\mathbf{w}$ has negativity if and only if $H_\alpha(\mathbf{w})$ is negative for $\alpha$ close to $1$ and diverges to $-\infty$ in the limit $\alpha \rightarrow 1^+$. This provides a well-defined, and meaningful notion of negative entropy in a statistical mechanical setting. \subsection{Application to CSS protocols} \label{subsec:maj_CSS} CSS circuits become capable of universal quantum computation with the injection of rebit magic states~\cite{delfosse2015,Catani2018CCSSP_universal}, which are real and therefore represented in \eqref{eq:W_rho} by valid quasiprobability distributions with negative probabilities. Since every CSS protocol is stochastically represented, the following family of entropic constraints on magic distillation of rebits applies to them all (see \appref{appx:general_bound} for proof). \begin{restatable}[]{theorem}{generalBound} \label{thrm:general_qubit_bound} Let $\rho$ be a noisy rebit magic state, and $\tau$ be a CSS state in the interior of $ {\cal D} _{css}$. 
If there exists a CSS distillation protocol $ {\cal E} $ such that $ {\cal E} (\rho^{\otimes n}) =\rho'$ and $\tau' \coloneqq {\cal E} (\tau^{\otimes n})$ is also in the interior of $ {\cal D} _{css}$, then \begin{align} \Delta D_\alpha \ge 0, \end{align} for all $\alpha\in {\cal A} $, where \begin{align} \Delta D_\alpha \coloneqq nD_\alpha(W_{\rho} || W_\tau ) - D_\alpha(W_{\rho'} || W_{\tau'} ). \end{align} \end{restatable} The reference process $\tau^{\otimes n} \mapsto \tau'$ in \thmref{thrm:general_qubit_bound} can be viewed in three different ways: (1) as a variational parameter, (2) as encoding physics of the quantum device, or (3) as a way to exploit structure in a family of protocols, which we now elaborate on in turn. Firstly, it can simply be treated as a variational parameter, which can be optimized over to obtain the following set of monotones\footnote{We note that an analogous set of magic monotones can be defined for systems of odd-prime dimension via: \begin{equation} \Gamma_\alpha(\rho) = \inf_{\tau \in STAB} D_\alpha(W_\rho || W_\tau). \end{equation}} on CSS protocols \begin{align} \Lambda_\alpha(\rho) \coloneqq \inf_{\tau \in {\cal D} _{css}} D_\alpha (W_\rho || W_\tau), \label{eq:Lambda_monotone} \end{align} for all $\alpha \in {\cal A} $, where the infimum is taken over the convex set of CSS states $ {\cal D} _{css}$. To see this, we note that we have $D_\alpha(W_\rho||W_\tau)\ge 0$ for all $\rho$, $\tau$, with equality if and only if $\rho=\tau$ (see \lemref{lemma:nick_restatement}). Given any rebit state $\rho$, let $\tau_\rho$ be a solution to the optimization problem in \eqref{eq:Lambda_monotone}. Then if there exists a CSS protocol $ {\cal E} $ such that ${ {\cal E} (\rho) = \rho'}$, we obtain \begin{align} \Lambda_\alpha(\rho) = D_\alpha (W_\rho || W_{\tau_\rho}) &\ge D_\alpha(W_{\rho'} || W_{ {\cal E} (\tau_\rho)}) \notag\\ &\ge \Lambda_\alpha (\rho') , \end{align} where the first inequality follows from generalised relative majorization and the second inequality follows by the definition in \eqref{eq:Lambda_monotone}. Therefore $\{ \Lambda_\alpha\}_{\alpha \in {\cal A} }$ form an infinite set of monotones on the class of CSS protocols. It is straightforward to see that the $\Lambda_\alpha$ are sub-additive, i.e. $\Lambda_\alpha(\rho^{\otimes n}) \le n \Lambda_\alpha (\rho)$ (which follows from the additivity of the generalized $\alpha$-R\'{e}nyi divergences, since $\tau_\rho^{\otimes n}$ is a feasible CSS state in the optimization for $\rho^{\otimes n}$). Therefore, these $\Lambda_\alpha$-monotones allow us to set global bounds on any CSS protocol. More precisely, if there exists a completely CSS-preserving distillation protocol $ {\cal E} \in {\cal O} $ such that $ {\cal E} (\rho^{\otimes n} ) = \rho'$, then the overhead $n$ is lower bounded as \begin{align} n \ge \frac{\Lambda_\alpha(\rho')}{\Lambda_\alpha (\rho)}. \end{align} Secondly, the reference process can be chosen appropriately to encode the particular physics of a protocol (or family of protocols) of interest. For instance, in~\cite{koukoulekidis2022constraints} it is taken to be the Gibbs state at inverse temperature $\beta$ in order to encode some background temperature or free energy production that takes place during the distillation protocol. The third way of interpreting the reference process is demonstrated in the next section. We show explicitly how the reference protocol can be chosen in order to exploit general structure of CSS code projection protocols. 
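To illustrate how the generalized divergences behave on negatively represented states, the following Python sketch (an illustrative aside; it assumes NumPy, uses base-2 logarithms, and takes the $H$-state quasi-distribution from the single-qubit representation of \secref{sec:basic_theory}) evaluates $D_\alpha$ and the associated R\'{e}nyi entropy. For $\alpha = 2a/(2b-1) \in {\cal A}$ the even numerator makes $w^\alpha = \abs{w}^\alpha$ well defined even for negative entries:
\begin{verbatim}
import numpy as np

def D_alpha(w, r, alpha):
    """Generalized alpha-Renyi divergence D_alpha(w || r), base-2 logs.
    For alpha = 2a/(2b-1) in the set A, w**alpha = |w|**alpha is well defined."""
    w, r = np.asarray(w, float), np.asarray(r, float)
    return np.log2(np.sum(np.abs(w) ** alpha * r ** (1.0 - alpha))) / (alpha - 1.0)

def H_alpha(w, alpha):
    """Renyi entropy of a quasi-distribution: H_alpha(w) = log N - D_alpha(w || uniform)."""
    N = len(w)
    return np.log2(N) - D_alpha(w, np.full(N, 1.0 / N), alpha)

# Wigner quasi-distribution of the single-qubit H-state (one negative entry)
w_H = np.array([(1 + np.sqrt(2)) / 4, 0.25, 0.25, (1 - np.sqrt(2)) / 4])

for alpha in [2.0, 4.0 / 3.0, 102.0 / 101.0]:   # members of the set A
    print(alpha, H_alpha(w_H, alpha))
\end{verbatim}
Consistent with the discussion above, the entropy evaluated on this negatively represented distribution decreases rapidly and diverges to $-\infty$ as $\alpha \rightarrow 1^+$.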
\section{Entropic bounds for CSS code projection protocols} \label{sec:structure_CSS} In this section we apply \thmref{thrm:general_qubit_bound} to CSS code projection protocols, and obtain both lower and upper bounds on the overhead of magic distillation. We show how these lower bounds outperform existing bounds such as generalised robustness~\cite{Seddon2021Quantifying} and projective robustness~\cite{Regula2022Probabilistic} in some parameter regimes. The upper bounds on the code length ($\sim$ resource cost) are new and, combined with the lower bounds, show how fixed error rates, e.g. from hardware limitations, constrain the scope for ever more powerful distillation protocols. \subsection{CSS code projections} An elementary protocol for distilling magic, proposed in the seminal work~\cite{original_magic_states}, proceeds by projecting onto the codespace of a quantum error-correcting code. The basic method involves taking $n$ copies of the noisy magic state, $\rho^{\otimes n}$, measuring the stabilizer generators of an $[[n,k]]$ code $ {\cal{C}}$, and post-selecting on the no-error syndrome. The net effect is to project onto the codespace of $ {\cal{C}}$. Conditional on no errors being detected in the syndrome measurements, one then decodes onto $k$ output qubits, whereas when an error is detected the output state is simply discarded. In general then, the protocol will only succeed probabilistically with some acceptance probability $p$. The core idea is that if the likelihood of an undetectable error occurring is less than the input error rate $\epsilon$, then the post-selected output state will have a higher per-copy fidelity with the target magic state of choice. Many other examples are based on CSS codes, for instance the 15-to-1 protocol~\cite{original_magic_states} based on the $[[15,1]]$ punctured Reed-Muller code~\cite{knill1996threshold,steane1999quantum}, and the protocols based on Steane $[[7,1]]$ and Golay $[[23,1]]$ CSS codes analysed in~\cite{reichardt2005quantum}. A known result from the literature is that any $n$-to-1 magic distillation protocol can be decomposed as a sum of stabilizer code projections followed by Clifford post-processing~\cite{campbell_browne}. This implies that the optimal fidelity with respect to the desired magic state, but not necessarily optimal acceptance probability, can always be achieved by using a stabilizer code projection. In a similar way, we can show that any $n$-to-$k$ CSS protocol is a sum of CSS code projections followed by completely CSS-preserving post-processing (see \appref{sec:campbell_browne_reduction} for details). Therefore, in what follows we analyse CSS code projection protocols. We can write any such code projection as a completely positive trace non-increasing map $ {\cal K} $ for the given CSS code $ {\cal{C}}$ as \begin{equation} {\cal K} (\cdot) \coloneqq \mathrm{tr} _{[1,n-k]}[{\cal U} \circ {\cal P} (\cdot)], \end{equation} where we have defined ${\cal U} \coloneqq U(\cdot)U^\dagger$ and $ {\cal P} \coloneqq P(\cdot)P$ for the decoding unitary $U$ and codespace projector $P$ of $ {\cal{C}}$ respectively. Given $n$ copies of a noisy magic state $\rho$, $ {\cal K} $ acts as \begin{equation} {\cal K} (\rho^{\otimes n}) = p\rho', \end{equation} where $\rho' \in {\cal D} ( {\cal H} _2^k)$ is the output magic state on $k$ qubits and we have defined the acceptance probability ${p\coloneqq \mathrm{tr} [P\rho^{\otimes n}]}$ for a single successful run of $ {\cal K} $. 
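Before proceeding, we give a small numerical illustration of these definitions (an illustrative aside assuming only NumPy; the toy $[[2,1]]$ check below, with stabilizer $X \otimes X$ and logical operators $\bar{X} = X_1$, $\bar{Z} = Z_1 Z_2$, is chosen purely to show how $p$ and the post-selected logical state are computed, and need not improve the $H$-state fidelity):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])
H_state = 0.5 * (I2 + (X + Z) / np.sqrt(2))           # |H><H|

def noisy_H(eps):
    return (1 - eps) * H_state + eps * I2 / 2

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

eps = 0.2
rho2 = kron(noisy_H(eps), noisy_H(eps))               # two copies of rho(eps)

# Toy [[2,1]] CSS check: project onto the +1 eigenspace of X (x) X
P = 0.5 * (kron(I2, I2) + kron(X, X))
p = np.trace(P @ rho2).real                           # acceptance probability tr[P rho^(x)n]
post = P @ rho2 @ P / p                               # post-selected codespace state

# Logical Bloch vector (up to the choice of decoding convention)
bloch = [np.trace(kron(X, I2) @ post).real,           # <Xbar>
         np.trace(kron(Y, Z) @ post).real,            # <Ybar> = <i Xbar Zbar>
         np.trace(kron(Z, Z) @ post).real]            # <Zbar>
rho_out = 0.5 * (I2 + bloch[0] * X + bloch[1] * Y + bloch[2] * Z)

print(p)
print(np.trace(H_state @ rho_out).real)               # output fidelity with |H>
print(np.trace(H_state @ noisy_H(eps)).real)          # input per-copy fidelity 1 - eps/2
\end{verbatim}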
Distillation of the magic state $\psi \coloneqq \ketbra{\psi}$ is successful if the output $\rho'$ from a successful run has a greater per-copy fidelity with respect to the target (pure) magic state $\psi$ than $\rho$ does. The majorization results do not immediately apply to the projection $ {\cal K} $ as we require trace-preservation. However, this can be fixed by simply recording the success (labelled `0') or failure (labelled `1') of $ {\cal K} $ in an ancillary qubit, and without loss of generality we can assume an arbitrary CSS state $\sigma$ on $k$ qubits is output in case of failure. We can therefore extend $ {\cal K} $ into the following completely positive trace preserving (CPTP) map which describes the CSS code projection: \begin{align} {\cal E} (\cdot) \coloneqq {\cal K} (\cdot) &\otimes \ketbra{0} + \mathrm{tr} [\overline{ {\cal P} } (\cdot)] \sigma \otimes \ketbra{1}, \label{eq:CSS_code_reduction_channel} \end{align} where ${\overline{ {\cal P} } \coloneqq \overline{P}(\cdot)\overline{P} \coloneqq ( \mathbbm{1} _n - P)(\cdot)( \mathbbm{1} _n - P)}$ performs a projection onto the orthogonal complement of $ {\cal{C}}$. Under such a channel, $n$ copies of $\rho$ are mapped onto \begin{align} \label{eq:rho_prime_p} {\cal E} [\rho^{\otimes n}]&= p \rho' \otimes \ketbra{0} + (1-p) \sigma \otimes \ketbra{1} \eqqcolon \rho_p, \end{align} which captures the structure of the probabilistic distillation protocol. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{lower_bound_comparison.pdf} \caption{\textbf{(Lower bound comparison).} We plot lower bounds on the number of copies $n$ of the noisy $H$-state $(1-\epsilon) \ketbra{H} + \epsilon \frac{ \mathbbm{1} }{2}$ required to distill a single output qubit $\ket{H}$ with output noise $\epsilon'=10^{-9}$ and acceptance probability $p=0.9$ under a CSS-code projection protocol, plotted as a function of the input noise $\epsilon$. Our lower bound from majorization (maj.) is shown to be tighter than those from mana~\cite{veitch2014resource} and generalized robustness (GR) in~\cite{Seddon2021Quantifying}. However, it only outperforms the lower bound from projective robustness (PR) in~\cite{Regula2022Probabilistic} in the high $p$, high $\epsilon$ regime. \label{fig:lower_bounds_comparison}} \end{figure} \subsection{Bounds on CSS code projection protocols} We now exploit \thmref{thrm:general_qubit_bound} to derive majorization conditions that apply across \emph{all} $n$-to-$k$ CSS code projection protocols. We have seen that if there exists a CSS code projection $ {\cal K} $ that acts on $n$ copies of a noisy magic state as $ {\cal K} (\rho^{\otimes n}) = p \rho'$, then there exists a trace-preserving version of $ {\cal K} $, $ {\cal E} $, which acts as $ {\cal E} (\rho^{\otimes n})=\rho_p$. Crucially (for proof see Appendix \ref{appx:CSS_code_proj}), \begin{restatable}[]{lemma}{ntoonestoch} \label{lemma:n_to_1_stoch} The quantum channel for any $n$-to-$k$ CSS code projection is a completely CSS-preserving operation, and hence it is stochastically represented. \end{restatable} We will now see that there is a natural choice of CSS states $\tau$ and $\tau'$ such that $\tau^{\otimes n} \rightarrow \tau'$ common to all $n$-to-$k$ CSS code projections, based on the intuition that the code projection component $ {\cal K} $ of the channel in \eqref{eq:CSS_code_reduction_channel} is always sub-unital. 
To see this we first note that the identity operator on $n$ qubits can be decomposed as $ \mathbbm{1} _n = P + \overline{P}$ for the codespace projector $P$ of any $[[n,k]]$ CSS code. Therefore, $ {\cal K} $ always acts as \begin{align} {\cal K} ( \mathbbm{1} _n) = {\cal K} (P+\overline{P}) = {\cal K} (P). \label{eq:K_on_identity} \end{align} Since $P$ is the logical identity on $k$ logical qubits, i.e., \begin{equation} P= \sum_{ {\bm{k}} \in \{0,1\}^k} \ketbra{ {\bm{k}} _L} \equiv \mathbbm{1} _L, \end{equation} the decoding of $P$ in \eqref{eq:K_on_identity} must give an output state that is proportional to the maximally mixed state on $k$ physical qubits, and so (omitting normalization constants) we obtain $ {\cal K} ( \mathbbm{1} _n) \propto \mathbbm{1} _k$. Therefore, the code projection component of the distillation channel is indeed sub-unital. Finally we note that since $P$ is a rank-$2^k$ projector, the acceptance probability for the maximally mixed input is $ \mathrm{tr} \left[P \frac{ \mathbbm{1} _n}{2^n}\right]= 2^{k-n}$. Putting this all together, we find that under any $n$-to-$k$ CSS code projection the maximally mixed state on $n$ qubits gets mapped to \begin{align} {\cal E} \left[\left(\frac{ \mathbbm{1} }{2}\right)^{\otimes n}\right] &= \tau_{n,k} , \end{align} where we have defined the output state \begin{align} \tau_{n,k} &\coloneqq 2^{k-n} \frac{ \mathbbm{1} _k}{2^k} \otimes \ketbra{0}+(1-2^{k-n})\sigma \otimes \ketbra{1}, \end{align} which only depends on the number of stabilizer generators $n-k$ of the CSS code. We can therefore conclude that, if there exists an $n$-to-$k$ CSS code projection such that $\rho^{\otimes n} \mapsto p \rho'$, then $\Delta D_\alpha \ge 0 $ for all $\alpha \in {\cal A} $, where \begin{align} \label{eq:Delta_D_alpha_defn} \Delta D_\alpha \coloneqq nD_\alpha\left(W_\rho\bigg|\bigg|W_{\frac{ \mathbbm{1} }{2}}\right) - D_\alpha\left(W_{\rho_p}||W_{\tau_{n,k}}\right), \end{align} which we define over the restricted domain $n \in [k, \infty]$ (such that the number of logical qubits cannot exceed the number of physical qubits). We highlight the satisfying fact that $\Delta D_\alpha$ is independent of the choice of CSS state $\sigma$ in \eqref{eq:CSS_code_reduction_channel}. This follows from resource-theoretic arguments, and can also be seen directly by making use of properties of the $\alpha$-R\'{e}nyi divergence (see Appendix \ref{appx:bounds_sigma_indep} for further details). We now highlight some properties of the relative entropy difference $\Delta D_\alpha$ in the following lemma, proofs of which can be found in~\appref{appx:fn_properties_proof}. \begin{restatable}[]{lemma}{fnProperties}\label{lemma:properties_fn} The following properties of the relative entropy difference $\Delta D_\alpha$ hold for all rebit states $\rho, \rho'$, $\alpha\in {\cal A} $, and $p<1$: \begin{enumerate}[label=\normalfont (\roman*)] \item \label{fproperty_concavity} $\Delta D_\alpha$ is concave in $n$ on the domain $n\in [k,\infty]$. \item \label{fproperty_lim_one} $\Delta D_\alpha$ is negative in the limit where $n=k$: \begin{equation}\lim_{n\rightarrow k^+}\Delta D_\alpha<0.\end{equation} 
\item \label{fproperty_lim_infty} If $H_\alpha[W_\rho]>1$, then $\Delta D_\alpha$ is also negative in the asymptotic limit \begin{equation}\lim_{n\rightarrow \infty}\Delta D_\alpha<0.\end{equation} \end{enumerate} \end{restatable} An immediate consequence of \lemref{lemma:properties_fn} is that if $\Delta D_\alpha$ is non-negative anywhere on its well-defined domain, i.e., an $n$-to-$k$ CSS code projection protocol is not completely ruled out for any number of input copies $n$, then it has one or two roots located at $n^\alpha_{L}$ and $n^\alpha_U$. These roots correspond to lower and upper bounds on the permissible code length $n$, respectively. More formally, we arrive at the following theorem. \begin{theorem} \label{thrm:lower_bound_n_CSS_code} Let $ \rho$ be a noisy magic state on $ {\cal H} _2$. If $\rho^{\otimes n} \mapsto p \rho'$ under an $n$-to-$k$ CSS code projection with acceptance probability $p$, then we have the following lower and upper bounds on $n$, respectively: \begin{align} n \ge n_L^\alpha \coloneqq \inf_n \{ n : \Delta D_\alpha \ge 0 \} , \label{eq:n_L_alpha}\\ n \le n_U^\alpha \coloneqq \sup_n \{ n : \Delta D_\alpha \ge 0 \} , \label{eq:n_U_alpha} \end{align} for all $\alpha \in {\cal A} $. Moreover, if there exists an $\alpha$ such that ${H_\alpha[W_\rho]> 1}$, we obtain a \textit{finite} upper bound on $n$ from this latter expression. \end{theorem} For sufficiently low $k$, these bounds can be computed numerically using basic root-finding methods. However, we show later in \secref{sec:extensions_n_m} that by upper and lower bounding $n_U^\alpha$ and $n_L^\alpha$ respectively, we can find analytic upper and lower bounds on $n$ which do not require numerics. We also highlight that $n$ in \thmref{thrm:lower_bound_n_CSS_code} refers to the code length (which is related to the resource cost $C$ by $C= \frac{ n}{p k}$) of a single run of a distillation protocol, as opposed to the asymptotic overhead. However, single-run $n$ still constitutes a useful metric for analysing the actual resource cost of a given stage of a protocol. Moreover, distillation costs are typically dominated by the final round of a multi-stage distillation protocol (see \cite{campbell2017roads} and references contained therein), and therefore we expect the above bounds to be particularly informative in this context.
\subsection{Example: Hadamard state distillation} We can apply the previous general results to the case of $n$-to-$1$ Hadamard state distillation. It is sufficient to consider input magic states of the form \begin{align} \rho(\epsilon) \coloneqq (1-\epsilon) \ketbra{H} + \epsilon \frac{ \mathbbm{1} }{2}, \end{align} with $0 \le \epsilon \le 1$ depolarisation noise, since any other input magic state $\rho$ can be converted into this form via the pre-processing channel \begin{align} {\cal E} _{\text{prep}}(\cdot) \coloneqq \frac{1}{2} \mathcal{I} (\cdot) + \frac{1}{2}H(\cdot)H \end{align} without altering its fidelity with respect to the Hadamard state $\ket{H}$. Somewhat less obviously, it is also sufficient to only consider output states of the same form $\rho(\epsilon')$ for an output error $\epsilon'$ (see Appendix \ref{appx:Hadamard_output_form}).
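The root-finding step behind \thmref{thrm:lower_bound_n_CSS_code} is straightforward to implement. The sketch below (a schematic illustration, not code from this work) exploits the concavity of $\Delta D_\alpha$ in $n$ from \lemref{lemma:properties_fn}: it first locates the maximiser and then brackets one root on either side. The callable standing in for $\Delta D_\alpha(n)$ is a toy concave function; in an actual calculation it would be assembled from the Wigner quasi-distributions entering \eqref{eq:Delta_D_alpha_defn}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def code_length_bounds(delta_D, k, n_max=1e6):
    """Return (n_L, n_U), the roots of a concave Delta D_alpha(n) on [k, n_max].

    delta_D : callable n -> Delta D_alpha(n); assumed concave in n and negative
    at both ends of the domain.  Returns None if Delta D_alpha is negative
    everywhere, i.e. the protocol is ruled out for every code length."""
    # Concavity: locate the maximiser first, then bracket one root on each side.
    res = minimize_scalar(lambda n: -delta_D(n), bounds=(k, n_max), method="bounded")
    n_peak = res.x
    if delta_D(n_peak) < 0:
        return None
    n_L = brentq(delta_D, k, n_peak) if delta_D(k) < 0 else k
    n_U = brentq(delta_D, n_peak, n_max) if delta_D(n_max) < 0 else np.inf
    return n_L, n_U

# Placeholder: a toy concave function standing in for Delta D_alpha(n).
toy = lambda n: -0.02 * (n - 40.0) ** 2 + 5.0
print(code_length_bounds(toy, k=1))
\end{verbatim}
In \figref{fig:lower_bounds_comparison} we plot the performance of our lower bounds applied to any CSS code projection protocol for a noisy $H$-state.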
In all parameter regimes, our lower bounds are observed to be tighter than the mana bound\footnote{Under the restriction to the rebit sub-theory of quantum computing subject to CSS operations, the link between negativity and magic has been restored~\cite{delfosse2015}. It therefore follows that mana is a monotone under the class of distillation protocols considered here. In fact, the monotonicity of mana is equivalent to $\Delta D_\alpha \ge 0$ in the limit $\alpha \rightarrow 1$.}~\cite{veitch2014resource} and the generalized robustness bound in Theorem 13 of~\cite{Seddon2021Quantifying}. Our lower bound gives tighter constraints than the $p$-independent projective robustness bound~\cite{Regula2022Probabilistic} in the high $p$, high $\epsilon$ regime. Outside of this regime, however, our \textit{upper bound} is still able to give additional constraints over the projective robustness bound. In particular, in \figref{fig:bound_comparison} we plot the combined information of our upper bound with the lower bound from projective robustness. Taking CSS protocols to be our free operations, the appearance of upper bounds on $n$ might first seem to contradict a resource theory perspective, where, since discarding subsystems is a CSS operation, we would anticipate $n+1$ copies of a noisy magic state to be at least as good as $n$ copies at distilling magic. However, in specialising to code projection protocols, we are in fact considering a resource theoretic approach to magic subject to the additional constraint $\left[\frac{ \mathbbm{1} }{2}\right]^{\otimes n} \rightarrow \tau_{n,k}$. Crucially, the output state depends non-trivially on $n$.
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{ents.pdf} \caption{ \textbf{(Wigner-R\'{e}nyi entropies \& magic distillation)} We plot the condition in \thmref{thrm:lower_bound_n_CSS_code} for the existence of finite upper bounds on $n$ in $n$-to-$1$ CSS distillation of the qubit $H$-state, signified by the region where $H_\alpha[W_{\rho(\epsilon)}]>1$. Even in the limit of zero input noise $\epsilon=0$ we obtain a valid set of permissible $\alpha$, which implies that $H$-state distillation under $n$-to-$1$ CSS code projection is ruled out in the asymptotic limit $n\rightarrow \infty$. We further highlight that the noise level $\epsilon =0.3$ (dashed curve) is outside of the region where $\rho(\epsilon)$ is magic ($0\le \epsilon < 1 - \frac{1}{\sqrt{2}}$), and therefore $W_{\rho(\epsilon)}$ is a proper probability distribution at $\epsilon = 0.3$, which is why $H_\alpha$ is only seen to satisfy standard monotonicity properties at this input error. We also highlight that the $\alpha \rightarrow 1$ divergence corresponds to a pole in $H_\alpha[W_\rho]$ for magic state $\rho$, and its residue is the mana of the state.\label{fig:entropy_conditions} } \end{figure}
\subsection{Extension to general code projection protocols} \label{sec:extensions_n_m} We have seen that, mathematically, the appearance of upper bounds on the code length of qubit CSS code projection protocols comes from the concavity of the objective function $\Delta D_\alpha $ in $n$. Here we see that this is not just a feature of qubit systems, but applies more generally to any stabilizer code projection of a finite $d$-dimensional system that can be described by a stochastically-represented channel. This set of operations includes qubit CSS code projections and, for systems of odd dimension, the full set of stabilizer code projection protocols.
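The entropic condition for a finite upper bound, $H_\alpha[W_\rho]>1$ for qubits (more generally $H_\alpha(W_\rho) > \log d$), is simple to check numerically once a quasi-distribution is in hand. The sketch below uses base-2 logarithms, an arbitrary illustrative quasi-distribution (not the Wigner values of $\rho(\epsilon)$ in the representation defined earlier), and restricts to even integer $\alpha$, where powers of negative entries are unambiguous.
\begin{verbatim}
import numpy as np

def wigner_renyi_entropy(W, alpha):
    """H_alpha[W] = (1/(1-alpha)) * log2 sum_i W_i**alpha.

    For quasi-distributions (entries may be negative) we only evaluate this
    at even integer alpha >= 2, where W_i**alpha is non-negative."""
    W = np.asarray(W, dtype=float)
    assert np.isclose(W.sum(), 1.0)
    return np.log2(np.sum(W ** alpha)) / (1.0 - alpha)

# Illustrative single-qubit quasi-distribution over 4 phase-space points
# (one negative entry, normalised); *not* values computed in this work.
W_example = np.array([0.60, 0.25, 0.25, -0.10])

for alpha in (2, 4, 6):
    H = wigner_renyi_entropy(W_example, alpha)
    print(f"alpha={alpha}: H_alpha = {H:.3f}  (> 1 ? {H > 1})")
\end{verbatim}
For this illustrative quasi-distribution the condition $H_\alpha > 1$ holds at $\alpha=2$ but not at larger even $\alpha$, mirroring the finite range of admissible $\alpha$ seen in \figref{fig:entropy_conditions}.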
We now abuse notation somewhat and let $W_{\rho}$ denote any representation under which the family of code projections are stochastically represented and multiplicative under tensor products. For stabilizer codes and odd-prime dimensional systems, this can be taken to be the standard Gross' representation~\cite{koukoulekidis2022constraints}. For qubits and CSS codes, we can assume the representation given in \eqref{eq:W_rho}. To simplify expressions, we introduce the following single-shot classical R\'{e}nyi entropy of a single probability $q \in (0,1]$: \begin{align} h_\alpha (q) \coloneqq \frac{1}{1-\alpha} \log q^\alpha. \end{align} With this in hand, we now have the following theorem, which provides simple analytic upper and lower bounds on the resource cost of code projection protocols. \begin{restatable}[Qudit code bounds]{theorem}{UpperBoundnk} \label{thrm:upper_bound_n_m} Let $ \rho$ and $\psi$ be a noisy and pure magic state respectively on $ {\cal H} _d$ with real Wigner distributions. If $\rho^{\otimes n } \rightarrow p \rho'$, where $\norm{\rho' - \psi^{\otimes k} }_1\le \epsilon'$, under a stochastically-represented distillation protocol that is the code projection for an $[[n,k]]$ stabilizer code, then we have the following family of upper bounds on $n$: \begin{align} n \le \frac{ k \left[ H_\alpha (W_{\psi}) - \log d \right] + h_\alpha( \frac{p}{1 + \epsilon' d^{5/2} }) }{\left[ H_\alpha (W_{\rho}) - \log d \right]}, \label{eq:approx_analytic} \end{align} for all $\alpha \in {\cal A} $ for which $H_\alpha(W_\rho) > \log d $, and the family of lower bounds on $n$: \begin{align} n \ge \frac{ k \left[ \log d - H_\alpha (W_{\psi}) \right] - h_\alpha( \frac{p}{1 + \epsilon' d^{5/2} }) }{\left[ \log d -H_\alpha (W_{\rho}) \right]}, \end{align} for all $\alpha \in {\cal A} $ for which $H_\alpha(W_\rho) < \log d $. \end{restatable} A proof can be found in \appref{appx:extension_n_m}. This result includes the qubit CSS codes as a special case. Moreover, we show in the appendices that, for the qubit representation and the CSS code case, these bounds can be tightened by replacing $d^{\frac{5}{2}} \rightarrow d^2$ in the single-shot entropy $h_\alpha$. One might conjecture that the conditions ${H_\alpha(W_{\rho(\epsilon)})> \log d}$ given in Theorems~\ref{thrm:lower_bound_n_CSS_code} and \ref{thrm:upper_bound_n_m} for the existence of a finite upper bound on $n$ are never actually satisfied. However, this turns out not to be the case. For $n$-to-$1$ CSS code projection protocols of Hadamard distillation, there always exists a valid set of $\alpha$-values such that $H_\alpha[W_{\rho(\epsilon)}]>1$ for all $\epsilon$, and hence we always obtain a valid upper bound on $n$. This can be shown by bounding the R\'{e}nyi entropy of $W_{\rho(\epsilon)}$ from below as \begin{align} H_\alpha[W_{\rho(\epsilon)}] & \ge H_\alpha [W_{\ketbra{H}}] . \end{align} In \figref{fig:entropy_conditions}, we see that even the $\epsilon=0$ condition gives a finite range of $\alpha$ such that $H_\alpha > 1$. This implies $n$-to-$1$ CSS code projection protocols for $H$-state distillation are ruled out in the limit $n\rightarrow \infty$.
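The bounds in \thmref{thrm:upper_bound_n_m} can be evaluated directly once the Wigner--R\'{e}nyi entropies are known. A minimal sketch is given below; all numerical inputs are placeholders rather than values computed in this work, and a single logarithm base (here base 2) must be used consistently for $H_\alpha$, $h_\alpha$ and $\log d$.
\begin{verbatim}
import numpy as np

def h_alpha(q, alpha):
    """Single-shot Renyi entropy of a single probability q (base-2 logs)."""
    return alpha * np.log2(q) / (1.0 - alpha)

def qudit_code_bounds(H_psi, H_rho, d, k, p, eps_out, alpha):
    """Analytic bounds on n from the qudit code theorem (all logs base 2).

    H_psi, H_rho : Wigner-Renyi entropies H_alpha(W_psi), H_alpha(W_rho),
                   supplied from the chosen representation.
    Returns ('upper', n) if H_rho > log d, ('lower', n) if H_rho < log d."""
    logd = np.log2(d)
    # (for the qubit CSS case the paper notes d**2.5 can be replaced by d**2)
    hs = h_alpha(p / (1.0 + eps_out * d ** 2.5), alpha)
    if H_rho > logd:
        return "upper", (k * (H_psi - logd) + hs) / (H_rho - logd)
    if H_rho < logd:
        return "lower", (k * (logd - H_psi) - hs) / (logd - H_rho)
    return "none", None

# Example numbers (placeholders, not taken from this work):
print(qudit_code_bounds(H_psi=1.0, H_rho=1.05, d=2, k=1, p=0.9,
                        eps_out=1e-9, alpha=2))
\end{verbatim}
We find that for the $n$-to-$1$ Hadamard-state distillation protocols, \thmref{thrm:upper_bound_n_m} takes on a particularly simple form.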
By evaluating the $\alpha =2$ condition explicitly, we find that if there exists a code projection $ {\cal E} $ such that $ {\cal E} [\rho(\epsilon)^{\otimes n}] = p \rho'$ and $\norm{\rho' - \ketbra{H}}_1 \le \epsilon'$ then \begin{align} n\le n^* &\coloneqq 2 \log_{f(\epsilon)} \left[\frac{1+4\epsilon'}{p}\right], \label{eq:n_star} \end{align} where the base of the logarithm is given by $f(\epsilon) \coloneqq [1-\epsilon +\frac{\epsilon^2}{2}]^{-1}$. This expression captures the fact that under a CSS code projection protocol, there is a fundamental trade-off between acceptance probability and output fidelity. In particular, for any fixed code length $n$ and input noise $\epsilon>0$, we cannot simultaneously obtain zero output noise $\epsilon'=0$ and unit acceptance probability $p= 1$. To further investigate this trade-off, in \figref{fig:dpi_vs_maj}(b) we plot the \textit{maximum achievable fidelity} with respect to the Hadamard state under an $n$-to-$1$ CSS code projection via \begin{equation} F_{\max}(\rho) = \max_{ {\cal E} } \left\{\bra{H} \rho' \ket{H} \, : \, {\cal E} (\rho^{\otimes n}) = p \rho'\right\}, \end{equation} where the maximization is performed over the set of all CSS code projection protocols.
\begin{figure} \centering \includegraphics[width=0.8\linewidth]{dpi_comparison.pdf} \caption{\textbf{(Majorization gives independent constraints over DPI).} \textbf{(a)} Shown is how (scaled) $\Delta n_U \coloneqq n^{DPI}_U - n^{maj}_U$ varies over all possible values of acceptance probability $p$ and a realistic range of input noise $\epsilon$, with fixed $\epsilon' = 10^{-9}$. Whenever $\log_{10}(\Delta n_U+1)>0$, upper bounds from majorization give tighter constraints than the DPI, reaching $\Delta n_U = O(10^4)$ in the low $p$, low $\epsilon$ regime. \textbf{(b)} We show the trade-off relation given by bounds on the maximum achievable fidelity $F_{\max}(\rho)$ vs. target acceptance probability $p$, under an $n$-to-$1$ CSS code projection, where $\rho=\frac{3}{4}\ketbra{H} + \frac{1}{8} \mathbbm{1} $. For $p$ above a given threshold ($\approx 0.6$) no perfect distillation is theoretically possible, even for $n\rightarrow \infty$ copies of the input state. Majorization (maj.) is shown to give stronger constraints than the DPI. \label{fig:dpi_vs_maj}} \end{figure}
\subsection{Comparison with bounds due to the data-processing inequality (DPI)} \label{subsec:DPI} We have seen that stochastically represented $[[n,k]]$ code projection protocols give rise to a set of no-go theorems (\thmref{thrm:upper_bound_n_m}) in the form of a set of upper bounds on $n$. By comparing to the data-processing inequality (DPI), we see that although the existence of upper bounds is a general feature of code projection protocols, exploiting the representation stochasticity gives strictly stronger bounds.
To begin, we note that if there exists a (stochastic or otherwise) code projection channel $ {\cal E} $ that maps $\rho^{\otimes n} \rightarrow p \rho'$, then the DPI for quantum channels states that \begin{equation} \Delta \tilde{D}_\alpha\coloneqq n\tilde{D}_\alpha\left(\rho\bigg|\bigg|\frac{ \mathbbm{1} }{2}\right) - \tilde{D}_\alpha\left(\rho_p||\tau_{n,k}\right) \ge 0, \label{eq:DPI_constraint} \end{equation} for all $\alpha\in (1,\infty)$~\cite{beigi2013sandwiched}, where $\tilde{D}_\alpha (\rho ||\tau)$ is the sandwiched $\alpha$-R\'{e}nyi divergence \cite{mosonyi2014convexity,wilde2014strong} on the normalized quantum states $\rho$ and $\tau$, which is defined for $\alpha \in (1 , \infty)$ as \begin{align} \tilde{D}_\alpha (\rho||\tau) \coloneqq \frac{1}{\alpha-1} \log \mathrm{tr} \left[\left(\tau^{ \frac{1-\alpha}{2\alpha}} \rho \tau^{ \frac{1-\alpha}{2\alpha}}\right)^\alpha \right]. \end{align} Now if $\rho^{\otimes n} \rightarrow p \rho'$ under any such channel, then we have the following upper bounds on $n$ for all $\alpha \in {\cal A} $: \begin{align} n \le \tilde{n}_U \coloneqq \min_\alpha \max_n \{ n : \Delta \tilde{D}_\alpha \ge 0 \} , \end{align} which turns out to be finite whenever $\alpha$ is such that $H_\alpha(\rho) >1$. Given that we also obtain upper bounds from simple data processing of quantum states, we now ask whether majorization gives genuinely new constraints on magic state distillation using CSS protocols beyond the DPI. Since our majorization conditions are a consequence of the stochastic representation of all CSS operations, while the DPI arises from the fact that all quantum channels are CPTP, this question may be loosely rephrased as asking whether stochasticity imposes any additional constraints beyond those imposed by CPTP on CSS magic state distillation. \figref{fig:dpi_vs_maj} allows us to answer this question in the affirmative, since our majorization upper bound is observed to impose much stronger restrictions than that given by the DPI over a wide range of parameter regimes. For example, in \figref{fig:dpi_vs_maj}(a), in the low acceptance probability $p$ and input noise $\epsilon$ regime, we find the difference in upper bounds $\Delta n_U \coloneqq n_U^{DPI} - n_U^{maj}$ (the amount by which majorization ``beats'' DPI) is of the order $\Delta n_U = O(10^4)$. We can thus conclude that the upper bounds on CSS code projections stemming from majorization on quasi-distributions go beyond those of the DPI.
\section{Discussion and Outlook} We have shown that the statistical mechanical framework of~\cite{koukoulekidis2022constraints} can be extended to the experimentally significant case of qubit systems, by focusing on the processing of magic states under the class of completely CSS-preserving operations. To achieve this, we extended the Wigner distribution of~\cite{delfosse2015} to a complex representation that covers all $n$-qubit states, wherein completely CSS-preserving operations correspond to stochastic transformations on phase space. This set of operations is known to be sufficient for universal quantum computing in the rebit magic state injection model~\cite{delfosse2015}, and consists precisely of the gate set that can be performed fault-tolerantly on surface code constructions~\cite{Kitaev_Surface_Code2003}. Within this framework, we showed that relative majorization can be used to encode particular properties of an important class of distillation protocols based on $[[n,k]]$ CSS codes.
In this context, we found general entropic constraints on such protocols in terms of both upper and lower bounds on the code length $n$. However, here we considered basic project-and-decode protocols (akin to those outlined in \cite{reichardt2005quantum}). In the context of the aim of achieving full fault-tolerance, a natural extension of this work would be to generalize our results to other more sophisticated protocols. For instance, we might ask how the use of $m$ intermediate Clifford corrections in between the measurements of stabilizer generators affect these fundamental constraints. One would expect to be able to obtain more refined bounds as a function of $m$. Moreover, while many protocols are based on CSS codes in part due to their relative ease of construction via triorthogonal matrices~\cite{BravyiHaah2012overhead}, from an operational perspective it would be of interest to see whether we can extend to the full set of stabilizer operations on qubit systems. We have also obtained a set of monotones $\{\Lambda_\alpha\}$ for the class of CSS operations, each of which form a convex optimization problem. We speculate that an analogous monotone can be constructed for any such resource theory for which the free operations are a subset of operations that completely preserve Wigner positivity. For instance, it may be of interest from the perspective of quantum optics experiments, wherein Gaussian operations and probabilistic randomness are readily available, to consider the case of continuous variable systems under the set of Gaussian operations and statistical mixtures~\cite{Albarelli2018nonGaussianity}. Since the individual $\alpha$-R\'{e}nyi divergences on quasi-distributions were seen in~\secref{subsec:DPI} to typically give rise to stronger constraints than the corresponding constraints given by $\alpha$-R\'{e}nyi divergences on quantum states, it would be interesting to see how well these quasi-distribution-based monotones perform relative to known state-based counterparts. On a technical note regarding majorization theory, we point to two interesting directions for further study. Firstly, complex majorization constraints arise naturally when we extend our setup from rebit to all qubit states. The Wigner distributions are complex due to the non-Hermiticity of the operator basis $\{A_{\bm{u}}\}$. We expect such constraints to take the form of a duplet of constraints between the real parts and imaginary parts of the Wigner function independently. In the context of non-Hermitian quantum mechanics~\cite{moiseyev2011}, results on complex majorization would also benefit theories that require an ordering between Hamiltonian eigenvalues, such as quantum thermodynamics. Secondly, the Wigner representation of~\cite{delfosse2015} recovers the covariance over symplectic affine transformations on qubit phase spaces, a property shared by the Gross' qudit Wigner function. This added structure on the phase space was ignored by our analysis, but could be utilised to tighten the obtained bounds in future work. In particular, as explained in the discussion of~\cite{koukoulekidis2022constraints}, the stochastic majorization used in our analysis is only a special case of $G$-majorization, where $G$ can be taken as a subgroup of the stochastic group, for example the symplectic group. 
Then, it can be shown~\cite{giovagnoli_cyclic_1996,giovagnoli_1985,steerneman_1990} that one should expect to obtain a set of finite lower bound constraints on distillation, which will be tighter than stochastic majorization constraints. \bibliographystyle{apsrev4-2}
\subsubsection[1]{\vspace{3pt}\noindent\textbf{#1.}} \pgfplotsset{compat=1.16} \usetikzlibrary{positioning, patterns} \title{Cracking Double-Blind Review:\\ Authorship Attribution with Deep Learning} \author{Leonard Bauersfeld\thanks{$^*$ equal contribution} \and Angel Romero$^*$ \and Manasi Muglikar \and Davide Scaramuzza \\ Robotics and Perception Group, University of Zurich } \begin{document} \maketitle \begin{abstract} Double-blind peer review is considered a pillar of academic research because it is perceived to ensure a fair, unbiased, and fact-centered scientific discussion. Yet, experienced researchers can often correctly guess from which research group an anonymous submission originates, biasing the peer-review process. In this work, we present a transformer-based, neural-network architecture that only uses the text content and the author names in the bibliography to attribute an anonymous manuscript to an author. To train and evaluate our method, we created the largest authorship-identification dataset to date. It leverages all research papers publicly available on arXiv amounting to over 2 million manuscripts. In arXiv-subsets with up to 2,000 different authors, our method achieves an unprecedented authorship attribution accuracy, where up to 95\% of papers are attributed correctly. Thanks to our method, we are not only able to predict the author of an anonymous work but we also identify weaknesses of the double-blind review process by finding the key aspects that make a paper attributable. We believe that this work gives precious insights into how a submission can remain anonymous in order to support an unbiased double-blind review process. We have open-sourced the necessary tools to reproduce our experiments. \end{abstract} \urlstyle{rm} \begin{center}{\small \textbf{Code}: \url{https://github.com/uzh-rpg/authorship_attribution}} \end{center} \urlstyle{tt} \input{sections/01_introduction} \section{Related Work} Perhaps one of the oldest examples of authorship attribution (AA) was to identify the co-writers of William Shakespeare in 36 plays (collectively called \textit{"Shakespeare canon"} \cite{hope_1994}), which began in the late 17th century. The research on authorship attribution became much more popular in 1964 when researchers studied "The Federalist" papers \cite{mosteller1963_federalist}. After this, AA advanced through the development of more involved hand-crafted feature extractors for text, resulting in over 1000 different published approaches by the year 2000 \cite{rudman1997_over1000, holmes1998_stylometry}. Subsequently, the computer-assisted approaches were further automated, and prior to the machine learning era, two dominant approaches existed: profile-based AA and instance-based AA (IAA) \cite{stamatatos2009_survey}. The former extracts one feature vector (author profile) per author and compares the feature distance of a given text with all author profiles, whereas IAA extracts a feature vector per text sample and uses a classifier (e.g. SVM \cite{stamatatos2009_survey}) to distinguish authors. Along with the rise of short electronic messages (e-mails, tweets) came a growing interest in text classification (e.g., hate speech \cite{davidson2017_automatedhatespeech}, polarizing rhetoric tweet analysis \cite{Ballard22}) and AA using short texts (e.g., detect 'hacked' accounts \cite{li2017_socialnetworkattribution}). 
Machine learning proved to be vital for this task since learned feature descriptors like \emph{document embedding} \cite{agun2017_documentembedding} outperform classic character n-gram \cite{bojanowski2016_word2vecngram} and bag-of-words approaches. N-gram convolutional nets also show competitive performance \cite{shrestha2017_convolutional}. From a text-length perspective, research papers are more similar to news articles and books than to tweets. In \cite{iyer2019_mlframework} a uni-gram feature in combination with an SVM is used for news articles and book AA, and they achieve 83\% classification accuracy on a dataset with 50 authors. In \cite{qian2017_dlauthorship} a study comparing different network architectures (LSTM, GRU, Siamese network) on similar data is presented, and a near-perfect classification is achieved, also on a dataset with only 50 different authors. In \cite{ma2020_survey} and \cite{sari2018_approachesattribution} it is confirmed that deep networks achieve very competitive performance on AA and authorship profiling (AP) tasks. The results obtained on public benchmark datasets in those works are used as \emph{baselines}, although they are focused on single-author documents (non-research articles) and are only applied to the comparably small benchmarks. For research papers, solving authorship attribution is a more complex task due to the length of the texts, their heterogeneity (mathematical symbols, reference sections, etc.), and the vast amount of possible authors. Therefore, authorship attribution has been applied to research articles only in very rare cases, such as \cite{simen2016_researchpapers}, where the (not publicly available) training and testing datasets are rather limited (403 authors, 1683 papers). The recent advances in natural language processing (NLP), namely the development of transformer-based architectures, allow us to tackle these difficulties. Transformers have shown impressive capabilities in NLP for mid-sized text lengths, e.g., BERT \cite{devlin2019_bert}, its smaller counterpart DistilBERT \cite{sanh2020_distilbert}, and BigBird, for longer sized sequences \cite{zaheer2020bigbird}. The success of transformers has enabled applications such as ancient text restoration and attribution, polarizing tweet analysis \cite{Ballard22}, hate speech detection \cite{Huang21}, emotion recognition in conversation \cite{Geng22} and song analysis \cite{Wang22icassp}. In \cite{cruz2020_baselines} results indicate the usefulness of using such networks as feature extractor. The increasingly large number of studies on the use of scientific documents with bibliometric applications shows the growing interest of the bibliometric community in authorship attribution \cite{editorial_mining}. Specifically, machine learning applied to bibliometrics has steadily been getting more traction in the last decade \cite{decadeCitation}. In \cite{author_id_citations}, the authors analyse the use of solely the reference section to predict the possible authors of scholarly papers. However, all the aforementioned research focuses either on the analysis of the texts themselves or solely on the references. To the best of the authors' knowledge, this work presents the first approach, where both sources of information are combined. \input{sections/dataset} \section{Architecture} In this section, we present the architecture and sub-architectures (see Fig.~\ref{fig:pipeline}) that are used throughout this paper. \begin{table*}[t!] 
\setlength{\tabcolsep}{5pt} \def1.1{1.1} \centering \caption{{This table summarizes the authorship identification accuracy in \% on the test split of the different arXiv datasets four our method.}} \label{tab:results_arxiv} \vspace*{-6pt} \begin{tabularx}{1\linewidth}{lC||C|ccccc||C|ccccc} \toprule & & \multicolumn{6}{c||}{Only first 512 Words} & \multicolumn{6}{c}{Document in Chunks of 512 Words}\\[6pt] \rotatebox{90}{Input} & \rotatebox{90}{Epochs} & \rotatebox{90}{LR} & \rotatebox{90}{D100} & \rotatebox{90}{D200} & \rotatebox{90}{D300} & \rotatebox{90}{D400} & \rotatebox{90}{D500} & \rotatebox{90}{LR} & \rotatebox{90}{D100-C} & \rotatebox{90}{D200-C} & \rotatebox{90}{D300-C} & \rotatebox{90}{D400-C} & \rotatebox{90}{D500-C}\\ \midrule Content & 40 & 1e-5 & 45.5 & 64.7 & 78.9 & 89.0 & 92.0 & 1e-5 & \bf 70.0 & 84.8 & \bf 90.3 & 94.8 & 96.3 \\ Content & 10 & 1e-4 & 49.7 & 68.9 & 82.4 & 91.3 & 92.9 & 1e-4 & & \bf 85.6 & \bf 90.3 & 94.4 & 95.7 \\ References & 10 & 8e-4 & 54.3 & 71.0 & 79.8 & 89.6 & 90.2 & 8e-4 & 54.3 & 71 & 79.8 & 89.6 & 90.2 \\ Ref (no self) & 10 & 8e-4 & 43.3 & 62.9 & 75.5 & 84.3 & 86.4 & 8e-4 & 43.3 & 62.9 & 75.5 & 84.3 & 86.4 \\ \bf Ref+Cont & 10 & 3e-4 & \bf 60.5 & \bf 79.0 & \bf 87.0 & \bf 94.3 & \bf 93.1 & 5e-5 & & 81.1 & \bf 90.3 & \bf 96.0 & \bf 96.6 \\ \bottomrule \end{tabularx} \vspace*{-10pt} \end{table*} \subsection{DistilBERT} First we present the architecture that has been used to process the main text of the papers (without the references). For this task, we have chosen to use DistilBERT, a transformer architecture based on a distilled version of BERT. It is smaller, faster, cheaper, and lighter, offering up to 60\% faster speeds than BERT while retaining 97\% of its language understanding capabilities \cite{sanh2020_distilbert}. In order to convert the raw text to a format that the DistilBERT architecture can take as input, a tokenizer to convert the words to tokens needs to be used. The tokenizer used in our case is based on WordPiece \cite{wordpiece}. Both the DistilBERT transformer model and the tokenizer have been initialized with a pre-trained version. Specifically, we use the checkpoint called \emph{distilbert-base-uncased}, which was pre-trained on BookCorpus \cite{bookcorpus} and the English Wikipedia. One of the main limitations of most transformer architectures is that they have a limited input size. In the case of DistilBERT, this limit is 512 tokens which means that either only the first 512 tokens can be used or the dataset is divided into chunks as described in the previous section. One solution that has been tried to solve this problem is to use BigBird \cite{zaheer2020bigbird}, which is a transformer architecture specifically designed for longer sequences and that offers an input limit of 4096 tokens. However, due to the increased size and complexity of the BigBird transformer, training times are very long, and there is no noticeable increase in accuracy compared to a chunked dataset trained with DistilBERT. When training on such chunked datasets, the chunks are processed independently of each other. During the evaluation, all parts of the text that come from the same paper are evaluated consecutively, and the output logits are averaged before converting them to probabilities and selecting the author with the highest probability. 
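A minimal sketch of this chunk-and-average evaluation with the Hugging Face interface is given below. The checkpoint name, the number of author labels and the word-based chunking are placeholders for the trained model and the dataset construction described elsewhere in the paper; only inference is shown.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_AUTHORS = 226                        # e.g. the D200 label count; placeholder
CHECKPOINT = "distilbert-base-uncased"   # in practice: a fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=NUM_AUTHORS)
model.eval()

def predict_author(paper_text: str, chunk_words: int = 512) -> int:
    """Split the paper into word chunks, score each, and average the logits."""
    words = paper_text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)] or [""]
    logits = []
    with torch.no_grad():
        for chunk in chunks:
            enc = tokenizer(chunk, truncation=True, max_length=512,
                            padding="max_length", return_tensors="pt")
            logits.append(model(**enc).logits)
    mean_logits = torch.cat(logits, dim=0).mean(dim=0)
    probs = torch.softmax(mean_logits, dim=-1)
    return int(torch.argmax(probs))      # index of the predicted author
\end{verbatim}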
As we will show in the results part, this approach proves to be extremely successful and improves the evaluation accuracy by 5-10\% (absolute) when compared to only inputting the first part of the paper to DistilBERT. \subsection{Reference Histogram Embeddings} There are many different ways of extracting the key information from the references section of a paper. One of the most direct ones, and the one that resembles the most of how a human reader would do it, is looking directly at the relative frequency of appearance of different author names. To do this, all the extracted author names (see Section~\ref{sec:dataset}) in the dataset are concatenated to build a vocabulary. Only authors that appear frequently (more than 50 times) are added to the vocabulary of size $N_\text{Hist}$. Next, for every paper, we create a vector with the same number of elements as the vocabulary, which contains the number of times that each author in the vocabulary appears in the reference section of that paper. This vector is what we call the Reference Histogram Embedding (RHE). Once we have the RHE for each paper, it is passed through a 2 layer MLP that compresses it to a vector size of 128. This vector is then directly concatenated with the output embeddings of the DistilBERT architecture. This joint vector is then fed to the 2-layer classifier, as shown in Fig. \ref{fig:pipeline}. \input{sections/results} \input{sections/discussion} \section{Conclusion \& Discussion} We presented a transformer-based classification architecture for research papers that leverages, for the first time, a combination of the syntactic richness and topic diversity contained in research content and the information contained in the reference section. Our results show that combining both sources of information increases the authorship attribution accuracy. In cases where only limited text content is available to the network, including references increases the performance significantly (up to $11\%$). Overall, our method achieves $70\%$ accuracy on the D100-C dataset, containing over $2000$ authors, which is unprecedented to the best of our knowledge. On smaller datasets ($<50$ authors) and some benchmarks, the proposed architecture robustly identifies an author correctly well over 90\% of the time, beating state-of-the-art results. While our DistilBERT-based approach outperforms all baselines on large datasets such as \textit{Legal} and \textit{IMDb62}, it fails to outperform simple n-gram baseline on small datasets. The likely explanation here is the data-hungry nature of transformers limits the performance of our approach on smaller datasets. However, this is not truly a limitation as AA for research papers deals with very large datasets. Lastly, we also present a large-scale authorship-identification dataset by leveraging 2 million research papers publicly available on arXiv. We believe that this line of research---albeit having great implications for double-blind review---ultimately helps to improve the review process. Thus, we conclude our paper by summarizing the key insights into how a submission can remain anonymous in order to support an unbiased, double-blind review process. \begin{itemize} \item \emph{Abstract and introduction}: already the first 512 words enable robust authorship attribution. We believe that this is because the abstract and introduction often express the authors creative identity together with the research field. These personalized characteristics enable identification of the authors. 
\item \emph{Self-citations}: The papers in our dataset contain, on average, 10.8\% self-citations. Those citations easily give away the authors' identity as highlighted by the results shown in Tab.~\ref{tab:results_arxiv}. Therefore, it is beneficial to omit many self-citations in the submission for double-blind review. \item \emph{Citation diversity}: Even without the self-citations, the references can be used to identify the author. By also including citations of less well-known papers authors can make authorship attribution more difficult. At the same time, more equal visibility is given to all research papers in the authors' field. \end{itemize} \section{Ethical Considerations} The task of AA for research papers has some ethical concerns as it offers a potential way of breaking the double blinded peer review system, a pillar of academic research. While the proposed methodology challenges this double blind peer review system by uncovering the author identity only from text and references, we believe our method can help establishing an improved peer review system. By analysing our method and providing insights into how a paper can be attributed to an author, we hope to guide authors towards a writing style that improves double-blind review. Therefore, we believe that the possible negative consequences are outweighed by the opportunity of this exciting research direction, which has not been thoroughly pursued in the past. For this very reason, we also open-source all the tools required to reproduce our experiments and further develop our methodology under \urlstyle{rm} \url{https://github.com/uzh-rpg/authorship_attribution}. \urlstyle{tt} \section{Results} \vspace{-1pt} This section presents the results achieved using the proposed architecture presented in the previous section and depicted in Fig. \ref{fig:pipeline}. This architecture has been implemented using the \emph{Hugging Face} library for transformers \cite{hugging_face_2020}. First, results on our new \emph{arXiv} dataset are presented along with an ablation study of the optimal learning rate. Then we present a small ablation on the network architecture itself. Finally, its performance is compared to existing approaches on benchmark datasets unrelated to scientific research article AA. \begin{figure}[t!] \centering \input{media/Accuracy} \vspace*{-18pt} \caption{The two plots visualize the results presented in Tab.~\ref{tab:results_arxiv}. On the left the 'non-C' datasets using only the first 512 words are used, on the right the full paper is used. Although the AA accuracy degrades with an increasing number of authors, our approach retains an impressive 60\% for 2070 authors.} \label{fig:accuracy} \vspace*{-18pt} \end{figure} \subsection{Our Dataset} When applying our approach to the arXiv dataset, different network architectures are possible, namely a) only content ("Content"), b) only references with and without self-citations ("References", "Ref (no self)) and c) content with references ("Ref+Cont"). The results for all architectures applied to all versions of the dataset are summarized in Tab.~\ref{tab:results_arxiv}, and visualized in Fig.~\ref{fig:accuracy}. From Table~\ref{tab:results_arxiv} it is visible that including the references in almost all cases -- as expected -- increases the accuracy by up to 8\%. When all self-citations are removed from the references ("Ref (no self)") the accuracy of a reference-only design drops by over 10~p.p. for the large D100 set. 
Furthermore, a boost in performance is visible when comparing the non-C datasets (first 512 words only) with the whole documents. This is especially pronounced in the large datasets where relative improvements of up to 30\% (content only) are observed. However, this boost in performance comes at the cost of dramatically increased training times, as shown in Table \ref{tab:training_times}. \begin{table}[t!] \caption{{Comparison of training times on an Nvidia Quadro RTX 8000 GPU for the best model from Tab.~\ref{tab:results_arxiv}. For the D300-C, 10 epochs, content only, is used. The last column reports the increase in accuracy when including the full document (C) is used.}} \label{tab:training_times} \vspace*{-6pt} \centering \begin{tabularx}{1\linewidth}{X|p{2.0cm}p{2cm}|p{1.2cm}} \toprule & First 512 Wo\-rds (non-C) & Full Document (C) & $\Delta$~Accu\-racy\\ \midrule D100 & 13h:08m & 11d:04h:40m & 9.5\% \\ D200 & 1h:35m & 17h:49m & 6.6\% \\ D300 & 49m & 4h:35m & 3.3\% \\ D400 & 17m & 1h:57m & 1.7\% \\ D500 & 11m & 54m & 3.1\% \\ \bottomrule \end{tabularx} \end{table} It is also important to remark that gains in evaluation accuracy that are attributed to solely combining the references with the content come nearly for free in terms of training time, as training times are similar with and without the reference part added to our architecture. The evaluation boost is more prominent when dealing with bigger datasets where only the first 512 are used. For example, it is interesting to see in Table \ref{tab:results_arxiv} for the column D100 that a) there is an absolute gain of almost 11 \% when using the references combined with the content w.r.t. only using the content information; and b) the reference alone architecture yields a 54.3\% prediction accuracy, a result that is impressive by itself. Even more so, as D100 is a dataset that has more than 2000 possible labels. However, for the chunked datasets, the accuracy increase attributed to the consideration of the references gets diminished, although, for D300-C, D400-C and D500-C in Table~\ref{tab:results_arxiv} the best results (by a small margin) are obtained through this strategy. The reason for this decrease in the difference is thought to be related to the nature of transformers. It is known that transformer architectures excel at large amounts of data \cite{fabien2020}. This is evident in our experiments when comparing the chunked versions with the first-words-only ones. It is, therefore, to be expected that the increase in performance when adding the reference information is less prominent for an intensively trained transformer. One can also argue that all cited references are somehow related to the content of the paper. In a case where the content network has access to the whole article, this might not add much new information. In the case where only the first 512 words are used, it is credible that the references add valuable information not included otherwise. \begin{table}[t!] 
\setlength{\tabcolsep}{4.3pt} \caption{{Ablation of the learning rate for 10 epochs.}} \label{tab:ablation} \centering \vspace*{-6pt} \begin{tabularx}{1\linewidth}{X|rrrrr} \toprule \multirow{2}{*}{Rate} & Content & \multicolumn{2}{c}{References} & \multicolumn{2}{c}{Ref+Cont}\\ & D300 & D300 & D500 & D300 & D500-C \\ \midrule 1e-5 & 78.8 & & & & 93.1 \\ 2e-5 & 81.3 & & & & 95.7 \\ 5e-5 & 81.3 & & & & \bf 96.7 \\ 1e-4 & \bf 82.4 & 79.3 & 86.7 & 84.5 & 95.7 \\ 2e-4 & 82.1 & \bf 80.2 & 87.6 & 86.5 & 94.1 \\ 3e-4 & & & & \bf 87.4 & \\ 4e-4 & 81.2 & 79.6 & 89.2 & 87.2 & \\ 8e-4 & & 79.4 & \bf 90.4 & 85.4 & \\ 2e-3 & & 80.0 & 90.3 & & \\ \bottomrule \end{tabularx} \end{table} \subsubsection{Learning Rate} In order to obtain the final results that are reported in Table \ref{tab:results_arxiv}, a fine-tuning stage of the learning rate was needed. The evaluation accuracy for different learning rates for some of our datasets is shown in Table \ref{tab:ablation}. The learning rates of the rows in bold are selected for all runs of that type, e.g. all Ref+Cont architectures are trained with a learning rate of 5e-5 for whole documents. \begin{table*} \def1.1{1.1} \centering \caption{{Comparison of our DistilBERT ("Content") architecture with other methods on the most common authorship attribution benchmark datasets.}} \label{tab:results_baseline} \vspace*{-6pt} \begin{tabularx}{1\linewidth}{X|cccccc} \toprule & Legal & Blog10 & Blog50 & \multicolumn{2}{c}{Reuters50} & IMDb62 \\ \midrule Train/Test Split & 80/20 & 80/20 & 80/20 & 50/50 & 90/10 & 80/20 \\ \midrule Topic Model \cite{seroussi2014} & 93.64 & -- & -- & -- & -- & 91.79 \\ Article GRU \cite{qian2017_dlauthorship} & -- & -- & -- & -- & 69.1 & -- \\ N-Gram \cite{sari2018_approachesattribution} & 91.29 & -- & -- & \bf 72.6 & -- & 94.8 \\ BertAA \cite{fabien2020} & -- & \bf 65.4 & \bf 59.7 & -- & -- & 90.7 \\ \bf DistilBERT (Ours) & \bf 94.8 & 64.3 & 59.1 & 66.5 & \bf 83.6 & \bf 97.5\\ \bottomrule \end{tabularx} \end{table*} \subsection{Baselines} To evaluate the performance of the network architecture presented in this work, it is compared with current state-of-the-art methods on the benchmark datasets introduced in \ref{sec:Benchmarks}. Note that only the 'Content' part using the DistilBERT is used because no benchmark for research articles AA exists. The learning rate of the DistilBERT has been fine-tuned for the datasets and is set to 2e-5 for all experiments. The results are summarized in Tab.~\ref{tab:results_baseline} On the larger \emph{Legal} and \emph{IMDb62} dataset, our DistilBERT approach outperforms all baselines and nearly halves the error rate on \emph{IMBd62}. On the smaller Blog datasets, the transformer-based BertAA \cite{fabien2020} approach sightly outperforms ours by about 1\%. On the original \emph{Reuters50} dataset, the classical n-gram approach \cite{sari2018_approachesattribution} achieves a 6\% (absolute) higher accuracy compared to DistilBERT. This is most likely because the transformer-based approach requires much more training data. This theory is supported by the superior results when using a 90/10 train/test split and also by \cite{sari2018_approachesattribution}, where a similar tendency is observed. \subsection{Alternative Architecture} \input{sections/lstm} \section{Dataset} \label{sec:dataset} In contrast to the works on authorship attribution for news articles, legal documents or blog entries, this work focuses on research articles. 
Therefore, the benchmark datasets that are commonly used are not suitable, and a new dataset based on arXiv articles is developed. This section first introduces our arXiv dataset, then the standard benchmarks are briefly described, and a brief discussion of the challenges and features is presented. \subsection{The arXiv Dataset} The arXiv is an open-access preprint server for scientific papers in the field of computer science, math and physics, which contains over 2~million research articles at the time of writing\footnote{\url{https://arxiv.org/}}. The pdf versions of the articles can be downloaded \cite{clement2019_arxiv} together with a database file that maps the unique arXiv-identifier (e.g. 2106.08015) to the title of the paper, the authors' names and the abstract. Note that, unfortunately, no UUIDs (unique user identifiers) are assigned to authors on arXiv, which causes ambiguity between different authors with the same name. \subsubsection{Preprocessing} In order to reduce the name ambiguity, a first step discards all entries where the authors did not provide their full names but only initials. Subsequently, all authors with at least $P$ papers are selected to yield a dataset named \emph{D(\$P)}, e.g. for {$P=300$} the dataset is D300. All co-authors are treated equally as not all fields order the authors by the amount of contribution. For all authors in the dataset, the plain-text version of their articles is loaded and processed. In the given order, this processing \vspace*{-6pt} \begin{enumerate} \setlength\itemsep{-0.5em} \item discards the header containing the title, the authors' names, contact info and affiliation, \item extracts the content (abstract and body) of the paper, \item extracts the 'References' section, \item splits the reference section into individual references, \item extracts the cited authors' names from the references. \end{enumerate} \vspace*{-6pt} All splitting and extraction of parts are done using hand-crafted regular expressions that are 'fail-fast', meaning that if they succeeded in segmenting the paper, the result is almost always correct (e.g. a human performing the same task would segment similarly). The processing removes about 15\% of the papers from the dataset for all values of $P$. \subsubsection{Author Ambiguity} Because arXiv lacks UUIDs, the authors are only identified by their full names. This ambiguity became especially obvious for short names which had over 10000 papers assigned to them. To resolve this issue, a clustering approach is used: a pre-trained sentence transformer \cite{reimers2019} extracts a feature embedding from the abstract of each paper. Then, DBSCAN is used to cluster the extracted feature vectors. If DBSCAN finds only one cluster and some noise, the author is assumed to be one physical person, whereas multiple clusters are identified for ambiguous authors. DBSCAN has been tuned to correctly classify a set of 20 known, unique authors (famous researchers with distinctive names) and 20 known ambiguous authors (checked via Google Scholar). For datasets with a high threshold $P$, over half of the authors are discarded, whereas only 20-25\% of the authors are removed for lower thresholds. This follows the intuition that no single physical person will have published 5000 papers, but 100 is certainly possible through co-authorships.
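A compact sketch of this disambiguation step is shown below. The specific sentence-transformer checkpoint and the DBSCAN parameters are placeholders; in this work a pre-trained sentence transformer is used and DBSCAN is tuned on the known unique and ambiguous authors described above.
\begin{verbatim}
from sentence_transformers import SentenceTransformer
from sklearn.cluster import DBSCAN

# Placeholder choices for the encoder checkpoint and clustering parameters.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def is_ambiguous(abstracts, eps=0.35, min_samples=3) -> bool:
    """Heuristic: one cluster (plus noise) -> single physical person,
    several clusters -> the name likely covers multiple people."""
    embeddings = encoder.encode(abstracts)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="cosine").fit_predict(embeddings)
    n_clusters = len(set(labels) - {-1})   # -1 marks noise points
    return n_clusters > 1

# Usage: drop author names flagged as ambiguous from the dataset, e.g.
# ambiguous = {name for name, abstracts in papers_by_author.items()
#              if is_ambiguous(abstracts)}        # papers_by_author: placeholder
\end{verbatim}
\subsubsection{Content Chunks} Transformer architectures scale badly with the sequence length, which is why most networks have a hard limit between 256 and 4096 tokens.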
The DistilBERT network can process up to 512 tokens per text. Therefore, the content of the paper is divided into multiple chunks of length up to 512 words. Either the first chunk only (referred to as Dxxx, e.g. D300) or all chunks are used (referred to as Dxxx-C, e.g. D300-C). The rationale for using the first 512 tokens is that those contain the abstract and introduction, which usually summarize the whole paper. While the first 512 words of a paper almost never contain equations or tables, later chunks can. In tables and equations, the individual symbols are always surrounded by white space. Therefore, all chunks that have an average word length below 4.22 characters are discarded, as they are assumed to primarily consist of tables and equations. This threshold is computed as the 5$^\text{th}$ percentile of a distribution of the average word length in a 512 word (general English) text. The individual word lengths in this text follow the distribution of word lengths in English texts \cite{mayzner}. In a final step, an 80/20 division into train/test dataset is performed. This random split is done using stratified sampling, such that the 80/20 balance is kept for each author. If a paper in the training set was authored by multiple authors in the dataset, it is randomly assigned to one of them. Papers in the test set can contain multiple authors and are correctly classified if the network predicts that it was written by \emph{one} of its authors. Furthermore, it is ensured that a co-authored paper can not be in the train and test split at the same time. An overview of the different arXiv datasets is given in Tab.~\ref{tab:datasets}. \begin{table}[tb] \setlength{\tabcolsep}{2.7pt} \def1.1{1.1} \centering \caption{{Summary of the datasets used in this work.}} \vspace*{-6pt} \label{tab:datasets} \begin{tabularx}{1\linewidth}{Xl|ccccc} \toprule \multicolumn{2}{c}{Dataset} & \rotatebox{90}{Authors} & \rotatebox{90}{Words} & \rotatebox{90}{Words/Text} & \rotatebox{90}{Texts/Aut.} & \rotatebox{90}{Words/Aut.}\\ \midrule \parbox[t]{1mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1.7cm}{\centering Benchmarks}}}} & Legal & 3 & 3.1M & 2312 & 447 & 1.03\,M \\ & Blog10 & 10 & 2.1M & 91 & 2305 & 212\,k\\ & Blog50 & 50 & 7.2M & 98 & 1466 & 144\,k\\ & Reuters50 & 50 & 2.5M & 506 & 100 & 50\,k\\ & IMDb62 & 62 & 21.7\,M& 349 & 1000 & 350\,k\\ \midrule \parbox[t]{1mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{\parbox[c]{3.5cm}{\centering Our arXiv Dataset}}}} & D500 & 7 & 1.7\,M & 512 & 472 & 241\,k \\ & D500-C & 7 & 17.1\,M & 5184 & 472 & 2.4\,M \\ & D400 & 13 & 2.8\,M & 512 & 425 & 217\,k \\ & D400-C & 13 & 31.3\,M & 5658 & 425 & 2.4\,M \\ & D300 & 49 & 8.0\,M & 512 & 320 & 164\,k \\ & D300-C & 49 & 94.3\,M & 6024 & 320 & 2.1\,M \\ & D200 & 226 & 24.6\,M & 512 & 213 & 109\,k \\ & D200-C & 226 & 289\,M & 6016 & 213 & 1.2\,M \\ & D100 & 2070 & 105\,M & 512 & 99 & 50\,k \\ & D100-C & 2070 & 1.27\,B & 6180 & 99 & 0.6\,M \\ \bottomrule \end{tabularx} \vspace*{-12pt} \end{table} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{media/Figure_1_Coling.pdf} \caption{Our proposed network architecture consists of two separate feature encoders for the different input modalities followed by an MLP network with a logit output layer.} \label{fig:pipeline} \vspace*{-12pt} \end{figure*} \subsection{Benchmark Datasets} \label{sec:Benchmarks} \subsubsection{Legal} This dataset consists of written rulings by three Australian High-Court judges from the year 1913 to 1975. 
Originally, this dataset was used to show that Judge Dixon was ghostwriting for the other two \cite{seroussi2011}. However, by only using the time period where ghostwriting was impossible, a clean dataset with long texts can be obtained, which is used as a benchmark \cite{seroussi2014, sari2018_approachesattribution}. \subsubsection{Blog10 and Blog50} The Blog dataset consists of online blog posts from the years 2002 to 2004 \cite{schler2006}. Most of the posts are very short and often contain rather explicit language. The Blog10 and Blog50 datasets include posts from the top 10 or 50 authors, respectively when sorted by the number of posts \cite{fabien2020}. \subsubsection{Reuters50} The Reuters50 (or CCAT50) is the most widely used \cite{stamatatos2008, sari2018_approachesattribution, qian2017_dlauthorship} AA dataset. It contains news stories and is an excerpt from the Reuters Corpus Volume 1 \cite{rose2002}. The top 50 authors (according to the number of stories) have been selected, and for each author, 100 texts are provided, equally split into a training and a test set \cite{stamatatos2008}. \subsubsection{IMDb62} The IMDb62 dataset \cite{seroussi2010} consists of movie reviews from the most active 62 IMDb users, where 1000 texts are provided per author. It is also a very common dataset for benchmarking. \cite{stamatatos2009_survey, sari2018_approachesattribution, fabien2020} \subsection{Discussion} Compared to the benchmarks, our dataset contains significantly longer texts per author, although there are fewer texts on average per author. Especially the big arXiv datasets (e.g. D100-C) are extremely different than benchmarks like \emph{Blog10} or the \emph{Reuters50}. For example, D100-C contains 600 times more data than \emph{Blog10}. Only the \emph{Legal} and the \emph{IMDb62} datasets are somewhat similar to the small arXiv D400 and D500 in terms of text length and dataset size. The main difference between the existing AA datasets and the arXiv dataset is that the latter includes an additional feature: the author names of the cited papers. Exploiting this additional information specific to scientific articles is a key contribution of our work. For research article AA no benchmarks exist. \section{Introduction} Most known academic and literary texts can easily be attributed to a certain author because they are signed. Yet sometimes, we find anonymous pieces of work and would like to identify an author based on the given text, a method referred to as author attribution (AA). \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{media/arxiv_figure1.pdf} \vspace*{-18pt} \caption{Our method identifies authors of anonymous scientific manuscripts by leveraging both the information contained in the text as well as the citations. We encode the main text using DistilBERT~\cite{sanh2020_distilbert} and combine this encoding with a feature vector extracted from the cited references. The encodings are subsequently fused by a two-layer classification MLP. It outputs the log-likelyhoods that the given anonymous paper has been (co-)authored by one of the over 2000 authors included in our novel dataset. } \vspace*{-18pt} \label{fig:overview} \end{figure} The AA problem is particularly interesting in the context of double-blind peer review in academic research, a technique often implemented to robustify the process against human biases. 
By addressing the AA task for research papers, we aim to not only demonstrate the technical feasibility of large-scale authorship attribution but hope to improve the double-blind peer review process by identifying the key aspects of a paper that allow experienced reviewers to correctly guess which group of authors a certain manuscript originated from. Especially for research papers, AA is a complex task due to the vast number of possible authors, the length of the texts, and the unavailability of a large-scale dataset. Author attribution for literary texts first became popular in 1964 when researchers studied the famous "The Federalist" papers \cite{mosteller1963_federalist}, a collection of 85 articles and essays published under the pseudonym "Publius", to identify the authors who contributed to each essay. More recently, authorship attribution for rulings written by the Australian High Court \cite{seroussi2011} and internet blogs \cite{fabien2020} has been studied. Scientific texts, however, are inherently different from the aforementioned works as individual authors are not only identifiable by a certain writing style but most likely write on similar topics in their works and cite themselves more often. Furthermore, no large-scale authorship attribution dataset for academic texts exists. We aim to address both challenges: this work presents a novel architecture (summarized in Fig.~\ref{fig:overview}) alongside a new dataset to address the problem of AA for research papers. Instead of just using the text content \cite{simen2016_researchpapers}, our method relies on both text content and the author names of the paper cited in the \emph{Reference} section of a manuscript, discarding all image data and equations. Following the latest advances in natural language processing, the transformer DistilBERT \cite{sanh2020_distilbert} is used to process the text section. For the references, a frequency histogram-embedding with a subsequent multi-layer perceptron is used. We leverage all publicly available arXiv \cite{clement2019_arxiv} submissions, that amount to more than 2 million research papers, to construct a new dataset tailored to this hybrid AA approach. The dataset includes text content as well as the references cited in a paper. On the largest arXiv-subset with 2070 candidate authors, we achieve an AA accuracy of $70\%$, while, on smaller sets with 50 possible authors, well over 90\%. We find that already the first 512 words of a manuscript (including the abstract, when available, and parts of the introduction) lead to more than 60\% of the papers being attributed correctly. Furthermore, the experiments clearly show that self-citations contribute to correct attribution by up to 25\% more than when self citations are omitted \subsubsection{Contributions} In summary, we make the following contributions. First, we present a novel deep-learning-based architecture capable of analysing and classifying hundreds of thousands of research texts and references from arXiv to address the AA problem. Second, to train this architecture, we build a large-scale dataset based on the research publications available on arXiv. Lastly, we identify the key aspects of a paper that make it vulnerable to be deanonymized during a double-blind review process.
\section{Locality and nothing more?} Bell's discussion of the premises behind his theorem, which differs substantially from his original approach, is given most extensively in his final published paper `La nouvelle cuisine'~\cite{Bell2004}. We will base our discussion on it, as the work is often referred to by advocates of (A). In this work Bell fully defines his new notion of local causality. Consider two space-like separated parties, Alice and Bob. Each of them has a choice between a number of measurement settings. Denote, respectively, by $x$ and $y$ Alice's and Bob's choice of measurement settings and by $A$ and $B$ their outcomes. Correlations of the outcomes are described by a conditional probability distribution $p(A,B|x,y)$. It is argued that the situation described by $p(A,B|x,y)$ may arise out of a statistical mixture of different situations, traditionally labeled by $\lambda$ and sometimes called `causes'. The probabilities therefore acquire the following form: $p(A,B|x,y)=\int d\lambda \rho(\lambda) p(A,B|x,y,\lambda)$, where $\rho(\lambda)$ is a probability distribution. The properties of conditional probabilities allow one to put $ p(A,B|x,y,\lambda)= p(A|B,x,y,\lambda) p(B|x,y,\lambda)$. Next, local causality, stated by Bell as `The direct causes (and effects) of [the] event are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light', allows one to state that `what happens on Alice's side does not depend on what happens on Bob's side' and {\em vice versa}~\cite{Gisin}. This results in $p(A|B, x, y, \lambda) = p(A|x,\lambda)$ and $p(B|x,y,\lambda)=p(B|y,\lambda)$. Finally, we obtain the general mathematical structure underlying derivations of all Bell's inequalities: \begin{equation} p(A,B|x,y)=\int d\lambda \rho(\lambda) p(A|x,\lambda) p(B|y,\lambda). \label{condition} \end{equation} Proponents of (A) then argue that besides `locality' nothing more has been assumed in the derivation of Eq.~(\ref{condition}). Obviously determinism is not required. The condition~(\ref{condition}) holds also for any local stochastic (indeterministic, hidden variable) theory. While a deterministic theory specifies which outcome, exactly, will happen under a given $\lambda$, a stochastic theory specifies only the probabilities for the various outcomes that might be realized. (Note that the former is a special case of the latter, with the probabilities always being exclusively 1 or 0.) So where, if at all, could ``realism'' of any sort be hidden in condition~(\ref{condition})? Note first that in the case of {\em mixed} separable states, one may think that $\lambda$ specifies the ``actual'' quantum state in the probabilistic mixture. All this would agree with the formula~(\ref{condition}). However, when the joint quantum state is an entangled one, and especially if it is additionally pure, and no additional $\lambda$'s are introduced, one cannot have a factorization like~(\ref{condition}), see e.g. Ref.~\cite{Gisin}, or Appendix B. Thus the formula (\ref{condition}) is equivalent to the introduction of additional hidden parameters, $\lambda$'s, which are {\em not} present in quantum theory. The $\lambda$'s, which enter Eq.~(\ref{condition}), can pop up under many guises such as, e.g., `the physical state of the systems as described by any possible future theory'~\cite{Gisin}, `local beables', `the real state of affairs', `complete description of the state', etc. 
Since the $\lambda$'s do not appear in quantum mechanics, they are (good old) {\em hidden variables}. Anything on which one conditions probabilities, and which gives a {\em different structure} to the formulas for probabilities than the quantum mechanical formalism\footnote{The basic quantum structure is $\mathrm{Tr}\,\rho \Pi$, where $\rho$ is the state and $\Pi$ is a projector.}, is a hidden variable {\em per se}. Bell himself writes `$\lambda $ denote any number of hypothetical additional variables needed to complete quantum mechanics in the way envisaged by EPR'~(\cite{Bell2004}, page 242). This sentence of Bell's is often forgotten by supporters of (A). As we will see next, we can try to introduce $\lambda$'s not existent in quantum theory, but this will result in a condition which is stronger than locality alone. The condition~(\ref{condition}) allows assignment of (positive and properly normalized) {\em joint} probabilities for the (local) values of the entire set of pairs of (local) measurements. Denote as $A_{x}$ the value of the outcome $A$ pertaining to the situation in which Alice chooses setting $x$, which, for example, in the Clauser-Horne-Shimony-Holt (CHSH)~\cite{CHSH} scenario has two possible values, denoted here as $1,2$, and similarly define $B_{y}$ ($y=1,2$). We can always introduce a joint probability \begin{eqnarray} &p(A_{1}, A_{2},B_{1}, B_{2}|\lambda)&\nonumber\\ & = p(A_1|x=1,\lambda)\, p(A_2|x=2,\lambda)\, p(B_1|y=1,\lambda)\, p(B_2|y=2,\lambda).& \end{eqnarray} When the probabilities are 0 or 1 the model is deterministic, otherwise it is stochastic. Hence, we have defined a joint probability for all possible outcomes under all possible pairs of settings in the CHSH scenario. These outcomes include, for a single run of a Bell experiment, the actually measured ones and, on an equal footing, the ones which could have been potentially measured. Note that $\lambda$ is no longer necessary here, as to get predictions for such a model it is enough to know $p(A_{1}, A_{2},B_{1}, B_{2})$, which is given by $\int d\lambda p(A_{1}, A_{2},B_{1}, B_{2}|\lambda)\rho(\lambda)$. Starting from Eq.~(\ref{condition}) one can introduce the joint probability $p(A_{1}, A_{2}, ..., B_{1}, B_{2},...)$ for an arbitrary number of settings. Note that the existence of such a joint probability implies the existence of $p(A_{1}, A_{2}, ...)\geq 0,$ which has nothing to do with locality and is already in conflict with the Kochen-Specker theorem~\cite{KOCHEN}. Conversely, starting from the existence of the joint probability one can derive~(\ref{condition}) for any pair of local settings. This is because of the following fact of Kolmogorovian probability theory, which is the standard axiomatization of classical probability: if $\Omega$ is a probability space with probability measure $\rho(\lambda)$ and $A$ is a measurable set contained in $\Omega$, then the indicator function $\chi_A(\lambda)$, which is $1$ if the element (elementary event) $\lambda$ belongs to $A$ and $0$ when $\lambda$ belongs to the complement of $A$, that is $\Omega \setminus A$, gives the probability of the event $A$ by the formula $P(A)=\int_{\Omega}\chi_A(\lambda)\rho(\lambda)d\lambda$. The probability $p(A_{1},A_{2},B_{1}, B_{2})$ can be modelled by $p(A_{1},A_{2},B_{1}, B_{2}|\lambda) = \chi_{A_1}(\lambda) \chi_{A_2}(\lambda) \chi_{B_1}(\lambda) \chi_{B_2}(\lambda) $, that is, one can put $p(A_{1},A_{2},B_{1}, B_{2}) = \int_{\Omega} \chi_{A_1}(\lambda) \chi_{A_2}(\lambda) \chi_{B_1}(\lambda) \chi_{B_2}(\lambda) \rho(\lambda) d\lambda$. 
Then, by calculating the marginals, i.e. summing the probabilities over the outcomes of the observables which are {\em not} measured in an actual experiment, one obtains: $P(A,B|x,y)=\int_{\Omega}\chi_{A_x}(\lambda)\chi_{B_y}(\lambda)\rho(\lambda)d\lambda, $ where $\chi_{A_x}(\lambda)$ and $\chi_{B_y}(\lambda)$ are the indicator functions for the events $A_x$ and $B_y$ respectively. This is, of course, equivalent to Eq.~(\ref{condition}). Thus, we see that the local causality condition (\ref{condition}) is mathematically equivalent to the assumption of joint probabilities, $p(A_1,A_2,B_1,B_2)$. The latter {\it is} a form of realism: complementary observables are treated as mere numbers (`c-numbers' in Dirac's terminology~\cite{DIRAC}). One can speak about the joint probability of $A_1,A_2$ and $B_1,B_2$ only if, in a theoretical construction, all these values coexist together independently of which experiment is actually performed on either side, and in this sense are ``real''. Expressing it differently, the existence of a proper distribution $p(A_1,A_2,B_1,B_2)$ means that we model everything with Kolmogorovian probabilities, which always have a lack-of-knowledge interpretation, with the underlying $\Omega$ treated as a sample space. Indeed, according to the previous results due to Fine~\cite{Fine}, Hall~\cite{Hall}, Gill {\em et al.}~\cite{Gill} and others, the probabilities of a local stochastic model {\em can} always be understood as stemming from our ignorance about supposed local deterministic values. However, we do not need to insist on the existence of deterministic values of $A_1,A_2$ and $B_1,B_2$. The supposed existence of their joint probabilities is sufficient for our argument. In fact, the entire discussion on local deterministic vs. stochastic models -- on which the proponents of view (A) build up their argument -- is irrelevant for the current discussion. This is most evident if one takes into account the Greenberger-Horne-Zeilinger (GHZ)~\cite{GHZ} type of argument, which involves only perfect correlations between {\em three} (or more) systems, which no stochastic model can ever recover. (Note that it is symptomatic that followers of view (A) always base their argument on correlations between two systems.) Furthermore, assuming a fundamental local indeterminism (that is, some fundamental finite irreducible stochasticity, which does not allow one to use deterministic models -- even as only a mathematical tool) would lead to different bounds of the Bell inequalities. We show this in Appendix A. Somewhat ironically, not just {\em any} assumption of realism but its most stringent version, namely determinism itself, must be invoked to obtain the proper bounds of Bell's inequalities. The following observation can be made. Some researchers are willing to accept that an outcome measured on a single (local) system may be intrinsically probabilistic. However, they do not accept the same for correlations between outcomes measured on several such systems; the probabilities for correlations are always taken to be reducible to probabilities for local outcomes in a form like (\ref{condition}). Confronted with the experimental violation of Bell's inequalities, they accept an inherently probabilistic explanation for an individual quantum system but hold fast to a pseudo-causal\footnote{That is, not obeying relativistic causality.} nonlocal one for correlations (e.g. by introducing faster-than-light influences between distant quantum systems). 
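The equivalence just described can also be illustrated numerically. The following short sketch (our own illustration, not part of the original argument) draws random joint probability tables $p(A_1,A_2,B_1,B_2)$ over $\pm 1$ outcomes and evaluates the CHSH combination; whatever table is drawn, the value never exceeds 2, whereas quantum mechanics reaches $2\sqrt{2}$.

\begin{verbatim}
import numpy as np
from itertools import product

# All 16 joint assignments of (A1, A2, B1, B2) with outcomes +/-1.
outs = np.array(list(product([-1, 1], repeat=4)))

def chsh(p):
    """CHSH combination <A1B1> + <A1B2> + <A2B1> - <A2B2> for a joint table p."""
    corr = lambda i, j: np.sum(p * outs[:, i] * outs[:, j])
    return corr(0, 2) + corr(0, 3) + corr(1, 2) - corr(1, 3)

rng = np.random.default_rng(0)
values = []
for _ in range(10000):
    p = rng.random(16)
    values.append(abs(chsh(p / p.sum())))   # random joint probability table

print(max(values))        # always <= 2: the Bell (CHSH) bound
print(2 * np.sqrt(2))     # the quantum (Tsirelson) value, ~2.83
\end{verbatim}

The point is exactly the one made above: once a single Kolmogorovian joint distribution for all four observables is granted, the CHSH bound follows; no separate assumption about determinism is needed to get it.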
\section{Derived realism?} The school of thought (B) claims that one does not need any version of the assumption of realism, or hidden variables, to derive Bell's inequalities. Rather, it is purported that everything is deducible via EPR-type reasoning. The current classic expositions of this line of thought can be found in Refs. \cite{NORSEN-1,NORSEN-2,TUMULKA}. For example, the argumentation in Ref.~\cite{TUMULKA} tries to oppose the results of the ``Free Will Theorem'' of Ref.~\cite{CONWAY}. To this end, the following two logical statements are both claimed to be true. The first one is that the logic of Bell's theorem lies in the validity of the following implication (we use the following terminology for the structure of a logical implication: $assumption\Rightarrow thesis$): \begin{equation} freedom\hspace{1mm} \& \hspace{1mm} QF\hspace{1mm} \& \hspace{1mm} locality \Rightarrow contradiction, \label{T-BELL} \end{equation} where $freedom$ stands for the assumption of freedom of choice of the local measurement settings, $locality$ for the locality assumption, and $QF$ for the quantum formalism. On the other hand, the logic of the EPR paper supposedly allows one to establish yet another implication as valid: \begin{equation} freedom\hspace{1mm} \&\hspace{1mm} QF \hspace{1mm} \& \hspace{1mm} locality \Rightarrow determinism. \label{T-EPR} \end{equation} The validity of these statements is used to challenge the implication of Ref.~\cite{CONWAY}, which, as put in Ref.~\cite{TUMULKA}, is \begin{equation} freedom\hspace{1mm} \&\hspace{1mm} QF\hspace{1mm} \&\hspace{1mm} locality \hspace{1mm} \&\hspace{1mm} determinism\Rightarrow contradiction. \label{WILL} \end{equation} Obviously, if the implication (\ref{T-BELL}) is valid, the assumption which leads to it cannot hold. This is because {\em contradiction} means a false statement (like $2\geq 2\sqrt{2}$). The rules of Aristotelian two-valued logic say that if the thesis of a {\em valid} (that is, true) implication is false, then its assumption must be false too ({\em modus tollens}, see the truth table of logical implication). Thus the assumption of (\ref{T-EPR}), as it is the same as that of (\ref{T-BELL}), must be false, and one cannot `determine' whether determinism is true from the validity of the implication (\ref{T-EPR}): from false statements one can derive, via a valid implication, both true and false statements (see again the truth table). Of course, the first implication~(\ref{T-BELL}) is highly appealing to proponents of (A), while the second one~(\ref{T-EPR}) is to supporters of (B). The first implication alone would mean {\em non-locality} of quantum mechanics (provided freedom holds). But, {\em ``it ain't necessarily so''}~\cite{GERSHWIN}. It is the implication (\ref{WILL}) which correctly describes the situation, see below. \subsection{EPR Reasoning} The works \cite{NORSEN-1} and \cite{NORSEN-2} are based on an assumption of the validity of the EPR reasoning. For example, in Ref.~\cite{NORSEN-2} one can find that `the existence of these local, deterministic, non-contextual hidden variables [...] is not simply assumed, but is inferred from Locality plus a certain subset of the quantum mechanical predictions, using (in essence) the EPR argument.' The work \cite{NORSEN-1} is aimed at showing that: `The hidden variables posited by Bell are not an `ad hoc assumption' but, rather, a logical implication of locality.' The reasoning of EPR is often presented in the form of (\ref{T-EPR}), but this is wrong. 
The logical structure of the EPR argumentation about elements of reality is \begin{eqnarray} freedom \hspace{1mm} \&\hspace{1mm} QF\hspace{1mm} \&\hspace{1mm} locality \hspace{1mm} \& \hspace{1mm} counterfactual \hspace{1mm} definiteness &&\nonumber \\ \Rightarrow for\hspace{1mm} specific\hspace{1mm}observables\hspace{1mm} elements\hspace{1mm}of\hspace{1mm}reality\hspace{1mm}exist \hspace{1mm}for\hspace{1mm}the\hspace{1mm} EPR\hspace{1mm}state. &&\label{EPR-COR}\nonumber \\ \end{eqnarray} The thesis of the implication is true, provided one does not make an unfounded generalization of it to {\em arbitrary} observables and {\em arbitrary} states. Indeed, for the EPR state and the momentum and position observables ($P$ and $Q$), elements of reality seem to be a consistent notion. However, this is arrived at by considering two situations of which only one can be the case in a given run of the experiment (measuring either $P$ or $Q$, page 780 of \cite{EPR}). This is counterfactual definiteness at work. As a matter of fact, the generalization mentioned above is the effective claim of EPR, as they aimed to prove the incompleteness of the entire theory of quantum mechanics. A missing (hidden) part of the theory would be, according to them, {\em inherently deterministic ``elements of reality''}. According to EPR, since these correspond to values of observables which can be predicted with certainty (i.e., with probability equal to unity), an act of measurement just displays the previously hidden values. However, there is a logical flaw in the paper of EPR. The general existence of elements of reality is just their conjecture, based on just {\em one} example, while they claim that it is their thesis, holding always. Why a conjecture? EPR introduced elements of reality just for the `original EPR' state. And this was successful, that is, it did not lead to a contradiction, only because they limited themselves to specific observables, which do not exhaust all possible ones. Note here that for different observables and the original EPR state, one can find in Ref.~\cite{BANASZEK} a proof of the internal inconsistency of the EPR concepts. In their conclusions EPR tacitly assumed that one can establish local elements of reality for all states with perfect correlations, and for all observables. This is wrong, as can be directly shown in the case of the GHZ states. The GHZ reasoning shows that the very definition of elements of reality must be wrong, as it implies contradictory values (a $1=-1$ contradiction in the case of GHZ correlations, or $2\sqrt{2}\leq 2$ in the case of the CHSH version of the Bell theorem). This means that the thought-provoking paper of EPR does not contain a valid general statement on quantum mechanics! One could only {\em counterfactually} wonder what would have been the views of EPR, had the GHZ paper appeared before 1935. Note further that the {\em thesis} of Bohm's version of the EPR implication \cite{BOHM} \begin{eqnarray} &freedom \hspace{1mm} \&\hspace{1mm} QF\hspace{1mm} \&\hspace{1mm} locality \hspace{1mm} \& \hspace{1mm} counterfactual \hspace{1mm} definiteness &\nonumber \\ &\Rightarrow elements\hspace{1mm}of\hspace{1mm}reality\hspace{1mm}exist \hspace{1mm}for\hspace{1mm}the\hspace{1mm} two-spins-1/2-singlet\hspace{1mm}state, &\label{EPR-COR-2}\nonumber\\ \end{eqnarray} is plainly {\em{not}} true. The thesis of this implication leads to a $2\geq2\sqrt{2}$ contradiction, if one forms the CHSH inequality for the elements of reality and again uses the quantum predictions. 
Of course, counterfactual definiteness was a tacit assumption in Bohm's reasoning. Comparing the relation (\ref{EPR-COR}) with the views of proponents of (B), one sees the following differences from the simplified description of the EPR result given by (\ref{T-EPR}). The assumption of {\em freedom} is sometimes thought to encompass also {\em counterfactual definiteness}, and pre-determined values are thought to be derivable for all states with perfect correlations. The second is invalidated by the Bell theorem and even more strikingly by the GHZ reasoning. However, the basic problem here is that freedom does not encompass counterfactual definiteness. This is evident, for example, in the case of the CHSH inequality, where we discuss four situations (four possible pairs of local settings), only {\em one} of which can actually occur for the given pair of particles (run of the experiment). However, the proper bounds of the inequalities (see Appendix A) are derived by an algebraic manipulation of values for {\em a single pair}, for the actual situation and three counterfactual ones (``had one or both of the observers chosen different settings'')\footnote{ For a more extended analysis see, e.g., the recent Ref. \cite{STAIRS}.}. The results that would have been obtained in such cases are then treated as unknown, but nevertheless definite, real numbers ($\pm 1$ in the case of the CHSH scenario). This is counterfactual definiteness {\em per se}. In this way an effective determinism enters. Counterfactual definiteness is directly irreconcilable with Bohr's complementarity principle. The principle says that, once a value of an observable is measured, one is not allowed to even speak about values for complementary observables. This is reflected by the quantum formalism, according to which probability distributions for complementary (non-commuting) observables are in general not defined, or rather they are {\em undefinable} within the theory. Thus, the introduction of hidden variables, determinism, counterfactual definiteness, etc. is not a minor point, a soft option, or something so obvious that it may be treated as a tacit, indisputable assumption. It goes directly against the very essence of quantum mechanics. \section{Conclusions} The terms `nonlocality' or `quantum nonlocality' suggest that there is some `spooky' faster-than-light influence between distant quantum systems (for a critical analysis see Ref.~\cite{Hall}). While the possibility of such an influence cannot be excluded, it is just a one-sided view. A failure of realism in all of its forms (for example, the futility of considering joint probabilities for outcomes of the entire set of conceivable measurements, or the rejection of counterfactual reasoning) is an equally valid option, as is a failure of both realism {\em and} locality. In ``La nouvelle cuisine'' Bell describes the failure of {\em local causality} in the quantum world. This failure does not mean that we have to accept non-local causality. Individual events may have a {\em spontaneous, acausal} nature. There seems to be no need to go beyond quantum mechanics in this respect~\footnote{In his masterpiece book~\cite{PERES}, Peres, when discussing Bell's theorem, uses the phrase `non-locality is inescapable'. However, a careful reader would notice that he firmly assumes that counterfactual reasoning is acceptable (statements like: had we measured a different observable than the one actually measured, we would have obtained a value, say $x$). 
Only by using counterfactual reasoning can measured and unmeasured, but potentially measurable, variables be put on an equal footing. If one does not consider this assumption as disputable, non-locality is an inescapable consequence of Bell's theorem. In his later work with Fuchs~\cite{PERES-FUCHS}, he rejects non-locality. Also his well-known slogan `unperformed experiments have no results' \cite{UNPERFORM} is a clear rejection of counterfactual definiteness.}. Paradoxically, it was Einstein who reluctantly introduced the notion of spontaneous events, which might be, after all, the root of Bell's theorem. The lesson for the future could, however, be that we should build the notion of locality on the operationally clear ``no-signalling'' condition -- the impossibility of transferring information faster than light. After all, this is all that the theory of relativity requires. The moral of the story is that Bell's theorem, in all its forms, tells us {\em not} what quantum mechanics {\em is}, but what quantum mechanics {\em is not}. \section{Appendix A} We show that inherently stochastic local hidden variable theories (that is, theories with some fundamental finite irreducible stochasticity) lead to lower bounds than the standard ones for Bell inequalities. To this end we derive a Bell inequality for a local {\em strictly} stochastic theory. The latter is equivalent to the requirement that all $p(A|x,\lambda)$ and $p(B|y,\lambda)$ are strictly different from 0 and 1. Therefore, suppose that there is an inherent stochasticity parameter $s>0$, which gives the following bounds: for any $\lambda$ \begin{equation} s<p_s(X|z,\lambda)<1-s, \label{BOUND} \end{equation} where $X=A,B,$ and $z=x,y$. As Bell's inequalities are in the form of multilinear combinations (functions) of the underlying probabilities, their bounds (maxima and minima) are attained at the border of the region of validity of the probabilities. In the standard case the borders are $0\leq p(X|z,\lambda)\leq1$. Thus, the standard bounds cannot be reached in the case of probabilities satisfying (\ref{BOUND}). For example, take the CHSH inequality~\cite{CHSH}. With (\ref{BOUND}), for every $\lambda$ one can introduce the expectation value $I(z, \lambda)=\sum_{X=\pm1}X p(X|z,\lambda)$ (we assume here the usual spectrum of the Bell observables: $X=\pm1$). Obviously $-1+2s\leq I(z, \lambda)\leq 1-2s$. Thus, the `renormalized' variables $I'=\frac{1}{1-2s}I$ give the usual bound of the CHSH inequality, that is, 2. However, we have \begin{eqnarray} &\langle I(x,\lambda)I(y,\lambda)+I(x,\lambda)I(y',\lambda)&\nonumber\\ &+I(x',\lambda)I(y,\lambda)-I(x',\lambda)I(y',\lambda)\rangle\leq 2(1-2s)^2<2.&\nonumber \end{eqnarray} The bound\footnote{ One can choose an infinitesimally small but nonzero $s$ such that the new bound is arbitrarily close to 2 and still we have, in the strict mathematical sense, a non-deterministic theory. However, the different viewpoints (here determinism vs. nondeterminism with an infinitesimal $s$) would not be operationally distinguishable.} of the CHSH inequality in a local (intrinsically) stochastic theory is strictly smaller than 2. One must assume {\em both} locality and determinism in order to obtain the proper bounds for Bell's inequalities\footnote{Note that in order to obtain the usual bound it is enough to assume that, e.g., the results of Bob are {\em deterministic}, and that one of the results on Alice's side is deterministic too. Of course such an asymmetry of indeterminism is ridiculous. 
This is why we assumed an irreducible stochasticity, $s$, on both sides of the experiment, for all settings.}. \section{Appendix B: Alternative analysis} One can also formulate the analysis of the assumption of local causality in an alternative way. The principle of local causality, $p(A|B,x,y,\lambda)=p(A|x, \lambda)$ and $p(B|A,x,y,\lambda)=p(B|y, \lambda)$, implies in {\em the case of correlated systems}: \begin{itemize} \item An easily derivable relation \begin{equation} p(A|x,y,\lambda)=p(A|x, \lambda), \end{equation} which is a form of the {\em no-signaling condition}. This is because under local causality $p(A|x,y,\lambda)=\sum_B P(A,B|x,y,\lambda)=p(A|x, \lambda)\sum_B P(B|y,\lambda)=p(A|x,\lambda).$ \item The existence of {\em at least two different} values of the `causes' $\lambda$, which makes $\lambda$ a non-trivial parameter (variable) outside the quantum formalism (i.e., a non-trivial hidden variable). \end{itemize} Why the second implication? If one assumes just {\em one} common cause and local causality, this immediately leads to {\em no} correlations. To see this consider the following relations: \begin{equation} P(A,B|x, y, \psi)= P(A|B,x, y, \psi) P(B|y, \psi)= P(A| x, \psi) P(B|y, \psi), \end{equation} where $\psi$ is the sole cause, which can be thought of as a pure (entangled) quantum state describing the preparation. Here we first used the rules of conditional probability and then local causality. The factorization means: {\em no} correlations. In order to describe correlations one {\em must} have at least two values for $\lambda$, and this implies that $\lambda$ is outside of the quantum formalism, as the only common cause allowed by quantum mechanics, in the case of pure quantum states, is the quantum state itself. Just one $\lambda=\psi$ cannot give us any correlations whatsoever. Thus locally causal theories (of correlated systems) are a subset of non-trivial hidden variable theories (ones allowing for $\lambda$ outside of the quantum formalism) which are additionally local (i.e., which impose $p(A|x,y, \lambda)=p(A|x,\lambda )$). That is, a locally causal theory is a specific example of a local hidden variable theory. Quantum mechanics (QM) and quantum field theory (QFT) are no-signalling. In QFT it is assumed that for two gauge-invariant observables $O_1, O_2$ at two spatially separated space-time points $\xi_1, \xi_2$ one has $[O_1(\xi_1),O_2(\xi_2)]=0$, and in QM one has, for the results of a local observable $x$ defined on one of the local subsystems, $P(A|x,y,\psi)=P(A|x,\psi)$ (the meaning of the symbols is as above). Note that both QFT and QM are {\em not} locally causal, but this is precisely the point of Bell's theorem. Briefly, if one wants to work outside local hidden variable theories, one cannot even formulate the local causality principle. Locally causal hidden variable theories assume first of all non-trivial hidden variables $\lambda$, and next that a form of the locality principle is satisfied, in the form of $P(A|B, x,y, \lambda)=P(A|x, \lambda)$, etc. \section{Acknowledgements} MZ is supported by a CHIST-ERA NCBiR project QUASAR. {\v C}B has been supported by the European Commission Project RAQUEL, the John Templeton Foundation, FQXi, and the Austrian Science Fund (FWF) through CoQuS, SFB FoQuS, and the Individual Project 2462.
\section{INTRODUCTION} \label{sec:intro} GRAVITY is a four-telescope beam combiner instrument for the Very Large Telescope Interferometer (VLTI) at the Paranal Observatory in Chile. The K-band light from four telescopes will be combined by GRAVITY for two objects simultaneously. In this way the beam combiner will not only be capable of high resolution imaging with a beam width of $\unit{4}{\milli\arcsec}$ but will also provide narrow-angle astrometry to a precision of $\unit{10}{\micro\arcsec}$, far beyond the current limits. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=4cm]{dual.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:dOPD} Definition of dOPD. The differential optical path difference (dOPD) between two stars is the product $\Delta s \cdot B$ of the angular separation $\Delta s$ between the objects and the interferometer baseline $B$. This quantity can be derived by measuring the distance d between the white-light positions of the respective interference patterns of the targets when internal and atmospheric optical path differences are subtracted \cite{glindemann}.} \end{figure} One of the primary science goals targeted with these capabilities will be to probe physics in the Galactic Center close to the event horizon of the supermassive black hole. There, effects predicted by the theory of general relativity are expected to take place that cannot be observed in other, less extreme environments. Dynamical measurements taken with GRAVITY's astrometric mode are therefore the key. The idea is to observe two objects, one science and one reference target, simultaneously and to measure their angular separation by determining their so-called Differential Optical Path Difference (dOPD) as illustrated in Figure \ref{fig:dOPD}. However, measuring the angular separation between the targets to a precision of $\unit{10}{\micro\arcsec}$ not only requires measuring their dOPD to a level of a few nanometers, but also requires that all paths within the interferometer from the telescopes down to the instrument be maintained stable to that level. This is where the metrology system of GRAVITY comes into play, which will monitor internal dOPDs by means of phase-shifting interferometry. In the following, the basic working principle of the GRAVITY metrology system is presented. \section{The working principle of the GRAVITY metrology system} \subsection{Narrow-angle astrometry} The underlying idea of the metrology system is to trace the optical paths of the science light within the interferometer by a laser beam, which is illustrated in Figure \ref{fig:workingprinciple}. The laser beam travels the paths in the opposite direction, being injected behind the beam combination facility in GRAVITY and going all the way backwards through GRAVITY and the VLTI infrastructure to the telescopes. The laser light is first split into two equally bright beams, each of which is injected into one of the two beam combiners of the reference and science object. Before the injection a phase-shifting device is integrated in one of the beams such that in the end the interference patterns of the two beams at the telescope pupils can be sampled by phase-shifting interferometry. For this reason a four-step phase-shifting algorithm is implemented in GRAVITY which applies four phase shifts A, B, C and D accompanied by the four corresponding intensity measurements at each individual telescope. This allows reconstructing the phases of the interfering laser beams. 
These phases correspond to the internal dOPDs which have to be subtracted from the respective phases of the science light beams measured at the GRAVITY detectors in order to obtain the true dOPD on sky between the targets. In more detail, a short mathematical derivation, considering only two telescopes for simplicity, can show that in such a design the dOPD needs to be calculated from three types of quantities: the science phases $\Psi_A$ and $\Psi_B$, measured at the detectors A and B, the metrology phases $\Phi_1$ and $\Phi_2$ at telescopes 1 and 2, as well as two terms related to the non-common path between metrology and science light, $\delta_1$ and $\delta_2$: \begin{equation} dOPD = \frac{\lambda_A}{2\pi}\Psi_A - \frac{\lambda_B}{2\pi}\Psi_B + \frac{\lambda_L}{2\pi} \left( \Phi_2-\Phi_1 \right) - \frac{\lambda_L}{2\pi} \left( \delta_2 - \delta_1 \right) \hspace{0.5cm} , \end{equation} where $\lambda_A$ and $\lambda_B$ denote the effective wavelengths of the light of star A and B and $\lambda_L$ stands for the wavelength of the metrology laser. Equivalent results are obtained when calculating the general case for four telescopes with six possible baselines. The difference of the non-common path terms $\delta_1$ and $\delta_2$ will be calibrated by swapping the light of two nearby stars between the beam combiners such that the zero points of the metrology phases are found. The actual adjustment will not be applied online but in the data reduction. Thus, the relevant quantities that have to be tracked by the metrology during observations are its phases at each telescope, which correspond to internal dOPDs of the interferometer between the beam combination and the telescopes. As already mentioned before, the phases of the metrology light are extracted by means of a four-step phase-shifting algorithm that will be briefly introduced now. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=7cm]{Gillessen2012workingprinciple.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:workingprinciple} The GRAVITY metrology system demonstrated in a schematic overview for two telescopes. A laser beam is launched which traces the optical paths from the GRAVITY beam combiners (BC A, BC B) backwards to the telescopes (T1, T2). There, fringe patterns are formed by the interfering metrology laser beams and sampled by four photodiodes per telescope. The fringes can be moved across the diodes by the phase-shifting device implemented in the chain at a kHz-rate. Via this method the phase information can be extracted from the measured intensities at the diodes that reconstruct the fringes \cite{gillessen12}.} \end{figure} \subsection{Four-step phase-shifting algorithm} \label{sec:title} The measurement of the internal dOPDs in the optical paths from the VLTI to the instrument GRAVITY is accomplished by determining the phases of fringe patterns that form in the pupil planes. The corresponding intensity patterns are sampled at a few points with photodiodes by applying phase shifts to one of the two metrology laser beams. Simultaneously, the resulting intensities of the shifted fringes are measured at the photodiodes, which encode the OPD information. A minimum of three phase shifts, and correspondingly three intensity measurements, are necessary to determine the intensity pattern and thereby the fringe phase. 
In this respect, the GRAVITY metrology uses four shifts $\alpha_i=\left\lbrace 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2} \right\rbrace $ with $i\in\left\lbrace A,B,C,D\right\rbrace $ in order to determine the intensity distribution unambiguously, a scheme denoted as the ABCD algorithm. According to the theory of interference of two beams with intensities $I_1$ and $I_2$, the intensities measured for the individual phase steps should lie on the distribution $I(x,y,\alpha_i)$ as a function of position (x,y) on the detection area \begin{equation} I(x,y,\alpha_i) = I_1 + I_2 + 2\sqrt{I_1 I_2}\sin(\Phi(x,y) + \alpha_i) \hspace{0.5cm} , \label{eq:fringes} \end{equation} where $\Phi(x,y)$ denotes the fringe phase. These four shifts and intensities can be written as four such equations with three unknown variables $I_1$, $I_2$ and $\Phi$. Solving this system of equations for the variable of interest, the fringe phase $\Phi$, leads to the simple relation below if the phase shifts correspond to exact multiples of $\frac{\pi}{2}$: \begin{equation} \Phi=\arctan\left( \frac{I_A - I_C}{I_B- I_D}\right) \hspace{0.5cm} . \label{eq:phi} \end{equation} In case the phase shifts $\alpha_i$ deviate from the nominal values this formula can be generalized if the deviations are known, but it is foreseen to calibrate the nominal shifts to extract the metrology phases \cite{sahlmann}. The uncertainties in the phase extraction of the metrology enter the total astrometric error budget of GRAVITY, which is shown in the next section. \subsection{Astrometric error budget} As mentioned above, the simple four-step algorithm implemented in the metrology will be based on exact multiples of $\frac{\pi}{2}$ being applied as phase shifts. This requires a calibration of the phase-shifting device. As a consequence, the precision of this calibration enters the uncertainty of the phase measurement. Following from the error propagation of Equation \ref{eq:phi} the phase uncertainty squared $\delta \Phi ^2$ amounts to: \begin{eqnarray} \delta \Phi ^2 &=& \left( \frac{\partial \Phi}{\partial I_A} \delta I_A\right)^2 + \left( \frac{\partial \Phi}{\partial I_B} \delta I_B\right)^2 + \left( \frac{\partial \Phi}{\partial I_C} \delta I_C\right)^2 +\left( \frac{\partial \Phi}{\partial I_D} \delta I_D\right)^2 \\ &=& \frac{\delta I^2}{2I_0^2} \hspace{0.5cm} , \label{eq:difference} \end{eqnarray} assuming that the four intensity measurements have equal uncertainties $\delta I:=\delta I_A=\delta I_B=\delta I_C=\delta I_D$ and $I_1=I_2=I_0$. Thus, the phase uncertainty in radians is given by the reciprocal of $\sqrt{2}$ times the signal-to-noise ratio $I_0/\delta I$. For the shortest baselines the astrometric error budget leaves roughly a few nanometers to the phase measurement accuracy within 5 minutes, such that all individual uncertainties introduced by the devices involved in this measurement should amount to fractions of this value, including the error coming from the calibration of the phase shifter. Table \ref{tab:astrometric} demonstrates the influence of phase shifter calibration errors of order $\lambda/4000$, $\lambda/2000$ and $\lambda/1000$ on the astrometric error budget. The goal therefore is to calibrate the phase shifter to a nm-level of accuracy or even lower in order to reach the specified astrometric precision. Before the details of developing an appropriate calibration routine are presented, the properties of the phase-shifting device used for the GRAVITY metrology are summarized. 
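As a quick illustration of Equation \ref{eq:phi}, the following short numerical sketch (our own, with arbitrarily assumed intensities and a test phase) generates the four ABCD samples according to Equation \ref{eq:fringes} and recovers the fringe phase; the quadrant-safe two-argument arctangent is used so that the full $2\pi$ range is covered.

\begin{verbatim}
import numpy as np

# Assumed values for illustration only: equal beam intensities and a test phase.
I1 = I2 = 1.0
phi_true = 0.73                                        # fringe phase to recover [rad]
alphas = np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])    # ABCD phase shifts

# Four intensity samples from the two-beam interference formula.
I_A, I_B, I_C, I_D = I1 + I2 + 2*np.sqrt(I1*I2)*np.sin(phi_true + alphas)

# Phase estimator arctan((I_A - I_C)/(I_B - I_D)), written in quadrant-safe form.
phi_est = np.arctan2(I_A - I_C, I_B - I_D)
print(phi_true, phi_est)                               # both 0.73
\end{verbatim}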
\begin{table}[h] \centering \begin{tabular}{l|l|l|l} \textbf{Error term} & $\boldsymbol{\lambda/4000}$ & $\boldsymbol{\lambda/2000}$ & $\boldsymbol{\lambda/1000}$ \\ \hline Phase shifter calibration error & \unit{0.5}{\nano\meter} & \unit{1}{\nano\meter} & \unit{2}{\nano\meter} \\ \hline Total metrology phase extraction error & \unit{0.94}{\nano\meter} & \unit{1.22}{\nano\meter} & \unit{1.81}{\nano\meter} \\ \hline Total GRAVITY OPD error & \unit{6.83}{\nano\meter} & \unit{6.87}{\nano\meter} & \unit{7.00}{\nano\meter} \\ \hline Astrometric error & \unit{10.27}{\micro\arcsec} & \unit{10.34}{\micro\arcsec} & \unit{10.53}{\micro\arcsec} \vspace{0.3cm} \end{tabular} \caption[Influence of the phase shifter calibration error on the astrometric error of GRAVITY]{Influence of the phase shifter calibration error on the astrometric error of GRAVITY. The numbers correspond to the observation case with four VLTI Unit Telescopes (UTs) during a period of \unit{5}{\minute}. The phase shifter calibration error is one of several terms, such as the performance of the metrology laser and receivers, that enter the total metrology phase extraction error. The latter also adds up quadratically with other errors to a total GRAVITY OPD error term which can be translated into an astrometric error. A phase shifter calibration error of \unit{2}{\nano\meter} would dominate the total phase extraction error of the metrology, which would then become the second largest contribution to the astrometric error.} \label{tab:astrometric} \end{table} \section{Calibration of the phase shifter} \subsection{Phase shifter} Our phase-shifting device of type MPX2000-LN-0.1, displayed in Figure \ref{fig:phaseshifter}, is a component from Photline Technologies. The phase modulation of optical signals is based on a birefringent lithium niobate crystal (LiNbO$_3$). The refractive index of this material can be varied in proportion to the strength of an applied electric field. Thus, the phase of an electro-magnetic wave passing the crystal can be influenced by setting a voltage. On the surface of the crystal a waveguide is implemented by titanium diffusion. This diffusion zone leads to an increase of the refractive index such that total internal reflection keeps the injected light within the waveguide \cite{wooten}. The manufacturer's specifications of the device are given in Table \ref{tab:specs} together with the corresponding requirements from the GRAVITY metrology design. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=4cm]{PSfoto.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:phaseshifter} Photline phase shifter of type MPX2000-LN-0.1 with input fiber to the left and output fiber to the right. The phase modulation is realized by applying a voltage to the radio-frequency (RF) input at the bottom of the phase shifter housing. } \end{figure} \begin{table}[h] \vspace{0.5cm} \centering \begin{tabular}{l|l|l} \textbf{Parameter} & \textbf{Photline specification} & \textbf{Metrology requirement} \\ \hline Operating wavelength & \unit{1900}{\nano\meter} -- \unit{2200}{\nano\meter} & \unit{1908}{\nano\meter} \\ \hline Electro-optic bandwidth & \unit{150}{\mega\hertz} & \unit{4}{\kilo\hertz} \\ \hline RF input voltage & \unit{-20}{\volt} -- \unit{+20}{\volt} & \unit{0}{\volt} -- \unit{10}{\volt} \\ \hline Optical input power & \unit{0.1}{\watt} & \unit{0.85}{\watt} \vspace{0.3cm} \end{tabular} \caption[Specifications of the phase shifter]{Specifications of the phase shifter. 
The device is designed for wavelengths between $\unit{1900}{\nano\meter}$ and $\unit{2200}{\nano\meter}$. The electro-optic bandwidth of the phase shifter amounts to $\unit{150}{\mega\hertz}$ and the modulation range is specified from $\unit{-20}{\volt}$ to $\unit{20}{\volt}$. For the GRAVITY metrology roughly $\unit{0}{\volt}$ to $\unit{10}{\volt}$ are used, which corresponds to a full wave at a wavelength of $\unit{2000}{\nano\meter}$, and a bandwidth of $\unit{4}{\kilo\hertz}$ is foreseen to operate the system. Long-term tests showed that the injected power level of $\unit{0.85}{\watt}$ is withstood by the component although it is specified for a maximum input of $\unit{0.1}{\watt}$ by the manufacturer Photline Technologies.} \label{tab:specs} \end{table} The birefringent crystal has two transmission axes with different refractive indices. Since the metrology laser light is polarized and guided in polarization-maintaining (PM) fibers, the polarization can be maintained in this respect when being aligned to one of these axes. Due to the presence of several connectors in the metrology fiber chain small misalignments can occur. For this reason, we spliced fiber polarizers to the input and output of the phase shifter to correct for these mismatches. \newpage The phase-shifting concept of the metrology system used to determine internal dOPDs in the GRAVITY interferometer requires that the accuracy of the applied phase shifts be of order $\unit{1}{\nano\meter}$ or less. In practical terms, this means that the translation between the voltage applied to the phase shifter and the resulting phase shift has to be known to that level. This relation can be measured in an interferometric test setup. In order to determine the relation between the applied voltage and the resulting phase shift we tested two methods, a linear scan of a full wave and a phase-step insensitive algorithm, both of which will be presented here. \subsection{Linear scan} In a first simple approach, the main idea was to scan the range of a full wave by setting a linear voltage ramp from $\unit{0}{\volt}$ to $\unit{10}{\volt}$ in an interferometric test setup, which is drawn on the left side of Figure \ref{fig:schemes}. As in the final metrology design, the laser light is split into two channels, of which one feeds the phase shifter. Then the beams are combined again and the interferometric signal is read out with a receiver using the same type of photodiodes as our final design. We average over a few hundred linear scans as in Figure \ref{fig:measurement}, at the maximum rate of $\unit{500}{\hertz}$ given by the response of our receiver design, to extract the intensity distribution of the interference pattern. Fitting a function of the following shape to the acquired data allows us to model the desired relation between voltage U and phase shift $\alpha$: \begin{equation} I(U) = \xi + I_1 + I_2(U) + 2\nu\sqrt{I_1 I_2(U)}\sin(\Phi + \alpha(U)) \hspace{0.5cm} . \label{eq:fringe} \end{equation} The quantities $I_1$ and $I_2(U)$ correspond to the transmission functions of the two interfering beams. The intensity $I_1$ denotes the transmission of the unmodulated arm. The intensity $I_2$ is the transmission of the phase-shifted arm. It turns out that the transmission of the phase shifter depends on the applied voltage, i.e. it is not independent of the applied phase step. We measure $I_1$ and $I_2(U)$ with linear scans by disconnecting one of the interfering fiber chains before we do the actual fringe measurement. 
$\xi$ and $\nu$ are parameters which we have to introduce to adjust the bias and contrast of $I$, which are not perfectly reconstructed by the previously measured $I_1$ and $I_2(U)$. \newpage If the relation $\alpha(U)$ of the phase shifter is linear, the intensity is a simple sine function and the four required voltages for the ABCD phase shifts are therefore easy to derive. However, phase-shifting devices can show non-linearities like the Kerr effect, such that the function $\alpha(U)$ is of a more complicated shape, as in our case, where a polynomial of 4th order is needed to fit the measured intensity distribution. This and other effects observed during the measurements strongly limit the achievable accuracy of the phase shifter calibration and make the related analysis of the measured data complicated. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=5cm]{measurement.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:measurement} Linear scans of a full-wave modulation. The sawtooth voltage (dashed blue line) is applied to the phase shifter while the sinusoidal function (red) corresponds to the measured fringes at the receiver. } \end{figure} In summary, the limiting factors in our measurements were the following: \begin{itemize} \item The relation $\alpha(U)$ between the voltage U applied to the phase shifter and the resulting phase shift $\alpha$ is non-linear. \item The measurements are influenced by laser power fluctuations of a few percent at frequencies from $\unit{0}{\hertz}$ to $\unit{50}{\hertz}$. \item The transmission of the phase shifter depends on the applied voltage with a peak-to-peak modulation of around $10^{-2}$, as shown in Figure \ref{fig:transmission}, probably due to low-finesse Fabry-Perot interference of Fresnel reflections in the phase shifter. \item Fringe drifts of $\unit{0.2}{\nano\meter}$ per full wave occur due to environmental instabilities like temperature fluctuations and vibrations, which add up in the complete measurement of several hundred periods. \item The birefringence of the phase shifter generates intrinsic contrast modulation due to polarization misalignments in the component. \end{itemize} These effects need to be taken into account when modeling the fitting function used for the measured intensity distribution. In this approach, we were able to determine phase shifts with a typical accuracy of $\unit{2}{\nano\meter}$, correcting for the non-linear transmission function and the fringe drifts, however only under the assumption that our model function $I(U)$, and in particular $\alpha(U)$, describes reality well. Thus, our main concerns were: \begin{itemize} \item Does our model describe reality well enough? \item Are the four ABCD values that we determine from this calibration scheme still correlated, given that they are not measured instantaneously and in particular suffer from environmental instabilities and laser power fluctuations? \end{itemize} \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=5.5cm]{scope_05.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:transmission} Measurement of the transmission modulation in the phase shifter. The corresponding setup consists of the laser source feeding the phase-shifting device and a receiver measuring its output intensity while voltage ramps from \unit{-10}{\volt} to \unit{+10}{\volt} were applied. The plot shows the averaged and normalized intensity distribution versus the applied voltage. 
} \end{figure} These results caused us to investigate other calibration methods which are less affected by non-linearities. We were able to overcome the main drawbacks mentioned above by using a phase-step insensitive algorithm discussed in the next section. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=7cm]{schemes.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:schemes} Two calibration schemes for the phase shifter. On the left side a simple interferometric setup with the phase shifter in one of the two interfering beams is displayed. With linear voltage ramps the fringes are scanned over at least a full wave and their intensity distribution is measured by a detector as a function of the applied voltage U in order to determine the phase shifts $\alpha$ that correspond to the ABCD steps. On the right, another setup is shown which includes two phase shifters. One phase shifter is used as a delay line which is continuously shifted by many different phase shifts $\beta$ at a slow rate. For every phase shift $\beta$ a phase step $|\alpha_A-\alpha_B|$ is applied to the other phase shifter by setting $U_A$ and $U_B$. This phase step can be calibrated by measuring the corresponding intensities $I_A$ and $I_B$ as a function of $\beta$. } \end{figure} \subsection{Phase-step insensitive algorithm} The basic idea behind the phase-step insensitive approach is to shift instantaneously between the four values that we believe to be close to the required ABCD phase shifts, namely $\left\lbrace\unit{0}{\volt},\unit{2}{\volt},\unit{4}{\volt},\unit{6}{\volt}\right\rbrace$, since a full wave is scanned by approximately $\unit{8}{\volt}$ at our wavelength. In this manner, we directly probe the phase shifts, and the phase shifter non-linearities as well as environmental instabilities can be neglected, such that a fixed correlation is kept between the different steps, i.e. the voltages applied to the component. With the phase shifter in one arm of the setup and a delay line in the other, as shown on the right in Figure \ref{fig:schemes}, we used this fact to determine our desired ABCD steps with a phase-step insensitive method. The delay line can be used to slowly move the phase in order to scan the fringes while the phase shifter is constantly repeating approximate ABCD steps for each delay line shift. The correlation of the instantaneous ABCD shifts remains fixed independently of the delay line phase. In this manner, one obtains four intensity distributions $I_i$ with $i\in\left\lbrace A,B,C,D\right\rbrace $ as a function of the delay line phase $\beta$: \begin{equation} I_i(\beta) = a_i + b_i\sin(\beta + \alpha_i) \label{eq:1} \hspace{0.5cm} , \end{equation} where the parameters $a_i$ correspond to the intensity offsets and $b_i$ to the amplitudes of the intensity modulation. These functions have the form of the Cartesian coordinates of an ellipse in parametric form. Plotting pairs of these intensity distributions, $I_i$ and $I_j$, against each other will therefore result in an ellipse whose shape depends on different parameters of the system, such as the flux and the phase shift $\Delta_{ij}=\alpha_i-\alpha_j$; a small numerical illustration is given below, followed by the fitting procedure we actually use. 
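Before formalizing the fit, a minimal numerical sketch (our own illustration with assumed offsets, amplitudes and noise levels, not the actual calibration code) shows the idea: the pair $(I_A, I_B)$ traced out while $\beta$ is scanned is fitted with a general conic, and the phase step then follows from the conic coefficients via the relation $\Delta_{ij}=\arccos(-F/\sqrt{4EG})$ given in the equations that follow.

\begin{verbatim}
import numpy as np

# Simulate two ABCD channels while the delay-line phase beta is scanned.
rng = np.random.default_rng(0)
beta = np.linspace(0.0, 2.0 * np.pi, 1000)
a_A, b_A, alpha_A = 2.0, 1.0, 0.0                  # assumed offset/amplitude/phase
a_B, b_B, alpha_B = 2.1, 0.9, np.deg2rad(88.9)     # true phase step ~88.9 deg
I_A = a_A + b_A * np.sin(beta + alpha_A) + 0.002 * rng.standard_normal(beta.size)
I_B = a_B + b_B * np.sin(beta + alpha_B) + 0.002 * rng.standard_normal(beta.size)

# Least-squares fit of the conic E x^2 + F xy + G y^2 + H x + K y + 1 = 0.
M = np.column_stack([I_A**2, I_A * I_B, I_B**2, I_A, I_B])
E, F, G, H, K = np.linalg.lstsq(M, -np.ones(beta.size), rcond=None)[0]

# Phase step recovered from the conic coefficients.
delta_AB = np.degrees(np.arccos(-F / np.sqrt(4.0 * E * G)))
print(delta_AB)     # close to 88.9 deg
\end{verbatim}

In practice the $\chi^2$-minimization over the model parameters $a_i$, $b_i$, $\Delta_{ij}$ described below is more robust than this direct linear conic fit, for the reasons discussed after the coefficient equations; the sketch is only meant to make the geometry of the method tangible.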
The data can be fitted by the ellipse equation in Cartesian coordinates \begin{equation} EI_i^2+FI_iI_j+GI_j^2+HI_i+KI_j+L=0 \label{eq:ellipse} \end{equation} with the coefficients (E, F, G, H, K, L), which allows for calculating the phase step $\Delta_{ij}$ by the following formula \cite{farrell}: \begin{equation} \Delta_{ij}=\arccos\left(\frac{-F}{\sqrt{4EG}}\right) \hspace{0.5cm} . \label{eq:delta} \end{equation} The parameters (E, F, G, H, K, L) can be written as \begin{align} \kappa &=(a_ib_j)^2+(a_jb_i)^2-2a_ib_ia_jb_j\cos(\Delta_{ij}) -b_i^2b_j^2\sin^2(\Delta_{ij}) \label{eq:kappa} \\ E &=b_j^2/\kappa \label{eq:a} \\ F &=-2b_ib_j\cos(\Delta_{ij})/\kappa \label{eq:b} \\ G &=b_i^2/\kappa \label{eq:c} \\ H &=2(a_jb_ib_j\cos(\Delta_{ij})-a_ib_j^2)/\kappa \label{eq:d} \\ K &=2(a_ib_ib_j\cos(\Delta_{ij})-a_jb_i^2)/\kappa \label{eq:e} \\ L &=1 \hspace{0.5cm} .\label{eq:f} \end{align} L is fixed to be 1 in order to prevent the parameters from shrinking to zero in the $\chi^2$-minimization that we run in order to find the parameters $a_i$, $a_j$, $b_i$, $b_j$ and $\Delta_{ij}$ such that equation \ref{eq:ellipse} tends to zero. It is also possible to fit the coefficients (E, F, G, H, K, L) by direct analytical methods as demonstrated by Fitzgibbon et al. (1996)\cite{fitzgibbon}, but with severe drawbacks: \begin{itemize} \item A complex inversion process is required to compute the parameters. \item Perfect fringes are required, since any flux, transmission or contrast variation during the fringe scan can bias the result by up to several degrees. \item Only a moderately high success rate and stability is achieved in simulations and real data when compared to a classical $\chi^2$-minimization. \end{itemize} In this basic model we include the laser power fluctuations during the measurements, $P(t)$, which can be extracted from a Fourier transform of the ABCD modulation of our measurements. The measured transmission variation of the phase shifter used as delay line as a function of applied voltage lies within the frequency range of the laser power fluctuations, such that it is filtered out by P(t). Therefore, our final model is the following expression \begin{equation} I_i(t) = P(t)\left[a_i + b_i\sin(\beta(t) + \alpha_i)\right] \label{eq:final} \hspace{0.5cm} . \end{equation} With this function we determine the phase angles $\Delta_{AB}$, $\Delta_{BC}$ and $\Delta_{CD}$ in our experiment. In more detail, we apply linear voltage ramps consisting of 1000 phase steps of about $\unit{4}{\nano\meter}$ to the phase shifter which is used as delay line. Due to the natural fringe drift this realization of a delay line is not limited by discrete voltage sampling. The ABCD modulation is run at frequencies close to $\unit{1}{\kilo\hertz}$. The fringe drift typically occurs at $\unit{0.1}{\nano\meter\per\milli\second}$ such that during an ABCD cycle the drift is smaller than $\unit{0.3}{\nano\meter}\simeq\lambda/6000$ and has no significant influence on our measurement of the ABCD phase angles. \begin{figure} \begin{center} \begin{tabular}{c} \hspace{8.5mm} \includegraphics[height=7.8cm]{ellipse.jpg} \\ \hspace{0.0mm} \includegraphics[height=7.675cm]{ellipse_res.jpg} \\ \hspace{1mm} \includegraphics[height=3.35cm]{PhaseResult.jpg} \end{tabular} \end{center} \caption[example] { \label{fig:results} Performance of the phase-step insensitive algorithm. The top plot shows the fit of an ellipse to data acquired at an ABCD modulation of $\unit{700}{\hertz}$. 
The data are the voltages measured in volts by the receiver and correspond to the intensities $I_A$ and $I_B$ for the phases A and B as a function of the delay line shifts. The fit line is displayed in red and the black data points are almost invisible due to the small dispersion. Larger dispersions are found when the setup is perturbed by vibrations, but no bias is introduced since they average out. In the figure below, the fitting residuals of the $\chi^2$-minimization relative to the fringe continuum are shown as a function of time. The bottom plot displays a series of 10 measurements with a mean phase step $\bar\Delta_{AB}=\unit{88.9}{\degree}\pm\unit{0.2}{\degree}$. } \end{figure} We took series of measurements at different ABCD frequencies in the range between \unit{0.5}{\kilo\hertz} and \unit{1.5}{\kilo\hertz}, and on different dates a few months apart, without specific control of the environment, to extract the phase shifter calibration error of this phase-step insensitive algorithm. Typically, the measured phase shifts are consistent within $\unit{0.2}{\degree}$ or $\unit{1}{\nano\meter}$ between the series, which is also the typical dispersion within each individual series of measurements. These results validate that the phase-step insensitive algorithm is an appropriate method to calibrate the ABCD phase steps for the metrology on a $\unit{1}{\nano\meter}$ level. In comparison to the method of linearly scanning a full wave with the phase shifter, we not only achieve a precision a factor of two higher, but we are also able to extract phase angles unambiguously. The advantage is that we do not need to model the interference fringes, including the various complex effects that make the intensity distribution differ strongly from the theory of simple two-beam interference. The deficiencies of the linear scan method become obvious when directly comparing the results of both algorithms tested in Table \ref{tab:result}. In Figure \ref{fig:results} the determined phase steps $\Delta_{AB}$ of one such series are shown as well as one of the respective ellipses together with the corresponding fit and residuals. \begin{table}[h] \centering \begin{tabular}{l|l|l|l} \textbf{Results} & $\boldsymbol{\Delta_{AB}}$ & $\boldsymbol{\Delta_{BC}}$ & $\boldsymbol{\Delta_{CD}}$ \\ \hline Applied voltage step & $\left\lbrace\unit{0}{\volt}, \unit{2}{\volt}\right\rbrace$ & $\left\lbrace\unit{2}{\volt}, \unit{4}{\volt}\right\rbrace$ & $\left\lbrace\unit{4}{\volt}, \unit{6}{\volt}\right\rbrace$\\ \hline Phase-step insensitive algorithm & $\unit{88.7}{\degree}\pm\unit{0.2}{\degree}$ & $\unit{89.2}{\degree}\pm\unit{0.2}{\degree}$ & $\unit{92.8}{\degree}\pm\unit{0.2}{\degree}$ \\ \hline Linear scan & $\unit{87.0}{\degree}\pm\unit{0.4}{\degree}$ & $\unit{86.6}{\degree}\pm\unit{0.4}{\degree}$ & $\unit{87.1}{\degree}\pm\unit{0.4}{\degree}$ \vspace{0.3cm} \end{tabular} \caption[Results of the phase-step insensitive algorithm compared to the linear scan]{Results of the phase-step insensitive algorithm compared to the linear scan. For the phase-step insensitive method the average phase shift values $\Delta_{AB}$, $\Delta_{BC}$ and $\Delta_{CD}$ were obtained from series of measurements at different ABCD modulation frequencies and different dates during a time span of a few months. The absolute value of the applied voltage steps is always \unit{2}{\volt}, but it results in different phase angles due to the non-linear behavior of the phase shifter. 
Obviously, the linear scan shows systematically different as well as less precise results and fails in revealing the non-linearities probably due to flaws of the respective fit model.} \label{tab:result} \end{table} \section{CONCLUSIONS} The calibration of the metrology phase shifter presented here fulfills the specified requirements for performing $\unit{10}{\micro\arcsec}$-level astrometry. We could demonstrate experimentally that ABCD phase shifts can be calibrated with an accuracy of typically $\unit{1}{\nano\meter}$ in a stable setup and environment. Despite non-linearities that occur in the component, the metrology fiber chain or in the environment we were able to elaborate a method that is not as sensitive to these as simple scans of fringes in order to extract the relation between the applied voltage and the resulting phase shift. The phase-step insensitive algorithm is able to determine phases unambiguously when a robust fitting routine for ellipses is included. In our case a $\chi^2$-minimization serves this purpose. \newpage These results were achieved under normal air pressure and room temperature. In GRAVITY the phase shifter is going to be operated in vacuum at $\unit{\power{10}{-6}}{\milli\bbar}$ cooled to $\unit{240}{\kelvin}$ -- conditions which might lead to a higher calibration accuracy provided that the instrument does not introduce higher noise levels. Currently, we analyze the instrument's behavior in this respect. Furthermore, fiber differential delay lines are implemented in GRAVITY which can be used for the calibration of phase shifts and do not show a transmission dependency as the phase shifter. However, due to the transmission modulation of the phase shifter we might need to adapt the original phase extraction formula of Equation \ref{eq:phi}, since the underlying fringe model of Equation \ref{eq:fringes} does not take this effect into account. When the ABCD phase shifts will be calibrated to the specified accuracy within GRAVITY, an important step will be taken towards measuring the optical paths in the interferometer to a nm-level precision via phase-shifting interferometry and thus towards unprecedented astrometric errors at the level of $\unit{10}{\micro\arcsec}$.
proofpile-arXiv_067-8347
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Let $X$ be a smooth, projective, geometrically irreducible variety over a number field $L$. In \cite{Manin}, Manin showed that the Brauer group of $X$ can obstruct the Hasse principle on $X$. Let $X({\mathbb A}_L)$ denote the set of adelic points of $X$ and let $\Br(X)$ denote the Brauer group of $X$, $\Br(X)=H^2_{\mathrm{\acute{e}t}}(X,{\mathbb G}_m)$. There is a pairing $$X({\mathbb A}_L)\times \Br(X)\rightarrow {\mathbb Q}/{\mathbb Z}$$ obtained by evaluating an element of $\Br(X)$ at an adelic point and summing the local invariants \cite{Manin}. The Brauer-Manin set $X({\mathbb A}_L)^{\Br(X)}$ is the set of adelic points of $X$ which are orthogonal to $\Br(X)$ under this pairing. It contains the closure of the set of rational points in the adelic topology. $$\overline{X(L)}\subset X({\mathbb A}_L)^{\Br(X)}\subset X({\mathbb A}_L).$$ If $X({\mathbb A}_L)\neq\emptyset$ but $X({\mathbb A}_L)^{\Br(X)}=\emptyset$, there is said to be a Brauer-Manin obstruction to the Hasse principle on $X$. If $X({\mathbb A}_L)\neq X({\mathbb A}_L)^{\Br(X)}$, there is said to be a Brauer-Manin obstruction to weak approximation on $X$ Since Manin's observation, Brauer groups and the associated obstructions have been the subject of a great deal of research. Let $\overline{X}$ denote the base change of $X$ to an algebraic closure of $L$. The kernel of the natural map from $\Br(X)$ to $\Br(\overline{X})$ is called the `algebraic' part of $\Br(X)$ and denoted $\Br_1(X)$. It is usually easier to handle than the remaining `transcendental' part and a substantial portion of the literature is devoted to its study. The quotient group $\Br(X)/\Br_1(X)$, known as the transcendental part of $\Br(X)$, is generally more mysterious. Nevertheless, it has arithmetic importance -- transcendental elements in $\Br(X)$ can obstruct the Hasse principle and weak approximation, as shown by Harari in \cite{Harari} and Wittenberg in \cite{Wittenberg}. Results of Skorobogatov and Zarhin in \cite{SZtorsion} allow one to compute the transcendental part of the Brauer group for a product of elliptic curves. These results were used by Ieronymou and Skorobogatov in \cite{I-S} to compute the odd order torsion in the transcendental part of the Brauer group for diagonal quartic surfaces over the rationals. In this paper, we compute the transcendental part of the Brauer group for abelian surfaces of the form $E\times E$ where $E/L$ is an elliptic curve with complex multiplication by the ring of integers ${\mathcal O}_K$ of an imaginary quadratic field $K$. In \cite{SZ}, Skorobogatov and Zarhin proved that for $X$ an abelian variety or K3 surface, $\Br(X)/\Br_1(X)$ is a finite abelian group. Therefore, computing $\Br(X)/\Br_1(X)$ is equivalent to computing its $\ell$-primary part $(\Br(X)/\Br_1(X))_{\ell^\infty}$ for every prime number $\ell$. To a pair $(E,\ell)$ consisting of an elliptic curve $E$ defined over a number field $L$, with complex multiplication by ${\mathcal O}_K$, and a prime number $\ell$, we associate an integer $m(\ell)$ (Definition \ref{def:m}) which can be calculated using class field theory (Proposition \ref{upper bound}). We write $\Gamma_L$ for the absolute Galois group of $L$. We denote the $n$-torsion subgroup of an abelian group $A$ by $A_n$. For an elliptic curve $E/L$, we write $E_n$ for the $n$-torsion points of $E$ defined over an algebraic closure of $L$. \begin{theorem} Let $\ell\in{\mathbb Z}_{>0}$ be an odd prime and let $m=m(\ell)$. 
Then \begin{eqnarray*} \left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{\ell^{\infty}}= \frac{\Br(E\times E)_{\ell^{m}}}{\Br_1(E\times E)_{\ell^{m}}}=\frac{\End_{\Gamma_L} E_{\ell^m}}{({\mathcal O}_K\otimes{\mathbb Z}/\ell^m)^{\Gamma_L}} \cong \begin{cases} ({\mathbb Z}/\ell^m)^2 & \textrm{if } K\subset L\\ {\mathbb Z}/\ell^m & \textrm{if } K\not\subset L. \end{cases} \end{eqnarray*} \end{theorem} For brevity, here we state only the result for odd primes. The results for all primes can be found in Theorems \ref{quotient1} and \ref{quotient2}. In Theorems \ref{geom1} and \ref{geom2}, we give a similar description of the $\ell$-primary part of $\Br(\overline{E}\times\overline{E})^{\Gamma_L}$ for every prime $\ell$. One can apply these results to gain information about the transcendental part of the Brauer group for a wider class of varieties. If $\pi:X\dasharrow Y$ is a dominant rational map of degree $d$ between K3 or abelian surfaces over $L$, then by the proof of \cite{I-S} Corollary 2.2, it induces a surjective map of $\Gamma_L$-modules $$\pi^*:\Br(\overline{Y})\rightarrow\Br(\overline{X})$$ whose kernel is annihilated by $d$. Thus, if $\ell$ is prime and coprime to $d$, then \[\left(\frac{\Br(Y)}{\Br_1(Y)}\right)_{\ell^{\infty}}\hookrightarrow \Br(\overline{Y})^{\Gamma_L}_{\ell^{\infty}}=\Br(\overline{X})^{\Gamma_L}_{\ell^{\infty}}.\] The following examples are of interest. Suppose that $E/L$ has complex multiplication by ${\mathcal O}_K$. \begin{enumerate} \item $Y=E\times E'$ where $E'/L$ is an elliptic curve which is isogenous to $E$ over $L$. Take $\ell$ coprime to the degree of the isogeny. \item $Y=E'\times E'$ where $E'/L$ is an elliptic curve with complex multiplication by a non-maximal order ${\mathcal O}\subset {\mathcal O}_K$. Take $\ell$ coprime to the index $[{\mathcal O}_K:{\mathcal O}]$. This is because there is an isogeny of degree $[{\mathcal O}_K:{\mathcal O}]$, defined over $L$, from $E'$ to an elliptic curve over $L$ with complex multiplication by ${\mathcal O}_K$. \item $Y=\Kum(E\times E)$, the K3 surface which is the minimal desingularisation of the quotient of $E\times E$ by the involution $(P,Q)\mapsto (-P,-Q)$. \end{enumerate} \medskip More is known for a Kummer surface $X=\Kum(E\times E)$. By Proposition 1.3 of \cite{SZtorsion}, there is an isomorphism of $\Gamma_L$-modules \[\Br(\overline{X})\rightarrow \Br(\overline{E}\times\overline{E})\] and therefore \[\Br(\overline{X})^{\Gamma_L}=\Br(\overline{E}\times\overline{E})^{\Gamma_L}.\] By Theorem 2.4 of \cite{SZtorsion}, for every $n\in{\mathbb Z}_{>0}$ there is an embedding \begin{equation} \label{eq:embedding} \Br(X)_n/\Br_1(X)_{n} \hookrightarrow\Br(E\times E)_{n}/\Br_1(E\times E)_{n} \end{equation} which is an isomorphism if $n$ is odd. So for $\ell$ an odd prime, \begin{equation} (\Br(X)/\Br_1(X))_{\ell^{\infty}}=(\Br(E\times E)/\Br_1(E\times E))_{\ell^{\infty}}. \end{equation} Examples involving K3 surfaces are important for applications because for abelian varieties with finite Tate-Shafarevich group, any Brauer-Manin obstruction can be explained by the algebraic part of the Brauer group, see \S 6.2 of \cite{Skorobogatov}. However, for K3 surfaces there can be obstructions which are only explained by transcendental elements in the Brauer group. Examples of this are given in \cite{HVV}, \cite{Preu} and \cite{I-S}. We give another example in Section \ref{obstruction}. 
We focus on elliptic curves with a transcendental element of odd order in $\Br(E\times E)$ because this will give rise to a transcendental element in the Brauer group of $\Kum(E\times E)$. \begin{theorem} \label{thm:quadratictwist} Let $E/{\mathbb Q}$ be an elliptic curve with complex multiplication by ${\mathcal O}_K$ such that $\Br(E\times E)$ contains a transcendental element of odd order. Then $E$ has affine equation $y^2=x^3+2c^3$ for some $c\in{\mathbb Q}^\times$. Moreover, for $X=\Kum(E\times E)$ we have $\Br_1(X)=\Br({\mathbb Q})$ and \[\Br(X)/\Br({\mathbb Q})=\Br(X)_3/\Br({\mathbb Q})_3=\Br(E\times E)_3/\Br_1(E\times E)_3\cong {\mathbb Z}/3.\] \end{theorem} For $c\in{\mathbb Q}^\times$, let $E^c$ denote the elliptic curve over ${\mathbb Q}$ with affine equation $y^2=x^3+2c^3$. Let $X=\Kum(E^c\times E^c)$ denote the Kummer surface, which is independent of the choice of $c\in{\mathbb Q}^\times$. \begin{theorem} \label{thm:Brauer-Maninset} Let \mbox{${\mathcal A}\in\Br(X)_3\setminus \Br({\mathbb Q})$}. Let $\nu$ be a place of ${\mathbb Q}$. Then the evaluation map \[\ev_{{\mathcal A}, \nu}:X({\mathbb Q}_\nu)\rightarrow \Br({\mathbb Q}_v)_3\] is surjective for $\nu=3$ and zero for every other place. Consequently, \[X({\mathbb A}_{{\mathbb Q}})^{\Br(X)}=X({\mathbb Q}_3)_0\times X({\mathbb R})\times \prod_{\ell\neq 3}{X({\mathbb Q}_{\ell})}\ \subsetneq\ X({\mathbb A}_{{\mathbb Q}})\] where $X({\mathbb Q}_3)_0$ denotes the points $P\in X({\mathbb Q}_3)$ with $\ev_{{\mathcal A},3}(P)=0$, and the product runs over prime numbers $\ell\neq 3$. \end{theorem} Theorem \ref{thm:Brauer-Maninset} shows that a transcendental Brauer element gives rise to a Brauer-Manin obstruction to weak approximation on $X$. Furthermore, the obstruction coming from this transcendental element is the sole reason for the failure of weak approximation on $X$. The structure of the paper is as follows. Section \ref{compute} is devoted to the computation of the transcendental part of the Brauer group of $E\times E$ for a CM elliptic curve $E$. Section \ref{examples} contains applications of these results to special cases and explicit examples. In Section \ref{obstruction}, we compute the Brauer-Manin obstruction to weak approximation on $\Kum(E\times E)$ for $E/{\mathbb Q}$ (a quadratic twist of) the elliptic curve with affine equation $y^2=x^3+2$. \smallskip \begin{notation} We fix the following notation. 
\smallskip \begin{tabular}{ll} $K$ & an imaginary quadratic field\\ ${\mathcal O}_K$ & the ring of integers of $K$\\ $\Delta_K$ & the discriminant of $K$\\ $H_K$ & the Hilbert class field of $K$\\ $h({\mathcal O}_K) $ & the class number of ${\mathcal O}_K$, $h({\mathcal O}_K)=[H_K:K]$\\ \end{tabular} \begin{tabular}{ll} $L$ & a number field\\ $\overline{L}$ & an algebraic closure of $L$ such that $H_K\subset \overline{L}$\\ $\Gamma_F$ & the absolute Galois group of a field $F$\\ $\mu_n$ & the group of $n$th roots of unity\\ $\zeta_n$ & a primitive $n$th root of unity\\ $E$ & an elliptic curve over $L$ with complex multiplication by ${\mathcal O}_K$\\ $\overline{E}$ & the base change of $E$ to $\overline{L}$, $\overline{E}=E\times_L \overline{L}$\\ $E_n$ & the $n$-torsion points of $E$ defined over $\overline{L}$\\ $E_n(F)$ & the $n$-torsion points of $E$ defined over a field extension $F$ of $L$\\ $\Kum(E\times E)$ & the K3 surface which is the minimal desingularisation \\ & of the quotient of $E\times E$ by the involution $(P,Q)\mapsto (-P,-Q)$\\ $f_{{\mathfrak q}/{\mathfrak p}}$ & the residue class degree $f_{{\mathfrak q}/{\mathfrak p}}=[{\mathcal O}_M/{\mathfrak q}:{\mathcal O}_F/{\mathfrak p}]$ for a prime ${\mathfrak q}$ in \\& a number field $M$ lying above a prime ${\mathfrak p}$ in a subfield $F\subset M$.\\ \end{tabular}\\ \smallskip For any $c\in{\mathbb Z}_{>0}$, we use the following notation. \\ \begin{tabular}{ll} ${\mathcal O}_{c}$ & the order ${\mathbb Z}+c{\mathcal O}_K$ of conductor $c$ in ${\mathcal O}_K$\\ $K_c$ & the ring class field corresponding to the order ${\mathcal O}_c$. \\ \end{tabular}\\ \end{notation} For an abelian group $A$ and an integer $n\in{\mathbb Z}_{>0}$, we write $A_n$ for the elements of order dividing $n$ in $A$. For a prime number $\ell\in{\mathbb Z}_{>0}$, we write $A_{\ell^{\infty}}$ for the $\ell$-primary part of the abelian group $A$. \medskip For $x\in{\mathbb R}$, let $\floor{x}$, $\ceil{x}$ denote the floor and ceiling of $x$ respectively. \section{Transcendental Brauer group computations} \label{compute} \subsection{Preliminaries} Let $L$ be a number field and let $\Gamma_L$ denote its absolute Galois group. In \cite{SZtorsion}, for $A=E\times E'$ a product of elliptic curves defined over $L$ and for every $n\in{\mathbb Z}_{>0}$, Skorobogatov and Zarhin gave a canonical isomorphism of \mbox{$\Gamma_L$-modules} \begin{equation} \label{n-torsion} \Br(\overline{A})_n=\Hom(E_n, E'_n)/(\Hom(\overline{E},\overline{E'})\otimes{\mathbb Z}/n) \end{equation} and a canonical isomorphism of abelian groups \begin{equation} \label{quotient} \Br(A)_n/\Br_1(A)_n=\Hom_{\Gamma_L}(E_n,E'_n)/(\Hom(\overline{E},\overline{E'})\otimes{\mathbb Z}/n)^{\Gamma_L}. \end{equation} They used this concrete description of the transcendental part of the Brauer group to give many examples for which $\Br(A)/\Br_1(A)$ is trivial or a finite abelian $2$-group. \smallskip From now on, we fix an elliptic curve $E/L$ with complex multiplication by ${\mathcal O}_K$. We begin with a simple observation which enables us to use \eqref{quotient} to compute \mbox{$(\Br(E\times E)/\Br_1(E\times E))_{\ell^{\infty}}$}. \begin{lemma} Let $X$ be a smooth, projective, geometrically irreducible variety over a number field. Then for any prime number $\ell$, we have \[(\Br(X)/\Br_1(X))_{\ell^{\infty}}=\Br(X)_{\ell^{\infty}}/\Br_1(X)_{\ell^{\infty}}.\] \end{lemma} \begin{proof} Since $X$ is smooth, Proposition 1.4 of \cite{Grothendieck} tells us that $\Br(X)$ is a torsion abelian group. 
It follows that the natural inclusion \[\Br(X)_{\ell^{\infty}}/\Br_1(X)_{\ell^{\infty}}\hookrightarrow (\Br(X)/\Br_1(X))_{\ell^{\infty}}\] is an equality. \end{proof} To each prime number $\ell\in{\mathbb Z}_{>0}$ we associate an integer $m(\ell)$ which will appear in our description of the $\ell$-primary part of the transcendental Brauer group of $E\times E$. In order to define $m(\ell)$, we use the Gr\"{o}ssencharacter $\psi_{E/KL}$ of $E$ considered as an elliptic curve over $KL$. Recall that $\psi_{E/KL}$ is unramified at the primes of $KL$ of good reduction for $E$. Therefore, for such primes we write $\psi_{E/KL}({\mathfrak q})$ for the evaluation of $\psi_{E/KL}$ at an idele $(\dots , 1,1,\pi_{{\mathfrak q}},1,1,\dots)\in {\mathbb A}^{\times}_{KL}$ where the entry $\pi_{{\mathfrak q}}$ at the prime ${\mathfrak q}$ is a uniformiser at ${\mathfrak q}$. \begin{definition} \label{def:m} For a prime number $\ell\in{\mathbb Z}_{>0}$, let $m(\ell)$ be the largest integer $k$ such that for all primes ${\mathfrak q}$ of $KL$ which are of good reduction for $E$ and coprime to $\ell$, the Gr\"{o}ssencharacter $\psi_{E/KL}$ satisfies $$\psi_{E/KL}({\mathfrak q})\in{\mathcal O}_{\ell^k}={\mathbb Z}+\ell^k{\mathcal O}_K.$$ \end{definition} We define an auxiliary integer $n(\ell)$ which aids computation of $m(\ell)$ and in most cases removes the dependence on the Gr\"{o}ssencharacter. \begin{definition} For a prime number $\ell\in{\mathbb Z}_{>0}$, let $n(\ell)$ be the largest integer $k$ for which the ring class field $K_{\ell^k}$ of the order ${\mathcal O}_{\ell^k}$ embeds into $KL$. \end{definition} \begin{proposition} \label{upper bound} Let $\ell\in{\mathbb Z}_{>0}$ be prime. Then $$m(\ell)\leq n(\ell)$$ with equality if ${\mathcal O}_K^*=\{\pm 1\}$ (in other words, if $K\notin\{{\mathbb Q}(i),{\mathbb Q}(\zeta_3)\}$). \end{proposition} \begin{proof} Write $m=m(\ell)$ and $n=n(\ell)$. Let $S$ be a set of primes of $KL$ containing the infinite primes, the primes of bad reduction for $E$, the primes dividing $\ell$, the primes which are ramified in $K_{\ell^{n+1}}L/K$, and the primes ${\mathfrak q}$ with $\psi_{E/KL}({\mathfrak q})\notin{\mathcal O}_{\ell^{n+1}}$. Suppose for contradiction that \mbox{$m\geq n+1$}, and hence $S$ is a finite set. Then, since $K_{\ell^{n+1}}\nsubseteq KL$, Exercise 6.1 of \cite{CF} tells us that there exists a prime ${\mathfrak q}$ of $KL$ with ${\mathfrak q}\notin S$ which does not split completely in $K_{\ell^{n+1}}L/KL$. Let ${\mathfrak p}={\mathfrak q}\cap{\mathcal O}_K$. Let $f_{{\mathfrak q}/{\mathfrak p}}$ denote the residue class degree of ${\mathfrak q}$ over ${\mathfrak p}$, $f_{{\mathfrak q}/{\mathfrak p}}=[{\mathcal O}_{KL}/{\mathfrak q}:{\mathcal O}_K/{\mathfrak p}]$. By Theorems 9.1 and 9.2 of \cite{Silverman}, the Gr\"{o}ssencharacter $\psi_{E/KL}$ sends ${\mathfrak q}$ to a generator of the principal ideal $N_{KL/K}({\mathfrak q})={\mathfrak p}^{f_{{\mathfrak q}/{\mathfrak p}}}$. Consider the following diagram of field extensions. \[ \xymatrix{&& K_{\ell^{n+1}}L\ar@{-}[dr] \ar@{-}[dl]& & \\ &K_{\ell^{n+1}}\ar@{-}[dr] & & KL\ar@{-}[dl] &{\mathfrak q} \\ & &K & {\mathfrak p} &\\ } \] The restriction of the Artin symbol $({\mathfrak q},K_{\ell^{n+1}}L/KL)$ to $K_{\ell^{n+1}}$ satisfies \begin{eqnarray*} \Res_{K_{\ell^{n+1}}}({\mathfrak q},K_{\ell^{n+1}}L/KL)=({\mathfrak p},K_{\ell^{n+1}}/K)^{f_{{\mathfrak q}/{\mathfrak p}}} &=&({\mathfrak p}^{f_{{\mathfrak q}/{\mathfrak p}}},K_{\ell^{n+1}}/K)\\ &=&((\psi_{E/KL}({\mathfrak q})),K_{\ell^{n+1}}/K). 
\end{eqnarray*} Since ${\mathfrak q}\notin S$, we have $\psi_{E/KL}({\mathfrak q})\in{\mathcal O}_{\ell^{n+1}}$ and hence \[((\psi_{E/KL}({\mathfrak q})),K_{\ell^{n+1}}/K)=1\] by definition of the ring class field $K_{\ell^{n+1}}$. But this implies that \[\Res_{K_{\ell^{n+1}}}({\mathfrak q},K_{\ell^{n+1}}L/KL)=1\] and therefore \[({\mathfrak q},K_{\ell^{n+1}}L/KL)=1.\] This is a contradiction because ${\mathfrak q}$ does not split completely in $K_{\ell^{n+1}}L/KL$. Therefore, $m\leq n$. It remains to show that $m=n$ when ${\mathcal O}_K^*=\{\pm 1\}$. From now on, suppose that ${\mathcal O}_K^*=\{\pm 1\}$. Let ${\mathfrak q}$ be a finite prime of $KL$ of good reduction for $E$ which is coprime to $\ell$ and unramified in $KL/K$. Let ${\mathfrak p}={\mathfrak q}\cap{\mathcal O}_K$ and let ${\mathfrak s}={\mathfrak q}\cap{\mathcal O}_{K_{\ell^n}}$. The Artin symbol $({\mathfrak p},K_{\ell^n}/K)$ has order $f_{{\mathfrak s}/{\mathfrak p}}$ in $\Gal(K_{\ell^n}/K)$. Since $K\subset K_{\ell^n}\subset KL$, we have $f_{{\mathfrak s}/{\mathfrak p}}\mid f_{{\mathfrak q}/{\mathfrak p}}$, whereby \[1=({\mathfrak p},K_{\ell^n}/K)^{f_{{\mathfrak q}/{\mathfrak p}}}= ({\mathfrak p}^{f_{{\mathfrak q}/{\mathfrak p}}}, K_{\ell^n}/K)=(N_{KL/K}({\mathfrak q}),K_{\ell^n}/K).\] By definition of the ring class field $K_{\ell^n}$, this implies that $$N_{KL/K}({\mathfrak q})=(\alpha)$$ for some $\alpha\in{\mathcal O}_{\ell^n}$. But $\psi_{E/KL}({\mathfrak q})$ is a generator of $N_{KL/K}({\mathfrak q})$ and ${\mathcal O}_K^*=\{\pm 1\}$ so this implies that $\psi_{E/KL}({\mathfrak q})\in{\mathcal O}_{\ell^n}$, as required. \end{proof} \begin{remark} Class field theory gives $[K_c:K]=h({\mathcal O}_c)$, where $h({\mathcal O}_c)$ denotes the class number of the order ${\mathcal O}_c$. The following formula for $h({\mathcal O}_c)$ can be found in \cite{Cox}, Theorem 7.24, for example. \begin{equation} \label{degreeformula} [K_c:K]=h({\mathcal O}_c)=\frac{h(O_K) c}{ [{\mathcal O}_K^*:{\mathcal O}_c^*]}\prod_{p\mid c}{\Bigl(1-\Bigl(\frac{\Delta_K}{p}\Bigr)\frac{1}{p}\Bigr)} \end{equation} where the product is taken over the prime factors of $c$. The symbol $(\frac{\Delta_K}{p})$ denotes the Legendre symbol for odd primes. For the prime $2$, the Legendre symbol is replaced by the Kronecker symbol $(\frac{\Delta_K}{2})$, defined as $$\Bigl(\frac{\Delta_K}{2}\Bigr)=\begin{cases} 0 & \textrm{if } 2\mid \Delta_K \\ 1 & \textrm{if } \Delta_K\equiv 1\pmod{8}\\ -1 & \textrm{if } \Delta_K\equiv 5\pmod{8}. \end{cases}$$ If $K_{\ell^k}\subset KL$, then $[K_{\ell^k}:K]$ divides $[KL:K]$. Thus, in any given example, \eqref{degreeformula} allows one to identify a finite set of primes $S$ such that $m(\ell)=n(\ell)=0$ for all $\ell\notin S$. For a prime $\ell$ in $S$, \eqref{degreeformula} gives an upper bound for $n(\ell)$, and therefore also an upper bound for $m(\ell)$. For $K\in\{{\mathbb Q}(i), {\mathbb Q}(\zeta_3)\}$, one must examine the Gr\"{o}ssencharacter in order to compute $m(\ell)$. For explicit descriptions of Gr\"{o}ssencharacters for elliptic curves with complex multiplication by ${\mathbb Q}(i)$ or ${\mathbb Q}(\zeta_3)$, see \cite{R-S} Theorems 5.6 and 5.7 respectively. \end{remark} We will use the isomorphisms \eqref{n-torsion} and \eqref{quotient} to compute the $\ell$-primary part of the transcendental Brauer group of $E\times E$ in terms of endomorphisms of the $\ell$-power torsion of $E$. We will need the following two auxiliary lemmas. 
\begin{lemma} \label{E+} Let $\ell\in{\mathbb Z}_{>0}$ be prime, let $k\in{\mathbb Z}_{\geq 0}$ and let $$(\End E_{\ell^k})^+=\{\psi\in\End E_{\ell^k}\mid \psi x=x\psi \ \forall x \in {\mathcal O}_K\}.$$ Then, viewing ${\mathcal O}_K\otimes{\mathbb Z}/\ell^k$ as a subring of $\End E_{\ell^k}$, we have $$(\End E_{\ell^k})^+={\mathcal O}_K\otimes{\mathbb Z}/\ell^k.$$ \end{lemma} \begin{proof} Recall that $\End \overline{E}={\mathcal O}_K$, so it makes sense to view ${\mathcal O}_K\otimes{\mathbb Z}/\ell^k$ as a subring of $\End E_{\ell^k}$. As an abelian group, $E_{\ell^k}\cong ({\mathbb Z}/\ell^k)^2$, and therefore $\End E_{\ell^k}\cong M_2({\mathbb Z}/\ell^k)$. The proof comes down to an easy calculation with two-by-two matrices with entries in ${\mathbb Z}/\ell^k$. \end{proof} \begin{lemma} \label{fixed} Let $\ell\in{\mathbb Z}_{>0}$ be prime and let $m=m(\ell)$. Let $k\in{\mathbb Z}_{\geq 0}$ and let ${\varphi}\in\End E_{\ell^k}$. Then \begin{enumerate} \item \label{partclassfixed}The class of $\varphi$ in $\End E_{\ell^{k}} /({\mathcal O}_K\otimes{\mathbb Z}/\ell^{k})$ is fixed by $\Gamma_{KL}$ if and only if for all $x\in{\mathcal O}_K$, $$\ell^m(x{\varphi}-{\varphi} x)\in(\End E_{\ell^k})^+={\mathcal O}_K\otimes{\mathbb Z}/\ell^k.$$ \item \label{partendofixed}The endomorphism ${\varphi}$ is fixed by $\Gamma_{KL}$ if and only if $$\ell^m{\varphi}\in(\End E_{\ell^k})^+={\mathcal O}_K\otimes{\mathbb Z}/\ell^k.$$ \end{enumerate} \end{lemma} \begin{proof} The action of $\Gamma_{KL}$ on $\End E_{\ell^{k}}$ factors through the abelian Galois group $\Gal(KL(E_{\ell^{k}})/KL)$. Let ${\mathfrak q}$ be a finite prime of $KL$ which is coprime to $\ell$ and of good reduction for $E$. The N\'{e}ron-Ogg-Shafarevich criterion tells us that ${\mathfrak q}$ is unramified in $KL(E_{\ell^{k}})/KL$. Since $E$ has complex multiplication by ${\mathcal O}_K$, the Artin symbol $({\mathfrak q}, KL(E_{\ell^{k}})/KL)$ acts on $E_{\ell^k}$ as multiplication by $\psi_{E/KL}({\mathfrak q})$. For a proof of this fact, see \cite{Lang}, Ch. 4, Corollary 1.3 (iii), for example. Therefore, the action of $({\mathfrak q}, KL(E_{\ell^{k}})/KL)$ on $\End(E_{\ell^k})$ is conjugation by $\psi_{E/KL}({\mathfrak q})$. The Artin symbols for the unramified primes generate $\Gal(KL(E_{\ell^{k}})/KL)$. Let $\alpha=(\Delta_K+\sqrt{\Delta_K})/2$, so ${\mathcal O}_K={\mathbb Z}[\alpha]$. Let $a, b\in{\mathbb Z}$ be such that $a+b\alpha$ is invertible in ${\mathcal O}_K\otimes{\mathbb Z}/\ell^k$. Let ${\varphi}\in\End E_{\ell^k}$. We have \[(a+b\alpha){\varphi}-{\varphi}(a+b\alpha)=b(\alpha{\varphi}-{\varphi}\alpha).\] Hence, the class of ${\varphi}$ in $\End E_{\ell^{k}} /({\mathcal O}_K\otimes{\mathbb Z}/\ell^{k})$ is fixed by conjugation by $a+b\alpha$ if and only if \begin{equation} \label{class fixed} b(\alpha{\varphi}-{\varphi}\alpha)\in{\mathcal O}_K\otimes{\mathbb Z}/\ell^k \end{equation} and ${\varphi}$ is fixed by conjugation by $a+b\alpha$ if and only if \begin{equation} \label{varphi fixed} b(\alpha{\varphi}-{\varphi}\alpha)=0. \end{equation} Recall that $m=m(\ell)$ is the largest integer $t$ such that for all finite primes ${\mathfrak q}$ of $KL$ which are of good reduction for $E$ and coprime to $\ell$, $$\psi_{E/KL}({\mathfrak q})\in{\mathcal O}_{\ell^t}={\mathbb Z}+\ell^t{\mathcal O}_K.$$ In other words, for a prime ${\mathfrak q}$ which is unramified in $KL(E_{\ell^k})/KL$, we can write \mbox{$\psi_{E/KL}({\mathfrak q})=a+b\alpha$} for some $a,b\in{\mathbb Z}$ with $\ord_{\ell}(b)=m$. 
Hence, by \eqref{class fixed}, the class of ${\varphi}$ in $\End E_{\ell^{k}} /({\mathcal O}_K\otimes{\mathbb Z}/\ell^{k})$ is fixed by $\Gamma_{KL}$ if and only if \begin{eqnarray*} \ell^m(\alpha{\varphi}-{\varphi}\alpha)\in {\mathcal O}_K\otimes{\mathbb Z}/\ell^k. \end{eqnarray*} By \eqref{varphi fixed}, the endomorphism ${\varphi}$ is fixed by $\Gamma_{KL}$ if and only if \begin{eqnarray*} \ell^m(\alpha{\varphi}-{\varphi}\alpha)=0. \end{eqnarray*} An application of Lemma \ref{E+} completes the proof. \end{proof} \subsection{Case I: Complex multiplication defined over the base field.} In this subsection, we compute the transcendental Brauer group of $E\times E$ in the case where the complex multiplication field $K$ is a subfield of $L$, the field of definition of $E$. \begin{theorem} \label{quotient1} Suppose that $K\subseteq L$. Let $\ell\in{\mathbb Z}_{>0}$ be prime and let $m=m(\ell)$. Then \begin{eqnarray*} \left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{\ell^{\infty}}= \frac{\Br(E\times E)_{\ell^{m}}}{\Br_1(E\times E)_{\ell^{m}}}=\frac{\End E_{\ell^m}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^m} \cong ({\mathbb Z}/\ell^m)^2. \end{eqnarray*} \end{theorem} \begin{proof} By \eqref{quotient}, for all primes $\ell$ and all $k\in{\mathbb Z}_{\geq 0}$, we have \[\frac{\Br(E\times E)_{\ell^{k}}}{\Br_1(E\times E)_{\ell^{k}}}=\frac{\End _{\Gamma_L}E_{\ell^k}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^k}.\] Also, \[\frac{\End E_{\ell^k}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^k}\cong ({\mathbb Z}/\ell^k)^2.\] The result now follows from Lemma \ref{fixed}, part \ref{partendofixed}. \end{proof} \begin{theorem} \label{geom1} Suppose that $K\subseteq L$. Let $\ell\in{\mathbb Z}_{>0}$ be prime and let $m=m(\ell)$. Then \begin{eqnarray*} \Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L}}&=& \left(\frac{\End E_{\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}}\right)^{\Gamma_L}\\ &\cong & {\mathbb Z}/\ell^{m+\floor{\ord_{\ell}(\Delta_K)/2}}\times {\mathbb Z}/\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}. \end{eqnarray*} In particular, if $\ell\nmid \Delta_K$ then \[\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L} =\frac{\End E_{\ell^m}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^m}\cong ({\mathbb Z}/\ell^m)^2.\] \end{theorem} \begin{proof} Fix a prime number $\ell\in{\mathbb Z}_{>0}$ and let $k\in{\mathbb Z}_{\geq 0}$. By \eqref{n-torsion}, we have \[\Br(\overline{E}\times\overline{E})_{\ell^{k}}^{\Gamma_{L}}=\left(\frac{\End E_{\ell^k} }{{\mathcal O}_K\otimes{\mathbb Z}/\ell^k}\right)^{\Gamma_L}. \] Write ${\mathcal O}_K={\mathbb Z}[\alpha]$ where $\alpha=(\Delta_K+\sqrt{\Delta_K})/2$ and let ${\varphi}\in\End E_{\ell^k}$. By part \ref{partclassfixed} of Lemma \ref{fixed}, the class of ${\varphi}$ in $\End E_{\ell^{k}} /({\mathcal O}_K\otimes{\mathbb Z}/\ell^{k})$ is fixed by $\Gamma_{L}$ if and only if \begin{equation} \label{classfixed} \ell^m(\alpha{\varphi}-{\varphi} \alpha)\in{\mathcal O}_K\otimes{\mathbb Z}/\ell^k. \end{equation} Let $P, \alpha P$ be a ${\mathbb Z}/\ell^{k}$-basis for $E_{\ell^{k}}$. With respect to this basis, multiplication by $\alpha$ is given by the following matrix: $$\begin{pmatrix}0& \ \ \ \ \frac{\Delta_K(1-\Delta_K)}{4}\\ 1 & \Delta_K\end{pmatrix}.$$ Subtracting an element of ${\mathcal O}_K\otimes{\mathbb Z}/\ell^k$ if necessary, we may assume that ${\varphi}$ is of the form $$\begin{pmatrix}0&\ \ t\\ 0 &\ \ u\end{pmatrix}$$ for some $t,u\in{\mathbb Z}/\ell^k$. 
In terms of matrices, equation \eqref{classfixed} becomes \[\begin{pmatrix} -\ell^mt &\ \ \ -\ell^mt\Delta_K+\ell^m u\frac{\Delta_K(1-\Delta_K)}{4} \\-\ell^mu& \ell^m t\end{pmatrix}=\begin{pmatrix}a& \ \ \ b\frac{\Delta_K(1-\Delta_K)}{4}\\ b & a+b\Delta_K\end{pmatrix}\] for some $a, b\in{\mathbb Z}/\ell^k$. The resulting equations reduce to \begin{equation} \label{uveqns} 2\ell^mt\equiv\ell^m\Delta_Kt\equiv\ell^m\Delta_K u\equiv\ell^m\frac{\Delta_K(1-\Delta_K)}{2}u\equiv 0\pmod {\ell^k}. \end{equation} We have $\ord_2(\Delta_K)\in\{0,2 ,3\}$ and for an odd prime $\ell$, $\ord_\ell(\Delta_K)\in\{0, 1\}$. Thus, \eqref{uveqns} can be summarised as \[\ell^{m+\floor{\ord_{\ell}(\Delta_K)/2}}t\equiv \ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}u\equiv 0\pmod{\ell^k}. \] Therefore, \begin{eqnarray*} \Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L}}&=& \Br(\overline{E}\times\overline{E})_{\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}}^{\Gamma_{L}}\\ &=&\left(\frac{\End E_{\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}}\right)^{\Gamma_L}\\ &\cong & {\mathbb Z}/\ell^{m+\floor{\ord_{\ell}(\Delta_K)/2}}\times {\mathbb Z}/\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}. \end{eqnarray*} \end{proof} \begin{remark} The fact that $(\Br(E\times E)/\Br_1(E\times E))_{\ell^\infty}=\Br(\overline{E}\times \overline{E})^{\Gamma_L}_{\ell^\infty}$ for $\ell\nmid \Delta_K$ also follows from Proposition 5.2 of \cite{C-T--S}. A computation of the relevant intersection pairing shows that the cokernel of the map $\Br(E\times E)/\Br_1(E\times E)\hookrightarrow \Br(\overline{E}\times \overline{E})^{\Gamma_L}$ is annihilated by the discriminant of $K$. \end{remark} \subsection{Case II: Complex multiplication not defined over the base field.} Throughout this subsection, we make the assumption that $K\not\subset L$. We write $\tau$ for an element of $\Gamma_L\setminus \Gamma_{KL}.$ We set $\alpha=(\Delta_K+\sqrt{\Delta_K})/2$, so ${\mathcal O}_K={\mathbb Z}[\alpha]$. \begin{lemma} \label{tau fixed} Suppose that $K\nsubseteq L$. Let $\ell\in{\mathbb Z}_{>0}$ be prime and let $k\in{\mathbb Z}_{\geq 0}$. Let $a,b\in{\mathbb Z}$ and consider $(a+b\alpha)\tau$ as an element of $\End E_{\ell^k}$. Then \begin{enumerate} \item \label{partclasstauKL}The class of $(a+b\alpha)\tau$ in $\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k)$ is fixed by $\Gamma_{KL}$ if and only if $$\ord_{\ell}(a),\ \ord_{\ell}(b)\geq k-m(\ell)-\ord_{\ell}(\Delta_K).$$ \item \label{partclasstautau}The class of $(a+b\alpha)\tau$ in $\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k)$ is fixed by $\tau$ if and only if \[ \ord_{\ell}(b)\geq k-\ord_{\ell}(\Delta_K).\] \item \label{partplus}We have $(a+b\alpha)\tau\in(\End E_{\ell^k})^+={\mathcal O}_K\otimes{\mathbb Z}/\ell^k$ if and only if \begin{eqnarray*} \ord_{\ell}(a)\geq k-\floor{\ord_{\ell}(\Delta_K)/2}\\ \textrm{and }\ \ord_{\ell}(b)\geq k-\ceil{\ord_{\ell}(\Delta_K)/2}. \end{eqnarray*} \item \label{parttaufixKL}We have $(a+b\alpha)\tau\in\End_{\Gamma_{KL}} E_{\ell^k}$ if and only if \begin{eqnarray*} \ord_{\ell}(a)\geq k-m(\ell)-\floor{\ord_{\ell}(\Delta_K)/2}\\ \textrm{and }\ \ord_{\ell}(b)\geq k-m(\ell)-\ceil{\ord_{\ell}(\Delta_K)/2}. \end{eqnarray*} \item \label{parttaufixtau}The endomorphism $(a+b\alpha)\tau$ is fixed by the action of $\tau$ if and only if \[\ord_{\ell}(b)\geq k-\floor{\ord_{\ell}(\Delta_K)/2}.\] \end{enumerate} \end{lemma} \begin{proof} Write $m=m(\ell)$. 
\begin{enumerate} \item By part \ref{partclassfixed} of Lemma \ref{fixed}, the class of $(a+b\alpha)\tau$ in \mbox{$\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k)$} is fixed by $\Gamma_{KL}$ if and only if \begin{equation} \label{classfix} \ell^m(a+b\alpha)(\alpha\tau-\tau\alpha)=\ell^m\sqrt{\Delta_K}(a+b\alpha)\tau\in(\End E_{\ell^k})^+. \end{equation} By the definition of $(\End E_{\ell^k})^+$, \eqref{classfix} shows that the class of $(a+b\alpha)\tau$ in $\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k)$ is fixed by $\Gamma_{KL}$ if and only if \[\ell^m\sqrt{\Delta_K}(a+b\alpha)(\alpha\tau-\tau\alpha)=\ell^m\Delta_K(a+b\alpha)\tau\equiv 0\pmod{\ell^k}.\] \item The class of $(a+b\alpha)\tau$ in $\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k)$ is fixed by $\tau$ if and only if \begin{equation} \label{fixedbytau} (a+b\alpha)\tau-\tau(a+b\alpha)\tau\tau^{-1}=b\sqrt{\Delta_K}\tau\in {\mathcal O}_K\otimes{\mathbb Z}/\ell^k. \end{equation} By Lemma \ref{E+}, ${\mathcal O}_K\otimes{\mathbb Z}/\ell^k=(\End E_{\ell^k})^+$. So, by \eqref{fixedbytau} and the definition of $(\End E_{\ell^k})^+$, the class of $(a+b\alpha)\tau$ in $\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k)$ is fixed by $\tau$ if and only if \[\alpha b\sqrt{\Delta_K}\tau-b\sqrt{\Delta_K}\tau\alpha=b\Delta_K\tau\equiv 0\pmod{\ell^k}.\] \item By definition of $(\End E_{\ell^k})^+$, we have \begin{eqnarray*} (a+b\alpha)\tau\in(\End E_{\ell^k})^+ \iff (a+b\alpha)(\alpha\tau-\tau\alpha) \equiv 0\pmod{\ell^k}. \end{eqnarray*} Expanding $(a+b\alpha)(\alpha\tau-\tau\alpha)$ gives \[(a+b\alpha)(\alpha\tau-\tau\alpha)=\Bigl(b\frac{\Delta_K(1-\Delta_K)}{2}-\Delta_K a+(2a+b\Delta_K)\alpha\Bigr)\tau.\] The conditions of part \ref{partplus} are precisely those arising from \[b\frac{\Delta_K(1-\Delta_K)}{2}-\Delta_K a\equiv 2a+b\Delta_K\equiv 0\pmod{\ell^k}. \] \item By part \ref{partendofixed} of Lemma \ref{fixed}, \[(a+b\alpha)\tau\in\End_{\Gamma_{KL}} E_{\ell^k}\iff \ell^m(a+b\alpha)\tau\in(\End E_{\ell^k})^+.\] Now apply part \ref{partplus} of Lemma \ref{tau fixed}. \item The endomorphism $(a+b\alpha)\tau$ is fixed by the action of $\tau$ if and only if \begin{equation} (a+b\alpha)\tau-\tau(a+b\alpha)\tau\tau^{-1}=b\sqrt{\Delta_K}\tau\equiv0\pmod{\ell^k}. \end{equation} It is easily seen that $b\sqrt{\Delta_K}\equiv 0\pmod{\ell^k}$ if and only if \[\ord_{\ell}(b)\geq k-\floor{\ord_{\ell}(\Delta_K)/2}.\] \end{enumerate} \end{proof} \begin{theorem} \label{geom2} Suppose that $K\nsubseteq L$ and let $\ell\in{\mathbb Z}_{>0}$ be prime. Let $m=m(\ell)$ and let $k=m+\ord_{\ell}(\Delta_K)$. Let $\theta$ denote the image of $\tau$ in the quotient group $\End E_{\ell^{k}}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^{k})$. 
Then $$\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{KL}}={\mathcal O}_{K}\theta$$ and \begin{eqnarray*} \Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L}}&=&{\mathcal O}_{\ell^m}\theta\cong\begin{cases} {\mathbb Z}/\ell^{k} & \textrm{if $\ell$ is odd or $\ell\nmid\Delta_K$}\\ {\mathbb Z}/2^{k-1}\times{\mathbb Z}/2 & \textrm{if $\ell=2$ and $2\mid\Delta_K$.} \end{cases} \end{eqnarray*} \end{theorem} \begin{proof} Since $\ord_\ell(\Delta_K)\geq \ceil{\ord_\ell(\Delta_K)/2}$, applying Theorem \ref{geom1} to $KL$ gives \begin{eqnarray} \Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{KL}}&=& \Br(\overline{E}\times\overline{E})_{\ell^{k}}^{\Gamma_{KL}}= (\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k))^{\Gamma_{KL}}\\ \label{size} &\cong& {\mathbb Z}/\ell^{m+\floor{\ord_{\ell}(\Delta_K)/2}}\times {\mathbb Z}/\ell^{m+\ceil{\ord_{\ell}(\Delta_K)/2}}. \end{eqnarray} By part \ref{partclasstauKL} of Lemma \ref{tau fixed}, $${\mathcal O}_K\theta\subset (\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k))^{\Gamma_{KL}}.$$ Using part \ref{partplus} of Lemma \ref{tau fixed} to count the number of elements in ${\mathcal O}_K\theta$ and comparing to \eqref{size} gives $${\mathcal O}_K\theta= (\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k))^{\Gamma_{KL}}.$$ Now part \ref{partclasstautau} of Lemma \ref{tau fixed} shows that $${\mathcal O}_{\ell^m}\theta= (\End E_{\ell^k}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^k))^{\Gamma_{L}}.$$ Moreover, since $\ord_\ell(\Delta_K)\leq 1$ for an odd prime $\ell$, part \ref{partplus} of Lemma \ref{tau fixed} gives ${\mathcal O}_{\ell^m}\theta\cong{\mathbb Z}/\ell^k$ if $\ell$ is odd or $\ell\nmid\Delta_K$. If $\ell=2$ and $2\mid\Delta_K$, then part \ref{partplus} of Lemma \ref{tau fixed} gives ${\mathcal O}_{2^m}\theta\cong{\mathbb Z}/2^{k-1}\times{\mathbb Z}/2$. \end{proof} \begin{theorem} \label{quotient2} Suppose that $K\nsubseteq L$ and let $\ell\in{\mathbb Z}_{>0}$ be prime. Let $m=m(\ell)$. Let $\eta$ denote the image of $\tau$ in the quotient group $\End E_{\ell^m}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^m)$. Then \begin{eqnarray*} \left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{\ell^{\infty}}= \frac{\Br(E\times E)_{\ell^{m}}}{\Br_1(E\times E)_{\ell^{m}}}=\frac{\End_{\Gamma_L}E_{\ell^m}}{({\mathcal O}_K\otimes{\mathbb Z}/\ell^m)^{\Gamma_L}}=({\mathbb Z}/\ell^m)\eta \cong{\mathbb Z}/\ell^m \end{eqnarray*} unless $\ell=2$, $2\mid\Delta_K$, $m\geq 1$ and $E_2=E_2(L)$, in which case \begin{eqnarray*} \left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{2^{\infty}}=\frac{\Br(E\times E)_{2^{m+1}}}{\Br_1(E\times E)_{2^{m+1}}}&=&\frac{\End_{\Gamma_L} E_{2^{m+1}}}{({\mathcal O}_K\otimes{\mathbb Z}/2^{m+1})^{\Gamma_L}}\\ &\cong & {\mathbb Z}/2^m\times{\mathbb Z}/2 \end{eqnarray*} where the copy of ${\mathbb Z}/2^m$ is generated by the image of $\tau$. \end{theorem} \begin{proof} Let $k=m+\ord_{\ell}(\Delta_K)$ and let $\theta$ denote the image of $\tau$ in the quotient group $\End E_{\ell^{k}}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^{k})$. Then \begin{equation} \label{quotient in geom} \frac{\Br(E\times E)_{\ell^{\infty}}}{\Br_1(E\times E)_{\ell^{\infty}}}\hookrightarrow \Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L}}={\mathcal O}_{\ell^m}\theta, \end{equation} by Theorem \ref{geom2}. 
For all $t\in{\mathbb Z}_{\geq 0}$, \begin{equation} \label{LfixedinKLfixed} \frac{\Br(E\times E)_{\ell^t}}{\Br_1(E\times E)_{\ell^{t}}}=\frac{\End_{\Gamma_L}E_{\ell^t}}{({\mathcal O}_K\otimes{\mathbb Z}/\ell^t)^{\Gamma_L}}\hookrightarrow \frac{\End_{\Gamma_{KL}}E_{\ell^t}}{{\mathcal O}_K\otimes{\mathbb Z}/\ell^t}. \end{equation} First suppose that $\ell$ is odd or $\ell\nmid\Delta_K$. Then \eqref{quotient in geom} and \eqref{LfixedinKLfixed} combined with Theorems \ref{quotient1} and \ref{geom2} show that \begin{equation} \label{ell^m} \Bigl(\frac{\Br(E\times E)}{\Br_1(E\times E)}\Bigr)_{\ell^{\infty}}\hookrightarrow {\mathbb Z}/\ell^m. \end{equation} Consider $\tau$ as an element of $\End E_{\ell^m}$. By parts \ref{parttaufixKL} and \ref{parttaufixtau} of Lemma \ref{tau fixed}, \mbox{$\tau\in \End_{\Gamma_L} E_{\ell^m}$.} By part \ref{partplus} of Lemma \ref{tau fixed}, $\eta$ has order $\ell^m$ in \[\End_{\Gamma_L} E_{\ell^{m}}/({\mathcal O}_K\otimes{\mathbb Z}/\ell^m)^{\Gamma_L}=\Br(E\times E)_{\ell^m}/\Br_1(E\times E)_{\ell^m}.\] Hence, by \eqref{ell^m}, $$({\mathbb Z}/\ell^m)\eta= \frac{\End_{\Gamma_L} E_{\ell^{m}}}{({\mathcal O}_K\otimes{\mathbb Z}/\ell^m)^{\Gamma_L}}=\Bigl(\frac{\Br(E\times E)}{\Br_1(E\times E)}\Bigr)_{\ell^{\infty}}.$$ Now suppose that $\ell=2$ and $2\mid\Delta_K$. If $m(2)=0$, then $(\Br(E\times E)/\Br_1(E\times E))_{2^{\infty}}=0,$ by \eqref{LfixedinKLfixed} and Theorem \ref{quotient1} applied to $KL$. So we assume from now on that $m=m(2)\geq 1$. Theorems \ref{quotient1} and \ref{geom2} combined with \eqref{quotient in geom} and \eqref{LfixedinKLfixed} show that \begin{equation} \label{2power} \left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{2^{\infty}}\hookrightarrow {\mathbb Z}/2^m\times{\mathbb Z}/2. \end{equation} By parts \ref{partplus}, \ref{parttaufixKL} and \ref{parttaufixtau} of Lemma \ref{tau fixed}, the image of $\tau$ generates a copy of ${\mathbb Z}/2^m$ inside $\End_{\Gamma_L} E_{2^{m+1}}/({\mathcal O}_K\otimes{\mathbb Z}/2^{m+1})^{\Gamma_L}=\Br(E\times E)_{2^{m+1}}/\Br_1(E\times E)_{2^{m+1}}.$ Therefore, \eqref{2power} shows that $(\Br(E\times E)/\Br_1(E\times E))_{2^{\infty}}$ is isomorphic to either ${\mathbb Z}/2^m$ or ${\mathbb Z}/2^m\times{\mathbb Z}/2$. First suppose that $E_2=E_2(L)$. Then $\Gamma_L$ acts trivially on $E_2$ and hence $$\frac{\Br(E\times E)_2}{\Br_1(E\times E)_2}=\frac{\End_{\Gamma_L} E_2}{({\mathcal O}_K\otimes{\mathbb Z}/2)^{\Gamma_L}}=\frac{\End E_2}{{\mathcal O}_K\otimes{\mathbb Z}/2}\cong{\mathbb Z}/2\times{\mathbb Z}/2.$$ Therefore, $$\left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{2^{\infty}}=\frac{\Br(E\times E)_{2^{m+1}}}{\Br_1(E\times E)_{2^{m+1}}}\cong{\mathbb Z}/2^m\times{\mathbb Z}/2.$$ Now suppose that $E_2\neq E_2(L)$. By Theorem \ref{geom2}, $$ \Br(\overline{E}\times\overline{E})_{2^{\infty}}^{\Gamma_L} =\left(\frac{\End E_{2^{k}}}{{\mathcal O}_K\otimes{\mathbb Z}/2^{k}}\right)^{\Gamma_L}={\mathcal O}_{2^m}\theta$$ and, in particular, for any $t\in{\mathbb Z}_{\geq 0}$ the natural injection \begin{equation} \label{inj=isom} {\mathcal O}_{2^m}\theta=\left(\frac{\End E_{2^{k}}}{{\mathcal O}_K\otimes{\mathbb Z}/2^{k}}\right)^{\Gamma_L}\hookrightarrow\left(\frac{\End E_{2^{k+t}}}{{\mathcal O}_K\otimes{\mathbb Z}/2^{k+t}}\right)^{\Gamma_L} \end{equation} induced by multiplication by $2^t$ on $E_{2^{k+t}}$ is an isomorphism. Let $t\in{\mathbb Z}_{\geq 0}$ and let ${\varphi}\in\End_{\Gamma_L} E_{2^{k+t}}$. 
We have \begin{equation} \label{quotientingeom2} \frac{\End_{\Gamma_L} E_{2^{k+t}}}{({\mathcal O}_K\otimes{\mathbb Z}/2^{k+t})^{\Gamma_L}}\hookrightarrow \left(\frac{\End E_{2^{k+t}}}{{\mathcal O}_K\otimes{\mathbb Z}/2^{k+t}}\right)^{\Gamma_L}. \end{equation} Since $2\mid\Delta_K$, we can write ${\mathcal O}_K={\mathbb Z}[\sqrt{-d}]$ where $\Delta_K=-4d$. Since the injection in \eqref{inj=isom} is an isomorphism, we can use \eqref{quotientingeom2} to write \begin{equation} \label{varphi} {\varphi}=2^t(x+2^my\sqrt{-d})\tau+z+w\sqrt{-d} \end{equation} for some $x,y,z,w\in{\mathbb Z}/2^{k+t}$. Here we abuse notation slightly by using $\tau$ to denote the image of $\tau$ in $\End_{\Gamma_L} E_{2^{k+t}}$. Since ${\varphi}$ is fixed by $\tau$, we have \[2\sqrt{-d}(2^{m+t}y\tau+w)\equiv 0\pmod{2^{k+t}}.\] Multiplying by $\sqrt{-d}$ and recalling that $k=m+\ord_2(\Delta_K)=m+\ord_2(d)+2$, we see that \[2^{m+t}y\tau+w\equiv 0\pmod{2^{m+t+1}}.\] Therefore, $w=2^{m+t}u$ for some $u\in{\mathbb Z}/2^{k+t}$ and we have \[y\tau+u\equiv 0\pmod{2}.\] Suppose for contradiction that $y\not\equiv 0\pmod{2}$. Then $\tau$ acts as multiplication by a scalar on $E_2$. Furthermore, since $\tau$ is invertible, this scalar cannot be zero and therefore must be $1$. In other words, $\tau$ acts as the identity on $E_2$. Furthermore, since $m(2)\geq 1$, $\Gamma_{KL}$ acts trivially on $E_2$ and hence $E_2=E_2(L)$, giving the required contradiction. Therefore, $y\equiv 0\pmod{2}$ and we can write $y=2v$ for some \mbox{$v\in{\mathbb Z}/2^{k+t}$} and substituting into \eqref{varphi} gives \begin{equation} \label{cyclicclass} {\varphi}=2^t(x+2^{m+1}v\sqrt{-d})\tau+z+w\sqrt{-d}. \end{equation} Now part \ref{partplus} of Lemma \ref{tau fixed} shows that $2^{t+m+1}\sqrt{-d}\tau\in{\mathcal O}_K\otimes{\mathbb Z}/2^{k+t}.$ Thus, \eqref{cyclicclass} shows that the class of $\varphi$ in $(\End E_{2^{k+t}}/({\mathcal O}_K\otimes{\mathbb Z}/2^{k+t}))^{\Gamma_L}$ is represented by $2^tx\tau$. But $\varphi$ was arbitrary and \eqref{quotientingeom2} is injective, hence $\End_{\Gamma_L} E_{2^{k+t}}/({\mathcal O}_K\otimes{\mathbb Z}/2^{k+t})^{\Gamma_L}$ is a cyclic group. Therefore, $$\left(\frac{\Br(E\times E)}{\Br_1(E\times E)}\right)_{2^{\infty}}=\frac{\Br(E\times E)_{2^{m+1}}}{\Br_1(E\times E)_{2^{m+1}}}\cong{\mathbb Z}/2^m.$$ \end{proof} \section{Special cases and examples} We retain the notation and conventions of Section \ref{compute}. In particular, $L$ is a number field and $E/L$ is an elliptic curve with complex multiplication by ${\mathcal O}_K$. \label{examples} \begin{theorem} \label{LinK} Suppose that $L\subset H_K$, where $H_K$ denotes the Hilbert class field of $K$. Let $\ell\in{\mathbb Z}_{>0}$ be prime. Then $m(\ell)=n(\ell)=0$, except in the following special cases where $n(\ell)=1$: \begin{enumerate} \item $K={\mathbb Q}(\zeta_3)$ and $\ell\leq 3$, \item $K={\mathbb Q}(i)$ and $\ell=2$, \item $\Delta_K\equiv 1\pmod{8}$ and $\ell=2$. \end{enumerate} Consequently, if ${\mathcal O}_K^*=\{\pm 1\}$ and $\Delta_K\not\equiv 1\pmod{8}$, then \[\Br(E\times E)=\Br_1(E\times E).\] \end{theorem} \begin{proof} Let $j(E)$ denote the $j$-invariant of the elliptic curve $E$. Since $E$ is defined over $L$, we have ${\mathbb Q}(j(E))\subset L$. The theory of complex multiplication tells us that $K(j(E))=H_K$. Therefore, $[KL:K]=[H_K:K]=h({\mathcal O}_K)$. Using the formula for the degree of a ring class field, as given in \eqref{degreeformula}, we see that in every case, $[K_{\ell^2}:K]>h({\mathcal O}_K)$ so $n(\ell)\leq 1$. 
Furthermore, $[K_{\ell}:K]>h({\mathcal O}_K)$ except in the special cases (i), (ii) and (iii) of the theorem. The rest follows immediately from Proposition \ref{upper bound} and Theorems \ref{quotient1} and \ref{quotient2}. \end{proof} \begin{remark} Since $K(j(E))=H_K$, the hypothesis $L\subset H_K$ holds precisely when $L=H_K$ or \mbox{$L={\mathbb Q}(j(E))$}. \end{remark} If ${\mathcal O}_K^*=\{\pm1\}$, then Proposition \ref{upper bound} allows us to calculate $m(\ell)$ for all primes $\ell\in{\mathbb Z}_{>0}$, and hence compute the transcendental part of $\Br(E\times E)$. On the other hand, if $K\in\{{\mathbb Q}(i),{\mathbb Q}(\zeta_3)\}$, then Proposition~\ref{upper bound} only tells us that $m(\ell)\leq n(\ell)$ for all primes $\ell\in{\mathbb Z}_{>0}$. The following two propositions deal with $K={\mathbb Q}(i)$ and $K={\mathbb Q}(\zeta_3)$, and in each case give sufficient conditions which allow us to conclude that $m(\ell)=0$. \begin{proposition} \label{d=1} Let $\ell\in{\mathbb Z}_{>0}$ be an odd prime. Let $K={\mathbb Q}(i)$. Suppose that there exists a finite prime ${\mathfrak q}$ of $KL$ satisfying all of the following conditions. \begin{enumerate} \item ${\mathfrak q}$ is coprime to $2\ell$, \item $E$ has good reduction at ${\mathfrak q}$, \item \label{division} $f_{{\mathfrak s}/{\mathfrak p}}\mid f_{{\mathfrak q}/{\mathfrak p}}$, where ${\mathfrak p}={\mathfrak q}\cap{\mathcal O}_K$ and ${\mathfrak s}$ is a prime of $K_{2\ell}$ above ${\mathfrak p}$, \item $\psi_{E/KL}({\mathfrak q})\notin{\mathcal O}_2$. \end{enumerate} Then $m(\ell)=0$, and hence \[(\Br(E\times E)/\Br_1(E\times E))_{\ell^{\infty}}=\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L}} =\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{KL}}=0.\] \end{proposition} Note that condition \ref{division} is trivially satisfied if $K_{2\ell}\subseteq KL$. \begin{proof} Let ${\mathfrak q}$ be a finite prime of $KL$ satisfying conditions (1)--(4). Let ${\mathfrak p}$ and ${\mathfrak s}$ be primes as described in condition \ref{division}. The Artin symbol $({\mathfrak p}, K_{2\ell}/K)$ has order $f_{{\mathfrak s}/{\mathfrak p}}$ in $\Gal(K_{2\ell}/K)$. Since $f_{{\mathfrak s}/{\mathfrak p}}$ divides $f_{{\mathfrak q}/{\mathfrak p}}$, we have \[1=({\mathfrak p},K_{2\ell}/K)^{f_{{\mathfrak q}/{\mathfrak p}}} =({\mathfrak p}^{f_{{\mathfrak q}/{\mathfrak p}}},K_{2\ell}/K) =(N_{KL/K}({\mathfrak q}),K_{2\ell}/K).\] By the definition of the ring class field $K_{2\ell}$, this implies that $$N_{KL/K}({\mathfrak q})=(\alpha)$$ for some $\alpha\in{\mathcal O}_{2\ell}$. Now $\psi_{E/KL}({\mathfrak q})$ is a generator of $N_{KL/K}({\mathfrak q})$ but $\psi_{E/KL}({\mathfrak q})\notin{\mathcal O}_2$ by the hypothesis, so $\psi_{E/KL}({\mathfrak q})=\pm i\alpha$. Therefore, $\psi_{E/KL}({\mathfrak q})\notin{\mathcal O}_{\ell}$, and hence $m(\ell)=0$. \end{proof} \begin{proposition} \label{d=3} Let $K={\mathbb Q}(\zeta_3)$ and let $\ell\in{\mathbb Z}_{>0}$ be prime with $\ell\neq 3$. Suppose that there exists a finite prime ${\mathfrak q}$ of $KL$ satisfying all of the following conditions. \begin{enumerate} \item ${\mathfrak q}$ is coprime to $3\ell$ \item $E$ has good reduction at ${\mathfrak q}$, \item $f_{{\mathfrak s}/{\mathfrak p}}\mid f_{{\mathfrak q}/{\mathfrak p}}$, where ${\mathfrak p}={\mathfrak q}\cap{\mathcal O}_K$ and ${\mathfrak s}$ is a prime of $K_{3\ell}$ above ${\mathfrak p}$, \item $\psi_{E/KL}({\mathfrak q})\notin{\mathcal O}_3$. 
\end{enumerate} Then $m(\ell)=0$ and hence \[(\Br(E\times E)/\Br_1(E\times E))_{\ell^{\infty}}=\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{L}} =\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{KL}}=0.\] \end{proposition} As before, condition \ref{division} is trivially satisfied if $K_{3\ell}\subseteq KL$. \begin{proof} The strategy is the same as for Proposition \ref{d=1}. \end{proof} \begin{example} Let $E$ be the elliptic curve over ${\mathbb Q}$ with affine equation \[y^2+y=x^3-x^2-7x+10.\] $E$ has complex multiplication by the ring of integers of $K={\mathbb Q}(\sqrt{-11})$. Theorem \ref{LinK} tells us that $m(\ell)=n(\ell)=0$ for every prime $\ell\in{\mathbb Z}_{>0}$ and therefore \[\Br(E\times E)=\Br_1(E\times E).\] Let $\theta$ denote the image of complex conjugation in $\End E_{11}/({\mathcal O}_K\otimes{\mathbb Z}/11)$. Then Theorem \ref{geom2} gives \[\Br(\overline{E}\times\overline{E})^{\Gamma_{{\mathbb Q}(\sqrt{-11})}} =\Br(\overline{E}\times\overline{E})^{\Gamma_{{\mathbb Q}}} ={\mathcal O}_K\theta\cong{\mathbb Z}/11.\] \end{example} \begin{example} Let $E$ be the elliptic curve over ${\mathbb Q}$ with affine equation $$y^2=x^3-Dx$$ where $D\in{\mathbb Z}\setminus\{0\}$. Then $\End E={\mathbb Z}[i]$. Let $K={\mathbb Q}(i)$. For any odd prime $\ell\in{\mathbb Z}_{>0}$, Theorem~\ref{LinK} gives \[(\Br(E\times E)/\Br_1(E\times E))_{\ell^{\infty}}=\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{{\mathbb Q}}}=\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{K}}=0.\] Theorem \ref{LinK} tells us that $n(2)=1$. We must compute $m(2)$. By Proposition \ref{upper bound}, $m(2)\leq n(2)$. Let ${\mathfrak q}$ be a finite prime of ${\mathbb Z}[i]$ that is coprime to $2D$. Let $\pi_{\mathfrak q}\in{\mathbb Z}[i]$ be the unique generator of ${\mathfrak q}$ such that \mbox{$\pi_{\mathfrak q}\equiv 1\pmod{(2+2i)}$}. Exercise 2.34 in \cite{Silverman} shows that \[\psi_{E/K}({\mathfrak q})=\Biggl(\frac{D}{\pi_{\mathfrak q}}\Biggr)^{-1}_4\pi_{\mathfrak q}\] where $(\frac{\cdot}{\cdot})_4$ denotes the quartic residue symbol on ${\mathbb Z}[i]$. First suppose that $D$ is a square in ${\mathbb Z}[i]$. Then for all finite primes ${\mathfrak q}$ which are coprime to $2D$, $\psi_{E/K}({\mathfrak q})=\pm\pi_{\mathfrak q}\in{\mathcal O}_2$ and therefore $m(2)=1$. Let $\theta$ denote the image of complex conjugation in $\End E_8/({\mathbb Z}[i]\otimes{\mathbb Z}/8)$. Applying Theorems \ref{geom2} and \ref{geom1}, we see that \begin{eqnarray*} \Br(\overline{E}\times\overline{E})^{\Gamma_{K}}&=&\Br(\overline{E}\times\overline{E})_{2^{\infty}}^{\Gamma_{K}} ={\mathbb Z}[i]\theta \cong {\mathbb Z}/4\times {\mathbb Z}/4\\ \textrm{and }\ \Br(\overline{E}\times\overline{E})^{\Gamma_{{\mathbb Q}}}&=&\Br(\overline{E}\times\overline{E})_{2^{\infty}}^{\Gamma_{{\mathbb Q}}} ={\mathcal O}_2\theta \cong {\mathbb Z}/4\times{\mathbb Z}/2. \end{eqnarray*} Applying Theorem \ref{quotient2}, we see that \begin{eqnarray*} \frac{\Br(E\times E)}{\Br_1(E\times E)}&=&\frac{\Br(E\times E)_4}{\Br_1(E\times E)_4}=\frac{\End_{\Gamma_{{\mathbb Q}}} E_4}{({\mathbb Z}[i]\otimes{\mathbb Z}/4)^{\Gamma_{{\mathbb Q}}}}\\ &\cong & \begin{cases}{\mathbb Z}/2\times{\mathbb Z}/2 & \textrm{if $D$ is a square in ${\mathbb Z}$}\\ {\mathbb Z}/2 & \textrm{if $D$ is not a square in ${\mathbb Z}$.} \end{cases} \end{eqnarray*} Now suppose that $D$ is not a square in ${\mathbb Z}[i]$. 
By \cite{CF}, Exercise 6.1, there exist infinitely many finite primes ${\mathfrak q}$ of $K$ coprime to $2D$ such that $D$ is not a square modulo ${\mathfrak q}$. For such ${\mathfrak q}$, we have $\psi_{E/K}({\mathfrak q})=\pm i\pi_{\mathfrak q}$ and therefore $\psi_{E/K}({\mathfrak q})\notin{\mathcal O}_2$. Consequently, $m(2)=0$. Let $\eta$ denote the image of complex conjugation in $\End E_4/({\mathbb Z}[i]\otimes{\mathbb Z}/4)$. Then Theorem \ref{geom2} gives \[\Br(\overline{E}\times\overline{E})^{\Gamma_{K}} =\Br(\overline{E}\times\overline{E})^{\Gamma_{{\mathbb Q}}} ={\mathbb Z}[i]\eta\cong{\mathbb Z}/2\times {\mathbb Z}/2\] and Theorem \ref{quotient2} gives $\Br(E\times E)=\Br_1(E\times E).$ \end{example} \begin{example} \label{egzeta3} Let $E$ be the elliptic curve over ${\mathbb Q}$ with affine equation $$y^2=x^3+D$$ where $D\in{\mathbb Z}\setminus\{0\}$. Then $\End E={\mathbb Z}[\zeta_3]$, where $\zeta_3$ denotes a primitive $3$rd root of unity. Let $K={\mathbb Q}(\zeta_3)$. For any prime $\ell>3$, Theorem \ref{LinK} tells us that $m(\ell)=0$ and therefore \[(\Br(E\times E)/\Br_1(E\times E))_{\ell^{\infty}}=\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{{\mathbb Q}}} =\Br(\overline{E}\times\overline{E})_{\ell^{\infty}}^{\Gamma_{K}}=0.\] It remains to compute $m(\ell)$ for $\ell\leq 3$. For $\ell\leq 3$, Theorem \ref{LinK} gives $m(\ell)\leq 1$. Let ${\mathfrak q}$ be a finite prime of $K$ that is coprime to $6D$. Let $\pi_{\mathfrak q}\in{\mathbb Z}[\zeta_3]$ be the unique generator of ${\mathfrak q}$ which satisfies $\pi_{\mathfrak q}\equiv 1\pmod{3}$. By \cite{Silverman}, Ch. II, Example 10.6, the Gr\"{o}ssencharacter attached to $E/K$ is given by \begin{equation} \label{grossencharacter} \psi_{E/K}({\mathfrak q})=\Biggl(\frac{4D}{\pi_{\mathfrak q}}\Biggr)_6^{-1}\pi_{\mathfrak q} \end{equation} where $(\frac{\cdot}{\cdot})_6$ denotes the sextic residue symbol on ${\mathbb Z}[\zeta_3]$. \smallskip \paragraph{\underline{\emph{Computing $m(2)$}}} By the law of cubic reciprocity, \begin{equation} \label{cubic reciprocity} \Biggl(\frac{4}{\pi_{\mathfrak q}}\Biggr)_6=\Biggl(\frac{2}{\pi_{\mathfrak q}}\Biggr)_3=\Biggl(\frac{\pi_{\mathfrak q}}{2}\Biggr)_3\equiv \pi_{\mathfrak q}\pmod{2} \end{equation} where $(\frac{\cdot}{\cdot})_3$ denotes the cubic residue symbol on ${\mathbb Z}[\zeta_3]$. Substituting \eqref{cubic reciprocity} into \eqref{grossencharacter} gives \begin{equation} \label{mod2} \psi_{E/K}({\mathfrak q})=\Biggl(\frac{4}{\pi_{\mathfrak q}}\Biggr)_6^{-1}\Biggl(\frac{D}{\pi_{\mathfrak q}}\Biggr)_6^{-1}\pi_{\mathfrak q}\equiv\Biggl(\frac{D}{\pi_{\mathfrak q}}\Biggr)_6^{-1}\pmod{2}. \end{equation} First, suppose that $D$ is a cube in ${\mathbb Z}$ (equivalently, $D$ is a cube in ${\mathbb Z}[\zeta_3]$). Then $\bigl(\frac{D}{\pi_{\mathfrak q}}\bigr)_6=\pm 1$ and \eqref{mod2} shows that $\psi_{E/K}({\mathfrak q})\in{\mathcal O}_2$ for all finite primes ${\mathfrak q}$ that are coprime to $6D$. Therefore, $m(2)=1$. Now suppose that $D$ is not a cube in ${\mathbb Z}$. By \cite{CF}, Exercise 6.1, there exists a finite prime ${\mathfrak q}$ of $K$ coprime to $6D$ such that $D$ is not a cube modulo ${\mathfrak q}$. For such ${\mathfrak q}$, $\bigl(\frac{D}{\pi_{\mathfrak q}}\bigr)_6\neq\pm 1$, and \eqref{mod2} shows that $\psi_{E/K}({\mathfrak q})\notin{\mathcal O}_2$. Therefore, $m(2)=0$. \smallskip \paragraph{\underline{\emph{Computing $m(3)$}}} First suppose that $4D$ is a cube in ${\mathbb Z}$. 
Then \eqref{grossencharacter} shows that for all finite primes ${\mathfrak q}$ which are coprime to $6D$, $\psi_{E/K}({\mathfrak q})=\pm\pi_{\mathfrak q}\in{\mathcal O}_3$. Hence, $m(3)=1$. Now suppose that $4D$ is not a cube in ${\mathbb Z}$. By \cite{CF}, Exercise 6.1, there exists a finite prime ${\mathfrak q}$ of $K$ coprime to $6D$ such that $4D$ is not a cube modulo ${\mathfrak q}$. For such ${\mathfrak q}$, $\bigl(\frac{4D}{{\mathfrak q}}\bigr)_6\neq\pm 1$, whereby $\psi_{E/K}({\mathfrak q})\notin{\mathcal O}_3$. Therefore, $m(3)=0$. \end{example} \section{Transcendental Brauer-Manin obstructions to weak approximation} \label{obstruction} Let $L$ be a number field and let $E/L$ be an elliptic curve with complex multiplication by an order ${\mathcal O}$ of an imaginary quadratic field $K$. Let $X=\Kum(E\times E)$ be the K3 surface which is the minimal desingularisation of the quotient of $E\times E$ by the involution $(P,Q)\mapsto (-P,-Q)$. \begin{proposition} \label{Br1trivial} If $\Delta_K\equiv 1\pmod{4}$ and $2\nmid [{\mathcal O}_K:{\mathcal O}]$ then $$\Br_1(X)=\Br(L)$$ and consequently there is no algebraic Brauer-Manin obstruction to weak approximation on $X$. \end{proposition} \begin{proof} By Proposition 1.4 of \cite{SZtorsion}, it suffices to show that $H^1(L,{\mathcal O})=0$. Inflation-restriction gives \begin{eqnarray*} 0\rightarrow H^1(\Gal(KL/L),{\mathcal O})\rightarrow H^1(L,{\mathcal O})\rightarrow H^1(KL,{\mathcal O})&=\Hom_{cts}(\Gamma_{KL},{\mathbb Z}^2)=0. \end{eqnarray*} Therefore, $H^1(L,{\mathcal O})\cong H^1(\Gal(KL/L),{\mathcal O})$. If $K\subset L$ then \mbox{$H^1(\Gal(KL/L),{\mathcal O})=0$,} so suppose that $$\Gal(KL/L)=\langle\tau\rangle\cong {\mathbb Z}/2.$$ Then $$H^1(\Gal(KL/L),{\mathcal O})=\frac{\{x\in{\mathcal O}\mid x+\tau(x)=0\}}{\{\tau(x)-x\mid x\in{\mathcal O}\}}.$$ Writing ${\mathcal O}={\mathbb Z}[f\alpha]$, where $f=[{\mathcal O}_K:{\mathcal O}]$ and $\alpha=(1+\sqrt{\Delta_K})/2$, gives $$\{x\in{\mathcal O}\mid x+\tau(x)=0\}=\{\tau(x)-x\mid x\in{\mathcal O}\}=f\sqrt{\Delta_K}\cdot{\mathbb Z}.$$ \end{proof} By \eqref{eq:embedding}, the existence of a transcendental element of odd order in $\Br(E\times E)$ implies that $\Br(X)$ contains a transcendental element. The same cannot be said for transcendental elements of even order. For this reason, we concentrate on elliptic curves $E$ for which $\Br(E\times E)$ contains a transcendental element of odd order. \begin{theorem} \label{oddtor} Let $E/{\mathbb Q}$ be an elliptic curve with complex multiplication by ${\mathcal O}_K$ such that $\Br(E\times E)$ contains a transcendental element of odd order. Then \mbox{$K={\mathbb Q}(\zeta_3)$} and $E$ has affine equation $y^2=x^3+2c^3$ for some squarefree $c\in {\mathbb Z}$. Furthermore, $$\Br(E\times E)/\Br_1(E\times E)=\Br(E\times E)_3/\Br_1(E\times E)_3=({\mathbb Z}/3)\eta\cong{\mathbb Z}/3$$ where $\eta$ denotes the image of complex conjugation in $\End E_3/({\mathbb Z}[\zeta_3]\otimes{\mathbb Z}/3)$. \end{theorem} \begin{proof} Setting $L={\mathbb Q}={\mathbb Q}(j(E))$ in Theorem \ref{LinK} shows that $K={\mathbb Q}(\zeta_3)$. Since ${\mathbb Z}[\zeta_3]$ has class number $1$, $E$ is isomorphic over $\overline{{\mathbb Q}}$ to the elliptic curve $E'$ with affine equation $y^2=x^3+1$. Therefore, $E$ is the sextic twist of $E'$ by a class in $H^1({\mathbb Q},\mu_6)={\mathbb Q}^\times/({\mathbb Q}^\times)^6$. Consequently, $E$ has an affine equation of the form $y^2=x^3+D$ for some sixth-power-free $D\in{\mathbb Z}$. 
Example~\ref{egzeta3} shows that $m(\ell)=0$ for every odd prime $\ell$ with $\ell\neq 3$. Since $\Br(E\times E)$ contains a transcendental element of odd order, we have $m(3)\neq 0$. The computation of $m(3)$ in Example~\ref{egzeta3} shows that $m(3)=1$ and $4D$ is a cube in ${\mathbb Z}$. Since $4D$ is a cube in ${\mathbb Z}$ and $D$ is sixth-power-free, we can write $D=2c^3$ for some squarefree $c\in{\mathbb Z}$, so $E$ has affine equation $y^2=x^3+2c^3$. In particular, $D$ is not a cube in ${\mathbb Z}$, so the computation of $m(2)$ in Example~\ref{egzeta3} gives $m(2)=0$. Thus, the statement on the transcendental Brauer group follows from Theorem~\ref{quotient2}. \end{proof} Henceforth, for each $c\in{\mathbb Q}^\times$, let $E^c$ be the elliptic curve over ${\mathbb Q}$ with affine equation \[y^2=x^3+2c^3.\] Let $X=\Kum(E^c\times E^c)$. An affine model for $X$ is \begin{equation} u^2=(x^3+2c^3)(t^3+2c^3). \end{equation} Note that $X$ is independent of $c\in{\mathbb Q}^\times$, since $(x,t,u)\mapsto (x/c,t/c,u/c^3)$ gives the following alternative affine model for $X$: \begin{equation} \label{eq:Kummer} u^2=(x^3+2)(t^3+2). \end{equation} By Proposition \ref{Br1trivial}, $\Br_1(X)=\Br({\mathbb Q})$ and therefore there is no algebraic Brauer-Manin obstruction to weak approximation on $X$. By \eqref{eq:embedding}, $$\Br(X)/\Br({\mathbb Q})=\Br(X)_3/\Br_1(X)_3=\Br(E^c\times E^c)_3/\Br_1(E^c\times E^c)_3.$$ Let $\tau\in\Gamma_{{\mathbb Q}}\setminus \Gamma_{{\mathbb Q}(\zeta_3)}$ and let $\theta$ denote the image of $\tau$ in $\End E^c_3$. Then $\theta$ generates $\End_{\Gamma_{{\mathbb Q}}}(E^c_3)/({\mathbb Z}/3)\cong\Br(X)/\Br({\mathbb Q})\cong{\mathbb Z}/3$. Let ${\mathcal A}\in \Br(X)\setminus\Br({\mathbb Q})$ be a corresponding generator of $\Br(X)/\Br({\mathbb Q})$. For a prime $\ell$, let \[ \xymatrix{\cup: H^1({\mathbb Q}_\ell, E^c_3)\times H^1({\mathbb Q}_\ell,E^c_3)\ar[r] & \Br({\mathbb Q}_\ell)_3\ar[r]^{\mathrm{inv}_\ell} &\frac{1}{3}{\mathbb Z}/{\mathbb Z}} \] be the non-degenerate pairing given by the composition of the cup product, the Weil pairing and the local invariant. Let $\theta^*$ denote the map induced by $\theta$ on $H^1({\mathbb Q}_\ell,E^c_3)$. For $P\in E^c({\mathbb Q}_\ell)$, let $\chi_P$ denote the image of $P$ under the homomorphism $$\chi: E^c({\mathbb Q}_\ell)\rightarrow H^1({\mathbb Q}_\ell,E^c_3).$$ \begin{proposition} \label{prop:cupproduct} Let $P,Q\in E^c({\mathbb Q}_\ell)\setminus E^c_2$. The ${\mathbb Q}_\ell$-point $(P,Q)$ on $E^c\times E^c$ gives rise to a point $R\in X({\mathbb Q}_\ell)$. We have \begin{equation} \label{evaluation} \ev_{{\mathcal A},\ell}(R)=\chi_P\cup\theta^*(\chi_Q)\in\frac{1}{3}{\mathbb Z}/{\mathbb Z}. \end{equation} \end{proposition} \begin{proof} The statement follows from the results of \cite{SZtorsion}, Section 3. The details are explained in Section 5.1 of \cite{I-S}. \end{proof} \begin{theorem} Let ${\mathcal A}\in\Br(X)_3\setminus\Br({\mathbb Q})$. Let $\nu\neq 3$ be a rational place. Then the evaluation map $\ev_{{\mathcal A},\nu}:X({\mathbb Q}_{\nu})\rightarrow \Br({\mathbb Q}_\nu)_3$ is zero. \end{theorem} \begin{proof} The statement for the infinite place is clear, since $\Br({\mathbb R})={\mathbb Z}/2$ has trivial $3$-torsion. By \cite{goodred}, finite primes of good reduction do not appear in the description of the Brauer-Manin set. Lemma 4.2 of \cite{Matsumoto} shows that odd primes of good reduction for an abelian surface are primes of good reduction for the corresponding Kummer surface. Thus, by \eqref{eq:Kummer}, $\ev_{{\mathcal A},\ell}$ is zero for every finite prime $\ell\nmid 6$. From now on, let $\ell=2$. Let $R\in X({\mathbb Q}_{2})$. We will show that $\ev_{{\mathcal A},2}(R)=0$.
We can represent $R$ by $(x_0,t_0,u_0)$ satisfying \eqref{eq:Kummer}. Let $d_R=t_0^3+2$. Since the evaluation map $\ev_{{\mathcal A},2}:X({\mathbb Q}_2)\rightarrow \Br({\mathbb Q}_2)_3$ is locally constant, we are free to use the implicit function theorem to replace $R$ by a point $R'=(x_1,t_1,u_1)\in X({\mathbb Q}_{2})$, sufficiently close to $R$, such that $d=d_{R'}\in{\mathbb Q}^\times$ and $u_1\neq 0$. Now $R'$ gives rise to $P=(dx_1,du_1)\in E^d({\mathbb Q}_2)$ and $Q=(dt_1,d^2)\in E^d({\mathbb Q}_2)$. Recalling that $X=\Kum(E^d\times E^d)$, we apply Proposition \ref{prop:cupproduct} to see that \begin{equation} \label{eq:cupprod2} \ev_{{\mathcal A},2}(R')=\chi_P\cup\theta^*(\chi_Q)\in\frac{1}{3}{\mathbb Z}/{\mathbb Z}. \end{equation} The elliptic curve $E^d$ has either good or additive reduction. First suppose that $E^d$ has additive reduction. Denote by $E^d_0({\mathbb Q}_2)$ the ${\mathbb Q}_2$-points of $E^d$ that reduce to smooth points on the reduction of $E^d$ modulo $2$. By Theorem 1 of \cite{Rene}, $E^d_0({\mathbb Q}_2)$ is topologically isomorphic to ${\mathbb Z}_2$, which is \mbox{$3$-divisible}. An application of Tate's algorithm (see \cite{Silverman}, Ch. IV, \S9, for example) shows that $\#E^d({\mathbb Q}_2)/E^d_0({\mathbb Q}_2)\in\{1,2\}$. Therefore, $E^d({\mathbb Q}_2)$ is $3$-divisible and $\chi=0$. Now suppose that $E^d$ has good reduction. Tate's algorithm shows that $E^d$ has a minimal Weierstrass equation of the form $y^2+y=x^3+a$ for $a\in {\mathbb Z}_2$. Therefore, $E^d({\mathbb Q}_2)/E^d_1({\mathbb Q}_2)\cong{\mathbb Z}/3$, where $E^d_1({\mathbb Q}_2)$ denotes the kernel of the reduction map. Thus, $3E^d({\mathbb Q}_2)\subset E^d_1({\mathbb Q}_2)$. We will show that this inclusion is an equality. The standard filtration on the ${\mathbb Q}_2$-points of $E^d$ gives $$E^d({\mathbb Q}_2)\supset E^d_1({\mathbb Q}_2)\supset E^d_2({\mathbb Q}_2) \supset \dots $$ The theory of formal groups shows that $E^d_2({\mathbb Q}_2)\cong 4{\mathbb Z}_2$. Hence, $E^d_2({\mathbb Q}_2)$ is \mbox{$3$-divisible}. Since $E^d_1({\mathbb Q}_2) /E^d_2({\mathbb Q}_2)\cong {\mathbb Z}/2$, it follows that $E^d_1({\mathbb Q}_2)$ is $3$-divisible. Therefore, $$E^d_1({\mathbb Q}_2)=3E^d_1({\mathbb Q}_2)=3E^d({\mathbb Q}_2).$$ Thus, $\chi$ factors through $E^d({\mathbb Q}_2)/3E^d({\mathbb Q}_2) = E^d({\mathbb Q}_2)/E^d_1({\mathbb Q}_2)\cong{\mathbb Z}/3$, so the image of $\chi$ is generated by $\chi_P$ for any $P\in E^d({\mathbb Q}_2)\setminus E^d_1({\mathbb Q}_2)$. Since the cup product pairing is bilinear and $\theta^*$ is linear, it is enough to show that \[\chi_P\cup\theta^*(\chi_P)=0\] for any $P\in E^d({\mathbb Q}_2)\setminus E^d_1({\mathbb Q}_2)$ with $2P\neq 0$. The diagonal embedding $E^d\rightarrow E^d\times E^d$ induces a map $E^d\rightarrow X$ whose image is a copy of ${\mathbb P}^1_{\mathbb Q}$. The restriction of ${\mathcal A}$ to ${\mathbb P}^1_{{\mathbb Q}}$ is in $\Br({\mathbb P}^1_{{\mathbb Q}})=\Br({\mathbb Q})$. In other words, ${\mathcal A}$ restricts to a constant algebra on the image of $E^d$ in $X$. Thus, the evaluation of ${\mathcal A}$ at a point on $X$ corresponding to $(P,P)$ on $E^d({\mathbb Q}_2)\times E^d({\mathbb Q}_2)$ is independent of the point $P$. Hence, it suffices to show that $\chi_P\cup\theta^*(\chi_P)=0$ for a single $P\in E^d({\mathbb Q}_2)$. Taking $P\in 3E^d({\mathbb Q}_2)$ completes the proof.
Consequently, \[X({\mathbb A}_{{\mathbb Q}})^{\Br(X)}=X({\mathbb Q}_3)_0\times X({\mathbb R})\times \prod_{\ell\neq 3}{X({\mathbb Q}_{\ell})}\ \subsetneq\ X({\mathbb A}_{{\mathbb Q}})\] where $X({\mathbb Q}_3)_0$ denotes the points $P\in X({\mathbb Q}_3)$ with $\ev_{{\mathcal A},3}(P)=0$, and the product runs over prime numbers $\ell\neq 3$. \end{theorem} Theorem \ref{obstructionthm} will be proved via several auxiliary results. \begin{lemma} \label{lem:sufficient} In order to show that $\ev_{{\mathcal A},3}:X({\mathbb Q}_3)\rightarrow\frac{1}{3}{\mathbb Z}/{\mathbb Z}$ is surjective, it is enough to exhibit $c\in {\mathbb Q}^\times$ and $P\in E^c({\mathbb Q}_3)$ such that $\theta^*(\chi_P)$ is not in the image of $E^c({\mathbb Q}_3)$ inside $H^1({\mathbb Q}_3, E^c_3)$. \end{lemma} \begin{proof} Suppose that $P\in E^c({\mathbb Q}_3)$ is such that $\theta^*(\chi_P)$ is not in the image of $E^c({\mathbb Q}_3)$ inside $H^1({\mathbb Q}_3, E^c_3)$. Since the image of $E^c({\mathbb Q}_3)$ is a maximal isotropic subspace inside $H^1({\mathbb Q}_3, E^c_3)$, there exists $Q\in E^c({\mathbb Q}_3)$ such that $\chi_Q\cup\theta^*(\chi_P)\neq 0$. Note that $P,Q\notin E^c_2$ because if, for example, $2P=0$ then $\chi_P=\chi_{3P}=0$. Now by Proposition \ref{prop:cupproduct}, the point $R\in X({\mathbb Q}_3)$ coming from $(Q,P)\in E^c\times E^c$ satisfies \[\ev_{{\mathcal A},3}(R)=\chi_Q\cup\theta^*(\chi_P)\neq 0.\] Surjectivity follows since for every $n\in{\mathbb Z}$, $\chi_{nQ}\cup \theta^*(\chi_P)=n(\chi_{Q}\cup \theta^*(\chi_P))$. \end{proof} In Proposition \ref{prop:3}, we will show that we can take $c=3$ and $P=(3,9)$ in Lemma \ref{lem:sufficient}. From now on, let $E=E^{(3)}$ be the elliptic curve with affine equation $y^2=x^3+2.3^3$. First, we determine the group $E({\mathbb Q}_3)/3$ and give explicit generators. \begin{lemma} \label{lem:generators} We have $E({\mathbb Q}_3)/3\cong ({\mathbb Z}/3)^2$, with generators $P=(3,9)$ and $Q=(4,\sqrt{2.59})$. \end{lemma} \begin{proof} Denote by $E_0({\mathbb Q}_3)$ the ${\mathbb Q}_3$-points of $E$ that reduce to smooth points on the reduction of $E$ modulo $3$. Denote by $E_1({\mathbb Q}_3)$ the kernel of reduction. The elliptic curve $E/{\mathbb Q}_3$ has additive reduction and hence \begin{equation} E_0({\mathbb Q}_3)/E_1({\mathbb Q}_3)\cong{\mathbb F}_3. \end{equation} Applying Tate's algorithm, we find that \begin{equation} \label{eq:c=3} E({\mathbb Q}_3)/E_0({\mathbb Q}_3)\cong{\mathbb Z}/3. \end{equation} By Theorem 1 of \cite{Rene}, $E_0({\mathbb Q}_3)\cong {\mathbb Z}_3$. The following sequence is exact. \[ \xymatrix{ 0\ar[r] & \frac{E_0({\mathbb Q}_3)}{3E({\mathbb Q}_3)}\ar[r] & \frac{E({\mathbb Q}_3)}{3E({\mathbb Q}_3)}\ar[r] & \frac{E({\mathbb Q}_3)}{E_0({\mathbb Q}_3)}\ar[r] & 0. } \] Since $E_0({\mathbb Q}_3)\cong{\mathbb Z}_3$ and $E_0({\mathbb Q}_3)/E_1({\mathbb Q}_3)\cong{\mathbb F}_3$, we have $3E_0({\mathbb Q}_3)=E_1({\mathbb Q}_3)$. By \eqref{eq:c=3}, $E({\mathbb Q}_3)/E_0({\mathbb Q}_3)\cong{\mathbb Z}/3$. A suitable generator is $P=(3,9)$. A calculation shows that \mbox{$3P=(3^{-2}.19, -3^{-3}.5.43)\in E_1({\mathbb Q}_3)$}. Therefore, $3E({\mathbb Q}_3)=E_1({\mathbb Q}_3)$. The point $Q$ generates $E_0({\mathbb Q}_3)/E_1({\mathbb Q}_3)$. \end{proof} In light of Lemma \ref{lem:sufficient}, we will study the action of $\theta$ on the image of $E({\mathbb Q}_3)$ in $H^1({\mathbb Q}_3, E_3)$. 
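The elementary point computations in the proof of Lemma \ref{lem:generators} are easy to check directly. The following minimal Python sketch (a verification aid only, not part of the argument; the helper \texttt{add} is an ad hoc implementation of chord-and-tangent addition using exact rational arithmetic) confirms that $P=(3,9)$ lies on $E$, that $3P=(3^{-2}.19, -3^{-3}.5.43)$, and that $2.59$ is a square in ${\mathbb Q}_3$, so that $Q=(4,\sqrt{2.59})$ is indeed a ${\mathbb Q}_3$-point of $E$.

\begin{verbatim}
from fractions import Fraction as F

a, b = F(0), F(54)        # E : y^2 = x^3 + 2*3^3 over the rationals
O = None                  # point at infinity

def add(P, Q):
    # chord-and-tangent addition on y^2 = x^3 + a*x + b
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return O
    lam = (3*x1**2 + a)/(2*y1) if P == Q else (y2 - y1)/(x2 - x1)
    x3 = lam**2 - x1 - x2
    return (x3, lam*(x1 - x3) - y1)

P = (F(3), F(9))
assert P[1]**2 == P[0]**3 + a*P[0] + b                 # P lies on E
assert add(P, add(P, P)) == (F(19, 9), F(-215, 27))    # 3P = (19/9, -215/27)

# 118 = 2*59 is a square in Z_3: lift a square root modulo 3^k (Hensel)
r = 1                                                  # 1^2 = 118 mod 3
for k in range(2, 10):
    step = 3**(k - 1)
    r = next(s for s in (r, r + step, r + 2*step) if (s*s - 118) % 3**k == 0)
assert (r*r - 118) % 3**9 == 0
\end{verbatim}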
We have \begin{equation*} E_3=\{O_E, (0,3\sqrt{6}), (0,-3\sqrt{6})\}\cup\bigcup_{0\leq k\leq 2} \{(-6\zeta_3^k,9\sqrt{-2}), (-6\zeta_3^k,-9\sqrt{-2}) \}. \end{equation*} Let $F={\mathbb Q}_3(E_3)={\mathbb Q}_3(\zeta_3)$. The inflation-restriction exact sequence gives \[ H^1(\Gal(F/{\mathbb Q}_3), E_3)\rightarrow H^1({\mathbb Q}_3,E_3)\rightarrow H^1(F,E_3)^{\Gal(F/{\mathbb Q}_3)}\rightarrow H^2(\Gal(F/{\mathbb Q}_3), E_3). \] Since $[F:{\mathbb Q}_3]=2$, we have $H^1(\Gal(F/{\mathbb Q}_3), E_3)=H^2(\Gal(F/{\mathbb Q}_3), E_3)=0$. Therefore, the restriction map gives an isomorphism $$H^1({\mathbb Q}_3,E_3)\rightarrow H^1(F,E_3)^{\Gal(F/{\mathbb Q}_3)}.$$ Let $T\in E({\mathbb Q}_3)$. In a slight abuse of notation, we continue to write $\chi_T$ for the image of $T$ in \mbox{$H^1(F,E_3)=\Hom_{\mathrm{cts}}(\Gamma_F,E_3)$}. In order to study the action of $\theta$ on $\chi_T(\Gamma_F)\subset E_3$, we will use the following polynomials. Let $f_T\in{\mathbb Q}_3[t]$ be the degree $9$ polynomial satisfied by the $x$-coordinates of the points $R\in E(\overline{{\mathbb Q}_3})$ such that $3R=T$. By Exercise III.3.7 of \cite{Silverman}, \begin{equation} \label{eq:f} f_{T}(t)=3^2t^2(t-x(T))(t^3+2^3.3^3)^2-2^3(t^3+2.3^3)(t^6+2^3.3^3.5t^3-2^5.3^6). \end{equation} Let $g_T\in{\mathbb Q}_3(\zeta_3)[t]$ be the cubic polynomial satisfied by the $x$-coordinates of the points \mbox{$S\in E(\overline{{\mathbb Q}_3})$} such that $(1-\zeta_3)S=T$. The addition formula shows that \begin{equation} \label{eq:g} g_T(t)=t^3+3\zeta_3x(T)t^2+2^3.3^3. \end{equation} Combining Lemma \ref{lem:sufficient} with Proposition \ref{prop:3} below completes the proof of Theorem \ref{obstructionthm}. \begin{proposition} \label{prop:3} Let $P=(3,9)\in E({\mathbb Q}_3)$. Then $\theta^*(\chi_P)$ is not in the image of $E({\mathbb Q}_3)$ inside $H^1({\mathbb Q}_3,E_3)$. \end{proposition} \begin{proof} We have ${\mathbb Q}_3(E_3)={\mathbb Q}_3(\zeta_3)$. By Lemma \ref{lem:generators}, $E({\mathbb Q}_3)/3$ is generated by \mbox{$P=(3,9)$} and $Q=(4,\sqrt{2.59})$. A calculation using MAGMA \cite{Magma} shows that the degree $9$ polynomial $f_P$ given by \eqref{eq:f} is irreducible over ${\mathbb Q}_3$ and therefore also irreducible over ${\mathbb Q}_3(\zeta_3)$. By \eqref{eq:g}, we have \begin{eqnarray*}g_P(t)=t^3+3^2\zeta_3t^2+2^3.3^3 & \textrm{ and } & g_Q(t)=t^3+2^2.3\zeta_3t^2+2^3.3^3.\end{eqnarray*} Making a change of variables $t=3u$, we see that $g_Q(t)$ defines the same extension of ${\mathbb Q}_3(\zeta_3)$ as $h_Q(u)=u^3+2^2\zeta_3u^2+2^3$. Now $h_Q(u)\equiv u^3+u^2-1\pmod{(1-\zeta_3)}$, which is irreducible over the residue field ${\mathbb F}_3$ of ${\mathbb Q}_3(\zeta_3)$. Thus, $g_Q(t)$ defines an unramified extension of ${\mathbb Q}_3(\zeta_3)$. On the other hand, we claim that $g_P(t)$ defines a ramified extension of ${\mathbb Q}_3(\zeta_3)$. Making a change of variables $t=3(u+1)$, we see that $g_P(t)$ defines the same extension of ${\mathbb Q}_3(\zeta_3)$ as \mbox{$h_P(u)=u^3+3(1+\zeta_3)u^2+3(1+2\zeta_3)u+3\zeta_3 +3^2$}. Let $\pi=(1-\zeta_3)$. Examining the $\pi$-adic valuation of the terms in $h_P(u)$, we see that the coefficients of $u^2$, $u$ and $1$ have $\pi$-adic valuations $2$, $3$ and $2$ respectively, so the Newton polygon of $h_P$ is the single segment from $(0,2)$ to $(3,0)$ and any root of $h_P(u)$ has $\pi$-adic valuation $2/3$. Therefore, $g_P(t)$ defines a ramified extension of ${\mathbb Q}_3(\zeta_3)$, as claimed (see the verification sketch below). Let $R_P, R_Q\in E(\overline{{\mathbb Q}_3})$ be such that $3R_P=P$ and $3R_Q=Q$. Let \mbox{$S_P=(1-\zeta_3^2)R_P$} and let $S_Q=(1-\zeta_3^2)R_Q$.
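The residue and valuation computations above are likewise easy to check; the short Python sketch below (again only a verification aid, with an ad hoc helper \texttt{vpi} that computes $\pi$-adic valuations via the norm $a^2-ab+b^2$ of $a+b\zeta_3$) confirms the coefficient valuations $2$, $3$, $2$ of $h_P$, the resulting Newton polygon segment of slope $-2/3$, and the fact that $u^3+u^2-1$ has no root in ${\mathbb F}_3$.

\begin{verbatim}
from fractions import Fraction as F

def vpi(a, b):
    # pi-adic valuation of a + b*zeta_3 in Z[zeta_3], pi = 1 - zeta_3,
    # normalised so vpi(pi) = 1; it equals the 3-adic valuation of the
    # norm a^2 - a*b + b^2
    n, v = a*a - a*b + b*b, 0
    while n % 3 == 0:
        n, v = n // 3, v + 1
    return v

# coefficients of h_P(u) = u^3 + 3(1+zeta_3)u^2 + 3(1+2zeta_3)u + (9+3zeta_3),
# listed from degree 0 to degree 3 as pairs (a, b) representing a + b*zeta_3
coeffs = [(9, 3), (3, 6), (3, 3), (1, 0)]
vals = [vpi(a, b) for a, b in coeffs]
assert vals == [2, 3, 2, 0]

# the segment from (0,2) to (3,0) of slope -2/3 lies on or below every point
# (i, vals[i]), so it is the whole Newton polygon and each root of h_P has
# pi-adic valuation 2/3
assert all(F(2) - F(2, 3)*i <= vals[i] for i in range(4))

# h_Q(u) is congruent to u^3 + u^2 - 1 modulo (1 - zeta_3); no root in F_3
assert all((u**3 + u**2 - 1) % 3 != 0 for u in range(3))
\end{verbatim}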
Recall that ${\mathbb Q}_3(\zeta_3, x(R_P))$ is the degree $9$ extension of ${\mathbb Q}_3(\zeta_3)$ defined by $f_P$. Since $P$ is not a $2$-torsion point, ${\mathbb Q}_3(\zeta_3, x(R_P))={\mathbb Q}_3(\zeta_3, R_P)$. Likewise, $g_P$ defines the ramified cubic extension ${\mathbb Q}_3(\zeta_3, S_P)/{\mathbb Q}_3(\zeta_3)$ and $g_Q$ defines the unramified cubic extension ${\mathbb Q}_3(\zeta_3, S_Q)/{\mathbb Q}_3(\zeta_3)$. Therefore, there exists $\sigma\in\Gamma_{{\mathbb Q}_3(\zeta_3)}$ such that $\sigma(S_Q)\neq S_Q$, $\sigma(S_P)=S_P$ and $\sigma(R_P)\neq R_P$. We have \[(1-\zeta_3^2)\chi_P(\sigma)=(1-\zeta_3^2)(\sigma(R_P)-R_P)=\sigma(S_P)-S_P= 0\] and \[(1-\zeta_3^2)\chi_Q(\sigma)=(1-\zeta_3^2)(\sigma(R_Q)-R_Q)=\sigma(S_Q)-S_Q\neq 0.\] Thus, $\chi_Q(\sigma)\notin E_{(1-\zeta_3)}$ and $\chi_P(\sigma)\in E_{(1-\zeta_3)}\setminus \{O_E\}$. Suppose for contradiction that $\theta^*(\chi_P)$ is in the image of $E({\mathbb Q}_3)$ inside $H^1({\mathbb Q}_3,E_3)$, so that \begin{equation} \label{eq:linearcombo} \theta^*(\chi_P)=\chi_{(aP+bQ)}=a\chi_P+b\chi_Q \end{equation} for $a,b\in{\mathbb F}_3$. Note that $\theta$ acts as multiplication by $-1$ on $E_{(1-\zeta_3)}=\{O_E, (0,3\sqrt{6}), (0,-3\sqrt{6})\}$, so \begin{equation} -\chi_P(\sigma)=\theta^*(\chi_P)(\sigma)=a\chi_P(\sigma)+b\chi_Q(\sigma) \end{equation} which implies that $b\chi_Q(\sigma)\in E_{(1-\zeta_3)}$ and hence $b=0$ and $a=-1$. Since $g_P$ is irreducible over ${\mathbb Q}_3(\zeta_3)$, there exists $\rho\in \Gamma_{{\mathbb Q}_3(\zeta_3)}$ such that $\rho(S_P)\neq S_P$. For such $\rho$ we have \[(1-\zeta_3^2)\chi_P(\rho)=(1-\zeta_3^2)(\rho(R_P)-R_P)=\rho(S_P)-S_P\neq 0\] and hence $\chi_P(\rho)\notin E_{(1-\zeta_3)}$. Therefore, $\chi_P(\Gamma_{{\mathbb Q}_3(\zeta_3)})=E_3$. In particular, $T=(-6\zeta_3,9\sqrt{-2})$ is in the image of $\chi_P$. But $\theta(T)$ has $x$-coordinate $-6\zeta_3^2\neq -6\zeta_3$, so $\theta(T)\neq -T$, which contradicts $\theta^*(\chi_P)=-\chi_P$. \end{proof} \paragraph{\textbf{Acknowledgements}} I am very grateful to Alexei Skorobogatov for suggesting this problem, for several enlightening discussions and for pointing out a mistake in an earlier version of Theorem \ref{thm:Brauer-Maninset}. I would like to thank Dennis Eriksson, Paul Ziegler, Martin Bright, Spiros Adams-Florou, Jack Thorne and David Holmes for their enthusiasm and for some useful discussions. I am grateful to Peter Stevenhagen for his input which led towards the current formulation of Theorem \ref{LinK}. I would like to thank Tim Dokchitser and Srilakshmi Krishnamoorthy for some helpful comments on an earlier draft of this paper. Most of this work was done during my stay at the Max Planck Institute for Mathematics in Bonn. I am grateful to the Max Planck Institute for financial support and for providing a very stimulating working environment.